The past two weeks have been wild - it seems like everybody is having a lot of fun with ChatGPT.

People have used it to write poems, songs, essays - and even software.

I've been playing around with ChatGPT, and while I was blown away initially, I quickly noticed its limits.

It's no secret that ChatGPT is not very good at math, and - especially when it's wrong - it sounds overly confident (maybe to make up for its lack of knowledge?).

When I asked it to build a todo list application with SwiftUI and Firebase, it did quite well in the beginning, but struggled when it had to write code for persisting todos in Cloud Firestore. In particular, it was very hard to convince it to use Codable instead of mapping Firestore documents manually. You might argue that its training corpus only includes knowledge up to 2021, but Firestore has supported Codable since 2019, and even my article about mapping documents is from early 2021.
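For context, here's a minimal sketch of the Codable-based approach I was trying to get ChatGPT to use. The `Todo` type and the `"todos"` collection name are made up for illustration; the mapping APIs (`@DocumentID`, `data(as:)`, `addDocument(from:)`) come from Firebase's Swift Codable support, and their exact shape varies a little between SDK versions:

```swift
import FirebaseFirestore
import FirebaseFirestoreSwift

// A hypothetical model - Codable conformance is all Firestore needs
// to map documents to/from this struct automatically.
struct Todo: Codable, Identifiable {
  @DocumentID var id: String?
  var title: String
  var completed: Bool
}

// Reading: instead of pulling fields out of document.data() by hand,
// let Codable do the mapping.
func fetchTodos(completion: @escaping ([Todo]) -> Void) {
  Firestore.firestore().collection("todos").getDocuments { snapshot, _ in
    let todos = snapshot?.documents.compactMap { document in
      try? document.data(as: Todo.self)
    } ?? []
    completion(todos)
  }
}

// Writing: serialise the struct straight into a document.
func addTodo(_ todo: Todo) throws {
  _ = try Firestore.firestore().collection("todos").addDocument(from: todo)
}
```

Compare that to the manual alternative - reading `document.data()["title"] as? String` field by field - and it's easy to see why I kept nudging ChatGPT towards Codable.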

Don't get me wrong - giving more or less plain-language instructions to a computer and receiving meaningful (and often high-quality) output within seconds is pretty impressive.

Will it change our jobs? Yes, it will. Just like the invention of the assembler, the compiler, and other tools did. I see it as a productivity tool: great for brainstorming ideas and exploring a topic. Fantastic for summarising texts, and good for pointing you at potential issues with your code (be careful when feeding it code that might be owned by the company you work for!). Not entirely rubbish for writing code, although it made some spectacular blunders in some of my experiments.

But it's not going to replace software engineers, lawyers, journalists, scientists, or any other profession. It's a tool, and a pretty powerful one at that. It will make us more efficient - if we use it right. And it will also require us to be more alert - going forward, it will be difficult to tell whether a piece of text or code was written by a human or a machine. We should also spend more time thinking about the moral implications of AI/ML technology. Software ethics is no longer an issue that can or should be discussed in the ivory tower - it is on us to set ourselves boundaries and to decide whether the use of a large language model is appropriate and acceptable for a specific use case. And sooner rather than later, we need to discuss the moral implications as an industry and as a civilisation.

Thanks for reading!

Peter 🔥  

What I am working on




AI and ML



Fun stuff