How I Use AI: AI as Co-Worker?

The question will inevitably be asked: how do I use AI?

To begin, I am generally a slow adopter of consumer tech. I don’t think I owned an iPod until about 2009. I never adopted any social media that came out after Instagram. I only recently, as of last month, purchased a new Kindle; my previous Kindle, a first-generation model, is a mere fourteen years old. I’ve subscribed to Spotify for less than a year.

I am this way because I am skeptical, without being cynical, about new developments in technology. The reason can be summed up by a conversation with a dwarf NPC in an obscure video game called Arcanum. The dwarf comments that the short lives of humans make them zealous for industrious achievement. When new tech comes along, the first thing a human asks is “What can I use this for?” when the question ought to have been “What is the cost of its use?”

While I don’t think it’s an either-or question, asking both kept me from resorting to ChatGPT every time I encountered a new LeetCode problem.

But AI has been around for years, and I have seen what it can do for me. My guidebook has been Co-Intelligence: Living and Working with AI by Ethan Mollick, who proposes that AI can be (among other things) a co-worker and a tutor.

AI as Co-Worker, or better yet, ‘legacy code’

The first thing I did was see what ChatGPT could do. I prompted it for some code with something like this: “Create a CloudFormation YAML file. In it, deploy a serverless API Gateway backed by Lambdas with only one GET endpoint. Have it return ‘hello world’ on the API path /hello. Write the Lambda code in JavaScript.”

I got a YAML file. I read through it. I did not understand everything I read, but I referenced the AWS documentation to fill in the gaps. I estimate it would have taken me four or five hours to write that same code from scratch.
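To give a sense of what came back, a template answering that prompt could look something like the sketch below. This is not the file ChatGPT generated for me, just a minimal illustration using the AWS SAM transform, with resource names of my own choosing:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31   # the "serverless" shorthand for API Gateway + Lambda

Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs18.x
      InlineCode: |
        exports.handler = async () => ({
          statusCode: 200,
          body: 'hello world',
        });
      Events:
        HelloGet:
          Type: Api            # implicitly creates the API Gateway endpoint
          Properties:
            Path: /hello
            Method: get
```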

I went through a few iterations. I asked it to add a public S3 bucket (you know, for web hosting). I asked it to change the Lambda code to Python. I asked it to add two more endpoints, including a POST request that did nothing but bounce the request body back with a 200 status. Each time, the YAML file was updated with the correct code.
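For illustration, the echo endpoint from that last prompt might look roughly like the fragment below, slotted under the Resources section of a template like the one above. The /echo path and resource names are my own guesses, not what ChatGPT actually produced:

```yaml
  EchoFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: python3.12
      InlineCode: |
        def handler(event, context):
            # Bounce the request body straight back with a 200
            return {"statusCode": 200, "body": event.get("body") or ""}
      Events:
        EchoPost:
          Type: Api
          Properties:
            Path: /echo
            Method: post
```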

Well, mostly correct. That same afternoon I tried to deploy it with CloudFormation. I got several errors. Some were permissions problems: I had not given my IAM user permission to create S3 buckets or Lambdas. But others were because the YAML tried to set both a bucket ACL and a bucket policy, when AWS recommends using one method or the other.
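The ACL-versus-policy conflict looked, in spirit, something like the snippet below (again illustrative, not the generated code): the bucket carried an AccessControl setting while a separate bucket policy granted the same public read access, and AWS’s guidance is to rely on the policy alone:

```yaml
  WebsiteBucket:
    Type: AWS::S3::Bucket
    Properties:
      AccessControl: PublicRead        # the ACL half of the conflict
      WebsiteConfiguration:
        IndexDocument: index.html

  WebsiteBucketPolicy:
    Type: AWS::S3::BucketPolicy        # ...and the bucket-policy half
    Properties:
      Bucket: !Ref WebsiteBucket
      PolicyDocument:
        Statement:
          - Effect: Allow
            Principal: '*'
            Action: s3:GetObject
            Resource: !Sub '${WebsiteBucket.Arn}/*'
```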

As I read through the AWS documentation to fix all of that, I learned I had asked the wrong question. AWS doesn’t recommend public S3 buckets for simple websites anymore. The right way to do this is to use a private bucket and set it up as the origin for a CloudFront distribution, which adds a whole new level of complexity to the YAML file.
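To give a flavor of that added complexity, here is a compressed, hypothetical sketch of the private-bucket-plus-CloudFront pattern. Several required details, including the bucket policy that grants the distribution read access, are omitted:

```yaml
  SiteBucket:
    Type: AWS::S3::Bucket              # stays private; no public-access settings

  SiteOAC:
    Type: AWS::CloudFront::OriginAccessControl
    Properties:
      OriginAccessControlConfig:
        Name: site-oac
        OriginAccessControlOriginType: s3
        SigningBehavior: always
        SigningProtocol: sigv4

  SiteDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        DefaultRootObject: index.html
        Origins:
          - Id: s3-origin
            DomainName: !GetAtt SiteBucket.RegionalDomainName
            OriginAccessControlId: !Ref SiteOAC
            S3OriginConfig:
              OriginAccessIdentity: ''   # left empty when using an origin access control
        DefaultCacheBehavior:
          TargetOriginId: s3-origin
          ViewerProtocolPolicy: redirect-to-https
          CachePolicyId: 658327ea-f89d-4fab-a63d-7e88639e58f6  # AWS managed CachingOptimized policy
```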

I began this process assuming that AI could be a tutor, as recommended in Co-Intelligence. However, I realized the mental process I went through was more like working with a co-worker, or maybe even a previous employee. What had I actually done to deploy this project? I examined code that had a bug, debugged it, and then brought its legacy setup up to date with current AWS best practices. In short, it was much like picking up a forgotten project that has, for some reason, become important to the company again.

This was my first, but not my only, observation from using AI. I have not, for instance, talked about AI as a tutor.

That’s for another blog post and another time, though.

Thanks for reading.