Monthly Archives: February 2025

How not to use AI

I’d like to share a story, and then I’d like you, the reader, to imagine what you’d do in the boss’s shoes.

You are reviewing a new client’s contract and looking at some numbers. Most of it is good, but one of the numbers is off. It appears, based on your prior experience, that it must have been calculated differently. Perhaps it was a typo. Maybe the wrong column from an Excel sheet got copied over. Perhaps there is something exceptional in this contract that you’re not aware of. So you send a message back to the employee who sent it to you and ask them to explain.

You get an email back. It appears slightly off-topic, but at least it’s a reply. The explanation you get confirms your first thought: this particular number was calculated differently than usual. You send a follow-up email and get another reply back. The answer continues to explain reasons for the change in calculation, but the answer does not satisfy you. So you speak with the employee in person to have them elaborate further.

At this point, you discover that this employee has been replying to recent emails with ChatGPT. You also suspect that this is how the contract, of which you have found one error, was generated to begin with.

How do you feel about this employee at this point? Perhaps not only as a subordinate, but also as a colleague, or worse, the person you yourself report to?

This anecdote is only partially fiction. A similar incident happened to a friend of mine. Except in this case, he was a direct report of someone who relied on ChatGPT to communicate. Needless to say, he finds his position less than satisfactory.

Does Tech Make Us Too Lazy Now?

It’s hard to know where to start with this, but I recall a psychology journal article from the early days of social media. It repeated several variations of a double-blind experiment, each iteration accounting for a different variable. Yet the same conclusion came up again and again: people frequently confused easy access to information, via the internet, with personal mastery of that information.

A more recent article from Futurism states a related conclusion: If you rely too much on AI, you atrophy your ability to think critically.

Yet I wonder, might there not be many out there who shrug their shoulders and say, ‘so what’? Or at the very least, behave that way as they type ‘ChatGPT, how do I get out of trouble with my boss today?’

AI is not your Relationship

It’s easy to say someone is lazy for replying to their co-workers with AI responses, but that’s not quite enough, is it? Communication, especially written communication, is always subjective and relational, shaped by both the writer and the audience. I feel silly pointing this out, but ChatGPT doesn’t know your co-worker like you do. It cannot, therefore, know your co-worker’s state of mind, behavior, expectations, or emotional state. Nor can it understand which stakes might matter. In the case of this anecdote, ChatGPT would not know whether the miscalculation might have legal or financial consequences. It cannot know if this is a trivial or a deal-breaking change.

AI cannot choose the right words or mention the relevant contextual points in your replies. Does this matter, though? Well, that depends on how much you value your working relationship with someone. If someone sees an email from a co-worker and considers a copy-and-paste ChatGPT response an adequate reply, I can’t help but think they value the relationship with the sender very little.

And this says nothing of the failures to communicate that inevitably follow.

AI is not your brain

Now let’s consider a deeper problem here. In the story described, an employee did something that resulted in this strange point of data. The employee could not explain the decision. In all probability, this is because the employee did not make a decision.

The employee did not think.

Most of our work is habit. We do not always understand why we do something, but we should at least be able to recognize when we have done something on autopilot. “I’m sorry, boss. I used our usual template contract and did not account for a currency exchange rate for this client,” could have been a good reply. But that’s not what happened here. The employee needed ChatGPT to make up an answer to a question she did not understand, despite it being her responsibility to understand it.

As the Futurism article noted, ‘atrophied and unprepared’ is not the right way for your brain to work. The mind is a muscle like any other, and if it is not used well, it can weaken. Many of us, myself included, are knowledge workers, so the sharpness of our minds is as important as an industrial worker’s hammer-wielding muscle. We cannot completely substitute thinking with automation. Even if we could, do we want to release important details and company secrets into an AI?

Can a knowledge worker, who prefers not to think, be an excellent employee?

“But AI is a tool!” some might point out. “A tool like a calculator! Or a book! You do not do complex math in your mind, nor commit paragraphs to memory like a Bronze Age cleric!” That is correct, of course. I have no more desire to do complicated math or memorize books than I do to row a boat, burn candles for light, or even send postal letters to distant relatives. Humans are tool makers. Let us celebrate our amazing tools!

But one podcaster pointed out that there is no thinking that is not verbal processing. I add that there is little emotional processing that is not also verbal. It can’t be said enough: doing both must be a habit. It must be exercised to be done well.

Do we value the content of our minds and hearts so little that we prefer ChatGPT to do that work for us? To be clear, AI can be useful in helping us think better, but it should not think for us. Doing so outsources our reasoning to an algorithm that is well known for making things up. It furthermore makes us dependent on a corporation whose motivations are opaque. It exposes us to something that can be manipulative.

In short, thinking poorly leads to a lack of freedom.

The consequences of bad thinking are too numerous to list. We all know terrible examples of the ‘sunk cost fallacy.’ We might consider the 19th-century ‘end times’ cults, where people upended their social and financial lives for nothing. We might shake our heads (even if we privately chuckle) when conspiracists get in trouble with the law. Thousands are in their graves because they believed their easy access to information about medicine made them experts in a novel virus. I’m sure many of them imagined themselves bold Copernicans. If only they understood how difficult it is to know something new.

If these aren’t enough examples, I have some digital currency to sell you. Your favorite TikToker promises it’s not a rug pull.

This is not to say that AI will necessarily deceive or manipulate. What I am saying is that an ‘atrophied and unprepared’ mind is an easy target for manipulation. In this light, a clumsy series of emails from one employee to her boss is trivial. What’s more important is this: do you wish to be free and have power over yourself? If so, then you’ll have to do the work of emotional and cognitive processing on your own.

At a minimum, we ought to be able to communicate why we made a particular decision in business. So please, don’t ask ChatGPT to do everything for you.