I ran a personal experiment during the first six months after ChatGPT attacked the Internet back in November 2022. These are some “brief” ramblings.

First of all, by saying “I missed the social media boat,” I mean I missed the boat in terms of joining various social media platforms and reconnecting with old friends (except WhatsApp, even though I find all the forwards very annoying). With regard to investments, on the other hand, as I’m sure many of you have experienced, some of my best long-term returns have come from the tech monopolies, including the social media giants.

Let’s now move on to some details of the “experiment.” There was no standalone app for GPT when it was first released, so I added a direct-link button that mimicked the app experience to the home screen of my phone, which, mind you, is intentionally barren. Prior to GPT and, eventually, Google Bard, I had only six buttons on my phone’s home screen: phone, messages, an app that controls outdoor speakers, music, podcasts, and notes. It was fairly distraction-free, almost like clear water. When I added the chatbot buttons, it was like covering my distraction-free phone with vanilla frosting and then sticking a layer of Frosted Flakes onto the frosting. I couldn’t get enough.

Within a week, my phone usage had increased substantially (probably 3x, maybe 5x) because I would look up every possible query that popped into my head. I’m not the smartest and most knowledgeable person out there, so let’s just say I was looking up a lot of obscure stuff, some of it laughable and completely unnecessary.

GPT provides a great (and somewhat embarrassing) history of all your searches, so I went back through my history and decided that this was my most embarrassing and completely unnecessary prompt:

“The location will be a backyard. There is a large grassy area with a large trees giving shade. I’m in need of a layout for round seating tables and rectangular banquet tables for food and drinks. The goal is to accessibility easy to food for guests. There will be three 72” round tables with 10 seats each. The two 8’ rectangular banquet tables will have food and drinks.”

May 2023

Rather than replying with, “This is a really stupid question, Faisal. Why don’t you apply your own brain for a split second and figure it out yourself? It’s only five tables, for A.I.-God’s sake,” GPT instead came back with a comforting (I’m paraphrasing here): “What a great question! Let me answer that for you in 0.0005 seconds.”

The answer involved a multi-step process for arranging the tables, supported by the kind of “bullshit-sounding” reasoning you hear in a meeting right before lunch, given by a know-it-all bullshitter who is getting paid 3x more than you. And if that wasn’t enough, GPT even drew a confusing old-school ASCII-art diagram to make me feel like I’d struck A.I. gold when, in reality, I was pretty much shooting myself in the head from an intellectual standpoint.

If you’re thinking, “WTF is wrong with this guy?”, that is exactly what I thought… about two weeks afterward, when I woke up from swimming inside a pool of “A.I.” glitter. Like I said, I am not the most intelligent person in the room, but thankfully, I am someone who eventually gets it. Eventually.

Most of the queries were, of course, things you would typically Google, so a casual user quickly learns why Google had this chatbot tech for so long but decided to bury it away in the company closet full of “inventions that are not good for our core advertising business.” It makes me curious what else is hidden in that closet.

Back to the risks and rewards of avoiding the A.I. chatbot revolution.

Both are double-edged in a sense, and the key is to figure out a moderate pattern of usage that is helpful to the user while not making the user intellectually lazy. Unfortunately for lazy humans, there is no moderate pattern of usage. It’s like giving a heroin addict a 30-day prescription for their favorite opioid and expecting them to use it with care and moderation.

Let’s take the risks, for example. The risk of over-reliance on an A.I. chatbot is that you start thinking it knows everything, when it obviously knows nothing. The tech is simply good at rehashing large amounts of text that it borrowed (or briefly “stole,” depending on your perspective) in the form of a large language model (i.e., tech-speak for “large compilation of pirated text from the Internet”). Using this text, the algorithm running the chatbot follows coded rules of language for arranging the text so that it falls within statistically-weighted guidelines.
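If you want a feel for what “statistically-weighted” means here, below is a toy sketch of my own: a word-level bigram model in Python. It is nothing like the neural networks real chatbots use (those work on sub-word tokens over billions of documents), and every name in it is made up for illustration. It just counts which word tends to follow which, and then rolls the dice:

```python
import random
from collections import defaultdict

# A tiny stand-in for the "large compilation of pirated text."
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count which words follow which (a bigram model): follows["the"]
# ends up as ["cat", "mat", "cat", "fish"], so "cat" is twice as
# likely to be picked after "the" as "mat" or "fish".
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# Generate a short, statistically plausible (but meaning-free) string.
word, output = "the", ["the"]
for _ in range(8):
    options = follows.get(word)
    if not options:  # dead end: nothing ever followed this word
        break
    word = random.choice(options)  # weighted by raw frequency
    output.append(word)

print(" ".join(output))  # e.g. "the cat ate the mat and the cat sat"
```

The point of the toy: the output is fluent-sounding because it is stitched together from patterns that actually occurred in the source text, and it is fluent-sounding whether or not it is true.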

And oh yeah, I forgot to mention one minor detail. It gets things wrong.

A lot.

But you’d rarely know it because whatever it got wrong might be buried inside a tasty soup of things that it got generally right. As you might have heard, the genius tech-bros are shrewdly using the word “hallucination” as a way to apologize for these errors that are generously sprinkled throughout. I think as people get burned by these unpredictable hallucinations, they will formulate a new type of relationship with this technology.

I’m sorry, but I can’t let it go so easily. “Hallucination” is too kind and generous in this context. Rather than “hallucination,” all chatbot text should be preceded by a shameless disclaimer: “There are glaring errors in our chatbot algorithm that we can’t seem to detect, so please don’t believe everything, because what you’re about to read might be total bullshit.” You must include the word “bullshit” because it really helps drive home the point.

The rewards of intentionally limiting access to these chatbots are tremendous. First, you need to convince yourself that you don’t need to know the answer to everything all the time. This takes time. We’re led to believe that we need to know everything, that we need to know it now, and that what we learned in a few seconds is all we need to know. I’ll let that sink in.

Instead, we need to know the answers to things that matter. The answers that matter need to be researched in a manner that promotes a deeper understanding of the topic in question, which is code for finding multiple sources, reading and learning from said sources, and then coming to a healthy understanding. Instant, app-based understanding is itself a hallucination. You can’t Uber Eats a good understanding of an important topic. Knowledge acquisition simply doesn’t work that way and shouldn’t work that way.

Of course, there’s a lot more I can write on this topic, but I think I’ve conveyed my general conclusions on these applied-statistics-powered chatbots.

Give me your thoughts below in the comments.
