AI. Shifting. Pivoting. Accepting? And Still Questioning.
Part Two of an Unexpected Series on AI (Originally published May 2023)
<Oh hi there. This is the second piece I’ve written about AI, and this one is from May of 2023. For the purpose of these pieces of writing, I’m mostly talking about generative AI. For the record, my writing — including this article — is unofficially certified A.I. free.>
The Space Race of AI — a.k.a. “Jane, stop this crazy thing!”
The incredible power and impact of AI +
The questionable ethics and risks of AI +
The fact that AI isn’t going anywhere =
The realization of not wanting to get left behind.
==============================================
Perhaps you feel the same way. So where does that leave us?
Here’s my answer: Observe it, learn it, apply it (to some extent), worry about it, and YES, question it.
It’s a fact: AI is here to stay. We know this. So the fork in the road is whether you run and hide from it, or dive deep into it. In my case, and maybe yours too, the fork is more like a spoon. Dip it into the hot soup, enjoy the flavor, but don’t burn your tongue. That’s my way of saying the way to go is to test, learn, absorb, watch, experiment, and accept that there is a place for AI in life & biz (whether you like it or not), while keeping that multi-fold hat of concern firmly on.

I believe it’s important to stay aware and current with developments and how they may (or may not) show up in our various lives; to use AI to the extent that it makes sense for you, in a positive, constructive, and ethical way; and at the same time to recognize the risks. Case in point: if you haven’t already seen the open letter circulating from the Future of Life Institute, take a look. It calls for a pause on the development of AI in order to mitigate risks to society and humanity. No small ask. Signers include Steve Wozniak (Apple co-founder). The letter is a doozy and doesn’t pull any punches. For that reason and more, I strongly encourage everyone to read it.
Whether or not the pause happens (I think the ship has already left the dock), the point of the letter is that everyone is steamrolling past everyone else to get ahead in the AI race, and there are SERIOUS ramifications to be concerned about. Oversight, rules of the game, and downright ethical implications are being overlooked in many cases; it’s the wild, wild west of AI development, without regard for safety and risk. Analogy: would you get in a self-driving car if you heard it had never been tested? Yeah, well. Me either.
NOT THE FIRST AI RODEO
It may seem new, but it’s not. AI in some form has been around for a long time (um, Alexa, GPS, Face ID, personalized social feeds, robotic surgery, chatbots, predictive texting, to name several). In fact, the field dates back to 1950. Don’t believe me? Google Alan Turing. It’s ok. I’ll wait.
But it hasn’t been around in the same way it is now. ChatGPT and its competitors crashed into our lives mere months ago, and the floodgates have opened. It’s the collective shock of how fast we (businesses, marketing managers, entrepreneurs, and every industry under the sun) went from living with the existing AI mentioned above to AI in the form of ChatGPT that has taken hold before our very eyes. And let’s face it: everyone is amazed OR freaked out, or both.
As with other major developments in tech, industry, invention, and automation, there comes a point where it becomes accessible to every person. That’s where we are. When I first heard about and tried ChatGPT, I was WAY skeptical, and for good reason. I still have my concerns (perhaps even more so), but I’ve gradually come around to realizing it’s better to learn and understand its possibilities and capabilities as they happen, instead of burying our heads in the proverbial sand and then being unable to catch up. I’ve begun to pivot: the best course of action is to stay current, and to accept AI, to some extent, as a tool. And to be clear, NOT (in my opinion) as a replacement for the brain’s capacity and capability to create and be creative. These may be early days, and AI may be in its infancy compared to what’s ahead, but I still hold to the theory that AI cannot replace the human factor in the ways that matter.

Does it make life easier? Sure, in some cases. Does it provide shortcuts for many users? Heck yeah. Are people using it to do the work for them? YES, in many cases. The further question goes to the ethics of that: when is it ok and when is it not? Which opens a whole other level of questions: does the party assigning the work know it may (or may not) be generated by AI, and does that matter? Not to mention the intellectual property of anything AI-generated, which I’ve decided will be the subject of my next article (um, I think this is moving into a series!).
TOOLS OF THE TRADE
When I look at ChatGPT as a tool, an assistant, etc., it’s easier to wrap my head around accepting it. When I look at it as completely taking over, that’s when I have issues with it, and where it feels too much like science fiction. As the technology rapidly advances and evolves, so does our capacity to react to it and accept it. I think of the dawn of the Industrial Revolution and see the analogy. We know how that turned out: most of us would be lost without our tech and our industrial advances. The difference, the game changer, the sci-fi-ness of it all, is that the current iteration of ChatGPT mimics our ability to do all the things. It’s not actually thinking for us, but it comes off like it is. And that’s where the rubber meets the road, making it scary to so many even as the capability is so appealing.
BE CAREFUL WHAT YOU WISH FOR. THE DEPARTMENT OF IRONY DEPARTMENT.
Heard of Geoffrey Hinton? He’s known as “The Godfather of AI.” He received his PhD in artificial intelligence nearly five decades ago and is considered one of the most respected voices in the field. Having made significant contributions to the development of artificial intelligence, Hinton recently quit his role at Google so he could speak out about the dangers of the technology he helped develop, telling the New York Times that he’ll be warning the world of the “existential risk” AI potentially poses to humanity, which he said is coming sooner than he previously thought. He says it’s not inconceivable that AI could lead to the extinction of the human race. This is not some conspiracy theory; this is looking at the “good vs. evil” quotient and seeing the forest for the trees if it ends up being the latter. The sheer POWER of AI could be used either way, or both.

Let’s discuss further. Have humans developed artificial intelligence to the point of it being so powerful and unstoppable that it’s on track to replace our brains and take over, potentially leading to disaster beyond anyone’s control? The double-edged sword is clear. The capability of ChatGPT is obviously astounding and amazing. But where does that leave the human, and where is the oversight? Why aren’t people LISTENING and PAYING ATTENTION to the what-ifs and the disclaimers that are right out front? What are the long-term ramifications if humans are replaced by machine learning? Will our brains, so capable of learning and applying knowledge, just turn to mush? Is it all happening at the peril of humans losing their ability to think and process and create on their own? I know I may sound like a skeptic, and I realize the pivot is real, but in this moment, it feels like the best course of action is to embrace it while at the same time question, question, QUESTION.
HELP OR HARM?
To be clear, of COURSE it would be aMAZing if AI technology could help cure disease (and of course it’s not that simple). I’m just concerned that in the current acceleration of AI tech, oversight may get sideswiped in the process, causing other unknown ramifications. In medicine, AI is being developed to diagnose diseases and recommend treatments. Let’s discuss. It could be amazing. Till it’s not. Can you imagine AI being able to pinpoint a cancer diagnosis? If AI technology can detect disease better and faster than other diagnostic tests, that would be a game changer. For real. Amazing. Cutting edge. But if (or when) the technology malfunctions and gets it wrong, where does that leave us? The Pew Research Center did a survey exploring public views on AI in health and medicine. The overarching common denominator: it’s all happening too fast. Without oversight. Sound familiar?
I see four factors that tend to freak people out when it comes to the current iteration of ChatGPT.
1. The “science fiction” aspect of it, i.e., taking over our brain’s capacity to think and write, basically doing the thinking for us. Though to be clear, it’s not a living, breathing entity; it’s the culmination of an insane amount of data that gets processed into written form and more.
2. The concern that it could quite literally or potentially replace jobs and careers, though to be fair, it is also leading to new jobs and careers.
3. The split-second proliferation of mis/disinformation that can be created and distributed widely in seconds.
4. The very real concern about the negative ramifications of AI to society and humanity.
RESPONSIBLE AI PRINCIPLES
Yeah. So we’re here. And it’s pretty much a free-for-all: a space race of sorts, of who can get out in front of the others to win the world of AI. Ok, yes, there’s room for everyone, right? But then why does it seem like everyone is speeding in the fast lane with no brakes? I’m not sure what will cause a catastrophe first: the AI itself, or the fact that everyone is rushing so hard to get out in front that they’re not considering the risks and safety concerns enough. Don’t believe me? Just watch. But hopefully, before things really get out of hand, Congress will step up and pass reasonable regulation. Of course the irony isn’t lost on me: humans developed the technology that has now advanced to the point of AI possibly (more than ever) taking over. This can get out of control real fast, and it’s already happening. If we build AI that is SO smart it can outsmart humans, what comes next? Read that again.
With all the 2001: A Space Odyssey vibes of it all, we may be at a point where “life imitates art.” The lines are blurred. Is it learning, or is it just appearing to learn? Ah, the smoke and mirrors of it all. Stay tuned.
“ChatGPT is so impressive, but it can make horrible mistakes.”
~ Steve Wozniak, Apple co-founder
Excerpt from the Future of Life Institute’s open letter calling for a pause on AI: “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.”
About AI, Steve Wozniak says, “We call it intelligence, but we don’t know how the brain really works.”
From 60 Minutes Overtime with Google CEO Sundar Pichai: even Pichai says AI will need to be regulated, telling 60 Minutes, “AI will be as good or as evil as human nature allows, and the revolution is coming faster than you know.”
For those of a certain age, remember “Jane, stop this crazy thing!” from The Jetsons?
#ai #artificialintelligence #copywriter #google


