What Are the Dangers and Current Reality of AI?

Written by Chris Picard

Chris explains to Melissa the role AI has in our business.

There has been a lot of buzz around AI in recent months, and with that buzz has come a lot of fear. After all, our movies involving AI tend to depict something sinister that turns against humans; the reality of our current situation is quite different. That is not to say there are no concerns with this technology.

In this blog post, I will cover three topics:

  • The current state of AI

  • The actual concerns over AI

  • What can be done about these concerns


What might people think when they hear the word AI?

One goal of AI is, as we see in the movies, to create artificial life: a computer that acts like a human or has human-like abilities. This is what you see in most movies and TV shows. Yet we have not reached this goal, and I don’t see it happening in our lifetime, our children’s lifetime, or even their children’s.

The reality and current state of AI

So, what is ChatGPT? It is a predictive text algorithm. This means it analyzes a vast dataset of human writing and predicts the best word to write next. That’s it! It has taken the world by storm because of how good it has become at that task, but it is no more human than any other AI algorithm. The current state of AI is algorithms that mimic a subset of human behaviors without real intelligence. Given that context, let's talk about the concerns there are about AI.
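To make "predict the next word" concrete, here is a minimal toy sketch: count which word follows each word in a small sample of text, then predict the most frequent follower. This is only an illustration of the idea; real systems like ChatGPT use large neural networks trained on enormous datasets, not simple word counts, and the corpus below is invented for the example.

```python
from collections import defaultdict, Counter

def train(corpus: str) -> dict:
    """Build a table mapping each word to a Counter of the words seen after it."""
    words = corpus.lower().split()
    table = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        table[current][nxt] += 1
    return table

def predict_next(table: dict, word: str):
    """Return the most frequently observed next word, or None if unseen."""
    followers = table.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Tiny made-up corpus: "cat" follows "the" twice, "mat" once.
table = train("the cat sat on the mat and the cat slept")
print(predict_next(table, "the"))  # prints "cat"
```

Scale this idea up from single-word counts to patterns learned across billions of sentences, and you have the essence of what predictive text systems do, without any human-like understanding behind it.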

Concerns about AI

1. AI without human involvement

A subset of AI takes actions without a human involved in the process. Some examples are self-driving cars, stock trading applications, and autonomous drone aircraft. These types of AI can become dangerous because they lack morality, intuition, and adaptability. Instead, they have a set of programmed decisions that dictate how to act in a given situation.

What can we do about this concern?

With these systems, it is challenging to identify all the edge cases they might face and program the proper responses. An excellent example of this breaking down came in 2013, when the Associated Press’s social media account was hacked. The hacker posted that there had been an explosion at the White House and that the President of the U.S. had been injured. Before human traders could even realize what was happening, the stock market sank, and trading had to be paused. When dealing with these types of AI, it is essential to build in safeguards or to keep humans in the loop, even in a minimal capacity.

2. Automation of work

Another big concern about AI is the automation of work. People are worried that AI will soon replace their jobs. I don’t think this will happen quickly for most jobs, nor will employers outright replace people with fully autonomous AI programs. What will change is that manually intensive, easily repeatable tasks, like note-taking and spreadsheet management, will be handed off to AI. In some cases, this will result in fewer employees in particular roles.

What can we do about this concern?

If you are in this position, the best thing you can do is learn how to use AI. Someone who already knows how to interact with a chatbot and tell it what tasks to perform has much higher job security than someone who resists the technology.

3. Misinformation

Finally, the issue I fear most with AI is misinformation. Even before AI, our society had an epidemic of misinformation. AI, used incorrectly, can give that misinformation a megaphone: it can churn out massive amounts of content quickly, drowning out legitimate content, swaying people's opinions, and making accurate information extremely difficult to find.

What can we do about this concern?

Some projects are trying to combat misinformation; however, changing our own behavior is the best way to fight this issue. Knowing that misinformation is a problem, we must start questioning the accuracy of what we read and hear and check whether a reliable source can back up the claim. If we do this, we can at least limit the impact of AI-fueled misinformation.


I hope this post gives you perspective on the actual dangers of AI and explains the technology in an easily digestible way. If you want to continue the conversation, connect with us on LinkedIn or here.
