Catastrophic Forgetting, Hallucinating, Poisoned Models…Is AI OK?

Austin Chia, contributor to the CareerFoundry Blog.

Artificial intelligence is getting so advanced that it’s now capable of mimicking human abilities across a range of tasks, such as understanding natural language, generating marketing content, and solving problems.

However, with this advancement comes new concerns, such as catastrophic forgetting, hallucinating, and poisoned models.

Although these might sound like plot points from a science fiction movie, they are rapidly becoming real issues in the world of AI.

To help you understand these terms better, I’ve put together a list of unique terms used in AI and their implications.

Here’s what we’ll cover:

  1. Catastrophic forgetting
  2. AI hallucination
  3. Poisoned models
  4. Dead neuron
  5. Exploding gradients
  6. AI fallibility
  7. Blackbox AI

Curious to know what they all mean? Let’s take a look at seven unique AI terms you should know.

1. Catastrophic forgetting

First up, this term sounds like something straight out of a disaster movie, but it’s actually a phenomenon that occurs in AI systems.

Catastrophic forgetting occurs when a trained neural network “forgets” previously learned information while learning new information. It happens because the network’s weights are overwritten as it adapts to new data.

For example, if an AI model learns to identify cats and then later has to learn to identify dogs, it may “forget” how to recognize cats. This can have severe consequences in AI systems used in critical tasks, such as self-driving cars or medical diagnosis.

This issue arises largely because standard neural networks aren’t built for incremental learning the way a human brain is. A human brain can retain previously learned information and build on it, whereas a conventionally trained neural network struggles to do so.

However, there are some strategies to counter this phenomenon, such as the following (one of them, EWC, is sketched after the list):

  • Progressive neural networks (PNNs)
  • Elastic weight consolidation (EWC)
  • Generative replay
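
Elastic weight consolidation works by adding a penalty that discourages weights that mattered for the old task (measured by their Fisher information) from drifting while the model learns the new task. Here’s a minimal sketch of that penalty in PyTorch — the names old_params and fisher_diag are illustrative assumptions of mine, not part of any particular library:

    import torch

    def ewc_penalty(model, old_params, fisher_diag, lam=1000.0):
        """Quadratic penalty keeping weights that were important for the old task
        (high Fisher information) close to their previously learned values."""
        penalty = torch.tensor(0.0)
        for name, param in model.named_parameters():
            penalty = penalty + (fisher_diag[name] * (param - old_params[name]) ** 2).sum()
        return (lam / 2.0) * penalty

    # During training on the new task, the penalty is simply added to the usual loss:
    # loss = new_task_loss + ewc_penalty(model, old_params, fisher_diag)

The idea is that the model stays free to change weights the old task barely used, while weights the old task relied on are “elastically” anchored in place.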

2. AI hallucination

Next up, we have AI hallucinations. Before I get into it, it’s important to note that this AI term isn’t what you might think. In the context of generative AI, hallucination refers to an AI system confidently producing objects, patterns, or “facts” that aren’t actually present in its data.

Hallucinations in AI occur when a machine learning model produces incorrect outputs or predictions due to biases or flaws in the training data.

These hallucinations can have hugely harmful consequences, especially if the AI is used in high-stakes applications.

For example, if you ask an AI to summarize a company’s financial report, it may hallucinate and produce false figures. This can mislead decision-making and lead to financial losses.

As a result, it’s vital that not just machine learning engineers and data professionals are aware of this possibility, but practically everyone who works with tools like ChatGPT or Google’s Gemini.

3. Poisoned models 

Another concerning aspect of AI is the concept of poisoned models.

Poisoned models occur when an attacker deliberately manipulates or poisons the training data to compromise the performance of an AI model.

This approach can be used for malicious purposes, such as causing an AI to make incorrect predictions or decisions, leading to disastrous consequences.

For example, an attacker could manipulate a self-driving car’s training data, causing it to misidentify road signs and potentially cause accidents.
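
To make the idea concrete, here’s a toy sketch (not a real attack) of the simplest form of data poisoning — label flipping — where an attacker silently flips a fraction of the training labels. The function name and parameters are purely illustrative:

    import numpy as np

    def poison_labels(y, flip_fraction=0.1, num_classes=2, seed=0):
        """Return a copy of the label array y with a random fraction flipped to other classes."""
        rng = np.random.default_rng(seed)
        y_poisoned = y.copy()
        n_flip = int(len(y) * flip_fraction)
        idx = rng.choice(len(y), size=n_flip, replace=False)
        # Shift each chosen label by a random non-zero offset so it lands on a wrong class
        y_poisoned[idx] = (y_poisoned[idx] + rng.integers(1, num_classes, size=n_flip)) % num_classes
        return y_poisoned

A model trained on the poisoned labels quietly learns the wrong associations, which is exactly why curating and auditing training data matters.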

There’s also a concern about AI poisoning itself, where AI-generated content published online ends up in the training datasets of future AI models. This feedback loop can produce biased or inaccurate outcomes.

4. Dead neuron

This sounds scary and harmful (and like something in chemistry or biology), but it’s actually a common occurrence in AI models.

In neural networks, a dead neuron is one that stops activating during training and therefore stops learning.

It typically occurs when a neuron using the Rectified Linear Unit (ReLU) activation only ever receives negative inputs: ReLU outputs zero for those inputs, the gradient through it is zero, and the neuron no longer contributes to the model’s output.

Dead neurons can occur for various reasons, such as faulty initialization, high learning rates, or vanishing gradients.

These dead neurons can significantly affect the performance of an AI system and require careful handling during training to avoid their occurrence.

To fix a dead neuron, here’s what you can try (see the sketch after this list):

  • Adjusting the learning rate
  • Using different activation functions such as Leaky ReLU
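
To see why Leaky ReLU helps, here’s a tiny NumPy comparison — purely illustrative — of the two activation functions. Standard ReLU outputs zero (and passes back zero gradient) for every negative input, which is how a neuron gets stuck “dead,” while Leaky ReLU lets a small signal through:

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    def leaky_relu(x, alpha=0.01):
        # Negative inputs are scaled by a small slope instead of being zeroed out
        return np.where(x > 0, x, alpha * x)

    x = np.array([-2.0, -0.5, 0.0, 1.5])
    print(relu(x))        # [0.    0.    0.    1.5 ] -> negative inputs contribute nothing
    print(leaky_relu(x))  # [-0.02 -0.005 0.    1.5 ] -> a small signal (and gradient) survives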

5. Exploding gradients

On the other hand, we have exploding gradients, which can also affect an AI model’s performance.

Exploding gradients occur when the gradients in a neural network grow exponentially during backpropagation, producing huge weight updates and causing unstable training and inaccurate predictions.

This issue is prevalent in deep learning models with many layers.

To explain this in a simple way, imagine if you were trying to balance a tower of blocks, but the blocks kept getting heavier and heavier, making it impossible to maintain balance.

Similarly, in AI models, exploding gradients can lead to unstable training and inaccurate predictions.

To avoid this issue, some techniques used by data scientists include the following (gradient clipping is sketched after the list):

  • Gradient clipping
  • Adaptive learning rate methods
  • Using different activation functions
  • Adjusting learning rates
  • Normalizing data
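
Gradient clipping is the most common of these fixes. As a minimal sketch — PyTorch provides this out of the box as torch.nn.utils.clip_grad_norm_, but it’s written out here to show the idea — you rescale all gradients whenever their combined norm exceeds a threshold:

    import torch

    def clip_gradients(model, max_norm=1.0):
        """Rescale all gradients so their global norm does not exceed max_norm."""
        grads = [p.grad.detach() for p in model.parameters() if p.grad is not None]
        total_norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
        if total_norm > max_norm:
            scale = max_norm / (total_norm + 1e-6)
            for p in model.parameters():
                if p.grad is not None:
                    p.grad.mul_(scale)

    # Typical training step: compute the loss, call loss.backward(), clip, then optimizer.step()

This keeps any single update from blowing up, no matter how steep the loss surface gets.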

6. AI fallibility

AI fallibility refers to the fact that AI systems can make mistakes or produce incorrect results despite having been trained on vast amounts of data.

This phenomenon highlights the limitations of AI and the importance of human supervision and intervention in decision-making processes involving AI.

Although AI is becoming increasingly accurate, it’s still not perfect, which makes it critical to consider its fallibility when using it in important tasks.

But, with proper design and training, AI systems can be made more resilient and less prone to errors.

For example, GPT-3, an earlier large language model from OpenAI, was found to produce biased and offensive text outputs. OpenAI addressed this with further fine-tuning and safety work, which fed into ChatGPT. Even more recently, Google had to briefly pause Gemini’s image-generation feature after it produced historically inaccurate and racially controversial images.

7. Blackbox AI

With the recent interest in AI, many companies have started building their own AI models.

This has led to the creation of “blackbox” AIs: models trained on proprietary data whose inner workings are opaque. Companies usually keep them closed as a form of gatekeeping, to prevent their AI technology from being copied by competitors.

However, this lack of transparency can pose ethical concerns as the decisions made by these blackbox AIs cannot be easily explained or understood by humans. This makes it challenging to identify and address any machine learning biases.

There’s a growing need for more transparent and explainable AI systems, especially in industries such as healthcare and finance, where decisions made by AI can have significant impacts on individuals and society as a whole.

To counter this issue, researchers have suggested ways to make AI models more transparent, such as:

  • Using Explainable AI (XAI) techniques (a small example follows this list)
  • Creating regulations that require companies to disclose their AI models’ inner workings
  • Enforcing ethical guidelines and accountability measures for companies developing and using AI technology.
  • Encouraging collaboration and knowledge sharing among AI researchers to promote more transparent and explainable AI systems.
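
As a small taste of what XAI looks like in practice, here’s a sketch using permutation importance from scikit-learn — one simple, model-agnostic way to ask an otherwise opaque model which features it actually relies on. The dataset and model here are just placeholders for illustration:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure how much the test score drops:
    # a large drop means the model leans heavily on that feature.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    top = sorted(zip(X.columns, result.importances_mean), key=lambda pair: -pair[1])[:5]
    for name, score in top:
        print(f"{name}: {score:.3f}")

Techniques like this don’t open the black box completely, but they do give humans a way to sanity-check what a model is paying attention to.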

8. Final thoughts

AI has made remarkable advances in recent years, and with them has come a flood of new and strange-sounding terms.

However, it’s essential to understand these terms to gain a deeper understanding of the capabilities and limitations of AI technology. From catastrophic forgetting to blackbox AIs, I hope this simple guide has helped you understand the underlying issues in developing reliable AI systems!

Looking to get some basics down in machine learning and AI? CareerFoundry’s Machine Learning with Python specialization course is a good place to start!


