According to a CBC News article, experts are concerned that AI-generated Indigenous content may undermine Indigenous language revitalization efforts. AI can invent words in languages such as Halq'eméylem, perpetuate stereotypes, and mislead people seeking to reconnect with their culture.
CBC consulted Dr. Michael Sherbert, a member of Algonquins of Pikwakanagan First Nation and an expert in AI ethics. Sherbert explained that AI-generated Indigenous content is drawing attention away from authentic Indigenous content because of the flood of misinformation AI can produce.
“You could say that the AI is inadvertently colonizing and hurting Indigenous language revitalization because [people] are taking information generated by an artificial intelligence and putting it out there for people to read.”
Furthermore, AI programs such as ChatGPT rely on guesswork and will often fabricate content to produce an answer when they cannot find a real one. Because information about Indigenous cultures and languages is limited on the internet, AI will often generate false content, including elder teachings, language, culture, and history. While there are teams dedicated to making AI models more ethical, there is no governance system in place to monitor AI behaviour and usage.

