
The future is closer than we think

This article was published on November 7, 2018 and may be out of date. To maintain our historical record, The Cascade does not update or remove outdated articles.

When you hear the words “artificial intelligence,” what do you think of? Computers that can respond when we talk to them and solve problems for us? Cars that can drive themselves? Flying drones that adapt to their environment? Or, do you think about an image of the future straight out of science fiction, where robots are eerily similar to humans — a future where they look like us, talk like us, think like us?

While we’re not at the stage dozens of science fiction movies have warned us about, AI is becoming more and more prevalent: smartphones, laptops, and even Netflix use machine learning to assist or offer recommendations. Many of us have Siri or Alexa help us throughout the day, and all of us are affected by targeted advertising, which learns from the websites we visit, the kind of content we tend to consume, and even, reportedly, our conversations. Jobs are becoming more automated, and even autocorrect learns which words we type most often and can recommend a word from the first letter entered. Increasingly, that distant sci-fi future is intersecting with the present.

Recently, the University of British Columbia (UBC) unveiled seven artificial intelligence floor-cleaning robots that are programmed to learn and adapt to their surroundings. The introduction of these machines raises several questions about the displacement of jobs due to advancements in artificial intelligence, and the future of AI in general.

The implementation of these cleaning robots at UBC raised immediate concerns among the custodial staff about their jobs being replaced by machines. However, both UBC and the robotics company that designed and built the devices, A&K Robotics, envision a future wherein humans and robots coexist in the workplace, with machines doing simple, repetitive tasks so human staff can focus on more challenging endeavours.

In an interview with the CBC, A&K Robotics co-founder Anson Kung said the robots’ introduction heralds a new chapter in history.

“Today marks the beginning of a new future,” said Kung. “A future where robots will become as commonplace as the phones in our pockets, a future where robots and people will work hand in hand to increase productivity, safety, and most important of all, our quality of life.”

In this way, the cleaning robots are meant to complement the existing staff, and custodians’ concerns about being replaced have been assuaged. That doesn’t mean the threat of replacement is over for the rest of us, though.

Dr. Gabriel Murray received his PhD in informatics from the University of Edinburgh. He is now an associate professor of computer information systems at UFV, where he teaches courses on AI and data mining.

“In the near term, and actually currently, we are seeing a big impact from automation,” Murray said. “I think what’s going to happen there is that there is going to be some job loss from AI automation, and I think that’s something we need to be realistic about; it is already happening.”

According to Murray, the first tasks to shift to artificial intelligence will be those that can be easily automated. Already, we see a shift towards automation in grocery stores with the introduction of self-checkout machines.

“That’s AI in the sense that it’s an automated technology, whether it’s intelligent or not — it involves some sort of intelligence of being able to scan barcodes and look up things in a system. It’s not a super sophisticated AI, but it is an example of an AI that’s changing our lives,” Murray said.

But could this displacement be a good thing? As with UBC’s cleaning robots, introducing AI to take over simple tasks can free up human staff for more complicated, perhaps more important, tasks. Automating some aspects of a job doesn’t wipe out the job altogether; grocery stores are not completely devoid of human workers because of the introduction of self-checkout machines. Instead of worrying about how jobs will be wiped out, could we turn our attention to how jobs will be redesigned and businesses re-engineered to adapt to AI advancements?

“Increased automation, coupled with something like a guaranteed basic income, could free people up to do more creative endeavours,” Murray suggested. “We might find that as people transition away from the types of jobs that they have now, they have more time for things like creative pursuits, artistic pursuits, and travel.”

It’s certainly an interesting idea, because what are the lasting parts of human culture? Arts. As our technology, our medicine, our forms of travel advance, and old modes of doing things fade into obscurity in the face of better and more efficient models, art perseveres: we still read Beowulf, we praise Michelangelo’s works, we admire the architecture of the Notre-Dame Cathedral. A future with technology that seemed attainable only by imagination could also be a future of flourishing artistic focus; a sort of second Renaissance.

As idealistic as this imagined future could be, we can’t totally eliminate the possibility of a fantastical sci-fi world of evil robot overlords.

Catherine Stinson is a postdoctoral scholar at the Rotman Institute of Philosophy. In an article published in the Globe and Mail, she discusses the positives and negatives of advancing this area of technology.

“Stories about AI now appear in the daily news, and these stories seem to be evenly split between hyperbolically self-congratulatory pieces by people in the AI world, about how deep learning is poised to solve every problem from the housing crisis to the flu, and doom-and-gloom predictions of cultural commentators who say robots will soon enslave us all. Alexa’s creepy midnight cackling is just the latest warning sign,” writes Stinson.

She cites the now 200-year-old cautionary tale of Dr. Frankenstein, whose story reads as an ironic warning both to his creature then and, perhaps, to our creations now.

“Victor Frankenstein literally runs away after seeing the ugliness of his creation, and it is this act of abandonment that leads to the creature’s vengeful, murderous rampage,” she continues.  

“Frankenstein begins with the same lofty goal as AI researchers currently applying their methods to medicine: ‘What glory would attend the discovery if I could banish disease from the human frame and render man invulnerable to any but a violent death!’ In a line dripping with dramatic irony, Frankenstein’s mentor assures him that ‘the labours of men of genius, however erroneously directed, scarcely ever fail in ultimately turning to the solid advantage of mankind.’ Shelley knew how dangerous this egotistical attitude could be,” she concludes.

Arrogance needs to be counterbalanced by ethics, which, apart from job loss concerns, is the primary issue in AI right now. Technologies that use AI are only as intelligent as the data they’re trained on; when we feed machines data, we feed them our own limitations, so that our biases and blind spots are programmed right in. AI algorithms absorb the information they’re given, search it for commonalities, and then make predictions based on the patterns they pick up.

Machine learning, a branch of AI, involves a computer being taught to perform tasks by analyzing patterns rather than by applying rules it has been given. This model typically involves supervised learning, wherein a programmer assembles data and assigns labels to it so the system knows what to look for; the system then learns from the provided set of examples. In supervised learning, the supervisor penalizes incorrect answers, and the system absorbs that feedback and adjusts so it can improve its predictions.
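To make that concrete, here’s a minimal sketch of supervised learning in Python, using the scikit-learn library. (The fruit data, labels, and feature names are invented for illustration; real systems train on far larger datasets.)

```python
# A toy supervised-learning example: each fruit is described by
# [weight_in_grams, skin_smoothness], and the labels tell the
# learner what to look for. All data here is hypothetical.
from sklearn.tree import DecisionTreeClassifier

features = [[140, 1], [130, 1], [170, 0], [180, 0]]  # labeled examples
labels = ["apple", "apple", "orange", "orange"]      # supervisor-provided answers

model = DecisionTreeClassifier()
model.fit(features, labels)  # the system finds patterns in the examples

print(model.predict([[160, 0]]))  # predicts a label for unseen data: ['orange']
```

The point isn’t the fruit; it’s that the machine is never handed a rule like “oranges are heavier.” It infers one from whatever examples we choose to give it.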

“Deep learning” is a leading paradigm in AI, modeled after the neural networks in human brains, in which the strengths of connections are adjusted through learning processes over time. Many currently hope it will be the saviour of the AI world.

Deep learning can be used for facial recognition. In this application, programmers still supply labeled data, but rather than being told which features are important indicators, the computer extracts its own.
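As a rough, hypothetical illustration of that connection-strength idea (a single-layer toy that learns the logical AND function, not actual deep learning or facial recognition, and assuming only the NumPy library):

```python
# A toy "network" that learns logical AND by repeatedly nudging its
# connection strengths (weights) until its predictions improve.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # four training inputs
y = np.array([0, 0, 0, 1])                      # target outputs (logical AND)

weights = rng.normal(size=2)  # connection strengths, initially random
bias = 0.0

for _ in range(5000):  # the learning process, repeated over time
    pred = 1 / (1 + np.exp(-(X @ weights + bias)))  # current guesses (0 to 1)
    error = pred - y
    weights -= 0.5 * (X.T @ error) / len(X)  # strengthen or weaken connections
    bias -= 0.5 * error.mean()

print(np.round(pred, 2))  # guesses drift toward [0, 0, 0, 1]
```

Deep learning stacks many such layers of adjustable connections, which is precisely why its internal reasoning becomes so hard to trace.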

“Deep learning has led to significant advances, but they tend to be very complex models, which means that they’re very hard to understand, and that lack of transparency is sometimes called a ‘black box’ because it’s making a prediction, but we don’t know exactly why it’s making that prediction,” Murray explained. “Something goes in, and something gets spit out, but we don’t know exactly what’s happening inside.”

This is where AI becomes alarming and inspires science fiction nightmares. Humans have a tendency to be afraid of things we don’t understand, or can’t see. (After all, we’ve all, at some point, been afraid of what might be lurking in the dark.) A machine that can think for itself in ways we’re not privy to terrifies us. What’s stopping it from questioning its service to us and rising up against us? We can’t ask a machine to explain why it made a certain decision the way we can with other people, and that makes identifying biases in these decisions difficult.

“It might just spit out an answer, and actually figuring out what were the steps that led to that answer aren’t obvious. Or it could be that when you actually start to look for the answer, it’s something that’s troubling,” said Murray.

This is where the data we input becomes essential. AI systems make predictions based on past patterns, but those patterns depend on the data they’re given. If we use AI in the hiring process to suggest whom to employ, and the data we’ve fed it consists of only white employees, it’s not a stretch to suggest that the system is going to reject any applicants who don’t fit that all-white pattern. Or, if the system has been fed an employee list that is predominantly male, it may teach itself that male applicants are preferred.
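A hypothetical toy example makes the danger plain (again in Python with scikit-learn; the applicant data is invented): train a model on hiring records in which only men were hired, and it will happily treat gender as the deciding signal.

```python
# Invented hiring records: features are [years_experience, is_male].
# Every hired applicant in this toy dataset happens to be male, so the
# model learns gender, not qualification, as the pattern.
from sklearn.tree import DecisionTreeClassifier

past_applicants = [[5, 1], [6, 1], [4, 1], [7, 1], [6, 0], [5, 0]]
was_hired       = [1,      1,      1,      1,      0,      0]

model = DecisionTreeClassifier().fit(past_applicants, was_hired)

# Two equally experienced applicants, differing only in the gender flag:
print(model.predict([[6, 1], [6, 0]]))  # [1, 0]: the bias is baked in
```

Nobody wrote “prefer men” anywhere in that code; the preference came entirely from the data.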

In fact, it’s already happened: according to Isobel Asher Hamilton of Business Insider, Amazon had to pull its experimental AI hiring tool because of gender bias. The machine observed that predominantly male résumés had been submitted over the last 10 years, took this as an indication that male applicants were preferable, and started penalizing résumés that contained the word “women’s” or that came from applicants who had attended all-women’s colleges.

AI is also increasingly being implemented in policing and federal security. In 2014, the CBC reported that the Calgary police were the first to implement facial recognition software to compare mugshots with video surveillance. While facial recognition has improved, it still has a long way to go until it’s a foolproof method.

“You can never establish certainties; you can only establish probabilities of matches,” states Kelly Gates, author of Our Biometric Future, in the article.

According to the CBC, the Canada Border Services Agency announced intentions to implement facial recognition kiosks to compare people’s faces to their passports. Additionally, the CBC reported that last summer Vancouver became the first Canadian city to follow in the footsteps of the United States and implement predictive policing, although Vancouver is using it to predict break-ins rather than recidivism.

“The problem is in some areas of the States, and possibly in other areas as well, they use a software system to help make those predictions about whether somebody’s going to reoffend and it’s a proprietary system, meaning it’s a company’s product and we’re not able to look under the hood and figure out what it’s actually doing,” Murray said of AI being used in the criminal justice system to predict recidivism.

Proprietary systems and the data their software is trained on are kept from public view, making it even more difficult to figure out if an algorithm is biased against someone because it was trained on human biases.

The nature of the data given can have an implicit effect, according to Murray. If the system was given data from a particular state or city, the demographics of that area will become “baked into” the prediction model, and this can have unexpected effects.

“It could make predictions that African Americans are much more likely to reoffend, for example, because it happened to have been trained on a data set that contained mostly African Americans. So that’s the kind of danger that you can get in, where it’s maybe not intentional human biases but if you’re not careful about the data that’s being fed into your system and how your system is using that data, you can end up with predictions that really are just enforcing implicit biases,” said Murray.

With the implementation of AI at UBC, it is evident the future is on our doorstep. AI on a university campus is no longer relegated to labs and select students; it’s roving the hallways alongside people from every faculty. Even if the progression of AI doesn’t mean widespread job loss, these systems are becoming deeply involved in our working lives: they’re sorting our résumés, inputting our data, answering our phones, and cleaning our floors. They might be the difference between receiving a job offer and being passed over for a résumé that fits the programmed algorithm better. Going forward, we need a conscious awareness of the biases we’re programming into these machines, and we need to implement ethical directives to counteract them. If we’re looking at a future where we work side by side with machines that can think for themselves, how we teach them to think is essential. The world around us is becoming increasingly automated; AI is not the distant future we envisioned. It’s happening now, and we need to be aware of it, or endure the repercussions of apathetic arrogance.

Image: Kayt Hine
