
The Future of AI – An Interview with Mr. Ken Morrison

Technology Integrationist, Teacher & Apple Distinguished Educator

In 1942, American science-fiction author Isaac Asimov published a short story called Runaround. Set in the year 2015, it revolves around two engineers running experiments on the planet Mercury. A robot is sent along to help them, programmed to follow three rules: a robot may not injure a human being or allow a human to come to harm; a robot must obey orders given by humans as long as those orders do not clash with the first law; and a robot must protect its own existence without breaking the first or second laws. The plot reaches its climax when the robot, caught between conflicting laws, begins to “run around” in circles, and one of the engineers has to step in and use the first law to his advantage.

What’s fascinating about Asimov’s science fiction is how much of it foreshadows the world we live in today, where artificial intelligence is a reality and we have to consider how to build ethics into the smart technology we use for our own benefit. At the time, Runaround seemed implausible: people feared the invasion and wrath of robots and could not imagine using them to their full potential. Asimov’s story was also one of the earliest explorations of the concept we now call AI.

In modern times, technology has advanced dramatically beyond Asimov’s stories, and it has become a tool we can’t easily give up. It’s hard to look around and not find a piece of technology, smart or otherwise, whether it’s the computer you use for work, the GPS built into your car, or the device you’re reading this article on. Furthermore, the Fourth Industrial Revolution (IR 4.0) has made it possible to develop AI: technology that uses algorithms to work out which products or media to recommend to you, that directs us to our destinations and away from traffic, and that we can even talk to (e.g. Siri and Google Assistant).

Notwithstanding these advancements, many ethical issues and uncertainties arise. Are robots going to turn against the human race, as past generations predicted? How do we know it’s safe to hand over personal information for AI to use? How does machine intelligence compare with human intelligence? What does the future of AI hold for us? Well, it’s unlikely that robots will turn against the human race; in fact, they can contribute a great deal to societal advancement and even help keep us safe. To better understand AI, we sat down with Mr. Ken Morrison, technology integrationist and computer science teacher at Raffles American School.

Mr. Morrison has been a teacher for over a decade and is a Google Certified Educator and an Apple Distinguished Educator. He helps teachers find ways to make education more engaging with technology, and he introduces students and teachers to the latest learning skills, tips, and tools so they become more comfortable with new technology trends.

Interview

 

Technology

Give a brief description of AI and its significance. 

At this stage in human history, I feel it is important that people realize that much of what is called Artificial Intelligence (AI) is still just computers following algorithms created by humans. I like IBM’s official definition of modern AI: “Artificial intelligence (AI) is a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence.”

Its near-future significance is that AI can be used to do many of the mundane things that humans do, but in a much more efficient way and with fewer errors.  For that reason, humans will need to prepare for a world where we are working alongside (or managing) programmed devices that can replace some of our traditional tasks or do them at a larger scale.

The lines will continue to blur between big data, the Internet of Things (IoT), human-made algorithms, machine learning, and true artificial intelligence.

How can we use AI in education? 

I feel there are two streams for this question. One would be regarding using AI to guide students, and another is how we can prepare students for a world with significant AI influence.

This past year has shown how important teachers are in helping students deal emotionally with challenges large and small. I feel that there are enough concerns out there about AI that it will be several decades before a corporation will be able to persuade governments to allow AI to lead a class of students. But I can see AI being used very soon to draw on student data to adjust the pace of instruction as well as personalize it. Math word problems could be personalized based on student interests while the math concepts remain the same. Reading passages could be adjusted for reading level at the paragraph or even sentence level. AI could recommend homework based on engagement, energy level, understanding, and performance across different topics throughout the day. In the much further future, if a student gave a vague answer on a quiz hoping to be close enough to earn half credit, AI could quickly create follow-up questions asking the student to be more specific.
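To make that idea concrete, here is a minimal sketch of the kind of rule-based personalization described above. The thresholds, field names, and sample data are purely illustrative assumptions, not part of any real adaptive-learning system.

```python
# A minimal sketch of rule-based pacing and homework recommendations.
# All thresholds, field names, and data are hypothetical illustrations.

def recommend_next_step(student):
    """Suggest pacing and homework from simple quiz-performance rules."""
    score = student["latest_quiz_score"]      # 0-100
    engagement = student["engagement_level"]  # "low", "medium", "high"

    if score < 60:
        pace = "slow down and revisit prerequisites"
        homework = "short practice set on the same concept"
    elif score < 85:
        pace = "continue at the current pace"
        homework = "mixed review problems"
    else:
        pace = "accelerate to the next topic"
        homework = "one open-ended challenge problem"

    if engagement == "low":
        homework += ", framed around the student's stated interests"

    return {"pace": pace, "homework": homework}

print(recommend_next_step(
    {"latest_quiz_score": 72, "engagement_level": "low"}
))
```

Even a simple rule set like this shows how quickly decisions about pacing come to depend on what data is collected about each student.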

Regarding how we can prepare students for a world with significant AI influence, teachers can help students identify algorithms and discuss the real and unintended effects of those algorithms. We can also discuss with students how we could design more equitable algorithms. When we allow algorithms to follow programmed choices that affect real people, we can discuss who gets financially affected or left out of the conversation. When time allows, I think it can be valuable to have students attempt to reverse-engineer an algorithm and try to create one (without code) that is more efficient, more just, and so on.

Why is ethics important when AI is in the discussion?

Because AI is still a new concept for most of society, it is important that young people are equipped to speak, vote, and research wisely on issues surrounding the topic.

Many algorithms learn from data that reflects users’ opinions. Those opinions may change from day to day, but the data can have unintended consequences. A very sad example of discrimination is the chatbot Microsoft created and released on Twitter: it learned to make racist posts and comments within just a few hours, simply by learning from other people’s posts. Examples like this are why we need many guardrails before we release AI into societies, corporations, and classrooms.

It is crucial for all of us to understand that within a few square miles of Silicon Valley, decisions and algorithms are being made that affect how people all around the world are treated, how we get our news, how we spend our time, how we are compensated for our time, how affordable basic goods are, and how we form opinions. Many of these decisions are made by a small handful of people who do not look like the people their decisions affect.

I give examples to students that algorithms may have already influenced who our real-world neighbors are. Whether through billboards, radio advertisements, hand-delivered flyers at malls, or, especially, social media and internet advertising, the developers of our apartments and neighborhoods have most likely followed a formula, created by a consulting firm, for whom to target in their advertising campaigns. I like this example because students see the fuzzy line between using an advertising budget effectively and accidentally excluding some groups of people from new neighborhoods by following social media advertising algorithms.
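As a discussion aid, the toy example below shows how a simple “ideal buyer” targeting rule can quietly leave whole groups out of a campaign. The audience profiles and the rule itself are invented for illustration and do not come from any real campaign.

```python
# A toy ad-targeting filter, illustrating how a budget-driven rule can
# quietly exclude whole groups. The audience data and rule are invented.

audience = [
    {"name": "A", "age": 34, "interests": {"golf", "investing"}},
    {"name": "B", "age": 61, "interests": {"gardening"}},
    {"name": "C", "age": 27, "interests": {"investing", "fitness"}},
    {"name": "D", "age": 45, "interests": {"cooking"}},
]

def target_ads(people):
    """Keep only profiles that match the 'ideal buyer' formula."""
    return [p for p in people
            if 25 <= p["age"] <= 50 and "investing" in p["interests"]]

reached = target_ads(audience)
excluded = [p["name"] for p in audience if p not in reached]
print("Reached:", [p["name"] for p in reached])
print("Never shown the ad:", excluded)
```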


There is a significant lack of diversity in the technology industry. One issue that has recently come to light is that some facial recognition programs were not trained on enough samples of minority faces to accurately identify some users, for example when verifying their identity before a standardized test. Some minorities are also misidentified in the street and are questioned or arrested simply because the creators of the facial recognition software did not test it with enough people of color. I ask students to imagine studying for months and waking up early to take a college entrance exam, only to find out they cannot take it because the software does not recognize them.

How do you perceive the future of AI? 

I want to believe that the future of AI is bright. However, we need to pressure our elected officials to actively pursue the knowledge they need to make wise decisions. I do have concerns that large corporations and lobbyists will be able to sway governments in directions that are very efficient for managing employees and collecting data on customers but may not be best for society. There are definitely possibilities for AI to help humans be more efficient, possibly more connected, and better able to keep a healthy work-life balance.

Is there anything in particular you think will be highly beneficial to future generations?

In a perfect world, I love the idea of digital assistants learning enough about us to personalize content to our learning style, interests, and current energy level, as well as making connections between what we are learning this hour and something we learned (in school or for fun) in the past. Somewhat related, an optional sidebar menu could pop up when a character is applying something that you learned about last week in school or in a book you read for fun.

Is AI incorporated into RAS classrooms?

Each of my classes has some exposure to AI. Last year, my middle school coding class explored how to create smarter elevators by collecting data on the schedules and lifestyles of residents. Secondary students learn how large companies use their data to feed algorithms (and future AI) that determine things like how people get their news, which advertisements they see, and what shows up on social media. We also discuss which groups of society are being left out of important conversations that affect all of our collective futures.
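In the spirit of that elevator exercise, here is a minimal sketch of how a “smarter” elevator might use historical call data to choose where to wait when idle. The building data and the rule are invented for illustration and are not the class’s actual solution.

```python
# Park the idle elevator near the floor with the most expected calls
# for the current hour. The schedule data below is invented.

from collections import Counter

# (hour_of_day, floor_where_residents_usually_call_from)
historical_calls = [
    (8, 5), (8, 5), (8, 3), (8, 5),   # morning rush from upper floors
    (18, 1), (18, 1), (18, 2),        # evening return trips from the lobby
]

def best_idle_floor(hour):
    """Return the floor with the most recorded calls at this hour."""
    calls_this_hour = [floor for h, floor in historical_calls if h == hour]
    if not calls_this_hour:
        return 1  # default: wait at the lobby
    return Counter(calls_this_hour).most_common(1)[0][0]

print("At 8:00, park the elevator on floor", best_idle_floor(8))
print("At 18:00, park the elevator on floor", best_idle_floor(18))
```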

Our Applied Digital Skills course has a unit on machine learning where students help ‘machines’ learn which pictures are of humans and which are of objects, paintings of people, sculptures of people, and so on. That unit also gives students imaginary driving start and stop points over the course of a month. Students try to ‘think’ like AI and make guesses about the lifestyles and working hours of anonymous drivers.
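To give a flavor of that ‘think like AI’ exercise, the short sketch below guesses a driver’s likely commute hours from an anonymous trip log. The trips and the heuristic are invented for illustration.

```python
# Guess a driver's likely commute hours from anonymous trip start times.
# The trip log and the simple heuristic are invented for illustration.

anonymous_trips = [
    {"start_hour": 7, "end_hour": 8},    # likely morning commute
    {"start_hour": 17, "end_hour": 18},  # likely evening commute
    {"start_hour": 7, "end_hour": 8},
    {"start_hour": 18, "end_hour": 19},
    {"start_hour": 12, "end_hour": 13},  # occasional midday errand
]

def guess_commute_hours(trips):
    """Count departures per hour and flag the two busiest as commute times."""
    departures = {}
    for trip in trips:
        hour = trip["start_hour"]
        departures[hour] = departures.get(hour, 0) + 1
    busiest = sorted(departures, key=departures.get, reverse=True)[:2]
    return sorted(busiest)

print("Guessed commute hours:", guess_commute_hours(anonymous_trips))
```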

I have heard students talk about algorithms – mostly in the math context – in elementary classrooms. This is important groundwork for talking about AI in the future.

Conclusion 

Considering the many ways AI operates and can be used to our advantage, there is good reason to believe that the future ahead of us is bright. And while today’s robots may not seem as cool as the ones in the Star Wars franchise, they are also a far cry from Speedy, the robot in Isaac Asimov’s Runaround. At Raffles American School, we firmly believe in embracing the rise of technology and AI in the classroom. As Mr. Morrison explained, our students are exposed to lessons about the advancements in technology, the ethical issues they raise, and how they can help shape these technologies in the future.

Written by Jumana S Raggam 
