Artificial intelligence (AI) may have seemed like something out of a science-fiction novel only a few years ago, but today a few words typed into a website or program can generate detailed images, seemingly well-researched responses to questions and even entire novels. Generative AI burst onto the scene in 2022 with the launch of ChatGPT and has since snowballed, spreading across platforms – from social media to search engines to software.
Easy access to computer-generated words, images and ideas immediately raised concerns about how people would choose to use them (historians like Dr. Aidan Forth say that humans haven’t always made the best choices with new tech – more on that later), and the speed of that growth has made controls and restrictions difficult to implement. But the reality is that generative AI, along with other tech like augmented reality and virtual reality, can be a great tool for teaching and research – and today’s students hoping to be competitive in certain fields and sectors will need to master tech-related competencies.
Across faculties, MacEwan experts are adapting technologies to help their students prepare for future careers, as well as using them in their own research.
AI is new to us, but it isn’t a recent invention
Dr. Calin Anton has been teaching his computer science students about AI for well over a decade, but his Intro to AI course has changed significantly in a relatively short time. “In the latest version of the course, I’m teaching things that weren’t invented when I started it.”
AI itself, he says, isn’t new. The first public mentions of the concept came from a lecture by Alan Turing in 1947, and the field of research officially began in 1956 at Dartmouth College. Even in the case of ChatGPT, development began back in 2018, well before the general public was aware of it. Dr. Anton describes the history of AI as a series of predictable waves of interest, a pattern captured by the Gartner hype cycle.
“Basically, researchers work on technology like AI, something happens that catches people's attention and then everybody expects big things from it,” Dr. Anton explains. “The hype grows, and when the technology doesn’t deliver, interest drops abysmally. Then development gradually goes on and interest reaches normal levels of hype after that.”
ChatGPT and other generative AI technologies represent the fourth Gartner hype cycle of AI, says the associate professor. The waxing and waning interest he’s seen in the past makes Dr. Anton believe that these technologies will plateau as well.
“ChatGPT is a very good tool, but it has limitations,” he explains. “It’s a pre-trained product, so it’s only as good as the data it has been given.”
But that data does have its uses, especially in the classroom, says Dr. Anton. He encourages his computer science students to use AI, as long as they are upfront about how they’re using it, which prompts they’re using and the outputs they get. Looking critically at the outputs to ensure they are correct also requires students to find the answers on their own.
“The only way you can use the technology properly is if you know the answer,” says Dr. Anton. “You cannot trust it. You have to verify everything.”
He hopes the experience helps students understand not only how to use the tech, but also that it isn’t a replacement for learning and work – that generative AI can produce an answer, but cannot rationalize it. “Generative AI actually forces people to do more critical thinking, and makes that critical thinking even more important.”
Three different realities and how they impact learning beyond the classroom
Dr. Farzan Baradaran Rahimi’s research into digital interfaces like augmented reality (AR), virtual reality (VR) and mixed reality (MR) applies to learning that happens outside a formal educational context.
“I consider informal educational contexts – museums, libraries and science centres – as very important places to learn,” says the assistant professor in the Department of Design and Tier 2 Canada Research Chair in Immersive Learning.
Dr. Baradaran Rahimi’s design students – along with peers in computer science, communication studies and commerce – were able to experience that focus firsthand in a partnership with the Telus World of Science Edmonton. Working in teams, they created immersive learning experiences, including one that used smart glasses, AI and AR to expand guests’ learning. As they walked through exhibitions, guests communicated with AI to learn about the science centre and followed AR visual cues and guides right in their glasses.
“It can help visitors to personalize their museum experience, to take agency over what they wanted to learn and spend their time with the technology in multiple ways,” explains Dr. Baradaran Rahimi.
This isn’t his first time working with museums to help visitors immerse themselves in what they’re seeing. For an exhibition about the city of Los Angeles in the 1940s, he worked with organizers to provide two distinct experiences: one a conventional gallery display of photos and video clips, and the other a virtual reality environment that let visitors wander the city’s streets as they once were.
In studying the experiences, Dr. Baradaran Rahimi found that the immersive learning VR environment – seeing recognizable landmarks, fashion trends and even traffic flows – significantly increased visitors’ learning and enjoyment.
“Some of the participants who had lived in Los Angeles told me that in the virtual reality environment, they could go to the specific area that they lived in and recall the memories that they had from the city.” These learnings, he says, can be translated to the classroom as well.
Generating a work of art – artificially?
“Artists have always used the technologies of their time,” says Erandy Vergara-Vargas, noting that artists began using computer-generated images and editing in their work long before generative AI was so readily available. Her research delves into exactly how artists use the tools and tech available to them.
“It's an interesting time to see this shift that was enabled by technologies, but also by the available data,” says the assistant professor of studio arts.
In 2020, she curated an exhibition in Montreal called Through Secrets: The Art of Creating Spaces Between the Lines, which took a critical look at personal data and desires. The works presented encouraged viewers to consider the uncertainty, precarity and lack of precision in their interpersonal search histories and emotions.
While Vergara-Vargas sees the benefits of using tech for things like synthesizing information and handling administrative tasks, she also sees limits to its application in art. AI outputs depend on the data available to them, which can carry biases, so artists need to think critically about what data they’re using.
They also need to have a clear reason for using tech like AI, she says. “I will ask of any work of art: What's the point? Why does it matter that it is made in this way? Why does it matter that they are using this technology?”
Emerging tech is old news for nursing students
Simulations have been part of medical training for a quarter century, says Dr. Jill Vihos, allowing students to understand clinical environments before they’re actually in them. She’s using VR simulations here at MacEwan to research their efficacy in preparing nurses for their in-person work with patients.
“VR really gets to that metacognitive way of thinking,” explains the assistant professor and associate dean in the Faculty of Nursing. “It helps provide insight into how they think and how they're making decisions in the moment.”
When MacEwan’s nursing students put on a VR headset, they find themselves in a patient’s hospital room. A fellow student controls the scene and chooses the patient’s responses, so each attempt is subtly unique. In the virtual scene, participants can move around, using the equipment and monitors they would find in a real hospital room to check vital signs, take blood pressure readings and complete other assessments and interventions.
“It isn't a replacement for experiences related to actual tactile feel,” says Dr. Vihos. “It is another tool to integrate among many tools, whether it's hands-on practice with task trainers or working with mannequins in a multi-person learning experience.”
In those other types of exercises, though, students can feel pressure from others watching and critiquing them. Dr. Vihos says students self-reported that wearing the VR goggles and being unable to see their peers in the room made them feel less stressed and helped them focus better on what they were doing. It’s also potentially a way of drawing in different types of learners, offering a fun, gamified introduction to the day-to-day work of nurses.
But the big win, she says, is in the way that VR helps students to develop soft skills like communication with their patients.
“In the context of care, you have to develop relationships,” Dr. Vihos explains. “Developing communication competencies was significant. We were surprised by how much students really valued that in their education. Sometimes, experiences like this are the first time students see that everybody can learn to be better communicators.”
Virtual placements shape social work student skill sets
In the School of Social Work, Kealey Dube is partnering with Alberta Health Services to design a simulated field placement for her students. The virtual placement is in its first trial run this term and will help to prepare students for their in-person placements.
In their virtual placements, students are presented with vignettes of typical day-to-day scenarios. Their responses to actors in the videos are recorded, and instructors then critique their communication skills to help them improve.
“We're helping to not only build capacity in the sector in terms of increased field placements, we’re also really helping students to receive targeted feedback around skills that we know are important,” says the assistant professor.
And it’s not the only new tech in the School of Social Work this year. In September, Dube launched an AI reading bot with the help of Dr. Galicia Blackman, an educational developer in the university’s Centre for Teaching and Learning. Students in a first-year social work philosophy and ethics course used the bot to help them learn the Social Work Code of Ethics.
“They had to summarize their understanding of the code using a guided reading app that would ask them questions to help them further their depth of understanding,” Dube explains. “If they didn’t know the answer, or were unsure, students could type that into the app, and it would break down the question further.”
Analysis of the outcomes of those projects is still underway, but anecdotally, students report enjoying the chatbot.
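As a purely illustrative sketch – not MacEwan’s actual tool – the question-and-breakdown loop Dube describes might look something like the following, with a placeholder ask_model function standing in for whatever language-model call the real app uses:

```python
# Hypothetical sketch of a guided-reading bot like the one described above:
# it asks comprehension questions about a passage, and if the student is
# unsure, it breaks the question into smaller parts before giving feedback.
# ask_model() is a stand-in for a real language-model API call.

def ask_model(prompt: str) -> str:
    """Placeholder for a chat-model call; returns a canned reply in this demo."""
    return f"[model reply to: {prompt[:60]}...]"


def guided_reading(passage: str, num_questions: int = 3) -> None:
    for i in range(num_questions):
        question = ask_model(
            f"Ask comprehension question #{i + 1} about this passage:\n{passage}"
        )
        print(question)
        answer = input("Your answer (or type 'unsure'): ").strip()
        if answer.lower() == "unsure":
            # Break the question down further, as students could in the app.
            print(ask_model(f"Break this question into simpler sub-questions: {question}"))
            answer = input("Try again: ").strip()
        print(ask_model(f"Give brief feedback on this answer: {answer!r} to {question!r}"))


if __name__ == "__main__":
    guided_reading("Social workers respect the inherent dignity and worth of persons.")
```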
What history has to say – not all tech growth is positive
Not everyone is optimistic about the speed at which generative AI is developing and becoming mainstream. Dr. Aidan Forth – who is writing a book about the history of technology in the 19th century – says we can learn a lot from looking back in time, and warns against becoming too dependent on AI for the very skill sets people should be developing themselves.
The associate history professor says that while he sometimes uses ChatGPT to help him illustrate a point in class, he’s also wary of its outputs. In his own research, he tried to use the AI bot to find a quote from a 19th-century politician to use as evidence. ChatGPT generated the correct name, date and location associated with the politician, but the quote itself was fabricated – Dr. Forth was unable to find it anywhere, and upon further investigation, the bot admitted it had made it up.
Historically, periods of great technological change have created problems no one could yet imagine, and the same may be true of generative AI. Just look to the invention of the printing press as one example, says Dr. Forth.
He notes that one of the most popular titles after the Bible during the early days of the printing press in the 1500s was a conspiracy book called Hammer of the Witches. While the printing press drove higher literacy rates and increased education, it also resulted in the widespread availability of disinformation that led to the European witch hunts. Dr. Forth worries that, if we’re not careful, an equally devastating explosion of disinformation may be on the horizon.
“Change – when it happens gradually, when it evolves, when it responds to human needs and desires and experiences – can be very positive,” he says. “But there are times when change is traumatic. I fear that we're entering a period of traumatic change. Democracy depends on human beings making informed decisions,” Dr. Forth adds. “What happens when social media distorts our ability to know what is true, and when AI algorithms make our decisions for us?”