Research that Matters Transcript: Episode 5, Societies in Drastic Change


When past generations talk about how different the world looked ‘in their time,’ it’s hard to imagine what life was like for them. This episode of Research that Matters features researchers pushing the boundaries of Artificial Intelligence (AI) to solve some of society’s biggest problems.

Research that Matters, Episode 5:  Societies in Drastic Change

Research that Matters is a 9-part podcast series featuring researchers from Torrens University Australia who are working towards solving complex global problems and propelling innovation. For more information, and to access all other episodes in the series, visit the Torrens University website.

In this episode, our researchers talk about pushing the boundaries of AI further to solve some of our biggest problems, like COVID-19, supply chain efficiencies, and accounting audits.

Featuring:

Host:              Clement Paligaru (in bold)

Guests:          Associate Professor Ali Mirjalili (AM)
                       Adjunct Professor Heinz Herman (HH)

Full Transcript

If CSIRO scientist John O’Sullivan and his research colleagues in radio astronomy hadn’t been searching for the faint echoes of black holes in the 1970s, they might never have gone on to develop Wi-Fi technology.

Picture a world without research.  It would be a world where Apollo astronauts couldn’t secure their devices, as they did thanks to the creation of Velcro; a world where smartphone cameras didn’t exist and we couldn’t snap precious family moments; and a world where forensic investigations were never revolutionised by the discovery of DNA fingerprinting.  Research can expand the view of our place in the universe.

AM:  Research matters because, to be able to grow, to be able to solve the problems that we don't know how to solve yet, at least efficiently, we need to tap into the outside of the sphere, which is dark.  If you think about a circle, or let's say a three-dimensional sphere, we live in that sphere now, right?  What is inside the sphere, we know.  What is outside the sphere, we don't know.  We need people to push the boundaries to make the radius of this circle or sphere bigger and bigger.

Research lays a solid ground for education.

HH:  Research matters in many ways as a university, because research informs our teaching.  Teaching is informed by research through constantly refreshing the curriculum and keeping the students up to date with the foremost thinking in the sector.  And that's why most of our lecturers, including researchers, have a very strong industry background, like myself.

Research is the vehicle for innovation.

AM:  I would say fundamental research overall might need more investment.  I go back to that analogy of an engine and a car.  Someone like me, someone in my team, is good at designing engines, which is of course the main driver, but there are also researchers who develop the car, the different models, and then put the engine in and use it in different applications: delivering a product, commuting to work, et cetera.  If we overlook the fundamental research, we are designing a car without an engine.

This is Research that Matters.  I'm Clement Paligaru.  This series explores the work of researchers from Torrens University Australia.  We’ll take you behind the curtain to hear what drives their passion and the impact their work has on all of us.  In this episode, we'll look at Societies in Drastic Change.

AM:  I believe that I found research as a way to be creative and feed my creative personality.  Because in research, in the first step you have to read and read, to learn what other people are doing.  Once you put in enough effort and you understand the state of the art, that is where you can start to make some contribution to push the boundaries of knowledge in that area.

Hi, my name is Associate Professor Ali Mirjalili, and I am the Director of the Centre for Artificial Intelligence Research and Optimisation at Torrens University Australia. 

I found that process, the process of discovering new things, something that nobody has figured out or discovered before, very rewarding.  And I believe that that led me to publish a lot and survive in academia, I would say, and research, because the research space is very difficult.  If you put in effort now, you will get rewarded maybe two years down the road.  Why?  Because it takes time to, of course, understand the literature.  It takes time to collect data.  It takes time to write, send it for review and get it published.  You need to endure this and be very dedicated, to wait for that long period of time.

At the Centre for Artificial Intelligence Research and Optimisation, there are three research clusters.  One is machine learning, one is robotics, and one is optimisation.  Professor Mirjalili’s research looks at the specific area of computational intelligence, which is itself a subset of artificial intelligence.  It's like a world within a world within a world.  His focus is evolutionary and nature-inspired algorithms.

AM:  What is natural intelligence?  Of course, it's a natural organism that produces some sort of intelligent behaviour, or behaves intelligently.  For example, it can be a worm navigating the soil to find food sources and avoid predators.  So, if you think about it, those natural intelligences can be simulated or mimicked in a machine.  When we get a machine to behave intelligently, that is what we call artificial intelligence.  It's intelligence that is artificially made or mimicked in a computer.  And of course, it covers a wide range of areas.

In this area we’re inspired by nature.  We look into nature and see how nature solves problems, what sort of principles it follows.  For example, evolution, right?  The way creatures evolve over time to cope with environmental challenges.  For example, snakes don’t have legs, but they can camouflage, right, to survive, to increase their chance of survival.  Birds developed feathers to be able to fly, avoid predators and better find food.  So we're inspired by those.  We mimic the same principles in a computer and solve some of the challenges that we face as humans.  Because we believe that the best problem solver on this planet is mother nature.  It has been solving problems for billions of years, so it makes sense, and it's wise, to take inspiration from nature and develop algorithms, develop solutions, to solve our problems.

And mother nature is bursting with inspiration.  But what does fish swimming in a coordinated way to protect themselves from sharks have to do with algorithms, for instance?  And how exactly does mimicking mother nature help solve our growing list of challenges?  A very good example is the way ants find food sources.

AM:  Whenever you leave some food on your kitchen counter, or anywhere in your house, ants will eventually find it, and it has been proved that they can find the shortest path between their nest and a food source.  How do they do that?  Once they’ve found a food source, they will deposit pheromone all the way back to the nest.  And over time, more ants gravitate towards the food source, and at some point they will establish a path between the nest and the food source.

Imagine a truck is an ant; for the truck, we want to find the best path, the optimal path, between warehouses and customers.  Let's say we've got a logistics company in Australia.  If you think about only 20 cities, finding the best path to connect them and minimise, let's say, the cost involved in delivering parcels to those cities is one of the most challenging problems that we face.

We use algorithms from my research to find an optimal path very quickly, I should say, in a reasonable time with reasonable resources.  So we will follow the same principles and find the shortest path for the truck to save fuel, or achieve any other objective that the company wants us to.  This is just a quick example of how we leverage natural principles to solve some of our problems.  In the first stage we try to, of course, use some sort of free datasets or some sort of synthetic case studies, which is where we create, let's say, random locations on a map and try to challenge the algorithm.  After we do that, of course, we develop the proof of concept and the minimum viable product.  Then we take it to an industry partner and say, this is what we can offer.  That's where we can get their case study, test it and see how it works.  And of course, through that interaction, we are able to test the algorithm in practice, in a real-world case study and environment.
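
To make the ant analogy concrete, here is a minimal, illustrative sketch of ant colony optimisation on exactly the kind of synthetic case study described above: random city locations on a map, and a search for a short delivery tour. It is a toy sketch under invented parameters, not the centre's actual algorithm.

```python
# A minimal ant-colony sketch of the idea described above: synthetic random
# city locations, pheromone trails, and iteratively shorter delivery tours.
import numpy as np

rng = np.random.default_rng(seed=1)
n_cities = 20
coords = rng.uniform(0, 100, size=(n_cities, 2))   # random locations on a map
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
np.fill_diagonal(dist, np.inf)                     # forbid staying in place

pheromone = np.ones((n_cities, n_cities))
best_tour, best_len = None, np.inf

for iteration in range(200):
    for ant in range(25):
        tour = [int(rng.integers(n_cities))]
        while len(tour) < n_cities:
            i = tour[-1]
            # Desirability = pheromone strength x closeness (1/distance).
            weights = pheromone[i] / dist[i]
            weights[tour] = 0.0                    # never revisit a city
            tour.append(int(rng.choice(n_cities, p=weights / weights.sum())))
        length = sum(dist[tour[k], tour[(k + 1) % n_cities]]
                     for k in range(n_cities))
        if length < best_len:
            best_tour, best_len = tour, length
        # Deposit pheromone along the tour; shorter tours deposit more.
        for k in range(n_cities):
            a, b = tour[k], tour[(k + 1) % n_cities]
            pheromone[a, b] += 1.0 / length
            pheromone[b, a] += 1.0 / length
    pheromone *= 0.9                               # pheromone evaporates

print(f"Best tour length found: {best_len:.1f}")
```

Over the iterations, edges shared by many short tours accumulate pheromone and get chosen more often, the same positive feedback the ants use between nest and food source.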

We've been able to help a number of companies to solve some of those logistics problems, minimising the delay in delivering a product and minimising the cost and fuel consumption, which is the main cost in those applications.  They get the best, state-of-the-art solutions to their problems.  I've been involved, of course, with industry partners not just in Australia but in other countries, mainly in India; we have been able to work with quite a number of industry partners in India.  India, of course, has always been one of the main research partners, and in the space of computational intelligence they are leading, and I've been able to work with top AI centres and top universities in India.  I've also been able to work with a lot of Arab countries; Jordan has been one of the main ones, and Egypt.  They've got research centres there in AI.

So whether on home ground or overseas, whenever there's a need to improve a process, that's where Professor Mirjalili’s optimisation algorithms come in.

AM:  We are producing data in an exponential manner every single day.  To be able to analyse that data and make logical, reasonable decisions based on it, we need to use some sort of new technology.  Machine learning and data analytics are areas that have been quite popular because they help companies and governments to analyse the data, understand the data, and make decisions based on the data.  I develop a lot of optimisation algorithms; like machine learning techniques, they need some sort of learning process, right?  These algorithms are problem solvers.  They are either search techniques or optimisation algorithms.  They help us to find an optimal solution for a given problem.

We develop the engine, and the engine is what's working behind the scenes; of course, on top of it sit a lot of interfaces, the techniques that people in companies and governments use to analyse data and make better decisions.  It's like working on, or developing, a mathematical equation that is used to design an aircraft wing.  I am still contributing, but I'm not the engineer who designed the wing; I'm contributing at the foundational level.  If you think about a wind farm, the configuration of towers in a wind farm can be optimised to maximise the yield.  In terms of the blades of each wind turbine, they have a shape that can be optimised to maximise their efficiency or reduce the operational cost.
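
The engine-and-car distinction can be seen in code. Below is a toy evolutionary optimiser in the spirit of the nature-inspired algorithms discussed in this episode: the loop only ever sees an objective function, so the same "engine" could in principle score a wing profile or a wind-farm layout if that function were swapped in. The placeholder objective and every parameter here are invented for illustration.

```python
# A toy evolutionary "engine": it only sees an objective function, so the
# same loop could tune a wing shape or wind-farm layout if that function
# were swapped in. The placeholder objective below is invented.
import numpy as np

rng = np.random.default_rng(seed=7)

def placeholder_yield(x):
    # Stand-in objective with a known maximum at x = (1, 2, 3).
    return -np.sum((x - np.array([1.0, 2.0, 3.0])) ** 2)

mu, lam, dims, sigma = 10, 40, 3, 0.5
parents = rng.normal(0, 3, size=(mu, dims))        # random initial designs

for generation in range(100):
    # Each child is a mutated copy of a randomly chosen parent.
    children = parents[rng.integers(mu, size=lam)]
    children += rng.normal(0, sigma, size=(lam, dims))
    pool = np.vstack([parents, children])
    scores = np.array([placeholder_yield(x) for x in pool])
    parents = pool[np.argsort(scores)[-mu:]]       # keep the mu fittest
    sigma *= 0.98                                  # slowly narrow the search

print("Best design found:", np.round(parents[-1], 3))
```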

If you consider the machine learning area, we’ve got something like an artificial neural network, which is where we feed it with data, it learns about the data, the patterns in the data, and helps us to do some prediction.  For example, we can feed a neural network with the amount of rainfall over the last decade; after the machine learns, after the neural network learns the data, it can predict for us how much rainfall we are going to have, let's say, next year at this time.  The algorithm that learns the data in the neural network is the one that I can develop, that I can improve, that I can have an impact on.
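
Here is one way the rainfall example could look in practice, using scikit-learn's small neural network regressor on invented monthly figures. The data, window size and model are all assumptions for illustration.

```python
# Minimal sketch of the rainfall example: feed a small neural network a
# decade of (invented) monthly rainfall and predict the next month.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(seed=3)
months = np.arange(120)                            # ten years of months
rainfall = 80 + 40 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 10, 120)

# Supervised framing: the previous 12 months predict the next one.
window = 12
X = np.array([rainfall[i:i + window] for i in range(len(rainfall) - window)])
y = rainfall[window:]

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)
model.fit(X, y)

next_month = model.predict(rainfall[-window:].reshape(1, -1))
print(f"Predicted rainfall next month: {next_month[0]:.1f} mm")
```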

Unsurprisingly, a lot of companies and governments have started to leverage AI as a sort of engine of productivity and economic growth.  And when it comes to big ticket issues like tackling a pandemic, AI has also taken centre stage.  Professor Mirjalili’s research has played a significant role in helping to detect the highly infectious coronavirus.

AM: Most of the governments that have been successful in controlling and restricting the spread leveraged AI, whether in terms of image processing, contact tracing, et cetera, or in terms of data-driven decision making.  A lot of governments realised in the COVID-19 pandemic that they can quickly adopt a new technology to manage a disaster.  I believe that this is where AI comes into play.  Machine learning techniques and data analytics techniques are now widely used by governments.  One of the areas that we've been working on, and I have quite a number of publications with my collaborators internally and also externally, mainly in India, has been to detect COVID-19 by looking at a chest x-ray.  What happens is that we train a machine learning technique by showing it a wide range of x-rays: some with COVID, some with pneumonia and some healthy lungs.  After that, we develop an app, and using that app you can take a photo of an x-ray and it tells you, with a certain accuracy, whether that x-ray indicates healthy lungs, pneumonia or COVID-19.  This has been a project that I've been involved in, and definitely we managed to contribute to the global attempt, the global combat, against this nasty disease.
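
Schematically, a three-class chest x-ray classifier of the kind described could be set up as below, here with Keras. The architecture, input size and label names are assumptions for illustration, not the published model.

```python
# Schematic three-class chest x-ray classifier: healthy / pneumonia / COVID-19.
# Architecture, input size and labels are illustrative assumptions only.
import tensorflow as tf
from tensorflow.keras import layers

CLASSES = ["healthy", "pneumonia", "covid19"]      # assumed label names

model = tf.keras.Sequential([
    layers.Input(shape=(224, 224, 1)),             # grayscale x-ray images
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(len(CLASSES), activation="softmax"),  # one probability per class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training would use a labelled x-ray dataset, e.g.:
#   model.fit(train_images, train_labels, epochs=10, validation_split=0.2)
# An app would then report the most likely class with its confidence:
#   probs = model.predict(photo_of_xray)           # shape (1, 3)
#   print(CLASSES[probs.argmax()], probs.max())
```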

Of course, there are many different ways research can have an impact and it's safe to say all researchers want their work to influence society.  But how does Professor Mirjalili judge the success of his own research?

AM:  The joy of research is to share it and disseminate it.  One of the reasons why I've been able to expose my research to a wide range of communities is the fact that I share every single algorithm that I develop.  I share every single piece of software that I develop publicly, free, online, whether through my own personal website, through different communities where you can share, through one-to-one interactions at conferences or seminars, or even by email, so you get that exposure.  And I would say I get a lot of exposure through the video content and educational content that I also develop around my research.  I've got a YouTube channel with, I think, 4,000 subscribers now.  I've got a lot of online courses with, I think, 12,000 students now.  So that allows me not only to broaden the exposure of my research, but also to educate people.  To be honest, I was just quite passionate to share what I'm doing and help some people.  A lot of people use my techniques to publish, to grow, to solve problems, to find industry partners and to help their governments solve some of the problems in their societies.

In the time that Professor Mirjalili has been working as a researcher, he's picked up a number of prestigious awards, prizes and honours, which has made him somewhat of an international influencer.  But making the global list of highly cited researchers is what's been a key milestone for him, a goal that he had set his sights on from the start of his career.

AM:  The Web of Science, which is one of the biggest publication and indexing venues in science, will sit down and choose the leading researchers in any area.  And I believe that when I started my PhD, I could see myself in that list: to prove to myself that everything is possible, and that you can achieve the highest level, you can achieve the best, if you dedicate yourself, if you enjoy what you're doing.  And also, every time that I get invited to become part of the editorial board of a journal, that is a prestigious position and indicates that I am leading the research in that space.  It's an international recognition of your work, and it helps you to build and expand your international collaborative network.  It helps you to get research grants from different governments, whether domestically or internationally, and it helps you to, I would say, build your brand in research.  And when people know that you are leading an area and you are the one who always produces work at the cutting edge, they can better trust you, right?  They want to work with you because they believe in the quality of your work, in the outcome of your project.  It's one of the main reasons why we've been able to collaborate with a wide range of researchers and practitioners across the globe.

While AI is a key gift, it's also one of the most disruptive technologies of this century, and there's a sense of fear and foreboding about AI.  For a lot of people, AI is all about robotics; killer robots, in fact.

AM:  The public perception of AI is, I would say, killer robots, right?  Killer robots that are going to take over the world.  But robotics is one subfield of AI; there are so many other subfields that we could talk about for days.  For me, as an AI researcher working on current technology in that space, the future is exciting, and intimidating as well.  We can leverage AI to solve some of the problems that we are struggling with at the moment, but AI can also be far more devastating than a nuclear weapon.  Because if you think about it, if we reach a point where human intelligence is completely outperformed, that can be used against us.  So, we really need regulation around it.  We need more legislation and policies around how to use these systems.  In my research, when I develop an algorithm and we share it with people, as part of the licence we limit some of the applications.  For example, military applications.

Regulation of AI is closely tied to ethics.  Ethics in AI is one focus area for Adjunct Professor Heinz Herman at the Australian Graduate School of Leadership.

HH:  One of the things we have done as part of our research on ethics in artificial intelligence is we've Googled ourselves to see what we come up with, because that's how many of the misuses or abuses of artificial intelligence have actually come to light, including deepfakes, where different faces have been put onto people.

I am the co-lead of our research cluster that looks at banking, financial services, insurance and accounting.  There have also been spectacular failures of artificial intelligence in recent years, in many areas.  The most widely written about ones probably relate to facial recognition algorithms that led to wrongful arrests or are being used for surveillance of ethnic minorities in some countries.  If there are certain groups that are being discriminated against, artificial intelligence hasn't got any conscience; it just learns.  And a lot of the research that my university, including myself, is conducting has to do with preventing this from happening.

There are projects, for example, on crowdsourcing public opinions on whether an algorithm is fair.  You put an algorithm out to the public and see what results it produces for individual people.  People can give feedback, and that can then be corralled and aggregated to find out whether there is a bias or fairness issue with a particular algorithm.
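
As a sketch of how such crowdsourced feedback might be aggregated, the snippet below compares favourable-outcome rates across self-reported groups, a demographic-parity style check. The records and the alert threshold are invented.

```python
# Aggregating crowdsourced feedback on an algorithm to surface possible bias:
# compare favourable-outcome rates across groups. Records are invented.
from collections import defaultdict

# Each record: (self-reported group, did the algorithm treat them favourably?)
feedback = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

totals = defaultdict(lambda: [0, 0])               # group -> [favourable, all]
for group, favourable in feedback:
    totals[group][0] += int(favourable)
    totals[group][1] += 1

rates = {g: fav / n for g, (fav, n) in totals.items()}
gap = max(rates.values()) - min(rates.values())
print(rates)                                       # per-group favourable rates
if gap > 0.2:                                      # arbitrary alert threshold
    print(f"Possible fairness issue: outcome-rate gap of {gap:.0%}")
```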

Our research is seeking to address all of the ethical issues that have been identified.  We're going into the area of unknown unknowns.  In ethics, it's one of those things where there are unknown unknowns, things we don't even know that we're going to be faced with.  Think of the Terminator movies, or my all-time favourite, Stanley Kubrick’s 2001: A Space Odyssey, where HAL, the computer, took over and refused to be switched off.  The dominance of machines over humans is one key philosophical question, as well as an increasingly practical one; it's called superintelligence.  As machines get increasingly intelligent, there comes a point of superintelligence, which is where machines have become more capable and intelligent than humans.  Will machines actually have a conscience of their own?  And once machines become such moral agents, do they deserve human rights?

It's the relationship between human and machine that is increasingly being illuminated from that philosophical aspect.  And the question is, what can we do to make sure humankind remains autonomous, so that we don't have machines dominating us and becoming the superior species in the hierarchy?  And these are not just philosophical questions.  Experts predict a 30% probability that this will happen within 30 years, and a 60% probability that it will happen within 60 years, so it's not far away.

Hang on, hang on.  Let's just pause.  So when Arnold Schwarzenegger plays the role of a cyborg assassin in Terminator 2: Judgment Day and delivers the famous line, “Come with me if you want to live”, could that actually become a reality?

HH:  It can become a reality if we don't take proactive steps now to prevent that from happening, by guiding research and commercialisation in a responsible way.  This is called the autonomy principle: that humankind will continue to enjoy autonomy over machines.  And autonomy is quite often a misunderstood term.  When you read about it in the media, autonomy quite often refers to the autonomy of machines, like a robot that is autonomous and makes decisions independently of humans.  That is autonomy as a technical term; as an ethical term, it means the exact opposite.  It means the autonomy of humankind over machines.

We are taking a constructivist approach, where we are integrating a very solid definition of what these terms mean, but also looking at how these terms have evolved in the media, and we'd like to bring those two worlds together.  From that, we can start having a discussion and guidelines.  What can we do to actively help practitioners with guidelines, frameworks, checklists, software, even processes?  We are developing artificial intelligence responsibly, including the autonomy principle, and this is with the engagement of the politicians.  So we do this with the pollies, not in isolation.

But this is not possible without fundamental research.  The zeitgeist nowadays is that most countries have developed their own artificial intelligence frameworks.  Certainly, Australia has got one from the Federal Department of Industry, which you can just jump on and read, and you can see the principles that have already been agreed for Australia.  And with the CSIRO, Australia's leading science organisation, we have taken the lead on something that’s called Responsible Innovation.

Artificial intelligence is everywhere.  There’s simply no escaping it.  It’s even present when you’re getting a doctor’s check-up.

HH:  The other day I snapped what's called the monkey muscle in my calf.  With COVID, not enough exercise; did some exercise, snapped my muscle.  I had to get an ultrasound, and there is so much artificial intelligence in interpreting medical imaging.  Aged care: affective computing is an area where artificial intelligence is increasingly being used to help people who are lonely, with companion bots, for example.  Social benefits of AI are quite often combined with robotics.  If you look at a lot of those technologies that come out of Japan, you see them sometimes on the news because they're so cute: little toy pets that are little robots emulating an animal and doing certain things, through to useful bots like those delivering takeaways to homes, which are extensively being trialled in the US.  We have entire cities that are the test ground for self-driving cars, and different levels of artificial intelligence are being used for those.

When we talk about AI today, it's algorithmic AI, also called subsymbolic AI.  That works on the basis of requiring lots and lots of data, and learning from data.  So much data that there is actually a separate body of research that looks at how much carbon dioxide emission artificial intelligence is causing, literally as a contribution to the climate problems that we have, because you need massive supercomputers to chew through all of the data and then learn from it.

So what role can, or does, AI play in jobs and the future of work?  One of Professor Herman's research projects is about a conceptual framework that looks at AI and its impact on the accounting profession specifically.

HH:  We're looking at accounting; for example, audits.  The single most widely deployed AI in accounting is in the audit field, where pieces of AI can go through the books, basically, and spot irregularities, which would (a) take a phenomenal amount of time for a human to perform, and (b) humans may not be as sophisticated in picking up on certain intricacies.  You can then flag to an accountant that this is an anomaly and they should be looking at it, or you can completely automate the decision; this is where autonomy comes in.  One classic example where autonomy has gone awry in Australia is robodebt.  When you're using algorithms autonomously, without human intervention, it can cause a lot of grief and unfairness.
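
As an illustration of the flag-don't-decide pattern described here, the sketch below scores invented ledger amounts with an off-the-shelf anomaly detector and routes the unusual ones to a human accountant.

```python
# AI-assisted auditing sketch: an anomaly detector scores ledger entries and
# unusual ones are flagged for a human accountant rather than auto-decided.
# The transaction figures are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=5)
# Ordinary transactions, plus a few suspicious outliers at the end.
amounts = np.concatenate([rng.normal(500, 80, 500), [5200.0, -950.0, 9999.0]])
X = amounts.reshape(-1, 1)

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = detector.predict(X)                       # -1 marks anomalies

for amount in amounts[labels == -1]:
    # Flag, don't auto-decide: keep the human in the loop.
    print(f"Flag for accountant review: ${amount:,.2f}")
```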

We saw expert systems in the accounting profession first, especially towards the turn of the century.  We're now seeing a massive replacement of these expert systems by algorithmic AI.  Emerging technologies like blockchain are very popular in the research field, in accounting research, but also with practitioners.  If we look at what happened in the mid-1990s, we can learn something from the automation that was deployed then; business process re-engineering was the term that was used.  What we saw was massive job losses, just for the sake of automation and of replacing people with technology.

What we will really see in the future workforce as a result of AI is a polarisation, and that is sometimes called a dumbbell shape.  Highly skilled technical jobs will continue to be in demand and highly paid, and the lower skilled, more service-orientated jobs will also be in demand but badly paid.  But there's this thing in the middle, the handle of your dumbbell: the mid-qualification jobs, the majority of jobs, really.  They are under pressure and being reduced, because they are relatively predictable and most likely to be automated.  Middle managers will be most impacted, because the focus shifts from making predictions based on a manager's experience towards understanding how to integrate predictions that are made by artificial intelligence.  And that means the middle of the dumbbell, middle management, faces significant re-skilling requirements.

To put this research to the test, Professor Herman's research team went door-knocking worldwide to engage some big players in banking, financial services and insurance, also known as BFSI.

HH:  We went around globally and asked board members, CEOs and COOs of larger banks, insurance and financial services companies about artificial intelligence: what the benefits are and what the risks are.  And the benefit across the entire BFSI sector, unanimously, is fraud detection.  For anything from insurance fraud to credit card fraud to cyber fraud, artificial intelligence is the technology to use.  It's a common denominator in the sector, in any organisation.  The problem is to what level humans should be involved.  So it's the autonomy question again, and the recommendation in the sector is to use humans in the loop to make the final decision.  Just like in a military application where you have a drone, and the drone has artificial intelligence, and the drone decides whether a target is civilian or not: should you have a human pushing the fire-the-missile button to make that final decision, or should you leave it autonomously to the drone?
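
In code, the sector's "humans in the loop" recommendation might look like the routing rule below: the model auto-clears or auto-blocks only the clear-cut cases and sends the uncertain band to a person. The thresholds and example scores are invented.

```python
# Sketch of a human-in-the-loop fraud decision: only clear-cut scores are
# decided automatically; the uncertain band goes to a person. Thresholds
# and example scores are invented.
def route_transaction(fraud_score: float) -> str:
    """Route a transaction given a model's fraud score between 0 and 1."""
    if fraud_score < 0.10:
        return "auto-approve"                      # clearly legitimate
    if fraud_score > 0.95:
        return "auto-block"                        # clearly fraudulent
    return "human review"                          # uncertain: a person decides

for score in (0.02, 0.40, 0.97):
    print(f"score={score:.2f} -> {route_transaction(score)}")
```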

The same applies in the BFSI sector.  So that was primary research, where you're really just asking people, and that is quite useful because these are people at board level who ultimately hold legal accountability.  And if you're asking those people whose head is on the block, you know, it could be chopped off by law enforcement organisations, people could go to jail, so this is a fairly reliable source, because people give you the answer most truthfully.

The other way we are leading practice in this is literature reviews, where we use a methodology called science mapping to map the science of artificial intelligence in particular research fields in a quantitative way, so it's a lot of statistics.

When it comes to getting the best outcome in research, Professor Herman says what matters most is true representation, especially when it comes to ethics in AI.  So, are international collaborations the answer to stamping out bias?

HH:  This is where a research innovation system, or ecosystem, is important, because different partners have different connections; especially if you're doing global research, that is quite important.  We believe at our university that currently too much thinking has been shaped by the rich, well-developed Western economies.  We have a vast under-representation of other cultures and countries.  It's interesting that the moral machine experiment was conducted in the context of self-driving cars, where a hypothetical situation was put to people in, was it, 150 countries or so: a self-driving car is driving, and you're caught behind a ute, and the ute drops its load onto the freeway.  The AI in the self-driving car needs to make a decision: swerve to the left, where you have a motorbike rider without a helmet, or go into the right-hand lane, where you have a mother with three children in the car.  Who are you going to kill?

It was very interesting, across this global experiment, how ethical values differ across different nations, and it's given us very unique insights.  When we are looking at partnerships, we would like to partner with those parts of the world that are under-represented in having a voice in the ethics of artificial intelligence.  It's about equitable representation of our population; otherwise, we get bias.  When you make a choice to train artificial intelligence, you have to choose, as a researcher, what data you are going to use to train that algorithm.  The field of artificial intelligence is a very testosterone-driven field, with people who have PhDs largely from Western economies.  We need non-geeks.  Now, we can't expect the people in data science who develop algorithms to be world experts in philosophy.  That's why the secret ingredient of successful AI is participation and cross-functional teams.

We need journalists, philosophers and human rights experts to become part of the design at the very front end of the research and innovation process, not as an afterthought, and that's the issue right now with AI: we've got too many afterthoughts.  We need representation not only of different disciplines at the front end of AI design, but also representation of the public, and the public is not one country.  The public is the world.  And as the moral machine experiment has shown: whose ethics?  If we use world ethics, we’ve got to go a much longer way than we have before with participative integration.

There aren't many technologies like AI and nothing matches its growth rate, which is sitting somewhere around 30% annually.  Keeping up with this pace and having sway alongside the technology heavyweights requires focus.  So what's driving Professor Herman's research team?

HH:  To have an impact amid that sort of growth, to become known and, particularly, a leader when you have giants like Microsoft, Google, Amazon and Facebook being very active in that space, you have to have a particular focus.  Our focus is on specific industry projects that either come to us unsolicited or that we actively look for.  And that involves partnerships with industry through organisations such as the Australian Computer Society and others, running seminars, running educational sessions, and using personal relationships, like the ones I've got from industry, for example.  We are very focused on publishing and conducting research that gets lots of citations, and is therefore acknowledged as making a main contribution.

We have started to be interdisciplinary in our research, and we want to take the world by storm by being transdisciplinary.  We are very much driven by the research questions that either industry funding or a research grant is asking of us.  We've got one piece of research, for example, that we're involved in that is looking at establishing a textile industry in Australia, primarily focused on Afghan women.  So it's very much around social responsibility and enhancing our society.  That's one aspect, and it's been driven very much by a research question that was raised at the policy level.  There are politicians involved at state and federal level; we're at a scoping stage.

We know that artificial intelligence leads to increased revenue, reduced costs, greater efficiencies in organisations and greater differentiation.  Professor Herman reminds us that that's something that happens in all industry sectors, but he also cautions, just like his colleague Professor Mirjalili, that AI has the potential to be a dangerous weapon.

HH:  AI has become a weapon, so we have to start protecting AI from itself.  You’ve got to be really careful with the data.  There's a really good show on Netflix, recommended to anyone; it's called StartUp.  It's about a startup which launches an electronic currency, a blockchain-related startup, and it shows how initial good intent then evolved into financing terrorism and other fairly unethical things.  The unintended consequences of AI are the unknown unknowns.  The only thing we know is that the dark world and cybercrime are thriving equally, if not more than, the benefits that have been provided to society.

Personally, has Professor Herman ever felt the full brunt of AI?  It appears he has, but not in the way you'd expect.

HH:  I was explaining that Alexa actually now has a mode against domestic violence.  So if you talk in an aggressive tone to Alexa, it switches off and it doesn't listen to you for a while.  I was talking to my students about this and I said, “Hey, Google”, and I forgot that I shouldn't have said that.  So in the middle of the lecture, my Google Home starts switching on and waffling, and it wouldn't switch off.  So I had to stand up, and I did not realise: I had my tie on, and my shirt, and my boxer shorts.  That was the biggest laughter in the classroom, seeing me get up in the lecture to shut down the AI.  I've learned to be very careful with AI.

In the next episode of Research that Matters:

Professor Kerry London: Examples of impact can be changing behaviours, changing attitudes, introducing new tools to organisations, being on national standards committees, those sorts of things, those sorts of impressive activities, where you have everlasting and sustainable change in the communities that our research would support.

Research that Matters was produced by Written & Recorded.  This is a Torrens University Australia podcast, and I'm Clement Paligaru.  To hear more, search for Research that Matters on the Torrens University website or wherever you get your podcasts.
