Shambhavi Siddhida

Dating Chatbots, AI Sci-Fi & Unemployment: WAI's Culture & Ethics Chief Answers It All


In our last article, “The Contemporary Fundamental Differences Between AI and Humans,” we established the basic differences between artificial intelligence and human intelligence. However, the topic remains incomplete without delving a little deeper. To gain expert insight into some of the most debated ethical issues around AI, we interviewed Elizabeth M. Adams, Chief of WAI's Culture & Ethics team. Elizabeth is an AI ethics and tech inclusion influencer, advisor, author, researcher, keynote speaker, and scholar-practitioner whose work has revolved around responsible and ethical AI. She is also the Chief AI Ethicist at Paravision, a company that develops critical computer vision technology. In 2020, Elizabeth became a Stanford fellow and worked on the project Civic Tech: Racial Equity, Technology & the City of Minneapolis.


Q1. In the last article, I wrote about the philosophical differences between AI and humans, but there are also technical and other more sophisticated differences between the two. What's your general opinion on the implications of such differences for us?


I want to start by wondering out loud about this question: is AI making us humans physically and mentally lazy? I say that because AI is helping us create efficiencies, but it is also creating a lot of ease in many areas of our lives. Let's talk about Grammarly, an application that helps us write better. I use it in a lot of my studies. You go out to Grammarly, and it reasons based on the massive amount of data the application has combed about how people write. It instantly suggests edits, and I'm not really required to do much except accept or reject recommendations. Prior to Grammarly, speaking from a US perspective, you really had to understand the benefits of writing, and you had to do that writing for yourself.

I understand that it is helping me, but what about beyond it? What about the movies presented to me based on traffic patterns, or any of these systems that get us to think the way that AI is thinking? I'm almost wondering if AI, with its ability to reason and emote without guardrails, is creating a bit of laziness. Because why should I believe that system and what it says about me if I understand that someone had to design and train that model?

I'm also concerned that, without guidance, AI could soon start policing itself, especially when we talk about responsible AI. It is important to keep humans in the loop; however, we also carry bias. Let's say there is an AI system that some organization is using to catch bias in its algorithms, and it comes back with audit results saying everything is fine. If left at that, the organization might believe it doesn't really need to do a more thorough check to make sure its system isn't biased or unethical.


Q2. We live in a society where science fiction movies like Terminator paint the popular image and narrative around AI and robots. A lot of people seem to have apprehensions about a super-intelligent, autonomous, AI-powered computer becoming self-aware and going rogue. Do you think such apprehensions are valid?


Well, this is an excellent question. All we have to do is go back to 2016, when Microsoft's AI chatbot Tay was released. It was an experiment in conversational understanding. In less than a day, people started tweeting at the bot, and all sorts of misogynistic, racist, and discriminatory remarks soon became part of its vocabulary. In essence, Tay became a robot parrot: garbage in, garbage out. This is why I do think it's important to have oversight of accountability, equity, and fairness, and it has to be non-negotiable. It also has to be a tenet of our responsible AI framework, because these things can certainly happen. We need to look into how a particular model is being trained. There are data scientists and others who are training it, and if they are not ethical, they certainly could be training AI to go out and do dangerous things, train itself, and create additional problems out in society.


Q3. When we talk about using automation in the workplace, it has the potential to put a lot of people out of work. How should we look at this situation?


This happens all the time in business: companies decide to go in a different direction, and then they may have to lay off a portion of their workforce. AI is not making that a new phenomenon. However, when we talk about the advancement of innovation, upskilling and reskilling are definitely needed. We need real innovation to figure out what types of roles we will need, to figure out what kinds of AI co-workers will be needed, and to pay attention to trends. Researchers are so important, especially practitioners who can bring their ideas into applied work.

One theory I want to talk about is the long-wave theory of innovation, which describes periods of evolution and self-correction brought on by technological innovation. For instance, the first industrial revolution, from 1785 to 1845, was a 60-year period of innovation; the next wave was 55 years, then 50, and now we are at around 20-25 years of major technological innovation that will impact our global economy and jobs. What's interesting to me is that the longer the cycle, the more time we as a society had to catch up and determine what jobs and skills would be needed to take us to the next stage of innovation. With the cycles closing from 60 years to 20, we have less time to figure that out, and we also have more organizations adopting automation to replace the workforce. So this is kind of scary for me, but it is also something we need to be aware of. I'm hoping that the new generation of researchers and practitioners can provide us with the answers we need based on their studies. But absolutely, businesses need upskilling and reskilling, along with an understanding of how to create an environment where we share responsibility with AI co-workers.


Can you think of jobs that are very secure and do not face a threat from AI?

The only jobs I can think of are those leading teams and systems. If you are the leader of a team, you are the one directing what work the team should do, and someone to decide whether it is going to be a half-AI, half-human workforce would absolutely be needed. We would certainly need people who oversee the process: if we are going to have robots operating on people, you are absolutely going to need a human in the loop to make sure things go well. We need people who are digitally savvy, who understand digital transformation, innovation, and intelligent systems, and who can look across teams and people to find the best way forward and make sure that society isn't harmed in the process.


Q4. There is a lot of debate around using AI-powered machines or chatbots to aid human relationships (dating, geriatric care, friendship, etc.). Even though humans are more emotionally and interpersonally intelligent, why do you think this trend has emerged?


You and I are able to chat because maybe we have an extroverted sense, but there are some people who don't do so well in social settings and still crave companionship. Once AI became the norm in terms of innovation, people quite naturally wanted to explore what it could do. Think of a person who runs an assisted-living facility, or a hospital that uses AI to care for patients. I think it is natural for them to see if AI can be used for good and to help those in need, perhaps to provide human touch and interaction when they cannot have it, or to simulate those human interactions. We have seen this a lot due to COVID.

During the height of the pandemic in 2020, I would have loved it if there had been some sort of AI-enabled, responsible, ethical robot to stimulate the mind and heart when we couldn't be there. I'm looking forward to Amazon's Astro because I'm looking forward to a robot that speaks, plays music, or dances. I think it can be a great alternative for assisted-living facilities, as long as it is done ethically. This is why I signed up to be an ethics advisor for the Hume.AI initiative, where we wrote ethical guidelines for how emotional AI should be developed.



Are you a female expert, researcher, or practitioner, or someone who's just curious about the field of AI? We would love to hear from you! Join the WAI Community and let us give a platform to your ideas!

