Women in AI (WAI)

Women in AI Ethics in Action Speaker Series: AI and Education, The Ethical Impacts

24 October 2024

Principal Author: Karen Jensen

Contributing Authors: Zuhra Matkurbanova, Charlotte Tao


As part of our ongoing Speaker Series, on September 16, 2024, the Global Ethics and Culture team welcomed an esteemed panel, and our very own Claudia Ramly as moderator, to discuss actionable items that organizations can use to implement Ethical AI solutions in education domains.


In 2023, the Global Ethics and Culture team launched a Speaker Series that focused on education and awareness of bias in AI, and our Global Hackathon event challenged organizations worldwide to design AI solutions that addressed the ongoing challenges of gender parity and equity.


In 2024, we’re building on our 2023 successes with a new Speaker Series that will identify actionable solutions organizations can implement to overcome bias and foster change in AI ecosystems and infrastructures.


Madhavi Najana, Dr. Denise Turley, and Sahaj Vaidya joined us as our global panelists for this session. (Please see the links below to our speakers' profiles on LinkedIn.)



Our panelists first discussed their respective journeys into Artificial Intelligence. They were then asked a series of questions, along with questions submitted by our global audience, on how to design policies for Ethical Educational AI. Their responses are summarized here. To follow our series, please visit our blog at https://www.womeninai.co/blog and find our WAI Talks from the Global Women in AI Ethics and Culture office. (Links to each blog post and YouTube video are included at the end of this post.)


Welcome from Bhuva:


Bhuva is our Chief Ethics and Culture Officer for Global Women in AI. The Ethics and Culture office's Speaker Series is focused on Ethical and Cultural AI, but our global initiatives include Education, Entrepreneurship, Innovation, and Research. Our mission at Women in AI is to make AI accessible and inclusive for everyone, with a special focus on women and girls.


Welcome to our third speaker session of the year!


Panelist introductions:


Madhavi: Madhavi works for the Federal Home Loan Bank of Cincinnati here in the US and is an AI advisory board member for Ashland University.  Madhavi is also our Women in AI ambassador for Ohio State University and is a recognized LinkedIn voice.


Dr. Turley: Dr. Turley’s work is centered on both education and industry domains.  She is an Assistant Professor of Business and a Leader in Tech for the United States Chamber of Commerce.  This fall, Dr. Turley will be teaching at a Girls in STEM program.


Claudia Ramly: Claudia also comes from a multidisciplinary background, with areas of interest in public health and in training users on AI-powered medical devices. Additionally, Claudia obtained her second master's degree in the Psychology of Education (Learning Sciences) and has a strong focus on adult learning. Claudia is our Women in AI EU Regional Lead.


Our moderator asked our panelists the following questions:


What has been the impact of Artificial Intelligence on Education?

Dr. Turley: In her role at the United States Chamber of Commerce, Dr. Turley indicated that the organization is leaning rapidly into adopting AI technologies. In the education domain, she is working on STEM initiatives for girls and encouraging them to adopt AI technologies in both their personal and career endeavors. Last but not least, while wearing her teaching hat, Dr. Turley has introduced pilot AI applications, like image creation, into her business courses to encourage students to explore these technologies, and she has integrated AI applications into her personal endeavors as well.


Madhavi: Madhavi emphasized that AI is not a new technology; what we are experiencing now is the hype around AI. Madhavi focuses on AI tools for daily use and is a powerful advocate for AI applications in government and regulatory best practices. Madhavi believes that Human in the Loop (HITL) is of key importance in AI applications, and she is also focused on empowering women in this space by ensuring that barriers to access for women are removed.


How have AI applications and tools been integrated into Higher Education?

Dr. Turley: AI applications in education are still very new.  Institutions are still considering the acceptable use of these technologies, both for the student body and for faculty usage.  Each institution is different, but some applications include:


  • Design and implementation of acceptable use policies for AI (both student- and faculty-oriented)

  • ChatGPT for Teams and Copilot are in use (both student- and faculty-oriented)

  • Using custom-built agents to reduce wait time for responses to student questions (one that Dr. Turley is using; a rough sketch follows this list)

  • Using AI applications to help with APA citations and to build problem-solving skills (student-oriented)

  • Beautiful.ai (both student- and faculty-oriented)

  • Using AI applications to build grading rubrics (faculty-oriented)
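
The panel did not describe how Dr. Turley's custom question-answering agent is built, so the following is a rough, illustrative sketch only. It assumes a hypothetical instructor-curated FAQ and shows, in Python, the basic pattern behind reducing response wait times: match an incoming student question against known answers and escalate anything unfamiliar to a human.

```python
# Illustrative sketch of a course Q&A helper (not Dr. Turley's actual agent).
# Uses only the Python standard library; the FAQ entries are hypothetical.
from difflib import SequenceMatcher

FAQ = {
    "when is the midterm exam": "The midterm is in week 7; see the syllabus for the exact date.",
    "what citation style should i use": "Use APA 7th edition for all written assignments.",
    "how do i submit my assignment": "Upload a PDF to the course portal before 11:59 pm on the due date.",
}

def answer(question: str, threshold: float = 0.6) -> str:
    """Return the closest FAQ answer, or escalate to the instructor."""
    q = question.lower().strip("?! .")
    best_key, best_score = None, 0.0
    for key in FAQ:
        score = SequenceMatcher(None, q, key).ratio()  # rough text similarity
        if score > best_score:
            best_key, best_score = key, score
    if best_key is not None and best_score >= threshold:
        return FAQ[best_key]
    return "I'm not sure; your question has been forwarded to the instructor."

if __name__ == "__main__":
    print(answer("When is the midterm exam?"))               # answered instantly
    print(answer("Can I get an extension on the project?"))  # escalated to a human
```

A production agent would more likely use a language model behind the institution's learning platform rather than simple string matching, but the escalation path to a human stays the same.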


Madhavi: AI applications like Natural Language Processing (NLP) are being used in research by both faculty and students. Some of these uses include:


  • Identifying key trends for research disciplines and publications

  • Improved productivity for scholarly work

  • Data analytics in decision making

    • Measurement and analysis of student performance metrics, learning outcomes

    • Big Data in Curriculum Design

    • Content creation in courses

  • AI as a virtual assistant and tutor for students


Claudia: Claudia shared some of the applications she saw before Generative AI, including intelligent tutoring systems designed to help students learn complex topics, like chemistry, while also helping educators identify points of vulnerability for students and customizing the learning experience. She also mentioned the use of AI applications trained to recognize and identify misinformation.


What are the main AI skills students should develop?

Madhavi: The field of AI is a vast domain, so skill development depends on how you plan to use it. Some examples of AI domains include Natural Language Processing (NLP), research, policy, governance, prompt engineering, and data science, just to name a few. We, as users, need to make sure that we are integrating Human in the Loop (HITL) frameworks so that we can identify hallucinations and validate outputs.
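
Human in the Loop can take many forms, and the panel did not prescribe a specific one, so the snippet below is a minimal, hypothetical Python sketch of one common pattern: an AI-generated draft is held until a person reviews and approves it, which is the step where hallucinations get caught and outputs get validated.

```python
# Minimal human-in-the-loop (HITL) review gate (illustrative only).
# generate() is a hypothetical stand-in for any AI model call.
from dataclasses import dataclass, field

@dataclass
class Draft:
    prompt: str
    text: str
    approved: bool = False
    reviewer_notes: list = field(default_factory=list)

def generate(prompt: str) -> Draft:
    # Placeholder for a real model call; returns an unreviewed draft.
    return Draft(prompt=prompt, text=f"[AI-generated draft answering: {prompt}]")

def human_review(draft: Draft) -> Draft:
    """A person checks the draft for hallucinations before it is used."""
    print("PROMPT:", draft.prompt)
    print("DRAFT: ", draft.text)
    if input("Approve this output? [y/n] ").strip().lower() == "y":
        draft.approved = True
    else:
        draft.reviewer_notes.append(input("Reason (e.g., hallucinated fact): "))
    return draft

if __name__ == "__main__":
    draft = human_review(generate("Summarize FERPA for a course syllabus."))
    if draft.approved:
        print("Released:", draft.text)
    else:
        print("Sent back for revision:", draft.reviewer_notes)
```

The key design choice is that nothing is released until a person sets the approved flag, so the model assists but does not replace human judgment.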


Dr. Turley: Many of the opportunities in AI disciplines involve the use of Generative AI. In this domain, and others, soft skills will continue to be in demand, like critical thinking, problem-solving, and Human in the Loop (HITL) skills. Dr. Turley emphasized that coding skills, while important, are not always necessary to work in AI disciplines.


Claudia: AI is not a replacement for humans. Look to your areas of interest and expertise, then match them to developing or existing AI technologies to find a career fit. Additionally, Claudia suggested that finding and following Key Opinion Leaders (KOLs) could lead to career pathways and insights.


How can educators navigate new and existing AI technologies?

Dr. Turley: Educators are under pressure to both understand and adopt emerging technologies, so time is a challenge. Institutions are building professional development opportunities and designing programs around sound learning structures and student privacy (under FERPA). More support is needed for educators to apply these technologies within existing frameworks. Educators have access to resources and open forum groups to build skills and define appropriate application usage. Start small and develop pilot projects that can be easily measured.



Claudia: As we dive into Education 4.0 and the 4th Industrial Revolution, educational institutions should focus on testing applications in low-risk environments and on personalized learning technologies.


How are emerging technologies evolving in Education?


Madhavi: We are entering the practices and ideologies of Education 4.0, and there are unique ethical challenges in shaping learning outcomes.


Key trends include:

  • AI-driven personalized learning: tailoring content to each learner's experience

  • Concerns about student privacy and student data

  • Learning with AI, including augmented and virtual reality, to make complex subjects more engaging and to design experiences that reduce socio-economic and accessibility barriers to education

  • Adaptive Learning: curated content and deeper dives into AI training data to build diversity into these applications

  • Blockchain for faculty credentialing: a tamper-proof way of ensuring students have access to faculty with validated credentials

  • Ensuring guardrails are in place to shape the future of learning, while remembering to balance ethics and innovation


What are some current Educational Institution Goals that should be considered for Ethical AI applications?

Dr. Turley: While emerging technologies are on the horizon, they are not yet ready for full implementation.  Institutions should:


  • Build data privacy and security into all applications, both for collecting student data and for student use of applications

  • Avoid rushing to adopt; proceed in a thoughtful way

  • Account for budget constraints, which will impact an institution's ability to adopt these technologies

  • Ensure that the objective of emerging technologies is to have a positive impact on students


What are some of the existing regulatory frameworks that Institutions can build on for the adoption of AI applications and solutions?

  • Family Educational Rights and Privacy Act (FERPA): a federal law (US) that protects student education records. The act grants students the right to control how their information is used. What’s unique about this act is that FERPA applies to all educational institutions that receive federal funding, ensuring that these schools cannot share a student's personal data without the permission of the student or parent. 

  • Children’s Online Privacy Protection Act (COPPA): a federal law (US) that provides additional online protection and security of data for children under 13 and specific rights for those children’s parents. This act obligates companies to obtain parental consent before collecting or using personal information and to disclose what information is being collected and how that information is stored. 

  • Privacy laws (US): There is no comprehensive federal privacy law in the US, but there are 19 states (and counting) that have privacy laws in place. For example, the California Consumer Privacy Act (CCPA) is one of many state-level acts that gives residents access to and control over their personal data. 

  • Americans with Disabilities Act (ADA): this federal law (US) guarantees equal opportunity for individuals with disabilities and covers many aspects of public life, including employment. Under the ADA, employers cannot discriminate against individuals with disabilities in hiring, pay, promotion, benefits, and other terms of employment, and they are required to provide reasonable accommodations. 

  • EU Artificial Intelligence Act: Europe’s Artificial Intelligence law categorizes AI applications and technologies by risk. The act places stricter requirements on high-risk AI systems used in healthcare, law enforcement, and various institutions while also promoting innovation in these areas. 



How can educational institutions prevent bias in AI tools?


Dr. Turley: Creating awareness is key to preventing bias. The assumption should always be that bias is present, but institutions can take steps to address it:


  • Awareness is the first step in understanding and recognizing bias. Progress is happening, but ask where the bias originates. 

  • Clarify the results of bias: what does it mean and what are the potential outcomes?

  • Engage with Large Language Models in a critical context and push for more diverse training data that includes more truths about African American populations.


Claudia: Claudia noted that these challenges also exist in data modeling for Arab communities, where image generation often lacks cultural awareness of members of these communities.


Audience questions:


How can the United States contribute to building the infrastructure for emerging technologies in other countries?

Dr. Turley: Global policy is in play as emerging technologies evolve.  The US currently invests in other countries’ infrastructures, supporting social good and equitable access.


Madhavi: The International Organization for Standardization (ISO) maintains global standards that apply to AI applications and deployments. MIT also recently released a risk management database that can serve as a resource and framework.


For women who are interested in learning these technologies, what is the best way to gain expertise? Is coding a mandatory skill?

Dr. Turley: Focus on your area of interest, as AI is a broad discipline. Start out with tools for personal use, like planning recipes or travel, then build as you go.


Madhavi: There are many points of entry in emerging technologies that offer no code/low code opportunities.


Claudia: Stanford University offers a free Code in Place course.


What are some ways to introduce AI to younger students?

Dr. Turley: Be present with younger students and monitor usage.  Parents and educators can pre-screen tools to build narrow use cases that are age-appropriate and provide the appropriate levels of data privacy and security.  Be open with younger students about bias and its impact.


Madhavi: Use characters (like Cosmo) to create chatbots and build simple robots. Make sure use cases are simple, easy to understand, and age-appropriate. 


Bhuva’s conclusion:


Thank you to all of our panelists for an enlightening conversation!  We appreciate your targeted responses to the issues of data privacy and regulatory frameworks.  We also appreciate the discussions about how students and institutions can integrate emerging technologies and adhere to the tenets of responsible and ethical AI.


Takeaways and Learning:


AI has the potential to enhance education: From personalized learning to faster turnaround on student questions, AI is poised to advance student learning outcomes. However, educational institutions should proceed responsibly in deploying these applications to ensure that student privacy is maintained and that bias in these applications is recognized.

Educational institutions should start small: Educational institutions face budget constraints for the development and implementation of AI technologies. One strategy is to start small and begin with pilot projects. These projects should include professional development for faculty and clear policies to protect student information and mitigate bias.

Entry points for AI careers: Students and those with established careers have a place in emerging technologies through multiple points of entry.  Focus on your areas of interest or expertise.  AI applications exist in many different industries and disciplines, so coding skills are not a critical barrier to entry into this field.


Event recording

You can view the recording of the event using this link.


Ethics & Culture Team


Please see the links below to our Team’s profiles on LinkedIn.

 


