[Drawing of a row of children in lab coats]

Possible Self

Ethics in Artificial Intelligence

In the field of Artificial Intelligence (AI), we ask ourselves:

  • Is this ethical?
  • What is the impact on humanity?
  • What will the future of our society be with this powerful technology?

AI Ethics: The Good, The Bad, and The Technology

Friday, April 28, 2023

10 a.m.–4:30 p.m., Appreciation Hall @ Foothill College

Join the Foothill-De Anza Center for Applied Humanities and the STEM Division, along with guest speakers including presenters from Stanford University and Google, to discuss these pressing issues and grapple with the ethics of technology for our future.

ZOOM

https://fhda-edu.zoom.us/j/84171928838

 

Agenda

 

9:45–10 a.m. Refreshments and Networking

Please join us before the first session begins.

 

10–11:15 a.m. The Ethical Framework: Six Principles that Govern Decision-making in A.I.

Angel Evan: Business and Technology Curriculum Director and Instructor at Stanford Continuing Studies

Angel Evan is an emerging leader in the field of ethical data and responsible A.I. With a career in data science spanning more than ten years, he has deep technical knowledge of large-scale machine learning systems paired with a real-world understanding of how to implement these technologies safely and ethically. Angel believes that A.I. can solve some of the world's most pressing problems and is dedicated to helping others use A.I. conscientiously by ensuring responsible innovation and protecting vulnerable constituencies from potential harm.

11:30 a.m.–1 p.m. Lunch Break

Lunch provided by Foothill College.

There will also be STEM Project Lab technology exhibits.

1–2:15 p.m. AI Applications and Ethics

Students will gain an understanding of algorithmic unfairness and of how product fairness testing through qualitative methods can uncover unfair or prejudicial bias and improve model outcomes. The talk will highlight how Google's AI ethics team, Responsible Innovation, works to ensure that models are designed in line with Google's AI Principles, are inclusive for everyone, and do not perpetuate harm against communities. The team's work is sociotechnical, merging the social sciences and sociological context with technology.

I. AI Applications: The Hazards of Taking BIG Steps with Little Ethics

Diane M. Korngiebel has been an ELSI (ethical, legal, and social implications) Scholar and AI Ethicist on the Responsible Innovation Team at Google since May 2022. Dr. Korngiebel started with Google in October 2021 as a Bioethicist on the Google Bioethics team and was a Research Scholar at The Hastings Center, an independent, non-partisan, non-profit bioethics center in Garrison, New York, the previous year. Before joining The Hastings Center in 2020, she was an Associate Professor in the Department of Biomedical Informatics and Medical Education and an adjunct Associate Professor in the Department of Bioethics and Humanities at the University of Washington School of Medicine in Seattle; she maintains affiliate faculty status in both UW departments.

Her interests include the ethics of using AI for health and wellness applications, broadly construed; the potential and limitations of Big Data science; and appropriate (and inappropriate) design and deployment of digital health and wellness applications.

Dr. Korngiebel’s work has appeared in the American Journal of Public Health, Nature: Genetics in Medicine, NPJ Digital Medicine, and PLoS Genetics. She was recently the principal investigator on a grant funded by the National Human Genome Research Institute and the National Institutes of Health’s Office of the Director on developing an ethics framework to guide biomedical data scientists constructing data models and algorithms. She chairs the American Medical Informatics Association (AMIA) ELSI Working Group and serves on the AMIA Ethics Committee. 

II. An Introduction to Responsible Innovation & Product Fairness  

Gia Paige was born and raised in The Bahamas and is passionate about creating more fair, equitable, and inclusive products, processes, and experiences. She first explored these interests through her studies at Stanford University in Science, Technology & Society, with a self-designed concentration in Race & Gender. She enjoys being creative, brainstorming, and community building and is excited to translate these passions and skills to her work. Gia joined Google in 2019 as a Human Resources Associate and worked on the Compensation Programs and Reporting & Insights teams. Gia is now a senior strategist on the Product Fairness team in Responsible Innovation, where she provides sociotechnical advice and conducts proactive algorithmic fairness testing to ensure Google's technologies do not reflect or perpetuate sociological or socioeconomic inequalities, in support of Google's AI Principles, especially AI Principle #2: Avoid creating or reinforcing unfair bias.

III. Stories That Can Change You: Building Ethics Through Narratives

Scott Robson is a writer and content expert with a background in law and ethics. He has worked to combat human trafficking, advocated for universal access to science education, and helped lead efforts to ensure that AI and other advanced technologies are developed more responsibly. His most recent work at Google focused on developing new strategies to teach and inspire ethics in the tech industry.

2:30–3 p.m. Refreshments and Networking

Please enjoy a break with other attendees!

3–4:30 p.m. Round Table Panel Discussion

Join a panel of Foothill College faculty and students to explore issues of ethics in AI. 

The panelists are:

  • David Hoekenga (Faculty, Philosophy)
  • Eric Reed (Faculty, Computer Science)
  • Alisha Sinha (STEM Student)
  • Giselle Aviles (STEM Student)

Please direct any questions about this event to Mona Rawal at rawalmona@fhda.edu.

 

 

 


Questions?
We're Here to Help!

Karl Welch, Possible Self Director

(510) 695-3282


karlwelch2563@gmail.com


STEM Center, 4203
