Presented at CactusCon 11 (2023), Jan. 27, 2023, 8 p.m. (60 minutes).
Advances in artificial intelligence (AI) have impacted numerous areas, including image recognition, language understanding, autonomous driving, and medicine. These advances reflect significant progress in novel architectures and in mature techniques for training and managing such models. However, they have been driven primarily by “black box” neural methods that provide no explanation of how the system arrived at a particular result. This has serious societal implications due to the consequences for safety, security, and fairness. Recent “large language models,” such as ChatGPT, have highlighted this risk, as they often produce plausible-sounding but incorrect responses; this led StackOverflow to recently ban ChatGPT-generated answers. We will discuss ongoing efforts, in both technology and education, to enhance the safety, fairness, security, and trustworthiness of AI systems.
Additional Resources:
Neuro Symbolic Channel (also includes machine learning tutorial videos):
https://www.youtube.com/@neurosymbolic
PyReason GitHub site:
https://github.com/lab-v2/pyreason
Machine learning fairness, bias, and discrimination resources:
https://labs.engineering.asu.edu/labv2/machine-learning-fairness-resources/
Presenters:
Paulo Shakarian - Associate Professor and Director of Lab V2, Arizona State University