As users and developers of the technology, everyone has a role to play in making the use of Artificial Intelligence ethical and responsible. According to Morning Consult's IBM Global AI Index 2021, over 90 percent of businesses using AI say that trustworthy and explainable AI is critical to their business. If not designed with responsible consideration of fairness, transparency, privacy, safety, and security, AI systems can cause significant harm to people and society, and can result in monetary and reputational loss for companies. Enabling ethical and equitable AI requires a comprehensive approach spanning people, processes, systems, data, and algorithms. What are the pillars of responsible AI? How can we incorporate responsible AI principles into every phase of AI solution development, from concept to deployment? In this talk, we will discuss practical approaches for incorporating responsible AI principles, drawing on tools, frameworks, and industry case studies.
By the end of this course, you should be able to:
- Identify the pillars of responsible AI, such as fairness, transparency, inclusion, security, safety, and sustainability, and explain why they matter.
- Define a human-centric approach to developing AI solutions.
- Implement ethical AI solutions as a developer, user, or advocate.
Duration: 60 minutes
Closed Caption: English