Episodes

  • Empowering Communities in AI Development with Vilas Dhar, President & Trustee, Patrick J. McGovern Foundation | EP 15
    Feb 27 2025

    In this episode of the Responsible AI Report, Vilas Dhar, President & Trustee of the Patrick J. McGovern Foundation, discusses the critical role of philanthropy in shaping responsible AI development amidst rapid technological changes. He emphasizes the need for inclusive participation from communities, the importance of global governance, and the necessity of education in empowering policymakers and citizens alike. Vilas highlights the urgency of ensuring accountability and transparency in AI systems, advocating for a future where technology serves the common good.

    Takeaways

    • Philanthropy can help ensure equitable AI development.
    • Communities should be active participants in shaping AI.
    • Global governance requires shared capacity and public investment.
    • Education is crucial for understanding and regulating AI.
    • Policymakers need to envision a technological future for all.
    • Citizens hold power in advocating for responsible AI.
    • Public movements are essential for accountability in AI.
    • The future of technology should align with common welfare.
    • Curiosity and engagement are key to digital literacy.
    • The responsibility for a better AI future lies with us all.

    Learn more at:

    https://www.mcgovern.org/

    https://www.linkedin.com/in/vilasdhar/

    https://www.linkedin.com/company/mcgovern-foundation/


    Vilas Dhar is a leading advocate on AI for public purpose and a global expert on artificial intelligence (AI) policy. He serves as President and Trustee of the Patrick J. McGovern Foundation, a $1.5 billion philanthropy advancing AI and data solutions for a sustainable and equitable future. He champions a new digital compact that prioritizes individuals and communities in the development of new products, inspires economic and social opportunities, and empowers the most vulnerable.

    Appointed by UN Secretary-General António Guterres to the High-Level Advisory Body on AI, Vilas is also the U.S. Government Nominated Expert to the Global Partnership on AI. He serves on the OECD Expert Working Group on AI Futures, the Global Future Council on AI at the World Economic Forum, and Stanford's Advisory Council on Human-Centered AI. He is Chair of the Center for Trustworthy Technology. His LinkedIn Learning course, Ethics in the Age of Generative AI, is the most-viewed AI ethics course globally, reaching over 300,000 learners.

    Vilas holds a J.D. from NYU School of Law, an M.P.A. from Harvard Kennedy School, and dual Bachelor's degrees in Biomedical Engineering and Computer Science from the University of Illinois. He is pursuing doctoral studies at the University of Birmingham.


    Support the show

    Visit our website at responsible.ai


    18 mins
  • AI in the Entertainment Industry with Duncan Crabtree-Ireland, National Executive Director & Chief Negotiator, SAG-AFTRA | EP 14
    Feb 26 2025

    In this episode of the Responsible AI Report, Duncan Crabtree-Ireland, National Executive Director and Chief Negotiator at SAG-AFTRA, discusses the impact of AI on the entertainment industry, particularly in light of the historic 2023 strike. He outlines the protections negotiated for artists regarding their likenesses and creative work, emphasizing the importance of informed consent and fair compensation. Duncan also shares insights on how SAG-AFTRA plans to evolve its AI guidelines to keep pace with emerging technologies and highlights the vision of using AI to augment rather than replace human creativity. The discussion concludes with a call to action for ongoing engagement and education around AI's role in the industry.

    Takeaways

    • SAG-AFTRA negotiated significant protections for artists against AI misuse.
    • Informed consent and fair compensation are key principles in AI negotiations.
    • The entertainment industry is in a constant state of negotiation regarding AI.
    • AI tools can enhance creativity rather than replace it.
    • Negotiations with various industries help shape AI guidelines.
    • Transparency is needed in AI's use of artists' likenesses.
    • AI's rapid advancement requires adaptive strategies from unions.
    • Maintaining a level playing field for human artists is crucial.
    • Collaboration between artists and technology can lead to positive outcomes.

    Learn more at:

    https://www.sagaftra.org/

    https://www.linkedin.com/company/screen-actors-guild/

    https://www.linkedin.com/in/duncanci/

    @duncanci

    @sagaftra

    Support the show

    Visit our website at responsible.ai


    21 mins
  • Responsible AI & Model Risk Management in Finance with Christophe Rougeaux, MRM Executive, TD Bank | EP 13
    Feb 25 2025

    In episode 13 of the Responsible AI Report, Patrick speaks with Christophe Rougeaux about the importance of responsible AI and model risk management in the financial sector. They discuss how banks can expand their model risk management capabilities to include AI oversight, the challenges of building specialized expertise in risk management teams, and strategies for accelerating AI deployment while maintaining robust risk management practices. Christophe emphasizes the need for a holistic understanding of the AI lifecycle, continuous improvement, and a supportive culture for safe AI implementation.

    Takeaways

    • Financial institutions must prioritize dedicated governance teams for AI.
    • Model risk management should include new risk dimensions like bias and cybersecurity.
    • Talent acquisition in AI is a challenge for banks competing with tech companies.
    • A holistic view of the AI lifecycle is essential for speed to market.
    • Continuous improvement is necessary for effective AI processes.
    • Change management is crucial for implementing new practices in banks.
    • Building specialized expertise in risk management is vital for AI validation.
    • Communication from leadership is key to fostering a culture of safe AI.
    • Banks need to adapt their operating models to optimize AI deployment.

    Learn more at:

    https://www.linkedin.com/in/christopherougeaux/

    Christophe Rougeaux is an expert in analytics who helps global organizations ensure effective and sustainable management of their analytics through robust oversight and governance. Christophe previously co-led McKinsey’s Model Risk Management service line. Since 2024, he has been a Model Risk Management Executive at TD Bank Group, where he heads model validation for the non-retail portfolio and leads strategic AI/Model Governance initiatives.

    Support the show

    Visit our website at responsible.ai


    18 mins
  • Understanding Your AI Usage with Carmine Valente, Global Head of Cybersecurity Risk, Paramount | EP 12
    Feb 25 2025

    In this episode of the Responsible AI Report, Patrick speaks with Carmine Valente, the Global Head of Cybersecurity Risk at Paramount, about the intersection of AI and cybersecurity in the entertainment industry. They discuss the legal and security risks associated with AI, the importance of understanding AI usage, and the need for robust cybersecurity protocols to protect intellectual property. The conversation also explores how entertainment companies can balance innovation with risk management and the frameworks that can guide responsible AI governance.

    Takeaways

    • Organizations must understand their AI usage.
    • Tracking content origins is essential for security.
    • Fake content poses significant challenges in media.
    • AI can be used to mitigate AI-related risks.
    • Tools for data loss prevention are becoming essential.
    • Understanding technology risks is vital for innovation.
    • Legal frameworks are evolving to address AI challenges.
    • Collaboration in the industry is necessary for progress.
    • Embracing AI responsibly can lead to significant advancements.

    Learn more at:

    https://www.linkedin.com/in/carminevalente/

    Carmine Valente is an Information Security Executive with extensive cross-cultural experience in all aspects of Cyber Security, Risk Management, Incident Response, Attack Surface Management, AI Security, Audit, Business Resilience, Data Security, and Board Advisory. With a background in computer science and software engineering and a Master of Science in Cybersecurity Leadership, he has advised global clients in sectors including high tech, telecom, healthcare, government administration, media, financial services, and professional services, providing strategic influence across many cross-cultural Fortune 100 and Fortune 500 organizations. In current and past roles, Carmine has provided visionary leadership in combating advanced persistent security threats to organizational infrastructures while balancing risk, privacy, and compliance and empowering the business lines. His work is published by Springer in the book “Machine Learning and Data Mining in Pattern Recognition”, and he received an award for his research in AI-driven network security at the 6th International Conference on Data Mining in Leipzig, Germany. Carmine has been recognized by peers as a highly skilled subject matter expert in the field of Cyber Security.

    Support the show

    Visit our website at responsible.ai


    18 mins
  • Why You Need AI Governance Amidst AI Regulation with Betty Louie, Partner & General Counsel, The Brandtech Group | EP 11
    Feb 25 2025

    In this episode of the Responsible AI Report, Patrick and Betty Louie discuss the evolving landscape of responsible AI, focusing on the importance of developing internal governance frameworks for AI compliance amidst fragmented global regulations. Betty emphasizes the need for companies to establish their own AI principles and policies to navigate the complexities of AI regulation effectively. They also explore the significance of self-regulation and the proactive steps organizations should take to ensure ethical AI use and compliance with emerging regulations.

    Takeaways

    • The AI regulatory landscape is fragmented and evolving.
    • Companies should develop their own internal AI governance frameworks.
    • AI principles should guide compliance efforts.
    • Understanding the strictest regulations helps in compliance.
    • A multidisciplinary team is essential for evaluating AI use.
    • Transparency in AI use is crucial for ethical practices.
    • Self-regulation is critical in the absence of federal regulations.
    • Companies should assess the AI tools their employees are using.
    • Establishing a green list of AI tools can aid compliance.
    • Continuous evaluation of AI principles is necessary.

    Learn more at:
    https://www.linkedin.com/in/betty-louie-039a1920/
    https://www.linkedin.com/company/the-brandtech-group/
    https://thebrandtechgroup.com/

    Recent work:
    AdExchanger: https://www.adexchanger.com/data-driven-thinking/5-tips-for-drafting-an-ethical-generative-ai-policy/ and https://www.adexchanger.com/adexchanger-talks/405939/
    Creative Ops: https://creativeops.fm/episode/e19-legal-as-co-pilot-in-accelerating-creatives-ai-adoption-w-betty-louie-of-brandtech-group
    BrXnd: https://brxnd.ai/sessions/navigating-legal-risks-in-gen-ai-a-practical-guide-for-companies-with-betty-louie-and-shareen-pathak
    Authority Magazine: https://medium.com/authority-magazine/c-suite-perspectives-on-ai-betty-louie-of-the-brandtech-group-on-where-to-use-ai-and-where-to-rely-aa920c35cd83

    Betty Louie is a Partner and General Counsel at The Brandtech Group. She has more than 25 years’ experience advising both public and private tech companies, and was previously a partner at a leading international law firm. She has been consistently ranked in Chambers Global and Legal500 since 2012. Betty oversaw Brandtech’s 2023 acquisitions of Jellyfish, a digital media company, and Pencil, a Generative AI platform, and works extensively with major global brands to design robust and ethical AI and Gen AI policies. She spearheaded Brandtech’s green-listing system to enable companies to experiment and explore new Gen AI tools within certain legal, tech, and ethical parameters. She is a leading speaker and industry thought leader.


    Support the show

    Visit our website at responsible.ai


    25 mins
  • How AI Agents Will Impact Businesses with Jeff Redel, Managing Director of Data & AI Governance, ATB Financial | EP 10
    Feb 13 2025

    In this episode of the Responsible AI Report, Patrick speaks with Jeff Redel, Managing Director of the Data and AI Governance team at ATB Financial. They discuss the evolution of AI agents, the importance of ethical implementation in banking, the skills financial professionals will need in the future, the necessity of human oversight in AI processes, and the regulatory challenges that accompany the rapid advancement of AI technology. Jeff emphasizes the need for a strong foundation in data governance and ethics, as well as the importance of education and adaptability for team members in the face of AI integration.

    Takeaways

    • AI is evolving from simple tools to sophisticated agents.
    • Ethics and data governance are foundational for AI implementation.
    • Education and training are crucial for team members.
    • Data literacy is essential for effective AI use.
    • Human oversight is necessary for decision-making with AI.
    • AI agents improve performance but require human input.

    Learn more at:
    https://www.atb.com/personal/
    https://www.linkedin.com/in/jeff-redel-2097291/

    Jeff Redel is the Managing Director of ATB Financial’s Data & AI Governance team. His team focuses on ethical and responsible AI, data governance excellence, and strategic leadership and vision. Key areas include championing fairness and inclusivity, prioritizing transparency and explainability, upholding privacy and security, promoting responsible AI use, establishing data quality and integrity, ensuring data security and compliance, driving data literacy and accessibility, optimizing data management and architecture, aligning data and AI with business goals, fostering collaboration and communication, promoting a culture of innovation and learning, and building a high-performing team.

    Support the show

    Visit our website at responsible.ai


    18 mins
  • AI Risk & Ethical Considerations with Amy Challen, Global Head of AI at Shell | EP 09
    Jan 30 2025

    In this episode of the Responsible AI Report, Patrick speaks with Amy Challen, the Global Head of AI at Shell. They discuss the current landscape of AI, including the ethical considerations in AI development, the importance of risk management, and the public discourse surrounding responsible AI. The conversation highlights the need for a balanced approach to AI innovation and the role of leadership in navigating these challenges.

    Takeaways

    • The development of AI must serve humanity's interests.
    • Ethical considerations are crucial in AI and AGI development.
    • Different countries address AI risks in varied ways.
    • Public discussions on AI often overlook everyday ethics.
    • AI risks should be assessed pragmatically and holistically.
    • Technological innovation requires public and private partnerships.
    • Responsible AI is a collective effort across industries.

    Learn more at:
    https://www.shell.com/what-we-do/digitalisation/artificial-intelligence.html

    Amy Challen is the Global Head of Artificial Intelligence at Shell, responsible for driving delivery and adoption of AI technologies, including natural language processing, computer vision, and deep reinforcement learning.

    She spent the first decade of her career in academia as a researcher in applied econometrics before joining McKinsey & Company as a strategy consultant. As a consultant, she solved real-world problems across diverse functions and industries for some of the world’s largest organizations, delivering significant commercial value. She joined Shell in 2019.

    Support the show

    Visit our website at responsible.ai


    22 mins
  • The Significance of AI System Cards with Bryan McGowan and Christopher Jambor, Trusted AI Team at KPMG | EP 08
    Jan 16 2025

    In this episode, Patrick speaks with Bryan McGowan and Chris Jambor from KPMG about the importance of responsible AI practices. They discuss the limitations of AI models, the development and significance of AI system cards, and how these tools can help mitigate risks associated with AI technologies. The conversation emphasizes the need for a structured approach to AI governance and the role of transparency and accountability in building trust in AI systems.

    Takeaways

    • AI tools are evolving rapidly and need proper guardrails.
    • AI system cards provide a structured way to assess AI systems.
    • Transparency and explainability are crucial in AI governance.
    • System cards help improve AI literacy in the workplace.
    • A trust score helps users understand AI system performance.
    • AI governance must be scalable and adaptable to technology changes.
    • Robust testing and validation are key to responsible AI.

    Learn more at:
    https://kpmg.com/xx/en/what-we-do/services/kpmg-trusted-ai.html

    Bryan McGowan is a Principal in the KPMG Advisory practice and leader of US Trusted AI for Consulting. In this role, Bryan continues to pursue his passion for leveraging technology to drive efficiency, enhance insights, and improve results. Trusted AI combines deep industry expertise across the firm’s Risk Services, Lighthouse, and Cyber businesses with modern technical skills to help business leaders harness the power of AI to accelerate value in a trusted manner, from strategy and design through implementation and ongoing operations. Bryan also leads the Trusted AI go-to-market efforts for the Risk Services business and co-developed the firm’s Risk Intelligence product suite to help identify, manage, and quantify risks across the enterprise. His primary focus areas are business process improvement, control design and automation, and managing risks associated with emerging technologies. Bryan has over 20 years’ experience running large, complex projects across a variety of industries, including supporting clients on their automation and analytics journeys for the better part of the last decade by designing and developing bots, RPA, initial AI/ML models, and more.

    Chris is a member of the KPMG AI & Digital Innovation Group's Trusted AI Team with a specialized focus on AI literacy and the responsible & ethical uses of AI. Before joining the Trusted AI team, Chris was an AI Strategy Consultant & Analytics Engineer working in industries such as technology, entertainment, healthcare, pharmaceuticals, marketing/advertising, higher education, and cybersecurity.

    Support the show

    Visit our website at responsible.ai


    16 mins