New release

AI Governance & Reputation Risk for Boards: Policy Recommendations for a Responsible Future

By: Andy Bhatt
Available in your Decitre or Furet du Nord customer account as soon as your order is confirmed. The ePub format is:
  • Compatible with reading on My Vivlio (smartphone, tablet, computer)
  • Compatible with reading on Vivlio e-readers
  • For e-readers other than Vivlio, you must use the Adobe Digital Editions software. Not compatible with reading on Kindle, Remarkable, and Sony e-readers
  • Format: ePub
  • ISBN: 8231868612
  • EAN: 9798231868612
  • Publication date: 12/07/2025
  • Digital protection: none
  • Additional info: epub
  • Publisher: Walzone Press

Summary

AI Governance & Reputation Risk for Boards: Policy Recommendations for a Responsible Future offers a timely and actionable blueprint for corporate boards, regulators, and industry leaders navigating the accelerating AI revolution. Authored by Andy Bhatt, M.S.-M.I.S., this Think Tank report from SW & Associates confronts the widening gap between rapid AI adoption, especially Generative AI, and board-level readiness, exposing a growing "AI oversight debt" that poses ethical, legal, and reputational risks.
Drawing on cross-sector research and real-world case studies, the report highlights the most urgent challenges facing boards: algorithmic bias, data privacy breaches, black-box AI opacity, deepfakes, and the rise of deceptive "AI washing." It scrutinizes existing governance frameworks, including the OECD AI Principles, the NIST AI RMF, and the EU AI Act, and identifies critical shortcomings in corporate structures, such as fragmented AI ownership and low board literacy.
The report presents a structured and pragmatic set of policy recommendations:
  • For boards: build robust governance structures, make AI a standing agenda item, invest in AI fluency, and require explainability and ethical audits.
  • For regulators: support innovation-friendly regulation through risk-based frameworks, sandboxes, transparency mandates, and liability reform.
  • For industry: embed responsible AI from design to deployment through bias mitigation, human-in-the-loop systems, and strong data governance.
This comprehensive guide not only outlines the risks of inaction but also positions responsible AI as a strategic differentiator. It urges organizations to treat ethics, transparency, and trust not as regulatory burdens but as competitive advantages. As AI reshapes global industries, this report serves as both a warning and a roadmap for boards ready to lead responsibly in the age of intelligent systems.