Beyond the Hype: The Realities of Building and Scaling AI Responsibly
September 17, 2024
As artificial intelligence (“AI”) dominates management and boardroom discussions, many organizations are eagerly embracing new technologies. For some, this enthusiasm stems from the promise of innovation, while for others, it’s driven by the fear of falling behind competitors on the adoption curve. After all, many of the technical breakthroughs fueling AI’s recent rise are now within reach for certain companies. These include advancements that offer more powerful and affordable ways to process data, the availability of large datasets to train models, and open-source frameworks and libraries that are democratizing AI development. However, while these advancements represent a significant array of new capabilities, it's important to note that they may not be readily available or applicable to every organization.
These advancements primarily benefit companies that are developing models—particularly large language models—that others rely on to create their solutions. While computing power has become less expensive, many organizations only benefit because the leaders in AI development have been able to leverage these resources first.
Defining Responsible AI
If AI, at the highest level, is the ability of a computer to mimic human cognitive functions, what is responsible AI?
The set of principles, actions and decisions that result in the design, deployment and use of AI that is ethical and transparent.
Responsible AI reflects and propagates fairness, transparency, accountability, privacy and safety in ways that advance humanity and foster trust between humans and technology.
Rushing development or adopting an ad hoc approach to planning, however, can lead to costly consequences. Call these the “dark sides of AI.” Overestimating its capabilities. Neglecting data quality. Ignoring ethical issues. Underestimating the complexity of implementation or the challenges in recruiting in-house technical talent. Evidence of these miscalculations is mounting now, as some businesses and their development teams struggle to keep up with ROI projections and the pressure to monetize this expensive investment, especially on time and at scale.
The Importance of a Holistic Approach
Hype aside, there are savvy and sophisticated management and technical teams that are “hitting their marks.” They’re pragmatic, reasoned and disciplined in approaching AI development, traditional and generative – not just realistically, but also responsibly. They know building and scaling AI responsibly requires a holistic and multifaceted approach to several core elements with significant ethical, legal and societal implications. Here’s what their experience – and ours – suggests.
Data and Lifecycle Model: The Critical Framework
If the foundation of a house is cracked, it may not collapse immediately, but repairing it can be costly and may even require rebuilding from scratch, at significant loss of time and money. Similarly, responsible AI requires a solid foundation, and that starts with the data and lifecycle model. At a minimum, this should include data-related controls, such as sampling, quality, selection and collection, and model-related controls, including design, auditability, maintainability, interpretability and robustness. It should also address the following:
- Data quality and bias: Bad data feeds biased, inaccurate and misleading outcomes. Your data lifecycle model must allow monitoring and intervention at various points in the process via tactics such as data cleaning to improve quality, data augmentation to offset data scarcity, and bias detection to uncover and counter unfairness.
- Privacy and security: Your data lifecycle model, from acquisition and ingestion to governance and management, should comply with complex privacy regulations and protect secure information. It should also guard against compromise through unauthorized access, sharing or breaches.
- Accountability and auditing: If your data lifecycle model is understandable, you’ll be able to analyze who accessed, changed or used the data – and for what purposes. You’ll also be able to audit it, which helps ensure compliance with regulations and internal policies.
- Continuous improvement: Your model should allow you to monitor its critical facets via metrics and KPIs that you can use to improve its performance over time.
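The data-related controls above can be illustrated in code. The following is a minimal sketch, not a production pipeline; the record fields (“age”, “label”) and the 0.8 skew threshold are illustrative assumptions.

```python
from collections import Counter

def clean(records):
    """Drop records with missing fields (a basic data-cleaning step)."""
    return [r for r in records if all(v is not None for v in r.values())]

def label_balance(records, key="label"):
    """Return the share of each label, a crude bias-detection signal."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

data = [
    {"age": 34, "label": "approve"},
    {"age": None, "label": "approve"},  # incomplete record, removed by clean()
    {"age": 51, "label": "deny"},
    {"age": 29, "label": "approve"},
]

cleaned = clean(data)
shares = label_balance(cleaned)
# Flag the dataset for human review if any one class dominates.
skewed = max(shares.values()) > 0.8
```

In practice these checks would run at multiple points in the lifecycle, so that drift in quality or balance is caught before it reaches a deployed model.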
Algorithm Transparency: A Clear and Explainable Set of Instructions
AI capabilities can’t be housed in a black box accessible only to a few highly skilled technical specialists. To be managed and controlled, AI must be understood by a wider range of users, from auditors to third-party evaluators and ethical monitors. That can be a tall order, because many AI algorithms, especially deep neural networks, rely on millions or even billions of parameters.
For example, users should be able to track the data's source and path, ensuring clear data provenance, examine the data itself, and understand the rationale behind the algorithm's decision to use it through interpretable results and explainable predictions. Additionally, they should be able to evaluate whether the algorithm is accessing proprietary or restricted data, supported by auditable models and clear documentation.
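One way to make data provenance and access auditable is to log every use of a dataset in a structured record. This is a hedged sketch of the idea; the record structure and field names are assumptions, not a standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    dataset: str                 # which data was used
    source: str                  # where the data came from
    accessed_by: str             # who or what touched it
    purpose: str                 # why it was used
    restricted: bool = False     # proprietary or licensed data flag
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_log: list[ProvenanceRecord] = []

def log_access(record: ProvenanceRecord) -> None:
    audit_log.append(record)

log_access(ProvenanceRecord(
    dataset="claims_2023", source="internal warehouse",
    accessed_by="model-training-job", purpose="train risk model",
))

# Auditors can later filter the log, e.g. for uses of restricted data.
restricted_uses = [r for r in audit_log if r.restricted]
```

A log like this supports both the provenance questions above (source and path) and the proprietary-data question (the `restricted` flag), without requiring reviewers to understand the model internals.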
This transparency is even more vital in certain industries, applications and usage arenas. In healthcare applications, for instance, risks include missed diagnoses, incorrect treatments, inaccurate projections of drug efficacy or surgical robot errors. In criminal justice, opacity can result in biased investigations and unfair sentencing. In financial trading, algorithms could misread market signals and trigger catastrophic losses. Autonomous vehicles could cause accidents and fatalities, while automated weapons systems could target incorrectly, resulting at minimum in civilian deaths and at worst in uncontrollable escalation that threatens peace.
Human-in-the-Loop: Judgment, Ethics and Adaptability
The cognitive gap between humans and technology is widening – and will continue to challenge AI development, in general, and responsible AI in particular. Humans are naturally constrained in their ability to understand complex algorithms, and special training and education can mitigate this deficit only to a certain extent.
Despite this challenge, responsible AI depends critically on human involvement at many important points on AI’s development path. One is monitoring for bias and quality and gathering insights on how to counter these. Another is explaining to others why an algorithm is generating a particular outcome – especially to audiences without technical knowledge.
A third is ethics, a particularly nuanced and subjective arena that deserves its own discussion. Ethics often reflect value systems and belief frameworks that differ across cultures, organizations and individuals. These concepts of right and wrong are complex, context-dependent, interconnected and constantly changing.
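A common way to keep humans involved at these points is a review gate: model outputs the system is confident about proceed automatically, while the rest are routed to a person. The sketch below assumes a confidence score per prediction; the 0.9 threshold and item format are illustrative, not a recommendation.

```python
# Route low-confidence model outputs to a human reviewer.
REVIEW_THRESHOLD = 0.9

def triage(predictions):
    """Split (item, confidence) pairs into auto-accepted and human-review queues."""
    auto, review = [], []
    for item, confidence in predictions:
        (auto if confidence >= REVIEW_THRESHOLD else review).append(item)
    return auto, review

preds = [("claim-001", 0.97), ("claim-002", 0.62), ("claim-003", 0.91)]
auto, review = triage(preds)
```

The human queue then becomes the natural place to monitor for bias, collect explanations for non-technical audiences, and apply the kind of ethical judgment no threshold can encode.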
Risk Assessment and Management: Balancing the Rewards and Costs
Building and scaling AI responsibly across each of the three areas outlined above, as well as others, is essentially about risk and value creation. The lens is similar to the one organizations already use to manage other complex systems with strategic implications for performance – supply chains, customer relationships, enterprise resource planning, business intelligence, cloud computing and cybersecurity. Applied specifically to responsible AI, key risk management elements typically involve the following.
- Key types of AI-related risks: Major categories include the AI implications for regulatory, financial, technological, reputational and operational risk.
- Common risk issues: While bias and accuracy top the list, another is the possibility that AI could be used for nefarious or harmful purposes. Also of concern is that AI systems could become uncontrollable or unpredictable, or simply begin “hallucinating” – a phenomenon anticipated by computer scientist Stephen Thaler, who first wrote about technology spontaneously generating new ideas or concepts without explicit external inputs.1
- Levers of control: These include methods and practices such as statistical techniques, regulatory requirements and AI model assessment, typically provided by independent experts like those embedded with FTI Consulting’s AI Services & Advisory, Technology, Corporate Finance, and Data & Analytics teams.
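These elements can be combined in a simple risk register. The sketch below uses a likelihood-times-impact score to rank the risk categories named above; the 1–5 scales and the example scores are assumptions for illustration, not any firm’s methodology.

```python
# Minimal risk-register sketch: rank AI risk categories by likelihood x impact.
RISKS = [
    # (category, likelihood 1-5, impact 1-5) -- example scores only
    ("regulatory", 4, 5),
    ("financial", 3, 4),
    ("reputational", 2, 5),
    ("operational", 3, 3),
]

def score(likelihood, impact):
    return likelihood * impact

ranked = sorted(RISKS, key=lambda r: score(r[1], r[2]), reverse=True)
top_risk = ranked[0][0]
```

Even a crude ranking like this gives management a shared, reviewable starting point for deciding where the levers of control should be applied first.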
Responsible AI: It’s Really About Life, and Living
Why is responsible AI so important? As one of the most powerful technologies to ever shape our world, responsible AI is fundamentally about us. Responsible AI is about people – a concept that can be expressed in a few simple principles.
- Trust: Responsible AI cultivates trust, most visibly between humans and technology. Trust, however, begets trust: it is an asset with transferable value. We will never have enough advanced experts in AI. But with responsible AI, it’s not hard to envision an ecosystem in which end users and beneficiaries of AI in its various forms learn to trust the “stewardship” of AI and the crucial roles played by its experts, governance systems and regulatory authorities.
- Fairness: Responsible AI helps preserve equity, impartiality and justice – even if, like ethics, these are complex and subjective notions that vary across identities, communities and regions.
- Safety: Responsible AI is about safety and security, and the challenges associated with building confidence across constituents, from executives to regulators, that the benefits of AI are well balanced with the risks of harm or injury.
Responsible AI is also about one more factor: whether its development, regulation and use contribute to a better world, however we choose to define that. As AI continues to evolve, it's crucial to remember its ultimate purpose is to serve humanity. Responsible AI ensures this relationship is one of mutual benefit, where technology enhances human capabilities without compromising our values.
Footnotes:
1: Thaler, Stephen (December 1995). “Virtual input phenomena within the death of a simple pattern associator,” Neural Networks, via ScienceDirect.