Establishing TRiSM Systems For AI In Enterprises
AI has been in the picture for a long time, dating back to the 1950s. Recent years have certainly brought more dramatic progress in computational technology, but for decades most enterprises employed AI for predictive analytics: using machine learning to forecast outcomes from historical data. Back then, the use cases broadly revolved around sales projections and fraud detection.
However, a major technological breakthrough came when large companies like Google improved their algorithms and computational power became more affordable. This was the point at which a more complex form of AI emerged: multi-layered neural networks allowed AI to recognise and interpret data with far more nuance. The availability of large labelled datasets, such as ImageNet in 2010, further accelerated AI development.
Nowadays, each of those earlier use cases is addressed far more thoroughly. Fraud detection, for instance, goes beyond transactional data to analyze emails and conversations using deep learning and large language models (LLMs). Thanks to developments in natural language processing (NLP), chatbots have evolved from simple, rule-based systems into advanced, contextual, conversational interfaces.
AI: The Current Concerns & Risks
In today's context, enterprises leverage AI through broad access to large datasets and advanced models, where they were once limited to their internal data. With resources from companies like Meta and OpenAI, businesses can now build on pre-trained models using their own data, accelerating their AI use cases. Enterprises can start at a more advanced stage rather than from scratch, which makes it easier even for those who haven't previously invested in AI to adopt and implement it quickly. As a result, many enterprises are now investing in AI to serve their customers better and stay competitive.
As One Might Expect, All This Progress Isn’t Wholly Free
AI presents several well-known risks, particularly for enterprises and consumers, ranging from data privacy concerns to internal data leakage.
Data privacy is a significant concern, as AI systems often collect and process large amounts of personal data. For example, a facial recognition system used by a retail store could capture and store images of shoppers without their consent. If the data a system ingests is confidential, it can easily be misused or stolen.
In addition, enterprises need to contend with more nuanced risks. Deepfakes, for example, pose an unprecedented level of risk: they make it possible to fabricate a convincing likeness of a real person, in both appearance and behaviour, and have that likeness act on screen as if in the flesh. How can companies protect themselves against something like that?
Before We Freak Out, Is That All?
No.
The fact that nothing is ever error-free is a hard pill to swallow. As we move towards an era where everything is potentially AI-generated, hallucinations introduce an element of doubt into every result: a model can generate incorrect or unreliable outputs. Under the right (or, more accurately, wrong) set of conditions, such errors can go undetected and reach the end user.
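For illustration, one simple guardrail against hallucinations is self-consistency checking: sample the model several times and only trust an answer it repeatedly agrees with itself on. The sketch below assumes a hypothetical `ask_model` function that returns a short answer string; it is one possible mitigation, not a cure.

```python
# A minimal self-consistency sketch: sample the model several times and flag
# answers with low agreement as unreliable. `ask_model` is hypothetical.
from collections import Counter

def consistent_answer(ask_model, question, samples=5, min_agreement=0.8):
    answers = [ask_model(question) for _ in range(samples)]
    answer, count = Counter(answers).most_common(1)[0]
    if count / samples < min_agreement:
        return None  # low agreement: escalate, retry, or add a human review
    return answer
```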
Another significant issue is bias in AI, stemming both from the data used to train the models and from the societal biases of the people developing them; either can lead to skewed outcomes. Enterprises must also protect customer data, prevent leaks, and address inappropriate interactions, such as profanity. Imagine your chatbot picking up cuss words and unleashing them on a particularly loyal customer; you'd lose sales, and likely the customer too.
One reliable way to reduce bias, in particular, is to diversify both the training data and the teams building the systems. Most of these risks, however, require proactive management, and many enterprises are still grappling with how to address these emerging threats.
AI TRiSM To The Rescue
Gartner's AI TRiSM framework (AI Trust, Risk, and Security Management) provides guidelines to help enterprises address the various risks and security issues associated with AI. While it doesn't offer a foolproof solution, it serves as a foundation for understanding and mitigating these risks by putting systems and guardrails in place, and it is expected to evolve as more risks and security challenges are identified. Initiatives like the U.S. government's NIST AI Risk Management Framework similarly aim to address AI risks by offering standardized mechanisms and policies, though these, too, are not absolute guarantees of protection.
Addressing The T & The R
The framework addresses common risks such as bias and data privacy to strengthen AI's trustworthiness. Accompanying each generated outcome with the workings behind it helps users understand the results, and AI TRiSM also ensures that adequate policies are in place for masking or blocking sensitive data.
Two of the framework's four pillars, explainability and robustness, secure the trust and risk management aspects of an AI system.
“Explainability” is the ability to understand and explain how an AI system arrives at its decisions. This is important for building trust in AI systems and for ensuring they are used responsibly. For instance, if a bank uses an AI system to assess loan applications, it's crucial to understand how the system decides which applications to approve or reject. That understanding helps verify that the system isn't biased against certain groups of people.
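As a minimal sketch of what explainability can look like in practice, the example below trains a toy loan-approval classifier on synthetic data and uses scikit-learn's permutation importance to surface which features drive its decisions. The feature names and data are illustrative assumptions, not a real lending model.

```python
# A toy explainability sketch: which features drive a loan model's decisions?
# All data and feature names here are synthetic and illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["income", "credit_score", "debt_ratio", "years_employed"]
X = rng.normal(size=(500, len(features)))
# Synthetic "approvals" driven mainly by credit_score and debt_ratio
y = (X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:15s} {score:.3f}")
```

A report like this lets a reviewer confirm the model leans on legitimate financial signals rather than on proxies for protected characteristics.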
On the other hand, “robustness” refers to an AI system's ability to withstand unexpected inputs or changes in its environment. Robust AI systems are less likely to make errors or be manipulated. For example, a self-driving car must be able to navigate safely even in unexpected situations, like encountering a road closure or adverse weather conditions.
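A rough way to quantify this kind of robustness is to measure how often a model's predictions flip under small input perturbations. The sketch below assumes an already-fitted scikit-learn-style classifier `model` and a feature matrix `X`; it is a basic stress test, not a full adversarial evaluation.

```python
# A minimal robustness check: how often do predictions flip when inputs
# are nudged with small random noise? Assumes a fitted classifier `model`.
import numpy as np

def perturbation_flip_rate(model, X, noise_scale=0.05, trials=20, seed=0):
    rng = np.random.default_rng(seed)
    baseline = model.predict(X)
    flip_rate = 0.0
    for _ in range(trials):
        noisy = X + rng.normal(scale=noise_scale, size=X.shape)
        flip_rate += np.mean(model.predict(noisy) != baseline)
    return flip_rate / trials

# e.g. print(perturbation_flip_rate(model, X))  # a high rate => brittle model
```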
A special case of risk mitigation concerns the biases inherent in the very nature of such systems. The third pillar of AI TRiSM is “fairness”: a fair AI system is not biased against certain groups of people. This is essential for ensuring that AI is used ethically and doesn't perpetuate existing societal inequalities. For example, an AI system used for recruitment should not discriminate against candidates based on their gender, race, or other protected characteristics.
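One simple fairness audit, sketched below, compares outcome rates across groups (a demographic-parity check). The predictions and group labels are illustrative; real audits combine several metrics and dig into the underlying data.

```python
# A minimal fairness audit sketch: compare positive-outcome rates per group.
# The example predictions and group labels are purely illustrative.
import numpy as np

def approval_rate_by_group(predictions, groups):
    return {g: float(predictions[groups == g].mean())
            for g in np.unique(groups)}

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])
print(approval_rate_by_group(preds, groups))
# A large gap between groups is a signal to investigate the model and its data.
```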
What About Security Then?
On the security side, AI models need protection against model poisoning. Bad data can swiftly corrupt models, leading to harmful outcomes, especially in critical systems like healthcare or infrastructure. Secure training processes, data validation, and governance policies help prevent malicious injections and ensure AI operates safely. Governance policies can also be automated to filter or mask sensitive data, keeping interactions with AI models secure and free of exploitation.
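As one illustration of data validation against crude poisoning, the sketch below drops candidate training rows whose features fall far outside the distribution of a trusted reference set. Subtle poisoning won't always show up as a statistical outlier, so treat this as a first line of defence, not a complete one.

```python
# A minimal data-validation sketch: reject candidate training rows that lie
# far outside the distribution of trusted reference data (z-score filter).
import numpy as np

def filter_outliers(X_candidate, X_trusted, z_threshold=4.0):
    mu = X_trusted.mean(axis=0)
    sigma = X_trusted.std(axis=0) + 1e-9  # avoid division by zero
    z = np.abs((X_candidate - mu) / sigma)
    keep = (z < z_threshold).all(axis=1)
    return X_candidate[keep], keep

# Rejected rows still deserve human review before being discarded for good.
```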
With the rise of advanced AI tools like chatbots powered by natural language processing (NLP), end users now interact directly with these systems. This wasn't the case previously, when AI was more insulated within internal enterprise operations. This increased exposure means customers may inadvertently share sensitive data, such as social security numbers, credit card details, or healthcare information, making it crucial to safeguard such data.
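A minimal sketch of such a safeguard is to mask obvious PII before a message ever reaches the model. The regular expressions below are illustrative stand-ins; production systems need far broader pattern coverage and validation.

```python
# A minimal PII-masking sketch applied before messages reach an AI model.
# These patterns are illustrative only; real coverage must be far broader.
import re

PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def mask_pii(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(mask_pii("My SSN is 123-45-6789 and my card is 4111 1111 1111 1111."))
```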
The fourth pillar of AI TRiSM covers this part. “Lineage” refers to the ability to trace the origins and development of an AI system: the data used to train it, the algorithms applied, and the people who built it. Lineage enforces accountability and makes AI systems auditable, so potential problems can be identified, enhancing the security of the models in use.
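What a lineage record contains will vary by organisation; the sketch below shows one hypothetical schema for capturing a model's origins. The field names are illustrative, not a standard.

```python
# A hypothetical lineage record: one way to capture where a model came from.
# The schema and field names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ModelLineage:
    model_name: str
    version: str
    training_data: list   # dataset names and content hashes used for training
    algorithm: str
    developed_by: list    # accountable people or teams
    approvals: dict = field(default_factory=dict)  # audit sign-offs by role

record = ModelLineage(
    model_name="loan-approval",
    version="2.3.0",
    training_data=["applications_2023.parquet (sha256 recorded at ingest)"],
    algorithm="gradient-boosted trees",
    developed_by=["risk-ml-team"],
)
```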
What Slips Past & What Can Be Done About It?
The future of AI risk management will evolve as new, unknown use cases emerge; many potential risks are still not fully understood, even within the industry. One key area of concern is sentiment analysis in conversational AI: as these systems interact more intimately with users, sentiment analysis is needed to ensure the AI does not unintentionally influence users to take harmful actions. In one reported incident, a child managed to use ChatGPT to learn how to build an explosive. That specific gap was closed, but it illustrates how little it takes for AI to be misused. These risks highlight the need for continuous learning and adaptation as the industry uncovers new challenges and works to safeguard against them.
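As a toy illustration of sentiment- or safety-gating in a conversational system, the sketch below flags messages containing distress or danger terms for escalation. The word list, threshold logic, and routing are stand-ins for a real sentiment model and escalation policy.

```python
# A toy safety-gating sketch: flag risky user messages before the AI replies.
# The term list and matching logic are placeholders for a real sentiment model.
DISTRESS_TERMS = {"hopeless", "harm", "hurt", "explosive", "weapon"}

def should_escalate(user_message: str) -> bool:
    words = set(user_message.lower().split())
    return bool(words & DISTRESS_TERMS)

if should_escalate("i feel hopeless and want to harm myself"):
    print("Route to a human and return a safe, supportive response")
```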
One notable "unknown use case" that recently emerged involves code generation with AI models like ChatGPT. Initially, engineers at companies like Samsung began using AI to generate or fix code by injecting their proprietary algorithms into the model. However, it was later discovered that other individuals, including bad actors, could reverse-engineer these interactions, effectively uncovering Samsung’s private algorithms.
This example illustrates how sharing sensitive data or code with public AI models can lead to unintended exposure, making it clear that data privacy concerns extend beyond traditional use cases. It highlights the risks of using AI for proprietary or sensitive operations and the need for enterprises to be cautious when interacting with public AI systems.
Before You Go Ahead & Use AI
For all the opportunities AI presents, enterprises should approach implementation cautiously. We recommend starting with internal use cases before expanding to customer-facing applications; that way, any hiccups along the way can be mitigated before you face your entire client base. You may also find you need a custom framework that addresses your specific AI-related issues, and existing models like the AI TRiSM framework and the U.S. government's NIST AI framework can aid you in formulating one. Oftentimes, ready-made platforms can empower businesses to build comprehensive, tailored solutions that maximise AI's benefits while safeguarding against potential pitfalls.
In conclusion, as AI continues to evolve, a commitment to ongoing learning, adaptation and responsible implementation will be paramount for businesses to move forward ethically and securely.