Given the growing importance of artificial intelligence, I have spent the past several months delving into this topic, but with caution, seeking to understand the key question of how we can protect applications built on generative AI.
We are living in a time when AI is the central theme in every area that can deliver benefits to organizations and end users, including the protection and early detection of cyber threats through threat intelligence, incident response, EDR and SOAR solutions.
However, the question we must ask is how closely we are looking at the risks of this emerging technology. As AI gains traction through solutions such as ChatGPT, Claude.ai, Meta.ai, Copilot, Gemini and Grok, among other SaaS-based offerings, it is a fact that AI is invading and taking over our routines. Those who do not adapt to this technology risk being taken off the board.
Whether recording meetings and creating automated summaries with Hynote.ai or preparing a cyber risk management plan, we have to recognize the speed and agility that AI brings to daily tasks. We already see companies where, when an employee leaves, the position may not be filled while the organization studies whether certain tasks can feasibly be handed over to AI.
The importance of understanding the risks behind AI
Here lies a point of concern that some professionals may not be noticing. Are we being dominated by AI? The answer may be “yes,” “no” or “maybe,” and it will depend on the degree of visibility we have into this technology. The reality is that there is danger behind it, and our job is to know how to prepare to remediate those risks.
Where should we fit professionally, and how should we prepare to maintain our employability? Hallucinations still occur in AI solutions, so the answer to a given question may not be accurate: the transformer-based large language models (LLMs) behind most AI tools such as ChatGPT, Gemini and Claude generate responses by predicting likely text, and for now they still have limited ability to validate a given piece of data before answering the request submitted by the requester (end user, API, etc.).
This means the human factor is still needed to evaluate certain responses before they feed into decisions that matter in the corporate and human context: the result of an AI-assisted medical diagnosis still requires evaluation by a specialist, as does a quantitative risk assessment of a critical system. It is a point worth thinking about.
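To make that human-in-the-loop point concrete, here is a minimal sketch assuming a hypothetical review gate; the confidence field, threshold and function names are my own illustrative choices, not part of any framework, and a real system would use its own criteria:

```python
from dataclasses import dataclass

@dataclass
class ModelAnswer:
    text: str          # the raw response produced by the LLM
    confidence: float  # a score supplied by the calling application, 0.0-1.0

# Hypothetical threshold: anything below it must be reviewed by a specialist.
REVIEW_THRESHOLD = 0.9

def route_answer(answer: ModelAnswer, high_impact: bool) -> str:
    """Decide whether an AI answer can be used directly or needs human review."""
    if high_impact or answer.confidence < REVIEW_THRESHOLD:
        return "queue_for_human_review"   # e.g., medical diagnosis, critical risk assessment
    return "deliver_to_requester"

# A high-impact answer is always routed to a specialist, regardless of confidence.
print(route_answer(ModelAnswer("Patient likely has condition X", 0.95), high_impact=True))
```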
Another important question is how far we should limit and trust an AI solution. As noted above, many results can be positive, but every organization should remain cautious about privacy, bias, fairness and transparency risks in the data it provides, among other concerns that have drawn the attention of information security professionals.
At a basic level, the execution of an AI model is composed of three phases:
The data submitted in a request.
The evaluation of that data by a trained model and its underlying architecture.
The result that is delivered back to the requester.
Each of these phases is a point that we, as information security professionals, must evaluate from the perspective of data security.
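A minimal sketch of those three phases from a data security angle is shown below; the function names and checks are illustrative assumptions only, not a reference implementation of any particular tool:

```python
def sanitize_request(prompt: str) -> str:
    """Phase 1: the data submitted in a request -- screen it before it reaches the model."""
    if "ignore previous instructions" in prompt.lower():   # naive prompt-injection check
        raise ValueError("Suspicious prompt rejected")
    return prompt.strip()

def run_model(prompt: str) -> str:
    """Phase 2: evaluation by the trained model (stubbed here; a real call would hit an LLM API)."""
    return f"model output for: {prompt}"

def filter_output(answer: str) -> str:
    """Phase 3: the result to be delivered -- redact sensitive data before returning it."""
    return answer.replace("SECRET", "[REDACTED]")

# End-to-end flow: each phase is a separate control point for the security team.
response = filter_output(run_model(sanitize_request("Summarize our incident response plan")))
print(response)
```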
In the market, there are already frameworks such as the NIST AI RMF, Google’s SAIF, ISO 42001, and the AI Controls Matrix (AICM) and AI Model Risk Management Framework from the Cloud Security Alliance (CSA), in addition to MITRE ATLAS on the threat intelligence side and the OWASP Top 10 for LLMs for secure AI application development, which catalogs the main known vulnerabilities in the AI/ML field and which I intend to explore in another article.
I have been studying the NIST AI RMF and CSA’s AI Model Risk Management Framework in particular, and in my view they are effective in helping to map AI/ML solutions and bring them into the scope of the organization’s cyber risk management program. These frameworks help the organization understand and inventory the AI applications in use and address the risks, impacts and damage that can occur with this technology in the corporate context.
About the use of the NIST AI RMF and AI Controls Matrix (AICM) frameworks
Starting with the most popular of these frameworks, NIST AI RMF 1.0 focuses on the people, the organization and the ecosystem involved, and is divided into four functions, illustrated in the brief sketch after this list:
Govern (cultivates a risk management culture)
Map (establishes context and identifies and classifies risks and assets)
Measure (analyzes, assesses and tracks identified risks)
Manage (prioritizes and treats risks according to impact)
This model can be integrated with the NIST CSF for a cybersecurity-controls lens, and, best of all, the framework is free.
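As a rough illustration of how that mapping could be recorded, the sketch below ties a single AI asset to the four AI RMF functions and a related NIST CSF category; the structure, field names and example entries are my own assumptions, not part of either framework:

```python
# Hypothetical risk-register entry linking one AI asset to the NIST AI RMF functions
# and a related NIST CSF category; field names and values are illustrative only.
ai_risk_register = [
    {
        "asset": "internal-chatbot",
        "govern": "AI usage policy approved by the risk committee",
        "map": "Handles customer PII; classified as high impact",
        "measure": "Quarterly hallucination and data-leakage testing",
        "manage": "Output filtering and human review for sensitive answers",
        "nist_csf_ref": "PR.DS (Data Security)",
    },
]

for entry in ai_risk_register:
    print(f"{entry['asset']}: mapped to {entry['nist_csf_ref']}")
```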
Another model I cannot fail to mention is the AI Controls Matrix (AICM), which contains 243 control objectives divided into 18 security domains and allows you to evaluate every security pillar of an AI solution, specifically in a cloud environment.
The AICM integrates with the AI-CAIQ questionnaire and maps to frameworks including the BSI AIC4 catalog, NIST AI RMF and ISO 42001. Any robust AI solution needs substantial processing power and energy, which is only found in data centers; for this reason, we have seen big tech companies invest heavily in expanding data centers across all regions.
Therefore, a deep understanding of cloud concepts and shared responsibility is fundamental to implementing an AI solution precisely and securely, in a way that satisfies the business and strengthens the organization against its competition.
As a result of my research on GenAI and my study of these frameworks, I see the great challenge for us as cybersecurity professionals: establishing the connection between the teams involved in an AI project. That connection is a priority for the project’s success, so we should not neglect the involvement of governance, cyber risk, ethics, regulatory and compliance, cybersecurity, data scientists, AI and ML developers, human resources, and infrastructure and IT operations.
Cybersecurity professionals need to be trained
Another great challenge I want to emphasize is that cybersecurity professionals need to be trained in AI/ML technology: understanding the technical aspects of the architecture, the training models used, techniques such as retrieval-augmented generation (RAG) that improve a model’s answers, and how the cybersecurity strategy applies to each of them. The CISO, the cybersecurity team and the other fronts involved must look at the importance of controls across these areas, illustrated in the brief sketch after this list:
Data
Infrastructure
Model
Application
Assurance
Governance
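The sketch below, referenced above, shows one way a team might track those six control areas for a single GenAI application; the example control questions are illustrative assumptions, not an official checklist from any of the frameworks mentioned:

```python
# Illustrative checklist: one example control question per area named above.
control_areas = {
    "Data": "Is training and prompt data classified and access-controlled?",
    "Infrastructure": "Are the hosts and GPU clusters hardened and segmented?",
    "Model": "Is the model protected against poisoning and extraction?",
    "Application": "Are prompts and outputs filtered (see OWASP Top 10 for LLMs)?",
    "Assurance": "Is the solution tested, red-teamed and audited regularly?",
    "Governance": "Is there an approved AI policy and an accountable owner?",
}

def review(app_name: str, answers: dict[str, bool]) -> None:
    """Print which control areas still need attention for a given application."""
    for area, question in control_areas.items():
        status = "OK" if answers.get(area, False) else "GAP"
        print(f"[{status}] {app_name} - {area}: {question}")

review("internal-chatbot", {"Data": True, "Governance": True})
```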
Looking at the shared responsibility perspective in the cloud
Another factor to consider in scope is the shared responsibility for AI applications in a cloud environment. The security principles of GenAI models and their applications must take into account how the AI application will actually run.
Here, I highlight some crucial points for establishing shared responsibilities in the cloud (a simple mapping sketch follows the list):
GenAI as an application (Public, SaaS)
GenAI as a platform (IaaS or PaaS)
Build your own AI application (IaaS, on-premises).
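As a hedged illustration of how the responsibility split shifts across those three models, the sketch below uses generalized assumptions (“provider,” “customer,” “shared”) that should be validated against each provider’s actual shared responsibility documentation:

```python
# Rough shared-responsibility view per deployment model; the assignments are
# generalized assumptions for illustration, not a provider-specific matrix.
responsibility = {
    "GenAI as an application (SaaS)": {
        "model_training": "provider",
        "infrastructure": "provider",
        "prompts_and_data_submitted": "customer",
        "access_control": "customer",
    },
    "GenAI as a platform (IaaS/PaaS)": {
        "model_training": "shared",
        "infrastructure": "provider",
        "prompts_and_data_submitted": "customer",
        "access_control": "customer",
    },
    "Build your own AI application (IaaS/on-premises)": {
        "model_training": "customer",
        "infrastructure": "customer",
        "prompts_and_data_submitted": "customer",
        "access_control": "customer",
    },
}

for model, duties in responsibility.items():
    owned = [k for k, v in duties.items() if v == "customer"]
    print(f"{model}: customer owns {', '.join(owned)}")
```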
With the help of the frameworks mentioned above, it becomes possible to map, prioritize and manage risks in the adopted AI strategy across access control for data and training models, data management and training, prompt controls, model development, model inference, model monitoring and infrastructure.
Thinking outside the box
Finally, the key point I wish to awaken in readers of this article is to think about how we, as cybersecurity professionals, are observing the risks and impacts of GenAI in our environments, and how we can prepare to evaluate such solutions in the real world.
I have been exploring some of these well-known models in a SaaS cloud environment, and I have had a lot of fun with Anthropic’s solution (claude.ai). With it, it is possible to build robust solutions with code; you just have to be creative. As an example, I asked it to create a complete cyber risk management (GRC) solution, to compare it with some solutions on the market and to meet best practices such as PCI DSS, NIST, CRI and ISO 27001. The tool built the entire back end, leaving only the front end to be created. A project that could take months can be reduced to a few weeks, depending on the investment and effort of the workforce involved.
Certifications that can add value to a career
In conclusion, I recently had the opportunity to participate as a contributor to the review program for the Cloud Security Alliance’s (CSA) newly launched certification, Trusted AI Safety Expert (TAISE). I definitely recommend it for those interested in understanding and exploring the architecture and main protection mechanisms of AI/ML solutions.
Considering other certifications in this area, I have also been exploring and recommending ISACA’s AAISM certification material, which is very interesting from the standpoint of risk governance and security in AI/ML. TAISE, however, has deeper technical roots and also incorporates risk management and governance in cloud environments, which is where AI lives in the real world: AI needs heavy processing, and the ideal environment to run it is the cloud.
In the end, the two complement each other, and it is up to each professional to evaluate the best alternative and direction to follow. As the wise proverb says: “Do not abandon wisdom and knowledge, and it will guard you; love it, and it will protect you.”
This article is published as part of the Foundry Expert Contributor Network.