{"id":2660,"date":"2025-04-08T06:00:00","date_gmt":"2025-04-08T06:00:00","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=2660"},"modified":"2025-04-08T06:00:00","modified_gmt":"2025-04-08T06:00:00","slug":"10-things-you-should-include-in-your-ai-policy","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=2660","title":{"rendered":"10 things you should include in your AI policy"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>The popularity of generative AI has created a tricky terrain for organizations to navigate. On the one hand, there is this transformative technology with the potential to reduce costs and increase revenues, on the other hand, misuse of AI can upend entire industries, lead to public relations disasters, customer and employee dissatisfaction, and security breaches. Not to mention lots of money wasted on failed AI projects.<\/p>\n<p>Researchers disagree about how much return enterprises are seeing on their AI investments, but surveys show increased adoption of generative AI in more business use cases, and a steady growth of projects moving from pilot to production. A Zscaler <a href=\"https:\/\/www.zscaler.com\/campaign\/threatlabz-ai-security-report\">AI security report<\/a>, released in late March, saw a 3,464% increase in enterprise AI activity.<\/p>\n<p>But with the growing awareness of the potential of generative AI, there\u2019s also a growing awareness of its risks. For example, according to Zscaler, enterprises currently block 60% of all AI transactions, with ChatGPT being the individual application blocked most often. One reason? 
There were around 3 million attempts by users to upload sensitive data to ChatGPT alone, Zscaler reports.<\/p>\n<p>A carefully thought-out AI use policy can help a company set criteria for risk and safety, protect customers, employees, and the general public, and help the company zero in on the most promising AI use cases. \u201cNot embracing AI in a responsible manner is actually reducing your advantage of being competitive in the marketplace,\u201d says Bhrugu Pange, a managing director who leads the technology services group at AArete, a management consulting firm.<\/p>\n<p>According to a <a href=\"https:\/\/www.littler.com\/publication-press\/publication\/littler-ai-csuite-survey-report-2024\">survey<\/a> of over 300 C-suite executives by employment and labor law practice Littler, 42% of companies had an AI policy in place as of September 2024 \u2014 up from just 10% a year earlier. Another 25% of organizations are currently working on AI policies, and 19% are considering one.<\/p>\n<p>If you\u2019re still working on your AI policy \u2014 or are updating your existing one \u2014 here are ten key areas your policy should cover.<\/p>\n<h2 class=\"wp-block-heading\">Clear definition of AI<\/h2>\n<p>AI means different things to different people. Search engines have AI in them. So do grammar checkers and Photoshop. Nearly every enterprise vendor is busy adding AI functionality to their platforms. Even things that have barely any intelligence at all are being rebranded as AI to get attention and funding.<\/p>\n<p>It helps to have a common definition of AI when discussing risks, benefits, and investments.<\/p>\n<p>Aflac began creating its official AI policy in early 2024, building on its existing policy frameworks, says Tera Ladner, Aflac\u2019s deputy global CISO.
Aflac isn\u2019t the only company that realized that AI can be a very vague term.<\/p>\n<p>Principal Financial Group CIO Kathy Kay says her team also had to come up with a clear definition of AI, because they realized very quickly that people were talking about AI differently. The firm developed a definition of what AI means in the context of the company, so that when people talk about AI, they\u2019re all aligned.<\/p>\n<p>And, since AI can mean different things to different people, it helps to have a variety of viewpoints involved in the discussion.<\/p>\n<h2 class=\"wp-block-heading\">Input from all stakeholders<\/h2>\n<p>At Aflac, the security team took the initial lead on developing the company\u2019s AI policy. But AI is not just a security concern. \u201cAnd it\u2019s not just a legal concern,\u201d Ladner says. \u201cIt\u2019s not just a privacy concern. It\u2019s not just a compliance concern. You need to bring all the stakeholders together. I also highly recommend that your policy be sanctioned or approved by some sort of governing committee or body, so it has the teeth you need.\u201d<\/p>\n<p>An AI policy must serve the entire company, including individual business units.<\/p>\n<p>At Principal Financial, Kay says that she and the company\u2019s chief compliance officer were the executive sponsors of their AI policy. \u201cBut we had business unit representations, legal, compliance, technologists, and we even had HR engaged,\u201d she adds. \u201cEverybody learns together and you can align the outcomes you want to achieve.\u201d<\/p>\n<p>Intuit also put together a multidisciplinary team to create its AI policy. That helped the company create enterprise-wide governance policies that cover legal requirements, industry standards, and best practices, according to Liza Levitt, Intuit\u2019s VP and deputy general counsel.
\u201cThe team includes people with expertise in data privacy, AI, data science, engineering, product management, legal, compliance, security, ethics, and public policy.\u201d<\/p>\n<h2 class=\"wp-block-heading\">Start at the organization\u2019s core principles<\/h2>\n<p>An AI policy needs to start with the organization\u2019s core values around ethics, innovation, and risk. \u201cDon\u2019t just write a policy to write a policy to meet a compliance checkmark,\u201d says Avani Desai, CEO at Schellman, a cybersecurity firm that works with companies on assessing their AI policies and infrastructure. \u201cBuild a governance framework that\u2019s resilient, ethical, trustworthy, and safe for everyone \u2014 not just so you have something that nobody looks at.\u201d<\/p>\n<p>Starting with core values will help shape the rest of the AI policy. \u201cYou want to establish clear guidelines,\u201d Desai says. \u201cYou want everyone from top down to agree that AI has to be used responsibly and has to align with business ethics.\u201d<\/p>\n<p>Having these principles in place will also help companies stay ahead of regulations.<\/p>\n<h2 class=\"wp-block-heading\">Align with regulatory requirements<\/h2>\n<p>According to <a href=\"https:\/\/www.gartner.com\/en\/newsroom\/press-releases\/2025-02-17-gartner-predicts-forty-percent-of-ai-data-breaches-will-arise-from-cross-border-genai-misuse-by-2027\">Gartner<\/a>, AI governance will become a requirement of all sovereign AI laws and regulations worldwide by 2027.<\/p>\n<p>The biggest AI regulation that\u2019s already in place is the <a href=\"https:\/\/www.csoonline.com\/article\/1258597\/how-the-eu-ai-act-regulates-artificial-intelligence-and-what-it-means-for-cybersecurity.html\">EU\u2019s AI Act<\/a>. \u201cThe EU AI Act is the only framework that I\u2019ve seen that covers everything,\u201d says Schellman\u2019s Desai.
And it applies to any company delivering products in the EU or to EU citizens.<\/p>\n<p>The act sets certain minimum standards that all sizable companies need to follow, she says. \u201cIt\u2019s very similar to what happened with GDPR. US companies were forced to comply because they couldn\u2019t bifurcate the data of who is in the EU and who\u2019s not. You don\u2019t want to build a new system just for EU data.\u201d<\/p>\n<p>The GDPR isn\u2019t the only regulation with reach beyond its home jurisdiction; there are plenty of regulations around the world that touch on data privacy issues, which are relevant to AI deployment as well. And, of course, there are industry-specific data privacy rules, such as those for health care and financial services.<\/p>\n<p>Some regulators and standards-setting bodies have already begun looking at how to update their policies for generative AI. \u201cWe depended heavily on the NAIC guidance that was released that was specific to insurance companies,\u201d says Aflac\u2019s Ladner. \u201cWe wanted to be sure that we were capturing the guidelines and guardrails that NAIC was prescribing and making sure they were in place.\u201d<\/p>\n<h2 class=\"wp-block-heading\">Establish clear responsible use guidelines<\/h2>\n<p>Can employees use public AI chatbots or only secure, company-approved tools? Can business units create and deploy their own AI agents? Can HR switch on and use the new AI-powered features in their HR software? Can sales and marketing use AI-generated images? Should humans review all AI output, or are reviews only necessary for high-risk use cases?<\/p>\n<p>These are the kinds of questions that go into the responsible use section of a company\u2019s AI policy and depend on an organization\u2019s specific needs.<\/p>\n<p>For example, at Principal Financial, code generated by AI needs review, says Kay. \u201cWe\u2019re not just unleashing code into the wild.
We will have a human in the middle.\u201d Similarly, if the firm builds an AI tool to serve customer-facing employees, there will be a human checking the output, she says.<\/p>\n<p>Taking a risk-based approach to AI is a good strategy, says Rohan Sen, data risk and privacy principal at PwC. \u201cYou don\u2019t want to overly restrict the low-risk stuff,\u201d he says. \u201cIf you\u2019re using Copilot to transcribe an interview, that\u2019s relatively low risk. But if you\u2019re using AI to make a loan decision or decide what an insurance rate should be, that has more consequences and you need to provide more human review.\u201d<\/p>\n<h2 class=\"wp-block-heading\">Don\u2019t forget the impact of third parties<\/h2>\n<p>If something goes wrong due to an AI-related issue, the public isn\u2019t going to care that the fault lies with a vendor or contractor rather than with you. Whether the problem is a data breach or a violation of a fair lending law, the buck stops with you.<\/p>\n<p>That means an AI policy can\u2019t just cover a company\u2019s own internal systems and employees; it must also cover vendor selection and oversight.<\/p>\n<p>Some vendors will offer indemnification and contractual protection. Avoiding vendor lock-in will also help reduce third-party risk: if a locked-in provider violates its customers\u2019 AI policy, it can be difficult to switch.<\/p>\n<p>When it comes to AI model providers, being model agnostic from the start will help manage that risk. This means that, instead of hard-coding one AI model or another into an enterprise workflow, the choice of model is left flexible, so it can be changed later.<\/p>\n<p>It does take more work up front, but there are other business benefits in addition to reducing risk. \u201cThe technology is changing,\u201d says PwC\u2019s Sen.
\u201cYou don\u2019t know if one model will be better than another two years from now.\u201d<\/p>\n<h2 class=\"wp-block-heading\">Establish clear governance structure<\/h2>\n<p>An AI policy that sets clear expectations is half the battle, but the policy is not going to be particularly effective if it doesn\u2019t also lay out how it will be enforced.<\/p>\n<p>Only 45% of organizations are at the level of AI governance maturity where their AI policy is aligned with their operating model, says Lauren Kornutick, an analyst at Gartner, citing a 2024 Gartner survey. \u201cThe rest may have a policy in place of what\u2019s acceptable use, or have a responsible AI policy in place, but haven\u2019t effectively operationalized it throughout the organization,\u201d she says.<\/p>\n<p>Who gets to decide if a particular use case meets a company\u2019s guidelines, and who gets to enforce this decision?<\/p>\n<p>\u201cPolicy is great but it\u2019s not enough,\u201d she says. \u201cI hear that pretty consistently from our CISOs and our privacy officers.\u201d Getting this straightened out is valuable, she says. Companies that are effective at this are 12% more advanced in their technology deployment.<\/p>\n<p>And the first step, says Sanjeev Vohra, chief technology and innovation officer at Genpact, is to figure out what AI the company already has in place. \u201cMany companies don\u2019t have a full inventory of their usage of AI. That\u2019s what we recommend as the first thing, and you\u2019ll be surprised by how much time it takes.\u201d<\/p>\n<h2 class=\"wp-block-heading\">Use technology to ensure compliance<\/h2>\n<p>One way to check if an AI policy is being followed is to use automated systems. \u201cWe\u2019re seeing technology emerge to support policy enforcement,\u201d says Gartner\u2019s Kornutick.<\/p>\n<p>For example, an AI-powered workflow can include a manual review step, where a human steps in and checks the work.
Or data loss prevention tools can be used to ensure that employees don\u2019t upload sensitive data to public chatbots.<\/p>\n<p>\u201cEvery client that I work with has monitoring capabilities to see where there\u2019s exfiltration of data, see what\u2019s downloaded into their environment, and has ways to block access to sites that haven\u2019t been approved or that represent risks to enterprises,\u201d says Dan Priest, chief AI officer at PwC. \u201cThe risk is real but there are good ways to manage those risks.\u201d<\/p>\n<h2 class=\"wp-block-heading\">Plan for all possibilities, including the worst<\/h2>\n<p>Things happen. No matter how good and comprehensive an AI policy is, there will be violations, and there will be problems. A company chatbot will say something embarrassing or make a promise the company can\u2019t keep because the right guardrails weren\u2019t activated.<\/p>\n<p>\u201cYou hear some interesting and fun examples of where AI has gone wrong,\u201d says Priest. \u201cBut it\u2019s a very minor part of the conversation, because there are reasonable ways to manage those risks. 
And if there\u2019s any volume of those risks manifesting, you activate countermeasures at the architectural layer, at the policy layer, and at the training layer.\u201d<\/p>\n<p>And just as a company needs to have technical measures in place for when AI goes off track, an AI policy also needs to include incident response in case the problem is bigger, and management response for cases in which employees, customers, or business partners deliberately or accidentally violate the policy.<\/p>\n<p>For example, employees in a particular department might routinely forget to review documents before they are sent to customers, or a business unit might set up a shadow AI system that ignores data privacy or security requirements.<\/p>\n<p>\u201cWho do you call?\u201d asks Schellman\u2019s Desai.<\/p>\n<p>There needs to be a process, and training, to ensure that people are in place to deal with violations and have the power they need to set things right. And if there\u2019s a problem with an entire AI process, there needs to be a way for the system to be shut off without doing damage to the company.<\/p>\n<h2 class=\"wp-block-heading\">Plan for change<\/h2>\n<p>AI technology moves quickly. That means that much of what goes into a company\u2019s AI policy needs to be reviewed and updated on a regular basis.<\/p>\n<p>\u201cIf you design a policy that doesn\u2019t have an ending date, you\u2019re hurting yourself,\u201d says Rayid Ghani, a professor at Carnegie Mellon University. That might mean that certain provisions are reviewed every year \u2014 or every quarter \u2014 to make sure they\u2019re still relevant.<\/p>\n<p>\u201cWhen you design the policy, you have to flag the things that are likely to change and require updates,\u201d he says.
The changes could be a result of technological progress, or new business needs, or new regulations.<\/p>\n<p>At the end of the day, an AI policy should spur innovation and development, not hinder it, says Sinclair Schuller, principal at EY. \u201cWhoever is at the top \u2014 the CEO or the CSO \u2014 should say, \u2018we\u2019re going to institute an AI policy to enable you to adopt AI, not to prevent you from adopting AI\u2019,\u201d he says.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>The popularity of generative AI has created a tricky terrain for organizations to navigate. On the one hand, there is this transformative technology with the potential to reduce costs and increase revenues, on the other hand, misuse of AI can upend entire industries, lead to public relations disasters, customer and employee dissatisfaction, and security breaches. [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":2661,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-2660","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/2660"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2660"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/2660\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/2661"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfo
cus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2660"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2660"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2660"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}