{"id":4644,"date":"2025-09-03T07:30:00","date_gmt":"2025-09-03T07:30:00","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=4644"},"modified":"2025-09-03T07:30:00","modified_gmt":"2025-09-03T07:30:00","slug":"how-the-generative-ai-boom-opens-up-new-privacy-and-cybersecurity-risks","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=4644","title":{"rendered":"How the generative AI boom opens up new privacy and cybersecurity risks"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>It became one of the viral tech news stories at the start of July: WeTransfer, the popular file-sharing service used heavily by companies and end users alike, changed its terms of use.<\/p>\n<p>Terms-of-use updates are usually accepted without much scrutiny, but on this occasion the company had added a clause related to artificial intelligence. As of early August, WeTransfer <a href=\"https:\/\/www.artnews.com\/art-news\/news\/wetransfer-changes-terms-licensing-rights-ai-1234747368\/\" target=\"_blank\" rel=\"noopener\">reserved<\/a> the right to use the documents it handled to \u201coperate, develop, market and improve the service or new technologies or services, including improving the performance of machine learning models.\u201d The implication, as users understood it, was that any information they uploaded could be used to train AI.<\/p>\n<p>The backlash was huge, and WeTransfer ended up backtracking, explaining to the media that the clause was meant to cover the possibility of using AI for content moderation, not what its users had understood.<\/p>\n<p>Still, the WeTransfer episode became a highly visible sign of a potential new risk in cybersecurity, privacy and even the protection of sensitive information. 
Powering AI requires vast amounts of data, and the privacy policies of very popular online services are changing to adapt to this new environment.<\/p>\n<p>Add to that the fact that artificial intelligence is being adopted in real time, with companies testing and experimenting as they go. This opens up potential problems, such as employees using services they know from their personal lives, like ChatGPT, for work tasks where they should not. Corporate privacy policies matter little if an employee then uploads confidential information to ChatGPT to get a translation done or a letter written.<\/p>\n<p>This new context thus raises new questions, both for end users at the personal level and, at the corporate level, for the CIOs and CISOs responsible for IT strategy and security.<\/p>\n<h2 class=\"wp-block-heading\">The owners of the data<\/h2>\n<p>One such question is that of ownership: who the data belongs to. This is leading services to update their terms of use so that the data their users have generated can be used to train AI. It has happened with social networks such as Meta\u2019s, but it is also happening with services widely used in corporate environments: Panda reminds us that Slack uses customer data by default for its machine learning models.<\/p>\n<p>This state of affairs is not exactly new. Publicly available data is no longer enough for organizations to develop their AIs, and they need new sources. \u201cThe datasets collected in their applications are worth a lot,\u201d Herv\u00e9 Lambert, global consumer operations manager at Panda Security, explains in an <a href=\"https:\/\/www.pandasecurity.com\/es\/mediacenter\/tus-datos-personales-alimento-para-la-ia-y-para-fines-comerciales\/\" target=\"_blank\" rel=\"noopener\">analysis<\/a>. 
\u201cWhich explains why most of these companies are rushing to modify their privacy policies so they can make use of that data, and to adapt to new data protection regulations that force them to be more transparent about the information they collect and store,\u201d he adds.<\/p>\n<p>Of course, this is first and foremost a problem for the IT and cybersecurity managers of the companies concerned, who have to change their rules of use. But it can also become a headache for the companies that rely on those services, or that know their employees will use them regardless.<\/p>\n<p>\u201cThey want to open the door to new ways of exploiting data in areas such as AI, advanced marketing and product development,\u201d Lambert points out, \u201cbut at the same time they need to be in good standing with legislation.\u201d As a result, terms of use and privacy policies contain broad wording, and the line separating one use from another becomes \u201cvery thin.\u201d<\/p>\n<h2 class=\"wp-block-heading\">Privacy and cybersecurity risks<\/h2>\n<p>Another major problem lies in potential privacy and cybersecurity breaches, both for end users and for the companies themselves.<\/p>\n<p>Panda warns that AIs fed with large amounts of personal data can, if that data falls into the wrong hands, become a gateway to fraud or to far more sophisticated and effective attacks. \u201cWhen we dump personal data into AI tools without proper controls, we expose ourselves to the risk that the information will be copied, shared or used without our consent,\u201d notes its head of security operations.<\/p>\n<p>Sometimes the data doesn\u2019t even have to fall into the wrong hands: end users\u2019 lack of expertise can itself leave sensitive information exposed on the open web. Take the case of ChatGPT conversations indexed by Google. 
\u201cWhen the \u2018make this chat discoverable\u2019 option is activated, users of certain AI solutions such as ChatGPT agree to make their conversations public, accessible from Google and other search engines and visible in search results. This generates controversy because some of these chats may contain sensitive data, business ideas, commercial strategies or personal experiences,\u201d explains Lambert.<\/p>\n<p>In fact, AI is already one of the most worrying issues for CISOs, who are beginning to show signs of burnout in an increasingly complex work environment. While 64% of security leaders see enabling the use of generative AI tools as a strategic goal within two years, they are also concerned about the risks those tools pose. This is confirmed by data from Proofpoint\u2019s fifth annual Voice of the CISO report.<\/p>\n<p>\u201cAI has gone from a concept to a fundamental element, transforming the way defenders and adversaries alike operate,\u201d explains Ryan Kalember, chief strategy officer at Proofpoint. \u201cCISOs now face a dual responsibility: to leverage AI to strengthen their security posture while ensuring its ethical and responsible use,\u201d he adds. To do so, they will have to make \u201cstrategic decisions,\u201d with the added complexity that CISOs are not the only decision-makers when it comes to implementing this technology.<\/p>\n<p>The secure use of generative AI is already a priority for 48% of CISOs.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>It was one of the viral tech news stories at the start of July when WeTransfer, the popular file sharing service used massively by companies and end users alike, had changed its terms of use. 
It\u2019s the kind of thing that is usually accepted without going too deeply into it, but on this occasion they [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":4645,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-4644","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/4644"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=4644"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/4644\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/4645"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=4644"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=4644"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=4644"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}