{"id":2734,"date":"2025-04-11T12:33:49","date_gmt":"2025-04-11T12:33:49","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=2734"},"modified":"2025-04-11T12:33:49","modified_gmt":"2025-04-11T12:33:49","slug":"openai-slammed-for-putting-speed-over-safety","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=2734","title":{"rendered":"OpenAI slammed for putting speed over safety"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>OpenAI, the AI research powerhouse behind popular projects such as the GPT series, Codex, DALL-E, and Whisper, may be rushing through its AI deployments without adequate protections.<\/p>\n<p>According to a Financial Times <a href=\"https:\/\/www.ft.com\/content\/8253b66e-ade7-4d1f-993b-2d0779c7e7d8\">report<\/a>, the ChatGPT maker is now giving staff and third-party groups only a few days to assess the risks and performance of its latest large language models (LLMs), compared with the several months they were given earlier.<\/p>\n<p>This may have to do with the push for faster model releases and a shift in focus toward inference (generating outputs from trained models) rather than just training models.<\/p>\n<p>\u201cAI is becoming a very competitive field with all tech companies launching their models at breathtaking speed,\u201d said Pareekh Jain, CEO and lead analyst at Pareekh Consulting. 
\u201cOpenAI\u2019s edge has been that it was an early player in this race and they must be wanting to maintain that edge and accelerate production by slashing testing time.\u201d<\/p>\n<h2 class=\"wp-block-heading\"><strong>Testers say they had more time before<\/strong><\/h2>\n<p>OpenAI has scaled back its safety testing efforts, dedicating fewer resources and less time to risk assessments, according to eight people familiar with OpenAI\u2019s testing processes whom the FT cited in its report.<\/p>\n<p>\u201cWe had more thorough safety testing when it was less important,\u201d the FT report quoted one of its sources, currently testing OpenAI\u2019s upcoming o3 model, as saying of the LLM technology.<\/p>\n<p>OpenAI\u2019s approach to safety testing for its GPT models has varied over time. For GPT-4, the company dedicated<a href=\"https:\/\/openai.com\/index\/our-approach-to-ai-safety\/\"> over six months <\/a>to safety evaluations before its public release. For the GPT-4 Omni model, however, OpenAI condensed the testing phase into <a href=\"https:\/\/www.ctol.digital\/news\/openai-faces-scrutiny-over-gpt-4-omni-safety-testing\/\">just one week<\/a> to meet a May 2024 launch deadline.<\/p>\n<h2 class=\"wp-block-heading\"><strong>Reduced testing could compromise model integrity<\/strong><\/h2>\n<p>Reducing safety testing time could severely compromise the quality of the released model, experts warn.<\/p>\n<p>\u201cIf there are cases of any hallucination or damage due to model outputs, then OpenAI will <a href=\"https:\/\/www.csoonline.com\/article\/1259919\/ai-enters-production-systems-even-as-trust-emerges-as-a-growing-concern.html\">lose people\u2019s trust<\/a> and face derailed adoption,\u201d Jain added. \u201cIt can be blamed on slashing testing time. Already, OpenAI has an image problem by converting it from a non-profit to a profit enterprise. 
Any bad incident can further tarnish its image that, for profit, they are sacrificing responsible testing.\u201d<\/p>\n<p>One of the sources called the reduction in testing time \u201creckless\u201d and a \u201crecipe for disaster.\u201d Another, involved in GPT-4 testing, said some dangerous capabilities were only discovered two months into testing.<\/p>\n<p>While OpenAI did not immediately respond to requests for comment, the LLM giant has dealt with such allegations before.<\/p>\n<p>Responding to a similar backlash in September 2024, OpenAI <a href=\"https:\/\/openai.com\/index\/update-on-safety-and-security-practices\/\">turned<\/a> its Safety and Security Committee into an independent \u201cBoard oversight committee\u201d with the power to delay model launches over safety concerns.<\/p>\n<h2 class=\"wp-block-heading\"><strong>Improved AI could be pushing faster tests<\/strong><\/h2>\n<p>While the obvious conclusion is that compressed testing endangers model integrity, there is another way of looking at it. Jain hinted at the possibility that OpenAI is actually capable of speeding up tests without compromising safety.<\/p>\n<p>\u201cOpenAI must be using a lot of AI in their internal processes also,\u201d he said. \u201cThey must be drinking their own champagne to convince the world that, with AI, they could do fast testing. We should give them the benefit of the doubt if they are trying to accelerate their model launch with more AI use.\u201d Backing this thought is a <a href=\"https:\/\/openai.com\/index\/early-access-for-safety-testing\/\">claim<\/a> OpenAI made in December 2024, when it said its models were quickly becoming more capable with AI.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>OpenAI, the AI research powerhouse behind popular projects such as the GPT series, Codex, DALL-E, and Whisper, may be rushing through its AI deployments without adequate protections. 
According to a Financial Times report, the ChatGPT maker is now assigning staff and third-party groups only a few days to assess the risks and performance of its latest [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":2735,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-2734","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/2734"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2734"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/2734\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/2735"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2734"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2734"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2734"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}