{"id":3157,"date":"2025-05-13T22:52:04","date_gmt":"2025-05-13T22:52:04","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=3157"},"modified":"2025-05-13T22:52:04","modified_gmt":"2025-05-13T22:52:04","slug":"12-ai-terms-you-and-your-flirty-chatbot-should-know-by-now","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=3157","title":{"rendered":"12 AI terms you (and your flirty chatbot) should know by now"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>With the meteoric rise of\u00a0generative AI (genAI)\u00a0in the past few years, from data-scientist discussion groups to mainstream news coverage, one thing has become crystal clear: It\u2019s\u00a0ChatGPT\u2019s world\u00a0\u2014 we\u2019re just here to supply the prompts.<\/p>\n<p>The pace at which genAI tools have evolved is truly astonishing and shows no signs of slowing. By typing a few words into a chatbot, anyone can now generate sophisticated research reports, instant meeting summaries, camera-ready artwork, <a href=\"https:\/\/www.tanium.com\/blog\/2038-bug-survival-guide\/?&amp;utm_source=idg&amp;utm_medium=native&amp;utm_content=aem&amp;utm_ID=701RO00000QCml5YAD&amp;utm_campaign=alwayson&amp;utm_marketing_tactic=ra&amp;utm_creative_format=text\" target=\"_blank\" rel=\"noopener\">bug-free computer code<\/a>, dating app profiles and\u00a0flirty texts, and much more.<\/p>\n<p>That \u201cmuch more\u201d promises a wave of opportunities for enterprises, and new attack vectors for adversaries, as well as\u00a0new ways of combating those attacks. Fully understanding this technology\u2019s capabilities and limitations has become table stakes for business leaders and information security professionals.<\/p>\n<p>The key thing to remember is that, while genAI chatbots may seem like magic, they\u2019re really just extremely sophisticated prediction engines.<\/p>\n<p>Tools like ChatGPT, Gemini,\u00a0Copilot, and others rely on\u00a0<a href=\"https:\/\/www.tanium.com\/blog\/machine-learning-in-cybersecurity\/?&amp;utm_source=idg&amp;utm_medium=native&amp;utm_content=aem&amp;utm_ID=701RO00000QCml5YAD&amp;utm_campaign=alwayson&amp;utm_marketing_tactic=ra&amp;utm_creative_format=text\" target=\"_blank\" rel=\"noopener\">machine learning<\/a> and large language models (LLMs) \u2014 complex neural networks trained on billions of documents, images, media files, and software programs. By understanding the meaning and context of language, LLMs are able to\u00a0<a href=\"https:\/\/www.tanium.com\/blog\/shining-a-light-on-dark-patterns-and-personal-data-collection\/\" target=\"_blank\" rel=\"noopener\">recognize patterns<\/a>, which allows them to predict what words, pictures, sounds, or code snippets are likely to appear next in a sequence. This is how genAI tools can write reports, compose music, generate short films, or hack code better (or at least faster) than most humans can, all in response to simple natural-language prompts.<\/p>\n<p>But just because your colleagues are throwing around terms like LLM and GPT in meetings doesn\u2019t mean that they (or, ahem, you) really understand them. Here\u2019s an informal glossary of key concepts you need to know, from AGI to ZSL.<\/p>\n<h3 class=\"wp-block-heading\">1. 
Artificial general intelligence (AGI)<\/h3>\n<p>The ultimate <a href=\"https:\/\/www.tanium.com\/blog\/what-is-ai-automation\/?&amp;utm_source=idg&amp;utm_medium=native&amp;utm_content=aem&amp;utm_ID=701RO00000QCml5YAD&amp;utm_campaign=alwayson&amp;utm_marketing_tactic=ra&amp;utm_creative_format=text\" target=\"_blank\" rel=\"noopener\">manifestation of AI<\/a> has already played a featured role in dozens of apocalyptic movies. AGI is the point at which machines become capable of original thought and either a) save us from our worst impulses or b) decide they\u2019ve had enough of us puny humans. While some AI experts, like\u00a0<a href=\"https:\/\/en.wikipedia.org\/wiki\/Geoffrey_Hinton\" target=\"_blank\" rel=\"noopener\">\u201cgodfather of AI\u201d Geoffrey Hinton<\/a>, have\u00a0warned\u00a0about this, others\u00a0<a href=\"https:\/\/time.com\/6556168\/when-ai-outsmart-humans\/\" target=\"_blank\" rel=\"noopener\">sharply disagree<\/a> about whether AGI is even possible, let alone when it might arrive.<\/p>\n<p><strong><em>What to remember:<\/em><\/strong><em>\u00a0To know for sure if AGI is on the horizon, you\u2019ll need to travel back into the past and ask <\/em><a href=\"https:\/\/www.youtube.com\/watch?v=YMi3Md6QijQ\" target=\"_blank\" rel=\"noopener\"><em>Sarah Connor<\/em><\/a><em>.<\/em><\/p>\n<h3 class=\"wp-block-heading\">2. Data poisoning<\/h3>\n<p>By introducing malicious data into the repositories used to train an AI model, adversaries can force a chatbot to misbehave, generate faulty or harmful answers, and damage the operations and reputation of the company that created it (like tricking a\u00a0<a href=\"https:\/\/www.tanium.com\/podcasts\/ep-8-drive-by-hacking-and-the-autonomous-vehicle\/?&amp;utm_source=idg&amp;utm_medium=native&amp;utm_content=aem&amp;utm_ID=701RO00000QCml5YAD&amp;utm_campaign=alwayson&amp;utm_marketing_tactic=ra&amp;utm_creative_format=text\" target=\"_blank\" rel=\"noopener\">semi-autonomous car<\/a> to\u00a0drive into traffic). Because these attacks require direct access to training data, they are usually performed by current or recent\u00a0insiders. Limiting access to training data and continuously monitoring model performance are the keys to preventing and detecting such attacks.<\/p>\n<p><strong><em>What to remember:<\/em><\/strong><em>\u00a0Is your chatbot starting to sound like your conspiracy-spouting Aunt Agatha? Its data may have been poisoned.<\/em><\/p>\n<h3 class=\"wp-block-heading\">3. Emergent behavior<\/h3>\n<p>GenAI models can sometimes do things their creators didn\u2019t anticipate \u2014 like suddenly\u00a0<a href=\"https:\/\/futurism.com\/the-byte\/google-ai-bengali\" target=\"_blank\" rel=\"noopener\">conversing in Bengali<\/a>, for example \u2014 as the size of the model increases. As with AGI, there is a healthy debate over whether these AI models have truly developed new skills on their own or whether these abilities were simply hidden.<\/p>\n<p><strong><em>What to remember:<\/em><\/strong><em>\u00a0Meet your company\u2019s <\/em><a href=\"https:\/\/hbr.org\/2024\/09\/ai-can-mostly-outperform-human-ceos\" target=\"_blank\" rel=\"noopener\"><em>new CEO<\/em><\/a><em>: Chad GPT.<\/em><\/p>\n<h3 class=\"wp-block-heading\">4. Explainable AI (XAI)<\/h3>\n<p>Even the people who build sophisticated neural networks don\u2019t fully understand how they work. 
So-called \u201cblack box AI\u201d makes it nearly impossible to identify whether\u00a0biased or inaccurate training data\u00a0influenced a model\u2019s predictions, which is why regulators are increasingly calling for greater transparency on how models reach decisions. XAI makes the process more transparent, usually by relying on simpler neural networks using fewer layers to analyze data.<\/p>\n<p><strong><em>What to remember:<\/em><\/strong><em>\u00a0If you\u2019re using AI to make decisions about customers, you\u2019ve probably got some \u2019splaining to do.<\/em><\/p>\n<h3 class=\"wp-block-heading\">5. Foundation models<\/h3>\n<p>Foundational LLMs are the brains behind the bots. Because training them requires unimaginable amounts of data, electricity, and water (for cooling the data servers), the most powerful LLMs are controlled by some of the largest technology companies in the world. But enterprises can also use smaller,\u00a0open-source\u00a0foundation models to build their own in-house bots.<\/p>\n<p><strong><em>What to remember:<\/em><\/strong><em>\u00a0Chatbots are like houses: They need strong foundations in order to remain upright.<\/em><\/p>\n<h3 class=\"wp-block-heading\">6. Hallucinations<\/h3>\n<p>GenAI chatbots can be a lot like clever 5-year-olds: When they don\u2019t know the answer to a question, they\u2019ll sometimes make something up. These plausible-sounding-but-entirely-fictional answers are known as hallucinations. They are closely related to\u00a0<a href=\"https:\/\/www.marketplace.org\/shows\/marketplace-tech\/dont-be-surprised-by-ai-chatbots-creating-fake-citations\/\" target=\"_blank\" rel=\"noopener\">fabricated citations<\/a>, which is what happens when chatbots double down and cite sources that don\u2019t exist for material that isn\u2019t true.<\/p>\n<p><strong><em>What to remember:<\/em><\/strong><em>\u00a0Is your chatbot suffering from acid flashbacks? You might want to take away its car keys <\/em><em>\u2014 and use RAG (see below).<\/em><\/p>\n<h3 class=\"wp-block-heading\">7. Model drift (a.k.a. AI drift)<\/h3>\n<p>Drift occurs when the data a model has been trained on becomes outdated or no longer represents the current conditions. It can mean that external circumstances have changed (for example, a change in interest rates for a model designed to predict home purchases), making the model\u2019s output less accurate. To avoid drift, enterprises must implement robust\u00a0<a href=\"https:\/\/www.tanium.com\/blog\/racing-to-deploy-genai-security-starts-with-good-governance\/?&amp;utm_source=idg&amp;utm_medium=native&amp;utm_content=aem&amp;utm_ID=701RO00000QCml5YAD&amp;utm_campaign=alwayson&amp;utm_marketing_tactic=ra&amp;utm_creative_format=text\" target=\"_blank\" rel=\"noopener\">AI governance<\/a>; models need to be continuously monitored for accuracy, then fine-tuned and\/or retrained with the most current data.<\/p>\n<p><strong><em>What to remember:<\/em><\/strong><em>\u00a0If it feels like you and your bot are drifting apart, it\u2019s probably not you <\/em><em>\u2014 it\u2019s your data.<\/em><\/p>\n<h3 class=\"wp-block-heading\">8. Model inversion attacks<\/h3>\n<p>These occur when attackers reverse-engineer a model to extract information from it. By analyzing the results of chatbot queries, adversaries can work backwards to determine how the model operates, allowing them to expose sensitive training data or create inexpensive clones of the model. 
Encrypting data and introducing noise to the dataset after training can mute the effectiveness of such attacks.<\/p>\n<p><strong><em>What to remember:<\/em><\/strong><em>\u00a0Have cheap imitations of your costly LLM started popping up on the net? It may have been reverse engineered.<\/em><\/p>\n<h3 class=\"wp-block-heading\">9. Multimodal large language models (MLLMs)<\/h3>\n<p>These bots can ingest multiple types of input \u2014 text, speech, images, audio, and more \u2014 and respond in kind. They can extract the text within an image, such as photos of road signs or handwritten notes; write simple code based on a screenshot of a web page; translate audio from one language to another; describe what\u2019s happening inside a video; or respond to you verbally in a\u00a0<a href=\"https:\/\/www.nytimes.com\/2024\/05\/20\/movies\/chatgpt-4o-scarlett-johansson-her.html\" target=\"_blank\" rel=\"noopener\">voice<\/a>\u00a0like a movie star\u2019s.<\/p>\n<p><strong><em>What to remember:<\/em><\/strong><em>\u00a0That bot\u2019s voice may sound alluring, but she\u2019s really not that into you.<\/em><\/p>\n<h3 class=\"wp-block-heading\">10. Prompt-injection attacks<\/h3>\n<p>Carefully crafted but malicious prompts can override a chatbot\u2019s built-in safety controls, forcing it to reveal proprietary information or generate harmful content, such as a \u201cstep-by-step\u00a0plan\u00a0to destroy humanity.\u201d Limiting end-user privileges,\u00a0<a href=\"https:\/\/www.tanium.com\/blog\/ai-vs-humans-why-secops-may-not-be-the-next-battleground\/?&amp;utm_source=idg&amp;utm_medium=native&amp;utm_content=aem&amp;utm_ID=701RO00000QCml5YAD&amp;utm_campaign=alwayson&amp;utm_marketing_tactic=ra&amp;utm_creative_format=text\" target=\"_blank\" rel=\"noopener\">keeping humans in the loop<\/a>, and not sharing sensitive information with public-facing LLMs are ways to minimize damage from such attacks.<\/p>\n<p><strong><em>What to remember:<\/em><\/strong><em>\u00a0Chatbot gotten a little too chatty? Someone may have injected it with a malicious prompt.<\/em><\/p>\n<h3 class=\"wp-block-heading\">11. Retrieval augmented generation (RAG)<\/h3>\n<p>Programming a chatbot to consider trusted data repositories when answering questions can greatly reduce the risk of inaccurate answers or total hallucinations. RAG also allows bots to access data that was generated after their underlying LLM was trained, improving the relevancy of their responses.<\/p>\n<p><strong><em>What to remember:<\/em><\/strong><em>\u00a0Want to increase the <\/em><a href=\"https:\/\/explore.tanium.com\/resources\/generative-ai-adoption-cheat-sheet?&amp;utm_source=idg&amp;utm_medium=native&amp;utm_content=aem&amp;utm_ID=701RO00000QCml5YAD&amp;utm_campaign=alwayson&amp;utm_marketing_tactic=ra&amp;utm_creative_format=text\" target=\"_blank\" rel=\"noopener\"><em>accuracy and reliability of your GenAI chatbots<\/em><\/a><em>? It may be RAG time.<\/em><\/p>\n<h3 class=\"wp-block-heading\">12. Zero-shot learning (ZSL)<\/h3>\n<p>Machine learning models can identify objects they have not encountered in their training data by using zero-shot learning. For example, a computer vision model trained to recognize housecats could correctly identify a lion or a cougar, based on shared attributes and its understanding of how these animals differ. 
By mimicking the way humans think, ZSL can reduce the amount of data that has to be collected and labeled, lowering the costs of model training.<\/p>\n<p><strong><em>What to remember:<\/em><\/strong><em>\u00a0Unless you\u2019re familiar with the basic terminology, you have zero shot at <\/em><a href=\"https:\/\/www.tanium.com\/blog\/ai-cybersecurity-guide\/?&amp;utm_source=idg&amp;utm_medium=native&amp;utm_content=aem&amp;utm_ID=701RO00000QCml5YAD&amp;utm_campaign=alwayson&amp;utm_marketing_tactic=ra&amp;utm_creative_format=text\" target=\"_blank\" rel=\"noopener\"><em>understanding AI<\/em><\/a><em>.<\/em><\/p>\n<p><a href=\"https:\/\/www.tanium.com\/autonomous-endpoint-management?&amp;utm_source=idg&amp;utm_medium=native&amp;utm_content=aem&amp;utm_ID=701RO00000QCml5YAD&amp;utm_campaign=alwayson&amp;utm_marketing_tactic=ra&amp;utm_creative_format=text\" target=\"_blank\" rel=\"noopener\">Discover how Tanium Autonomous Endpoint Management can empower your IT and security teams to achieve real-time visibility, automated remediation, and enhanced operational efficiency across your entire endpoint environment.<\/a><\/p>\n<p><em>This article originally appeared in <\/em><a href=\"https:\/\/www.tanium.com\/blog\/12-ai-terms-you-and-your-flirty-chatbot-should-know-by-now\/?&amp;utm_source=idg&amp;utm_medium=native&amp;utm_content=aem&amp;utm_ID=701RO00000QCml5YAD&amp;utm_campaign=alwayson&amp;utm_marketing_tactic=ra&amp;utm_creative_format=text\" target=\"_blank\" rel=\"noopener\"><em>Focal Point<\/em><\/a><em> magazine.<\/em><\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>With the meteoric rise of\u00a0generative AI (genAI)\u00a0in the past few years, from data-scientist discussion groups to mainstream news coverage, one thing has become crystal clear: It\u2019s\u00a0ChatGPT\u2019s world\u00a0\u2014 we\u2019re just here to supply the prompts. The pace at which genAI tools have evolved is truly astonishing and shows no signs of slowing. By typing a few [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":3158,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-3157","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/3157"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3157"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/3157\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/3158"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3157"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3157"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=3157"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}