News

Ghostwriting Scam

The variations seem to be endless. Here’s a fake ghostwriting scam that seems to be making boatloads of money.

This is a big story about scams being run from Texas and Pakistan estimated to run into tens if not hundreds of millions of dollars, viciously defrauding Americans with false hopes of publishing bestseller books (a scam you’d not think many people would fall for but is surprisingly huge). In January, three people were charged with defrauding elderly authors across the United States of almost $44 million ­by “convincing the victims that publishers and filmmakers wanted to turn their books into blockbusters.”

—————
Free Secure Email – Transcom Sigma
Boost Inflight Internet
Transcom Hosting
Transcom Premium Domains

Navigating cybersecurity challenges in the early days of Agentic AI 

As we continue to evolve the field of AI, a new branch that has been accelerating recently is Agentic AI. Multiple definitions are circulating, but essentially, Agentic AI involves one or more AI systems working together to accomplish a task using tools in an unsupervised fashion. A basic example of this is tasking an AI Agent with finding entertainment events I could attend during summer and emailing the options to my family. 

Agentic AI requires a few building blocks, and while there are many variants and technical opinions on how to build, the basic implementation typically includes a Reasoning LLM (Large Language Model) – like the ones behind ChatGPT, Claude, or Gemini – that can invoke tools, such as an application or function to perform a task and return results. A tool can be as simple as a function that returns the weather, or as complex as a browser commanding tool that can navigate through websites. 

While this technology has a lot of potential to augment human productivity, it also comes with a set of challenges, many of which haven’t been fully considered by the technologists working on such systems. In the cybersecurity industry, one of the core principles we all live by is implementing “security by design”, instead of security being an afterthought. It is under this principle that we explore the security implications (and threats) around Agentic AI, with the goal of bringing awareness to both consumers and creators: 

  • As of today, Agentic AI has to meet a high bar to be fully adopted in our daily lives. Think about the precision required for billing or healthcare related tasks, or the level of trust customers would need to have to delegate sensitive tasks that could have financial or legal consequences. However, bad actors do not play by the same rules and do not require any “high bar” to leverage this technology to compromise victims. For example, a bad actor using Agentic AI to automate the process of researching (social engineering) and targeting victims with phishing emails is satisfied with an imperfect system that is only reliable 60% of the time, because that’s still better than attempting to manually do it, and the consequences associated with “AI errors” in this scenario are minimum for cybercriminals. In another recent example, Claude AI was exploited to orchestrate a campaign that created and managed fake personas (bots) on social media platforms, automatically interacting with carefully selected users to manipulate political narratives. Consequently, one of the threats that is likely to be fueled by malicious AI Agents is scams, regardless of these being delivered by text, email or deepfake video. As seen in recent news, crafting a convincing deepfake video, writing a phishing email or leveraging the latest trend to scam people with fake toll texts is, for bad actors, easier than ever thanks to a plethora of AI offerings and advancements. In this regard, AI Agents have the potential to continue increasing the ROI (Return on Investment) for cybercriminals, by automating aspects of the scam campaign that have been manual so far, such as tailoring messages to target individuals or creating more convincing content at scale. 
  • Agentic AI can be abused or exploited by cybercriminals, even when the AI agent is in the hands of a legitimate user. Agentic AI can be quite vulnerable if there are injection points. For example, AI Agents can communicate and take actions by interacting in a standardized fashion using what is known as MCP (Model Context Protocol). The MCP acts as some sort of repository where a bad actor could host a tool with a dual purpose. For example, a threat actor can offer a tool/integration via MCP that on the surface helps an AI browse the web, but behind the scenes, it exfiltrates data/arguments given by the AI. Or by the same token, an Agentic AI reading let’s say emails to summarize them for you could be compromised by a carefully crafted “malicious email” (known as indirect prompt injection) sent by the cybercriminal to redirect the thought process of such AI, deviating it from the original task (summarizing emails) and going rogue to accomplish a task orchestrated by the bad actor, like stealing financial information from your emails. 
  • Agentic AI also introduces vulnerabilities through inherently large chances of error. For instance, an AI agent tasked with finding a good deal for buying marketing data could end up in a rabbit hole buying illegal data from a breached database on the dark web, even though the legitimate user never intended to. While this is not triggered by a bad actor, it is still dangerous given the large number of possibilities on how an AI Agent can behave, or derail, given a poor choice of task description. 

With the proliferation of Agentic AI, we will see both opportunities to make our life better as well as new threats from bad actors exploiting the same technology for their gain, by either intercepting and poisoning legitimate users AI Agents, or using Agentic AI to perpetuate attacks. With this in mind, it’s more important than ever to remain vigilant, exercise caution and leverage comprehensive cybersecurity solutions to live safely in our digital world.

The post Navigating cybersecurity challenges in the early days of Agentic AI  appeared first on McAfee Blog.

—————
Free Secure Email – Transcom Sigma
Boost Inflight Internet
Transcom Hosting
Transcom Premium Domains

Where AI Provides Value

If you’ve worried that AI might take your job, deprive you of your livelihood, or maybe even replace your role in society, it probably feels good to see the latest AI tools fail spectacularly. If AI recommends glue as a pizza topping, then you’re safe for another day.

But the fact remains that AI already has definite advantages over even the most skilled humans, and knowing where these advantages arise—and where they don’t—will be key to adapting to the AI-infused workforce.

AI will often not be as effective as a human doing the same job. It won’t always know more or be more accurate. And it definitely won’t always be fairer or more reliable. But it may still be used whenever it has an advantage over humans in one of four dimensions: speed, scale, scope and sophistication. Understanding these dimensions is the key to understanding AI-human replacement.

Speed

First, speed. There are tasks that humans are perfectly good at but are not nearly as fast as AI. One example is restoring or upscaling images: taking pixelated, noisy or blurry images and making a crisper and higher-resolution version. Humans are good at this; given the right digital tools and enough time, they can fill in fine details. But they are too slow to efficiently process large images or videos.

AI models can do the job blazingly fast, a capability with important industrial applications. AI-based software is used to enhance satellite and remote sensing data, to compress video files, to make video games run better with cheaper hardware and less energy, to help robots make the right movements, and to model turbulence to help build better internal combustion engines.

Real-time performance matters in these cases, and the speed of AI is necessary to enable them.

Scale

The second dimension of AI’s advantage over humans is scale. AI will increasingly be used in tasks that humans can do well in one place at a time, but that AI can do in millions of places simultaneously. A familiar example is ad targeting and personalization. Human marketers can collect data and predict what types of people will respond to certain advertisements. This capability is important commercially; advertising is a trillion-dollar market globally.

AI models can do this for every single product, TV show, website and internet user. This is how the modern ad-tech industry works. Real-time bidding markets price the display ads that appear alongside the websites you visit, and advertisers use AI models to decide when they want to pay that price—thousands of times per second.

Scope

Next, scope. AI can be advantageous when it does more things than any one person could, even when a human might do better at any one of those tasks. Generative AI systems such as ChatGPT can engage in conversation on any topic, write an essay espousing any position, create poetry in any style and language, write computer code in any programming language, and more. These models may not be superior to skilled humans at any one of these things, but no single human could outperform top-tier generative models across them all.

It’s the combination of these competencies that generates value. Employers often struggle to find people with talents in disciplines such as software development and data science who also have strong prior knowledge of the employer’s domain. Organizations are likely to continue to rely on human specialists to write the best code and the best persuasive text, but they will increasingly be satisfied with AI when they just need a passable version of either.

Sophistication

Finally, sophistication. AIs can consider more factors in their decisions than humans can, and this can endow them with superhuman performance on specialized tasks. Computers have long been used to keep track of a multiplicity of factors that compound and interact in ways more complex than a human could trace. The 1990s chess-playing computer systems such as Deep Blue succeeded by thinking a dozen or more moves ahead.

Modern AI systems use a radically different approach: Deep learning systems built from many-layered neural networks take account of complex interactions—often many billions—among many factors. Neural networks now power the best chess-playing models and most other AI systems.

Chess is not the only domain where eschewing conventional rules and formal logic in favor of highly sophisticated and inscrutable systems has generated progress. The stunning advance of AlphaFold2, the AI model of structural biology whose creators Demis Hassabis and John Jumper were recognized with the Nobel Prize in chemistry in 2024, is another example.

This breakthrough replaced traditional physics-based systems for predicting how sequences of amino acids would fold into three-dimensional shapes with a 93 million-parameter model, even though it doesn’t account for physical laws. That lack of real-world grounding is not desirable: No one likes the enigmatic nature of these AI systems, and scientists are eager to understand better how they work.

But the sophistication of AI is providing value to scientists, and its use across scientific fields has grown exponentially in recent years.

Context matters

Those are the four dimensions where AI can excel over humans. Accuracy still matters. You wouldn’t want to use an AI that makes graphics look glitchy or targets ads randomly—yet accuracy isn’t the differentiator. The AI doesn’t need superhuman accuracy. It’s enough for AI to be merely good and fast, or adequate and scalable. Increasing scope often comes with an accuracy penalty, because AI can generalize poorly to truly novel tasks. The 4 S’s are sometimes at odds. With a given amount of computing power, you generally have to trade off scale for sophistication.

Even more interestingly, when an AI takes over a human task, the task can change. Sometimes the AI is just doing things differently. Other times, AI starts doing different things. These changes bring new opportunities and new risks.

For example, high-frequency trading isn’t just computers trading stocks faster; it’s a fundamentally different kind of trading that enables entirely new strategies, tactics and associated risks. Likewise, AI has developed more sophisticated strategies for the games of chess and Go. And the scale of AI chatbots has changed the nature of propaganda by allowing artificial voices to overwhelm human speech.

It is this “phase shift,” when changes in degree may transform into changes in kind, where AI’s impacts to society are likely to be most keenly felt. All of this points to the places that AI can have a positive impact. When a system has a bottleneck related to speed, scale, scope or sophistication, or when one of these factors poses a real barrier to being able to accomplish a goal, it makes sense to think about how AI could help.

Equally, when speed, scale, scope and sophistication are not primary barriers, it makes less sense to use AI. This is why AI auto-suggest features for short communications such as text messages can feel so annoying. They offer little speed advantage and no benefit from sophistication, while sacrificing the sincerity of human communication.

Many deployments of customer service chatbots also fail this test, which may explain their unpopularity. Companies invest in them because of their scalability, and yet the bots often become a barrier to support rather than a speedy or sophisticated problem solver.

Where the advantage lies

Keep this in mind when you encounter a new application for AI or consider AI as a replacement for or an augmentation to a human process. Looking for bottlenecks in speed, scale, scope and sophistication provides a framework for understanding where AI provides value, and equally where the unique capabilities of the human species give us an enduring advantage.

This essay was written with Nathan E. Sanders, and originally appeared in The Conversation.

—————
Free Secure Email – Transcom Sigma
Boost Inflight Internet
Transcom Hosting
Transcom Premium Domains