News

Apple’s New Memory Integrity Enforcement

Apple has introduced a new hardware/software security feature in the iPhone 17: “Memory Integrity Enforcement,” targeting the memory safety vulnerabilities that spyware products like Pegasus typically exploit to gain unauthorized system access. From Wired:

In recent years, a movement has been steadily growing across the global tech industry to address a ubiquitous and insidious type of bugs known as memory-safety vulnerabilities. A computer’s memory is a shared resource among all programs, and memory safety issues crop up when software can pull data that should be off limits from a computer’s memory or manipulate data in memory that shouldn’t be accessible to the program. When developers—even experienced and security-conscious developers—write software in ubiquitous, historic programming languages, like C and C++, it’s easy to make mistakes that lead to memory safety vulnerabilities. That’s why proactive tools like special programming languages have been proliferating with the goal of making it structurally impossible for software to contain these vulnerabilities, rather than attempting to avoid introducing them or catch all of them.

[…]

With memory-unsafe programming languages underlying so much of the world’s collective code base, Apple’s Security Engineering and Architecture team felt that putting memory safety mechanisms at the heart of Apple’s chips could be a deus ex machina for a seemingly intractable problem. The group built on a specification known as Memory Tagging Extension (MTE) released in 2019 by the chipmaker Arm. The idea was to essentially password protect every memory allocation in hardware so that future requests to access that region of memory are only granted by the system if the request includes the right secret.

Arm developed MTE as a tool to help developers find and fix memory corruption bugs. If the system receives a memory access request without passing the secret check, the app will crash and the system will log the sequence of events for developers to review. Apple’s engineers wondered whether MTE could run all the time rather than just being used as a debugging tool, and the group worked with Arm to release a version of the specification for this purpose in 2022 called Enhanced Memory Tagging Extension.

To make all of this a constant, real-time defense against exploitation of memory safety vulnerabilities, Apple spent years architecting the protection deeply within its chips so the feature could be on all the time for users without sacrificing overall processor and memory performance. In other words, you can see how generating and attaching secrets to every memory allocation and then demanding that programs manage and produce these secrets for every memory request could dent performance. But Apple says that it has been able to thread the needle.
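
The tagging scheme is easier to see in a toy model. The following is a minimal, purely illustrative Python sketch (not Apple’s or Arm’s implementation) of the core idea: every allocation receives a small random tag, the pointer carries that tag, and any access whose tag does not match the memory’s tag is refused rather than silently succeeding.

    import secrets

    class TaggedMemory:
        """Toy model of memory tagging: every allocation gets a small random tag,
        and a pointer is only honored if it carries the matching tag."""

        def __init__(self, size):
            self.cells = [0] * size
            self.tags = [None] * size        # tag assigned to each cell at allocation time
            self.next_free = 0

        def alloc(self, n):
            tag = secrets.randbelow(16)      # real MTE uses 4-bit tags
            base = self.next_free
            for i in range(base, base + n):
                self.tags[i] = tag
            self.next_free += n
            return (base, tag)               # the "pointer": an address plus its tag bits

        def load(self, ptr, offset=0):
            base, tag = ptr
            addr = base + offset
            if self.tags[addr] != tag:       # mismatch: out-of-bounds or stale pointer
                raise MemoryError("tag check failed; access rejected")
            return self.cells[addr]

    mem = TaggedMemory(64)
    buf = mem.alloc(8)          # an 8-cell buffer
    mem.load(buf, 7)            # last valid cell: tags match, access allowed
    try:
        mem.load(buf, 8)        # one past the end: wrong (or no) tag
    except MemoryError as err:
        print(err)              # the access is refused instead of silently succeeding

In the real feature the tags and checks live in hardware (Arm’s MTE assigns 4-bit tags to 16-byte granules of memory and verifies them on loads and stores), which is why making the check cheap enough to leave on all the time took the years of chip-level work described above.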

—————

Details About Chinese Surveillance and Propaganda Companies

Details from leaked documents:

While people often look at China’s Great Firewall as a single, all-powerful government system unique to China, the actual process of developing and maintaining it works the same way as surveillance technology in the West. Geedge collaborates with academic institutions on research and development, adapts its business strategy to fit different clients’ needs, and even repurposes leftover infrastructure from its competitors.

[…]

The parallels with the West are hard to miss. A number of American surveillance and propaganda firms also started as academic projects before they were spun out into startups and grew by chasing government contracts. The difference is that in China, these companies operate with far less transparency. Their work comes to light only when a trove of documents slips onto the internet.

[…]

It is tempting to think of the Great Firewall or Chinese propaganda as the outcome of a top-down master plan that only the Chinese Communist Party could pull off. But these leaks suggest a more complicated reality. Censorship and propaganda efforts must be marketed, financed, and maintained. They are shaped by the logic of corporate quarterly financial targets and competitive bids as much as by ideology—except the customers are governments, and the products can control or shape entire societies.

More information about one of the two leaks.

—————

Surveying the Global Spyware Market

The Atlantic Council has published its second annual report: “Mythical Beasts: Diving into the depths of the global spyware market.”

Too much good detail to summarize, but here are two items:

First, the authors found that the number of US-based investors in spyware has notably increased in the past year, when compared with the sample size of the spyware market captured in the first Mythical Beasts project. In the first edition, the United States was the second-largest investor in the spyware market, following Israel. In that edition, twelve investors were observed to be domiciled within the United States—whereas in this second edition, twenty new US-based investors were observed investing in the spyware industry in 2024. This indicates a significant increase of US-based investments in spyware in 2024, catapulting the United States to being the largest investor in this sample of the spyware market. This is significant in scale, as US-based investment from 2023 to 2024 largely outpaced that of other major investing countries observed in the first dataset, including Italy, Israel, and the United Kingdom. It is also significant in the disparity it points to: the visible enforcement gap between the flow of US dollars and US policy initiatives. Despite numerous US policy actions, such as the addition of spyware vendors to the Entity List, and the broader global leadership role that the United States has played through imposing sanctions and diplomatic engagement, US investments continue to fund the very entities that US policymakers are making an effort to combat.

Second, the authors elaborated on the central role that resellers and brokers play in the spyware market, while being a notably under-researched set of actors. These entities act as intermediaries, obscuring the connections between vendors, suppliers, and buyers. Oftentimes, intermediaries connect vendors to new regional markets. Their presence in the dataset is almost assuredly underrepresented given the opaque nature of brokers and resellers, making corporate structures and jurisdictional arbitrage more complex and challenging to disentangle. While their uptick in the second edition of the Mythical Beasts project may be the result of a wider, more extensive data-collection effort, there is less reporting on resellers and brokers, and these entities are not systematically understood. As observed in the first report, the activities of these suppliers and brokers represent a critical information gap for advocates of a more effective policy rooted in national security and human rights. These discoveries help bring into sharper focus the state of the spyware market and the wider cyber-proliferation space, and reaffirm the need to research and surface these actors that otherwise undermine the transparency and accountability efforts by state and non-state actors as they relate to the spyware market.

Really good work. Read the whole thing.

—————

Time-of-Check Time-of-Use Attacks Against LLMs

This is a nice piece of research: “Mind the Gap: Time-of-Check to Time-of-Use Vulnerabilities in LLM-Enabled Agents”:

Abstract: Large Language Model (LLM)-enabled agents are rapidly emerging across a wide range of applications, but their deployment introduces vulnerabilities with security implications. While prior work has examined prompt-based attacks (e.g., prompt injection) and data-oriented threats (e.g., data exfiltration), time-of-check to time-of-use (TOCTOU) remain largely unexplored in this context. TOCTOU arises when an agent validates external state (e.g., a file or API response) that is later modified before use, enabling practical attacks such as malicious configuration swaps or payload injection. In this work, we present the first study of TOCTOU vulnerabilities in LLM-enabled agents. We introduce TOCTOU-Bench, a benchmark with 66 realistic user tasks designed to evaluate this class of vulnerabilities. As countermeasures, we adapt detection and mitigation techniques from systems security to this setting and propose prompt rewriting, state integrity monitoring, and tool-fusing. Our study highlights challenges unique to agentic workflows, where we achieve up to 25% detection accuracy using automated detection methods, a 3% decrease in vulnerable plan generation, and a 95% reduction in the attack window. When combining all three approaches, we reduce the TOCTOU vulnerabilities from an executed trajectory from 12% to 8%. Our findings open a new research direction at the intersection of AI safety and systems security.
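
For readers new to the bug class, the pattern is the familiar check-then-use race from systems security, relocated into an agent’s workflow: the agent validates some external state, other steps run, and the state is modified before the agent finally uses it. The sketch below is a small, hypothetical Python illustration (the file name, validation rule, and digest check are mine, not the paper’s code); the second function pins a hash of the checked bytes, in the spirit of the state integrity monitoring the authors describe.

    import hashlib
    from pathlib import Path

    CONFIG = Path("agent_config.json")            # hypothetical config the agent checks, then uses
    CONFIG.write_text('{"allow_shell": false}')   # benign content for the demo
    FORBIDDEN = '"allow_shell": true'             # toy validation rule

    def vulnerable_flow():
        # Time-of-check: the agent reads the file and decides it is safe to act on.
        if FORBIDDEN in CONFIG.read_text():
            raise ValueError("config failed validation")
        # ...other tool calls run here; anyone who can write CONFIG has a window...
        CONFIG.write_text('{"allow_shell": true}')  # simulated attacker swap
        # Time-of-use: the file is read again, so the agent acts on the swapped content.
        return CONFIG.read_text()

    def hardened_flow():
        # Read once, validate, and pin a digest so later use is tied to the checked bytes.
        data = CONFIG.read_bytes()
        if FORBIDDEN.encode() in data:
            raise ValueError("config failed validation")
        digest = hashlib.sha256(data).hexdigest()
        # ...other tool calls run here...
        if hashlib.sha256(CONFIG.read_bytes()).hexdigest() != digest:
            raise RuntimeError("config changed between check and use")
        return data.decode()

    print("agent acted on:", vulnerable_flow())   # prints the attacker-controlled version

The point of the hardened version is simply that whatever was validated is what gets used: if the state changes in the window, the mismatch is detected instead of silently acted on.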

—————

Hacking Electronic Safes

Vulnerabilities in electronic safes that use Securam Prologic locks:

While both their techniques represent glaring security vulnerabilities, Omo says it’s the one that exploits a feature intended as a legitimate unlock method for locksmiths that’s the more widespread and dangerous. “This attack is something where, if you had a safe with this kind of lock, I could literally pull up the code right now with no specialized hardware, nothing,” Omo says. “All of a sudden, based on our testing, it seems like people can get into almost any Securam Prologic lock in the world.”

[…]

Omo and Rowley say they informed Securam about both their safe-opening techniques in spring of last year, but have until now kept their existence secret because of legal threats from the company. “We will refer this matter to our counsel for trade libel if you choose the route of public announcement or disclosure,” a Securam representative wrote to the two researchers ahead of last year’s Defcon, where they first planned to present their research.

Only after obtaining pro bono legal representation from the Electronic Frontier Foundation’s Coders’ Rights Project did the pair decide to follow through with their plan to speak about Securam’s vulnerabilities at Defcon. Omo and Rowley say they’re even now being careful not to disclose enough technical detail to help others replicate their techniques, while still trying to offer a warning to safe owners about two different vulnerabilities that exist in many of their devices.

The company says that it plans to update its locks by the end of the year, but has no plans to patch any locks already sold.

—————

Lawsuit About WhatsApp Security

Attaullah Baig, WhatsApp’s former head of security, has filed a whistleblower lawsuit alleging that Facebook deliberately failed to fix a bunch of security flaws, in violation of its 2019 settlement agreement with the Federal Trade Commission.

The lawsuit, alleging violations of the whistleblower protection provision of the Sarbanes-Oxley Act passed in 2002, said that in 2022, roughly 100,000 WhatsApp users had their accounts hacked every day. By last year, the complaint alleged, as many as 400,000 WhatsApp users were getting locked out of their accounts each day as a result of such account takeovers.

Baig also allegedly notified superiors that data scraping on the platform was a problem because WhatsApp failed to implement protections that are standard on other messaging platforms, such as Signal and Apple Messages. As a result, the former WhatsApp head estimated that pictures and names of some 400 million user profiles were improperly copied every day, often for use in account impersonation scams.

More news coverage.

—————

From Cyberbullying to AI-Generated Content – McAfee’s Research Reveals the Shocking Risks

The landscape of online threats targeting children has evolved into a complex web of dangers that extend far beyond simple scams. New research from McAfee reveals that parents now rank cyberbullying as their single highest concern, with nearly one in four families (22%) reporting their child has already been targeted by some form of online threat. The risks spike dramatically during the middle school years and peak around age 13, precisely when children gain digital independence but may lack the knowledge and tools to protect themselves.

The findings paint a troubling picture of digital childhood, where traditional dangers like cyberbullying persist alongside emerging threats like AI-generated deepfakes, “nudify” technology, and sophisticated manipulation tactics that can devastate young people’s mental health and safety.

Cyberbullying is Parents’ Top Concern

Cyberbullying and harassment are devastating to young people’s digital experiences. The research shows that 43% of children who have encountered online threats experienced cyberbullying, making it the most common threat families face. The impact disproportionately affects girls, with more than half of targeted girls (51%) experiencing cyberbullying compared to 39% of boys.

The peak vulnerability occurs during early adolescence, with 62% of targeted girls and 52% of targeted boys aged 13-15 facing harassment online. For parents of teen daughters aged 13-15, cyberbullying ranks as the top concern for 17% of families, reflecting the real-world impact these digital attacks have on young people’s well-being.

AI-Generated Content Creates New Dangers

The emergence of AI-powered manipulation tools has introduced unprecedented risks to children’s online safety. Nearly one in five targeted kids (19%) have faced deepfake and “nudify” app misuse, with rates doubling to 38% among girls aged 13-15. These statistics become even more alarming when considering that 18% of parents overall list AI-generated deepfakes and nudify technology among their top three concerns, rising to one in three parents (33%) under age 35.

The broader landscape of AI-generated content exposure is widespread, with significant implications for how children understand truth and authenticity online. The research underscores the challenge parents face in preparing their children to navigate an environment where sophisticated forgeries can be created and distributed with relative ease.

“Today’s online threats aren’t abstract risks — families are facing them every day,” said Abhishek Karnik, head of threat research for McAfee. “Parents’ top concerns are the toll harmful content, particularly cyberbullying and AI-generated deepfakes, takes on their children’s mental health, self-image, and safety. That’s why it’s critical to pair AI-powered online protection with open, ongoing conversations about what kids encounter online. When children know how to recognize risks and misinformation and feel safe talking about these issues with loved ones, they’re better prepared to navigate the digital world with confidence.”

The Growing Confidence Gap

As digital threats become more sophisticated, parents find themselves increasingly outpaced by both technology and their children’s technical abilities. The research reveals that nearly half of parents (48%) admit their child knows more about technology than they do, while 42% say it’s challenging to keep up with the pace of evolving risks.

This knowledge disparity creates real vulnerabilities in family digital safety strategies. Only 34% of parents feel very confident their child can distinguish between real and fake content online, particularly when it comes to AI-generated material or misinformation. The confidence crisis deepens as children age and gain more independence online, precisely when threats become most complex and potentially harmful.

The monitoring habits of families reflect these growing challenges. While parents identify late at night (56%) and after school (41%) as the times when children face the greatest online risks, monitoring practices don’t align with these danger windows. Only about a third of parents (33%) check devices daily, and 41% review them weekly, creating significant gaps in oversight during high-risk periods.

Age-Related Patterns Reveal Critical Vulnerabilities

The research uncovers troubling patterns in how online safety behaviors change as children mature. While 95% of parents report discussing online safety with their children, the frequency and effectiveness of these conversations decline as kids enter their teen years. Regular safety discussions drop from 63% with younger children to just 54% with teenagers, even as threats become more severe and complex.

Daily device monitoring shows even sharper declines, plummeting to just 20% for boys aged 16-18 and dropping as low as 6-9% for girls aged 17-18. This reduction in oversight occurs precisely when older teens face heightened risks of blackmail, “scamtortion,” and other sophisticated threats. The research shows that more than half of targeted boys aged 16-18 (53%) have experienced threats to release fake or real content, representing one of the most psychologically damaging forms of online exploitation.

Gaming and Financial Exploitation

Online gaming platforms have become significant vectors for exploitation, particularly targeting boys. The research shows that 30% of children who have been targeted experienced online gaming scams or manipulation, with the rate climbing to 43% among targeted boys aged 13-15. These platforms often combine social interaction with financial incentives, creating opportunities for bad actors to manipulate young users through false friendships, fake rewards, and pressure tactics.

Real-World Consequences Extend Beyond Screens

The emotional and social impact of online threats creates lasting effects that extend well into children’s offline lives. Among families whose children have been targeted, the consequences reach far beyond momentary embarrassment or frustration. The research shows that 42% of affected families report their children experienced anxiety, felt unsafe, or were embarrassed after online incidents.

The social ramifications prove equally significant, with 37% of families dealing with issues that spilled over into school performance or friendships. Perhaps most concerning, 31% of affected children withdrew from technology altogether after negative experiences, potentially limiting their ability to develop healthy digital literacy skills and participate fully in an increasingly connected world.

The severity of these impacts has driven many families to seek professional support, with 26% requiring therapy or counseling to help their children cope with online harms. This statistic underscores that digital threats can create trauma requiring the same level of professional intervention as offline dangers.

Building Trust Through Technology Agreements

Creating a foundation for open dialogue about digital safety starts with establishing clear expectations and boundaries. McAfee’s Family Tech Pledge provides parents with a structured framework to initiate these crucial conversations with their children about responsible device use. Currently, few families have implemented formal agreements about technology use, representing a significant opportunity for improving digital safety through collaborative rule-setting.

A technology pledge serves as more than just a set of rules: it becomes a collaborative tool that helps parents and children discuss the reasoning behind safe online practices. By involving children in the creation of these agreements, families can address age-appropriate concerns while building trust and understanding. The process naturally opens doors to conversations about the threats identified in the research, from predators and cyberbullying to AI-generated content and manipulation attempts.

These agreements work best when they evolve alongside children’s digital maturity. What starts as basic screen time limits for younger children can expand to include discussions about social media interactions, sharing personal information, and recognizing suspicious content as they enter their teen years. The key is making the technology pledge a living document that adapts to new platforms, emerging threats, and changing family circumstances.

Advanced Protection Through AI-Powered Detection

While conversations and agreements form the foundation of digital safety, today’s threat landscape requires technological solutions that can keep pace with rapidly evolving risks. McAfee’s Scam Detector represents a crucial additional layer of defense, using artificial intelligence to identify and flag suspicious links, manipulated content, and potential threats before they can cause harm.

The tool’s AI-powered approach is particularly valuable given the research findings about manipulated media and deepfake content. With AI-generated content being weaponized against children, especially teenage girls, automated detection becomes essential for catching threats that might bypass both parental oversight and children’s developing digital literacy skills.

For parents who feel overwhelmed by the pace of technological change (42% report struggling to keep up with the risk landscape), Scam Detector provides professional-grade protection without requiring extensive technical knowledge. It offers families a way to maintain security while fostering the trust and communication that the research shows are essential for long-term digital safety.

The technology is especially crucial during the high-risk periods identified in the research. Since 56% of parents recognize that late-night hours present the greatest danger, and monitoring naturally decreases during these times, automated protection tools can provide continuous vigilance when human oversight is most difficult to maintain.

A Path Forward for Families

The research reveals that addressing online threats requires a comprehensive approach combining technology, communication, and ongoing education. Parents need practical tools and strategies that can evolve with both the threat landscape and their children’s developing digital independence.

Effective protection starts with pairing parental controls with regular, judgment-free conversations about harmful content, coercion, and bullying, ensuring children know they can seek help without fear of punishment or restrictions. Teaching children to “trust but verify” by checking sources and asking for help when something feels suspicious becomes especially important as AI-generated content makes deception increasingly sophisticated.

Keeping devices secure with updated security settings and AI-powered protection tools like McAfee’s Scam Detector helps create multiple layers of defense against evolving threats. These technological safeguards work best when combined with family agreements that establish clear expectations for online behavior and regular check-ins that maintain open communication as children mature.

Research Methodology

This comprehensive analysis is based on an online survey conducted in August 2025 of approximately 4,300 parents or guardians of children under 18 across Australia, France, Germany, India, Japan, the United Kingdom, and the United States. The research provides crucial insights into the current state of children’s online safety and the challenges families face in protecting their digital natives from increasingly sophisticated threats.

The data reveals that today’s parents are navigating unprecedented challenges in protecting their children online, with peak vulnerability occurring during the middle school years when digital independence collides with developing judgment and incomplete knowledge of online risks. While the threats may be evolving and complex, the research shows that informed, proactive families who combine technology tools with open communication are better positioned to help their children develop the skills needed to safely navigate the digital world.


—————

How a Tech Expert Lost $13,000 to a Job Scam

Sam M. has spent more than 20 years building websites, testing systems, and managing technology projects. He knows code, he understands how the internet works, and he’s trained to spot digital red flags. None of that stopped him from losing $13,000 to scammers.

“I’ve been around long enough that I should have seen it coming,” Sam admits. “But when you’re looking for work, you’ve got blinders on. You just want something to work out.”

His story reflects a growing reality. McAfee data shows that job-related scams have exploded by over 1,000% from May through July 2025, making Sam part of a massive wave of Americans facing increasingly sophisticated employment fraud. But here’s what’s empowering: with the right protection, these scams can be spotted before they hit you and your wallet.

The Perfect Setup

Sam’s scam started with what looked like a legitimate opportunity: a polished website offering part-time work reviewing products online. The site had all the right elements: professional design, user authentication, and a logical process. Even his wife, who warned him that “if it sounds too good to be true, it probably is,” had to admit the pay rates weren’t unrealistic.

“I thought it was worth a try,” Sam said. “I’ve built websites, and this one looked okay. You had to log in, authenticate. Everything seemed legit.”

This sophisticated approach reflects how job scammers have evolved. They’re no longer sending obviously fake emails with spelling errors. Today’s scammers study real job platforms, mimic legitimate processes, and exploit the specific language that job seekers expect to see. McAfee’s analysis shows scammers are particularly focused on benefits-related terms like “resume,” “recruit,” “maternity,” and “paternity” to make their offers sound more credible. The good news? Advanced scam detection technology can automatically identify these sophisticated tactics before you even encounter them.

The Hook and the Trap

The scam followed a classic pattern – establish trust, then exploit it. Sam was paired with a trainer, guided through reviewing products, asked to upload screenshots. Then came the crucial moment.

“That first payout, a couple hundred dollars, hooked me,” Sam recalled. “I thought, this is working. This is real.”

But once Sam was invested, the ground shifted. A “special product” appeared, and suddenly his account showed a negative balance. The trainer explained he needed to deposit money to continue. It seemed reasonable at first, but it was the beginning of a financial death spiral.

“They kept telling me, ‘Just a little more and you’ll unlock it,’” Sam said. “And I kept chasing it.”

This “advance fee” model has become increasingly common in job scams. Victims are asked to pay for training materials, background checks, or equipment. Each payment is followed by a request for more money, creating a cycle that’s psychologically difficult to break.

The Scope of the Problem

Sam’s experience fits into a much larger crisis, but understanding the scope helps us stay ahead of it. According to McAfee data, 45% of Americans say they’ve either personally experienced a job search scam or know someone who has. That means nearly half the country has been touched by employment fraud in some way.

The reach extends beyond individual stories. Nearly 1 in 3 Americans (31%) report receiving job offer scams via text message, showing how these schemes have moved beyond email into our daily conversations. People now receive an average of 14 scam messages daily across all platforms. Email job scams alone rose 60% between June and July 2025, with “resume” being the most frequently used lure word. But here’s what’s encouraging: when scams can be identified automatically, people can stay one step ahead of scammers before any damage occurs.

The Real Cost

By the time Sam extracted himself from the scam, he was down more than $13,000. His loss reflects broader trends: McAfee research shows scam victims lose an average of $1,471 per scam, with $12 billion reported lost to fraud in 2024 alone, up 21% from the previous year. But the financial loss wasn’t the worst part for Sam.

“I was furious at them, but also at myself,” he said. “I’m supposed to know better. I felt stupid. I felt worn out.”

This emotional impact extends beyond individual embarrassment. These schemes attack people when they’re already vulnerable, turning the search for legitimate work into another source of stress and suspicion.

“It wears you down,” Sam explained. “Every time you think you’ve found something good, it turns out to be a scam. You get beat down again. And you start to wonder if you’ll ever find something real.”

The solution isn’t to stop trusting altogether. It’s having the right tools to confidently distinguish between what’s real and what’s fake before you click.

Staying One Step Ahead

Despite his losses, Sam maintains perspective about his situation. He knows people who’ve lost everything to scams, including their homes and savings.

“As hard as this was, I didn’t lose everything,” he said. “My family’s life didn’t have to change. Others aren’t so lucky.”

Now Sam sticks to established job platforms like LinkedIn and Glassdoor, avoiding websites that promise easy money. He’s also committed to sharing his story as a warning to others.

“I got caught, I admit it,” he said. “But I’m not the only one. And if telling my story helps someone else stop before it’s too late, then it’s worth it.”

The reality is that in today’s digital landscape, where people receive 14 scam messages daily, individual vigilance alone isn’t enough. What’s needed is automatic protection that works in the background, identifying suspicious texts, emails, and videos before you even encounter them. McAfee’s Scam Detector provides exactly that. Real or fake? Scam Detector knows.

Know What’s Real Before You Click

Sam’s experience highlights several warning signs that job seekers should recognize, but modern scam protection goes far beyond manual vigilance:

Traditional Warning Signs:

  • Upfront payments (legitimate employers don’t ask employees to pay for the privilege of working)
  • Vague job descriptions (real jobs have specific requirements and clear responsibilities)
  • Pressure tactics (scammers often create artificial urgency to prevent careful consideration)
  • Too-good-to-be-true pay (research typical salaries for similar roles in your area)
  • Poor communication (legitimate companies use professional email addresses and clear contact information)

Lightning-fast alerts: With McAfee’s Scam Detector, you get automatic alerts about suspicious texts, emails, and videos before you click. The technology automatically identifies risky messages using advanced AI, so you don’t have to wonder what’s real and what’s fake online.

The explosive growth in job scams, with their 1,000%+ increase over just a few months, shows this challenge isn’t disappearing. But as scam technology evolves, so does scam protection. Intelligence and experience alone aren’t enough to combat well-crafted deception, but automatic detection technology can identify these sophisticated schemes before they reach you.

Sam’s story reminds us that anyone can be targeted, but with the right protection, you can spot scams before they hit you and your wallet. In a job market where people receive multiple suspicious messages daily, confidence comes from knowing you have technology working in the background to distinguish what’s real from what’s fake. With proactive scam protection designed with you in mind, you can enjoy the peace of a scam-free search and focus on finding legitimate opportunities. Real or fake? You’ll know before you click.


—————