Top Tips for Cybersecurity Awareness Month

Imagine this: you wake up one morning to find that your bank account has been emptied overnight. Someone halfway across the world has accessed your account using a password you thought was secure. Incidents like these are unfortunately becoming more common, with identity theft and fraud cases steadily increasing over the last decade.

This month is Cybersecurity Awareness Month, and this year’s theme, “Secure Our World,” is a timely reminder to reassess and enhance your cybersecurity strategies against ever-evolving cyber threats. In an election year, the digital landscape becomes a breeding ground for cyber scams and malicious activities aimed at exploiting political fervor and public uncertainty. With the 2024 election on the horizon, it’s more critical than ever to strengthen our cybersecurity defenses.

By prioritizing cybersecurity awareness and implementing robust protective measures during this dedicated month, you can safeguard your personal information, protect your financial assets, and ensure the security of your digital interactions. Let’s explore five simple yet powerful ways to increase your internet security and have peace of mind in today’s digital landscape.

1. Use Complex Passwords and a Password Manager

Passwords serve as the first line of defense against unauthorized access to your accounts, yet 78% of people use the same password for more than one account. Here’s how you can create and manage complex passwords (a short password-generation sketch follows the list):

  • Aim for at least 12 characters.
  • Use a mix of uppercase and lowercase letters, numbers, and special characters.
  • Avoid easily guessable information like birthdays or names.
  • Consider using a password manager, which generates and securely stores a complex, unique password for each of your accounts. Most password managers also autofill credentials for you, reducing the temptation to reuse weak or duplicate passwords.
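
To make the length and character-mix advice above concrete, here is a minimal sketch of what a password generator does under the hood. It uses only Python’s standard library; the 16-character default and the rejection loop are illustrative choices, and a real password manager does this for you (and encrypts the result in a vault).

    import secrets
    import string

    def generate_password(length: int = 16) -> str:
        """Build a random password that mixes all four character classes."""
        if length < 12:
            raise ValueError("Use at least 12 characters")
        alphabet = string.ascii_letters + string.digits + string.punctuation
        while True:
            candidate = "".join(secrets.choice(alphabet) for _ in range(length))
            # Accept only candidates containing lowercase, uppercase, digits, and symbols.
            if (any(c.islower() for c in candidate)
                    and any(c.isupper() for c in candidate)
                    and any(c.isdigit() for c in candidate)
                    and any(c in string.punctuation for c in candidate)):
                return candidate

    print(generate_password())  # prints a fresh random 16-character password

Because the characters are drawn with the cryptographically secure secrets module rather than random, the output is suitable for real accounts, though storing it safely is still the password manager’s job.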

2. Turn On Multifactor Authentication

Multifactor authentication (MFA) adds an extra layer of security by requiring two or more of the following factors to access your accounts:

  • Knowledge Factor: Something only the user knows, such as a password or PIN.
  • Possession Factor: Something the user has, like a smartphone or security token that generates a one-time code.
  • Inherence Factor: Something inherent to the user, such as biometric data (fingerprint, facial recognition).

Follow these steps to enable multifactor authentication:

  • Go to your account settings on each platform (e.g., email, social media, banking).
  • Look for the security or privacy settings.
  • Enable MFA by choosing to receive a code via SMS, email, or a dedicated app like Google Authenticator or Authy (the sketch after these steps shows how such apps compute their codes).
  • Follow the prompts to complete the setup.
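
For the curious, the six-digit codes from an authenticator app are not magic: they are derived from a shared secret and the current time, as standardized in RFC 6238 (TOTP). The sketch below is purely illustrative, not a replacement for a real authenticator app; the secret shown is a placeholder, and real secrets come from the QR code or setup key your provider displays during enrollment.

    import base64
    import hmac
    import struct
    import time

    def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
        """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period           # 30-second time step
        msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
        digest = hmac.new(key, msg, "sha1").digest()
        offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    # Placeholder secret for demonstration only -- never hard-code a real MFA secret.
    print(totp("JBSWY3DPEHPK3PXP"))

Because the code depends on something you have (the device holding the secret) as well as the current time, a stolen password alone is no longer enough to log in.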

3. Recognize and Report Phishing Attempts

Phishing is a common tactic cybercriminals use to trick you into revealing sensitive information, such as passwords or credit card numbers, by impersonating legitimate entities like banks or reputable companies. These attacks often arrive via email, text messages, or fake websites designed to appear authentic, exploiting human trust and curiosity to steal valuable data for malicious purposes.

How to Spot Phishing Attempts

Identifying Phishing Emails:

  • Check the sender’s email address for inconsistencies or suspicious domains.
  • Be skeptical of urgent language, requests for personal information, or threats of consequences.
  • Verify links by hovering over them to see the actual URL before clicking (a small link-checking sketch follows this list).
  • If an email contains an attachment that you weren’t expecting or that seems suspicious (such as .exe files), do not open it. Attachments can contain malware that compromises your computer’s security.
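
A common phishing trick is a link whose visible text shows a trustworthy address while the underlying URL points somewhere else entirely; hovering reveals this manually. The hypothetical sketch below shows the same check programmatically using only Python’s standard library (the email snippet and domain names are invented for illustration):

    from html.parser import HTMLParser
    from urllib.parse import urlparse

    class LinkAuditor(HTMLParser):
        """Collect (href, visible text) pairs from HTML so mismatches can be flagged."""
        def __init__(self):
            super().__init__()
            self.links = []
            self._current = None

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self._current = [dict(attrs).get("href", ""), ""]

        def handle_data(self, data):
            if self._current is not None:
                self._current[1] += data

        def handle_endtag(self, tag):
            if tag == "a" and self._current is not None:
                self.links.append(tuple(self._current))
                self._current = None

    # Invented example: the visible text claims the bank's domain, the href does not.
    html_body = '<a href="http://login.examp1e-secure.com">https://www.yourbank.com</a>'

    auditor = LinkAuditor()
    auditor.feed(html_body)
    for href, text in auditor.links:
        real_host = urlparse(href).hostname or ""
        shown = text.strip()
        # Only compare when the visible text itself looks like a web address.
        shown_host = urlparse(shown).hostname if "://" in shown else None
        if shown_host and shown_host != real_host:
            print(f"Suspicious link: displays {shown_host!r} but points to {real_host!r}")

Mail providers and security suites run far more sophisticated versions of this check, but the principle is the same: judge a link by where it actually goes, not by what it says.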

Reporting Phishing:

  • If you receive a phishing email, report it to the organization being impersonated (e.g., your bank).
  • Most email providers also have tools to report phishing directly from your inbox.

4. Update Software Regularly

Software updates, also known as patches, often include security fixes to protect against known vulnerabilities. Here’s how to keep your software up to date:

Updating Operating Systems and Applications:

  • Enable Automatic Updates on Your Devices: Most operating systems (such as Windows, macOS, and Linux) and software applications allow you to enable automatic updates. This feature ensures that your software receives the latest patches and security fixes without requiring manual intervention.
  • Check for Updates Manually: Even if you have automatic updates enabled, it’s a good practice to manually check for updates periodically, especially for critical software like your operating system, web browsers, antivirus programs, and any applications you use frequently (a small example of scripting such a check follows this list).
  • Mobile Devices and Apps: For smartphones and tablets, regularly check for updates in your device’s settings. Additionally, update apps from trusted app stores like Google Play Store (Android) or Apple App Store (iOS).
  • Update All Software: It’s not just your operating system and major applications that need updating. Remember to update browser extensions, plugins such as Java, and the firmware of devices like routers connected to your network. Software that is no longer supported at all, such as Adobe Flash, should be removed rather than updated.
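
As one concrete example of scripting a manual update check, the sketch below asks pip which installed Python packages have newer releases available. It assumes pip is on your PATH and covers only Python packages; operating systems, browsers, and firmware still need their own update mechanisms, but it shows how the “did I miss an update?” question can be automated rather than remembered.

    import json
    import subprocess

    def outdated_python_packages():
        """Return pip's list of installed packages that have newer releases."""
        # Assumes the `pip` command is available on PATH.
        result = subprocess.run(
            ["pip", "list", "--outdated", "--format=json"],
            capture_output=True, text=True, check=True,
        )
        return json.loads(result.stdout)

    for pkg in outdated_python_packages():
        print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")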

5. Secure Your Social Media

Social media platforms are integral parts of modern communication, but they also pose significant security risks if not managed carefully. Here are essential tips to enhance your social media security:

  • Review Privacy Settings: Regularly review and adjust your privacy settings on social media platforms to control who can see your posts, personal information, and contact details. Limit the visibility of your profile and posts to only those you trust. McAfee’s Social Privacy Manager can streamline the customization of over 100 privacy settings on your social media accounts with just a few clicks.
  • Be Cautious of Third-Party Apps: Review and revoke access for third-party applications linked to your social media accounts that no longer need access or seem suspicious. These apps can potentially compromise your account’s security.

By implementing these straightforward yet effective cybersecurity practices, you can significantly reduce the risk of falling victim to online threats. McAfee+ can also keep you more secure and private online with 24/7 scans of the dark web to ensure your personal and financial info is safe, alerts about suspicious financial transactions and credit activity, and up to $2 million in identity theft coverage and restoration.

The post Top Tips for Cybersecurity Awareness Month appeared first on McAfee Blog.

—————

AI and the 2024 US Elections

For years now, AI has undermined the public’s ability to trust what it sees, hears, and reads. The Republican National Committee released a provocative ad offering an “AI-generated look into the country’s possible future if Joe Biden is re-elected,” showing apocalyptic, machine-made images of ruined cityscapes and chaos at the border. Fake robocalls purporting to be from Biden urged New Hampshire residents not to vote in the 2024 primary election. This summer, the Department of Justice cracked down on a Russian bot farm that was using AI to impersonate Americans on social media, and OpenAI disrupted an Iranian group using ChatGPT to generate fake social-media comments.

It’s not altogether clear what damage AI itself may cause, though the reasons for concern are obvious—the technology makes it easier for bad actors to construct highly persuasive and misleading content. With that risk in mind, there has been some movement toward constraining the use of AI, yet progress has been painstakingly slow in the area where it may count most: the 2024 election.

Two years ago, the Biden administration issued a blueprint for an AI Bill of Rights aiming to address “unsafe or ineffective systems,” “algorithmic discrimination,” and “abusive data practices,” among other things. Then, last year, Biden built on that document when he issued his executive order on AI. Also in 2023, Senate Majority Leader Chuck Schumer held an AI summit in Washington that included the centibillionaires Bill Gates, Mark Zuckerberg, and Elon Musk. Several weeks later, the United Kingdom hosted an international AI Safety Summit that led to the serious-sounding “Bletchley Declaration,” which urged international cooperation on AI regulation. The risks of AI fakery in elections have not sneaked up on anybody.

Yet none of this has resulted in changes that would resolve the use of AI in U.S. political campaigns. Even worse, the two federal agencies with a chance to do something about it have punted the ball, very likely until after the election.

On July 25, the Federal Communications Commission issued a proposal that would require political advertisements on TV and radio to disclose if they used AI. (The FCC has no jurisdiction over streaming, social media, or web ads.) That seems like a step forward, but there are two big problems. First, the proposed rules, even if enacted, are unlikely to take effect before early voting starts in this year’s election. Second, the proposal immediately devolved into a partisan slugfest. A Republican FCC commissioner alleged that the Democratic National Committee was orchestrating the rule change because Democrats are falling behind the GOP in using AI in elections. Plus, he argued, this was the Federal Election Commission’s job to do.

Yet last month, the FEC announced that it won’t even try making new rules against using AI to impersonate candidates in campaign ads through deepfaked audio or video. The FEC also said that it lacks the statutory authority to make rules about misrepresentations using deepfaked audio or video. And it lamented that it lacks the technical expertise to do so, anyway. Then, last week, the FEC compromised, announcing that it intends to enforce its existing rules against fraudulent misrepresentation regardless of what technology it is conducted with. Advocates for stronger rules on AI in campaign ads, such as Public Citizen, did not find this nearly sufficient, characterizing it as a “wait-and-see approach” to handling “electoral chaos.”

Perhaps this is to be expected: The freedom of speech guaranteed by the First Amendment generally permits lying in political ads. But the American public has signaled that it would like some rules governing AI’s use in campaigns. In 2023, more than half of Americans polled responded that the federal government should outlaw all uses of AI-generated content in political ads. Going further, in 2024, about half of surveyed Americans said they thought that political candidates who intentionally manipulated audio, images, or video should be prevented from holding office or removed if they had won an election. Only 4 percent thought there should be no penalty at all.

The underlying problem is that Congress has not clearly given any agency the responsibility to keep political advertisements grounded in reality, whether in response to AI or old-fashioned forms of disinformation. The Federal Trade Commission has jurisdiction over truth in advertising, but political ads are largely exempt—again, part of our First Amendment tradition. The FEC’s remit is campaign finance, but the Supreme Court has progressively stripped its authorities. Even where it could act, the commission is often stymied by political deadlock. The FCC has more evident responsibility for regulating political advertising, but only in certain media: broadcast, robocalls, text messages. Worse yet, the FCC’s rules are not exactly robust. It has actually loosened rules on political spam over time, leading to the barrage of messages many receive today. (That said, in February, the FCC did unanimously rule that robocalls using AI voice-cloning technology, like the fake Biden robocall in New Hampshire, are already illegal under a 30-year-old law.)

It’s a fragmented system, with many important activities falling victim to gaps in statutory authority and a turf war between federal agencies. And as political campaigning has gone digital, it has entered an online space with even fewer disclosure requirements or other regulations. No one seems to agree where, or whether, AI is under any of these agencies’ jurisdictions. In the absence of broad regulation, some states have made their own decisions. In 2019, California was the first state in the nation to prohibit the use of deceptively manipulated media in elections, and has strengthened these protections with a raft of newly passed laws this fall. Nineteen states have now passed laws regulating the use of deepfakes in elections.

One problem that regulators have to contend with is the wide applicability of AI: The technology can simply be used for many different things, each one demanding its own intervention. People might accept a candidate digitally airbrushing their photo to look better, but not doing the same thing to make their opponent look worse. We’re used to getting personalized campaign messages and letters signed by the candidate; is it okay to get a robocall with a voice clone of the same politician speaking our name? And what should we make of the AI-generated campaign memes now shared by figures such as Musk and Donald Trump?

Despite the gridlock in Congress, these are issues with bipartisan interest. This makes it conceivable that something might be done, but probably not until after the 2024 election and only if legislators overcome major roadblocks. One bill under consideration, the AI Transparency in Elections Act, would instruct the FEC to require disclosure when political advertising uses media generated substantially by AI. Critics say, implausibly, that the disclosure is onerous and would increase the cost of political advertising. The Honest Ads Act would modernize campaign-finance law, extending FEC authority to definitively encompass digital advertising. However, it has languished for years because of reported opposition from the tech industry. The Protect Elections From Deceptive AI Act would ban materially deceptive AI-generated content from federal elections, as in California and other states. These are promising proposals, but libertarian and civil-liberties groups are already signaling challenges to all of these on First Amendment grounds. And, vexingly, at least one FEC commissioner has directly cited congressional consideration of some of these bills as a reason for his agency not to act on AI in the meantime.

One group that benefits from all this confusion: tech platforms. When few or no evident rules govern political expenditures online and uses of new technologies like AI, tech companies have maximum latitude to sell ads, services, and personal data to campaigns. This is reflected in their lobbying efforts, as well as the voluntary policy restraints they occasionally trumpet to convince the public they don’t need greater regulation.

Big Tech has demonstrated that it will uphold these voluntary pledges only if they benefit the industry. Facebook once, briefly, banned political advertising on its platform. No longer; now it even allows ads that baselessly deny the outcome of the 2020 presidential election. OpenAI’s policies have long prohibited political campaigns from using ChatGPT, but those restrictions are trivial to evade. Several companies have volunteered to add watermarks to AI-generated content, but they are easily circumvented. Watermarks might even make disinformation worse by giving the false impression that non-watermarked images are legitimate.

This important public policy should not be left to corporations, yet Congress seems resigned not to act before the election. Schumer hinted to NBC News in August that Congress may try to attach deepfake regulations to must-pass funding or defense bills this month to ensure that they become law before the election. More recently, he has pointed to the need for action “beyond the 2024 election.”

The three bills listed above are worthwhile, but they are just a start. The FEC and FCC should not be left to snipe at each other about what territory belongs to which agency. And the FEC needs more significant, structural reform to reduce partisan gridlock and enable it to get more done. We also need transparency into and governance of the algorithmic amplification of misinformation on social-media platforms. That requires limiting the pervasive influence of tech companies and their billionaire investors through stronger lobbying and campaign-finance protections.

Our regulation of electioneering never caught up to AOL, let alone social media and AI. And deceptive videos harm our democratic process, whether they are created by AI or actors on a soundstage. But the urgent concern over AI should be harnessed to advance legislative reform. Congress needs to do more than stick a few fingers in the dike to control the coming tide of election disinformation. It needs to act more boldly to reshape the landscape of regulation for political campaigning.

This essay originally appeared in The Atlantic.

—————

Crooked Cops, Stolen Laptops & the Ghost of UGNazi

A California man accused of failing to pay taxes on tens of millions of dollars allegedly earned from cybercrime also paid local police officers hundreds of thousands of dollars to help him extort, intimidate and silence rivals and former business partners, the government alleges. KrebsOnSecurity has learned that many of the man’s alleged targets were members of UGNazi, a hacker group behind multiple high-profile breaches and cyberattacks back in 2012.

A photo released by the government allegedly showing Iza posing with several LASD officers on his payroll.

A federal complaint (PDF) filed last week said the Federal Bureau of Investigation (FBI) has been investigating Los Angeles resident Adam Iza. Also known as “Assad Faiq” and “The Godfather,” Iza is the 30-something founder of a cryptocurrency investment platform called Zort that advertised the ability to make smart trades based on artificial intelligence technology.

But the feds say investors in Zort soon lost their shorts, after Iza and his girlfriend began spending those investments on Lamborghinis, expensive jewelry, vacations, a $28 million home in Bel Air, even cosmetic surgery to extend the length of his legs.

The complaint states the FBI started looking at Iza after receiving multiple reports that he had on his payroll several active deputies with the Los Angeles Sheriff’s Department (LASD). Iza’s attorney did not immediately respond to requests for comment.

The complaint cites a letter from an attorney for a victim referenced only as “E.Z.,” who was seeking help related to an extortion and robbery allegedly committed by Iza. The government says that in March 2022, three men showed up at E.Z.’s home and tried to steal his laptop in an effort to gain access to E.Z.’s cryptocurrency holdings online. A police report referenced in the complaint says the three intruders were scared off when E.Z. fired several handgun rounds in the direction of his assailants.

The FBI later obtained a copy of a search warrant executed by LASD deputies in January 2022 for GPS location information on a phone belonging to E.Z., which shows an LASD deputy unlawfully added E.Z.’s mobile number to a list of those associated with an unrelated firearms investigation.

“Damn my guy actually filed the warrant,” Iza allegedly texted someone after the location warrant was entered. “That’s some serious shit to do for someone….risking a 24 years career. I pay him 280k a month for complete resources. They’re active-duty.”

The FBI alleges LASD officers had on several previous occasions tried to kidnap and extort E.Z. at Iza’s behest. The complaint references a November 2021 incident wherein Iza and E.Z. were in a car together when Iza asked to stop and get snacks at a convenience store. While they were still standing next to the car, a van with several armed LASD deputies showed up and tried to force E.Z. to hand over his phone. E.Z. escaped unharmed, and alerted 911.

E.Z. appears to be short for Enzo Zelocchi, a self-described “actor” who was featured in an ABC News story about a home invasion in Los Angeles around that same time, in which Zelocchi is quoted as saying at least two men tried to rob him at gunpoint (we’ll revisit Zelocchi’s acting credits in a moment).

One of many self portraits published on the Instagram account of Enzo Zelocchi.

The criminal complaint makes frequent references to a co-conspirator of Iza (“CC-1”) — his girlfriend at the time — who allegedly helped Iza run his businesses and spend the millions plunked down by Zort investors. We know what E.Z. stands for because Iza’s girlfriend then was a woman named Iris Au, and in November 2022 she sued Zelocchi for allegedly stealing Iza’s laptop.

The complaint says Iza also harassed a man identified only as T.W., and refers to T.W. as one of two Americans currently incarcerated in the Philippines for murder. In December 2018, a then 21-year-old Troy Woody Jr. was arrested in Manila after he was spotted dumping the body of his dead girlfriend Tomi Masters into a local river.

Woody is accused of murdering Masters with the help of his best friend and roommate at the time: Mir Islam, a.k.a. “JoshTheGod,” referred to in the Iza complaint as “M.I.” Islam and Woody were both core members of UGNazi, a hacker collective that sprang up in 2012 and claimed credit for hacking and attacking a number of high-profile websites.

In June 2016, Islam was sentenced to a year in prison for an impressive array of crimes, including stalking people online and posting their personal data on the Internet. Islam also pleaded guilty to reporting dozens of phony bomb threats and fake hostage situations at the homes of celebrities and public officials (Islam participated in a swatting attack against this author in 2013).

Troy Woody Jr. (left) and Mir Islam are currently in prison in the Philippines for murder.

In December 2022, Troy Woody Jr. sued Iza, Zelocchi and Zort, alleging (PDF) Iza and Zelocchi were involved in a 2018 home invasion at his residence, wherein Woody claimed his assailants stole laptops and phones containing more than $200 million in cryptocurrencies.

Woody’s complaint states that Masters also was present during his 2018 home invasion, as was another core UGNazi member: Eric “CosmoTheGod” Taylor. CosmoTheGod rocketed to Internet infamy in 2013 when he and a number of other hackers set up the Web site exposed[dot]su, which published the addresses, Social Security numbers and other personal information of public figures, including the former First Lady Michelle Obama, the then-director of the FBI and the U.S. attorney general. The group also swatted many of the people they doxed.

Exposed was built with the help of identity information obtained and/or stolen from ssndob[dot]ru.

In 2017, Taylor was sentenced to three years probation for participating in multiple swatting attacks, including the one against my home in 2013.

The complaint against Iza says the FBI interviewed Woody in Manila, where he is currently incarcerated, and learned that Iza has been harassing him about passwords that would unlock access to cryptocurrencies. The FBI’s complaint leaves open the question of how Woody and Islam got the phones in the first place, but the implication is that Iza may have instigated the harassment by having mobile phones smuggled to the prisoners.

The government suggests its case against Iza was made possible in part thanks to Iza’s propensity for ripping off people who worked for him. The complaint cites information provided by a private investigator identified only as “K.C.,” who said Iza hired him to surveil Zelocchi but ultimately refused to pay him for much of the work.

K.C. stands for Kenneth Childs, who in 2022 sued Iris Au and Zort (PDF) for theft by deception and commercial disparagement, after it became clear his private eye services were being used as part of a scheme by the Zort founders to intimidate and extort others. Childs’ complaint says Iza clawed back tens of thousands of dollars in payments he’d previously made as part of their contract.

The government also included evidence provided by an associate of Iza’s — named only as “R.C.” — who was hired to throw a party at Iza’s home. According to the feds, Iza paid the associate $50,000 to craft the event to his liking, but on the day of the party Iza allegedly told R.C. he was unhappy with the event and demanded half of his money back.

When R.C. balked, Iza allegedly surrounded the man with armed LASD officers, who then extracted the payment by seizing his phone. The government says Iza kept R.C.’s phone and spent the remainder of his bank balance.

A photo Iza allegedly sent to Tassilo Heinrich immediately after Heinrich’s arrest on unsubstantiated drug charges.

The FBI said that after the incident at the party, Iza had his bribed sheriff deputies pull R.C. over and arrest him on phony drug charges. The complaint includes a photo of R.C. being handcuffed by the police, which the feds say Iza sent to R.C. in order to intimidate him even further. The drug charges were later dismissed for lack of evidence.

The government alleges Iza and Au paid the LASD officers using Zelle transfers from accounts tied to two different entities incorporated by one or both of them: Dream Agency and Rise Agency. The complaint further alleges that these two entities were the beneficiaries of a business that sold hacked and phished Facebook advertising accounts, and bribed Facebook employees to unblock ads that violated its terms of service.

The complaint says Iza ran this business with another individual identified only as “T.H.,” and that at some point T.H. had personal problems and checked himself into rehab. T.H. told the FBI that Iza responded by stealing his laptop and turning him in to the government.

KrebsOnSecurity has learned that T.H. in this case is Tassilo Heinrich, a man indicted in 2022 for hacking into the e-commerce platform Shopify, and leaking the user database for Ledger, a company that makes hardware wallets for storing cryptocurrencies.

Heinrich pleaded guilty and was sentenced to time served, three years of supervised release, and ordered to pay restitution to Shopify. Upon his release from custody, Heinrich told the FBI that Iza was still using his account at the public screenshot service Gyazo to document communications regarding his alleged bribing of LASD officers.

Prosecutors say Iza and Au portrayed themselves as glamorous and wealthy individuals who were successful social media influencers, but that most of that was a carefully crafted facade designed to attract investment from cryptocurrency enthusiasts. Meanwhile, the U.K. tabloids reported this summer that Au was dating Davide Sanclimenti, the 2022 co-winner on the dating reality show Love Island.

Au was featured on the July 2024 cover of “Womenpreneur Middle East.”

Recall that we promised to revisit Mr. Zelocchi’s claimed acting credits. Although Mr. Zelocchi was briefly listed on the Internet Movie Database (imdb.com) as the most awarded science fiction actor of all time, it’s not clear whether he has starred in any real movies.

Earlier this year, an Internet sleuth on YouTube showed that even though Zelocchi’s IMDb profile has him earning more awards than most other actors on the platform (here he is holding a YouTube top viewership award), Zelocchi is probably better known as the director of the movie once rated the absolute worst sci-fi flick on IMDb: a 2015 work called “Angel’s Apocalypse.” Most of the videos on Zelocchi’s Instagram page appear to be brief clips, some of which look more like a commercial for men’s cologne than scenes from a real movie.

A Reddit post from a year ago calling attention to Zelocchi’s sci-fi film Angel’s Apocalypse somehow earning more audience votes than any other movie in the same genre.

In many ways, the crimes described in this complaint and the various related civil lawsuits prefigure a disturbing new trend that has bubbled up within English-speaking cybercrime communities over the past few years: the emergence of “violence-as-a-service” offerings that allow cybercriminals to anonymously extort and intimidate their rivals.

Found on certain Telegram channels are solicitations for IRL or “In Real Life” jobs, wherein people hire themselves out as willing to commit a variety of physical attacks in their local geographic area, such as slashing tires, firebombing a home, or tossing a brick through someone’s window.

Many of the cybercriminals in this community have stolen tens of millions of dollars worth of cryptocurrency, and can easily afford to bribe police officers. KrebsOnSecurity would expect to see more of this in the future as young, crypto-rich cybercriminals seek to corrupt people in authority to their advantage.
