News

Hacking ChatGPT by Planting False Memories into Its Data

This vulnerability abuses the feature that gives ChatGPT long-term memory, in which information from past conversations informs future conversations with the same user. A researcher found that he could use that feature to plant “false memories” in this persistent context, subverting the model’s behavior.

A month later, the researcher submitted a new disclosure statement. This time, he included a PoC that caused the ChatGPT app for macOS to send a verbatim copy of all user input and ChatGPT output to a server of his choice. All a target needed to do was instruct the LLM to view a web link that hosted a malicious image. From then on, all input and output to and from ChatGPT was sent to the attacker’s website.

—————
Free Secure Email – Transcom Sigma
Boost Inflight Internet
Transcom Hosting
Transcom Premium Domains

Top Tips for Cybersecurity Awareness Month

Imagine this: you wake up one morning to find that your bank account has been emptied overnight. Someone halfway across the world has accessed your account using a password you thought was secure. Incidents like these are unfortunately becoming more common, with identity theft and fraud cases steadily increasing over the last decade.

This month is Cybersecurity Awareness Month, with the theme “Secure Our World,” which serves as a timely reminder to reassess and enhance your cybersecurity strategies against ever-evolving cyber threats. In an election year, the digital landscape becomes a breeding ground for cyber scams and malicious activities aimed at exploiting political fervor and public uncertainty. With the 2024 election on the horizon, it’s more critical than ever to strengthen our cybersecurity defenses.

By prioritizing cybersecurity awareness and implementing robust protective measures during this dedicated month, you can safeguard your personal information, protect your financial assets, and ensure the security of your digital interactions. Let’s explore five simple yet powerful ways to increase your internet security and have peace of mind in today’s digital landscape.

1. Use Complex Passwords and a Password Manager

Passwords serve as the first line of defense against unauthorized access to your accounts, but 78% of people use the same password for more than one account. Here’s how you can create and manage complex passwords:

  • Aim for at least 12 characters.
  • Use a mix of uppercase and lowercase letters, numbers, and special characters.
  • Avoid easily guessable information like birthdays or names.
  • Consider using a password manager, which generates and stores complex passwords for each of your accounts securely. They also autofill passwords for you, reducing the temptation to use weak or duplicate passwords.
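The character-class advice above can be sketched in code. This is an illustrative example, not part of the original article: the function name and the particular set of special characters are my own choices, and a reputable password manager remains the better option in practice.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Randomly generate a password mixing all four character classes."""
    if length < 12:
        raise ValueError("use at least 12 characters")
    classes = [
        string.ascii_lowercase,
        string.ascii_uppercase,
        string.digits,
        "!@#$%^&*()-_=+",  # assumed special-character set
    ]
    # Guarantee at least one character from each class, then fill the rest
    # from the combined alphabet and shuffle so positions are unpredictable.
    chars = [secrets.choice(c) for c in classes]
    alphabet = "".join(classes)
    chars += [secrets.choice(alphabet) for _ in range(length - len(chars))]
    secrets.SystemRandom().shuffle(chars)
    return "".join(chars)
```

Note the use of the `secrets` module rather than `random`: the former draws from the operating system’s cryptographically secure source, which is what password generation requires.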

2. Turn On Multifactor Authentication

Multifactor authentication (MFA) adds an extra layer of security by requiring two or more of the following factors to access your accounts:

  • Knowledge Factor: Something only the user knows, such as a password or PIN.
  • Possession Factor: Something the user has, like a smartphone or security token that generates a one-time code.
  • Inherence Factor: Something inherent to the user, such as biometric data (fingerprint, facial recognition).

Follow these steps to enable multifactor authentication:

  • Go to your account settings on each platform (e.g., email, social media, banking).
  • Look for the security or privacy settings.
  • Enable MFA by choosing to receive a code via SMS, email, or a dedicated app like Google Authenticator or Authy.
  • Follow the prompts to complete the setup.

3. Recognize and Report Phishing Attempts

Phishing is a common tactic in which cybercriminals impersonate legitimate entities, such as banks or reputable companies, to lure individuals into disclosing sensitive information like passwords or credit card numbers. These attacks often arrive via email, text messages, or fake websites designed to appear authentic, exploiting human trust and curiosity to steal valuable data for malicious purposes.

How to spot phishing attempts

Identifying Phishing Emails:

  • Check the sender’s email address for inconsistencies or suspicious domains.
  • Be skeptical of urgent language, requests for personal information, or threats of consequences.
  • Verify links by hovering over them to see the actual URL before clicking.
  • If an email contains an attachment that you weren’t expecting or that seems suspicious (such as .exe files), do not open it. Attachments can contain malware that compromises your computer’s security.
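The link-verification tip above can be expressed as a simple heuristic: if a link’s visible text looks like a web address but names a different domain than the link’s real destination, treat it as suspicious. This is an illustrative sketch, not part of the original article; the function name and matching rules are my own assumptions, and real phishing detection is considerably more involved.

```python
from urllib.parse import urlparse

def looks_suspicious(display_text: str, href: str) -> bool:
    """Flag links whose visible text names a different domain than the target."""
    real = urlparse(href).hostname or ""
    shown_url = display_text if "://" in display_text else "http://" + display_text
    shown = urlparse(shown_url).hostname or ""
    if "." not in shown:
        return False  # display text is not a web address; nothing to compare
    # Allow exact matches and subdomain relationships in either direction
    # (e.g. "example.com" shown for "www.example.com").
    same_site = (real == shown
                 or real.endswith("." + shown)
                 or shown.endswith("." + real))
    return not same_site
```

For example, an email showing “www.mybank.com” that actually points at `http://evil.example.net/login` would be flagged, while “example.com” linking to `https://www.example.com/` would not.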

Reporting Phishing:

  • If you receive a phishing email, report it to the organization being impersonated (e.g., your bank).
  • Most email providers also have tools to report phishing directly from your inbox.

4. Update Software Regularly

Software updates, also known as patches, often include security fixes to protect against known vulnerabilities. Here’s how to keep your software up to date:

Updating Operating Systems and Applications:

  • Enable automatic updates on your devices: Most operating systems (such as Windows, macOS, and Linux) and software applications allow you to enable automatic updates. This feature ensures that your software receives the latest patches and security fixes without requiring manual intervention.
  • Check for Updates Manually: Even if you have automatic updates enabled, it’s a good practice to manually check for updates periodically, especially for critical software like your operating system, web browsers, antivirus programs, and any applications you use frequently.
  • Mobile Devices and Apps: For smartphones and tablets, regularly check for updates in your device’s settings. Additionally, update apps from trusted app stores like Google Play Store (Android) or Apple App Store (iOS).
  • Update All Software: It’s not just your operating system and major applications that need updating. Remember to update plugins (like Adobe Flash, Java, and browser extensions) and firmware for devices like routers connected to your network.

5. Secure Your Social Media

Social media platforms are integral parts of modern communication, but they also pose significant security risks if not managed carefully. Here are essential tips to enhance your social media security:

  • Review Privacy Settings: Regularly review and adjust your privacy settings on social media platforms to control who can see your posts, personal information, and contact details. Limit the visibility of your profile and posts to only those you trust. Social Privacy Manager can streamline the customization of over 100 privacy settings on your social media accounts with just a few clicks.
  • Be Cautious of Third-Party Apps: Review and revoke access for third-party applications linked to your social media accounts that no longer need access or seem suspicious. These apps can potentially compromise your account’s security.

By implementing these straightforward yet effective cybersecurity practices, you can significantly reduce the risk of falling victim to online threats. McAfee+ can also keep you more secure and private online with 24/7 scans of the dark web to ensure your personal and financial info is safe, alerts about suspicious financial transactions and credit activity, and up to $2 million in identity theft coverage and restoration.

The post Top Tips for Cybersecurity Awareness Month appeared first on McAfee Blog.

—————

AI and the 2024 US Elections

For years now, AI has undermined the public’s ability to trust what it sees, hears, and reads. The Republican National Committee released a provocative ad offering an “AI-generated look into the country’s possible future if Joe Biden is re-elected,” showing apocalyptic, machine-made images of ruined cityscapes and chaos at the border. Fake robocalls purporting to be from Biden urged New Hampshire residents not to vote in the 2024 primary election. This summer, the Department of Justice cracked down on a Russian bot farm that was using AI to impersonate Americans on social media, and OpenAI disrupted an Iranian group using ChatGPT to generate fake social-media comments.

It’s not altogether clear what damage AI itself may cause, though the reasons for concern are obvious—the technology makes it easier for bad actors to construct highly persuasive and misleading content. With that risk in mind, there has been some movement toward constraining the use of AI, yet progress has been painstakingly slow in the area where it may count most: the 2024 election.

Two years ago, the Biden administration issued a blueprint for an AI Bill of Rights aiming to address “unsafe or ineffective systems,” “algorithmic discrimination,” and “abusive data practices,” among other things. Then, last year, Biden built on that document when he issued his executive order on AI. Also in 2023, Senate Majority Leader Chuck Schumer held an AI summit in Washington that included the centibillionaires Bill Gates, Mark Zuckerberg, and Elon Musk. Several weeks later, the United Kingdom hosted an international AI Safety Summit that led to the serious-sounding “Bletchley Declaration,” which urged international cooperation on AI regulation. The risks of AI fakery in elections have not sneaked up on anybody.

Yet none of this has resulted in changes that would resolve the use of AI in U.S. political campaigns. Even worse, the two federal agencies with a chance to do something about it have punted the ball, very likely until after the election.

On July 25, the Federal Communications Commission issued a proposal that would require political advertisements on TV and radio to disclose if they used AI. (The FCC has no jurisdiction over streaming, social media, or web ads.) That seems like a step forward, but there are two big problems. First, the proposed rules, even if enacted, are unlikely to take effect before early voting starts in this year’s election. Second, the proposal immediately devolved into a partisan slugfest. A Republican FCC commissioner alleged that the Democratic National Committee was orchestrating the rule change because Democrats are falling behind the GOP in using AI in elections. Plus, he argued, this was the Federal Election Commission’s job to do.

Yet last month, the FEC announced that it won’t even try making new rules against using AI to impersonate candidates in campaign ads through deepfaked audio or video. The FEC also said that it lacks the statutory authority to make rules about misrepresentations using deepfaked audio or video. And it lamented that it lacks the technical expertise to do so, anyway. Then, last week, the FEC compromised, announcing that it intends to enforce its existing rules against fraudulent misrepresentation regardless of what technology it is conducted with. Advocates for stronger rules on AI in campaign ads, such as Public Citizen, did not find this nearly sufficient, characterizing it as a “wait-and-see approach” to handling “electoral chaos.”

Perhaps this is to be expected: The freedom of speech guaranteed by the First Amendment generally permits lying in political ads. But the American public has signaled that it would like some rules governing AI’s use in campaigns. In 2023, more than half of Americans polled responded that the federal government should outlaw all uses of AI-generated content in political ads. Going further, in 2024, about half of surveyed Americans said they thought that political candidates who intentionally manipulated audio, images, or video should be prevented from holding office or removed if they had won an election. Only 4 percent thought there should be no penalty at all.

The underlying problem is that Congress has not clearly given any agency the responsibility to keep political advertisements grounded in reality, whether in response to AI or old-fashioned forms of disinformation. The Federal Trade Commission has jurisdiction over truth in advertising, but political ads are largely exempt—again, part of our First Amendment tradition. The FEC’s remit is campaign finance, but the Supreme Court has progressively stripped its authorities. Even where it could act, the commission is often stymied by political deadlock. The FCC has more evident responsibility for regulating political advertising, but only in certain media: broadcast, robocalls, text messages. Worse yet, the FCC’s rules are not exactly robust. It has actually loosened rules on political spam over time, leading to the barrage of messages many receive today. (That said, in February, the FCC did unanimously rule that robocalls using AI voice-cloning technology, like the Biden ad in New Hampshire, are already illegal under a 30-year-old law.)

It’s a fragmented system, with many important activities falling victim to gaps in statutory authority and a turf war between federal agencies. And as political campaigning has gone digital, it has entered an online space with even fewer disclosure requirements or other regulations. No one seems to agree where, or whether, AI is under any of these agencies’ jurisdictions. In the absence of broad regulation, some states have made their own decisions. In 2019, California was the first state in the nation to prohibit the use of deceptively manipulated media in elections, and has strengthened these protections with a raft of newly passed laws this fall. Nineteen states have now passed laws regulating the use of deepfakes in elections.

One problem that regulators have to contend with is the wide applicability of AI: The technology can simply be used for many different things, each one demanding its own intervention. People might accept a candidate digitally airbrushing their photo to look better, but not doing the same thing to make their opponent look worse. We’re used to getting personalized campaign messages and letters signed by the candidate; is it okay to get a robocall with a voice clone of the same politician speaking our name? And what should we make of the AI-generated campaign memes now shared by figures such as Musk and Donald Trump?

Despite the gridlock in Congress, these are issues with bipartisan interest. This makes it conceivable that something might be done, but probably not until after the 2024 election and only if legislators overcome major roadblocks. One bill under consideration, the AI Transparency in Elections Act, would instruct the FEC to require disclosure when political advertising uses media generated substantially by AI. Critics say, implausibly, that the disclosure is onerous and would increase the cost of political advertising. The Honest Ads Act would modernize campaign-finance law, extending FEC authority to definitively encompass digital advertising. However, it has languished for years because of reported opposition from the tech industry. The Protect Elections From Deceptive AI Act would ban materially deceptive AI-generated content from federal elections, as in California and other states. These are promising proposals, but libertarian and civil-liberties groups are already signaling challenges to all of these on First Amendment grounds. And, vexingly, at least one FEC commissioner has directly cited congressional consideration of some of these bills as a reason for his agency not to act on AI in the meantime.

One group that benefits from all this confusion: tech platforms. When few or no evident rules govern political expenditures online and uses of new technologies like AI, tech companies have maximum latitude to sell ads, services, and personal data to campaigns. This is reflected in their lobbying efforts, as well as the voluntary policy restraints they occasionally trumpet to convince the public they don’t need greater regulation.

Big Tech has demonstrated that it will uphold these voluntary pledges only if they benefit the industry. Facebook once, briefly, banned political advertising on its platform. No longer; now it even allows ads that baselessly deny the outcome of the 2020 presidential election. OpenAI’s policies have long prohibited political campaigns from using ChatGPT, but those restrictions are trivial to evade. Several companies have volunteered to add watermarks to AI-generated content, but they are easily circumvented. Watermarks might even make disinformation worse by giving the false impression that non-watermarked images are legitimate.

This important public policy should not be left to corporations, yet Congress seems resigned not to act before the election. Schumer hinted to NBC News in August that Congress may try to attach deepfake regulations to must-pass funding or defense bills this month to ensure that they become law before the election. More recently, he has pointed to the need for action “beyond the 2024 election.”

The three bills listed above are worthwhile, but they are just a start. The FEC and FCC should not be left to snipe with each other about what territory belongs to which agency. And the FEC needs more significant, structural reform to reduce partisan gridlock and enable it to get more done. We also need transparency into and governance of the algorithmic amplification of misinformation on social-media platforms. That requires limiting the pervasive influence of tech companies and their billionaire investors through stronger lobbying and campaign-finance protections.

Our regulation of electioneering never caught up to AOL, let alone social media and AI. And deceptive videos harm our democratic process, whether they are created by AI or actors on a soundstage. But the urgent concern over AI should be harnessed to advance legislative reform. Congress needs to do more than stick a few fingers in the dike to control the coming tide of election disinformation. It needs to act more boldly to reshape the landscape of regulation for political campaigning.

This essay originally appeared in The Atlantic.
