—————
Free Secure Email – Transcom Sigma
Boost Inflight Internet
Transcom Hosting
Transcom Premium Domains
Author: admin
CeranaKeeper Emerges as New Threat to Thai Government Networks
—————
Cybersecurity Spending on the Rise, But Security Leaders Still Feel Vulnerable
—————
Northern Ireland Police Data Leak Sees Service Fined by ICO
—————
Crypto-Doubling Scams Surge Following Presidential Debate
—————
Email Phishing Attacks Surge as Attackers Bypass Security Controls
—————
FIN7 Gang Hides Malware in AI “Deepnude” Sites
—————
Weird Zimbra Vulnerability
Hackers can execute commands on a remote computer by sending malformed emails to a Zimbra mail server. It’s critical, but difficult to exploit.
In an email sent Wednesday afternoon, Proofpoint researcher Greg Lesnewich seemed to largely concur that the attacks weren’t likely to lead to mass infections that could install ransomware or espionage malware. The researcher provided the following details:
- While the exploitation attempts we have observed were indiscriminate in targeting, we haven’t seen a large volume of exploitation attempts
- Based on what we have researched and observed, exploitation of this vulnerability is very easy, but we do not have any information about how reliable the exploitation is
- Exploitation has remained about the same since we first spotted it on Sept. 28th
- There is a PoC available, and the exploit attempts appear opportunistic
- Exploitation is geographically diverse and appears indiscriminate
- The fact that the attacker is using the same server to send the exploit emails and host second-stage payloads indicates the actor does not have a distributed set of infrastructure to send exploit emails and handle infections after successful exploitation. We would expect the email server and payload servers to be different entities in a more mature operation.
- Defenders protecting Zimbra appliances should look out for odd CC or To addresses that look malformed or contain suspicious strings, as well as logs from the Zimbra server indicating outbound connections to remote IP addresses.
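Proofpoint's last recommendation can be approximated in code. The sketch below is a hypothetical heuristic, not an official Zimbra or Proofpoint detection rule: the exploit emails reportedly smuggled shell commands inside quoted local parts of CC/To addresses, so it flags addresses whose local part contains shell metacharacters.

```python
import re

# Hypothetical heuristic: exploit attempts reportedly hid shell commands
# inside quoted CC/To address local parts, so flag shell metacharacters
# ($, backticks, pipes, semicolons, ampersands) before the "@".
SHELL_CHARS = re.compile(r"[`$|;&]")

def is_suspicious_address(addr: str) -> bool:
    """Return True if an email address looks malformed or command-laden."""
    local, sep, _domain = addr.rpartition("@")
    if not sep:          # no "@" at all: malformed on its face
        return True
    return bool(SHELL_CHARS.search(local))
```

Run something like this over CC/To headers extracted from mail logs, and pair it with alerting on outbound connections from the Zimbra host to unfamiliar remote IPs.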
—————
My Instagram Has Been Hacked – What Do I Do Now?
In my world of middle-aged mums (mams), Instagram is by far the most popular social media platform. While many of us still have Facebook, Instagram is where it all happens: messaging, sharing, and yes, of course – shopping!! So, when one of my gal pals discovers that her Instagram account has been hacked, there is understandably a lot of panic!
How Popular Is Instagram?
Believe it or not, Facebook is still hanging onto the top spot as the most popular social media platform with just over 3 billion active monthly users, according to Statista. YouTube comes in 2nd place with 2.5 billion users. Instagram and WhatsApp tie in 3rd place with 2 billion users each. Interestingly, TikTok has 1.5 billion users and rounds out the top five – but watch this space, I say!
Why Do Hackers Want To Hack My Instagram?
Despite Facebook having the most monthly users, it isn’t where the personal conversations and engagement take place. That’s Instagram’s sweet spot. Instagram messaging is where links are shared and real personal interaction occurs. In fact, a new report shows that Instagram accounts are targeted more than any other online account and make up just over a quarter of all social media hacks. So, it makes sense why hackers would expend considerable energy in trying to hack Instagram accounts. They’ll have a much greater chance of success if they use a platform where there is an appetite and trust for sharing links and personal conversations.
But why do they want to get their hands on your account? Well, they may want to steal your personal information, scam your loyal followers by impersonating you, sell your username on the black market or even demand ransoms! Hacking Instagram is big business for professional scammers!!
What To Do If You’ve Been Hacked
So, you reach for your phone early one morning to do a quick scroll on Instagram before you start the day, but you can’t seem to log on. Mmmmm. You then see some texts from friends checking whether you have in fact become a cryptocurrency expert overnight. OK – something’s off. You then notice an email from Instagram notifying you that the email linked to your account has been changed. Looks like you’ve been hacked! But please don’t spend any time stressing. The most important thing is to take action ASAP as the longer hackers have access to your account, the greater the chance they can infiltrate your life and create chaos.
The good news is that if you act quickly and strategically, you may be able to get your account back. Here is what I suggest you do – fast:
1. Change Your Password & Check Your Account
If you are still able to log in to your account then change your password immediately. And ensure it is a password you haven’t used anywhere else. Then do a quick audit of your account and fix any changes the hacker may have made, e.g. remove access to any device you don’t recognise, uninstall any apps you didn’t install, and delete any email addresses that aren’t yours.
Next, turn on two-factor authentication (2FA) to make it harder for the hacker to get back into your account. This will take you less than a minute and is absolutely critical. Instagram will give you the option to receive the login code either via text message or via an authentication app. I always recommend the app in case you ever lose control of your phone.
But, if you are locked out of your account then move on to step 2.
2. Locate The Email From Instagram
Every time there is a change to your account details or some new login activity, Instagram will automatically send a message to the email address linked with the account.
But there’s good news here. The email from Instagram will ask you if you in fact made the changes and will provide a link to secure your account in case it wasn’t you. Click on this link!! If you can access your account this way, immediately check that the only linked email address and recovery phone number are yours and delete anything that isn’t yours. Then change your password.
But if you’ve had no luck with this step, move on to step 3.
3. Request a Log-In Link
You can also ask Instagram to email or text you a login link. On an iPhone, you just need to select ‘forgot password?’ and on your Android phone, tap ‘get help logging in’. You will need to enter the username, email address, and phone number linked to your account.
No luck? Keep going…
4. Request a Security Code
If the login link won’t get you back in, the next step is to request a security code. Simply enter the username, email address, or phone number associated with your account, then tap on “Need more help?” Select your email address or phone number, then tap “Send security code” and follow the instructions.
5. Video Selfie
If you have exhausted all of these options and you’ve had no luck then chances are you have found your way to the Instagram Support Team. If you haven’t, simply click on the link and it will take you there. Now, if your hacked account contained pictures of you then you might just be in luck! The Support Team may ask you to take a video selfie to confirm who you are and that in fact you are a real person! This process can take a few business days. If you pass the test, you’ll be sent a link to reset your password.
How To Prevent Yourself From Getting Hacked?
So, you’ve got your Instagram account back – well done! But wouldn’t it be good to avoid all that stress again? Here are my top tips to make it hard for those hackers to take control of your Insta.
1. It’s All About Passwords
I have no doubt you’ve heard this before but it’s essential, I promise! Ensuring you have a complex and unique password for your Instagram account (and all your online accounts) is THE best way of keeping the hackers at bay. And if you’re serious about this, you need to get yourself a password manager that can create (and remember) crazily complex and random passwords that are beyond any human ability to create. Check out McAfee’s True Key – a complete no-brainer!
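To make the password-manager point concrete, here is a minimal sketch of how such a tool generates passwords, using Python's standard `secrets` module. This is an illustration of the idea, not how True Key is actually implemented:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password from letters, digits, and punctuation
    using a cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

The key point is `secrets` rather than `random`: the former draws from the operating system's secure random source, so the output is unpredictable even to someone who knows when it was generated.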
2. Turn on Multifactor Authentication (MFA)
Multi-factor authentication adds another layer of security to your account making it that much harder for a hacker to get in. It takes minutes to set up and is essential if you’re serious about protecting yourself. It simply involves using a code to log in, in addition to your password. You can choose to receive the code via a text message or an authenticator app – always choose the app!
3. Choose How To Receive Login Alerts
Acting fast is the name of the game here so ensure your account is set up with your best contact details, so you receive login alerts ASAP. This can be the difference between salvaging your account and not. Ensure the alerts will be sent to where you are most likely to see them first so you can take action straight away!
4. Audit Any Third-Party Apps
Third-party apps that you have connected to your account could potentially be a security risk. So, only ever give third-party apps permission to access your account when absolutely necessary. I suggest taking a few minutes to disconnect any apps you no longer require to keep your private data as secure as possible.
Believe it or not, Instagram is not just an arena for middle-aged mums! I can guarantee that your teens will be on there too. So, next time you’re sharing a family dinner, why not tell them what you’re doing to prevent yourself from getting hacked? And if you’re not convinced they are listening? Perhaps remind them just how devastating it would be to lose access to their pics and their people. I am sure that might just work.
Till next time
Stay safe online!
Alex
The post My Instagram Has Been Hacked – What Do I Do Now? appeared first on McAfee Blog.
—————
A Single Cloud Compromise Can Feed an Army of AI Sex Bots
Organizations that get relieved of credentials to their cloud environments can quickly find themselves part of a disturbing new trend: Cybercriminals using stolen cloud credentials to operate and resell sexualized AI-powered chat services. Researchers say these illicit chat bots, which use custom jailbreaks to bypass content filtering, often veer into darker role-playing scenarios, including child sexual exploitation and rape.
Researchers at security firm Permiso Security say attacks against generative artificial intelligence (AI) infrastructure like Bedrock from Amazon Web Services (AWS) have increased markedly over the last six months, particularly when someone in the organization accidentally exposes their cloud credentials or key online, such as in a code repository like GitHub.
Investigating the abuse of AWS accounts for several organizations, Permiso found attackers had seized on stolen AWS credentials to interact with the large language models (LLMs) available on Bedrock. But they also soon discovered none of these AWS users had enabled full logging of LLM activity (by default, logs don’t include model prompts and outputs), and thus they lacked any visibility into what attackers were doing with that access.
So Permiso researchers decided to leak their own test AWS key on GitHub, while turning on logging so that they could see exactly what an attacker might ask for, and what the responses might be.
Within minutes, their bait key was scooped up and used in a service that offers AI-powered sex chats online.
“After reviewing the prompts and responses it became clear that the attacker was hosting an AI roleplaying service that leverages common jailbreak techniques to get the models to accept and respond with content that would normally be blocked,” Permiso researchers wrote in a report released today.
“Almost all of the roleplaying was of a sexual nature, with some of the content straying into darker topics such as child sexual abuse,” they continued. “Over the course of two days we saw over 75,000 successful model invocations, almost all of a sexual nature.”
Ian Ahl, senior vice president of threat research at Permiso, said attackers in possession of a working cloud account traditionally have used that access for run-of-the-mill financial cybercrime, such as cryptocurrency mining or spam. But over the past six months, Ahl said, Bedrock has emerged as one of the top targeted cloud services.
“Bad guy hosts a chat service, and subscribers pay them money,” Ahl said of the business model for commandeering Bedrock access to power sex chat bots. “They don’t want to pay for all the prompting that their subscribers are doing, so instead they hijack someone else’s infrastructure.”
Ahl said much of the AI-powered chat conversations initiated by the users of their honeypot AWS key were harmless roleplaying of sexual behavior.
“But a percentage of it is also geared toward very illegal stuff, like child sexual assault fantasies and rapes being played out,” Ahl said. “And these are typically things the large language models won’t be able to talk about.”
AWS’s Bedrock uses large language models from Anthropic, which incorporates a number of technical restrictions aimed at placing certain ethical guardrails on the use of their LLMs. But attackers can evade or “jailbreak” their way out of these restricted settings, usually by asking the AI to imagine itself in an elaborate hypothetical situation under which its normal restrictions might be relaxed or discarded altogether.
“A typical jailbreak will pose a very specific scenario, like you’re a writer who’s doing research for a book, and everyone involved is a consenting adult, even though they often end up chatting about nonconsensual things,” Ahl said.
In June 2024, security experts at Sysdig documented a new attack that leveraged stolen cloud credentials to target ten cloud-hosted LLMs. The attackers Sysdig wrote about gathered cloud credentials through a known security vulnerability, but the researchers also found the attackers sold LLM access to other cybercriminals while sticking the cloud account owner with an astronomical bill.
“Once initial access was obtained, they exfiltrated cloud credentials and gained access to the cloud environment, where they attempted to access local LLM models hosted by cloud providers: in this instance, a local Claude (v2/v3) LLM model from Anthropic was targeted,” Sysdig researchers wrote. “If undiscovered, this type of attack could result in over $46,000 of LLM consumption costs per day for the victim.”
Ahl said it’s not certain who is responsible for operating and selling these sex chat services, but Permiso suspects the activity may be tied to a platform cheekily named “chub[.]ai,” which offers a broad selection of pre-made AI characters with whom users can strike up a conversation. Permiso said almost every character name from the prompts they captured in their honeypot could be found at Chub.
Chub offers free registration, via its website or a mobile app. But after a few minutes of chatting with their newfound AI friends, users are asked to purchase a subscription. The site’s homepage features a banner at the top that reads: “Banned from OpenAI? Get unmetered access to uncensored alternatives for as little as $5 a month.”
Until late last week Chub offered a wide selection of characters in a category called “NSFL” or Not Safe for Life, a term meant to describe content that is disturbing or nauseating to the point of being emotionally scarring.
Fortune profiled Chub AI in a January 2024 story that described the service as a virtual brothel advertised by illustrated girls in spaghetti strap dresses who promise a chat-based “world without feminism,” where “girls offer sexual services.” From that piece:
Chub AI offers more than 500 such scenarios, and a growing number of other sites are enabling similar AI-powered child pornographic role-play. They are part of a broader uncensored AI economy that, according to Fortune’s interviews with 18 AI developers and founders, was spurred first by OpenAI and then accelerated by Meta’s release of its open-source Llama tool.
Fortune says Chub is run by someone using the handle “Lore,” who said they launched the service to help others evade content restrictions on AI platforms. Chub charges fees starting at $5 a month to use the new chatbots, and the founder told Fortune the site had generated more than $1 million in annualized revenue.
KrebsOnSecurity sought comment about Permiso’s research from AWS, which initially seemed to downplay the seriousness of the researchers’ findings. The company noted that AWS employs automated systems that will alert customers if their credentials or keys are found exposed online.
AWS explained that when a key or credential pair is flagged as exposed, it is then restricted to limit the amount of abuse that attackers can potentially commit with that access. For example, flagged credentials can’t be used to create or modify authorized accounts, or spin up new cloud resources.
Ahl said Permiso did indeed receive multiple alerts from AWS about their exposed key, including one that warned their account may have been used by an unauthorized party. But they said the restrictions AWS placed on the exposed key did nothing to stop the attackers from using it to abuse Bedrock services.
Sometime in the past few days, however, AWS responded by including Bedrock in the list of services that will be quarantined in the event an AWS key or credential pair is found compromised or exposed online. AWS confirmed that Bedrock was a new addition to its quarantine procedures.
Additionally, not long after KrebsOnSecurity began reporting this story, Chub’s website removed its NSFL section. It also appears to have removed cached copies of the site from the Wayback Machine at archive.org. Still, Permiso found that Chub’s user stats page shows the site has more than 3,000 AI conversation bots with the NSFL tag, and that 2,113 accounts were following the NSFL tag.
Permiso said their entire two-day experiment generated a $3,500 bill from AWS. Most of that cost was tied to the 75,000 LLM invocations caused by the sex chat service that hijacked their key.
Paradoxically, Permiso found that while enabling these logs is the only way to know for sure how crooks might be using a stolen key, the cybercriminals who are reselling stolen or exposed AWS credentials for sex chats have started including programmatic checks in their code to ensure they aren’t using AWS keys that have prompt logging enabled.
“Enabling logging is actually a deterrent to these attackers because they are immediately checking to see if you have logging on,” Ahl said. “At least some of these guys will totally ignore those accounts, because they don’t want anyone to see what they’re doing.”
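That pre-flight check is straightforward to write. The sketch below is a hypothetical reconstruction of the idea, not the attackers' actual code; the response shape assumed for Bedrock's `get_model_invocation_logging_configuration()` call follows the published AWS API and may differ in detail.

```python
# Hypothetical reconstruction of the attackers' pre-flight check:
# abandon any stolen key whose account has Bedrock prompt/response
# logging enabled. The parsing helper is pure, so it runs offline.

def prompt_logging_enabled(response: dict) -> bool:
    """True if a get_model_invocation_logging_configuration() response
    indicates model inputs/outputs are being captured."""
    cfg = response.get("loggingConfig") or {}
    return bool(
        cfg.get("textDataDeliveryEnabled")
        or cfg.get("cloudWatchConfig")
        or cfg.get("s3Config")
    )

def key_is_safe_to_abuse(session) -> bool:
    """session is a boto3.Session built from the stolen credentials."""
    bedrock = session.client("bedrock")
    resp = bedrock.get_model_invocation_logging_configuration()
    return not prompt_logging_enabled(resp)
```

This is also why enabling invocation logging works as a deterrent: the check runs before any abuse starts, and a logged account simply gets skipped.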
In a statement shared with KrebsOnSecurity, AWS said its services are operating securely, as designed, and that no customer action is needed. Here is their statement:
“AWS services are operating securely, as designed, and no customer action is needed. The researchers devised a testing scenario that deliberately disregarded security best practices to test what may happen in a very specific scenario. No customers were put at risk. To carry out this research, security researchers ignored fundamental security best practices and publicly shared an access key on the internet to observe what would happen.”
“AWS, nonetheless, quickly and automatically identified the exposure and notified the researchers, who opted not to take action. We then identified suspected compromised activity and took additional action to further restrict the account, which stopped this abuse. We recommend customers follow security best practices, such as protecting their access keys and avoiding the use of long-term keys to the extent possible. We thank Permiso Security for engaging AWS Security.”
AWS said customers can configure model invocation logging to collect Bedrock invocation logs, model input data, and model output data for all invocations in the AWS account used in Amazon Bedrock. Customers can also use CloudTrail to monitor Amazon Bedrock API calls.
The company said AWS customers also can use services such as GuardDuty to detect potential security concerns and Billing Alarms to provide notifications of abnormal billing activity. Finally, AWS Cost Explorer is intended to give customers a way to visualize and manage Bedrock costs and usage over time.
Anthropic told KrebsOnSecurity it is always working on novel techniques to make its models more resistant to jailbreaks.
“We remain committed to implementing strict policies and advanced techniques to protect users, as well as publishing our own research so that other AI developers can learn from it,” Anthropic said in an emailed statement. “We appreciate the research community’s efforts in highlighting potential vulnerabilities.”
Anthropic said it uses feedback from child safety experts at Thorn around signals often seen in child grooming to update its classifiers, enhance its usage policies, fine tune its models, and incorporate those signals into testing of future models.
Update: 5:01 p.m. ET: Chub has issued a statement saying they are only hosting the role-playing characters, and that the LLMs they use run on their own infrastructure.
“Our own LLMs run on our own infrastructure,” Chub wrote in an emailed statement. “Any individuals participating in such attacks can use any number of UIs that allow user-supplied keys to connect to third-party APIs. We do not participate in, enable or condone any illegal activity whatsoever.”
—————