How Solid Protocol Restores Digital Agency

The current state of digital identity is a mess. Your personal information is scattered across hundreds of locations: social media companies, IoT companies, government agencies, websites you have accounts on, and data brokers you’ve never heard of. These entities collect, store, and trade your data, often without your knowledge or consent. The result is both redundant and inconsistent: you have hundreds, maybe thousands, of fragmented digital profiles that often contain contradictory or logically impossible information. Each serves its own purpose, yet there is no central point of control that serves you, the identity owner.

We’re used to the massive security failures resulting from all of this data being under the control of so many different entities. Years of privacy breaches have resulted in a multitude of laws—in US states, in the EU, and elsewhere—and calls for even more stringent protections. But while these laws attempt to protect data confidentiality, there is nothing to protect data integrity.

In this context, data integrity refers to the accuracy, consistency, and reliability of data throughout its lifecycle. It means ensuring that data is not only accurately recorded but also remains logically consistent across systems, stays up to date, and can be verified as authentic. When data lacks integrity, it can contain contradictions, errors, or outdated information—problems that can have serious real-world consequences.

Without data integrity, someone could classify you as a teenager while simultaneously attributing to you three teenage children: a biological impossibility. What’s worse, you have no visibility into the data profiles assigned to your identity, no mechanism to correct errors, and no authoritative way to update your information across all platforms where it resides.

Integrity breaches don’t get the same attention that confidentiality breaches do, but the picture isn’t pretty. A 2017 write-up in The Atlantic found error rates exceeding 50% in some categories of personal information. A 2019 audit of data brokers found that at least 40% of broker-sourced user attributes were “not at all” accurate. In 2022, the Consumer Financial Protection Bureau documented thousands of cases where consumers were denied housing, employment, or financial services based on logically impossible data combinations in their profiles. Similarly, the National Consumer Law Center’s report “Digital Denials” showed inaccuracies in tenant-screening data that blocked people from housing.

And integrity breaches can have significant effects on our lives. In one 2024 British case, two companies blamed each other for the faulty debt information that caused catastrophic financial consequences for an innocent victim. Breonna Taylor was killed in 2020 during a police raid on her apartment in Louisville, Kentucky, when officers executed a “no-knock” warrant on the wrong house based on bad data. They had faulty intelligence connecting her address to a suspect who actually lived elsewhere.

In some instances, we have rights to view our data, and in others, rights to correct it, but these sorts of solutions have only limited value. When journalist Julia Angwin attempted to correct her information across major data brokers for her book Dragnet Nation, she found that even after submitting corrections through official channels, a significant number of errors reappeared within six months.

In some instances, we have the right to delete our data, but—again—this only has limited value. Some data processing is legally required, and some is necessary for services we truly want and need.

Our focus needs to shift from the binary choice of either concealing our data entirely or surrendering all control over it. Instead, we need solutions that prioritize integrity in ways that balance privacy with the benefits of data sharing.

It’s not as if we haven’t made progress on better ways to manage online identity. Over the years, numerous trustworthy systems have been developed that could solve many of these problems. For example, imagine digital verification that works like a locked mobile phone: it works when you’re the one who unlocks and uses it, but not when someone else grabs it from you. Or consider a storage device that holds all your credentials, like your driver’s license, professional certifications, and healthcare information, and lets you selectively share one without giving away everything at once. Imagine being able to share just a single cell in a table or a specific field in a file. These technologies already exist, and they could let you securely prove specific facts about yourself without surrendering control of your whole identity. This isn’t just theoretically better than traditional usernames and passwords; these technologies represent a fundamental shift in how we think about digital trust and verification.

Standards to do all these things emerged during the Web 2.0 era. We mostly haven’t used them because platform companies have been more interested in building barriers around user data and identity. They’ve used control of user identity as a key to market dominance and monetization. They’ve treated data as a corporate asset, and resisted open standards that would democratize data ownership and access. Closed, proprietary systems have better served their purposes.

There is another way. The Solid protocol, invented by Sir Tim Berners-Lee, represents a radical reimagining of how data operates online. Solid stands for “SOcial LInked Data.” At its core, it decouples data from applications by storing personal information in user-controlled “data wallets”: secure, personal data stores that users can host anywhere they choose. Applications can access specific data within these wallets, but users maintain ownership and control.

Solid is more than distributed data storage. This architecture inverts the current data ownership model. Instead of companies owning user data, users maintain a single source of truth for their personal information. It integrates and extends all those established identity standards and technologies mentioned earlier, and forms a comprehensive stack that places personal identity at the architectural center.

This identity-first paradigm means that every digital interaction begins with the authenticated individual who maintains control over their data. Applications become interchangeable views into user-owned data, rather than data silos themselves. This enables unprecedented interoperability, as services can securely access precisely the information they need while respecting user-defined boundaries.

Solid ensures that user intentions are transparently expressed and reliably enforced across the entire ecosystem. Instead of each application implementing its own custom authorization logic and access controls, Solid establishes a standardized, declarative approach where permissions are explicitly defined through access control lists or policies attached to resources. Users can specify who has access to what data with granular precision, using simple statements like “Alice can read this document” or “Bob can write to this folder.” These permission rules remain consistent regardless of which application is accessing the data, eliminating the fragmentation and unpredictability of traditional authorization systems.
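
To make the declarative model concrete, here is a minimal TypeScript sketch that treats permission statements like “Alice can read this document” as plain data evaluated by one shared check. It is an illustration only: Solid itself expresses these rules in RDF-based access control documents (WAC or ACP), and the WebIDs and resource URLs below are hypothetical.

```typescript
// Minimal sketch of declarative, resource-attached permissions. Illustrative only:
// Solid expresses these rules as RDF access control documents, not TypeScript objects.

type Mode = "read" | "write" | "append" | "control";

interface AccessRule {
  agent: string;    // WebID of the person or app being granted access
  resource: string; // URL of the document or folder the rule is attached to
  modes: Mode[];    // what the agent may do
}

// "Alice can read this document"; "Bob can write to this folder" (hypothetical IDs).
const rules: AccessRule[] = [
  { agent: "https://alice.example/profile#me", resource: "https://pod.example/notes/doc1", modes: ["read"] },
  { agent: "https://bob.example/profile#me", resource: "https://pod.example/projects/", modes: ["read", "write"] },
];

// One shared check used by every application, instead of per-app authorization logic.
function isAllowed(agent: string, resource: string, mode: Mode): boolean {
  return rules.some(
    (r) =>
      r.agent === agent &&
      r.modes.includes(mode) &&
      // A rule on a folder (trailing slash) covers the resources inside it.
      (resource === r.resource || (r.resource.endsWith("/") && resource.startsWith(r.resource)))
  );
}

console.log(isAllowed("https://alice.example/profile#me", "https://pod.example/notes/doc1", "read"));         // true
console.log(isAllowed("https://bob.example/profile#me", "https://pod.example/projects/report.ttl", "write")); // true
console.log(isAllowed("https://alice.example/profile#me", "https://pod.example/projects/", "read"));          // false
```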

This architectural shift decouples applications from data infrastructure. Unlike Web 2.0 platforms like Facebook, which require massive back-end systems to store, process, and monetize user data, Solid applications can be lightweight and focused solely on functionality. Developers no longer need to build and maintain extensive data storage systems, surveillance infrastructure, or analytics pipelines. Instead, they can build specialized tools that request access to specific data in users’ wallets, with the heavy lifting of data storage and access control handled by the protocol itself.
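
As a rough picture of how lightweight such an application can be, the sketch below reads one resource from a user’s wallet over plain authenticated HTTP. The wallet URL and bearer token are hypothetical placeholders; a real Solid app would obtain its token through the Solid-OIDC login flow and might use a client library such as @inrupt/solid-client instead of raw fetch calls.

```typescript
// Sketch of a "lightweight" Solid-style app: no back-end database of its own, just
// authenticated HTTP reads against a resource in the user's wallet. The resource URL
// and bearer token are hypothetical placeholders; a real app would obtain the token
// via the Solid-OIDC login flow. Requires Node 18+ or a browser for global fetch.

const RESOURCE = "https://alice.pod.example/health/allergies.json"; // hypothetical
const ACCESS_TOKEN = "<token obtained during login>";               // hypothetical

async function readAllergies(): Promise<unknown> {
  const res = await fetch(RESOURCE, {
    headers: { Authorization: `Bearer ${ACCESS_TOKEN}` },
  });
  if (!res.ok) {
    // 401/403 means the user has not granted this app access to this resource.
    throw new Error(`Wallet refused access: ${res.status}`);
  }
  return res.json();
}

readAllergies()
  .then((data) => console.log("Allergy list from the user's wallet:", data))
  .catch((err) => console.error(err));
```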

Let’s take healthcare as an example. The current system forces patients to spread pieces of their medical history across countless proprietary databases controlled by insurance companies, hospital networks, and electronic health record vendors. Each patient frustratingly becomes a patchwork rather than a person, because they often can’t access their own complete medical history, let alone correct mistakes. Meanwhile, those third-party databases suffer regular breaches. The Solid protocol enables a fundamentally different approach. Patients maintain their own comprehensive medical record, with data cryptographically signed by trusted providers, in their own data wallet. When visiting a new healthcare provider, patients can arrive with their complete, verifiable medical history rather than starting from zero or waiting for bureaucratic record transfers.

When a patient needs to see a specialist, they can grant temporary, specific access to relevant portions of their medical history. For example, a patient referred to a cardiologist could share only cardiac-related records and essential background information. Or, on the flip side, the patient can share new and rich sources of related data with the specialist, like health and nutrition data. The specialist, in turn, can add their findings and treatment recommendations directly to the patient’s wallet, with a cryptographic signature verifying their medical credentials. This process eliminates dangerous information gaps while ensuring that patients keep an appropriate say in who sees what about them and why.
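
One way to picture the referral scenario is as a grant object that names only the cardiac-related resources, permits reading and appending but not rewriting, and expires on its own. This is a sketch of the idea under those assumptions, not Solid’s actual grant format, and the WebIDs and record locations are invented for illustration.

```typescript
// Sketch of a temporary, scoped grant for a cardiology referral. Illustrative only:
// Solid records grants in access control resources, not TypeScript objects, and the
// WebIDs and record locations here are invented.

type GrantMode = "read" | "append";

interface TemporaryGrant {
  grantee: string;     // the cardiologist's WebID
  resources: string[]; // only the cardiac-related parts of the record
  modes: GrantMode[];  // read the history, append signed findings; never rewrite it
  expires: Date;       // access lapses automatically
}

const cardiologyReferral: TemporaryGrant = {
  grantee: "https://dr-chen.clinic.example/profile#me",
  resources: [
    "https://alice.pod.example/health/cardiology/",
    "https://alice.pod.example/health/medications.json",
  ],
  modes: ["read", "append"],
  expires: new Date("2026-03-01T00:00:00Z"),
};

// Every access check enforces scope, mode, and expiry in one place.
function grantCovers(grant: TemporaryGrant, agent: string, resource: string, mode: GrantMode): boolean {
  return (
    grant.grantee === agent &&
    grant.modes.includes(mode) &&
    new Date() < grant.expires &&
    grant.resources.some((r) => resource === r || (r.endsWith("/") && resource.startsWith(r)))
  );
}

console.log(grantCovers(cardiologyReferral, "https://dr-chen.clinic.example/profile#me",
  "https://alice.pod.example/health/cardiology/ecg-2025-06.json", "read")); // true while unexpired
```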

When a patient-doctor relationship ends, the patient retains all records generated during that relationship—unlike today’s system, where changing providers often means losing access to one’s historical records. The departing doctor’s signed contributions remain verifiable parts of the medical history, but the doctor no longer has direct access to the patient’s wallet without explicit permission.
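
Here is a sketch of what “remain verifiable” could look like in practice, assuming the doctor signed each contribution with an ECDSA P-256 key whose public half is published (for example, linked from their WebID). Both the signature scheme and the storage layout are assumptions for illustration, not something the Solid protocol mandates.

```typescript
// Sketch of verifying a former provider's signed contribution that still lives in
// the patient's wallet. Assumes (for illustration) that the record was signed with
// ECDSA P-256 over its JSON bytes and that the doctor's public key is published,
// e.g. linked from their WebID profile. Runs in browsers and Node 19+ (global crypto).

async function verifyContribution(
  recordJson: string,              // the signed record exactly as stored in the wallet
  signature: ArrayBuffer,          // detached signature stored alongside it
  providerPublicKeyJwk: JsonWebKey // the departed doctor's published public key
): Promise<boolean> {
  const key = await crypto.subtle.importKey(
    "jwk",
    providerPublicKeyJwk,
    { name: "ECDSA", namedCurve: "P-256" },
    false,
    ["verify"]
  );
  // True only if the bytes in the wallet are exactly what the doctor signed.
  return crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" },
    key,
    signature,
    new TextEncoder().encode(recordJson)
  );
}
```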

For insurance claims, patients can provide temporary, auditable access to the specific information needed for processing—no more and no less. Insurance companies receive verified data directly relevant to the claim, but they are no longer expected to hold uncontrolled, hidden, comprehensive profiles or to retain information longer than privacy regulations allow. This approach dramatically reduces unauthorized data use, the risk of breaches (of both privacy and integrity), and administrative costs.

Perhaps most transformatively, this architecture enables patients to selectively participate in medical research while maintaining privacy. They could contribute anonymized or personalized data to studies matching their interests or conditions, with granular control over what information is shared and for how long. Researchers could gain access to larger, more diverse datasets while participants would maintain control over their information—creating a proper ethical model for advancing medical knowledge.

The implications extend far beyond healthcare. In financial services, customers could maintain verified transaction histories and creditworthiness credentials independently of credit bureaus. In education, students could collect verified credentials and portfolios that they truly own rather than relying on institutions’ siloed records. In employment, workers could maintain portable professional histories with verified credentials from past employers. In each case, Solid enables individuals to be the masters of their own data while allowing verification and selective sharing.

The economics of Web 2.0 pushed us toward centralized platforms and surveillance capitalism, but there has always been a better way. Solid brings different pieces together into a cohesive whole that enables the identity-first architecture we should have had all along. The protocol doesn’t just solve technical problems; it corrects the fundamental misalignment of incentives that has made the modern web increasingly hostile to both users and developers.

As we look to a future of increased digitization across all sectors of society, the need for this architectural shift becomes even more apparent. Individuals should be able to maintain and present their own verified digital identity and history, rather than being at the mercy of siloed institutional databases. The Solid protocol makes this future technically possible.

This essay was written with Davi Ottenheimer, and originally appeared on The Inrupt Blog.

Phishers Target Aviation Execs to Scam Customers

KrebsOnSecurity recently heard from a reader whose boss’s email account got phished and was used to trick one of the company’s customers into sending a large payment to scammers. An investigation into the attacker’s infrastructure points to a long-running Nigerian cybercrime ring that is actively targeting established companies in the transportation and aviation industries.

Image: Shutterstock, Mr. Teerapon Tiuekhom.

A reader who works in the transportation industry sent a tip about a recent successful phishing campaign that tricked an executive at the company into entering their credentials at a fake Microsoft 365 login page. From there, the attackers quickly mined the executive’s inbox for past communications about invoices, copying and modifying some of those messages with new invoice demands that were sent to some of the company’s customers and partners.

Speaking on condition of anonymity, the reader said the resulting phishing emails to customers came from a newly registered domain name that was remarkably similar to their employer’s domain, and that at least one of their customers fell for the ruse and paid a phony invoice. They said the attackers had spun up a look-alike domain just a few hours after the executive’s inbox credentials were phished, and that the scam resulted in a customer suffering a six-figure financial loss.

The reader also shared that the email address in the registration records for the imposter domain — roomservice801@gmail.com — is tied to many such phishing domains. Indeed, a search on this email address at DomainTools.com finds it is associated with at least 240 domains registered in 2024 or 2025. Virtually all of them mimic legitimate domains for companies in the aerospace and transportation industries worldwide.

An Internet search for this email address reveals a humorous blog post from 2020 on the Russian forum hackware[.]ru, which found roomservice801@gmail.com was tied to a phishing attack that used the lure of phony invoices to trick the recipient into logging in at a fake Microsoft login page. We’ll come back to this research in a moment.

JUSTY JOHN

DomainTools shows that some of the early domains registered to roomservice801@gmail.com in 2016 include other useful information. For example, the WHOIS records for alhhomaidhicentre[.]biz reference the technical contact of “Justy John” and the email address justyjohn50@yahoo.com.

A search at DomainTools found justyjohn50@yahoo.com has been registering one-off phishing domains since at least 2012. At this point, I was convinced that some security company surely had already published an analysis of this particular threat group, but I didn’t yet have enough information to draw any solid conclusions.

DomainTools says the Justy John email address is tied to more than two dozen domains registered since 2012, but we can find hundreds more phishing domains and related email addresses simply by pivoting on details in the registration records for these Justy John domains. For example, the street address used by the Justy John domain axisupdate[.]net — 7902 Pelleaux Road in Knoxville, TN — also appears in the registration records for accountauthenticate[.]com, acctlogin[.]biz, and loginaccount[.]biz, all of which at one point included the email address rsmith60646@gmail.com.

That Rsmith Gmail address is connected to the 2012 phishing domain alibala[.]biz (one character off of the Chinese e-commerce giant alibaba.com, with a different top-level domain of .biz). A search in DomainTools on the phone number in those domain records — 1.7736491613 — reveals even more phishing domains as well as the Nigerian phone number “2348062918302” and the email address michsmith59@gmail.com.

DomainTools shows michsmith59@gmail.com appears in the registration records for the domain seltrock[.]com, which was used in the phishing attack documented in the 2020 Russian blog post mentioned earlier. At this point, we are just two steps away from identifying the threat actor group.

The same Nigerian phone number shows up in dozens of domain registrations that reference the email address sebastinekelly69@gmail.com, including 26i3[.]net, costamere[.]com, danagruop[.]us, and dividrilling[.]com. A Web search on any of those domains finds they were indexed in an “indicator of compromise” list on GitHub maintained by Palo Alto Networks’ Unit 42 research team.
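
The pivoting described above amounts to grouping domains that share any registration attribute, then following those links outward. The sketch below applies that idea to a handful of made-up WHOIS records; real investigations rely on services like DomainTools, whose APIs are not shown or assumed here.

```typescript
// Sketch of WHOIS pivoting: cluster domains that share any registration attribute
// (registrant email, phone, or street address). The records below are made-up
// placeholders, not actual registration data.

interface WhoisRecord {
  domain: string;
  email?: string;
  phone?: string;
  address?: string;
}

function pivotClusters(records: WhoisRecord[]): Map<string, string[]> {
  // Map each registration attribute value to the domains that used it.
  const byAttribute = new Map<string, string[]>();
  for (const rec of records) {
    for (const value of [rec.email, rec.phone, rec.address]) {
      if (!value) continue;
      const domains = byAttribute.get(value) ?? [];
      domains.push(rec.domain);
      byAttribute.set(value, domains);
    }
  }
  // Keep only attributes shared by two or more domains: these are the pivots.
  return new Map([...byAttribute].filter(([, domains]) => domains.length > 1));
}

const sample: WhoisRecord[] = [
  { domain: "phish-a.example", email: "registrant1@example.com", phone: "+1-555-0100" },
  { domain: "phish-b.example", phone: "+1-555-0100", address: "1 Example St" },
  { domain: "phish-c.example", email: "registrant2@example.com", address: "1 Example St" },
];

console.log(pivotClusters(sample));
// Pivots on "+1-555-0100" (a, b) and "1 Example St" (b, c), chaining all three domains.
```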

SILVERTERRIER

According to Unit 42, the domains are the handiwork of a vast cybercrime group based in Nigeria that it dubbed “SilverTerrier” back in 2014. In an October 2021 report, Palo Alto said SilverTerrier excels at so-called “business e-mail compromise” or BEC scams, which target legitimate business email accounts through social engineering or computer intrusion activities. BEC criminals use that access to initiate or redirect the transfer of business funds for personal gain.

Palo Alto says SilverTerrier encompasses hundreds of BEC fraudsters, some of whom have been arrested in various international law enforcement operations by Interpol. In 2022, Interpol and the Nigeria Police Force arrested 11 alleged SilverTerrier members, including a prominent SilverTerrier leader who’d been flaunting his wealth on social media for years. Unfortunately, the lure of easy money, endemic poverty and corruption, and low barriers to entry for cybercrime in Nigeria conspire to provide a constant stream of new recruits.

BEC scams were the 7th most reported crime tracked by the FBI’s Internet Crime Complaint Center (IC3) in 2024, generating more than 21,000 complaints. However, BEC scams were the second most costly form of cybercrime reported to the feds last year, with nearly $2.8 billion in claimed losses. In its 2025 Fraud and Control Survey Report, the Association for Financial Professionals found 63 percent of organizations experienced a BEC last year.

Poking at some of the email addresses that spool out from this research reveals a number of Facebook accounts for people residing in Nigeria or in the United Arab Emirates, many of whom do not appear to have tried to mask their real-life identities. Palo Alto’s Unit 42 researchers reached a similar conclusion, noting that although a small subset of these crooks went to great lengths to conceal their identities, it was usually simple to learn their identities on social media accounts and the major messaging services.

Palo Alto said BEC actors have become far more organized over time, and that while it remains easy to find actors working as a group, the practice of using one phone number, email address or alias to register malicious infrastructure in support of multiple actors has made it far more time consuming (but not impossible) for cybersecurity and law enforcement organizations to sort out which actors committed specific crimes.

“We continue to find that SilverTerrier actors, regardless of geographical location, are often connected through only a few degrees of separation on social media platforms,” the researchers wrote.

FINANCIAL FRAUD KILL CHAIN

Palo Alto has published a useful list of recommendations that organizations can adopt to minimize the incidence and impact of BEC attacks. Many of those tips are prophylactic, such as conducting regular employee security training and reviewing network security policies.

But one recommendation — getting familiar with a process known as the “financial fraud kill chain” or FFKC — bears specific mention because it offers the single best hope for BEC victims who are seeking to claw back payments made to fraudsters, and yet far too many victims don’t know it exists until it is too late.

Image: ic3.gov.

As explained in this FBI primer, the International Financial Fraud Kill Chain is a partnership between federal law enforcement and financial entities whose purpose is to freeze fraudulent funds wired by victims. According to the FBI, viable victim complaints filed with ic3.gov promptly after a fraudulent transfer (generally less than 72 hours) will be automatically triaged by the Financial Crimes Enforcement Network (FinCEN).

The FBI noted in its IC3 annual report (PDF) that the FFKC had a 66 percent success rate in 2024. Viable ic3.gov complaints involve losses of at least $50,000, and include all records from the victim or victim bank, as well as a completed FFKC form (provided by FinCEN) containing victim information, recipient information, bank names, account numbers, location, SWIFT, and any additional information.
