Friday Squid Blogging: Beaked Whales Feed on Squid

A Travers’ beaked whale (Mesoplodon traversii) washed ashore in New Zealand, and scientists concluded that “the prevalence of squid remains [in its stomachs] suggests that these deep-sea cephalopods form a significant part of the whale’s diet, similar to other beaked whale species.”

Blog moderation policy.

—————

Third Interdisciplinary Workshop on Reimagining Democracy (IWORD 2024)

Last month, Henry Farrell and I convened the Third Interdisciplinary Workshop on Reimagining Democracy (IWORD 2024) at Johns Hopkins University’s Bloomberg Center in Washington DC. This is a small, invitational workshop on the future of democracy. As with the previous two workshops, the goal was to bring together a diverse set of political scientists, law professors, philosophers, AI researchers and other industry practitioners, political activists, and creative types (including science fiction writers) to discuss how democracy might be reimagined in the current century.

The goal of the workshop is to think very broadly. Modern democracy was invented in the mid-eighteenth century, using mid-eighteenth-century technology. If democracy were to be invented today, it would look very different. Elections would look different. The balance between representation and direct democracy would look different. Adjudication and enforcement would look different. Everything would look different, because our conceptions of fairness, justice, equality, and rights are different, and we have much more powerful technology to bring to bear on the problems. Also, we could start from scratch without having to worry about evolving our current democracy into this imagined future system.

We can’t do that, of course, but it’s still valuable to speculate. Of course we need to figure out how to reform our current systems, but we shouldn’t limit our thinking to incremental steps. We need to think about discontinuous changes as well. I wrote about the philosophy more in this essay about IWORD 2022.

IWORD 2024 was easily the most intellectually stimulating two days of my year. It’s also intellectually exhausting; the speed and intensity of the ideas are almost too much. I wrote about the format in my blog post on IWORD 2023.

Summaries of all the IWORD 2024 talks are in the first set of comments below. And here are links to the previous IWORDs:

IWORD 2025 will be held either in New York or New Haven; still to be determined.

—————

AI Will Write Complex Laws

Artificial intelligence (AI) is writing law today. This has required no changes in legislative procedure or the rules of legislative bodies—all it takes is one legislator, or legislative assistant, to use generative AI in the process of drafting a bill.

In fact, the use of AI by legislators is only likely to become more prevalent. There are currently projects in the US House, US Senate, and legislatures around the world to trial the use of AI in various ways: searching databases, drafting text, summarizing meetings, performing policy research and analysis, and more. A Brazilian municipality passed the first known AI-written law in 2023.

That’s not surprising; AI is being used more everywhere. What is coming into focus is how policymakers will use AI and, critically, how this use will change the balance of power between the legislative and executive branches of government. Soon, US legislators may turn to AI to help them keep pace with the increasing complexity of their lawmaking—and this will suppress the power and discretion of the executive branch to make policy.

Demand for Increasingly Complex Legislation

Legislators are writing increasingly long, intricate, and complicated laws that human legislative drafters have trouble producing. Already in the US, the multibillion-dollar lobbying industry is subsidizing lawmakers in writing baroque laws: suggesting paragraphs to add to bills, specifying benefits for some, carving out exceptions for others. Indeed, the lobbying industry is growing in complexity and influence worldwide.

Several years ago, researchers studied bills introduced into state legislatures throughout the US, looking at which bills were wholly original texts and which borrowed text from other states or from lobbyist-written model legislation. Their conclusion was not very surprising. Those who borrowed the most text were in legislatures that were less resourced. This makes sense: If you’re a part-time legislator, perhaps unpaid and without a lot of staff, you need to rely on more external support to draft legislation. When the scope of policymaking outstrips the resources of legislators, they look for help. Today, that often means lobbyists, who provide expertise, research services, and drafting labor to legislators at the local, state, and federal levels at no charge. Of course, they are not unbiased: They seek to exert influence on behalf of their clients.

Another study, at the US federal level, measured the complexity of policies proposed in legislation and tried to determine the factors that led to such growing complexity. While there are numerous ways to measure legal complexity, these authors focused on the specificity of institutional design: How exacting is Congress in laying out the relational network of branches, agencies, and officials that will share power to implement the policy?

In looking at bills enacted between 1993 and 2014, the researchers found two things. First, they concluded that ideological polarization drives complexity. The suggestion is that if a legislator is on the extreme end of the ideological spectrum, they’re more likely to introduce a complex law that constrains the discretion of, as the authors put it, “entrenched bureaucratic interests.” And second, they found that divided government drives complexity to a large degree: Significant legislation passed under divided government was found to be 65 percent more complex than similar legislation passed under unified government. Their conclusion is that, if a legislator’s party controls Congress, and the opposing party controls the White House, the legislator will want to give the executive as little wiggle room as possible. When legislators’ preferences disagree with the executive’s, the legislature is incentivized to write laws that specify all the details. This gives the agency designated to implement the law as little discretion as possible.

Because polarization and divided government are increasingly entrenched in the US, the demand for complex legislation at the federal level is likely to grow. Today, we have both the greatest ideological polarization in Congress in living memory and an increasingly divided government at the federal level. Between 1900 and 1970 (57th through 90th Congresses), we had 27 instances of unified government and only seven divided; nearly a four-to-one ratio. Since then, the trend is roughly the opposite. As of the start of the next Congress, we will have had 20 divided governments and only eight unified (a two-and-a-half-to-one ratio). And while the incoming Trump administration will see a unified government, the extremely closely divided House may often make this Congress look and feel like a divided one (see the recent government shutdown crisis as an exemplar) and makes truly divided government a strong possibility in 2027.

Another related factor driving the complexity of legislation is the need to do it all at once. The lobbyist feeding frenzy—spurring major bills like the Affordable Care Act to be thousands of pages in length—is driven in part by gridlock in Congress. Congressional productivity has dropped so low that bills on any given policy issue seem like a once-in-a-generation opportunity for legislators—and lobbyists—to set policy.

These dynamics also impact the states. States often have divided governments, albeit less often than they used to, and their demand for drafting assistance is arguably higher due to their significantly smaller staffs. And since the productivity of Congress has cratered in recent years, significantly more policymaking is happening at the state level.

But there’s another reason, particular to the US federal government, that will likely force congressional legislation to be more complex even during unified government. In June 2024, the US Supreme Court overturned the Chevron doctrine, which gave executive agencies broad power to specify and implement legislation. Suddenly, there is a mandate from the Supreme Court for more specific legislation. Issues that have historically been left implicitly to the executive branch are now required to be either explicitly delegated to agencies or specified directly in statute. Either way, the Court’s ruling implied that law should become more complex and that Congress should increase its policymaking capacity.

This affects the balance of power between the executive and legislative branches of government. When the legislature delegates less to the executive branch, it increases its own power. Every decision made explicitly in statute is a decision the executive makes not on its own but, rather, according to the directive of the legislature. In the US system of separation of powers, administrative law is a tool for balancing power among the legislative, executive, and judicial branches. The legislature gets to decide when to delegate and when not to, and it can respond to judicial review to adjust its delegation of control as needed. The elimination of Chevron will induce the legislature to exert its control over delegation more robustly.

At the same time, there are powerful political incentives for Congress to be vague and to rely on someone else, like agency bureaucrats, to make hard decisions. That vagueness empowers third parties—the corporations and lobbyists—that the overturning of Chevron has handed a new tool for arguing against administrative regulations not specifically backed up by law. A continuing stream of Supreme Court decisions handing victories to unpopular industries could be another driver of complex law, adding political pressure to pass legislative fixes.

AI Can Supply Complex Legislation

Congress may or may not be up to the challenge of putting more policy details into law, but the external forces outlined above—lobbyists, the judiciary, and an increasingly divided and polarized government—are pushing it to do so. When Congress does take on the task of writing complex legislation, it’s quite likely it will turn to AI for help.

Two particular AI capabilities enable Congress to write laws different from the laws humans tend to write. One, AI models have an enormous scope of expertise, whereas people have only a handful of specializations. Large language models (LLMs) like the one powering ChatGPT can generate legislative text on funding specialty crop harvesting mechanization as readily as material on energy efficiency standards for street lighting. This enables a legislator to address more topics simultaneously. Two, AI models have the sophistication to work with a higher degree of complexity than people can. Modern LLM systems can instantaneously perform several simultaneous multistep reasoning tasks using information from thousands of pages of documents. This enables a legislator to fill in more baroque detail on any given topic.

That’s not to say that handing over legislative drafting to machines is easily done. Modernizing any institutional process is extremely hard, even when the technology is readily available and performant. And modern AI still has a ways to go to achieve mastery of complex legal and policy issues. But the basic tools are there.

AI can be used in each step of lawmaking, and this will bring various benefits to policymakers. It could let them work on more policies—more bills—at the same time, add more detail and specificity to each bill, or interpret and incorporate more feedback from constituents and outside groups. The addition of a single AI tool to a legislative office may have an impact similar to adding several people to its staff, but at far lower cost.

Speed sometimes matters when writing law. When there is a change of governing party, there is often a rush to change as much policy as possible to match the platform of the new regime. AI could help legislators do that kind of wholesale revision. The result could be policy that is more responsive to voters—or more political instability. Already in 2024, the US House’s Office of the Clerk has begun using AI to speed up the process of producing cost estimates for bills and understanding how new legislation relates to existing code. Ohio has used an AI tool to do wholesale revision of state administrative law since 2020.

AI can also make laws clearer and more consistent. With their superhuman attention spans, AI tools are good at enforcing syntactic and grammatical rules. They will be effective at drafting text in precise and proper legislative language, or offering detailed feedback to human drafters. Borrowing ideas from software development, where coders use tools to identify common instances of bad programming practices, an AI reviewer can highlight bad law-writing practices. For example, it can detect when significant phrasing is inconsistent across a long bill. If a bill about insurance repeatedly lists a variety of disaster categories but omits one of them in a single instance, AI can catch that.
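
To make that concrete, here is a minimal sketch of such a consistency check. This is an illustration only, not any real drafting tool; the category list and bill text are invented:

```python
# A minimal sketch of the enumeration-consistency check described above.
# The disaster categories and bill text are invented for illustration.
import re

CATEGORIES = {"flood", "fire", "earthquake", "hurricane", "tornado"}

def enumerations(text: str, vocab: set[str]) -> list[set[str]]:
    """Return the vocab terms found in each sentence that mentions at
    least two of them (a crude proxy for 'this sentence enumerates
    the categories')."""
    found = []
    for sentence in re.split(r"(?<=[.;])\s+", text):
        hits = {term for term in vocab if term in sentence.lower()}
        if len(hits) >= 2:
            found.append(hits)
    return found

def flag_omissions(text: str, vocab: set[str]) -> None:
    enums = enumerations(text, vocab)
    if not enums:
        return
    full = set().union(*enums)  # treat the union as the intended full list
    for i, hits in enumerate(enums, 1):
        missing = full - hits
        if missing:
            print(f"enumeration {i} omits: {', '.join(sorted(missing))}")

bill = (
    "Coverage extends to damage caused by flood, fire, earthquake, "
    "hurricane, or tornado. Claims arising from flood, fire, hurricane, "
    "or tornado damage must be filed within 90 days."
)
flag_omissions(bill, CATEGORIES)  # prints: enumeration 2 omits: earthquake
```

A production tool would use an LLM or a legal-language parser rather than keyword matching, but the review pattern is the same: extract every instance of a recurring enumeration, then flag divergences.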

Perhaps this seems like minutiae, but a small ambiguity or mistake in law can have massive consequences. In 2015, the Affordable Care Act came close to being struck down because of a four-word typo, imperiling health care services extended to more than 7 million Americans.

There’s more that AI can do in the legislative process. AI can summarize bills and answer questions about their provisions. It can highlight aspects of a bill that align with, or are contrary to, different political points of view. We can even imagine a future in which AI can be used to simulate a new law and determine whether or not it would be effective, or what the side effects would be. This means that, beyond writing laws, AI could help lawmakers understand them. Congress is notorious for producing bills hundreds of pages long, and many other countries sometimes have similarly massive omnibus bills that address many issues at once. It’s impossible for any one person to understand how each of these bills’ provisions would work. Many legislatures employ human analysts in budget or fiscal offices who analyze these bills and offer reports. AI could do this kind of work at greater speed and scale, so legislators could easily query an AI tool about how a particular bill would affect their district or areas of concern.
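
As an illustration of what that querying might look like, here is a hedged sketch using the OpenAI Python client as one arbitrary example provider. The model choice, file name, and question are hypothetical placeholders, not anything from this essay:

```python
# A sketch of querying a bill with an LLM, using the OpenAI Python client
# as one example provider. The bill file and question are hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

bill_text = open("hr1234_engrossed.txt").read()  # hypothetical bill file

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a nonpartisan legislative analyst."},
        {"role": "user",
         "content": "How would this bill affect hospitals in a rural "
                    f"district? Cite the relevant sections.\n\n{bill_text}"},
    ],
)
print(response.choices[0].message.content)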

This is a use case that the House subcommittee on modernization has urged the Library of Congress to take action on. Numerous software vendors are already marketing AI legislative analysis tools. These tools can potentially find loopholes or, like the human lobbyists of today, craft them to benefit particular private interests.

These capabilities will be attractive to legislators who are looking to expand their power and capabilities but don’t necessarily have more funding to hire human staff. We should understand the idea of AI-augmented lawmaking contextualized within the longer history of legislative technologies. To serve society at modern scales, we’ve had to come a long way from the Athenian ideals of direct democracy and sortition. Democracy no longer involves just one person and one vote to decide a policy. It involves hundreds of thousands of constituents electing one representative, who is augmented by a staff as well as subsidized by lobbyists, and who implements policy through a vast administrative state coordinated by digital technologies. Using AI to help those representatives specify and refine their policy ideas is part of a long history of transformation.

Whether all this AI augmentation is good for all of us subject to the laws they make is less clear. There are real risks to AI-written law, but those risks are not dramatically different from what we endure today. AI-written law trying to optimize for certain policy outcomes may get it wrong (just as many human-written laws are misguided). AI-written law may be manipulated to benefit one constituency over others, by the tech companies that develop the AI, or by the legislators who apply it, just as human lobbyists steer policy to benefit their clients.

Regardless of what anyone thinks of any of this, regardless of whether it will be a net positive or a net negative, AI-made legislation is coming—the growing complexity of policy demands it. It doesn’t require any changes in legislative procedures or agreement from any rules committee. All it takes is for one legislative assistant, or lobbyist, to fire up a chatbot and ask it to create a draft. When legislators voted on that Brazilian bill in 2023, they didn’t know it was AI-written; the use of ChatGPT was undisclosed. And even if they had known, it’s not clear it would have made a difference. In the future, as in the past, we won’t always know which laws will have good impacts and which will have bad effects, regardless of the words on the page, or who (or what) wrote them.

This essay was written with Nathan E. Sanders, and originally appeared in Lawfare.

—————

AI Mistakes Are Very Different from Human Mistakes

Humans make mistakes all the time. All of us do, every day, in tasks both new and routine. Some of our mistakes are minor and some are catastrophic. Mistakes can break trust with our friends, lose the confidence of our bosses, and sometimes be the difference between life and death.

Over the millennia, we have created security systems to deal with the sorts of mistakes humans commonly make. These days, casinos rotate their dealers regularly, because they make mistakes if they do the same task for too long. Hospital personnel write on limbs before surgery so that doctors operate on the correct body part, and they count surgical instruments to make sure none were left inside the body. From copyediting to double-entry bookkeeping to appellate courts, we humans have gotten really good at correcting human mistakes.

Humanity is now rapidly integrating a wholly different kind of mistake-maker into society: AI. Technologies like large language models (LLMs) can perform many cognitive tasks traditionally fulfilled by humans, but they make plenty of mistakes. It seems ridiculous when chatbots tell you to eat rocks or add glue to pizza. But it’s not the frequency or severity of AI systems’ mistakes that differentiates them from human mistakes. It’s their weirdness. AI systems do not make mistakes in the same ways that humans do.

Much of the friction—and risk—associated with our use of AI arises from that difference. We need to invent new security systems that adapt to these differences and prevent harm from AI mistakes.

Human Mistakes vs AI Mistakes

Life experience makes it fairly easy for each of us to guess when and where humans will make mistakes. Human errors tend to come at the edges of someone’s knowledge: Most of us would make mistakes solving calculus problems. We expect human mistakes to be clustered: A single calculus mistake is likely to be accompanied by others. We expect mistakes to wax and wane, predictably depending on factors such as fatigue and distraction. And mistakes are often accompanied by ignorance: Someone who makes calculus mistakes is also likely to respond “I don’t know” to calculus-related questions.

To the extent that AI systems make these human-like mistakes, we can bring all of our mistake-correcting systems to bear on their output. But the current crop of AI models—particularly LLMs—make mistakes differently.

AI errors come at seemingly random times, without any clustering around particular topics. LLM mistakes tend to be more evenly distributed through the knowledge space. A model might be as likely to make a mistake on a calculus question as to propose that cabbages eat goats.

And AI mistakes aren’t accompanied by ignorance. An LLM will be just as confident when saying something completely wrong—and obviously so, to a human—as it will be when saying something true. The seemingly random inconsistency of LLMs makes it hard to trust their reasoning in complex, multi-step problems. If you want to use an AI model to help with a business problem, it’s not enough to see that it understands what factors make a product profitable; you need to be sure it won’t forget what money is.

How to Deal with AI Mistakes

This situation indicates two possible areas of research. The first is to engineer LLMs that make more human-like mistakes. The second is to build new mistake-correcting systems that deal with the specific sorts of mistakes that LLMs tend to make.

We already have some tools to lead LLMs to act in more human-like ways. Many of these arise from the field of “alignment” research, which aims to make models act in accordance with the goals and motivations of their human developers. One example is the technique that was arguably responsible for the breakthrough success of ChatGPT: reinforcement learning from human feedback. In this method, an AI model is (figuratively) rewarded for producing responses that get a thumbs-up from human evaluators. Similar approaches could be used to induce AI systems to make more human-like mistakes, particularly by penalizing them more for mistakes that are less intelligible.

When it comes to catching AI mistakes, some of the systems that we use to prevent human mistakes will help. To an extent, forcing LLMs to double-check their own work can help prevent errors. But LLMs can also confabulate seemingly plausible, but truly ridiculous, explanations for their flights from reason.

Other mistake mitigation systems for AI are unlike anything we use for humans. Because machines can’t get fatigued or frustrated in the way that humans do, it can help to ask an LLM the same question repeatedly in slightly different ways and then synthesize its multiple responses. Humans won’t put up with that kind of annoying repetition, but machines will.
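
Here is a minimal sketch of that repeated-query idea. The `ask_model` function is a hypothetical stand-in for whatever LLM API you use, and the rephrasings and mock answer are invented:

```python
# A minimal sketch of the ask-repeatedly-and-synthesize idea. `ask_model`
# is a hypothetical stand-in for an LLM API call; replace it with your
# provider's client.
from collections import Counter

def ask_model(prompt: str) -> str:
    # Stand-in: a real implementation would call an LLM here.
    return "Canberra"

REPHRASINGS = [
    "What is the capital of Australia?",
    "Which city is Australia's capital?",
    "Name the capital city of Australia.",
]

def synthesized_answer(prompts: list[str]) -> str:
    answers = [ask_model(p).strip().lower() for p in prompts]
    # Majority vote across phrasings; ties fall to the first answer seen.
    return Counter(answers).most_common(1)[0][0]

print(synthesized_answer(REPHRASINGS))  # -> canberra
```

Majority voting is the simplest synthesis; one could also ask a second model to reconcile conflicting answers.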

Understanding Similarities and Differences

Researchers are still struggling to understand where LLM mistakes diverge from human ones. Some of the weirdness of AI is actually more human-like than it first appears. Small changes to a query to an LLM can result in wildly different responses, a problem known as prompt sensitivity. But, as any survey researcher can tell you, humans behave this way, too. The phrasing of a question in an opinion poll can have drastic impacts on the answers.

LLMs also seem to have a bias towards repeating the words that were most common in their training data; for example, guessing familiar place names like “America” even when asked about more exotic locations. Perhaps this is an example of the human “availability heuristic” manifesting in LLMs, with machines spitting out the first thing that comes to mind rather than reasoning through the question. And like humans, perhaps, some LLMs seem to get distracted in the middle of long documents; they’re better able to remember facts from the beginning and end. There is already progress on improving this error mode, as researchers have found that LLMs trained on more examples of retrieving information from long texts seem to do better at retrieving information uniformly.

In some cases, what’s bizarre about LLMs is that they act more like humans than we think they should. For example, some researchers have tested the hypothesis that LLMs perform better when offered a cash reward or threatened with death. It also turns out that some of the best ways to “jailbreak” LLMs (getting them to disobey their creators’ explicit instructions) look a lot like the kinds of social engineering tricks that humans use on each other: for example, pretending to be someone else or saying that the request is just a joke. But other effective jailbreaking techniques are things no human would ever fall for. One group found that if they used ASCII art (constructions of symbols that look like words or pictures) to pose dangerous questions, like how to build a bomb, the LLM would answer them willingly.

Humans may occasionally make seemingly random, incomprehensible, and inconsistent mistakes, but such occurrences are rare and often indicative of more serious problems. We also tend not to put people exhibiting these behaviors in decision-making positions. Likewise, we should confine AI decision-making systems to applications that suit their actual abilities—while keeping the potential ramifications of their mistakes firmly in mind.

This essay was written with Nathan E. Sanders, and originally appeared in IEEE Spectrum.

EDITED TO ADD (1/24): Slashdot thread.

—————

Biden Signs New Cybersecurity Order

President Biden has signed a new cybersecurity order. It has a bunch of provisions, most notably using the US government’s procurement power to improve cybersecurity practices industry-wide.

Some details:

The core of the executive order is an array of mandates for protecting government networks based on lessons learned from recent major incidents—namely, the security failures of federal contractors.

The order requires software vendors to submit proof that they follow secure development practices, building on a mandate that debuted in 2022 in response to Biden’s first cyber executive order. The Cybersecurity and Infrastructure Security Agency would be tasked with double-checking these security attestations and working with vendors to fix any problems. To put some teeth behind the requirement, the White House’s Office of the National Cyber Director is “encouraged to refer attestations that fail validation to the Attorney General” for potential investigation and prosecution.

The order gives the Department of Commerce eight months to assess the most commonly used cyber practices in the business community and issue guidance based on them. Shortly thereafter, those practices would become mandatory for companies seeking to do business with the government. The directive also kicks off updates to the National Institute of Standards and Technology’s secure software development guidance.

More information.

—————

Friday Squid Blogging: Opioid Alternatives from Squid Research

Is there nothing that squid research can’t solve?

“If you’re working with an organism like squid that can edit genetic information way better than any other organism, then it makes sense that that might be useful for a therapeutic application like deadening pain,” he said.

[…]

Researchers hope to mimic how squid and octopus use RNA editing in nerve channels that interpret pain and use that knowledge to manipulate human cells.

Blog moderation policy.

—————

Social Engineering to Disable iMessage Protections

I am always interested in new phishing tricks, and in watching them spread across the ecosystem.

A few days ago I started getting phishing SMS messages with a new twist. They were standard messages about delayed packages or somesuch, with the goal of getting me to click on a link and enter some personal information into a website. But because they came from unknown phone numbers, the links did not work. So—this is the new bit—the messages said something like: “Please reply Y, then exit the text message, reopen the text message activation link, or copy the link to Safari browser to open it.” Replying tells Apple that you know the sender, which re-enables the previously disabled links.

I saw it once, and now I am seeing it again and again. Everyone has now adopted this new trick.

One article claims that this trick has been popular since last summer. I don’t know; I would have expected to have seen it before last weekend.

—————

FBI Deletes PlugX Malware from Thousands of Computers

According to a DOJ press release, the FBI was able to delete the Chinese-used PlugX malware from “approximately 4,258 U.S.-based computers and networks.”

Details:

To retrieve information from and send commands to the hacked machines, the malware connects to a command-and-control server that is operated by the hacking group. According to the FBI, at least 45,000 IP addresses in the US had back-and-forths with the command-and-control server since September 2023.

It was that very server that allowed the FBI to finally kill this pesky bit of malicious software. First, they tapped the know-how of French intelligence agencies, which had recently discovered a technique for getting PlugX to self-destruct. Then, the FBI gained access to the hackers’ command-and-control server and used it to request all the IP addresses of machines that were actively infected by PlugX. Finally, it sent a command via the server that caused PlugX to delete itself from its victims’ computers.

—————

A Tumultuous Week for Federal Cybersecurity Efforts


President Trump last week issued a flurry of executive orders that upended a number of government initiatives focused on improving the nation’s cybersecurity posture. The president fired all advisors from the Department of Homeland Security’s Cyber Safety Review Board, called for the creation of a strategic cryptocurrency reserve, and voided a Biden administration action that sought to reduce the risks that artificial intelligence poses to consumers, workers and national security.

On his first full day back in the White House, Trump dismissed all 15 advisory committee members of the Cyber Safety Review Board (CSRB), a nonpartisan government entity established in February 2022 with a mandate to investigate the causes of major cybersecurity events. The CSRB has so far produced three detailed reports, including an analysis of the Log4Shell vulnerability crisis, attacks from the cybercrime group LAPSUS$, and the 2023 Microsoft Exchange Online breach.

The CSRB was in the midst of an inquiry into cyber intrusions uncovered recently across a broad spectrum of U.S. telecommunications providers at the hands of Chinese state-sponsored hackers. One of the CSRB’s most recognizable names is Chris Krebs (no relation), the former director of the Cybersecurity and Infrastructure Security Agency (CISA). Krebs was fired by President Trump in November 2020 for declaring the presidential contest was the most secure in American history, and for refuting Trump’s false claims of election fraud.

South Dakota Governor Kristi Noem, confirmed by the U.S. Senate last week as the new secretary of the DHS, criticized CISA at her confirmation hearing, The Record reports.

Noem told lawmakers CISA needs to be “much more effective, smaller, more nimble, to really fulfill their mission,” which she said should be focused on hardening federal IT systems and hunting for digital intruders. Noem said the agency’s work on fighting misinformation shows it has “gotten far off mission” and involved “using their resources in ways that was never intended.”

“The misinformation and disinformation that they have stuck their toe into and meddled with, should be refocused back onto what their job is,” she said.

Moses Frost, a cybersecurity instructor with the SANS Institute, compared the sacking of the CSRB members to firing all of the experts at the National Transportation Safety Board (NTSB) while they’re in the middle of an investigation into a string of airline disasters.

“I don’t recall seeing an ‘NTSB Board’ being fired during the middle of a plane crash investigation,” Frost said in a recent SANS newsletter. “I can say that the attackers in the phone companies will not stop because the review board has gone away. We do need to figure out how these attacks occurred, and CISA did appear to be doing some good for the vast majority of the federal systems.”

Speaking of transportation, The Record notes that Transportation Security Administration chief David Pekoske was fired despite overseeing critical cybersecurity improvements across pipeline, rail and aviation sectors. Pekoske was appointed by Trump in 2017 and had his 5-year tenure renewed in 2022 by former President Joe Biden.

AI & CRYPTOCURRENCY

Shortly after being sworn in for a second time, Trump voided a Biden executive order that focused on supporting research and development in artificial intelligence. The previous administration’s order on AI was crafted with an eye toward managing the safety and security risks introduced by the technology. But a statement released by the White House said Biden’s approach to AI had hindered development, and that the United States would support AI systems that are “free from ideological bias or engineered social agendas,” to maintain leadership.

The Trump administration issued its own executive order on AI, which calls for an “AI Action Plan” to be led by the assistant to the president for science and technology, the White House “AI & crypto czar,” and the national security advisor. It also directs the White House to revise and reissue policies to federal agencies on the government’s acquisition and governance of AI “to ensure that harmful barriers to America’s AI leadership are eliminated.”

Trump’s AI & crypto czar is David Sacks, an entrepreneur and Silicon Valley venture capitalist who argues that the Biden administration’s approach to AI and cryptocurrency has driven innovation overseas. Sacks recently asserted that non-fungible cryptocurrency tokens and memecoins are neither securities nor commodities, but rather should be treated as “collectibles” like baseball cards and stamps.

There is already a legal definition of collectibles under the U.S. tax code that applies to things like art or antiques, which can be subject to high capital gains taxes. But Joe Hall, a capital markets attorney and partner at Davis Polk, told Fortune there are no market regulations that apply to collectibles under U.S. securities law. Hall said Sacks’ comments “suggest a viewpoint that it would not be appropriate to regulate these things the way we regulate securities.”

The new administration’s position makes sense considering that the Trump family is deeply and personally invested in a number of recent memecoin ventures that have attracted billions from investors. President Trump and First Lady Melania Trump each launched their own vanity memecoins this month, dubbed $TRUMP and $MELANIA.

The Wall Street Journal reported Thursday that the market capitalization of $TRUMP stood at about $7 billion, down from a peak of nearly $15 billion, while $MELANIA was hovering around the $460 million mark. Just two months before the 2024 election, Trump’s three sons debuted a cryptocurrency token called World Liberty Financial.

Despite maintaining a considerable personal stake in how cryptocurrency is regulated, Trump issued an executive order on January 23 calling for a working group to be chaired by Sacks that would develop “a federal regulatory framework governing digital assets, including stablecoins,” and evaluate the creation of a “strategic national digital assets stockpile.”

Translation: Using taxpayer dollars to prop up the speculative, volatile, and highly risky cryptocurrency industry, which has been marked by endless scams, rug-pulls, 8-figure cyber heists, rampant fraud, and unrestrained innovations in money laundering.

WEAPONIZATION & DISINFORMATION

Prior to the election, President Trump frequently vowed to use a second term to exact retribution against his perceived enemies. Part of that promise materialized in an executive order Trump issued last week titled “Ending the Weaponization of the Federal Government,” which decried “an unprecedented, third-world weaponization of prosecutorial power to upend the democratic process,” in the prosecution of more than 1,500 people who invaded the U.S. Capitol on Jan. 6, 2021.

On Jan. 21, Trump commuted the sentences of several leaders of the Proud Boys and Oath Keepers who were convicted of seditious conspiracy. He also issued “a full, complete and unconditional pardon to all other individuals convicted of offenses related to events that occurred at or near the United States Capitol on January 6, 2021,” which include those who assaulted law enforcement officers.

The New York Times reports “the language of the document suggests — but does not explicitly state — that the Trump administration review will examine the actions of local district attorneys or state officials, such as the district attorneys in Manhattan or Fulton County, Ga., or the New York attorney general, all of whom filed cases against President Trump.”

Another Trump order called “Restoring Freedom of Speech and Ending Federal Censorship” asserts:

“Over the last 4 years, the previous administration trampled free speech rights by censoring Americans’ speech on online platforms, often by exerting substantial coercive pressure on third parties, such as social media companies, to moderate, deplatform, or otherwise suppress speech that the Federal Government did not approve,” the Trump administration alleged. “Under the guise of combatting ‘misinformation,’ ‘disinformation,’ and ‘malinformation,’ the Federal Government infringed on the constitutionally protected speech rights of American citizens across the United States in a manner that advanced the Government’s preferred narrative about significant matters of public debate.”

Both of these executive orders have potential implications for security, privacy and civil liberties activists who have sought to track conspiracy theories and raise awareness about disinformation efforts on social media coming from U.S. adversaries.

In the wake of the 2020 election, Republicans created the House Judiciary Committee’s Select Subcommittee on the Weaponization of the Federal Government. Led by GOP Rep. Jim Jordan of Ohio, the committee’s stated purpose was to investigate alleged collusion between the Biden administration and tech companies to unconstitutionally shut down political speech.

The GOP committee focused much of its ire at members of the short-lived Disinformation Governance Board, an advisory board to DHS created in 2022 (the “combating misinformation, disinformation, and malinformation” quote from Trump’s executive order is a reference to the board’s stated mission). Conservative groups seized on social media posts made by the director of the board, who resigned after facing death threats. The board was dissolved by DHS soon after.

In his first administration, President Trump created a special prosecutor to probe the origins of the FBI’s investigation into possible collusion between the Trump campaign and Russian operatives seeking to influence the 2016 election. Part of that inquiry examined evidence gathered by some of the world’s most renowned cybersecurity experts who identified frequent and unexplained communications between an email server used by the Trump Organization and Alfa Bank, one of Russia’s largest financial institutions.

Trump’s Special Prosecutor John Durham later subpoenaed and/or deposed dozens of security experts who’d collected, viewed or merely commented on the data. Similar harassment and deposition demands would come from lawyers for Alfa Bank. Durham ultimately indicted Michael Sussmann, the former federal cybercrime prosecutor who reported the oddity to the FBI. Sussmann was acquitted in May 2022. Last week, Trump appointed Durham to lead the U.S. attorney’s office in Brooklyn, NY.

Quinta Jurecic at Lawfare notes that while the executive actions are ominous, they are also vague, and could conceivably generate either a campaign of retaliation, or nothing at all.

“The two orders establish that there will be investigations but leave open the questions of what kind of investigations, what will be investigated, how long this will take, and what the consequences might be,” Jurecic wrote. “It is difficult to draw firm conclusions as to what to expect. Whether this ambiguity is intentional or the result of sloppiness or disagreement within Trump’s team, it has at least one immediate advantage as far as the president is concerned: generating fear among the broad universe of potential subjects of those investigations.”

On Friday, Trump moved to fire at least 17 inspectors general, the government watchdogs who conduct audits and investigations of executive branch actions, and who often uncover instances of government waste, fraud and abuse. Lawfare’s Jack Goldsmith argues that the removals are probably legal even though Trump failed to give the advance congressional notice of the terminations that a 2022 law requires.

“Trump probably acted lawfully, I think, because the notice requirement is probably unconstitutional,” Goldsmith wrote. “The real bite in the 2022 law, however, comes in the limitations it places on Trump’s power to replace the terminated IGs—limitations that I believe are constitutional. This aspect of the law will make it hard, but not impossible, for Trump to put loyalists atop the dozens of vacant IG offices around the executive branch. The ultimate fate of IG independence during Trump 2.0, however, depends less on legal protections than on whether Congress, which traditionally protects IGs, stands up for them now. Don’t hold your breath.”

Among the many Biden administration executive orders revoked by President Trump last week was an action from December 2021 establishing the United States Council on Transnational Organized Crime, which is charged with advising the White House on a range of criminal activities, including drug and weapons trafficking, migrant smuggling, human trafficking, cybercrime, intellectual property theft, money laundering, wildlife and timber trafficking, illegal fishing, and illegal mining.

So far, the White House doesn’t appear to have revoked an executive order that former President Biden issued less than a week before President Trump took office. On Jan. 16, 2025, Biden released a directive that focused on improving the security of federal agencies and contractors, and giving the government more power to sanction the hackers who target critical infrastructure.

—————

MasterCard DNS Error Went Unnoticed for Years

The payment card giant MasterCard just fixed a glaring error in its domain name server settings that could have allowed anyone to intercept or divert Internet traffic for the company by registering an unused domain name. The misconfiguration persisted for nearly five years until a security researcher spent $300 to register the domain and prevent it from being grabbed by cybercriminals.

A DNS lookup on the domain az.mastercard.com on Jan. 14, 2025 shows the mistyped domain name a22-65.akam.ne.

From June 30, 2020 until January 14, 2025, one of the core Internet servers that MasterCard uses to direct traffic for portions of the mastercard.com network was misnamed. MasterCard.com relies on five shared Domain Name System (DNS) servers at the Internet infrastructure provider Akamai [DNS acts as a kind of Internet phone book, by translating website names to numeric Internet addresses that are easier for computers to manage].

All of the Akamai DNS server names that MasterCard uses are supposed to end in “akam.net” but one of them was misconfigured to rely on the domain “akam.ne.”
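
For the curious, here is a minimal sketch of how one might audit a zone for this kind of typo, using the dnspython library. This is an illustration only, not Caturegli’s actual tooling, and the expected suffix is specific to this example:

```python
# A minimal sketch of auditing a domain's NS records for typo'd suffixes
# like "akam.ne" vs. "akam.net". Requires dnspython: pip install dnspython
import dns.resolver

EXPECTED = ("akam.net.",)  # suffixes the Akamai server names should end in

def audit_ns(domain: str) -> None:
    for record in dns.resolver.resolve(domain, "NS"):
        name = record.target.to_text()  # e.g. "a22-65.akam.net."
        status = "ok" if name.endswith(EXPECTED) else "SUSPECT"
        print(f"{status}: {domain} delegates to {name}")

audit_ns("mastercard.com")
```

A real audit would also check each delegated name server's domain against registry (WHOIS/RDAP) data to confirm it is actually registered and under the organization's control.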

This tiny but potentially critical typo was discovered recently by Philippe Caturegli, founder of the security consultancy Seralys. Caturegli said he guessed that nobody had yet registered the domain akam.ne, which is under the purview of the top-level domain authority for the West African nation of Niger.

Caturegli said it took $300 and nearly three months of waiting to secure the domain with the registry in Niger. After enabling a DNS server on akam.ne, he noticed hundreds of thousands of DNS requests hitting his server each day from locations around the globe. Apparently, MasterCard wasn’t the only organization that had fat-fingered a DNS entry to include “akam.ne,” but they were by far the largest.

Had he enabled an email server on his new domain akam.ne, Caturegli likely would have received wayward emails directed toward mastercard.com or other affected domains. If he’d abused his access, he probably could have obtained website encryption certificates (SSL/TLS certs) that were authorized to accept and relay web traffic for affected websites. He may even have been able to passively receive Microsoft Windows authentication credentials from employee computers at affected companies.

But the researcher said he didn’t attempt to do any of that. Instead, he alerted MasterCard that the domain was theirs if they wanted it, copying this author on his notifications. A few hours later, MasterCard acknowledged the mistake, but said there was never any real threat to the security of its operations.

“We have looked into the matter and there was not a risk to our systems,” a MasterCard spokesperson wrote. “This typo has now been corrected.”

Meanwhile, Caturegli received a request submitted through Bugcrowd, a platform that offers financial rewards and recognition to security researchers who find flaws and work privately with the affected vendor to fix them. The message suggested that his public disclosure of the MasterCard DNS error via a post on LinkedIn (after he’d secured the akam.ne domain) was not aligned with ethical security practices, and passed on a request from MasterCard to have the post removed.

MasterCard’s request to Caturegli, a.k.a. “Titon” on infosec.exchange.

Caturegli said while he does have an account on Bugcrowd, he has never submitted anything through the Bugcrowd program, and that he reported this issue directly to MasterCard.

“I did not disclose this issue through Bugcrowd,” Caturegli wrote in reply. “Before making any public disclosure, I ensured that the affected domain was registered to prevent exploitation, mitigating any risk to MasterCard or its customers. This action, which we took at our own expense, demonstrates our commitment to ethical security practices and responsible disclosure.”

Most organizations have at least two authoritative domain name servers, but some handle so many DNS requests that they need to spread the load over additional DNS server domains. In MasterCard’s case, that number is five, so it stands to reason that if an attacker managed to seize control over just one of those domains they would only be able to see about one-fifth of the overall DNS requests coming in.

But Caturegli said the reality is that many Internet users are relying at least to some degree on public traffic forwarders or DNS resolvers like Cloudflare and Google.

“So all we need is for one of these resolvers to query our name server and cache the result,” Caturegli said. By setting a long TTL or “Time To Live” — the setting that controls how long resolvers may cache and reuse a DNS answer — on the records served for the hijacked name server, an attacker can have poisoned instructions for the target domain propagated by large cloud providers.

“With a long TTL, we may reroute a LOT more than just 1/5 of the traffic,” he said.
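
To make the TTL point concrete, here is a tiny dnspython snippet (an illustration, not part of the research described here) that reads the cache lifetime a resolver reports for an answer:

```python
# Illustration only: read the TTL (cache lifetime, in seconds) that a
# resolver returns for a record set. A poisoned answer served with a
# large TTL would sit in resolver caches for that long.
import dns.resolver

answer = dns.resolver.resolve("example.com", "NS")
print(answer.rrset.ttl)  # seconds this answer may be cached
```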

The researcher said he’d hoped that the credit card giant might thank him, or at least offer to cover the cost of buying the domain.

“We obviously disagree with this assessment,” Caturegli wrote in a follow-up post on LinkedIn regarding MasterCard’s public statement. “But we’ll let you judge — here are some of the DNS lookups we recorded before reporting the issue.”

Caturegli posted this screenshot of MasterCard domains that were potentially at risk from the misconfigured domain.

As the screenshot above shows, the misconfigured DNS server Caturegli found involved the MasterCard subdomain az.mastercard.com. It is not clear exactly how MasterCard uses this subdomain; however, the naming conventions suggest the domains correspond to production servers at Microsoft’s Azure cloud service. Caturegli said the external Internet addresses of these servers mostly point to Cloudflare, but internally the domains all resolve to Internet addresses at Microsoft.

“Don’t be like Mastercard,” Caturegli concluded in his LinkedIn post. “Don’t dismiss risk, and don’t let your marketing team handle security disclosures.”

One final note: The domain akam.ne has been registered previously — in December 2016 by someone using the email address um-i-delo@yandex.ru. The Russian search giant Yandex reports this user account belongs to an “Ivan I.” from Moscow. Passive DNS records from DomainTools.com show that between 2016 and 2018 the domain was connected to an Internet server in Germany, and that the domain was left to expire in 2018.

This is interesting given a comment on Caturegli’s LinkedIn post from an ex-Cloudflare employee who linked to a report he co-authored on a similar typo domain apparently registered in 2017 for organizations that may have mistyped their AWS DNS server as “awsdns-06.ne” instead of “awsdns-06.net.” DomainTools reports that this typo domain also was registered to a Yandex user (playlotto@yandex.ru), and was hosted at the same German ISP — Team Internet (AS61969).
