News

What Is Generative AI? How Does It Work?

It’s all anyone can talk about. In classrooms, boardrooms, on the nightly news, and around the dinner table, artificial intelligence (AI) is dominating conversations. Given the passion with which everyone is debating, celebrating, and vilifying AI, you’d think it was a completely new technology; however, AI has existed in various forms for decades. Only now is it accessible to everyday people like you and me. 

The most famous of these mainstream AI tools include ChatGPT, Voice.ai, DALL-E, and Bard. The specific technology that links them is called generative artificial intelligence, sometimes shortened to gen AI. You’re likely to have heard the term in the same sentence as deepfake, AI art, and ChatGPT. But how does the technology work? 

Here’s the simple explanation of how generative AI powers many of today’s famous (or infamous) AI tools. 

What Is Generative AI? 

Generative AI is the specific type of artificial intelligence that powers many of the AI tools available today in the pockets of the public. (The “G” in ChatGPT stands for generative.) Gen AI’s earliest uses were in online chatbots in the 1960s.1 Now, as AI and related technologies like deep learning and machine learning have evolved, generative AI can answer prompts; create text, art, and videos; and even simulate convincing human voices.  

How Does Generative AI Work? 

Think of generative AI as a sponge that desperately wants to delight the users who ask it questions. 

First, a gen AI model begins with a massive information deposit. Gen AI can soak up huge amounts of data. For instance, ChatGPT was trained on roughly 300 billion words and hundreds of gigabytes’ worth of data through the year 2021.2 Rather than remembering every piece of information verbatim, the model learns the statistical patterns in what it is fed, and it uses those patterns to inform any answer it spits out.  

From there, some generative models are refined with a generative adversarial network (GAN), in which two neural networks compete within the model: a generator tries to produce output realistic enough to fool a discriminator, while the discriminator tries to tell generated output apart from real examples. Each network improves by trying to outdo the other. The more data and feedback the model receives, the “smarter” it becomes. 
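The adversarial idea can be made concrete with a toy one-dimensional sketch: a tiny linear “generator” learns to mimic numbers drawn from a target distribution by competing against a logistic-regression “discriminator.” This is purely illustrative of the GAN concept, not the architecture behind any particular product, and every number in it is an assumption chosen for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy 1-D GAN: generator g(z) = a*z + b tries to mimic samples from
# N(4, 1); discriminator D(x) = sigmoid(w*x + c) tries to tell them apart.
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.01

for _ in range(5000):
    z = rng.normal(0.0, 1.0, 64)
    fake = a * z + b                    # generated samples
    real = rng.normal(4.0, 1.0, 64)     # "real" data

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(fake), i.e. try to fool D.
    d_fake = sigmoid(w * fake + c)
    dx = (1 - d_fake) * w               # gradient of log D w.r.t. each fake
    a += lr * np.mean(dx * z)
    b += lr * np.mean(dx)

fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
print(f"mean of generated samples: {fake_mean:.1f} (target distribution mean: 4.0)")
```

Over many rounds, the generator’s output drifts toward the target distribution, which is the sense in which the two networks “train each other.”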

Google’s content generation tool, Bard, is a great way to illustrate generative AI in action. Bard is based on gen AI and large language models. It’s trained on all types of literature, and when asked to write a short story, it composes by finding language patterns and choosing, word by word, whichever word most often follows the one before it. In a 60 Minutes segment, Bard composed an eloquent short story that nearly brought the presenter to tears, but its composition was an exercise in patterns, not a display of understanding human emotions.3 So, while the technology is certainly smart, it’s not exactly creative. 
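That word-by-word pattern-following can be sketched with a toy model that only looks one word back. The tiny corpus below is hypothetical, and real large language models weigh far more context, but the “pick a likely next word” principle is the same:

```python
from collections import Counter, defaultdict

# Count which word most often follows each word in a training text,
# then "compose" by repeatedly emitting the likeliest successor.
corpus = (
    "the cat sat on the mat and the cat saw the dog and "
    "the dog sat on the mat"
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def compose(start, length):
    words = [start]
    for _ in range(length):
        successors = follows[words[-1]]
        if not successors:
            break  # dead end: this word never precedes anything in the corpus
        words.append(successors.most_common(1)[0][0])
    return " ".join(words)

print(compose("the", 5))
```

The output is fluent-looking but mechanical: the model strings together statistically common sequences with no notion of what a cat or a mat is.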

How to Use Generative AI Responsibly 

The major debates surrounding generative AI usually deal with how to use gen AI-powered tools for good. For instance, ChatGPT can be an excellent outlining partner if you’re writing an essay or completing a task at work; however, it’s irresponsible and considered cheating for a student or an employee to submit ChatGPT-written content word for word as their own work. If you do decide to use ChatGPT, be transparent that it helped you with your assignment. Cite it as a source and make sure to double-check your work!  

One lawyer got in serious trouble when he trusted ChatGPT to write an entire brief and then didn’t take the time to edit its output. It turns out that much of the content was incorrect and cited sources that didn’t exist.4 This is a phenomenon known as an AI hallucination, meaning the program fabricated a response instead of admitting that it didn’t know the answer to the prompt.  

Deepfake and voice simulation technology supported by generative AI are other applications that people must use responsibly and with transparency. Deepfakes and AI voices are gaining popularity in viral videos and on social media. Posters use the technology in funny skits poking fun at celebrities, politicians, and other public figures. However, to avoid confusing the public and possibly spurring fake news reports, these comedians have a responsibility to add a disclaimer that the real person was not involved in the skit. Fake news reports can spread with the speed and ferocity of wildfire.   

The widespread use of generative AI doesn’t necessarily mean the internet is a less authentic or a riskier place. It just means that people must use sound judgment and hone their radar for identifying malicious AI-generated content. Generative AI is an incredible technology. When used responsibly, it can add great color, humor, or a different perspective to written, visual, and audio content. 

1TechTarget, “What is generative AI? Everything you need to know”

2BBC Science Focus, “ChatGPT: Everything you need to know about OpenAI’s GPT-4 tool”  

360 Minutes, “Artificial Intelligence Revolution”

4The New York Times, “Here’s What Happens When Your Lawyer Uses ChatGPT”

The post What Is Generative AI? How Does It Work? appeared first on McAfee Blog.

—————
Free Secure Email – Transcom Sigma
Boost Inflight Internet
Transcom Hosting
Transcom Premium Domains

Friday Squid Blogging: Giant Squid Nebula

Pretty:

A mysterious squid-like cosmic cloud, this nebula is very faint, but also very large in planet Earth’s sky. In the image, composed with 30 hours of narrowband image data, it spans nearly three full moons toward the royal constellation Cepheus. Discovered in 2011 by French astro-imager Nicolas Outters, the Squid Nebula’s bipolar shape is distinguished here by the telltale blue-green emission from doubly ionized oxygen atoms. Though apparently surrounded by the reddish hydrogen emission region Sh2-129, the true distance and nature of the Squid Nebula have been difficult to determine. Still, a more recent investigation suggests Ou4 really does lie within Sh2-129 some 2,300 light-years away. Consistent with that scenario, the cosmic squid would represent a spectacular outflow of material driven by a triple system of hot, massive stars, cataloged as HR8119, seen near the center of the nebula. If so, this truly giant squid nebula would physically be over 50 light-years across.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

—————

The AI Dividend

For four decades, Alaskans have opened their mailboxes to find checks waiting for them, their cut of the black gold beneath their feet. This is Alaska’s Permanent Fund, funded by the state’s oil revenues and paid to every Alaskan each year. We’re now in a different sort of resource rush, with companies peddling bits instead of oil: generative AI.

Everyone is talking about these new AI technologies—like ChatGPT—and AI companies are touting their awesome power. But they aren’t talking about how that power comes from all of us. Without all of our writings and photos that AI companies are using to train their models, they would have nothing to sell. Big Tech companies are currently taking the work of the American people, without our knowledge and consent, without licensing it, and are pocketing the proceeds.

You are owed profits for your data that powers today’s AI, and we have a way to make that happen. We call it the AI Dividend.

Our proposal is simple, and harkens back to the Alaskan plan. When Big Tech companies produce output from generative AI that was trained on public data, they would pay a tiny licensing fee, by the word or pixel or relevant unit of data. Those fees would go into the AI Dividend fund. Every few months, the Commerce Department would send out the entirety of the fund, split equally, to every resident nationwide. That’s it.

There’s no reason to complicate it further. Generative AI needs a wide variety of data, which means all of us are valuable—not just those of us who write professionally, or prolifically, or well. Figuring out who contributed to which words the AIs output would be both challenging and invasive, given that even the companies themselves don’t quite know how their models work. Paying the dividend to people in proportion to the words or images they create would just incentivize them to create endless drivel, or worse, use AI to create that drivel. The bottom line for Big Tech is that if their AI model was created using public data, they have to pay into the fund. If you’re an American, you get paid from the fund.

Under this plan, hobbyists and American small businesses would be exempt from fees. Only Big Tech companies—those with substantial revenue—would be required to pay into the fund. And they would pay at the point of generative AI output, such as from ChatGPT, Bing, Bard, or their embedded use in third-party services via Application Programming Interfaces.

Our proposal also includes a compulsory licensing plan. By agreeing to pay into this fund, AI companies will receive a license that allows them to use public data when training their AI. This won’t supersede normal copyright law, of course. If a model starts producing copyright material beyond fair use, that’s a separate issue.

Using today’s numbers, here’s what it would look like. The licensing fee could be small, starting at $0.001 per word generated by AI. A similar type of fee would be applied to other categories of generative AI outputs, such as images. That’s not a lot, but it adds up. Since most of Big Tech has started integrating generative AI into products, these fees would mean an annual dividend payment of a couple hundred dollars per person.
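As a back-of-the-envelope check on that claim, the fee math works out as follows. The annual word count here is our illustrative assumption, not a measured figure from the proposal:

```python
# Hypothetical inputs for the AI Dividend arithmetic.
fee_per_word = 0.001      # dollars per AI-generated word, as proposed
words_per_year = 100e12   # assumed annual covered output: 100 trillion words
us_population = 334e6     # approximate U.S. population

fund = fee_per_word * words_per_year   # total paid into the fund each year
per_person = fund / us_population      # fund split equally nationwide
print(f"annual dividend: ${per_person:,.0f} per person")
```

Under these assumptions the fund collects about $100 billion a year, landing in the “couple hundred dollars per person” range; the real figure would scale with how much generative output the fee actually covers.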

The idea of paying you for your data isn’t new, and some companies have tried to do it themselves for users who opted in. And the idea of the public being repaid for use of their resources goes back to well before Alaska’s oil fund. But generative AI is different: It uses data from all of us whether we like it or not, it’s ubiquitous, and it’s potentially immensely valuable. It would cost Big Tech companies a fortune to create a synthetic equivalent to our data from scratch, and synthetic data would almost certainly result in worse output. They can’t create good AI without us.

Our plan would apply to generative AI used in the US. It also only issues a dividend to Americans. Other countries can create their own versions, applying a similar fee to AI used within their borders. Just like an American company collects VAT for services sold in Europe, but not here, each country can independently manage their AI policy.

Don’t get us wrong; this isn’t an attempt to strangle this nascent technology. Generative AI has interesting, valuable, and possibly transformative uses, and this policy is aligned with that future. Even with the fees of the AI Dividend, generative AI will be cheap and will only get cheaper as technology improves. There are also risks—both everyday and esoteric—posed by AI, and the government may need to develop policies to remedy any harms that arise.

Our plan can’t make sure there are no downsides to the development of AI, but it would ensure that all Americans will share in the upsides—particularly since this new technology isn’t possible without our contribution.

This essay was written with Barath Raghavan, and previously appeared on Politico.com.

—————

Top Suspect in 2015 Ashley Madison Hack Committed Suicide in 2014

When the marital infidelity website AshleyMadison.com learned in July 2015 that hackers were threatening to publish data stolen from 37 million users, the company’s then-CEO Noel Biderman was quick to point the finger at an unnamed former contractor. But as a new documentary series on Hulu reveals [SPOILER ALERT!], there was just one problem with that theory: Their top suspect had killed himself more than a year before the hackers began publishing stolen user data.

The new documentary, The Ashley Madison Affair, begins airing today on Hulu in the United States and on Disney+ in the United Kingdom. The series features interviews with security experts and journalists, Ashley Madison executives, victims of the breach and jilted spouses.

The series also touches on shocking new details unearthed by KrebsOnSecurity and Jeremy Bullock, a data scientist who worked with the show’s producers at the Warner Bros. production company Wall to Wall Media. Bullock had spent many hours poring over the hundreds of thousands of emails that the Ashley Madison hackers stole from Biderman and published online in 2015.

Wall to Wall reached out in July 2022 about collaborating with Bullock after KrebsOnSecurity published A Retrospective on the 2015 Ashley Madison Breach. That piece explored how Biderman — who is Jewish — had become the target of concerted harassment campaigns by anti-Semitic and far-right groups online in the months leading up to the hack.

Whoever hacked Ashley Madison had access to all employee emails, but they only released Biderman’s messages — three years’ worth. Apropos of my retrospective report, Bullock found that a great many messages in Biderman’s inbox were belligerent and anti-Semitic screeds from a former Ashley Madison employee named William Brewster Harrison.

William Harrison’s employment contract with Ashley Madison parent Avid Life Media.

The messages show that Harrison was hired in March 2010 to help promote Ashley Madison online, but the messages also reveal Harrison was heavily involved in helping to create and cultivate phony female accounts on the service.

There is evidence to suggest that in 2010 Harrison was directed to harass the owner of Ashleymadisonsucks.com into closing the site or selling the domain to Ashley Madison.

Ashley Madison’s parent company — Toronto-based Avid Life Media — filed a trademark infringement complaint in 2010 that succeeded in revealing a man named Dennis Bradshaw as the owner. But after being informed that Bradshaw was not subject to Canadian trademark laws, Avid Life offered to buy AshleyMadisonSucks.com for $10,000.

When Bradshaw refused to sell the domain, he and his then-girlfriend were subject to an unrelenting campaign of online harassment and blackmail. It now appears those attacks were perpetrated by Harrison, who sent emails from different accounts at the free email service Vistomail pretending to be Bradshaw, his then-girlfriend and their friends.

[As the documentary points out, the domain AshleyMadisonSucks.com was eventually transferred to Ashley Madison, which then shrewdly used it for advertising and to help debunk theories about why its service was supposedly untrustworthy].

Harrison even went after Bradshaw’s lawyer and wife, listing them both on a website he created called Contact-a-CEO[.]com, which Harrison used to besmirch the name of major companies — including several past employers — all entities he believed had slighted him or his family in some way. The site also claimed to include the names, addresses and phone numbers of top CEOs.

A cached copy of Harrison’s website, contact-the-ceo.com.

An exhaustive analysis of domains registered to the various Vistomail pseudonyms used by Harrison shows he also ran Bash-a-Business[.]com, which Harrison dedicated to “all those sorry ass corporate executives out there profiting from your hard work, organs, lives, ideas, intelligence, and wallets.” Copies of the site at archive.org show it was the work of someone calling themselves “The Chaos Creator.”

Will Harrison was terminated as an Ashley Madison employee in November 2011, and by early 2012 he’d turned his considerable harassment skills squarely against the company. Ashley Madison’s long-suspected army of fake female accounts came to the fore in August 2012 after the former sex worker turned activist and blogger Maggie McNeill published screenshots apparently taken from Ashley Madison’s internal systems suggesting that a large percentage of the female accounts on the service were computer-operated bots.

Ashley Madison’s executives understood that only a handful of employees at the time would have had access to the systems needed to produce the screenshots McNeill published online. In one exchange on Aug. 16, 2012, Ashley Madison’s director of IT was asked to produce a list of all company employees with all-powerful administrator access.

“Who or what is asdfdfsda@asdf.com?,” Biderman asked, after being sent a list of nine email addresses.

“It appears to be the email address Will used for his profiles,” the IT director replied.

“And his access was never shut off until today?,” asked the company’s general counsel Mike Dacks.

A Biderman email from 2012.

What prompted the data scientist Bullock to reach out were gobs of anti-Semitic diatribes from Harrison, who had taken to labeling Biderman and others “greedy Jew bastards.”

“So good luck, I’m sure we’ll talk again soon, but for now, Ive got better things in the oven,” Harrison wrote to Biderman after his employment contract with Ashley Madison was terminated. “Just remember I outsmarted you last time and I will outsmart and out maneuver you this time too, by keeping myself far far away from the action and just enjoying the sideline view, cheering for the opposition.”

A 2012 email from William Harrison to former Ashley Madison CEO Noel Biderman.

Harrison signed his threatening missive with the sign-off “We are legion,” suggesting that whatever comeuppance he had in store for Ashley Madison would come from a variety of directions and anonymous hackers.

The leaked Biderman emails show that Harrison made good on his threats, and that in the months that followed Harrison began targeting Biderman and other Ashley Madison executives with menacing anonymous emails and spoofed phone calls laced with profanity and anti-Semitic language.

But on Mar. 5, 2014, Harrison committed suicide by shooting himself in the head with a handgun. This fact was apparently unknown to Biderman and other Ashley Madison executives more than a year later when their July 2015 hack was first revealed.

Does Harrison’s untimely suicide rule him out as a suspect in the 2015 hack? Who is The Chaos Creator, and what else transpired between Harrison and Ashley Madison prior to his death? We’ll explore these questions in Part II of this story, to be published early next week.

—————