ChatGPT is now writing legislation. Is this the future?
It’s not unheard of for legislators in the United States to turn to interest groups to help draft large chunks of legislation, even when they may be the target of proposed regulations.
But in what may be a first, a Massachusetts state senator has used a surging new tool to help write a bill aimed at restricting it: ChatGPT, the artificial intelligence chatbot.
On Friday, state Sen. Barry Finegold (D) introduced legislation, “drafted with the help of ChatGPT,” that would set data privacy and security safeguards for the service and others like it.
The tool, which channels AI language models to generate humanlike responses to queries, “has taken the internet by storm,” as my colleagues Pranshu Verma and Rachel Lerman wrote. “Humans are asking it questions, and it’s sending answers back that are eerily lifelike, chatty, sometimes humorous and at other times unsettling and problematic,” they wrote.
Now, for better or worse, the tool is contributing to the democratic process.
Finegold and chief of staff Justin Curtis said in an interview that while the chatbot initially rejected their request to whip up a bill to regulate services like ChatGPT, with some trial and error it eventually produced a draft that the state senator described as “70 percent there.”
“It definitely required a little bit of nudging and a little bit of specificity in terms of what the prompt actually was. You couldn’t just say, ‘draft a bill to regulate ChatGPT’ … but if you had broad ideas, it could have a little bit more particularity with it,” Curtis said.
ChatGPT created a draft, later refined and formatted by Finegold’s office, that outlined restrictions against discriminatory data use and plagiarism and requirements that companies maintain “reasonable security practices,” according to screenshots shared with The Technology 202.
While much of it was in response to specific queries, Curtis said the tool did make some original contributions. “It actually had some additional ideas that it generated, especially around de-identification, data security,” he said.
Finegold said they hatched the idea to highlight the tool’s power — and the need to craft rules around its use.
“This is an incredibly powerful technology now. … Where we missed the boat with Facebook, with some of these other early [tech companies], we didn’t put in proper guardrails, and I think these companies actually need that,” Finegold said.
But he also argued the tool, while imperfect, could help elected officials conduct the business of the people. “I think it’s going to be able to expedite us doing things,” he said.
While the chatbot has generated enormous buzz in tech circles, it’s also increasingly drawn scrutiny for some of those imperfections, including reports of racial and gender biases seeping into its responses, along with inaccuracies and falsehoods.
If the tool is picked up by other legislators, those issues could have ripple effects.
Daniel Schuman, a policy director at the Demand Progress advocacy group, argued that there is a place for AI-driven tools like ChatGPT in the legislative process, from summarizing documents to comparing materials and bills — but not without significant human oversight.
“AI also can have significant biases that can arise from the dataset used to create it and the developers who create it, so humans must always be in the loop to make sure that it is a labor-saving device, not a democracy-replacement device,” he said in an email.
Zach Graves, executive director of the Lincoln Network think tank, said he doesn’t expect ChatGPT to be used to draft bills often. But it could help with other functions, like communicating with constituents or the press.
“In particular, this could include initial drafts of constituent letters or casework, boosting the efficiency of district offices and [legislative correspondents],” he said. “But it could also help with drafting dear colleague letters, tweets, press releases and other functions.”
With one bill in the works, its backers say those discussions are only just starting.
“This legislation is just really a first step to start a conversation,” Finegold said.
Our top tabs
Biden taps aide who was on Facebook’s board for chief of staff post
Jeff Zients, who oversaw the Biden administration’s response to the coronavirus pandemic, will replace White House chief of staff Ron Klain, who plans to step down from his post in the coming weeks, Tyler Pager and Yasmeen Abutaleb report. Zients, a management consultant, worked in the Obama administration, leading the National Economic Council and helping turn around the disastrous HealthCare.gov website.
A spokesperson for the White House declined to comment.
“After leaving the Obama administration, Zients ran an investment firm and spent two years on the board of Facebook, experience that has drawn criticism from liberals,” Tyler and Yasmeen write. “Zients left the Facebook board after disagreements about the company’s direction.” Zients liquidated his Facebook stock in 2020, Biden’s presidential transition team told the Verge at the time.
Real-world attacks rise alongside Twitter hate
Some hate speech on Twitter, including antisemitic and anti-gay slurs, has been spiking, and so have physical attacks, Joseph Menn reports. The Network Contagion Research Institute, which tracks misinformation, is set to release research this month suggesting a connection between variations of the word “groomer” and real-life incidents. The slur, which is often aimed at gay people, falsely implies that they seek to seduce children. Gay people are not more likely to be predators than straight people.
“In the past three to four months, we have seen an increase in anti-LGBTQ incidents, and you can see a statistical correlation between these real-world incidents and the increased use of the term ‘groomer’ on Twitter,” said Alexander Reid Ross, an analyst with the Network Contagion Research Institute who shared the findings. Ross didn’t say that the term’s use led to the real-world violence.
Meta tweaks Russia-Ukraine content policies
Facebook parent Meta removed a far-right militia group known as the Azov Regiment from its list of dangerous groups, a move that will enable its members to create accounts and will also allow other users to praise the group, Naomi Nix reports. The change comes around 11 months after Russia invaded Ukraine.
“In this case, Meta argues that the Azov Regiment is now separate from the far-right nationalist Azov Movement. It notes that the Ukrainian government has formal command and control over the unit,” Naomi writes. “Meta said in a statement that other ‘elements of the Azov Movement, including the National Corp., and its founder Andriy Biletsky’ are still on its list of dangerous individuals and organizations.” The company won’t allow users to post “hate speech, hate symbols, calls for violence and any other content which violates our Community Standards,” it said.
- The Senate Judiciary Committee holds a hearing on competition in the live entertainment industry on Tuesday at 10 a.m.
- Sen. Cynthia M. Lummis (R-Wyo.) speaks at an R Street Institute event on disclosing government requests and communications with social media companies on Tuesday at 3 p.m. The group hosts an event on privacy and security legislation on Thursday at 4 p.m.
Before you log off
That’s all for today — thank you so much for joining us! Make sure to tell others to subscribe to The Technology 202 here. Get in touch with tips, feedback or greetings on Twitter or email.