
Is Elon Musk planning to use artificial intelligence to run the US government? That appears to be his plan, but experts say it's a "very bad idea."
Musk has fired tens of thousands of federal government workers through his Department of Government Efficiency (DOGE) and, according to reports, is requiring remaining workers to send a weekly email with five bullet points describing what they accomplished that week.
Since this will undoubtedly flood DOGE with hundreds of thousands of these emails, Musk is relying on artificial intelligence to process the responses and help determine who should remain employed. Part of the plan is also reportedly to replace many federal workers with AI systems.
It is not yet clear what any of these AI systems look like or how they work (something Democrats in the US Congress are demanding to be briefed on), but experts warn that using AI in the federal government without robust testing and vetting of these tools could have catastrophic consequences.
"To use AI tools responsibly, they must be designed with a particular purpose in mind. They must be tested and validated. It's not clear whether any of that is being done here," says Cary Coglianese, a professor of law and political science at the University of Pennsylvania.
Coglianese says that if AI is being used to decide who should be terminated from their jobs, he would be "very skeptical" of that approach. He says there is real potential for mistakes to be made, for bias to creep in, and for other problems to arise.
"It's a very bad idea. We don't know anything about how AI would make such decisions (including how the underlying algorithms were trained), the data on which such decisions would be based, or why we should believe it is trustworthy," says Shobita Parthasarathy, a professor of public policy at the University of Michigan.
These concerns don't seem to be deterring the current government, especially Musk, a billionaire businessman and close adviser to US President Donald Trump, from pursuing these efforts.
The US State Department, for example, is planning to use AI to scan the social media accounts of foreign nationals to identify anyone who may be a Hamas supporter, in an effort to revoke their visas. The US government has so far not been transparent about how these kinds of systems might work.
"The Trump administration is really interested in pursuing AI at all costs, and I would like to see AI used honestly, fairly, and equitably," says Hilke Schellmann, a professor of journalism at New York University and an expert on artificial intelligence. "There could be a lot of harm that goes undetected."
AI experts say there are many ways in which government use of AI can go wrong, so it must be adopted carefully and conscientiously. Coglianese says that governments around the world, including the Netherlands and the United Kingdom, have had problems with poorly executed AI that can make mistakes or show bias and, as a result, wrongly deny residents welfare benefits they need, for example.
In the US, the state of Michigan had a problem with an AI system used to detect fraud in its unemployment system when it incorrectly flagged thousands of cases of suspected fraud. Many residents who were denied benefits were treated harshly, including being hit with multiple penalties and accused of fraud. Some people were arrested and even filed for bankruptcy. After five years, the state acknowledged that the system was flawed, and a year later it ended up refunding $21 million to residents wrongly accused of fraud.
"Most of the time, the officials who purchase and deploy these technologies know little about how they work, their biases and limitations, and their errors," says Parthasarathy. "Because low-income and otherwise marginalized communities tend to have the most contact with government through social services (such as unemployment benefits, foster care, and law enforcement), they tend to be most affected by problematic AI."
AI has also caused problems in government when used in courts to determine things such as someone's eligibility for parole, or in police departments when used to try to predict where crime is likely to occur.
Schellmann says the AI used by police departments is typically trained on historical data from those departments, and this can lead it to recommend over-policing areas that have long been over-policed, especially communities of color.
One problem with potentially using AI to replace workers in the federal government is that there are so many different kinds of government jobs requiring specific skills and knowledge. An IT person at the Department of Justice may have a very different job from one at the Department of Agriculture, for example, even though they hold the same title. An AI program would therefore have to be complex and highly trained even to do a mediocre job of replacing a human worker.
"I don't think you can randomly cut people's jobs and then (replace them with any AI)," says Coglianese. "The tasks those people were performing are often highly specialized and specific."
Schellmann says you could use AI to do the parts of a job that are predictable or repetitive, but you can't just replace someone entirely. That might theoretically be possible if you spent years developing the right AI tools to do many, many different kinds of jobs – a very difficult task, and not what the government appears to be doing right now.
"These workers have real expertise and a nuanced understanding of the issues, which AI does not. AI doesn't actually 'understand' anything," says Parthasarathy. "It's the use of computational methods to find patterns based on historical data. So it's likely to have limited utility, and may even reinforce historical biases."
Former US President Joe Biden's administration issued an executive order in 2023 focused on the responsible use of AI in government, including how AI would be tested and vetted, but that order was rescinded by the Trump administration in January. Schellmann says this makes it less likely that AI will be used responsibly in government, or that researchers will be able to understand how AI is being used.
All that said, if AI is developed and used responsibly, it can be very helpful. AI can automate repetitive tasks so that workers can focus on more important things, or help workers solve problems they are struggling with. But it needs to be deployed carefully, and that takes time.
"That's not to say we couldn't use AI tools wisely," Coglianese says. "But governments go astray when they try to rush and do things quickly without adequate public input and thorough validation and verification of how the algorithm actually works."