R.I.P. Executive Order 14110 — AI is Dancing on Your Grave

On January 20th, President Trump rescinded Executive Order 14110, a Biden directive fostering the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”

Big Tech can now, as Mark Zuckerberg said, “Move fast and break things.”

Among the provisions struck down were consumer privacy protections, national security safeguards, and support for workers disrupted (replaced?) by AI.

I think most of us would favor those goals.

The President’s rescission also struck language recognizing that “the United States should be a global leader in AI development” and committing to “protect inventors and creators, and promote AI innovation, including at startups and small businesses.”

And Big Tech didn’t have to donate $6M to an inaugural for that endorsement.

E.O. 14110 was not perfect. Nothing is. But it was a start. Now, we face a leadership void with no public principles guiding decisions on how AI should be implemented — except those agreements being made with “a wink and a nod.”

So, where do we stand on a replacement Executive Order?

On January 23rd, the President followed up his initial rescission with a brief replacement order calling for a complete plan within 180 days. It’s scary to think how much can happen in the fast-paced world of AI during that stretch of time. Like the DeepSeek freak-out.

By one measure, the President’s replacement order weighs in at 658 words, of which 148 are boilerplate. In contrast, Biden’s original order contained 19,718 words. This difference may seem nitpicky, but it gives clues as to what will come.
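
For the curious, here’s roughly how those raw tallies can be reproduced: a minimal Python sketch, assuming you’ve saved each order’s plain text locally. The filenames are placeholders, and separating out the boilerplate is left as a manual step.

```python
import re

def word_count(path: str) -> int:
    """Count whitespace-delimited tokens in a text file."""
    with open(path, encoding="utf-8") as f:
        return len(re.findall(r"\S+", f.read()))

for label, path in [
    ("Replacement order (Jan 2025)", "eo_replacement_2025.txt"),  # placeholder filename
    ("E.O. 14110 (Oct 2023)", "eo_14110.txt"),                    # placeholder filename
]:
    print(f"{label}: {word_count(path):,} words")
```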

Perhaps by design, the President’s replacement order is sparse in detail and skips direct references to consumer privacy, national security, and worker support. Instead, the new order seems to bundle these protections under “human flourishing.” Such high-level guidance leaves significant wiggle room for shenanigans during the 180-day hiatus.

In a nod to the culture wars, the new order also calls for AI to be free from “ideological bias or engineered social agendas.” Again, these are opaque words of guidance that open the door to a world of interpretations. Who defines which ‘ideologies’ and ‘social agendas’ are on the chopping block? Hopefully, adherence to the Constitution is not one of them.

So, what are our other options?

Thankfully, we live in a country with numerous avenues to help us navigate a safe and secure rollout of artificial intelligence that benefits all citizens. Let’s examine the available tools.

1. Self-Regulation

Big Tech would love the freedom to run its own show, and given how cozy it has become with the new administration, it may get its wish. Self-regulation can indeed unleash innovation. Take the internet, where industry-created standards deliver the very words you are now reading.

We should also not assume that all government regulation produces positive outcomes. Consider Section 230 of the Communications Decency Act, which gave internet platforms immunity from liability for user-generated content and opened the floodgates to anonymous hate speech and disinformation that, ironically, is destroying the fabric of decency.

But, lest we think self-regulation is the answer, there are notable examples where it proved insufficient. Consider the Great Recession of 2008, driven by risky subprime lending; the Boeing 737 MAX crashes, enabled by industry self-certification; or Purdue Pharma’s marketing of OxyContin as a non-addictive painkiller.

These examples demonstrate that while self-regulation can work in certain circumstances, it often fails when profit motives conflict with public safety, transparency, or accountability. In such cases, external oversight is crucial.

2. Congressional Action

Recent history raises doubts about the ability of the U.S. Congress to overcome its divisions and pass statutes that protect citizens and hold industry accountable. Despite 86% public support, the Kids Online Safety Act, introduced in May 2023, has yet to be passed into law. This is particularly concerning as social media platforms introduce AI into their algorithms, influencing our children’s exposure to harmful information.

Yet the only way to overcome Executive Order whiplash with each newly elected President is for Congress to instill long-term consistency by performing its legislative duty. Congress should prepare to review the President’s 180-day AI plan, both to ensure its lawfulness and to identify elements worth codifying in future legislation.

3. State Governments

Despite the pitfalls of creating an unnavigable patchwork of AI rules and regulations, state governments may represent our best hope for realizing the promise of AI. In that vein, Governor Newsom’s September 2023 executive order N-12-23 established California as a leader in developing “a deliberate and responsible process for evaluation and deployment of AI within state government.”

Although Governor Newsom vetoed SB 1047, a bill with potentially significant impacts on the AI industry, he did sign 18 AI-related laws that reshape the state’s governance of AI. According to the law firm BCLP, as of June 2024, 21 states have passed some form of AI law, 14 (including DC) have proposed laws, and 16 have no legislation proposed.

Overall, most state AI laws are narrowly focused on immediate concerns such as consumer privacy, hiring bias, and surveillance. This partly reflects limited resources for comprehensive oversight and the relative ease of passing bills that address hot-button concerns.

Broader state efforts are less common, and many are task-force-based rather than regulatory. The broader the initiative, the more resistance it meets over fears of stifling innovation and doubts about how it would be enforced.

In the absence of federal legislation, states should adopt a balanced approach to regulating AI. Establishing a task force is a sensible first step, allowing a state to assess AI’s impact within its jurisdiction and to develop tailored recommendations.

Collaborating with diverse stakeholders, including industry, academia, and citizens, ensures a holistic understanding of AI’s challenges and opportunities. States should also coordinate with their neighbors to align standards and share best practices, ensuring consistency and reducing regulatory complexity.

States can begin by focusing on application-specific regulations that address immediate risks, such as bias in hiring algorithms, transparency in decision-making systems, and privacy concerns related to biometric data. Leveraging existing laws, like consumer protection or anti-discrimination regulations, to include AI-specific provisions can help streamline efforts without overhauling the regulatory landscape.

Transparency and accountability should be emphasized: AI developers should be required to disclose system details, explain decisions that impact individuals, and provide mechanisms for public feedback and redress. Encouraging ethical AI development through incentives, sandboxes, and university partnerships can support innovation while safeguarding public values.

Finally, states should adopt adaptive governance strategies, regularly reviewing and updating policies to keep pace with evolving technologies, while advocating for federal collaboration to establish baseline regulations. This balanced approach allows states to protect citizens and foster innovation simultaneously.

4. People Power

Individuals can play a crucial role in mitigating AI’s negative impacts through their online behaviors, education, and collective action. By being mindful of their digital footprint, people can help shape the way AI systems operate.

This includes limiting the sharing of personal data, interacting with credible content, and avoiding engagement with biased or harmful material. Simple actions like adjusting privacy settings, curating reposts, and reporting misinformation can reduce the influence of harmful AI-driven algorithms.

Education is another vital tool. Understanding the basics of AI and its potential risks equips individuals to make informed choices and foster discussions within their communities. Raising awareness about the ethical implications of AI — such as bias, privacy concerns, and misinformation — encourages accountability from developers and institutions.

Collective action amplifies individual efforts. Joining advocacy groups, signing petitions, and pushing for ethical AI legislation can drive systemic change. At the same time, people can promote transparency by demanding that AI-generated content be clearly labeled and systems explain their decisions.

Finally, individuals can ensure AI is used responsibly by critically evaluating its outputs, limiting over-reliance, and advocating for fair and inclusive applications. Together, these actions empower individuals to shape AI to benefit society while minimizing harm.

Conclusion

The rescission of Executive Order 14110 leaves a troubling gap in AI governance. The absence of clear federal principles necessitates a multifaceted response.

While Big Tech’s self-regulation promises innovation, there are concerns it will not protect public interests when profit motives conflict with safety and accountability.

States have taken the lead with narrow laws that address immediate concerns. Regional collaboration is critical for a more unified approach to prevent a fragmented regulatory landscape.

Ultimately, individuals hold the power to influence the development and deployment of AI through their behaviors and advocacy. By educating themselves, promoting transparency, and pushing for ethical AI practices, citizens can help steer the technology toward equitable outcomes.

While the future of AI governance remains uncertain, it is clear that a collaborative effort — spanning individuals, states, and federal policymakers — is critical to ensuring that AI innovation benefits society without causing unintended harm.
