Lorica insights
Project Stargate: A Historic Leap for AI and Possible Crossroads for Privacy
Feb 24, 2025
The new administration wasted no time diving into the world of AI by announcing the Stargate Project, which aims to invest $500 billion over the next four years to build new AI infrastructure.

As we discussed in a previous blog post, this announcement has generated excitement about the scale and potential of the project. Yet it comes with an undercurrent of controversy. Alongside this initiative, the administration has also moved to rescind Biden's 2023 Executive Order on AI and to terminate the memberships of the Cyber Safety Review Board, decisions that have sparked concern among privacy advocates and cybersecurity experts.
This dual approach presents both promise and contradiction. On the one hand, uniting major tech players to build scalable, governed AI infrastructure is an encouraging step forward. On the other, unraveling existing policies and dismantling advisory boards critical to safeguarding AI systems raises questions about the administration’s priorities.
The challenge now is to strike a balance—fostering the promise of large-scale AI development while preserving the principles of equitable access and comprehensive security. In this blog post, we will explore the impact these executive orders could have on balancing innovation, inclusion, and security in AI.
Advancing AI without promoting safety and security is like serving ice cream to a dinner party of people with lactose intolerance. What’s the point of all that creamy vanilla and decadent chocolate if it just makes everyone sick?
Safety vs. Innovation?
Within hours of taking office on January 20, President Trump revoked the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Executive Order 14110), which President Biden signed in October 2023.
In the ongoing push and pull between secure development and unconstrained innovation, the Order had been the most significant federal effort to create a regulatory strategy for responsible AI, with a strong focus on cybersecurity. Yet the revocation states that the Order was stifling private-sector AI innovation with burdensome requirements.
Much of the Order had already been carried out: government agencies had studied AI's effects on workplaces, education, and public benefits, as well as its cybersecurity risks. Probably the biggest change, and the one with the clearest security implications, is eliminating the requirement that developers of major AI systems posing risks to U.S. national security, the economy, or public health or safety share safety test results and other information with the government before public release.
The Risks of Uncertainty
A few days later, President Trump signed a new Executive Order on Removing Barriers to American Leadership in Artificial Intelligence, which directs a review of all actions that were taken based on President Biden’s now-revoked Order. These actions should be altered or revoked if they do not “promote human flourishing, economic competitiveness, and national security.” This is all quite vague, but explanations should be coming: the new Order calls for developing an AI action plan within 180 days, led by a small team of White House tech and science officials.
From the point of view of organizations developing and deploying AI, these changes mainly bring uncertainty, if not confusion. Will the new administration be more hands-off in regulating AI, or is this just a reset in regulatory focus? The seemingly contradictory messages of prioritizing AI infrastructure through the Stargate Project while dismantling regulation could make organizational decision-making more difficult. Given the lack of clarity, these executive actions probably won’t inspire big shifts in approaches to cybersecurity and privacy in the short term, despite the headlines.
In the long term, if the new administration continues a more laissez-faire approach, organizations might anticipate regulation from elsewhere. Less federal oversight could motivate more state-level regulation, along the lines of the Colorado Artificial Intelligence Act. This has already been the case with data privacy: in the absence of federal leadership, 20 states have so far passed comprehensive data privacy laws. Global organizations will also need to take the European Union's AI Act into account. Internal governance would gain importance, too.
However, there is a real risk that an inconsistent regulatory landscape would cause companies to implement differing, and in some cases insufficient, standards for AI safety and security. This is concerning given that “Americans have some of the highest rates of mistrust of AI in the developed world”—and companies can’t succeed without trust.
National Security in the Spotlight
Despite these highly publicized clashes, the two administrations share a number of priorities, including cybersecurity in service of national security. This might explain why the AI Safety Institute, which Biden established last year with a focus on national security, is still in place for now. So is Biden's January 2025 Executive Order on Strengthening and Promoting Innovation in the Nation's Cybersecurity, which gives the U.S. government more authority to sanction bad actors.
The spotlight on national security helps explain why a somewhat routine decision to turn over membership as part of an administration change has raised concerns. On Trump's first day in office, the acting Secretary of Homeland Security ordered the termination of current memberships on all of the department's advisory committees, including the Cyber Safety Review Board (CSRB). Created in 2022 under the Biden administration, the CSRB launched an investigation late last year into the hacking of nine U.S. telecom firms attributed to Salt Typhoon, a China-backed group.
The disruption to this timely investigation caused by removing knowledgeable members, along with the uncertain future of the CSRB itself, underscores the very real risks of injecting uncertainty and change into critical cybersecurity strategy.
Security and Privacy Support Innovation
There are two competing views of the new administration’s changes. Some believe that less regulation will boost U.S. innovation in technology, and notably in AI. In other words, lighter enforcement of safe, responsible AI will benefit businesses. This view celebrates a move away from a “fear-based” focus on risk and prevention that arguably characterized the approach to AI of the previous administration as well as of the EU. In this view, coupling the Stargate announcement with unwinding regulations makes sense.
Others argue the opposite: that regulatory uncertainty and less federal oversight will increase risk and slow innovation. In other words, big investments like Stargate should be intertwined with actions that unite major tech players around responsible AI. In this view, lack of regulation will leave both businesses and consumers concerned about the risks and costs of AI. Notably, cybersecurity risks, including data breaches, are already the top business concern in the U.S. and worldwide and have been for several years. As discussed, cybersecurity is also increasingly inextricable from concerns about national security.
With the number of AI incidents continuing to rise (according to the AI Incident Database, 2023 saw a 32% increase over 2022), new technology development must go hand in hand with prioritizing security and privacy. At Lorica, we take the view that strong cybersecurity and responsible AI development and deployment are the cornerstones of innovation. Industry leaders are already moving in this direction, with Apple recently announcing that it is combining machine learning and homomorphic encryption for improved customer privacy. The growing use of privacy-enhancing technologies is not just about meeting regulatory requirements, but about delivering the privacy and security that customers, clients, and partners expect while mitigating the risks of data exposure events.
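To make the idea of computing on encrypted data concrete, here is a minimal sketch using the Paillier cryptosystem, an additively homomorphic scheme. This is an illustration only, not the scheme Apple uses, and the tiny key sizes below are wildly insecure; real deployments rely on vetted libraries with production-grade parameters.

```python
# Toy Paillier cryptosystem: an additively homomorphic scheme where
# multiplying two ciphertexts yields an encryption of the plaintext sum.
import math
import secrets

def keygen(p, q):
    """Generate a toy Paillier key pair from two primes (demo sizes only)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)           # modular inverse of lambda mod n
    return (n,), (n, lam, mu)      # public key, private key

def encrypt(pub, m):
    (n,) = pub
    n2 = n * n
    # a random r coprime to n masks the plaintext
    while True:
        r = secrets.randbelow(n - 1) + 1
        if math.gcd(r, n) == 1:
            break
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    n, lam, mu = priv
    n2 = n * n
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n

pub, priv = keygen(61, 53)         # tiny primes for illustration only
c1, c2 = encrypt(pub, 12), encrypt(pub, 30)
# multiplying ciphertexts adds the underlying plaintexts
c_sum = (c1 * c2) % (pub[0] ** 2)
print(decrypt(priv, c_sum))        # 42
```

Because combining Paillier ciphertexts adds their plaintexts without decryption, a server can aggregate encrypted values, such as counts or model updates, without ever seeing the underlying data, which is the core property that makes privacy-preserving machine learning possible.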