
The rapid global evolution of AI—from OpenAI’s release of GPT-4.5 to China’s development of models like DeepSeek, Baichuan, and others (China’s “AI Tigers”)—highlights a critical challenge for U.S. AI policy. With China’s AI industry valued at $70 billion by Chinese sources in 2023 and global private AI investment surpassing $150 billion last year, this is not just a race for technological dominance but a fight to secure America’s economic security and geopolitical influence. Yet two crucial weaknesses hinder America’s ability to lead: alarmingly low AI literacy (the ability to recognize, understand, and interact effectively with AI systems) and the lack of systematic mechanisms for learning from AI failures (incident reporting). This combination leaves policymakers reacting to headlines instead of proactively shaping the future of AI.

We should expect a competitive back-and-forth between the global powers while a massively transformative technology develops and achieves widespread adoption—a process that historically has moved unexpectedly slowly. Lessons from the adoption of electricity and commercial aviation can inform the effective integration of AI and help realize its potential to detect diseases earlier, enable personalized education, and supercharge productivity.

Just as most Americans cannot imagine life without electricity or air travel, interacting with AI will be a constant in more and more aspects of our daily lives. These powerful technologies come with inherent risks. Like early aviation, today’s AI challenges—from deepfake scams to fraudulent impersonations to AI-generated illicit images of teenage girls—require smart, practical measures to reassure the public and protect it from tangible harm.

AI leadership

Companies racing to deploy AI face a critical economic calculation: How much to invest in safety and governance beyond basic compliance testing. Those who gamble on minimizing these investments face potential financial consequences when systems fail, including market devaluation, litigation costs, and damaged consumer trust that can take years to rebuild. As airlines discovered, safety is a business imperative—but implementation requires proactive efforts and nuanced thinking. 

The Trump administration’s first term demonstrated commitment to American AI leadership through executive orders in 2019 and 2020. New executive orders and mandates indicate an interest in continuing this leadership emphasis. Maintaining U.S. technological leadership and public trust requires skilled workers, engaged consumers, and smart governance that enables rather than impedes progress.

Nearly three million Americans fly with confidence daily, even in the wake of recent near misses and tragedies. Most passengers don’t fully understand how planes operate technically, the mechanics of safety systems, or how accidents are investigated. Yet they trust the aviation industry enough to board flights and send family members on them. Arguably, this confidence was built on a history of demonstrated commitment to safety improvements, whether self-motivated or made under pressure from litigation and civil aviation authorities. The first powered flight was in 1903, but it took two decades for commercial flights to scale, and half a century to build today’s foundation of safety measures and technological standards. In 2022, the aviation industry contributed a significant $1.8 trillion to the U.S. economy—yet AI, as a general-purpose technology, offers the potential for even more dramatic growth.

Like electricity in the 20th century, AI is set to transform every sector. Economic forecasts for AI’s impact vary widely, ranging from a 2018 MIT study by the Nobel laureate economist Daron Acemoglu, which projected a 0.55% to 1.56% GDP increase over 10 years, to Goldman Sachs’ prediction of a potential 7% rise. IDC offers a more recent, optimistic forecast of AI adding $4.9 trillion to the global economy in 2030 alone, representing 3.5% of projected global GDP. While caution and a sober accounting of the full spectrum of expert opinion are prudent with any long-range forecast, AI’s potential impact is too large to ignore. America must not merely participate in this economic revolution; it must lead it, setting the standards and reaping the greatest rewards, both as a nation and for its people.

Today, while imperfect, airlines operate under a system of information sharing that fosters a culture of continuous improvement and safety. Well-crafted governance authorities serve as guardrails rather than roadblocks, setting clear expectations for industry performance while ensuring the public can expect accountability and technological progress. 

Balanced AI regulation

Just as aircraft carry them, America needs a “flight data recorder,” or black box, for AI systems—capturing important information when things go wrong so we can learn from incidents and prevent future ones. This infrastructure turns failures into industry-wide improvements rather than isolated problems. The need is not unique to aviation—it is standard practice in fields involving consumer and patient safety, such as after-action and mortality reports in hospitals. This Congress and administration have both an opportunity and a time-sensitive imperative: We can chart a distinctly American approach to AI governance that balances implementation and innovation.

We propose two key steps: First, launch a national initiative for AI literacy to help Americans understand how these technologies affect their daily lives so they can confidently participate in the AI-powered economy. Second, establish incident reporting mechanisms that enable systematic learning about AI risks and effective mitigation strategies.

The need for AI literacy spans political and demographic lines. Countries with AI-literate populations will have significant competitive advantages in the global economy, with workers who can leverage these tools to drive productivity gains that outpace international rivals. While currently only 30% of U.S. adults understand how AI appears in everyday life, targeted literacy programs can rapidly improve this figure. Closing this knowledge gap would transform how policymakers and citizens approach AI—moving from reactive responses or even avoidance toward proactive navigation of the underlying forces shaping global AI competition.

AI literacy is as much about economic security as it is about technological advancement. For businesses, investing in AI literacy for their workforce isn’t optional—it’s essential risk management. Companies with AI-literate employees will detect problems earlier, implement more effective safeguards, and respond more nimbly when issues arise. This workforce advantage creates resilience against the inevitable AI mishaps that all organizations will face. Simply put, AI literacy is a competitive necessity, not a luxury expense.

The first step will be to land on a practical and widely accepted definition. AI literacy does not require a technical degree, but rather the ability to recognize the presence and utility of AI systems in practical contexts, to collaborate competently with those systems, and to make informed decisions about their use—aware of both risks and opportunities. Fortunately, good AI literacy programs exist (see, for example, those compiled on the EqualAI AI Literacy Hub). As AI rapidly evolves, literacy programs should build upon existing digital education while remaining adaptable to incorporate advances in the technology. This can happen within corporate practice, via local educational institutions, or via government-supported efforts such as the Consumers LEARN AI Act (House version), which, if enacted, would provide a promising practical-knowledge framework that regularly evaluates programs’ effectiveness.

Tracking AI failures

For AI incident reporting, we can learn from the aviation field’s culture of continuous industry safety improvements through mandatory reporting requirements for high-risk incidents combined with confidential, non-punitive voluntary reporting systems. The economic case for incident reporting is compelling. Companies that systematically track AI failures—both their own and others’—develop superior products, avoid others’ costly mistakes, and build institutional knowledge that becomes a competitive advantage. Unlike many compliance costs, investments in incident detection and reporting systems directly enhance business performance and reduce operational risk.

For governments, finding the right balance is crucial. They must avoid reporting fatigue and must not deter innovation, while also ensuring meaningful incentives and oversight. Safe harbor provisions, threat intelligence sharing, and targeted tax incentives could encourage industry participation, adapted from existing public-private partnerships in cybersecurity, where companies and government collaborate to strengthen collective defenses.

Implementation of AI incident mechanisms requires coordination between state and federal authorities, alongside robust engagement with industry stakeholders—learning from successful public-private partnerships and information sharing in other sectors. This approach recognizes that safety and economic competitiveness are complementary, not competing goals.

Companies that establish robust internal incident tracking and workforce literacy programs will be better positioned for long-term market leadership and present the rare case where compliance requirements create immediate operational benefits and competitive advantages. This public-private collaborative approach ensures that safety measures evolve alongside technological capabilities, and that the government stays abreast of both risk trends and effective mitigation strategies—maintaining the delicate balance between innovation and protection.

The next four years are crucial for U.S. economic competitiveness and resilience on the world stage. By focusing on literacy, high-priority incident tracking measures, and collaboration between government and industry to address these two key issues, we can ensure AI tools help Americans soar to new heights of innovation and prosperity while the nation maintains its standing on the world’s economic stage. Nations that thoughtfully and successfully integrate AI will see considerable economic gains and advantages—and within those nations, companies that integrate safety as a core business practice will rise as enduring market leaders. American institutions must take the lead in this AI transformation, treating governance and literacy investments as crucial to our shared prosperity.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

This story was originally featured on Fortune.com