Global Coalition Vows to Make AI Systems Safe & Secure

The United States, the United Kingdom, and 16 other nations commit to ensuring artificial intelligence is developed with security as a priority.

Apple Generative Siri | AppleMagazine Cover Illustration.

In a groundbreaking move, the United States, the United Kingdom, and 16 other countries have collectively pledged to adopt measures ensuring that artificial intelligence (AI) is “secure by design.” This agreement, although primarily a set of basic principles, marks a significant step toward international cooperation in safeguarding AI against potential misuse.

The commitment was formalized through a 20-page document, highlighting the consensus that AI systems must be designed and implemented with a focus on public safety and protection against abuse.

Director of the US Cybersecurity and Infrastructure Security Agency (CISA), Jen Easterly, emphasized the shift in priorities, stating, “This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs.” Easterly’s remarks to Reuters further underscored the agreement’s focus on embedding security in the AI design phase.

Joining the US and UK in this initiative are nations like Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore. Europe, already ahead in this domain, has been working towards specific laws governing AI development, including mandatory security testing for vulnerabilities.

Despite slower progress on formal legislation, France, Germany, and Italy have forged an interim agreement on AI regulation to maintain momentum.

The White House has been vocal about the need for AI regulation in the US, with President Biden recently mandating safety tests for AI companies, primarily to fortify systems against hacking threats. This move aligns with ongoing global efforts to integrate security considerations into AI development from the outset.

Interestingly, Apple, known for its gradual and cautious adoption of new technologies, has long incorporated AI into its products, notably in iPhone photography. The tech giant’s internal development of an AI chatbot, referred to informally as “Apple GPT”, mirrors the broader industry trend of leveraging generative AI for software development while being mindful of security implications.

This international agreement sets the stage for a more secure and responsible AI future, recognizing the importance of security in the rapidly evolving AI landscape.


It signals a collective acknowledgment that while AI development is exciting and promising, it must be balanced with rigorous security measures to protect public safety and privacy.

About the Author

News content on AppleMagazine is produced by our editorial team and complements the more in-depth editorials you’ll find in our weekly publication. AppleMagazine provides a comprehensive daily reading experience, offering a wide view of the consumer technology landscape to ensure you’re always in the know. Check back every weekday for more.

Editorial Team | Masthead – AppleMagazine Digital Publication