Artificial Intelligence as a Tool of National Security Resilience: Evidence from Singapore
https://doi.org/10.58732/2958-7212-2025-4-64-81
Abstract
In the context of accelerated digitalization and the expanding use of artificial intelligence (hereinafter, AI) in critical sectors, the importance of forming effective AI governance models focused on ensuring the resilience of national security is increasing. The purpose of this study is to analyze Singapore's risk-based approach to AI governance and to assess its contribution to strengthening national security resilience in the period 2020-2025. The methodological basis of the study is a qualitative analysis of regulatory and strategic documents, comparative institutional analysis, and thematic coding of AI governance instruments from the perspective of risk-based regulation theory. The results show that Singapore's AI governance model rests on a combination of "soft" regulation, technical verification, and cross-sectoral collaboration, which minimizes the risks associated with cyber threats, vulnerability of critical infrastructure, and erosion of public trust, without constraining innovation. In 2023-2024, AI adoption among small and medium-sized enterprises more than tripled, while adoption among large companies rose by more than 18 percentage points. The share of employees using AI tools in their professional activities reached almost 74%, indicating deep integration of AI into socio-economic processes. The practical significance of the work lies in the possibility of adapting the Singapore model when developing national AI governance systems in countries with a high degree of digitalization, including Kazakhstan.
About the Authors
A. Azatbekova
Kazakhstan
Aigerim Azatbekova, Bachelor
Almaty, Kazakhstan
Z. Serikbayeva
Kazakhstan
Zere Serikbayeva, Bachelor
Almaty, Kazakhstan
N. Nyshanbayev
Kazakhstan
Nurbolat Nyshanbayev, PhD, Associate Professor
Almaty, Kazakhstan
For citations:
Azatbekova A., Serikbayeva Z., Nyshanbayev N. Artificial Intelligence as a Tool of National Security Resilience: Evidence from Singapore. Qainar Journal of Social Science. 2025;4(4):64-81. https://doi.org/10.58732/2958-7212-2025-4-64-81