From Myanmar to the World: GFEAPE as a Global Transformative Framework
for Ethical AI in Peace Education
Since 2010, armed conflicts have escalated by 96% worldwide, underscoring the urgent need for effective peace education. While AI offers opportunities—such as VR empathy simulations in Colombia and multilingual MOOCs connecting Gaza and Indonesia—it also poses risks, including algorithmic bias, homogenization of peace narratives, and the spread of extremist content. In Myanmar, a country with over 70 years of civil war and a highly diverse population, the traditional education system remains centralized and culturally insensitive, excluding ethnic minorities. The Rakhine State case reveals how AI-driven hate speech on social media has intensified violence against the Rohingya, while AI educational tools risk perpetuating biases due to non-diverse datasets and the absence of ethical frameworks. This proposal introduces the Global Framework for Equitable AI in Peace Education (GFEAPE), a transformative and adaptable framework designed to address these challenges through three interconnected components:
AI Ethics and Peace Education Curriculum (Global Level): Establishes an internationally standardized curriculum for secondary and tertiary education, localized to reflect diverse histories, cultures, and languages. It incorporates bias-auditing tools (e.g., IBM’s AI Fairness 360, Google’s What-If Tool) and relies on international organizations (UNESCO, UNDP) to set certification standards, while local civil society and educators manage monitoring and evaluation.
AI for Peace Education Network (APEN) (Regional Level): Supports regional collaboration through cross-border teacher training, joint research in regional languages, and infrastructure development via Digital Solidarity Funds. APEN also creates “bias feedback loops” that empower students, teachers, and civil society to identify and report algorithmic discrimination, ensuring continuous, bottom-up system corrections.
Peace-AI Watch Units (PAWUs) (Local Level): Establishes independent, well-trained monitoring bodies to detect and review AI-generated extremist content in education and public platforms. PAWUs operate in ethnic languages (e.g., Shan, Rohingya, Karen), decentralize content regulation, and report to APEN and UNESCO for coordinated intervention, building trust and accountability through community ownership.
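To make the bias-auditing component concrete, the following is a minimal sketch of the kind of disparate-impact check that tools such as AI Fairness 360 automate at scale. All data, group labels, and the function name here are illustrative assumptions, not part of GFEAPE or any named tool's API; the 0.8 ("four-fifths") threshold is a widely used rule of thumb, not a GFEAPE standard.

```python
# Illustrative disparate-impact audit: compares the rate at which an
# AI system produces a favorable outcome (e.g., recommending peace-
# education content) for a privileged vs. an unprivileged group.
# All data below is fabricated for demonstration.

def disparate_impact(outcomes, groups, privileged, favorable=1):
    """Ratio of favorable-outcome rates: unprivileged / privileged.
    Ratios below ~0.8 are commonly flagged as potential bias."""
    fav = {True: 0, False: 0}   # favorable-outcome counts per group
    tot = {True: 0, False: 0}   # total counts per group
    for y, g in zip(outcomes, groups):
        key = (g == privileged)
        tot[key] += 1
        fav[key] += (y == favorable)
    priv_rate = fav[True] / tot[True]
    unpriv_rate = fav[False] / tot[False]
    return unpriv_rate / priv_rate

# Hypothetical audit: was content recommended equally often to
# majority-language ("maj") and minority-language ("min") learners?
outcomes = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]   # 1 = recommended
groups   = ["maj", "maj", "maj", "maj", "maj",
            "min", "min", "min", "min", "min"]
ratio = disparate_impact(outcomes, groups, privileged="maj")
print(round(ratio, 2))  # 0.8 rate vs. 0.2 rate -> ratio 0.25, flagged
```

A "bias feedback loop" of the kind APEN describes would route such flagged ratios, together with student and teacher reports, back to system operators for correction.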
