
By John Singarayar
ARTIFICIAL intelligence is transforming our world at dizzying speed, carrying both extraordinary promise and serious peril for the marginalized among us.
From helping farmers in drought-stricken regions optimize scarce water to enabling underserved clinics to diagnose diseases more accurately, AI offers genuine hope for advancing social justice.
Yet as the late Pope Francis observed, technology’s ultimate impact depends not merely on sophisticated algorithms but on the hearts and intentions of those who create them.
The Catholic Church urges us to wield this powerful tool for the common good, ensuring it elevates human dignity rather than diminishing it.
Picture a smallholder farmer in Latin America using AI to make every drop of water count during severe drought, or a low-income family accessing personalized education apps that bridge learning gaps no single teacher could fill alone.
These are not far-off fantasies. In regions scarred by deep inequality, AI can work alongside local knowledge to reshape economies, healthcare systems, and even political participation, nurturing both dignity and solidarity.
The US bishops emphasize how AI could streamline healthcare delivery and personalize learning, giving everyone — each person created in God’s image — better access to the fundamentals of human flourishing, such as adequate food, shelter, and medical care.
Even in environmental policy, AI could help balance what Pope Francis called the “cry of the earth and the cry of the poor” by optimizing resources more sustainably.
Pope Leo XIV poses the essential question: How do we ensure AI serves all people, not just the powerful, by keeping at the center what it truly means to be human today?
But promises without proper safeguards breed real danger. Algorithms can easily perpetuate existing biases, denying loans to minorities or jobs to those without formal credentials, widening the very chasms the Gospel calls us to bridge.
Deepfakes undermine truth itself, manipulating elections and sowing discord in what Pope Francis described as a growing crisis of trust.
In warfare, autonomous weapons threaten innocent lives by removing the human judgment essential for mercy and proportionality. Workers face displacement from automation, losing the dignity that comes from meaningful labor.
The Rome Call for AI Ethics insists that these systems must include everyone, discriminate against no one, and genuinely empower vulnerable communities rather than exploiting them. Without transparency and robust human oversight, AI risks turning our call to fraternity into deeper fragmentation.
The Church charts a clear path forward through its ethics-by-design approach. AI must actively protect human rights, explain its decisions in understandable terms, and prioritize peace and human welfare over mere profit.
Developers, users, and regulators all share responsibility for building what the Church terms “algor-ethical” frameworks that safeguard privacy, promote equity, and protect our common home.
Pope Leo XIV’s message to the AI for Good Summit stresses the need for coordinated global governance rooted firmly in human dignity, fostering what he calls “tranquillitas ordinis” — the tranquility of good order — necessary for truly just societies. Education reform around these principles ensures no one is left behind, from young people to elders, from people with disabilities to residents of remote areas.
This is not naive optimism speaking. It is a call to careful discernment. AI excels at processing vast amounts of data and identifying patterns, but it cannot replicate a moral conscience or nurture genuine human relationships.
Families, which form the heart of any healthy society, need protection from isolating technologies and moral harms like virtual exploitation. Policies demanding real accountability — requiring human oversight for consequential decisions, protecting workers from unjust displacement, conducting regular bias audits — can transform these ideals into reality.
Real-world examples already demonstrate both AI’s potential and its pitfalls. Researchers have used predictive models to identify youth at high risk of homelessness, enabling early intervention with housing support. Healthcare tools can flag biases in patient care, helping ensure marginalized groups receive fairer treatment. AI assists human rights advocates by analyzing satellite imagery and social media to identify patterns of abuse that might otherwise go unnoticed.
These applications show how AI can amplify silenced voices and dismantle barriers in education, employment, and environmental protection.
Yet the cautionary tales are equally instructive. Facial recognition systems show higher error rates for people with darker skin, reinforcing racial profiling in policing.
Predictive algorithms in criminal justice perpetuate biases against minority communities based on flawed historical data. Hiring tools trained on resumes from male-dominated fields subtly discriminate against women. These are the results of deeper problems: biased training data, a lack of developer diversity, and deployment without adequate testing.
The solution requires diverse development teams, regular bias audits, and meaningful involvement of affected communities in design processes.
Transparency matters profoundly — understanding how AI reaches decisions builds essential accountability. Access is equally crucial; in a world where AI reshapes employment and services, ensuring everyone can benefit or opt out becomes fundamental to fairness.
In the end, social justice demands that AI bend toward serving the entire human family rather than ruling over it.
The choice remains ours: a genuine tool for justice or merely a mirror reflecting our worst flaws. By grounding AI firmly in human dignity and authentic fraternity, we can build a future where technological progress truly serves everyone. – UCA News