Welcome to MEXA

Innovating Mental Health with AI

MEXA unites researchers, healthcare professionals, developers, and those with lived mental health experiences. Together, we use AI to solve mental health challenges through collaboration and events like hackathons, seminars, and more. Read more here.

Learn about how we integrate lived experience experts at MEXA.

Funded by Wellcome.

April 10, 2026

Building Before the Breakdown: How Grief Became the Blueprint for FriendnPal


Esther Eruchie shares how personal loss shaped her mission to build predictive, culturally relevant mental health infrastructure across Africa.

I'm Esther Eruchie, founder of FriendnPal, a predictive, AI-powered mental health platform expanding access to early, culturally relevant care across Africa and emerging markets. My work sits at the intersection of technology, mental health, and social impact, but my journey into innovation did not begin in a lab or a boardroom. It began with loss.

A Personal Mission Rooted in Loss

I had a brother on the autism spectrum. We grew up in an environment where developmental support systems were limited and deeply stigmatized. My family navigated his condition largely alone, in a society where neurodiversity was misunderstood and where stigma often replaced support. I watched my parents carry the invisible weight of care in a system that offered very little infrastructure for families like ours. He passed away in his early twenties a few years ago.

Just three months later, my mother, overwhelmed by grief, was diagnosed with depression and also passed away. Those losses changed the course of my life. They left me with a question that continues to define my work: why do our health systems wait until people are already in crisis before responding?

That question became my mission: to build systems that don't wait for breakdown before offering support.

An Unconventional Path to Building Technology

My academic background is in International Relations and International Law and Diplomacy, which shaped how I understand systems, governance, and global inequalities. I do not have a technical background, but I taught myself to build because I couldn't ignore the gap I had experienced. Living with ADHD has also shaped how I think — what once felt like a limitation has become an advantage, allowing me to see patterns across disciplines and persist in solving complex problems.

What FriendnPal Is Building

Today, through FriendnPal, we are building a predictive mental health infrastructure that leverages AI, community-based care models, and low-bandwidth technologies like WhatsApp to make support accessible, stigma-free, and continuous. Our system analyzes multimodal signals — voice patterns, text sentiment, and behavioral indicators — combined with validated clinical frameworks to generate real-time risk scores and enable early intervention. When needed, users are seamlessly triaged to human care, ensuring that technology enhances, rather than replaces, clinical support.
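As a rough illustration of this kind of pipeline (not FriendnPal's actual implementation), a multimodal risk score might combine per-channel signals into a weighted score with a triage threshold. The channel names, weights, and threshold below are hypothetical assumptions for the sketch:

```python
# Hypothetical sketch of a multimodal risk score with human triage.
# Channel names, weights, and the threshold are illustrative assumptions,
# not FriendnPal's actual model.

TRIAGE_THRESHOLD = 0.7  # above this, route the user to a human clinician

# Illustrative weights for each signal channel (sum to 1.0)
CHANNEL_WEIGHTS = {
    "voice": 0.3,      # e.g. prosody-derived distress score in [0, 1]
    "text": 0.5,       # e.g. sentiment/keyword-derived score in [0, 1]
    "behavior": 0.2,   # e.g. usage-pattern anomaly score in [0, 1]
}

def risk_score(signals: dict) -> float:
    """Weighted average of per-channel scores; missing channels count as 0."""
    return sum(CHANNEL_WEIGHTS[ch] * signals.get(ch, 0.0) for ch in CHANNEL_WEIGHTS)

def triage(signals: dict) -> str:
    """Return 'human' when the combined risk crosses the threshold."""
    return "human" if risk_score(signals) >= TRIAGE_THRESHOLD else "self_guided"

# Example: high text distress with moderate voice signal triggers human care
print(triage({"voice": 0.6, "text": 0.9, "behavior": 0.4}))  # -> human
```

The key design point the sketch captures is that the model's output is a routing decision to human care, not a substitute for it.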

Contributing to the Broader Conversation

Beyond building technology, I actively contribute to global conversations on ethical AI and health equity. I am a member of HealthAI – Global Agency for Responsible AI in Health, the Healthcare Federation of Nigeria, and the Coalition for Scaling Mental Health, where I engage in policy and research discussions on responsible and inclusive AI systems. My academic work includes publications on digital mental health platforms in Africa, alongside earlier research on human trafficking in West Africa, exploring systemic responses across Nigeria, Ghana, and Libya.

At the community level, I have facilitated over 50 workshops on gender-based violence prevention, substance abuse awareness, sexual and reproductive health education, and digital inclusion for marginalized populations. I've worked directly with trafficking survivors, displaced youth, women, and underserved communities, translating evidence-based frameworks into scalable, real-world interventions.

I'm deeply committed to building systems that are not only innovative, but equitable — ensuring that the future of AI and healthcare reflects the realities of the people it is meant to serve.

How Esther Found MEXA

I joined MEXA while participating in one of the Wellcome Trust Research Fund calls, as I was looking for researchers and a community that truly understood the intersection of AI, ethics, and real-world impact, especially in underrepresented regions.

What stood out immediately was the diversity of perspectives and the openness of the community. Being part of MEXA has exposed me to thoughtful dialogue, global insights, and a network of people committed to building responsible and inclusive AI systems. It has also been a space for reflection, challenging how I approach fairness, bias, and accountability in the systems we are building at FriendnPal, particularly as we work with sensitive mental health data.


Why You Should Get Involved

I would say MEXA is more than a network for me — it is a community of people who care deeply about how AI is built and who it serves. If you are working in AI or even just curious about its impact, especially from a global or underrepresented perspective, MEXA gives you a space to learn, share, and connect with people who are asking the right questions.

It is particularly valuable if you are building or working in contexts where the "default" AI solutions don't quite fit. You get to engage with others who understand those gaps and are actively working to address them!

Esther Eruchie is the founder of FriendnPal, an AI-powered predictive mental health platform expanding access to early, culturally relevant care across Africa and emerging markets. Driven by personal loss and a commitment to health equity, she builds systems that intervene before crisis — not after. MEXA has given her a space to deepen her thinking on ethical and inclusive AI, connecting her with a global community asking the right questions about who technology serves and how.


This blog reflects the perspective of the author and does not constitute an endorsement by MEXA. We’re always looking for thoughtful, engaging voices to contribute to the MEXA blog! If you have insights to share at the intersection of mental health and artificial intelligence, we’d love to hear from you: Submit your blog here.

January 22, 2026

MEXA Accelerator: Advancing Generative AI for Mental Health Through Global Collaboration

MEXA Teams

Accelerator Overview and Purpose

In August 2025, MEXA launched its first research Accelerator, a four-month program designed to support interdisciplinary teams exploring the potential of generative AI in mental health research. Built on the foundations of MEXA’s vibrant global network of over 700 members spanning 80 countries, the Accelerator brought together AI researchers, clinicians, ethicists, technologists, and lived-experience experts to co-create foundational research and de-risk proposals for an exclusive £3 million Wellcome Trust funding call focused on generative AI for anxiety, depression, and psychosis.

Forty teams representing 16 countries across six continents were selected to embark on this journey. Eight projects were led by teams from low- and middle-income countries (LMICs), reflecting MEXA’s commitment to inclusivity and global equity.

Core Workshops and Learning Series

The program kicked off on August 7 with 140 participants in a vibrant virtual meeting, introducing teams to the schedule, expectations, and collaborative opportunities. From the outset, the Accelerator emphasized hands-on learning, mentorship, and co-creation. Over the following months, teams participated in a series of interactive workshops:

  • Pilot Studies Workshop (21 August, 130 participants): Led by Stephen Schueller (UC Irvine) and Qian Yang (Cornell University), this session helped teams design pilot experiments and prioritize data collection strategies, laying the groundwork for methodologically robust projects.

  • Lived Experience Workshop (4 September, 120 participants): Featured Rachel Wurzman (NeuroLivd), Jonathan Nelson (Pulverize the Stigma), Joy Muhia (LSHTM), and Shuranjeet Singh Takhar (Wellcome), who guided teams on authentic and ethical integration of lived experience into research design.

  • Project, Code, and Data Management Workshop (15 September, 90 participants): Christopher Chambers (Cardiff University), Sara Villa (RCM Cooperative, OLS, DSxHE), and Sarah Gibson (The Turing Way) focused on registered reports, open science, collaborative tools, and reproducibility.

  • Ethics Workshop (2 October, 100 participants): Ethics experts David Leslie (Alan Turing Institute), Alex John London (Carnegie Mellon University), Agata Ferretti (IBM Research), and Claudia Corradi (Nuffield Council on Bioethics) helped teams identify ethical considerations, refine strategies, and ensure research adhered to best practices for privacy, fairness, and participant safety.

Strengthening Final Proposals for Wellcome

To support project development, the Accelerator provided pilot seed funding, enabling teams to begin experimental work, refine methods, engage people with lived experience, develop ethics support, and prepare high-quality proposals. Partnerships with Google and Gooey.AI allowed teams to access technical guidance and industry insights to strengthen their research and de-risk experimental approaches.

An innovative draft submission and feedback process further enhanced proposal quality. Teams received detailed, constructive feedback from interdisciplinary experts that helped clarify scientific aims, strengthen methods, and integrate ethical and lived-experience perspectives ahead of final submission.

In-Person Forum at the Wellcome Trust

The Accelerator culminated in a two-day in-person forum at the Wellcome Trust in London (3–4 November), attended by ~100 delegates. This event showcased team progress, facilitated knowledge exchange, and reinforced the global MEXA community.


Participants engaged in clinic-style workshops on lived experience integration, regulatory pathways, ethics, and technical feasibility, led by experts from academia, industry, and funders. Team presentations, poster sessions, and interactive panels highlighted the diversity of approaches and the depth of co-produced research. A keynote by Jackie Hunter inspired teams to think boldly about the future of responsible AI in mental health.

Participant feedback consistently highlighted the value of mentorship, structured workshops, and the collaborative environment. Teams reported that the Accelerator experience strengthened their proposals and enhanced their understanding of interdisciplinary research. Key areas for continued support include ethics, lived-experience integration, translation, regulatory pathways, and commercialization.

Looking Ahead: A Lasting Global Research Community

As the Accelerator concludes, teams have submitted their final proposals to the Wellcome Trust. Beyond funding outcomes, the program has fostered a durable, global, and ethically grounded community poised to shape the future of generative AI in mental health.

The MEXA Accelerator demonstrates what is possible when diverse expertise—from AI researchers to lived-experience experts—is brought together with purpose, resources, and mentorship. By enabling collaboration across continents, disciplines, and perspectives, it has established a robust pipeline of high-impact research and a network poised to transform the AI and mental health landscape for years to come.

A heartfelt thank you goes to Neuromatch (MEXA’s owners!), Wellcome, our partners, and the participating teams for their openness and enthusiasm. Your support and contributions made this transformative program possible. The momentum built by the Accelerator promises continued innovation, collaboration, and impact at the intersection of AI and mental health.
Joana Guedes, Science Program Manager

This blog reflects the perspective of the author and does not constitute an endorsement by MEXA. We’re always looking for thoughtful, engaging voices to contribute to the MEXA blog! If you have insights to share at the intersection of mental health and artificial intelligence, we’d love to hear from you: Submit your blog here.

November 29, 2025

Bridging Mental Health Gaps in Africa Through Culturally Contextual AI

Harnessing AI to provide culturally sensitive, accessible mental health support across Africa.


Where It All Began

Growing up in Kenya, I saw how conversations around mental health were often silenced or misunderstood. Many people in my community viewed emotional struggles as weakness, spiritual battles, or private matters that shouldn’t be discussed. Professional therapy was inaccessible to most, too expensive, too far, or simply not available. These realities planted a question in my mind: how could technology make mental health support more accessible, relatable, and safe for everyone?

That question became the foundation for CogniXpert-AI, an AI-powered platform designed to provide culturally contextual, empathetic mental health guidance. It reflects our mission to reimagine how Africans can access mental health support.

Building CogniXpert-AI: Technology with a Human Heart

CogniXpert-AI is more than just a chatbot. It is a digital companion that listens, understands, and responds with empathy. Using natural language processing and evidence-based frameworks like cognitive behavioral therapy, it offers personalized, stigma-free mental health conversations.

Our goal is to make mental health care accessible, affordable, and culturally relevant. Every feature we design, from mindfulness exercises and well-being assessments to journaling tools, reflects local realities and user feedback. These tools help users pause, reflect, and take small, meaningful steps toward emotional well-being.

We believe empathy should be engineered, not as a replacement for human therapists, but as a bridge that connects people to care when they need it most.

Ethics and Empathy in AI Mental Health Tools

Working in mental health AI requires deep responsibility. Unlike typical tech products, the stakes are human emotion, trust, and safety. That is why our guiding principles are safety, transparency, and cultural respect.

Safety means the AI recognizes distress signals and connects users to local helplines rather than attempting to act as a human therapist. Transparency ensures users always know they are interacting with AI. Cultural respect shapes how the system interprets language, emotions, and values, ensuring responses are compassionate and relevant.
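As a hedged illustration of the safety principle described above (not CogniXpert-AI's actual code), a conversational system might scan messages for distress indicators and respond with a local helpline rather than attempting to act as a therapist. The keyword list and helpline entries below are placeholders:

```python
# Illustrative sketch of a distress-escalation check. The keywords and the
# helpline directory are placeholders, not CogniXpert-AI's actual logic;
# a production system would use a trained classifier and verified numbers.

DISTRESS_KEYWORDS = {"hopeless", "can't go on", "hurt myself", "end it all"}

# Placeholder helpline directory keyed by country code
HELPLINES = {
    "KE": "LOCAL_HELPLINE_PLACEHOLDER_KE",
    "DEFAULT": "Please contact your local emergency services.",
}

def detect_distress(message: str) -> bool:
    """True if the message contains any distress indicator (case-insensitive)."""
    text = message.lower()
    return any(kw in text for kw in DISTRESS_KEYWORDS)

def respond(message: str, country: str = "DEFAULT") -> str:
    """Escalate to a helpline on distress; otherwise continue the conversation."""
    if detect_distress(message):
        return HELPLINES.get(country, HELPLINES["DEFAULT"])
    return "CONTINUE_CONVERSATION"
```

The design choice the sketch makes explicit is that escalation is a hard rule layered on top of the conversational model, not something left to the model's judgment.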

For example, emotional expressions can vary widely across cultures. What might sound casual in one language could carry deep meaning in another. By training our models on localized data and feedback, we make sure the system listens with cultural sensitivity, an essential part of ethical AI.

Why Cultural Context Matters

Mental health is personal, and cultural context shapes how people experience, describe, and cope with emotional distress. A one-size-fits-all digital tool designed for Western contexts often misses this nuance.

That is why we prioritize culturally contextual design. We work closely with psychologists, researchers, and community leaders to ensure our tools feel familiar, not foreign. This helps users express themselves authentically, reduces stigma, and fosters a sense of belonging in digital care environments.

Our mindfulness toolkit, mood journals, and self-assessment modules are designed with this philosophy in mind: to meet users where they are and help them feel seen, heard, and understood.

Looking Ahead

For us, building AI for mental health is not about replacing human care. It is about extending it. We envision a future where someone in a rural village or urban neighborhood can access compassionate support instantly, without judgment or barriers.

By combining AI innovation with cultural empathy, we aim to make that future real. Technology should heal, not alienate; listen, not dictate. Every conversation our tools facilitate brings us closer to a world where mental health care is inclusive, ethical, and human-centered.

MEXA Community

Eugene Gitonga Muiru is co-founder and CEO of CogniX LTD, building AI-powered tools that provide culturally sensitive and accessible mental health support across Africa. MEXA has given him a platform to engage with a global community reimagining mental health through collaboration and innovation. It aligns with his passion for using AI to expand access to inclusive and culturally aware mental health care.

This blog reflects the perspective of the author and does not constitute an endorsement by MEXA. We’re always looking for thoughtful, engaging voices to contribute to the MEXA blog! If you have insights to share at the intersection of mental health and artificial intelligence, we’d love to hear from you: Submit your blog here.

October 11, 2025

Free Academic Poster Template for Mental Health x AI Researchers

Improve visibility, clarity, and accessibility for your research with this research-backed template


Poster sessions are a staple of academic conferences. Many of us know the feeling of walking into a poster hall and seeing rows of dense, text-heavy posters. They are often hard to read and even harder to remember. It can be hard for presenters too, especially if they are working in a second language or have limited design experience.

That is why MEXA is excited to share an adaptable poster template that makes scientific posters clearer, more accessible, and more effective. This template builds on the work of the #BetterPoster movement and can be customized with your institution’s branding or used as-is with the MEXA branding.

Forty global teams in the MEXA Accelerator will be using the poster template to present their work at an in-person event. The Accelerator is a four-month program advancing generative AI research for mental health in partnership with Neuromatch and funded by the Wellcome Trust. By sharing this poster template more broadly, we hope researchers across the MEXA community can also benefit.

Why rethink posters?

Mike Morrison, a psychologist and the creator of the #BetterPoster movement, saw posters as a bottleneck in science communication and contributed to the MEXA template. He explained, “Traditional posters often assume attendees will quietly read for ten minutes in a noisy, crowded hall. In reality, most people walk away overloaded by text and miss key findings. Posters should communicate the main point quickly and clearly, while inviting deeper discussion. That way, attendees can grasp the essential message at a glance and still explore the details if they want to learn more.”


The poster template uses simple language, clean layouts, and designs that encourage conversation. It was developed based on research including surveys of conference attendees, how people move through poster halls, and feedback from researchers who used the template. This approach helped identify what makes posters easy to read, engaging, and memorable.

Posters for global impact

Poster sessions play a unique role in science communication. They create space for conversations, discoveries, and networking across fields. By making posters easier to read and understand, we lower barriers to participation. More people can share their ideas and learn from each other.

This is especially important in global programs like the MEXA Accelerator. Researchers come from many cultures, disciplines, and languages. The poster template helps ensure ideas are communicated clearly, no matter who is presenting or viewing.

Bringing it into practice

Rieke Schäfer, a volunteer with Climatematch Academy and Neuromatch’s Impact Scholars Program, has helped work on this template and sees its value especially for early-career researchers. “For many of us, conferences can feel intimidating, especially when presenting research for the first time. This poster template lets researchers present confidently and ensures science is accessible to everyone, no matter their background or language. It also supports inclusivity for those new to a field, reading or presenting in a second language, or anyone who finds traditional posters overwhelming.”


Rieke also notes that the template is flexible. It is a research-based starting point that anyone can adapt to their needs. While the posters come with MEXA branding, you can easily change the colors, swap in your own logos, and adjust the layout to fit your institution, project, or personal style. This makes it simple to create a poster that works for any conference or audience while still following evidence-based design principles.


Download the template and start creating your poster today!

We would love to see how you use the template. Share your poster with us by tagging MEXA on LinkedIn or X, or submit a story to the blog.

If you are interested in this topic or want to get involved, visit ScienceUX. They publish and curate research, free resources, and tools to help scientists speed up their work through better, evidence-based UX design.



Copyright 2024-2025 MEXA Mental Health X AI