AI conversational agents (CAs) are increasingly used for counseling, coaching, and companionship, despite a limited evidence base, owing to disciplinary silos, short-term studies, and a narrow focus on subjective well-being. Worryingly, by privileging short-term gratification, current AI CAs risk fostering moral atrophy and undermining the development of the character virtues (CVs) essential for human flourishing.
We conjecture that the short-term subjective well-being (e.g., engagement, amusement) that nearly all extant AI CAs are trained to enhance may not translate into CVs, because these agents neglect CV-inducing emotions (achievement-oriented, e.g., pride, or other-directed, e.g., gratitude), overlook the philosophical underpinnings of long-term, collective-focused goods (vs. immediate, individual ones), and do not overcome the short-horizon limitations of current AI architectures that cultivating CVs requires.
This is an interdisciplinary problem, and significant questions remain: (1) conceptually and practically, how to design and deploy AI CAs that are psychologically and ethically grounded to promote CVs; and (2) empirically, how to evaluate how effective even the best-designed AI CAs are at promoting CVs over long-term engagement.
Our project will develop novel interdisciplinary, integrative frameworks, informed by expert focus groups across multiple disciplines, to propose several technical approaches for designing AI CAs that optimize for CVs. It will also entail rigorous empirical testing, including experimental and longitudinal studies with behavioral assessments (e.g., phone use, videos), to obtain evidence of long-term effectiveness. Expected outputs include multiple papers and talks, open-source virtue-centered AI CAs, datasets, websites, and videos showcasing key aspects of the agents' creation and effectiveness. If funded, our work will serve as a basis to guide future research on AI CAs for enhancing CVs, the adoption of AI CAs in diverse contexts, and public policy on AI use for CVs and long-term human flourishing.