The relationship between technology and its purpose has always been on my mind; as a software engineer, I care about the users of the technologies we build. Recently, however, I find myself drawn more and more into conversations about technology, human values, and impact on users and society. The more I think about it, the more I want to be part of both the conversation and the advances we can make in society with AI technologies for social good. So let’s do it right.
In the last few years I have had the privilege of leading research and tech innovation tackling challenges faced by communities: homeless women's safety, healthcare in remote rural villages, and Indigenous language and land stewardship along the Canadian Arctic shore. The insights were not unexpected: technologies and values are intimately connected, and no technology is value-neutral (note 1). Technologies embed values understood by developers and negotiated by the communities subject to their impact: human dignity, cultural traditions, or collective decision-making. The experience, however, left a strong impression on me: how we were able to design the technologies to align with these values, and how essential this alignment was to the success and sustainability of these solutions for these communities. These are the lessons I draw as a software engineer engaged in a world-making activity, about the level of responsibility we carry in the process of technical innovation and design, and as an educator mentoring future technologists. I will be speaking about "Human Connection: The Essential Compass to Innovating Technologies that Matter" at the 27th International Conference on Agile Software Development in a few days. Please join the conversation if you are attending!
I posit that the technology-value nexus should be central to how we think about the future digital society. What makes it crucial, and yet intriguing and challenging, is the very nature of the concept of value: values are most often local, reflecting meaning attributed within a community through shared experiences, religion, or cultural traditions, yet they can have society-wide or even global reach and consequences. This leads to an extremely difficult tension: whose values are we, as technologists, responsible for identifying and considering in and through technology design?
I know that how we will (or should) interact with GenAI systems is on everyone’s mind these days. Preserving human agency in value-driven technology design and deployment is on mine. GenAI's rapid advancement makes the conversation about technology-value alignment more crucial than ever. My lived experience in these projects makes me advocate for nothing less than participatory approaches that include and uplift the voices of those impacted by technologies, during both design and long-term adaptation and use. These voices best reflect human and societal values, and the people behind them can (should!) become value guardians, contextualizers, and co-reflectors on technology-value alignment through participatory co-design and long-term use.
Note 1: Gabriel, I. and Ghazavi, V. "The Challenge of Value Alignment: From Fairer Algorithms to AI Safety." The Oxford Handbook of Digital Ethics.