
Codesigning AI with End-Users: An AI Literacy Toolkit for Nontechnical Audiences

Based on: Journal Article (2024) | Open access

This study presents a practical, card-based toolkit designed to improve nontechnical participants’ understanding of AI concepts, empowering them to actively participate in codesign sessions for AI development.

Brief by: Dr Céline Mougenot
Research collaborators: Freya Smith, Malak Sadek, Echo Wan, Akira Ito
Industry, Innovation and Infrastructure · Quality Education · Reduced Inequality

Involving the public in AI design has the potential to make AI systems more transparent, ethical, and user-friendly. Yet limited AI knowledge among nontechnical users often leads to misunderstandings that reduce their ability to make meaningful design contributions. AI technologies today often operate as “black boxes,” with end-users unable to fully understand their workings or effects. This can lead to public misconceptions about AI’s abilities, a phenomenon sometimes called the “Superhuman Fallacy,” where AI is assumed to have capabilities it does not possess. Given AI’s transparency challenges, especially around ethical issues such as bias, privacy, and accountability, these misunderstandings can become obstacles to responsible development and informed user participation.

To bridge this knowledge gap, this research presents a practical, card-based AI literacy toolkit. It aims to introduce essential AI concepts, ethical concerns, and real-world applications to nontechnical audiences in codesign settings. Each card offers a plain-language definition, a relevant example, and “What if?” prompts that encourage critical thinking, using methods grounded in human-centred design. Tested with 50 nontechnical participants, the toolkit demonstrated significant improvements in participants’ understanding of AI topics, broadening the range of AI-related ideas and critical questions raised by over 50% and increasing relevant keyword usage by 80%. Results suggest that AI literacy tools like this one can enhance nontechnical audiences’ involvement in the AI design process, fostering a more inclusive and informed approach as AI plays an increasingly impactful role in public services and digital landscapes.
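For readers who want to prototype a similar card set in software, here is a minimal sketch (in Python) of how one card could be represented; the field names and example content are illustrative assumptions, not the schema or wording of the published toolkit.

    from dataclasses import dataclass, field

    # Illustrative sketch only: field names and example text are assumptions,
    # not the published toolkit's actual schema or card wording.
    @dataclass
    class LiteracyCard:
        concept: str          # the AI concept the card introduces, e.g. "Bias"
        definition: str       # plain-language definition
        example: str          # relevant real-world example or application
        what_if_prompts: list[str] = field(default_factory=list)  # critical-thinking prompts

    example_card = LiteracyCard(
        concept="Bias",
        definition="When an AI system's outputs systematically favour or disadvantage certain groups.",
        example="A CV-screening tool that ranks applicants from some backgrounds lower than others.",
        what_if_prompts=["What if the training data reflected only one community?"],
    )

Each card bundles the three elements described above, so a facilitator could filter or shuffle cards programmatically when preparing a codesign session.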

 


Key findings

  • The toolkit led to significantly improved understanding and questioning of AI concepts by nontechnical users.
    Evidence

    Among 50 participants, those using the toolkit demonstrated a 53.94% broader range and 80.17% higher frequency of AI-related terms in their feedback compared to the control group. Terms such as “bias,” “dataset,” and “model” were common among toolkit users but largely absent in the control group.

    What it means

    This toolkit effectively equipped participants with the vocabulary and conceptual understanding needed to engage critically in AI discussions.

  • Toolkit use improved collaboration and mutual understanding in group settings.
    Evidence

    During codesign sessions, nontechnical participants reported that the toolkit “leveled the playing field,” establishing a shared language for discussing AI ideas. Notably, it reduced discomfort by guiding discussion when gaps arose and created a common understanding of technical terms, smoothing interactions between technical and nontechnical participants.

    What it means

    The toolkit facilitated a shared language and mental model among participants, essential for bridging knowledge gaps in multidisciplinary teams.

  • The toolkit supported more diverse and imaginative ideation around AI's applications and impacts.
    Evidence

    Using the toolkit’s “What if?” prompts, participants explored various outcomes, including risks and ethical considerations. For example, one participant reflected that “more negatives than benefits” emerged as they discussed potential harms and benefits.

    What it means

    The toolkit effectively broadened participants' perspectives, encouraging them to consider both positive and negative AI impacts, crucial for responsible AI design.

  • Nontechnical participants valued the chance to contribute their perspectives on AI design.
    Evidence

    Participants expressed appreciation for the toolkit’s ability to surface their unique insights, with two expressing a desire for ongoing input to ensure their concerns, such as ethical implications, are addressed. One participant said they valued providing a “human perspective” in a typically technical design process.

    What it means

    Nontechnical participants valued their inclusion, indicating a strong desire to advocate for public interests and contribute to ethical technology development.

Proposed action

  • Practitioners need to actively incorporate more diverse perspectives in AI development, including those of nontechnical and underrepresented groups. This approach counters the biases of traditional user-centred design and fosters more equitable, participatory, and just AI systems.


Cite this brief: Mougenot, Céline. 'Codesigning AI with End-Users: An AI Literacy Toolkit for Nontechnical Audiences'. Acume. https://www.acume.org/r/codesigning-ai-with-end-users-an-ai-literacy-toolkit-for-nontechnical-audiences/

Brief created by: Dr Céline Mougenot | Year brief made: 2024

Original research:

  • Smith, F., Mougenot, C., et al. ‘Codesigning AI with End-Users: An AI Literacy Toolkit for Nontechnical Audiences’, Interacting with Computers, pp. 1–13. https://doi.org/10.1093/iwc/iwae029. Open-access PDF: https://spiral.imperial.ac.uk/bitstream/10044/1/113573/5/iwae029.pdf


Empirical Research: Mixed Methods | 2024

"Codesigning AI with End-Users: An AI Literacy Toolkit for Nontechnical Audiences"


Published in Interacting with Computers, pp. 1–13.
Peer Reviewed

DOI: 10.1093/iwc/iwae029
Methodology
This is a mixed-methods study.
Keywords: quantitative study, sentiment analysis, thematic analysis, workshop facilitation, card-based toolkit

This mixed-methods study followed a design research approach to develop and assess an AI literacy toolkit in two stages. Stage one was a quantitative analysis of AI literacy improvement among nontechnical participants (N=50), using sentiment analysis via the Google Natural Language API together with keyword frequency and breadth analysis. Stage two was a qualitative assessment of the toolkit's impact on collaboration during a codesign workshop (N=6), with thematic analysis of participant interviews to evaluate the toolkit's effect on shared understanding and team dynamics within multidisciplinary groups.
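
As a rough illustration of the quantitative stage, the sketch below (in Python) shows how keyword breadth (number of distinct AI-related terms) and frequency (total mentions) could be computed and compared between the toolkit and control groups, and how a document sentiment score could be requested from the Google Cloud Natural Language client. The keyword list and function names are assumptions made for illustration; they are not the study's actual lexicon or analysis code.

    import re
    from collections import Counter

    # Hypothetical keyword list; the study's actual AI-term lexicon is not reproduced here.
    AI_TERMS = {"bias", "dataset", "model", "algorithm", "training", "privacy", "prediction"}

    def keyword_metrics(responses):
        """Return (breadth, frequency) of AI-related terms across free-text responses:
        breadth = number of distinct AI terms used, frequency = total mentions."""
        tokens = [t for r in responses for t in re.findall(r"[a-z]+", r.lower())]
        counts = Counter(t for t in tokens if t in AI_TERMS)
        return len(counts), sum(counts.values())

    def relative_increase(toolkit_value, control_value):
        """Percentage by which the toolkit group exceeds the control group."""
        return 100.0 * (toolkit_value - control_value) / control_value

    def sentiment_score(text):
        """Document-level sentiment via the Google Cloud Natural Language API
        (requires the google-cloud-language package and valid credentials)."""
        from google.cloud import language_v1
        client = language_v1.LanguageServiceClient()
        document = language_v1.Document(content=text, type_=language_v1.Document.Type.PLAIN_TEXT)
        response = client.analyze_sentiment(request={"document": document})
        return response.document_sentiment.score  # ranges from -1.0 (negative) to 1.0 (positive)

Comparing keyword_metrics(toolkit_responses) with keyword_metrics(control_responses) via relative_increase would yield percentage differences of the kind reported above (a 53.94% broader range and 80.17% higher frequency of AI-related terms).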



