Prompts // 7 (Ensuring Accountability)

Building on my last post about the ethical considerations of using AI and the ‘conscientious commands’ we should be aware of, I wanted to think about our responsibilities when using, and directing others to use, AI tools like ChatGPT or DALL-E.

Again, I’m using ChatGPT to form the basis of my writing, but I’m not leaving it to the tool to do everything – it’s only a start. It’s helping me formulate ideas on what’s involved and what I need to understand. From this I explore and expand my understanding, direct further commands to uncover or provide more in-depth content, etc. It’s useful, but some of the language is clearly not my own, despite ChatGPT learning about me and my writing style.

The prompts we create and post to ChatGPT are more than just questions – they’re the directions by which we tell AI where we want it to travel. How we instruct ChatGPT may define the output, but how we govern ourselves and our ethical code will determine whether the responses AI generates are not only informative but also responsible and truthful. The critical role of accountability in AI prompt engineering – and how the prompts we engineer must reflect a commitment to accuracy, reliability, and educational values – is worth spending some time with.

The more I read about what others in this sphere are doing with AI, the more convinced I am that someone needs to pause and think about these uses and their impact.

Defining Accountability in AI Interactions:
Accountability in our use of AI tools is about more than ensuring a correct output; it’s about fostering a responsible exchange where AI understands the gravity of its answers. It’s the practice of designing prompts that lead AI to consider the implications of the information it provides and the consequences that such information might bear.

Expanded Context: When asked to analyze trends in student performance, the prompt should be constructed so that AI respects individual confidentiality and presents aggregated data with sensitivity to the broader implications of such analyses.

Deeper Reasoning: Accountability in AI is a shared responsibility. As users of these systems, we must anticipate the various interpretations of our questions, using our prompts to steer AI away from potential pitfalls and towards responses that are considerate of the educational impact and student welfare.

Factual Integrity:
The heart of AI’s accountability lies in our ability to recognise factually correct and unbiased information in the output. This means crafting prompts that are specific enough to elicit precise data while broad enough to avoid leading the AI towards a predetermined conclusion.

Expanded Context: An AI prompt requesting “evidence to support the effectiveness of homework in elementary education” should be balanced to encourage AI to present a comprehensive view, including studies that might question or disprove that effectiveness.

Deeper Reasoning: The integrity of the information provided by AI influences the quality of education. It is vital that AI’s responses are drawn from credible sources, representing the current consensus and debates within the educational community, thus promoting informed discourse and critical thinking.

Mitigating Misinformation:
In an age where misinformation can spread rapidly, AI’s role in disseminating knowledge comes with the risk of amplifying false or misleading content. Prompt engineers must be vigilant in constructing queries that encourage AI to source information from reputable and authoritative databases.

Expanded Context: A prompt like “Discuss the strategies to improve reading skills among students” should be accompanied by qualifications that direct AI towards research-based strategies endorsed by educational experts, rather than popular or unsupported opinions.

Deeper Reasoning: By consciously guiding AI to differentiate between well-founded information and popular myths, prompt engineers can mitigate the spread of misinformation, ensuring that AI becomes a tool for truth and learning, rather than confusion and error.
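The qualification idea above can be made concrete with a small sketch: wrapping a base prompt with accountability qualifiers before it is sent to a tool like ChatGPT. This is only an illustration of the technique – the `qualify_prompt` helper and the qualifier wording are my own hypothetical examples, not part of any real library.

```python
# Hypothetical sketch: append accountability qualifiers (balance,
# reputable sourcing) to a base prompt before sending it to an AI tool.
ACCOUNTABILITY_QUALIFIERS = {
    "balance": (
        "Present a comprehensive view, including credible studies that "
        "question or disprove the claim."
    ),
    "sourcing": (
        "Base the answer on research endorsed by educational experts, not "
        "popular or unsupported opinions, and name the sources."
    ),
}

def qualify_prompt(base_prompt, qualifiers=("balance", "sourcing")):
    """Return the base prompt with the chosen qualifiers appended."""
    extras = [ACCOUNTABILITY_QUALIFIERS[q] for q in qualifiers]
    return " ".join([base_prompt.strip()] + extras)

prompt = qualify_prompt(
    "Discuss the strategies to improve reading skills among students."
)
print(prompt)
```

The point is not the code itself but the habit it encodes: the qualifiers travel with every prompt, so the request for balance and reputable sourcing is never left to memory.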

Addressing Ethical Implications:
Accountability extends to the ethical implications of AI’s output. Prompts must be designed to avoid advocating for or implying support of any practices that could be considered unethical or harmful.

Expanded Context: For example, a prompt like “How can we ensure students’ focus in online learning?” should be nuanced to exclude solutions that may infringe on privacy or autonomy, guiding AI towards ethical and respectful strategies.

Deeper Reasoning: It is incumbent upon us to foresee the broader ethical impacts of our prompts, ensuring that AI’s suggestions align with educational values such as respect for student privacy, inclusivity, and the promotion of a safe learning environment.
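One way to build that foresight into the drafting habit is a simple review step that flags prompt wording likely to steer AI towards privacy-infringing suggestions, so the author can rephrase before sending. The flag list below is a toy example of my own, not an exhaustive or authoritative ethics check.

```python
# Toy illustration: flag prompt wording that might invite
# privacy-infringing suggestions, prompting a rephrase before sending.
PRIVACY_FLAGS = ("monitor", "track", "surveil", "record students")

def review_prompt(prompt):
    """Return any flagged terms found in the prompt (case-insensitive)."""
    lowered = prompt.lower()
    return [term for term in PRIVACY_FLAGS if term in lowered]

flags = review_prompt(
    "How can we monitor students' screens to ensure focus in online learning?"
)
print(flags)  # 'monitor' is flagged, suggesting the prompt be reworded
```

A real review would rely on judgement rather than keywords, but even a crude check like this institutionalises the pause the paragraph above argues for.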

How we develop our understanding of prompt engineering must be aligned with the ethical compass of accountability. In every question or instruction we provide to AI, we must consider the implications, looking for responses that are informative, ethical, and beneficial to the (educational) journey. Through thoughtful and responsible prompts we can ensure that AI serves as an ally in the pursuit of knowledge and a beacon of integrity in education.

Photo by Emiliano Vittoriosi on Unsplash