How we use Prism (AI) for analysis: bias, transparency, and ethical considerations

This article covers key considerations for our AI-powered analysis: bias mitigation, ethical guidelines, transparency, and regulatory compliance.


Overview

Prism analyses free-text comments from employee engagement surveys conducted on our platform. It categorises comments into pre-defined themes, summarises the feedback, and suggests potential actions based on this analysis. Users can also query the data further via a chat function.

1. Bias and Fairness

How do you identify and mitigate bias in your AI models?
The AI models used in our platform are continually assessed for bias. They are designed with fairness in mind, using methodologies to identify and reduce bias in their predictions. While no AI system can be completely bias-free, the models are trained on comments from a wide range of cultures, languages, and backgrounds to minimise discriminatory outcomes.

What processes are in place to ensure diverse and representative training data?
In conjunction with OpenAI, we employ large and diverse training datasets drawn from a wide variety of sources, helping ensure the model has exposure to many perspectives. Our team also monitors the AI's performance to catch any unexpected skewing in results and acts on these insights when necessary.

Can you provide examples of bias audits or debiasing initiatives conducted on your AI systems?
Both People Insight and OpenAI regularly perform audits on the models to detect and mitigate biases. These include examining the training data and the model outputs to find patterns of unfair bias and taking steps to address them, such as rebalancing datasets or adjusting model weights. We leverage OpenAI's advancements in this area to continually refine our comment analysis.


2. Training Data and Model Development

What types of data are used to train your AI models, and how do you ensure its quality and relevance?
The models are trained on diverse datasets drawn from a broad range of internet content, books, and other knowledge sources, and are built to generalise across multiple domains, ensuring relevance to a variety of topics. We rely on OpenAI's ongoing commitment to data quality and diversity.

How often are the models retrained, and what criteria determine the need for retraining?
The models used are retrained periodically as part of OpenAI's research and development process. Retraining occurs when significant improvements or updates are made to the model architecture, or when there is a need to integrate more up-to-date data, which helps keep the models fresh and relevant to current trends and language patterns. All updates are reviewed by People Insight as part of our ongoing auditing and analysis.


3. Transparency and Explainability

How do you ensure the transparency and explainability of your AI systems?
We aim to provide as much transparency as possible in how the AI analyses comments, and we strive to make its outputs explainable so that users can understand how decisions (such as categorising a comment) are made. We offer clear documentation on how the AI processes data, categorises themes, and generates summaries.

Can you provide documentation or reports on how decisions are made by the AI system?
Yes, we provide documentation on how the AI system works within our platform. This includes details on how themes are selected, how sentiment is analysed, and how suggested actions are generated. Additionally, we collect and track feedback from users on the AI-generated responses to refine the prompts and improve the results over time. We also monitor the prompts entered by users to help us understand usage patterns and optimise the system's value.


4. Ethical Considerations

What ethical guidelines do you follow in developing and deploying AI solutions?
We follow the ethical guidelines provided by OpenAI, which include principles of fairness, transparency, and accountability. In developing our AI-driven solutions, we prioritise the ethical use of data, ensuring that personal information is handled with care and that the system operates in a way that benefits all users equitably.

How do you ensure accountability for the actions of your AI systems?
We take accountability seriously. While the AI aids in analysing data and providing recommendations, final decisions always rest with people. Our users have the final say, and we offer mechanisms to question or override AI-driven outputs when necessary.


5. Compliance and Standards

How do your AI solutions align with existing regulations and standards, such as the EU AI Act or NIST frameworks?
Our platform aligns with industry standards and regulations, such as the EU AI Act and the NIST AI Risk Management Framework. We stay informed about legal requirements and ensure our AI models comply with data privacy laws (such as GDPR) and ethical AI usage standards.


By addressing these key questions, we aim to provide users with clarity and confidence in how our AI-powered comment analysis works, ensuring fairness, transparency, and accountability throughout the process.
