Bias and discrimination concerns in AI have been raised repeatedly by researchers in recent years (Ferrara, 2023). Briefly, if the data used to train AI models is biased, whether due to societal inequities or, in the social sciences, historical patterns, these systems can replicate or even amplify those biases and societal stereotypes. The consequences can affect applications such as healthcare, where biased AI models may produce unequal treatment recommendations for different groups of patients.
For scientists and researchers, understanding and addressing these risks is essential to ensuring the responsible use of AI in research and beyond. Researchers suggest several approaches to mitigating AI bias: data pre-processing, mindful and justified model selection, and data post-processing (Ferrara, 2023).
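As an illustration of the first of these approaches, the sketch below implements a simple pre-processing technique known as reweighing, which assigns each training sample a weight so that every combination of protected group and label carries equal total influence during training. This is a generic example, not a method taken from Ferrara (2023); the `reweigh` function and the toy data are hypothetical.

```python
# Minimal sketch of a pre-processing approach (reweighing), assuming a toy
# dataset with a categorical protected attribute and a binary label. The
# weights make each (group, label) combination contribute equally when a
# model is later fit on the weighted samples.
from collections import Counter

def reweigh(groups, labels):
    """Return one training weight per sample:
    w = P(group) * P(label) / P(group, label)."""
    n = len(labels)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    joint_counts = Counter(zip(groups, labels))
    return [
        (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy example: group "a" rarely sees the positive label, so those samples
# receive weights above 1 and are emphasized during training.
groups = ["a", "a", "a", "b", "b", "b"]
labels = [0, 0, 1, 1, 1, 0]
print(reweigh(groups, labels))  # e.g. (a, 1) gets weight 1.5, (a, 0) gets 0.75
```

Under-observed group-label combinations are upweighted and over-observed ones downweighted, so a downstream model no longer inherits the skewed joint distribution directly from the raw data.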
Explainable AI is a domain within AI research that emerged in response to the need for system transparency and user trust (Thalpage, 2025). Research in Explainable AI shows a consistent bias toward Western user populations, minimal representation of underrepresented communities, and a lack of inclusivity-oriented work to address this issue.
The AI research agent scienceOS embeds bias-mitigation approaches into every search by ranking sources using a combination of metadata, content relevance, and citation data to surface diverse, high-quality evidence. With one-click deep dives such as critical-review searches, follow-up questions, and citation analyses, it helps uncover opposing viewpoints and nuanced context.
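scienceOS's internal ranking is not public, so the sketch below only illustrates the general pattern the text describes: combining a metadata signal (here, recency), a content-relevance score, and citation counts into a single ranking score. The `Source` fields, the weights, and the `rank` function are all assumptions made for illustration.

```python
# Illustrative multi-signal ranker, not scienceOS's actual algorithm.
import math
from dataclasses import dataclass

@dataclass
class Source:
    title: str
    year: int          # metadata signal (publication year)
    relevance: float   # content-relevance score in [0, 1], e.g. from a retriever
    citations: int     # citation-count signal

def rank(sources, current_year=2025):
    """Sort sources by a weighted combination of the three signals."""
    def score(s):
        recency = 1.0 / (1 + (current_year - s.year))   # newer scores higher
        citation = math.log1p(s.citations) / 10.0       # diminishing returns
        return 0.6 * s.relevance + 0.2 * recency + 0.2 * citation
    return sorted(sources, key=score, reverse=True)

papers = [
    Source("Survey of AI bias", 2023, 0.9, 120),
    Source("Older foundational work", 2010, 0.7, 2000),
    Source("Recent niche study", 2025, 0.8, 5),
]
for p in rank(papers):
    print(p.title)
```

Blending independent signals rather than sorting by citation count alone is one way such a ranker can surface recent or less-cited work alongside established sources, which is consistent with the diversity goal described above.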