In AI development, application, and research, it is essential that individuals are fully informed about how their data is used and that their explicit consent is obtained. This principle is not just about compliance; it reflects a deeper commitment to respecting individual human autonomy.
Researchers suggest approaching autonomy and AI along two distinct dimensions: autonomy-as-authenticity, which relates to being true to one's own values and identity, and autonomy-as-agency, which concerns the ability to make and act on decisions freely (Prunkl, 2024). Understanding this distinction helps pinpoint different ethical challenges. Some, such as manipulation or the constraint of choice, are well known; others, such as how AI might subtly shape user preferences over time (adaptive preference formation), have received less attention but are equally important. The distinction also clarifies that AI systems can play different roles in relation to human autonomy: they can act as agents that influence decisions, or as tools that serve human purposes. Recognizing these roles allows us to better assess the ethical implications of AI use, especially in research and education.
At scienceOS, we take the risks of compromised autonomy seriously. We do not use user inputs (e.g., prompts or files) or outputs (e.g., text answers or plots) to train AI models; uploaded files are encrypted and the user retains full access control; and the AI acts as a good-faith assistant that may suggest further directions but does not manipulate a researcher's course of action.