Topic 2F – Ethical Considerations

The use of AI often sparks debates about ethics and bias. AI systems learn from human-generated data and judgements, and some of that human insight is inherently biased. It is therefore important to have a high degree of confidence that the data used to train and run AI is correct and accurate.

EO research makes use of various types of data, such as satellite images and social media data. However, improved image resolution raises questions under the European General Data Protection Regulation (GDPR), which came into force in 2018. The collection, storage, processing and transfer of such data therefore need to be carried out with much greater attention to legal and ethical requirements, for example by adopting supporting technologies that give data contributors greater control over their (personal) data on the one hand, and support the development of the European data economy on the other. Technologies such as blockchain are therefore also likely to be researched and deployed, including from an ethical perspective, in fundamental AI4EO research.

Other ethical issues, including but not restricted to algorithmic bias resulting in discrimination, also need to be checked and addressed at early stages of technological development. Building responsible AI4EO systems that embody social norms and values, while ensuring sustainable and inclusive development, is of central relevance. Accordingly, we will address issues of data protection/privacy, data portability, and fairness/equality not just at the level of data collection, but also at the level of data use and dissemination within the sphere of ethics in AI4EO.

Quantifying the uncertainty of labelled data is an ongoing topic in AI4EO, and creating trustworthy AI is an active research area in the wider AI community. Regarding training data, the AI4EO Future Lab at DLR has produced the first benchmark dataset in AI4EO that comes with a human confidence level for its labels. Beyond this aleatoric uncertainty (which, like measurement noise, is difficult to reduce), we are also researching the epistemic uncertainty of the model itself.
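The distinction between aleatoric uncertainty (noise in the labels, which more modelling cannot remove) and epistemic uncertainty (the model's own ignorance, which shrinks with more data) can be sketched with a small toy example. The following is a minimal NumPy illustration, not the Future Lab's actual pipeline: a hypothetical 1-D regression task with deliberately noisy labels, where a bootstrap ensemble of polynomial fits estimates epistemic uncertainty and the residual spread estimates aleatoric uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression task: y = sin(x) plus heteroscedastic label noise,
# standing in for noisy human annotations (aleatoric uncertainty).
x_train = rng.uniform(-3, 3, size=200)
noise_scale = 0.05 + 0.2 * (x_train > 0)            # noisier labels for x > 0
y_train = np.sin(x_train) + rng.normal(0, noise_scale)

def fit_poly(x, y, degree=5):
    """Least-squares polynomial fit; one member of the ensemble."""
    return np.polyfit(x, y, degree)

# Train an ensemble on bootstrap resamples; disagreement between members
# reflects epistemic (model) uncertainty, which shrinks with more data.
members = []
for _ in range(20):
    idx = rng.integers(0, len(x_train), len(x_train))
    members.append(fit_poly(x_train[idx], y_train[idx]))

x_test = np.linspace(-3, 3, 50)
preds = np.stack([np.polyval(m, x_test) for m in members])  # (20, 50)

mean_pred = preds.mean(axis=0)
epistemic = preds.std(axis=0)        # spread across ensemble members

# Aleatoric uncertainty is estimated from residual noise around a single fit;
# unlike epistemic uncertainty, it cannot be reduced by a better model.
residuals = y_train - np.polyval(fit_poly(x_train, y_train), x_train)
aleatoric = residuals.std()

print(f"mean epistemic std over test points: {epistemic.mean():.3f}")
print(f"aleatoric std estimate: {aleatoric:.3f}")
```

A benchmark with per-label human confidence, as described above, essentially supplies the `noise_scale` term directly instead of forcing it to be inferred from residuals.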

Explainability also helps to address bias problems in AI. Bayesian deep learning is one family of techniques that lends itself to understanding the uncertainty of a network.
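One widely used practical approximation to Bayesian deep learning is Monte Carlo dropout: dropout is left active at inference time, and the spread of many stochastic forward passes serves as an uncertainty estimate for the network's prediction. The sketch below uses a hypothetical two-layer network with random, untrained weights purely to show the mechanics; the weight shapes and the input are illustrative assumptions, not a real EO model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical fixed weights for a small two-layer classifier
# (shapes: 4 input features -> 16 hidden units -> 3 classes).
W1 = rng.normal(0, 0.5, (4, 16))
W2 = rng.normal(0, 0.5, (16, 3))

def forward(x, drop_rate=0.5):
    """One stochastic forward pass with dropout left ON at inference."""
    h = np.maximum(x @ W1, 0)                       # ReLU hidden layer
    mask = rng.random(h.shape) > drop_rate          # random dropout mask
    h = h * mask / (1 - drop_rate)                  # inverted dropout scaling
    logits = h @ W2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)        # softmax probabilities

x = rng.normal(size=(1, 4))                         # one hypothetical input

# Monte Carlo dropout: average many stochastic passes; the variance across
# passes approximates the network's epistemic predictive uncertainty.
samples = np.stack([forward(x) for _ in range(100)])  # (100, 1, 3)
mean_probs = samples.mean(axis=0)
uncertainty = samples.std(axis=0)

print("mean class probabilities:", np.round(mean_probs, 3))
print("per-class std over passes:", np.round(uncertainty, 3))
```

A prediction whose per-class standard deviation is large relative to its mean probability is one the network is unsure about, and in an AI4EO setting such cases can be flagged for human review rather than acted on automatically.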

Featured Educator

  • Xiaoxiang Zhu

Optional Further Reading

Discussion
