Towards Ecosystems for Responsible AI
Abstract
Governing artificial intelligence (AI) requires multi-actor cooperation, but what form could this cooperation take? In recent years, the European Union (EU) has made significant efforts to become a key player in establishing responsible AI. In its strategy documents on AI, the EU has formulated expectations and visions concerning ecosystems for responsible AI. This paper analyzes expectations on potential responsible AI ecosystems in five key EU documents on AI. To analyze these documents, we draw on the sociology of expectations and synthesize a framework comprising cognitive and normative expectations on sociotechnical systems, agendas and networks. We found that the EU documents on responsible AI feature four interconnected themes, which occupy different positions in our framework: 1) trust as the foundation of responsible AI (cognitive–sociotechnical systems), 2) ethics and competitiveness as complementary (normative–sociotechnical systems), 3) a European value-based approach (normative–agendas), and 4) Europe as a global leader in responsible AI (normative–networks). Our framework thus provides a mapping tool for researchers and practitioners to navigate expectations in early ecosystem development and to help them decide how to respond to articulated expectations. The analysis also suggests that expectations on emerging responsible AI ecosystems have a layered structure, in which network building relies on expectations about sociotechnical systems and agendas.