Conversations on the Social Embeddedness of Digital Technologies
Digital technologies are characteristic of our society. At the same time, they lead to far-reaching changes in the processes of work, organisation and privacy, and in society as a whole. This raises questions and challenges about how the digital revolution should be steered in a way that is consistent with the normative and ethical foundations of liberal civil societies, which are a precondition for modern democratic statehood.
The general aim of these conversations is to gain insight into the interplay between society and digital technologies. We welcome researchers from different academic disciplines, e.g. the social sciences and computer science, who present current perspectives and research results on digital technologies and their social embeddedness.
The conversations take place every Tuesday from 16:00 to 17:00 during the summer term 2021 as a virtual meeting. After a short introduction, the speakers present recent work, followed by a plenary discussion of the content.
The conversations are part of the research training group “Social embeddedness of Autonomous Cyber Physical Systems (SEAS)” and the research center “Human Cyber-Physical System (HCPS)” with the cluster “Human – Technology – Society”. The target group of the conversations are members of SEAS and HCPS. Moreover, the conversations will also be of interest to undergraduate and graduate students and researchers from disciplines such as Social Science, Computer Science, Health and Neuroscience.
Effective governance of self-driving cars requires broad public support. Although policy-makers and practitioners agree on the growing need to regulate the development of self-driving cars and on the importance of regulation that is consistent with citizens' moral beliefs and societies' legal standards, there is little systematic evidence about which type of regulation citizens prefer and whether the public is sensitive to specific features of possible regulation regimes.
In a conjoint experiment, respondents are asked to compare two hypothetical regimes regulating self-driving cars and to decide which regime they prefer. The regime profiles vary with respect to three substantive dimensions: (1) safety (admission agency for self-driving cars and safety standards compared to conventional cars), (2) legal framework (liability for accidents caused by the autopilot and ethical prioritization) and (3) autonomy (data protection and supervision of the autopilot by the driver). The pre-registered conjoint experiment was conducted on representative online samples for the USA (N=1,188), Japan (N=1,135) and Germany (N=1,174). While all three countries have a substantial automotive industry, the country selection also reflects cultural differences in the subjective evaluation of AI and autonomous vehicles. Across all samples, we find that citizens strongly prefer regulation that requires permanent human supervision of self-driving cars and stricter safety standards. Cross-country differences emerge on the safety dimension: respondents from Japan and Germany prefer public authorities to be in charge of the approval of self-driving cars, while respondents in the US are more likely to accept industry self-regulation. Furthermore, an in-depth sub-group analysis reveals that preferences towards the regulation of self-driving cars are only weakly affected by respondents' attitudes towards technology (technophobia), while their partisan orientation has no effect on regulatory preferences at all.
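The randomized pairing at the heart of such a conjoint design can be sketched in a few lines of Python. The attribute names and levels below are illustrative assumptions based on the three dimensions described above, not the study's exact wording:

```python
import random

# Hypothetical attribute levels for the three regulatory dimensions
# (safety, legal framework, autonomy); names are illustrative only.
ATTRIBUTES = {
    "admission_agency": ["public authority", "industry self-regulation"],
    "safety_standard": ["same as conventional cars", "stricter than conventional cars"],
    "liability": ["manufacturer", "car owner"],
    "ethical_prioritization": ["protect passengers", "minimize total harm"],
    "data_protection": ["data stays in the car", "data shared with manufacturer"],
    "human_supervision": ["permanent supervision required", "no supervision required"],
}

def random_profile(rng):
    """Draw one hypothetical regulation regime by sampling each attribute."""
    return {attr: rng.choice(levels) for attr, levels in ATTRIBUTES.items()}

def conjoint_task(rng):
    """Pair two independently randomized regime profiles for a forced choice."""
    return random_profile(rng), random_profile(rng)

rng = random.Random(42)
regime_a, regime_b = conjoint_task(rng)
for attr in ATTRIBUTES:
    print(f"{attr}: A = {regime_a[attr]!r} | B = {regime_b[attr]!r}")
```

Because each attribute is randomized independently of the others, differences in choice probabilities can be attributed to individual attribute levels, which is what allows the study to isolate, for example, the effect of requiring permanent human supervision.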
For more information about Prof. Dr. Markus Tepe click here.
Unraveling the social embeddedness of digital technologies may also include an analysis of the ways in which the hopes and fears, the threats and potentials associated with these technologies are (re)produced in and through (the use of) different forms of popular media. Starting from this assumption, my presentation reflects on a selection of fictional and non-fictional representations of digital technologies and the ways in which they project, or are referred to in, both utopian and dystopian scenarios of digital futures, e.g. to negotiate their effects on processes of subjectivation and social transformation. These representations can thus be considered both socio-culturally produced, in that they are given shape in historically, culturally and medially specific environments, and socio-culturally productive, in that they actively participate in the discourse on digital technologies and thereby contribute to shaping our perceptions and understandings of what these technologies are and do.
For more information about Prof. Dr. Martin Butler click here.
Current debates on Artificial Intelligence (AI) often depict this technology as possessing some appalling properties. Even though AI has been a lively research field for decades, most debates describe it as something revolutionary and new. In addition, AI is portrayed as a monolithic technology and posed in opposition to humans: robots will take over our work, once singularity happens machines will rule over humans, and so on. In contrast, Uli Meyer will discuss the social embeddedness of AI. He shows that AI – like all technology – is developed, implemented and used in socio-technical constellations. He argues that to understand the impact of AI on society, and to be able to shape and influence it, we have to understand these constellations.
For more information about Prof. Dr. Uli Meyer click here.
For the last few years, AI-based techniques have emerged in the art world, discussed under terms like computational creativity or artificial creativity. The lecture shows examples of creative experiments with artificial intelligence in the fields of literature, film and art. It will raise the question of what creativity means in the digital age. Will computer programs replace the artist? Or will they open up new possibilities for creating innovative forms of art – not as a substitute for the artist, but as a collaborator?
For more information about Prof. Dr. Stephanie Catani click here.
Digital technologies envision not only typical users but also typical situations of use, e.g. a user consulting a decision support system. Many of these situations occur in organized contexts where users are members and decisions are part of a much broader architecture of decisions and expectations. What difference do organizations as active contexts make for understanding the embeddings and disembeddings of digital technologies?
For more information about Prof. Dr. Stefanie Büchner click here.
Understanding – and maybe shaping – the real-time society requires modelling the mechanisms that guide the dynamics of socio-technical systems in general, and of the real-time society in particular. Agent-based modelling is a suitable means to experiment with various governance scenarios and to better understand the functioning of complex systems.
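As a minimal illustration of the approach, the following Python sketch implements a simple threshold-adoption model and compares two hypothetical governance scenarios. The incentive mechanism, parameter values and names are assumptions for demonstration only, not taken from the talk:

```python
import random

def simulate_adoption(n_agents=100, steps=50, incentive=0.1, seed=1):
    """Minimal agent-based sketch: each agent adopts a technology once the
    share of adopters in the population plus a governance incentive exceeds
    its individual threshold. All parameters are illustrative assumptions."""
    rng = random.Random(seed)
    thresholds = [rng.random() for _ in range(n_agents)]  # heterogeneous agents
    adopted = [False] * n_agents
    history = []
    for _ in range(steps):
        share = sum(adopted) / n_agents  # current adoption level
        for i in range(n_agents):
            if not adopted[i] and share + incentive >= thresholds[i]:
                adopted[i] = True
        history.append(sum(adopted) / n_agents)
    return history

# Experiment with two governance scenarios: weak vs. strong incentive,
# holding the agent population (same seed) constant.
weak = simulate_adoption(incentive=0.05)
strong = simulate_adoption(incentive=0.25)
print(f"final adoption: weak={weak[-1]:.2f}, strong={strong[-1]:.2f}")
```

Even this toy model exhibits the cascade dynamics typical of complex socio-technical systems: a small change in the governance parameter can tip the population from partial to near-universal adoption, which is precisely the kind of scenario comparison agent-based modelling makes cheap to run.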
For more information about Prof. Dr. Johannes Weyer click here.
When, where and why robots are socially accepted is a driving question among (social) roboticists. I will contrast the common belief that emotional displays or polite conversations are key to social acceptance and argue that robots instead need to better understand their social surroundings. I will discuss the concept of common ground and derive challenges that need to be addressed to enable robots to interact in a socially acceptable way. Finally, I will discuss why this is important for group dynamics in mixed human-robot groups.
For more information about Prof. Dr. Rosenthal-von der Pütten click here.
Recent advances in digital technologies in general and artificial intelligence in particular have stirred high hopes and deep fears at the same time. High expectations of great advancements in science, industry and society are confronted with equally grave concerns regarding threats to civil rights, societal values and democratic liberties posed by such technologies. As a consequence, requests have been voiced demanding the ethical design of such technologies, a trend mirrored in many recent policy papers. In my talk, I will elucidate the role ethics can play in the design of digital technologies. I will investigate various values to be considered in the design of AI systems, such as transparency and accountability, justice and fairness, or privacy. At what point and in what way can these values be accounted for in the processes of designing, developing and using such systems? I will conclude my talk with some remarks regarding the governance of big data and artificial intelligence.
For more information about Prof. Dr. Judith Simon click here.
For about ten years now, Industry 4.0 and digitisation have been discussed as a new megatrend shaping work. At the SOFI Göttingen, we are working on several research projects focused on what is really going on in companies and workplaces. Our research is based on in-depth case studies, often combining workplace observations with interviews (managers from different levels, experts from different domains, works councils as well as employees) and questionnaires/surveys. From our sociology-of-work perspective, the research focuses on how digitisation is used and implemented at the site and work-process level, on managerial and work-policy concepts, and on processes of negotiation around digitisation. We look at work-related outcomes and at what employees think about digitisation.

One main finding from our research is that digitisation is used in a variety of ways, with a wide range of outcomes for work. But it is not simply diversity: there are patterns related especially to sectors, business models, types of work processes (tasks, occupations) and work policies. Quite often, we do not find disruption but a great deal of path dependency. Concerning work situations and occupational health, at least in most German workplaces, the biggest problems are not new forms of surveillance, intensified digital/algorithmic control or digital Taylorism. A much more important issue is that, from a user perspective, digital technologies quite often do not deliver what suppliers promised, which frequently results in work intensification. Employees do not have enough influence on how technologies and work processes are shaped. Getting employees involved in the design and optimisation of technologies-in-use, work policies, and organisational as well as work-environmental issues is of major importance.
For more information about Dr. Martin Kuhlmann click here.
For the seamless social embedding of digital products to succeed, user- and practice-oriented methods are essential, such as Participatory Design (PD). However, PD is highly demanding when target groups have little experience with digital media, as is often the case in product design for and with older adults. Therefore, it is necessary to understand learning processes as essential elements of co-design and to integrate them into the research process. The lecture will illustrate the vital link between learning and co-design with two project examples.
For more information about Prof. Dr. Claudia Müller click here.
Privacy, as danah boyd and others have argued, is networked. Privacy is achieved in the midst of far-flung practices, involving heterogeneous media ensembles, trans-local infrastructure, multiple data, and various people. When privacy is disrupted, bystanders are involved. In this talk, I outline an interdisciplinary perspective on bystanders and their implication in disruptions of privacy.
For more information about Prof. Dr. Susann Wagenknecht click here.