Quote:
Public debate is needed to scrutinise these developments. Critical attention must be given to current trends in thought about technology and mental health, including the values such technologies embody, the people driving them and their diverse visions for the future. Some trends – for example, the idea that ‘objective digital biomarkers’ in a person’s smartphone data can identify ‘silent’ signs of pathology, or the entry of Big Tech into mental health service provision – have the potential to create major changes not only to health and social services but to the very way we experience ourselves and our world. This possibility is also complicated by the spread of ‘fake and deeply flawed’ or ‘snake oil’ AI, and the tendency in the technology sector – and indeed in the mental health sciences – to make outlandish claims of a ‘magic bullet’ for relieving human distress. Meredith Whittaker and colleagues at the AI Now research institute observe that disability and mental health have been largely omitted from discussions about AI bias and algorithmic accountability. This report brings them to the fore. It is written to promote basic standards of algorithmic and technological transparency and auditing, but it also takes the opportunity to ask more fundamental questions, such as whether algorithmic and digital systems should be used at all in some circumstances – and if so, who gets to govern them. These issues are particularly important given the COVID-19 pandemic, which has accelerated the digitisation of physical and mental health services worldwide and driven more of our lives online.