The tidal wave of predictions that generative artificial intelligence (AI) created last year shows no signs of abating.

Social media, traditional media and water cooler conversations are awash with predictions about AI's implications, including the risks and dangers of so-called large language models like ChatGPT.

Soothsayers are having a field day with headlines like "OpenAI's ChatGPT will change the world", "First your job, and then your life!" and "30 ways for your business to survive the AI revolution".

Some countries are acting over concerns about generative AI. Italy temporarily banned ChatGPT over privacy concerns.

AI luminaries are also expressing concern. AI pioneer Geoffrey Hinton resigned from Google so that he could speak freely about the technology's dangers.

"I don't think they should scale this up more until they have understood whether they can control it," Hinton said.

Hinton's remarks followed an open letter signed by another 'Godfather of AI', Yoshua Bengio, as well as Elon Musk and others, which called for a pause on the development of AI models more advanced than GPT-4.

More recently, Samuel Altman from OpenAI appealed to US lawmakers for greater regulation to prevent AI's possible harms.

Samuel Altman, CEO of OpenAI, before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. Picture: Getty Images

Of course, many individuals and Big Tech companies contend that AI can be a force for good. For example, AI might provide creative assistance in writing and image generation. Professionals in many sectors are already integrating tools like Google's Bard into their work.

All the hubbub raises the question: are we truly at an AI turning point? Is it the beginning of the end for humanity or a passing fad? Should we be worried about emerging AI, and if so, why?

Whichever way you slice the silicon, things are changing fast. We're unlikely to put the genie back in the bottle, so we need to understand and, where possible, mitigate the risks.

There are two camps of thinkers with different takes on AI's immediate and longer-term risks – the 'AI Alignment' (safety first) and 'AI Ethics' (social justice) camps.

Some people – like Musk, Hinton and Altman – fear that AI could be turned into a weapon of mass destruction by autocrats or nations. No matter how carefully AI is programmed, these people think we may inadvertently design algorithms ('Misaligned AI') that relentlessly pursue goals contrary to our interests.

Sometimes AI's harms will be relatively localised – though still significant – like when AI replaces human workers or even eliminates industries. But their largest concern is that AI could get smart enough to threaten humanity itself.

According to these and other AI experts, "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Hinton likened emerging AI to having "10,000 people and whenever one person learnt something, everybody automatically knew it. And that's how these chatbots can know so much more than any one person."

Thinkers like Swedish philosopher Professor Nick Bostrom believe that in the not-too-distant future, AI may not only match human intelligence but far outstrip it, leading to a dangerous superintelligence.

Professor Nick Bostrom is the director of the Future of Humanity Institute at Oxford University. Picture: Getty Images

But other experts regard the risks of an AI apocalypse as "overblown". AI's history reveals sharp jumps in technological capabilities that apparently heralded unlimited growth. Historically, these jumps have been bookended by long AI winters where progress slows considerably. However, some suspect that this time will be different.

People in the 'AI Ethics' camp focus on the social justice implications of AI. Algorithms with subtle biases are already seeping into everyday life. These algorithms absorb and transmit prejudices from the data used to train them.

Researchers like Emily Bender and Timnit Gebru (who was forced out of Google after raising ethical concerns about AI bias) have spoken out about how AI can propagate misinformation, deepfakes and prejudice. For example, asking generative AI for the pronoun of a doctor in a short story is likely to return 'he' rather than 'she' or 'they', mirroring prejudices in society. This and other biases show up in many algorithms, like those determining who gets bail and those generating images from text descriptions.