Business News

Latest business breaking news from around the world
Are we ready for AI that knocks out jobs, fuels disinformation, and is difficult to regulate?

February 8, 2023 by www.moneycontrol.com


On February 6, Sundar Pichai, CEO of tech behemoth Google and its parent Alphabet, announced 'Bard', the company's experimental AI service. Google's response came nearly three months after a hitherto lesser-known company, OpenAI, announced 'ChatGPT' to the world. The last three months have seen a flurry of activity as an excited world embraced the tool, making it compose prose and poetry and testing how far it could go.

Tech pundits even began to write that ChatGPT would be a great addition, but that OpenAI still had to figure out how to make money from it. Even before the ink could dry on those articles, Microsoft announced Microsoft Teams Premium, which features services powered by OpenAI's GPT-3.5, hoping to make its online meeting and collaboration tool much more intelligent.

Three Key Impacts

As happens with most new advancements in technology, a discussion and debate is emerging on what AI can or can't do. The impact that technology has on people, cultures, politics and nation-states is expected and well documented. But each new development in AI also brings forth worries about how relevant the human race will remain as functions and decision-making shift to machines.

For now, the coming of conversational AI services like ChatGPT and Bard (the latter built on LaMDA, Google's Language Model for Dialogue Applications) will impact three major areas.

* First, the role that content plays in the evolution of societies and economies will probably be the first to face a challenge.

* Second, it will prove to be a major challenge for regulators as they grapple with the after-effects of a technology that has already arrived.

* Third, as the technology gets weaponised, it will impact the ability to combat fake news and disinformation.

Replacing Content Creators

The coming of the internet saw the rise of a new language, culture, and multiple subcultures. It also saw economies rise, as the world wide web changed the way information was produced and consumed. ChatGPT and Bard have already begun to shift that paradigm. Once again, the language of the internet, as we understand it, is set to change. An artificially intelligent tool can now trawl myriad sources, develop correlations, and produce not just prose but poetry too.

This means economies that were built on generating content are now at risk of being made redundant. Columnists are already wondering whether ChatGPT can knock out a large chunk of the content-generating industry, including functions like technical writing and legal drafting. A tool with the right sources and keywords could prove much faster and more efficient than a pool of human minds.

A lot of the Information Technology Enabled Services (ITES) industry in emerging economies could suddenly become redundant, once again hurting emerging markets and shifting the geopolitical balance back towards the advanced economies. Content creators on social media platforms, which reward consistent uploads and participation in trends and memes, must now contend with a technology that can generate such content easily and without fatigue.

It also raises questions about how ChatGPT and its avatars source the information behind the content they generate. Unlike researchers who pore through reams of text and build a thesis over a period of time, ChatGPT can produce a credible-looking piece of work in seconds. Besides raising questions about how the text was sourced and the conclusions substantiated, it will also challenge how information that merely seems credible is consumed.

Nascent Regulatory Frameworks

Naturally, lawmakers have been worried about what AI can do. However, most recognise that it is futile to try and regulate a technology itself; it is easier to regulate its outcomes once they are known. In the case of AI, most countries are at a nascent stage of their regulatory frameworks. Essentially, everyone is grappling with the unknown.

India's history shows that in little over a decade, the country went from advocating massive computerisation for economic reasons to nearly banning it for political expediency. Advancements in technology have continued at an increasing pace despite India's policy (or lack thereof) on regulation. In the AI space so far, NITI Aayog has held a consultation on a National Strategy for Artificial Intelligence (NSAI) and released a two-part approach document titled "Responsible AI (RAI) #AIforall", which proposes principles for responsible AI and approaches for operationalising them.

In the United States, the National Institute of Standards and Technology is building an AI risk management framework that tries to lay down elements of "responsible development". It is an attempt to ensure that "core concepts in responsible AI emphasise human centricity, social responsibility and sustainability". In sectors such as medicine, the US Food and Drug Administration (FDA) is also trying to draw up an action plan, in response to stakeholder feedback, on how to regulate AI/machine learning-based 'software as a medical device' (SaMD).

Similarly, the EU has signed an agreement with the US to jointly further AI research in five sectors and to create regulatory frameworks. The EU's AI Act, already awaiting clearance from the European Parliament, attempts to restrict "unacceptable risk" as well as "high-risk" applications. The EU hopes that, like its GDPR, the AI Act could set global standards once it becomes law.

Weaponising ChatGPT

But as regulators race against time, tools like ChatGPT could also decisively change the way information consumption is weaponised. This will have profound implications for societies already struggling under a deluge of disinformation.

Nearly 70 years ago, Marshall McLuhan, the philosopher who helped define media theory, predicted that the speed at which information travels would have a profound impact on societies. By extension, the speed at which disinformation travels has already impacted not just domestic politics but also geopolitics.

The use of disinformation by Russia to influence US elections is well known and documented. The fake-news farms run by teenagers in Macedonia, widely reported on during the Trump years, demonstrate how a group of people without any state backing can exploit data voids on the internet by flooding it with made-up stories.

The exponential growth of social media has seen an equally exponential rise in disinformation and online gender-based violence, and has even fueled genocide. Traditionally, the deliberation that went into the creation of content and its delivery struck a fine balance, ensuring the credibility of not just information but also its impact.

Social media, while "democratising" news, also brought in fake news, devastating traditional media and the erstwhile gatekeepers of news along the way. But it still took information farms run by human beings to fuel disinformation targeting adversaries and citizens.

The growth of ChatGPT has now changed that paradigm: flesh and blood are no longer needed to produce fake news. Not only can the technology produce fake news that seems completely credible, it can do so in a matter of seconds or less. The weaponisation of any technology is inevitable; it is the management of its consequences that will feel harder than ever before.

Shachi Solanki is Programme Manager and Saikat Datta is CEO, DeepStrat, a New Delhi-based think tank and strategic consultancy that specialises in risk assessment and management. Denny George is co-founder and product engineer at Tattle Civic Technologies. His work involves building tools and datasets to understand and respond to online misinformation.

Views are personal and do not represent the stand of this publication.


Filed Under: Opinion, Artificial Intelligence, Bard, ChatGPT, Google, Meta, chatbot
