
Emerging Technologies Regulation: Don’t Regulate the Tool, Regulate the Use of the Tool

By Anna

Mar 2, 2023


ChatGPT has generated a lot of hype. There is no doubt that OpenAI, with its ChatGPT app, has attracted interest faster, by nearly every measure, than any application before it. It took Instagram around 2.5 months and Spotify around 5 months to reach 1M users. OpenAI’s generative AI-powered chatbot ChatGPT, by contrast, reached 1M users just 5 days after launching at the end of November 2022.

In January 2023, it set another record, reaching 100M monthly active users after 2 months. For comparison, it took TikTok 9 months to hit the same threshold. Its climbing popularity is also reflected by a rapid rise in the volume of Google searches on the topic as well as the number of r/chatgpt subreddit subscribers.

With so much hype also comes panic, especially from those who do not fully understand the technology’s capabilities and benefits, as well as its limitations and drawbacks. We recently observed this kind of panic with the EU’s proposed AI Act. The EU is considering classifying generative artificial intelligence (AI) tools, such as ChatGPT, as “high risk” in its upcoming AI bill and, as a result, subjecting such tools to burdensome compliance requirements.

This overbroad proposal impedes innovation and creativity, and shows that the EU is hitting the panic button instead of carefully weighing the benefits and risks of new technologies. The proposal targets so-called “high risk” applications of AI, including those used in public services, law enforcement, and judicial procedures, which must comply with the strictest requirements, including conformity assessments, technical documentation, monitoring, and oversight measures.

The proposed AI Act would place AI systems that generate complex text (chatbots) in a new high-risk category despite their low risk. AI-powered chatbots can generate complex text from limited human input and fulfill various functions, from writing recipes, poems, scripts, and articles to Internet searches, creative ideation, and summarizing texts.

Like many new technologies, AI chatbots have evoked familiar panic: Doomsayers have been prophesying that such tools could “destroy education,” “create catastrophic redundancies,” “confuse and control the masses,” or “become sentient.”

In addition to ChatGPT, which people already use for a range of valuable functions, this amendment would assign a “high risk” classification to other helpful and harmless text-generation tools.

By treating these use cases as “high risk,” the AI Act will not only curb productivity and creativity but will also limit these tools – many currently free to use – by subjecting them to expensive compliance requirements.

The dual purpose of technology

Technology is a tool. Like any tool, it can be used for good or for evil. Consider a knife: You can use it to cut vegetables for your salad or you can use it as a weapon and harm someone.

Generative AI has the same dual purpose, and there are plausible concerns about chatbots, such as the spread of misinformation or toxic content. For example, you can use a generative AI system ethically to assist in writing emails or press releases, or you can use it unethically to pass off a machine-written article to a scientific journal as your own work. The tool can do both; the choice of how to use it is ours.

This dual purpose and choice apply to any technology, new or old. We have seen it with the internet: We can use it for day-to-day communication with colleagues and teammates, or we can use it for criminal ends in all sorts of “dark web” activities. You can use data analytics and algorithms to build robo-advisors that assist with financial advice, or you can build algorithms for phishing, malware, and ransomware.

Likewise, AI and immersive technologies combined can be used for enhanced user and customer engagement, but they can also be used to deceive and spread misinformation through deepfake photos and videos.

And blockchain and cryptocurrency no doubt have many potential benefits for our global economy and society, in particular enabling financial inclusion for those who need it most. Yet they have acquired a bad reputation as tools used mainly for nefarious activities.

In January 2021, U.S. Treasury Secretary Janet Yellen said: “Cryptocurrencies are a particular concern. I think many are used – at least in a transaction sense – mainly for illicit financing…and I think we really need to examine ways in which we can curtail their use and make sure that money laundering doesn’t occur through those channels.”

But these are misconceptions about cryptocurrency and blockchain technology. Just as the internet is not mainly used for “dark web” activities, cryptocurrency is not mainly used for nefarious activities. According to Chainalysis’ 2021 report, in 2019 criminal activity represented 2.1% of all cryptocurrency transaction volume (roughly $21.4 billion worth of transfers). In 2020, the criminal share of all cryptocurrency activity fell to just 0.34% ($10.0 billion in transaction volume).
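As a rough back-of-envelope check (using only the shares and dollar amounts quoted above, not the underlying report data), those figures also imply how large total cryptocurrency transaction volume was in each year:

```python
# Back-of-envelope check of the Chainalysis figures quoted above:
# if criminal transfers were 2.1% of 2019 volume and 0.34% of 2020 volume,
# the implied totals follow from dividing the dollar amounts by the shares.
criminal_2019 = 21.4e9   # $21.4 billion in criminal transfers (2019)
criminal_2020 = 10.0e9   # $10.0 billion in criminal transfers (2020)
share_2019 = 0.021       # 2.1% of all transaction volume
share_2020 = 0.0034      # 0.34% of all transaction volume

total_2019 = criminal_2019 / share_2019
total_2020 = criminal_2020 / share_2020

print(f"Implied 2019 total volume: ${total_2019 / 1e12:.2f} trillion")
print(f"Implied 2020 total volume: ${total_2020 / 1e12:.2f} trillion")
```

The implied totals (roughly $1 trillion in 2019 and nearly $3 trillion in 2020) underline the point: overall volume grew sharply while the criminal share shrank.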

It is encouraging to see that Yellen’s negative sentiment towards cryptocurrency has softened over the years. By 2022 she could also see the benefits of blockchain technology – the technology that underlies cryptocurrency.

In March 2022, in an interview with CNBC, she said: “I have a little bit of skepticism because I think there are valid concerns about it. Some have to do with financial stability, consumer/investor protection, use for illicit transactions and other things. On the other hand, there are benefits from crypto, and we recognize that innovation in the payment system can be a healthy thing.”

Sensible regulation

Since the dawn of humanity and throughout the evolution of civilization, humans have developed tools to help them evolve, push boundaries, better their lives, and explore new worlds. This is how we went from learning to fly at Kitty Hawk to launching rockets into space; from landing a man on the moon to trying to send humans to Mars by the end of this decade; from horses and carriages to cars that will soon drive themselves.

Humankind has always pushed the boundaries of innovation and evolution. Emerging technologies such as Web3, the metaverse, AI, generative AI, IoT, blockchain, and robotics are the tools that will take us to the next frontiers. Innovation is how we evolve as humankind.

Indeed, a tool has a dual purpose. But do you regulate the tool? Do you regulate the hammer, or do you regulate the use of the hammer?

In the case of ChatGPT, where there are legitimate concerns about chatbots, such as the spread of misinformation or toxic content, legislators should address these risks in sectoral legislation, such as the Digital Services Act, which requires platforms and search engines to tackle misinformation and harmful content, not, as proposed in the AI Act, in a way that entirely ignores the risk profiles of different use cases.

We need future-proof regulation that is independent of any particular tool, be it AI, generative AI, or blockchain. We don’t regulate hammers, or math, or cars. We regulate the use of these tools and require individuals and organizations to follow policies and rules on how to use and implement them.

If we regulate the technologies themselves, we will create rules that stifle innovation and, consequently, our future evolution as humankind. We should instead consider the broad spectrum of their use cases, support the ones that benefit our future, and set rules and policies that mitigate harmful, nefarious activities.

The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.


Image and article originally from www.nasdaq.com. Read the original article here.
