{"id":31681,"date":"2023-05-18T09:22:44","date_gmt":"2023-05-18T13:22:44","guid":{"rendered":"https:\/\/www.kaspersky.co.za\/blog\/ai-government-regulation\/31681\/"},"modified":"2023-05-18T15:32:30","modified_gmt":"2023-05-18T13:32:30","slug":"ai-government-regulation","status":"publish","type":"post","link":"https:\/\/www.kaspersky.co.za\/blog\/ai-government-regulation\/31681\/","title":{"rendered":"Here&#8217;s how we should approach artificial intelligence"},"content":{"rendered":"<p>I\u2019m a bit tired by now of all the AI news, but I guess I\u2019ll have to put up with it a bit longer, for it\u2019s sure to continue to be talked about non-stop for at least another year or two. Not that AI will then stop developing, of course; it\u2019s just that journalists, bloggers, TikTokers, Tweeters and other talking heads out there will eventually tire of the topic. But for now their zeal is fueled not only by the tech giants, but governments as well: the UK\u2019s planning on introducing <a href=\"https:\/\/www.reuters.com\/world\/uk\/britain-opts-adaptable-ai-rules-with-no-single-regulator-2023-03-28\/\" target=\"_blank\" rel=\"nofollow noopener\">three-way AI regulation<\/a>; China\u2019s put draft AI legislation up for a <a href=\"https:\/\/www.reuters.com\/technology\/china-releases-draft-measures-managing-generative-artificial-intelligence-2023-04-11\/\" target=\"_blank\" rel=\"nofollow noopener\">public debate<\/a>; the U.S. is calling for \u201c<a href=\"https:\/\/www.reuters.com\/technology\/us-begins-study-possible-rules-regulate-ai-like-chatgpt-2023-04-11\/\" target=\"_blank\" rel=\"nofollow noopener\">algorithmic accountability<\/a>\u201c; the EU is <a href=\"https:\/\/www.reuters.com\/technology\/ai-booms-eu-lawmakers-wrangle-over-new-rules-2023-03-22\/\" target=\"_blank\" rel=\"nofollow noopener\">discussing but not yet passing draft laws<\/a> on AI, and so on and so forth. 
Lots of plans for the future, but, to date, the creation and use of AI systems haven\u2019t been limited in any way whatsoever; however, it looks like that\u2019s going to change soon.<\/p>\n<p>One plainly debatable matter is, of course, this: do we need government regulation of AI at all? If we do \u2014 why, and what should it look like?<\/p>\n<h2>What to regulate<\/h2>\n<p>What is artificial intelligence? (No) thanks to marketing departments, the term\u2019s been used for lots of things \u2014 from the cutting-edge generative models like <a href=\"https:\/\/en.wikipedia.org\/wiki\/GPT-4\" target=\"_blank\" rel=\"nofollow noopener\">GPT-4<\/a>, to the simplest machine-learning systems, including some that have been around for decades. Remember <a href=\"https:\/\/en.wikipedia.org\/wiki\/T9_(predictive_text)\" target=\"_blank\" rel=\"nofollow noopener\">T9<\/a> on push-button cellphones? Heard about <a href=\"https:\/\/www.kaspersky.com\/blog\/humachine-intelligence-antispam\/6638\/\" target=\"_blank\" rel=\"noopener nofollow\">automatic spam and malicious file classification<\/a>? Do you check out film recommendations on Netflix? 
All of those familiar technologies are based on machine learning (ML) algorithms, aka \u201cAI\u201d.<\/p>\n<p>Here at Kaspersky, we\u2019ve been using such technologies in our products for close on 20 years, always preferring to modestly refer to them as \u201cmachine learning\u201d \u2014 if only because \u201cartificial intelligence\u201d seems to call to most everyone\u2019s mind things like <a href=\"https:\/\/en.wikipedia.org\/wiki\/HAL_9000\" target=\"_blank\" rel=\"nofollow noopener\">talking supercomputers<\/a> on <a href=\"https:\/\/en.wikipedia.org\/wiki\/2001:_A_Space_Odyssey_(film)\" target=\"_blank\" rel=\"nofollow noopener\">spaceships<\/a> and other <a href=\"https:\/\/en.wikipedia.org\/wiki\/R2-D2\" target=\"_blank\" rel=\"nofollow noopener\">stuff<\/a> straight out of <a href=\"https:\/\/en.wikipedia.org\/wiki\/C-3PO\" target=\"_blank\" rel=\"nofollow noopener\">science fiction<\/a>. However, such talking-thinking computers and droids would need to be fully capable of human-like thinking \u2014 to possess <a href=\"https:\/\/en.wikipedia.org\/wiki\/Artificial_general_intelligence\" target=\"_blank\" rel=\"nofollow noopener\">artificial general intelligence<\/a> (AGI) or artificial <a href=\"https:\/\/en.wikipedia.org\/wiki\/Superintelligence\" target=\"_blank\" rel=\"nofollow noopener\">superintelligence<\/a> (ASI); yet neither AGI nor ASI has been invented yet, and hardly will be in the foreseeable future.<\/p>\n<p>Anyway, if all the AI types are measured with the same yardstick and fully regulated, the whole IT industry and many related ones aren\u2019t going to fare well at all. For example, if we (Kaspersky) were ever required to obtain consent from all our training-set \u201cauthors\u201d, we, as an information security company, would find ourselves up against the wall. We learn from malware and spam, and feed the knowledge gained into our machine learning, while their authors tend to prefer to withhold their contact data (who knew?!). 
Moreover, considering that data has been collected and our algorithms have been trained for nearly 20 years now \u2014 quite how far into the past would we be expected to go?<\/p>\n<p>Therefore, it\u2019s essential for lawmakers to listen, not to marketing folks, but to machine-learning\/AI industry experts and discuss potential regulation in a specific and focused manner: for example, regulating multi-function systems trained on large volumes of open data, or decision-making systems with high levels of responsibility and risk.<\/p>\n<p>And new AI applications will necessitate frequent revisions of regulations as they arise.<\/p>\n<h2>Why regulate?<\/h2>\n<p>To be honest, I don\u2019t believe in a superintelligence-assisted Judgement Day within the next hundred years. But I do believe in a whole bunch of headaches from thoughtless use of the computer black box.<\/p>\n<p>As a reminder to those who haven\u2019t read our <a href=\"https:\/\/www.kaspersky.com\/blog\/machine-learning-nine-challenges\/23553\/\" target=\"_blank\" rel=\"noopener nofollow\">articles on both the splendor and misery of machine learning<\/a>, there are three main issues regarding any AI:<\/p>\n<ul>\n<li>It\u2019s not clear just how good the training data used for it were\/are.<\/li>\n<li>It\u2019s not clear at all what AI has succeeded in \u201ccomprehending\u201d out of that stock of data, or <a href=\"https:\/\/arxiv.org\/abs\/2004.07780\" target=\"_blank\" rel=\"nofollow noopener\">how it makes its decisions<\/a>.<\/li>\n<li>And most importantly \u2014 the algorithm can be misused by its developers and its users alike.<\/li>\n<\/ul>\n<p>Thus, anything at all could happen: from malicious misuse of AI, to unthinking compliance with AI decisions. 
Graphic real-life <a href=\"https:\/\/www.kaspersky.com\/blog\/ai-fails\/18318\/\" target=\"_blank\" rel=\"noopener nofollow\">examples<\/a>: fatal <a href=\"https:\/\/www.ntsb.gov\/investigations\/AccidentReports\/Reports\/HAR1903.pdf\" target=\"_blank\" rel=\"nofollow noopener\">autopilot errors<\/a>, deepfakes (<a href=\"https:\/\/www.kaspersky.com\/blog\/rsa2020-deepfakes-mitigation\/34006\/\" target=\"_blank\" rel=\"noopener nofollow\">1<\/a>, <a href=\"https:\/\/www.kaspersky.com\/blog\/deepfake-darknet-market\/48112\/\" target=\"_blank\" rel=\"noopener nofollow\">2<\/a>, <a href=\"https:\/\/www.kaspersky.com\/blog\/getting-ready-for-deep-fake-threats\/48193\/\" target=\"_blank\" rel=\"noopener nofollow\">3<\/a>) by now habitual in memes and even the news, a silly error in <a href=\"https:\/\/algorithmwatch.org\/en\/algorithm-school-system-italy\/\" target=\"_blank\" rel=\"nofollow noopener\">school teacher contracting<\/a>, the police apprehending a <a href=\"https:\/\/edition.cnn.com\/2021\/04\/29\/tech\/nijeer-parks-facial-recognition-police-arrest\/index.html\" target=\"_blank\" rel=\"nofollow noopener\">shoplifter but the wrong one<\/a>, and a <a href=\"https:\/\/www.reuters.com\/article\/amazon-com-jobs-automation\/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idINKCN1MK0AH?edition-redirect=in\" target=\"_blank\" rel=\"nofollow noopener\">misogynous AI recruiting tool<\/a>. 
Besides, any AI can be attacked with the help of custom-made adversarial data samples: <a href=\"https:\/\/www.kaspersky.com\/blog\/ai-fails\/18318\/\" target=\"_blank\" rel=\"noopener nofollow\">vehicles can be tricked using stickers<\/a>, one can extract <a href=\"https:\/\/venturebeat.com\/2020\/12\/16\/google-apple-and-others-show-large-language-models-trained-on-public-data-expose-personal-information\/\" target=\"_blank\" rel=\"nofollow noopener\">personal information<\/a> from GPT-3, and <a href=\"https:\/\/securelist.com\/how-to-confuse-antimalware-neural-networks-adversarial-attacks-and-protection\/102949\/\" target=\"_blank\" rel=\"noopener\">anti-virus or EDR can be deceived<\/a> too. And by the way, <a href=\"https:\/\/en.wikipedia.org\/wiki\/Pineapple_Water_for_the_Fair_Lady%23The_Anti-Aircraft_Codes_of_Al_Efesbi\" target=\"_blank\" rel=\"nofollow noopener\">attacks on combat-drone AI<\/a> described in science fiction don\u2019t appear all that far-fetched any more.<\/p>\n<p>In a nutshell, the use of AI hasn\u2019t given rise to any truly massive problems yet, but there is clearly a lot of potential for them. Therefore, the priorities of regulation should be clear:<\/p>\n<ol>\n<li>Preventing critical infrastructure incidents (factories\/ships\/power transmission lines\/nuclear power plants).<\/li>\n<li>Minimizing physical threats (driverless vehicles, misdiagnosing illnesses).<\/li>\n<li>Minimizing personal damage and business risks (arrests or hirings based on skull measurements, miscalculation of demand\/procurements, and so on).<\/li>\n<\/ol>\n<p>The objective of regulation should be to compel users and AI vendors to take care not to increase the risks of the mentioned negative things happening. And the more serious the risk, the more forceful that compulsion should be.<\/p>\n<p>There\u2019s another concern often aired regarding AI: the need for observance of moral and ethical norms, and to cater to people\u2019s psychological comfort, so to say. 
To this end, we see warnings given so folks know that they\u2019re viewing a non-existent (AI-drawn) object or communicating with a robot and not a human, and also notices informing that <a href=\"https:\/\/www.kaspersky.com\/blog\/neural-networks-data-leaks\/47992\/\" target=\"_blank\" rel=\"noopener nofollow\">copyright was respected during AI training<\/a>, and so on. And why? So lawmakers and AI vendors aren\u2019t targeted by angry mobs! And this is a very real concern in some parts of the world (recall <a href=\"https:\/\/www.theguardian.com\/world\/2016\/jan\/26\/french-taxi-drivers-block-paris-roads-in-uber-protest\" target=\"_blank\" rel=\"nofollow noopener\">protests against Uber<\/a>, for instance).<\/p>\n<h2>How to regulate<\/h2>\n<p>The simplest way to regulate AI would be to prohibit everything, but it looks like this approach isn\u2019t on the table yet. And anyway, it\u2019s not much easier to prohibit AI than it is to prohibit computers. Therefore, all reasonable regulation attempts should follow the principle of \u201cthe greater the risk, the stricter the requirements\u201d.<\/p>\n<p>The machine-learning models that are used for something rather trivial \u2014 like retail buyer recommendations \u2014 can go unregulated, but the more sophisticated the model \u2014 or the more sensitive the application area \u2014 the more drastic the requirements for system vendors and users can be. 
For example:<\/p>\n<ul>\n<li>Submitting a model\u2019s code or training dataset for inspection to regulators or experts.<\/li>\n<li>Proving the robustness of a training dataset, including in terms of bias, copyright and so forth.<\/li>\n<li>Proving the reasonableness of the AI \u201coutput\u201d; for example, its being <a href=\"https:\/\/fortune.com\/2023\/04\/17\/google-ceo-sundar-pichai-artificial-intelligence-bard-hallucinations-unsolved\/\" target=\"_blank\" rel=\"nofollow noopener\">free of hallucinations<\/a>.<\/li>\n<li>Labelling AI operations and results.<\/li>\n<li>Updating a model and training dataset; for example, screening out folks of a given skin color from the source data, or suppressing chemical formulas for explosives in the model\u2019s output.<\/li>\n<li>Testing AI for \u201cadversarial data\u201d, and updating its behavior as necessary.<\/li>\n<li>Controlling who\u2019s using specific AI and why. Denying specific types of use.<\/li>\n<li>Training large AI models, or models that apply to a particular area, only with the regulator\u2019s permission.<\/li>\n<li>Proving that it\u2019s safe to use AI to address a particular problem. This approach is very exotic for IT, but more than familiar to, for example, pharmaceutical companies, aircraft manufacturers and many other industries where safety is paramount. 
First would come five years of thorough tests, then the regulator\u2019s permission, and only then could a product be released for general use.<\/li>\n<\/ul>\n<p>The last measure appears excessively strict, but only until you learn about incidents in which <a href=\"https:\/\/www.nature.com\/articles\/538311a#\/b9\" target=\"_blank\" rel=\"nofollow noopener\">AI messed up treatment priorities<\/a> for acute asthma and pneumonia patients and tried to send them home instead of to an intensive care unit.<\/p>\n<p>The enforcement measures may range from fines for violations of AI rules (along the lines of European penalties for GDPR violations) to licensing of AI-related activities and criminal sanctions for breaches of legislation (as proposed in China).<\/p>\n<h2>But what\u2019s the right way?<\/h2>\n<p>What follows are my own personal opinions \u2014 but they\u2019re based on 30 years of active pursuit of advanced technological development in the cybersecurity industry: from machine learning to \u201csecure-by-design\u201d systems.<\/p>\n<p>First, we do need regulation. Without it, AI will end up resembling highways without traffic rules. Or, more relevantly, resembling the online personal data collection situation in the late 2000s, when nearly everyone would collect all they could lay their hands on. Above all, regulation promotes self-discipline among the market players.<\/p>\n<p>Second, we need to maximize international harmonization and cooperation in regulation \u2014 the same way as with technical standards in mobile communications, the internet and so on. Sounds utopian given the modern geopolitical reality, but that doesn\u2019t make it any less desirable.<\/p>\n<p>Third, regulation needn\u2019t be too strict: it would be short-sighted to strangle a dynamic young industry like this one with overregulation. 
That said, we need a mechanism for frequent revisions of the rules to stay abreast of technology and market developments.<\/p>\n<p>Fourth, the rules, risk levels, and levels of protection measures should be defined in consultation with a great many relevantly-experienced experts.<\/p>\n<p>Fifth, we don\u2019t have to wait ten years. I\u2019ve been banging on about the serious risks inherent in the Internet of Things and about vulnerabilities in industrial equipment for over a decade already, while documents like the <a href=\"https:\/\/digital-strategy.ec.europa.eu\/en\/library\/cyber-resilience-act\" target=\"_blank\" rel=\"nofollow noopener\">EU Cyber Resilience Act<\/a> first appeared (as drafts!) only last year.<\/p>\n<p>But that\u2019s all for now folks! And well done to those of you who\u2019ve read this to the end \u2014 thank you all! And here\u2019s to an interesting \u2013 safe \u2013 AI-enhanced future!\u2026<\/p>\n<input type=\"hidden\" class=\"category_for_banner\" value=\"premium-generic\">\n","protected":false},"excerpt":{"rendered":"<p>It\u2019s obvious already that AI needs regulating, but how? Here\u2019s Eugene Kaspersky telling us how he sees it. 
<\/p>\n","protected":false},"author":13,"featured_media":31682,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1789],"tags":[1140,3643,1876,3168,1083,422],"class_list":{"0":"post-31681","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-technology","8":"tag-ai","9":"tag-government-regulation","10":"tag-machine-learning","11":"tag-neural-networks","12":"tag-technologies","13":"tag-threats"},"hreflang":[{"hreflang":"en-za","url":"https:\/\/www.kaspersky.co.za\/blog\/ai-government-regulation\/31681\/"},{"hreflang":"en-in","url":"https:\/\/www.kaspersky.co.in\/blog\/ai-government-regulation\/25686\/"},{"hreflang":"en-ae","url":"https:\/\/me-en.kaspersky.com\/blog\/ai-government-regulation\/21105\/"},{"hreflang":"ar","url":"https:\/\/me.kaspersky.com\/blog\/ai-government-regulation\/10626\/"},{"hreflang":"en-us","url":"https:\/\/usa.kaspersky.com\/blog\/ai-government-regulation\/28341\/"},{"hreflang":"en-gb","url":"https:\/\/www.kaspersky.co.uk\/blog\/ai-government-regulation\/25985\/"},{"hreflang":"es-mx","url":"https:\/\/latam.kaspersky.com\/blog\/ai-government-regulation\/26358\/"},{"hreflang":"es","url":"https:\/\/www.kaspersky.es\/blog\/ai-government-regulation\/28846\/"},{"hreflang":"it","url":"https:\/\/www.kaspersky.it\/blog\/ai-government-regulation\/27781\/"},{"hreflang":"ru","url":"https:\/\/www.kaspersky.ru\/blog\/ai-government-regulation\/35317\/"},{"hreflang":"tr","url":"https:\/\/www.kaspersky.com.tr\/blog\/ai-government-regulation\/11457\/"},{"hreflang":"x-default","url":"https:\/\/www.kaspersky.com\/blog\/ai-government-regulation\/48220\/"},{"hreflang":"fr","url":"https:\/\/www.kaspersky.fr\/blog\/ai-government-regulation\/20625\/"},{"hreflang":"pt-br","url":"https:\/\/www.kaspersky.com.br\/blog\/ai-government-regulation\/21311\/"},{"hreflang":"de","url":"https:\/\/www.kaspersky.de\/blog\/a
i-government-regulation\/30173\/"},{"hreflang":"ja","url":"https:\/\/blog.kaspersky.co.jp\/ai-government-regulation\/33892\/"},{"hreflang":"ru-kz","url":"https:\/\/blog.kaspersky.kz\/ai-government-regulation\/26294\/"},{"hreflang":"en-au","url":"https:\/\/www.kaspersky.com.au\/blog\/ai-government-regulation\/31993\/"}],"acf":[],"banners":"","maintag":{"url":"https:\/\/www.kaspersky.co.za\/blog\/tag\/ai\/","name":"AI"},"_links":{"self":[{"href":"https:\/\/www.kaspersky.co.za\/blog\/wp-json\/wp\/v2\/posts\/31681","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.kaspersky.co.za\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.kaspersky.co.za\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.kaspersky.co.za\/blog\/wp-json\/wp\/v2\/users\/13"}],"replies":[{"embeddable":true,"href":"https:\/\/www.kaspersky.co.za\/blog\/wp-json\/wp\/v2\/comments?post=31681"}],"version-history":[{"count":1,"href":"https:\/\/www.kaspersky.co.za\/blog\/wp-json\/wp\/v2\/posts\/31681\/revisions"}],"predecessor-version":[{"id":31683,"href":"https:\/\/www.kaspersky.co.za\/blog\/wp-json\/wp\/v2\/posts\/31681\/revisions\/31683"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.kaspersky.co.za\/blog\/wp-json\/wp\/v2\/media\/31682"}],"wp:attachment":[{"href":"https:\/\/www.kaspersky.co.za\/blog\/wp-json\/wp\/v2\/media?parent=31681"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.kaspersky.co.za\/blog\/wp-json\/wp\/v2\/categories?post=31681"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.kaspersky.co.za\/blog\/wp-json\/wp\/v2\/tags?post=31681"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}