AI has significantly improved social productivity and made human life more comfortable and convenient. On the other hand, its emergence challenges many aspects of human society: ethics, privacy, security, and more. At the start of the new year, looking back at the ten most representative AI ethics events worldwide over the past two years, we believe that while letting the technology advance in moderation, we should find the right balance among privacy, security, and convenience.


"Artificial intelligence may be the end of the human race," warned Stephen Hawking, the world-famous theoretical physicist, who remained deeply wary of AI technology.


Over the past decade, thanks to improved algorithms, computing power, and communications technology, artificial intelligence has entered an unprecedented period of opportunity. From theory to practice, from the laboratory to industrialization, AI has sparked a global race integrating industry, academia, and research.


Artificial intelligence has significantly improved the efficiency of social production and made human life more comfortable and convenient. But as Hawking worried, its emergence challenges many aspects of human society, such as ethics, privacy, and security.


Especially in the last year or two, with the large-scale industrial application of AI, unprecedented tensions between people and artificial intelligence have gradually surfaced. Voices of worry and doubt have grown louder, and working out a predictable, restrained, well-behaved AI governance mechanism as soon as possible has become the central question of the coming AI era.


Megvii, for example, a technology company focused on AI and one of the pioneers of AI industrialization, has recognized the importance of facing up to the ethical problems the technology raises, and advocates that AI enterprises treat governance as a top priority.


In July 2019, we released the Artificial Intelligence Application Guidelines and formally established the AI Governance Institute in the same year, hoping to pay rational attention to AI incidents, research the problems behind them in depth, and, through constructive discussion across society, ultimately turn "AI for good" into practical action. With four phrases, "rational attention, in-depth research, constructive discussion, and persistent action," Yin Qi, co-founder and CEO of Megvii, expressed his determination to advocate AI governance.


At the beginning of 2020, the AI Governance Institute reviewed the ten most representative AI governance events worldwide, hoping to find solutions to the deep-seated problems behind them.


Remember the portrait called Edmond de Belamy? At Christie's in New York it eventually sold for $432,500, rivaling Picasso works on sale at Christie's at the same time. The artist was not a human being but an AI painter.


Today's AI seems accomplished in all the classical arts of music, chess, calligraphy, and painting. Beyond news writing, image generation, and video and music creation, it can become a virtual singer or swap faces with stars. It can also assist researchers in everything from astronomical exploration to product development.


However, while bringing efficiency and pleasure to human production and life, these clever AI systems also create thorny problems that not only test the bottom line of human ethics but also strain legal boundaries drawn in the pre-AI era.


A team of international patent lawyers led by Professor Ryan Abbott of the University of Surrey in the UK filed patent applications for inventions generated by the AI system DABUS, naming DABUS itself in the inventor column. (DABUS was created by Stephen Thaler, CEO of Imagination Engines.)


According to its creators, DABUS is a connectionist AI model. The system consists of two neural networks: the first generates novel ideas by perturbing its own neuronal connections; the second critiques and monitors those ideas, comparing them against existing knowledge and feeding the results back to the first network so that it converges on the most creative ones.
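The two-network mechanism described above can be sketched as a toy generate-and-critique loop. This is not DABUS itself: the part names, the "existing knowledge" set, and the novelty scoring rule below are all invented for illustration.

```python
import random

# Toy sketch of a generate-and-critique loop: one component proposes
# candidate combinations, another scores them against existing knowledge,
# and the feedback keeps the most "creative" (novel) idea.

KNOWN_IDEAS = {("cup", "handle"), ("lamp", "switch")}  # hypothetical prior knowledge
PARTS = ["cup", "handle", "lamp", "switch", "beacon", "fractal"]

def generate(rng):
    """The 'generator' network: propose a random pairing of parts."""
    return tuple(sorted(rng.sample(PARTS, 2)))

def critique(idea):
    """The 'critic' network: novel pairings score higher than known ones."""
    return 0.0 if idea in KNOWN_IDEAS else 1.0

def search(steps=100, seed=0):
    """Feedback loop: generate, critique, and keep the best-scoring idea."""
    rng = random.Random(seed)
    best, best_score = None, -1.0
    for _ in range(steps):
        idea = generate(rng)
        score = critique(idea)
        if score > best_score:
            best, best_score = idea, score
    return best, best_score

best, score = search()
```

In real systems the critic's feedback would adjust the generator's weights; here it merely selects among proposals, which is enough to show the division of labor.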


It is under this mechanism that DABUS "came up with" some interesting inventions, such as a new container for holding drinks and a signaling device that helps search-and-rescue teams find their targets.


Ryan Abbott's team filed patent applications in the UK, the US, and with the European Patent Office. The UK and the EPO rejected them. The UK Intellectual Property Office said it would not recognize DABUS as a qualified inventor because the machine was "not human," and therefore would not accept the applications. The European Patent Office likewise rejected them, saying they did not meet its requirement that the inventor designated in a patent application must be a human, not a machine.


Abbott, however, rejects the European Patent Office's reasoning. He draws an analogy with a student: when a trained student applies for a patent, the patent goes to the student, not to the person who trained them, so a trained machine should likewise be eligible to be regarded as an inventor. In fact, the team also filed patent applications for the DABUS inventions in the United States last year.


In response, Abbott explains that while U.S. law also stipulates that applicants must be "individuals," that language was designed to prevent corporations from being named as inventors; it did not contemplate inventions by AI and autonomously thinking machines.


It is worth noting that as AI increasingly participates in human intellectual creation, such as content creation and invention, and begins to play the roles of "creator" and "inventor," the international community has been focusing on its impact on intellectual property systems: copyright, patents, trademarks, trade secrets, and so on.


For patents, for example, the issues include how to define AI disclosure (covering both inventions made using AI and inventions developed by AI itself); how to determine a natural person's contribution to an AI invention; and whether AI-related inventions require new forms of intellectual property protection, such as data protection.


By comparison, the EU's series of AI initiatives tends to be stricter. Some have even advocated hard limits on AI, including a prohibition on granting legal personality to AI systems or robots.


On the night of August 30, 2019, an AI face-swapping app went viral on social media: with just one frontal face photo, users could replace a character in a video with themselves. As users flooded in, the app's servers reportedly burned through more than 2 million yuan in operating costs overnight.


The face-swapping wave on the internet, however, began earlier. In early 2018, an anonymous Reddit user named "deepfakes" tinkered with AI face swapping on his own computer, using open-source AI tools to transplant one person's face onto another in any video.


Within weeks, the internet was littered with crude pornography bearing celebrities' faces. Although Reddit quickly banned the deepfakes account, it was too late: the technology had taken root on the web.


Humanity's obsession with changing faces is ancient. From the face-changing of Sichuan opera to "The Painted Skin" in Strange Tales from a Chinese Studio, what lies behind these enduring classics may be the reverie that a new face offers a new identity, an escape from the shackles of reality.


Computer-based face-swapping technology has long existed, but this was the first time AI brought it to the masses, lowering both the technical and distribution barriers to almost nothing. Packaging it as a mobile app, as this AI face-swapping software did, "should be a first," some industry insiders said.

Since many high-frequency apps use mobile-number and facial-image registration and login, Chinese users worried that the AI face-swapping software could be exploited by criminals, who might use synthesized faces to pass facial-recognition payment checks, or impersonate family and friends on WeChat video calls without being detected, leading to fraud or even more serious crimes.


In addition, the app's handling of users' portrait rights is controversial, even a minefield. How should the reasonable use of other people's likenesses be regulated? How can users protect their own portrait rights from infringement and abuse?


At present, the AI face-swapping app has been blocked by WeChat in China over "security risks." Meanwhile, according to media reports, code with deepfake features has appeared in an Android build of the overseas version of a well-known short-video social app. Although it bars minors from the feature, allows users to swap only their own face, and prevents them from uploading their own source video, the move suggests the app's parent company is still willing to embrace the controversial technology.


There are also technical countermeasures. Existing face-swapping technology has defects: face-swapped video based on generative adversarial networks (GANs) usually cannot be produced in real time, so asking the other party to perform specified actions live during an interaction can aid detection.


On the legal side, on April 20, 2019, the second review draft of the personality-rights section of China's Civil Code added provisions regulating AI face swapping. In addition, the establishment and promotion of AI technical standards should be strengthened: by standardizing deep-learning networks, AI chips, and other technologies, supervision and guidance can be reinforced at the root.


As for foreign experience, Germany established a Data Ethics Committee in 2018, responsible for developing ethical standards and specific guidance for the federal government in the digital society. Last October the committee issued recommendations on data and algorithms, centered on creating a five-tier risk rating for digital service companies that use data, with different regulatory measures for companies at different risk levels.


Beyond that, a business itself should recognize that if people misuse its product, the business will have problems, so it is best to try to prevent misuse from happening in the first place.


More than 90 percent of press releases will be written by AI within the next 15 years, according to the US firm Narrative Science. But here is the problem: what if AI is also good at writing fake news?


On February 15, 2019, the AI research institute OpenAI showed off GPT-2, software that needs only a short prompt to write realistic-looking fake news.


OpenAI published an example of the software writing news. Researchers gave it the prompt: "A train carriage loaded with controlled nuclear material was stolen in Cincinnati today, and its whereabouts are unknown." From this, the software produced a seven-paragraph story, even quoting government officials, except that the information was entirely fabricated.


As a technological breakthrough, it is exciting. "The reason GPT-2 is exciting is that predicting text is seen as something of an 'uber-task' for computers, a challenge that, if overcome, would open the valve for intelligence," Ani Kembhavi, a researcher at the Allen Institute for Artificial Intelligence, told The Verge.
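To make the "predicting text" task concrete, here is a toy bigram language model. GPT-2 is a large neural network trained on web text; the tiny corpus and word-counting model below are purely illustrative of the core idea of generating text one predicted word at a time.

```python
import random
from collections import defaultdict

# A toy bigram language model: record which word follows which in a tiny
# corpus, then generate text by repeatedly predicting a plausible next word.

corpus = (
    "the train was stolen today . the train whereabouts are unknown . "
    "officials said the train was found ."
).split()

# The model's entire "knowledge": observed word-to-next-word transitions.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length, seed=0):
    """Generate up to `length` words, one predicted word at a time."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = following.get(words[-1])
        if not options:          # dead end: no observed continuation
            break
        words.append(rng.choice(options))
    return " ".join(words)

text = generate("the", 8)
```

The output is freshly generated rather than copied from the corpus, which also illustrates why generated text is hard to track down afterwards: there is no single source passage to match against.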


However, if GPT-2 can write fake news, in theory it could also produce hate speech and violent rhetoric, spam, and false social-media posts. And because GPT-2's text is generated on the fly rather than copied and pasted, such harmful text cannot be effectively tracked and cleaned up.


OpenAI stresses, on the one hand, that the tool is meant to serve policymakers, journalists, writers, artists, and similar groups, who are testing what GPT-2 can help them write. On the other hand, because such a powerful tool could be dangerous, OpenAI initially released only a smaller, less capable model.


Some researchers argue that when humans publish fake news they usually have a purpose, whereas text produced by a language model does not. Models like GPT-2 are designed to produce text that looks realistic, coherent, and on-topic; using them to mass-produce fake news is in practice not that simple.


Moreover, the researchers found that the model was good at many interesting kinds of generation, but was least good at the thing people feared: generating disinformation and other undesirable content.


Many technologists prefer to seek solutions in the technology itself. Grover, for example, is built on the premise that the best way to spot AI-generated fake news is to build an AI model that can write fake news itself.


Beyond face swapping, the most common sight wherever you go these days is the face-recognition camera: riding the subway, entering and leaving residential compounds and campuses, visiting parks, even getting paper in public toilets. Experts say the average Chinese person is captured by more than 500 cameras a day. But the rapid spread of cameras and facial recognition has also produced a stream of controversial reports, from face-data leaks and black-market cases to China's first facial-recognition lawsuit.


On April 27, 2019, Guo Bing, a distinguished associate professor at Zhejiang Sci-Tech University, bought an annual card to Hangzhou Wildlife World for 1,360 yuan. Under the contract, cardholders could enter the park an unlimited number of times within one year by verifying the annual card together with their fingerprint.


On October 17 of the same year, Hangzhou Wildlife World notified Guo Bing by text message: "The park's annual-card system has been upgraded to facial recognition for entry. The original fingerprint identification has been cancelled, and users who have not registered for facial recognition will not be able to enter the park normally." When Guo went to the park to verify this, staff confirmed the message was genuine and made clear that without facial-recognition registration he could neither enter the park nor obtain a refund.


Guo argued, however, that the upgraded facial-recognition system would collect his facial features and other personal biometric information, sensitive personal data that, once leaked, illegally provided, or abused, would gravely endanger the personal and property safety of consumers, himself included.


After negotiations failed, Guo Bing filed suit with the Fuyang District People's Court of Hangzhou on October 28, 2019, and the court has since formally accepted the case.


Facial recognition involves collecting important individual biometric data. If even information such as phone numbers, names, and addresses must be obtained with citizens' consent, how much more so facial information? Organizations and institutions must demonstrate the legality and necessity of collection before gathering such data.


Of course, no one denies that facial recognition has worked wonders. Netizens have tallied that since 2018, police relying on the technology have captured dozens of fugitives at the national tour concerts of Jacky Cheung, the "God of Songs"; it has also helped people find missing and lost relatives. But one basic reality cannot be ignored: the explosion of application scenarios has not been matched by institutional and technical safeguards for data collection, storage, and use.


"At present, the protection of people's biometric characteristics, such as irises, faces, and fingerprints, is not covered by existing law; the law does not specify who is responsible for protecting the information, where the boundaries of responsibility lie, or how the information may be used, processed, and destroyed," Zhu Wei, deputy director of the Communications Law Research Center at China University of Political Science and Law, told China Newsweek. Where the law lags, he argued, business ethics should provide the constraint: "Facial-recognition companies, as the source and beneficiaries of the technology, must give users the right to know the risks."


Microsoft has deleted MS Celeb, its largest facial-recognition database, after the Financial Times reported objections that it had scraped and indexed images under Creative Commons licenses without the authorization of many of the people pictured.


In China, more and more facial-recognition practitioners acknowledge that it is time to face up to the privacy and security problems the technology has created: to find an appropriate balance among privacy, security, and convenience, letting the technology advance in moderation while protecting citizens' privacy.


Among the most controversial applications of advanced technology is the classroom. In November 2019, a video of primary-school students in Zhejiang wearing attention-monitoring headbands drew wide attention and controversy.


In the video, children wear what are described as "brain-computer interface" headbands, which record their concentration in class and generate data and scores sent to teachers and parents.


In response, the headband's developer said in a statement that brain-computer interfaces are an emerging technology and not easily understood. The "score" mentioned in reports is the class's average concentration, not the per-student figure netizens speculated about, and the concentration reports compiled from the data are not provided to parents. Moreover, the headband need not be worn for long stretches; one training session a week is enough.


Beyond brain-computer interfaces, security-equipment manufacturers are also eyeing this scenario. In theory, a facial-recognition system can scan students' faces with a camera at set intervals, collect and analyze their posture and expressions, and judge whether they are listening attentively. More demanding than the familiar "face-scan" identification, this is seen as requiring a "deep understanding of the person."


Professor Kate Crawford, co-founder of AI Now, once told the BBC: "Emotion-recognition technology claims to read our inner emotional states by interpreting our micro-expressions, our voice and tone, even the way we walk. It is now widely used across society, from finding the perfect employee in job interviews to assessing patients' pain in hospitals to tracking which students are paying attention in class."


France's data-protection authority, CNIL, has declared the use of facial recognition in schools illegal for violating GDPR principles. In China, however, one view holds that the classroom is a public place, so no privacy is invaded. Another insists that using facial recognition to capture students' likenesses necessarily implicates their privacy rights. And even setting privacy aside, using facial recognition to monitor students' state touches on more fundamental values of education.


Where exactly is the boundary for facial recognition on campus? The Ministry of Education is drafting management rules for the technology. In earlier responses to media questions, officials said facial recognition enters campuses carrying both data-security and personal-privacy concerns; schools should be very cautious with students' personal information, collecting as little as possible, especially personal biometric information.


For example, media have reported that although such products are scientifically questionable, police in the United States and the United Kingdom are indeed using Converus, eye-detection software that examines eye movements and changes in pupil size to flag potential deception.


Oxygen Forensics, which sells data-extraction tools to agencies such as the FBI, Interpol, and London police, likewise said in July that it had added tools to its products to "analyse videos and images captured by drones and identify known terrorists."


On September 13, 2019, the California State Legislature passed a three-year moratorium bill prohibiting state and local law enforcement agencies from using facial-recognition technology on body cameras. If Governor Gavin Newsom signs it, the bill will take effect as law on January 1, 2020.


Its entry into force would make California the largest US state to ban the technology; some states, including Oregon and New Hampshire, already have similar bans.


Europe is more cautious about facial recognition. In May 2018, the European Union's General Data Protection Regulation (GDPR) took effect. It provides that internet companies that illegally collect personal information (including fingerprints, facial data, retinal scans, online location data, and so on) or fail to safeguard data face fines of up to 20 million euros or 4 percent of global annual turnover, whichever is higher, earning GDPR the label of the "strictest ever" regulation.
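The fine ceiling involved here (GDPR Article 83: up to 20 million euros, or 4% of worldwide annual turnover, whichever is higher) can be expressed as a small calculation. The turnover figures below are hypothetical, and real fines are set case by case; this computes only the legal maximum.

```python
# Sketch of the GDPR upper-tier fine cap: max of a fixed 20-million-euro
# floor and 4% of worldwide annual turnover.

def gdpr_fine_cap(annual_turnover_eur: float) -> float:
    """Return the maximum possible upper-tier GDPR fine in euros."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# For a smaller company, the fixed floor dominates (4% of 50M is only 2M).
small = gdpr_fine_cap(50_000_000)
# For a large company, the turnover term dominates (4% of 1B is 40M).
large = gdpr_fine_cap(1_000_000_000)
```

The "whichever is higher" rule is what gives GDPR teeth against large multinationals: the cap scales with revenue rather than stopping at a fixed amount.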


Although GDPR is not perfect legislation, it has had an international impact. It also puts the U.S. government in an awkward position, because U.S. companies can end up regulated by other countries. U.S. AI experts say this is a crucial moment to start enacting and enforcing regulation, and that how the federal government decides to regulate AI and its ancillary technologies will be one of the most important questions of the next five years.


In 2017, a Stanford study published in the Journal of Personality and Social Psychology sparked wide controversy. Drawing on more than 35,000 images of men and women from American dating sites, the study used deep neural networks to extract features from the images and attempted to infer a person's sexual orientation from a facial image.


On the "identify sexual orientation" task, human judges performed worse than the algorithm, with an accuracy of 61% for men's images and 54% for women's. When the software was given five images per subject, its accuracy was markedly higher: 91% for men and 83% for women.


In essence, such an algorithm still invites abuse of others' portrait rights and data privacy, with potentially grave consequences. "If we start judging people by their appearance, the results will be disastrous," said Nick Rule, a professor of psychology at the University of Toronto.


Sexual orientation is private. If AI can forcibly infer it from a photo, that is neither legal nor humane. Were the technology popularized, partners might use it to investigate whether they had been cheated on, and teenagers might use the algorithm to identify their peers; amid such controversy, its use against specific groups is even more unthinkable.


At present, guidance for the design and use of AI products includes the ethical standards of human rights, well-being, accountability, and transparency set out by the Institute of Electrical and Electronics Engineers (IEEE) in its Ethically Aligned Design guidelines, as well as the ethical framework of fairness, non-harm, openness, transparency, and accountability proposed by Partnership on AI, a non-profit created by more than 100 global tech companies including Amazon, Microsoft, Google, and Apple.


On the other hand, as some scholars have suggested from the perspective of application and communication, ethical and legal rules for AI use should be introduced, a monitorable AI platform established, and the accounts of all users who deploy unethical AI products controlled, so as to force technology companies to adjust and calibrate their R&D.