Regulation of top trending technologies: webinar follow-up

Hey-hey! We recently talked about modern technologies, their impact on everyday life, and possible changes in how they are regulated. Now we want to share some insights with you!

Is your privacy under attack?

 

a) Risks of using Big Data in your business

Better customer service, personalized marketing campaigns, fraud detection, lower risk for new ventures, higher profitability: this is just a short list of the opportunities that come with Big Data (BD).

But there is a fly in the ointment: these opportunities come with risks.

And the first legal risk of Big Data is related to privacy. Since BD is a huge volume of information, it inevitably contains a lot of data about our personal lives that can identify us. Where BD contains personal data, data protection laws must be taken very seriously: failure to do so risks a serious data breach, which can bring huge fines and damage the business's reputation. To minimize this risk, businesses that gather and analyze personal data using BD should comply with the applicable legislation.

 

For example, the GDPR (if information is gathered about persons located in the EU), or the HIPAA Security Rule, which sets standards for protecting personal health information in the US.
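
What can compliance look like in practice? One technical measure the GDPR explicitly encourages is pseudonymization: replacing direct identifiers before a dataset reaches the analytics team. Here is a minimal illustrative sketch in Python (the field names and the record are invented for the example):

```python
import hmac
import hashlib

# The key is held apart from the analytics environment, so analysts
# cannot reverse the pseudonyms on their own.
PSEUDONYMIZATION_KEY = b"keep-me-in-a-secrets-manager"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable keyed hash (HMAC-SHA256)."""
    return hmac.new(PSEUDONYMIZATION_KEY, value.encode("utf-8"),
                    hashlib.sha256).hexdigest()

def prepare_for_analytics(record: dict) -> dict:
    """Pseudonymize the identifier, keep the analytic fields as they are."""
    return {
        "customer_ref": pseudonymize(record["email"]),  # stable join key
        "age_band": record["age_band"],
        "purchases": record["purchases"],
    }

record = {"email": "jane@example.com", "age_band": "30-39", "purchases": 12}
print(prepare_for_analytics(record))
```

Note that pseudonymized data is still personal data under the GDPR (whoever holds the key can reverse the mapping), but a leak of the analytics dataset alone becomes far less harmful.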

Risk of discrimination. New ways of using BD may affect people's ability, for example, to get insurance, to access education, or, say, to become a Superman.

For example, one investigation showed that minority neighborhoods pay more for car insurance than white neighborhoods with the same risk levels. In another case, a man found his credit rating reduced because his bank determined that other people who shopped where he shopped had a poor repayment history.

As we can see, discrimination laws need to be considered, especially where the outcome of a Big Data analysis is to offer goods and services on a selective basis that could be discriminatory.

The next one is breach risk. A company that uses Big Data takes on this risk when it stores all of its data in one place to facilitate analysis: in that case, the consequences of a single data breach can be huge. We all remember the news headlines about data breaches.

For example, the Target breach in the US leaked credit card information, bank account numbers and other financial data. A US court ruled that everyone affected could claim up to $10,000 in compensation. Just imagine the losses!

There is always a risk of breach wherever data is processed, but implementing appropriate technical, organizational and legal measures to keep data safe is the way to minimize it.
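
One of those technical measures is encryption at rest, which makes a stolen database dump useless without the key. A minimal sketch, assuming the third-party `cryptography` package (`pip install cryptography`):

```python
from cryptography.fernet import Fernet

# Generate the key once and keep it in a key management service,
# never alongside the encrypted data itself.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a sensitive field before writing it to storage.
token = fernet.encrypt(b"card_number=4111111111111111")

# An attacker who copies the stored token learns nothing without the key.
print(token)

# The application decrypts only at the moment it actually needs the value.
print(fernet.decrypt(token))
```

Encryption does not remove the breach risk entirely (keys can leak too), but it turns "all data in one place" from a single point of failure into a much harder target.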

And what about regulation? There are no laws dealing with Big Data specifically yet. But there are the Guidelines on the protection of individuals with regard to the processing of personal data in a world of Big Data, published by the Consultative Committee of Convention 108. They are so far the only European-level instrument addressing Big Data, and an important step towards regulating its use, although the document provides general guidance only. The purpose of the Guidelines is to limit the risks of violating data subjects' rights by facilitating an effective application of the principles of Convention 108 in the Big Data context.

 

b) Are your privacy rights protected while you use devices connected to the Internet?

Put simply, the Internet of Things (IoT) means taking all the things in the world and connecting them to the internet. When something is connected to the internet, it can send information, receive information, or both. Should we be worried that such information is collected and might be used in ways that are not in our best interest?
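
To make "send information" concrete, here is a minimal sketch of how a smart fridge might report telemetry to its vendor's cloud. The endpoint URL and device ID are invented, and we use plain HTTPS via the `requests` package for simplicity (real devices often use dedicated protocols such as MQTT):

```python
import time

import requests  # pip install requests

# Hypothetical cloud endpoint the fridge reports to.
TELEMETRY_URL = "https://iot.example.com/fridge/telemetry"

def report(door_opened: bool, items_added: int) -> None:
    """Send one telemetry event (real firmware would batch and retry)."""
    requests.post(TELEMETRY_URL, json={
        "device_id": "fridge-42",
        "door_opened": door_opened,
        "items_added": items_added,
        "timestamp": time.time(),
    }, timeout=5)

# Note: the mere *frequency* of these calls can reveal whether anyone
# is home; a week of silence looks very different from hourly events.
report(door_opened=True, items_added=3)
```

Keep that last comment about message frequency in mind: it is exactly what the first situation below is about.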

Situations where data from your devices might be used:

1) A vulnerable source of data that may be collected and used as information needed to commit a crime.

For example, the information a smart fridge sends to its owner about the food stored inside allows an observer to understand that the owner is away from home (because the fridge's communication drops off, or, conversely, spikes after the owner buys a lot of fresh food). Misuse of this kind of information can facilitate criminal activity: for example, your house might be robbed.

2) Technology companies obtaining your information.

Sometimes technology companies obtain voiceprints. Almost everyone has faced the pop-up asking "Do you allow us access to your audio/voice?", and usually people just click yes in order to use the application.

But imagine a criminal getting access to a voice message in which you mention what time you are coming back home! You would be helping to rob your own house with your own hands (or, rather, your own mouth)! It can seem that the only way to protect yourself is not to use these devices at all.

 

What does the law say about it?

For example, the Californian "Information Privacy: Connected Devices" bill takes effect on January 1, 2020. It requires a manufacturer of connected devices to equip each device with reasonable security features that are appropriate to the nature and function of the device and to the information it may collect or transmit, and that are designed to protect the device and any information it contains from unauthorized access, use, modification, or disclosure. So, if a data protection breach happens due to improper security measures, the manufacturer will be held liable.

In the UK, all IoT devices must now come with unique passwords. This rule is a reaction to the situation back in 2016, when most devices shipped with identical default passwords, which made hackers' work much easier. Besides unique passwords, manufacturers of IoT devices must also provide: 1) a public point of contact, so that consumers know whom to contact with security questions about their devices; and 2) the minimum length of time for which the device will receive security updates.
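
On the engineering side, satisfying the unique-password rule can be as simple as generating a random credential for every unit at provisioning time instead of burning one default into the firmware. A minimal sketch (the device IDs are invented):

```python
import secrets

def provision_device(device_id: str) -> dict:
    """Give each manufactured unit its own random credential."""
    return {
        "device_id": device_id,
        # ~32 URL-safe characters of cryptographically secure randomness,
        # printed on the unit's label rather than hard-coded in firmware.
        "password": secrets.token_urlsafe(24),
    }

# Every unit off the production line gets a different password, so one
# leaked default can no longer unlock an entire product range.
for unit in ["fridge-0001", "fridge-0002", "fridge-0003"]:
    print(provision_device(unit))
```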

 

c) Do you still have privacy if your face can be easily recognized everywhere?

Face recognition is a system built on computer programs that analyze images of human faces in order to identify the people in them.
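
To show how accessible this technology has become, here is a minimal sketch using the open-source `face_recognition` Python library (the image filenames are placeholders). Each face is reduced to a numeric encoding, and encodings are compared for similarity:

```python
import face_recognition  # pip install face_recognition

# Placeholder filenames: a known reference photo and a new image to check.
known_image = face_recognition.load_image_file("passport_photo.jpg")
unknown_image = face_recognition.load_image_file("cctv_frame.jpg")

# Each detected face is reduced to a 128-number encoding of its features.
known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encodings = face_recognition.face_encodings(unknown_image)

# Compare every face found in the new image against the reference.
for encoding in unknown_encodings:
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    print("Same person!" if match else "Someone else.")
```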

Situations where your face might be recognized:

1) Medical uses – some facial recognition software providers claim that their products can help monitor blood pressure or pain levels by identifying key facial markers;

2) Validation of purchases – imagine never having to bring your card to the shop again: just step up to the counter, your face is scanned to identify you and your linked bank account, and your purchase is complete;

3) Catching criminals – a more realistic situation than the previous two: facial recognition can be used for general surveillance in combination with public video cameras, and it can be used in a passive way that does not require the knowledge, consent, or participation of the subject.

The last situation has two sides: on the one hand it is good, since criminals can be caught more easily; on the other hand, the faces of innocent people will be scanned as well.

For example, it was disclosed that the FBI had access to over 400 million face recognition images, most of them collected from Americans and foreigners under civil rather than criminal circumstances.

 

What about the legal side?

In the USA, the use of facial recognition is addressed by two acts: 1) the proposed "Commercial Facial Recognition Privacy Act of 2019" and 2) Illinois's Biometric Information Privacy Act (BIPA), Sec. 15(b).

The first document states that in order to use facial recognition technology there must be:

(A) the end user's affirmative consent, and

(B) the end user must be provided with:

(i) notice that facial recognition technology is present, and information on where the end user can find more information about the use of facial recognition technology by the controller; and

(ii) documentation that includes general information that explains the capabilities and limitations of the facial recognition technology.

The second document states that "no private entity may collect, purchase, or receive a person's or a customer's biometric identifier or biometric information, unless it first:

(1) informs the subject in writing that a biometric identifier is being collected or stored;

(2) informs of the specific purpose and length of term for which a biometric identifier or biometric information is being collected, stored, and used; and

(3) receives a written release executed by the subject of the biometric identifier or biometric information."
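
In product terms, both acts come down to the same engineering rule: no biometric processing may start before notice is shown and explicit consent is recorded. A minimal sketch of such a gate (the class and field names are our own invention, not taken from either act):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ConsentRecord:
    user_id: str
    notice_shown: bool      # the user saw what the technology does
    written_release: bool   # explicit affirmative consent was given
    purpose: str            # why biometrics are collected
    retention_days: int     # how long they will be stored
    granted_at: datetime

def may_process_biometrics(consent: Optional[ConsentRecord]) -> bool:
    """Gate every biometric operation behind recorded, informed consent."""
    return (consent is not None
            and consent.notice_shown
            and consent.written_release)

# Without a stored consent record, the feature simply stays off.
print(may_process_biometrics(None))  # False

consent = ConsentRecord("user-1", True, True, "purchase validation",
                        365, datetime.now())
print(may_process_biometrics(consent))  # True
```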

 

Regulation of the rise of machines

 

a) Robot vs. lawyer: which one is the better employee?

It is believed that soon people will be competing with robots on the job market. So do lawyers need to be worried as well?

We can highlight some areas that machine intelligence will change in the near future:

1) Legal search (mostly in case law). It is believed that machines will soon take over this function from lawyers and perform it much more effectively (for example, the Westlaw and Lexis services). And as hardware and software capacity improves, legal research will potentially become even more accurate at finding relevant case law.

2) Document generation. Machine intelligence will help to adapt document generation to individual situations. For example, clients of LegalZoom can already submit information about their assets and their intentions for transferring their estate to generate a draft of a will. At first, lawyers will still be heavily involved in reviewing and improving the first drafts that machines create. But the savings can be very significant even at this stage!

3) Prediction of case outcomes. Machine intelligence can ingest huge amounts of information and systematically extract patterns from it to estimate the likely outcome of a case (a toy sketch follows this list). For example, scientists from two UK universities have created a "computer judge" that predicts decisions of the European Court of Human Rights with 79% accuracy. The algorithm takes into consideration not only the legal evidence, but also the moral side!

This can also reduce the number of cases that go to trial, because when both parties agree on the likely cost of a case, it is more likely to settle.
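
We do not know the UK researchers' exact model, but at its core such a "computer judge" is a text classifier. Here is a toy sketch with scikit-learn: it turns the text of past case summaries into word-frequency features and learns which words correlate with each outcome (the four cases and their labels are invented for the example):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy data: summaries of past cases and their outcomes.
cases = [
    "applicant detained without judicial review for months",
    "surveillance of correspondence without a legal basis",
    "complaint about length of proceedings, domestic remedies unused",
    "claim rejected as manifestly ill-founded by lower courts",
]
outcomes = ["violation", "violation", "no_violation", "no_violation"]

# TF-IDF turns each text into weighted word features; the classifier
# then learns which words historically correlate with each outcome.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(cases, outcomes)

new_case = "applicant held in detention with no court review"
print(model.predict([new_case])[0])        # predicted outcome
print(model.predict_proba([new_case])[0])  # confidence per class
```

A real system would train on thousands of judgments, and it is precisely the confidence score that gives both parties a shared estimate of a case's likely outcome and cost when deciding whether to settle.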

 

Should lawyers now fear for their jobs?

One law firm has announced that it will employ an AI "robot lawyer" to work on bankruptcy cases currently handled by nearly fifty lawyers.

Our verdict is that lawyers are increasingly dependent on technology, but even if robots can perform basic legal tasks, most of a lawyer's functions will remain beyond robots' capabilities for the near future. For today, it is more accurate to talk about cooperation between human lawyers and AI to provide better services.

 

b) If AI commits a crime, can the software itself be held liable?

In the near future, AI robots are expected to be used in many different fields: health care, education, public safety and so on. At the same time, a system not created to cause harm could still do so by accident or error.

We have already witnessed serious incidents involving AI machines. For example, one incident took place at a car manufacturing plant, where a robot restarted unexpectedly and killed a female worker, crushing her skull.

Generally, intentional actions are punished more strictly than careless ones. So could an AI system be held criminally liable for its actions? How could we demonstrate that a non-human AI robot intended to commit a crime?

Let's discuss a real situation: two artists created a bot that randomly bought items from a "darknet" market using Bitcoin. Alongside other, lawful, purchases, the bot bought fake Diesel jeans and ten ecstasy pills. Are the artists liable for what the bot bought?

On the one hand, all objects created by humans are still treated as items of property: they can be targets of crime, but they cannot hold legal rights and interests and cannot be responsible for damage they cause.

For now, responsibility may be assigned to a robot's manufacturer under the Product Liability Directive 85/374/EEC. The Directive imposes strict liability on producers of defective products, including in the event of personal injury or damage to property. But open questions remain: for example, should the developer's responsibility be proportional to the robot's "degree of autonomy"? The situation becomes even more complicated if the AI is self-learning and self-improving.

 

So, is it possible to define a special legal status for AI machines and their liability?

For example, the United States has already granted rights and responsibilities to non-human entities, namely corporations, and in India dolphins have been given "non-human person" status based on their "highly developed social organization and engagement in a complex communication system". So it would not be incredible if a similar concept were applied to AI machines.

The European Parliament, in its Resolution of 16 February 2017 on Civil Law Rules on Robotics (European Parliament resolution 2015/2103(INL) with recommendations to the Commission on Civil Law Rules on Robotics), considers the idea of granting the autonomous system or robot itself the status of a legal entity, or "ePerson". Giving robots a separate legal status would shield manufacturers and users from liability, just as corporate entities shield shareholders and managers.

But still, there are many questions to consider: for example, what defense might an AI machine raise? What kinds of punishment could be applied to AI machines?

 

c) You have been hit by an autopilot. Who is the guilty party?

An automated vehicle is a vehicle considered to be 'driving itself': one operating in a mode in which it is not being controlled, and does not need to be monitored, by an individual.

 

Here is a case:

A woman was pushing a bicycle across a four-lane road when she was struck by an Uber test vehicle, which was operating in self-drive mode with a human safety driver sitting in the driving seat. She was taken to the hospital because of her injuries.

Usually, the most important aspect of a car accident lawsuit is who is at fault. So who is the guilty party in this case: the manufacturer, the "driver" itself, the insurer?

Obviously, if a self-driving car is involved in an accident, there is no actual driver who can be held responsible. Often, product liability rules apply, because you are dealing with a piece of machinery and a computer. If it were a human driver, legal theories of negligence would apply instead. The only way to hold a computer responsible is to sue the entity that created and/or programmed it.

 

In July 2018, the Automated and Electric Vehicles Act officially became law in the UK:

  • The basic premise is that where the vehicle is insured and is involved in an accident in Great Britain, liability lies with the insurer, and the insurer may recover damages from another party at fault, such as the manufacturer. If the vehicle is not properly insured, the owner is liable;
  • The owner or insurer is not liable where the person was negligent in allowing the vehicle to begin driving itself when it was not appropriate to do so.

Some US states require manufacturers of these vehicles to assume fault for each incident in which the automated driving system is at fault. Under this theory, if the automated driving system's "negligence" causes an accident, the manufacturer assumes that negligence, and the legal liability that comes with it.

The sad reality is that as long as self-driving cars require human assistance, those humans (whether sitting in the driver's seat or monitoring the vehicle remotely) will remain potentially liable if their negligence contributes to an accident.

 

And there will always be questions like: between a driver and a pedestrian, both at risk of death, who "deserves" to be saved by the electronic brain? And if there were four people in the car versus a single pedestrian in danger, what choice would we want the machine to make?

Self-driving technology has not yet been perfected to the point where the car can sense, react to, and avoid a sudden and unexpected danger. There is no unambiguous answer to these questions yet, but we hope one will be found soon.

 

To conclude, the trending technologies we have discussed have a great impact on our lives today, and "the rise of the machines" no longer seems a fantasy. The legal status of AI, robots and self-driving cars looks set to change in the near future. Who knows, maybe next time we will talk about the first robot punished for a crime?

You can watch our webinar here. More interesting themes are coming, so subscribe and stay tuned!

 

 
