Self-driving cars cruising the roads, stopping at red lights, and avoiding pedestrians, or virtual assistants automatically scheduling appointments around traffic and weather conditions, are no longer just a vision. AI technology already plays, and will increasingly play, an important role in our lives, as everyday objects and vehicles grow more and more intelligent. From a legal perspective, however, the emergence of such intelligent actors poses many challenges.
Privacy at risk
Just as humans consume knowledge to become more knowledgeable, AI systems must consume data to become intelligent, and that requires a huge database. A smart speaker, for example, needs to collect data about its user's location, voice, and habits in order to make suggestions or process commands. Such an intelligent device can easily gather users' personal data, from identifying data such as date of birth, phone number, and ID number to sensitive data such as voice, fingerprints, or health conditions. With millions, even billions, of users globally, companies amass enormous amounts of data, and without proper controls that data may well be used for other purposes.
In 2019, a class-action lawsuit was filed in the US. The plaintiffs alleged that Amazon had violated the Privacy Shield framework and the Children's Online Privacy Protection Act (COPPA): specifically, that the Alexa assistant recorded and permanently stored the voices of millions of students at schools for commercial use, without notifying them or obtaining their consent.
Possible discriminatory practices
“All men are created equal” is a universal value that we strive to uphold. However, when AI systems are applied in real life, there is a risk of discriminatory outcomes.
Because AI is not naturally intelligent, it must be taught by humans, who feed it data and label that data. If the data provided to an AI are discriminatory, its decisions will likely be discriminatory as well, whether by sex, race, or even age. A recent study showed that automatic facial analysis technology misclassifies dark-skinned women at an error rate of up to 34.7 percent, while the error rate for light-skinned men is only 0.8 percent, some 43 times lower.
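To make the mechanism concrete, here is a minimal Python sketch. It uses purely synthetic data; the group sizes, features, and decision patterns are assumptions chosen for illustration, not a model of any real system. A classifier trained on a pool dominated by one group learns that group's pattern and fails on the under-represented one:

```python
# Minimal sketch: skewed training data producing group-disparate error rates.
# All data is synthetic; group sizes and feature patterns are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, signal_col):
    """Each group's true label depends on a different feature column,
    standing in for populations the system captures differently."""
    X = rng.normal(size=(n, 5))
    y = (X[:, signal_col] > 0).astype(int)
    return X, y

X_a, y_a = make_group(5000, signal_col=0)  # heavily represented group
X_b, y_b = make_group(100, signal_col=1)   # under-represented group

# Train a single model on the pooled, imbalanced data.
model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                 np.concatenate([y_a, y_b]))

for name, X, y in [("group A", X_a, y_a), ("group B", X_b, y_b)]:
    print(f"{name}: error rate {1 - model.score(X, y):.1%}")
# The model learns the majority group's pattern almost perfectly while
# misclassifying the minority group at a rate close to chance.
```

Nothing in the code "intends" to discriminate; the disparity falls straight out of whose patterns dominate the training data, which is exactly the legal difficulty.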
Helping to corner the market and limit competition?
To compete with one another, enterprises must strive to offer more affordable, better-quality, and better-designed products, and consumers benefit from that. But businesses do not always behave this way. Instead, they may negotiate with one another to fix prices, divide up customers, or limit the quantity of goods supplied to the market. In this way, rather than competing, companies can still sell their goods at large profits, while consumers bear the cost.
With AI, sophisticated and complex algorithms can become powerful tools for carrying out this behavior. AI can collect and process information that helps companies implement collusive agreements, or let enterprises easily monitor each other's prices and strategies, cementing the ties between the parties to the collusion. In a more advanced scenario, the AIs themselves observe their rivals, make predictions, and set prices, leading to collusive behavior with little or no human intervention.
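A toy simulation hints at how the advanced scenario can arise without any explicit agreement. In this sketch, the cost, price cap, and "match the rival, then nudge upward" rule are deliberately simplified assumptions, not any real pricing algorithm:

```python
# Toy sketch of tacit algorithmic collusion. The cost, price cap, and
# pricing rule are illustrative assumptions, not a real pricing system.
COST, MONOPOLY = 4.0, 10.0

def next_price(rival_last):
    """Match the rival's last price, then probe slightly higher.
    Neither bot communicates; each only observes the other's price."""
    return min(MONOPOLY, rival_last + 1.0)

p1 = p2 = COST + 0.5  # both firms start near competitive, cost-based pricing
for t in range(8):
    p1, p2 = next_price(p2), next_price(p1)
    print(f"round {t}: firm1={p1:4.1f}  firm2={p2:4.1f}")
# Prices climb in lockstep and settle at the monopoly level: a
# collusion-like outcome reached with no agreement and no human input.
```

Because the bots never exchange a word, conduct like this is hard to capture with competition rules written around human agreements.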
Do lawmakers lose to AI?
Perhaps the toughest problem for lawmakers is finding the right equilibrium point. They seem to be walking a tightrope, with the promotion of innovation on one side and the risks of AI technology on the other. If regulations are too loose, risks arise for consumers and the market; if they are too strict, it becomes hard to bring technologies to market, and making AI smarter takes far more time and money. The second risk is the more likely one: according to some statistics, up to 90 percent of regulations focus only on tackling negative aspects.
Another challenge is the borderless nature of the technology. How will lawmakers respond when multiple countries are involved in the operation of a single AI system? A server in the US, for example, can process and manipulate a script programmed in Vietnam, and a business can place its supercomputer in the Caribbean to process computational data from Vietnam.
A further barrier comes from the technology itself: lawmakers and regulators must understand algorithms and input data, the elements that form AI, yet these are extremely technical subjects and a steep learning curve for legislators. Black-box algorithms embedded in AI systems, for instance, can generate outputs from input data through functions so complex that no one, not even their creators, can fully understand them.
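A tiny numeric sketch illustrates the black-box point. The network shape and random weights below are arbitrary assumptions, chosen only to show the structure of the problem: every parameter is fully visible, yet the overall input-to-output mapping offers no human-readable rule.

```python
# Sketch of the "black box" problem: all parameters are inspectable,
# yet the decision rule they encode is not humanly interpretable.
# The network shape and random weights are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(1)
weights = [rng.normal(size=(16, 16)) for _ in range(3)]  # three layers

def decide(x):
    """Stack of nonlinear layers; each layer entangles every input with
    every other, so no single weight explains the final answer."""
    for W in weights:
        x = np.tanh(W @ x)
    return "approve" if x.sum() > 0 else "deny"

applicant = rng.normal(size=16)  # stand-in for some input record
print("decision:", decide(applicant))
# We can print every number in `weights`, but there is still no short,
# human-readable reason for this particular approve/deny outcome.
```

If a regulator demanded an explanation of the decision above, even full access to the source code and parameters would not yield one, which is precisely the oversight problem.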
How complicated AI is!