AI should take on greater responsibility alongside effective operation

Artificial intelligence has achieved a series of breakthroughs in 2023. It is time for AI applications to be bound by responsibilities in their operation.
Microsoft engineers are studying AI technologies

Last summer, Microsoft’s Responsible AI experts and senior managers tried out an OpenAI model similar to the renowned ChatGPT. Years of research have led Microsoft engineers to conclude that AI can transform activities across many fields thanks to its ability to augment human thinking, reasoning, learning, and expression.

AI is therefore expected to raise productivity and reduce labor-intensive, repetitive work, which in turn stimulates economic growth. Its capacity to learn from large databases can also drive new medical advances, push past scientific limits, improve trading models, and provide stronger methods for cybersecurity and national defence.

However, malicious actors can deliberately exploit AI technologies to commit crimes by spreading misleading information or devising ever more cunning scams. AI developers and users must therefore always keep their own responsibilities in mind.

In 2017, Microsoft formed the Aether Committee, bringing together researchers, engineers, and policy experts to focus on the responsible use and operating principles of AI applications.

In 2019, Microsoft established the Office of Responsible AI to coordinate its AI governance work and introduced the first version of its Responsible AI Standard. The second version, released three years later, clearly outlines how to build an AI system through practical approaches that identify, assess, and mitigate potential harms before they occur, with control measures integrated into the initial design of every AI system.

Microsoft’s researchers, policy experts, and engineering teams are collaborating to study the potential negative impacts of AI. This work has underscored the importance of deep expertise in promoting responsible AI use and the constant need to update principles and standards as AI applications evolve.

In Vietnam, Microsoft is working with state agencies and ministries, including the Ministry of Information and Communications, to promote the use of AI in daily operations while helping these organizations develop their own codes of ethics and accountability for AI.

As new AI models continue to appear, it is necessary to focus on three goals: developing AI applications responsibly and ethically, with principles updated in step with new technologies; ensuring AI strengthens a country's international competitiveness and national defence; and ensuring AI serves the general public, not just particular groups.
