Microsoft recently laid off 10,000 employees, including its entire ethics and society team within its artificial intelligence organization. The cut leaves Microsoft without a dedicated team to ensure its AI principles are closely tied to product design, a gap that current and former employees say is worrying at the very moment the company is leading the charge to bring AI tools to the mainstream.

Microsoft still maintains an Office of Responsible AI, which sets the rules and principles that govern the company’s AI initiatives. In a statement, the company said that despite the layoffs it is still investing in responsibility work and increasing the number of people across its product teams and within the Office of Responsible AI, and that it remains committed to developing AI products and experiences safely and responsibly by investing in people, processes, and partnerships that prioritize this. Microsoft also credited the ethics and society team with trailblazing work on its responsible AI journey.

Microsoft’s ethics and society team played a critical role in ensuring that the company’s responsible AI principles were actually reflected in the products that shipped. Among other things, the team designed a role-playing game called Judgment Call to help designers envision potential harms from AI and discuss them during product development. The game is part of a larger responsible innovation toolkit that the team released to the public, intended to help protect users from potential harms.

The ethics and society team, which worked to identify risks posed by Microsoft’s adoption of OpenAI’s technology, was cut from roughly 30 employees to just seven in an October 2022 reorganization. In a meeting with the team afterward, John Montgomery, corporate vice president of AI, said that company leaders had instructed them to move swiftly. The reduction raised concerns among employees about the risks of Microsoft’s adoption of OpenAI’s technology and the speed at which it was being rolled out, and it raised questions about the company’s commitment to keeping its products safe and ethical.

On the call, Montgomery told the team that, because of those pressures, much of it would be moved to other areas of the organization. One employee asked him to reconsider, arguing that the team’s work was essential precisely because of the negative impacts AI could have on society. Montgomery declined, saying the pressures remained the same, though he assured the team that it would not be eliminated.

In that reorganization, most members of the ethics and society team, which had been responsible for ensuring the company’s products and services were ethically sound, were transferred to other departments within the company, leaving a much smaller crew to implement their ambitious plans. The move was part of a larger trend of decentralizing responsibility for ethical considerations, placing more of that burden on individual product teams, with the stated aim of keeping Microsoft’s products ethically sound while keeping pace with a fast-moving technology landscape.

Then, on March 6th, the remaining members of the team were informed during a Zoom call that they were being eliminated entirely. Employees said the decision leaves a foundational gap in the user experience and holistic design of the company’s AI products and exposes the business and its users to risk. The elimination came just five months after the October reorganization, leaving affected employees uncertain about their futures, and the company has yet to provide further details on the decision or its plans.

Tech giants are increasingly dedicating divisions to making their products more socially responsible. At their best, these teams anticipate potential misuses of technology and fix problems before products ship, and they spell out risks that could otherwise become legal headaches. But doing that job often means saying “no” or “slow down” to product teams, and the tension between the two sides has become increasingly visible.

Google offers a cautionary example. The company faced a major backlash in 2020 when it fired ethical AI researcher Timnit Gebru after she co-authored a paper critical of large language models; the episode triggered the departure of several top leaders in the department and damaged the company’s credibility on responsible AI issues. Microsoft’s ethics and society team, by contrast, had generally been supportive of product development, but as Microsoft focused on shipping AI tools more quickly than its rivals, the company’s leadership became less interested in the kind of long-term thinking the team specialized in.

Microsoft is investing heavily in OpenAI’s technology to gain a competitive edge against Google in search, productivity software, cloud computing, and other areas. With the relaunch of Bing, Microsoft has seen a surge in daily active users, roughly one third of whom are new to the search engine. Microsoft believes that every 1 percent of search market share it takes from Google would generate $2 billion in annual revenue, which helps explain why the company has so far invested $11 billion in OpenAI and is working to integrate the startup’s technology across its empire.
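
As a rough back-of-envelope reading of those figures (not a calculation from Microsoft), capturing about five to six points of search share would be enough for the added annual revenue to match the total reported investment: $11 billion ÷ $2 billion per point ≈ 5.5 points.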

Microsoft says it takes the risks posed by artificial intelligence seriously, and even after the cuts it maintains three other groups dedicated to the issue. Still, the elimination of the ethics and society team has raised concerns, because that team was focused specifically on anticipating the consequences of releasing OpenAI-powered tools to a global audience. The potential risks posed by AI are both known and unknown, and could be existential, which is why continued investment in teams dedicated to responsible AI development matters for ensuring the technology is used safely and ethically.

Microsoft recently launched Bing Image Creator, a tool that uses OpenAI’s DALL-E system to generate images from text prompts. While the technology has been met with enthusiasm, the ethics and society team identified a risk to artists’ livelihoods: the tool could enable anyone to easily copy an artist’s style. The team wrote a memo outlining the brand risks associated with the tool and suggesting ways to mitigate them.
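
Bing Image Creator itself is a consumer product rather than a scriptable service, but the same DALL-E family of models is exposed through OpenAI’s public Images API. As a rough illustration of the text-to-image flow the article describes (the model name, prompt, and client setup are assumptions for this sketch, not details from the reporting):

```python
# Minimal sketch of a text-to-image request against OpenAI's Images API,
# which serves the DALL-E models that also power Bing Image Creator.
# Assumes the `openai` Python package (v1+) is installed and that an
# OPENAI_API_KEY environment variable is set; the model name and prompt
# below are illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

result = client.images.generate(
    model="dall-e-3",                        # illustrative model choice
    prompt="a lighthouse at dawn, watercolor",
    n=1,                                     # dall-e-3 returns one image per call
    size="1024x1024",
)

print(result.data[0].url)  # URL of the generated image
```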

In testing Bing Image Creator, the team found it was difficult to differentiate the generated images from the original works of art whose styles they imitated. The memo warned that this could damage the brand of the artist and their financial stakeholders, as well as generate negative PR for Microsoft, and urged the company to act to protect both its own brand and artists’ work.
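
The reporting does not say how testers compared generated images with the originals. One common automated way to flag near-duplicates is perceptual hashing, sketched here with the open-source Pillow and imagehash packages; the file paths are placeholders, not real test data:

```python
# Hypothetical sketch: flag near-duplicate images with perceptual hashing.
# A small Hamming distance between two perceptual hashes means the images
# are visually very similar. Requires the Pillow and imagehash packages.
from PIL import Image
import imagehash

def hash_distance(path_a: str, path_b: str) -> int:
    """Hamming distance between the perceptual hashes of two images."""
    return imagehash.phash(Image.open(path_a)) - imagehash.phash(Image.open(path_b))

if __name__ == "__main__":
    # Placeholder file names, not actual Bing Image Creator test data.
    d = hash_distance("original_artwork.png", "generated_image.png")
    # With the default 64-bit phash, distances near 0 indicate
    # near-identical images; larger values mean visibly different ones.
    print(f"perceptual hash distance: {d}")
```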

OpenAI had recently updated its terms of service to give users full ownership rights over images created with DALL-E, its AI image generator. The move worried Microsoft’s ethics and society team, as it could encourage the replication of copyrighted works. To mitigate the risk, the team proposed blocking Bing Image Creator users from using the names of living artists as prompts, and creating a marketplace where an artist’s work could be sold to anyone who searched for their name, so that artists would be compensated rather than imitated without permission.
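
The reporting describes that first mitigation only at a high level. A minimal sketch of what a blocklist-based prompt filter might look like, assuming a simple substring match (the artist names and function here are hypothetical, not Microsoft’s implementation):

```python
# Hypothetical sketch of the proposed mitigation: reject image prompts
# that mention a blocked (living) artist. The blocklist entries and the
# matching rule are illustrative; a production filter would need a
# curated, regularly updated list and more robust name matching.
import re

BLOCKED_ARTIST_NAMES = {
    "jane example",      # placeholder entries, not real artists
    "juan ilustrador",
}

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt mentions any blocked artist name."""
    normalized = re.sub(r"\s+", " ", prompt.lower()).strip()
    return not any(name in normalized for name in BLOCKED_ARTIST_NAMES)

if __name__ == "__main__":
    print(is_prompt_allowed("a castle at dusk, oil painting"))         # True
    print(is_prompt_allowed("a castle in the style of jane example"))  # False
```

Substring matching is deliberately simplistic here: real prompts can evade such a filter through misspellings or by describing a style without naming the artist, which is one reason prompt blocking alone is a weak safeguard.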

In February 2023, Getty Images filed a lawsuit against Stability AI, maker of the AI art generator Stable Diffusion, accusing it of improperly using more than 12 million images to train its system. The suit echoed concerns Microsoft’s own AI ethicists had raised over the previous year: that few artists had consented to their works being used as training data, and that many remained unaware generative tools could produce variations of their work in seconds. Despite these concerns, Microsoft launched Bing Image Creator into test countries without implementing the strategies its ethicists had suggested. Microsoft says the tool was modified to address the concerns, but the legal questions remain unresolved.
