
An OpenAI safety executive delivered a rare speech at an event hosted by a prominent Chinese video-streaming platform, urging responsible development of artificial intelligence (AI).

At a recent event hosted by the Chinese streaming platform Bilibili, Lilian Weng Li, OpenAI’s vice-president of research for AI safety, described AI as a “double-edged sword” for humanity.

“It offers both convenience and challenges, making our role essential,” she said. “Let’s work together to build a smart, responsible AI companion.”

OpenAI, the San Francisco-based company behind ChatGPT, does not provide its services in China, including Hong Kong and Macau, even though mainland regulatory approval is not required in those territories. The restriction places China on a short list alongside U.S.-sanctioned countries such as Iran, North Korea and Russia. In July, OpenAI further tightened access by blocking its application programming interface (API) in China.

According to her LinkedIn profile, Weng holds an information systems degree from Peking University in Beijing and a doctorate from Indiana University Bloomington. Before joining the then-early-stage OpenAI as a research scientist in 2018, she worked at major U.S. tech companies including Facebook and Dropbox.

As one of the Chinese researchers contributing to AI advancements at OpenAI, she has spent nearly seven years in applied AI research roles. In August, she was promoted to vice-president of research and safety, and now leads the team focused on practical strategies for ensuring AI safety, a rising concern in both industry and government circles.

Speaking in Chinese during a Bilibili livestream, Weng emphasized that rigorous AI model training is essential to keeping AI safe for humanity.

“Like humans, AI experiences ‘growing pains’—it can develop biases from data or be misused in adversarial attacks,” Weng explained. “But through careful guidance and AI security research, we can make its development more secure and aligned with human needs.”

In her speech, she avoided mentioning OpenAI or the U.S.-China tech competition. Instead, she focused on the importance of creating a strong foundation for AI and emphasized the need for diverse, high-quality data in training.

Mainland media and institutions have been quick to applaud Chinese researchers at the world’s leading AI company. Earlier this year, a high school in Wuhan recognized former student Jing Li for her work on the text-to-video model Sora, calling her a “shining star on the international stage.” Ricky Wang Yu, another Chinese researcher, from Jiangsu province, also drew local media attention after becoming a “hot topic” among teenagers.

Chinese scholars are increasingly represented among AI researchers in the U.S., despite growing political scrutiny of Chinese scientists in the country. In 2022, 38% of top-tier AI researchers at U.S. institutions were from China, according to research by the think tank MacroPolo, up from 27% in 2019.

OpenAI does not release demographic data about its workforce, but a review of the list of contributors to its latest GPT-4 model shows that roughly a quarter of the over 140 contributors have Chinese surnames. While this doesn’t indicate citizenship or birthplace, Chinese media often celebrates individuals of Chinese heritage who achieve success in their fields abroad.