In the lead-up to this launch, OpenAI has been working with early adopters to troubleshoot the tool. The first wave of users has produced a steady stream of surreal and striking images: mash-ups of cute animals, pictures that mimic the style of real photographers with eerie accuracy, mood boards for restaurants and sneaker designs. That has allowed OpenAI to identify the strengths and weaknesses of its tool. “They’ve been giving us a ton of really good feedback,” says Joanne Jang, product manager at OpenAI.
OpenAI has already taken steps to control what kinds of images users can produce. For instance, people cannot generate images that show well-known figures. Ahead of this commercial launch, OpenAI has also addressed another major problem that early users flagged. The version of DALL-E released in April often produced images reflecting clear gender and racial bias, such as images of CEOs and firefighters who were all white men, and teachers and nurses who were all white women.
On July 18, OpenAI announced a fix. When users ask DALL-E 2 to generate an image that includes a group of people, the AI now draws on a dataset of samples that OpenAI says is more representative of global diversity. In its own testing, OpenAI found that users were 12 times more likely to report that DALL-E 2’s output included people of diverse backgrounds.