Google’s ouster of a top A.I. researcher may have come down to this

Google’s large language A.I. BERT now powers its search results—and is a potential key to billions in future cloud computing revenue.

The recent departure of a respected Google artificial intelligence researcher has raised questions about whether the company was trying to conceal ethical concerns over a key piece of A.I. technology.

The departure of the researcher, Timnit Gebru, came after Google had asked her to withdraw a research paper she had coauthored about the ethics of large language models. These models, created by sifting through huge libraries of text, help create search engines and digital assistants that can better understand and respond to users.

Google has declined to comment about Gebru’s departure, but it has referred reporters to an email to staff written by Jeff Dean, the senior vice president in charge of Google’s A.I. research division, that was leaked to the tech newsletter Platformer. In the email Dean said that the study in question, which Gebru had coauthored with four other Google scientists and a University of Washington researcher, didn’t meet the company’s standards.

That position, however, has been disputed by both Gebru and members of the A.I. ethics team she formerly co-led.

More than 5,300 people, including over 2,200 Google employees, have now signed an open letter protesting Google’s treatment of Gebru and demanding that the company explain itself.

On Wednesday, Sundar Pichai, Google’s chief executive officer, told staff he would investigate the circumstances under which Gebru left the company and would work to restore trust, according to a report from news service Axios, which obtained Pichai’s memo to Google employees.

But why might Google have been particularly upset with Gebru and her coauthors questioning the ethics of large language models? Well, as it turns out, Google has quite a lot invested in the success of this particular technology.

Under the hood of all large language models is a special kind of neural network, A.I. software loosely based on the human brain, that was pioneered by Google researchers in 2017. Called a Transformer, it has since been adopted industrywide for a wide variety of language and vision tasks.
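
To make the idea concrete, here is a minimal sketch of the scaled dot-product attention operation at the heart of a Transformer, written in plain NumPy. It illustrates the mechanism only; it is not Google's implementation, and the toy dimensions are arbitrary.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each position attends to every other position, weighted by
    how similar its query vector is to the other positions' keys."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ V                               # weighted mix of value vectors

# Toy "sentence" of 4 tokens, each represented by an 8-dimensional vector.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
contextualized = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(contextualized.shape)  # (4, 8): one context-aware vector per token
```

Stacking many such attention layers, each with its own learned projections, is what lets these models relate every word in a passage to every other word.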

The statistical models that these large language algorithms build are enormous, taking in hundreds of millions, or even hundreds of billions, of variables. That scale makes them very good at accurately predicting a missing word in a sentence. But it turns out that along the way they pick up other skills too, such as answering questions about a text, summarizing the key facts in a document, or figuring out which pronoun refers to which person in a passage. These things sound simple, but earlier language software had to be trained separately for each of these skills, and even then it often wasn't very good.
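
The "predict a missing word" objective is easy to see in action with the publicly released bert-base-uncased checkpoint and the Hugging Face transformers library. This is a quick illustration on an open model, not the system Google runs in production.

```python
# Requires: pip install transformers torch
from transformers import pipeline

# Load an openly released BERT checkpoint together with its masked-word head.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT was trained to guess the token hidden behind [MASK].
for prediction in fill_mask("The capital of France is [MASK]."):
    print(f"{prediction['token_str']:>10}  score={prediction['score']:.3f}")
```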

The biggest of these large language models can do some other nifty things as well: GPT-3, a large language model created by San Francisco A.I. company OpenAI, encompasses some 175 billion variables and can write long passages of coherent text from a simple human prompt. Imagine writing just a headline and a first sentence for a blog post, and having GPT-3 compose the rest. OpenAI has licensed GPT-3 to a number of technology startups, plus Microsoft, to power their own services, including one company that uses the software to let users generate full emails from just a few bullet points.
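
GPT-3 itself is available only through OpenAI's paid API, but the same prompt-and-continue idea can be sketched with its openly released predecessor, GPT-2, via the same transformers library. The prompt below is an arbitrary example; nothing about it reflects how OpenAI's licensees build their products.

```python
# Requires: pip install transformers torch
from transformers import pipeline

# GPT-2, GPT-3's open predecessor, continues whatever text it is given.
generator = pipeline("text-generation", model="gpt2")

prompt = "Why large language models matter for cloud computing:\n"
result = generator(prompt, max_length=60, do_sample=True, num_return_sequences=1)
print(result[0]["generated_text"])
```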

Google has its own large language model, called BERT, that it has used to help power search results in several languages including English. Other companies are also using BERT to build their own language processing software.

BERT is optimized to run on Google’s own specialized A.I. computer processors, available exclusively to customers of its cloud computing service. So Google has a clear commercial incentive to encourage companies to use BERT. And, in general, all of the cloud computing providers are happy with the current trend toward large language models, because if a company wants to train and run one of its own, it must rent a lot of cloud computing time.

For instance, one study last year estimated that training BERT on Google’s cloud costs about $7,000. Sam Altman, the CEO of OpenAI, meanwhile, has implied that it cost many millions to train GPT-3.

And while the market for these large so-called Transformer language models is relatively small at the moment, it is poised to explode, according to Kjell Carlsson, an analyst at technology research firm Forrester. “Of all the recent A.I. developments, these large Transformer networks are the ones that are most important to the future of A.I. at the moment,” he says.

One reason is that the large language models make it far easier to build language processing tools, almost right out of the box. “With just a little bit of fine-tuning, you can have customized chatbots for everything and anything,” Carlsson says. More than that, the pretrained large language models can help write software, summarize text, or create frequently asked questions with their answers, he notes.
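
As a rough sketch of what "a little bit of fine-tuning" looks like in practice, the snippet below adapts the pretrained bert-base-uncased checkpoint to a sentiment classification task with the Hugging Face Trainer API. The IMDB dataset and the hyperparameters are placeholder choices for illustration, not a recipe from Carlsson or Google.

```python
# Requires: pip install transformers datasets torch
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Start from the pretrained checkpoint and bolt on a small two-class head.
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Any small labeled dataset works; IMDB movie reviews are used here purely as an example.
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

encoded = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-finetuned",
                           num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=encoded["train"].shuffle(seed=42).select(range(2000)),  # small slice for speed
    eval_dataset=encoded["test"].select(range(500)),
)
trainer.train()
```

The point is the division of labor: the expensive pretraining has already been paid for, and the customization step is small enough to run on rented cloud hardware in hours rather than weeks.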

A widely cited 2017 report from market research firm Tractica forecast that NLP (natural language processing) software of all kinds would be a $22.3 billion annual market by 2025. And that analysis was made before large language models such as BERT and GPT-3 arrived on the scene. This, then, is the market opportunity that Gebru's research called into question.

What exactly did Gebru and her colleagues say was wrong with large language models? Well, lots. For one thing, because they are trained on huge corpora of existing text, the systems tend to bake in a lot of human bias, particularly about gender and race. What's more, the paper's coauthors said, the models are so large and ingest so much data that they are extremely difficult to audit and test, so some of this bias may go undetected.
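
One crude way to see the kind of bias the coauthors describe is to probe a pretrained model's masked-word predictions with occupation templates. This is a toy probe for illustration only, not the auditing methodology of the paper.

```python
# Requires: pip install transformers torch
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Compare how strongly BERT associates each occupation with "he" versus "she".
for occupation in ["doctor", "nurse", "engineer", "receptionist"]:
    preds = fill_mask(f"The {occupation} said that [MASK] would be late.",
                      targets=["he", "she"])
    scores = {p["token_str"]: round(p["score"], 3) for p in preds}
    print(f"{occupation:>14}: {scores}")
```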

The paper also pointed to the adverse environmental impact, in terms of carbon footprint, of training and running such large language models on electricity-hungry servers. It noted that training BERT, Google's own language model, produced, by one estimate, about 1,438 pounds of carbon dioxide, roughly the emissions of a round-trip flight between New York and San Francisco.

The research also argued that the money and effort spent on building ever-larger language models diverts resources from work on systems that might actually “understand” language and learn more efficiently, the way humans do.

Many of the criticisms the paper levels at large language models have been raised before. The Allen Institute for AI, for instance, had published a paper examining racist and biased language produced by GPT-2, the forerunner to GPT-3.

In fact, the paper from OpenAI itself on GPT-3, which won an award for “best paper” at this year’s Neural Information Processing Systems Conference (NeurIPS), one of the A.I. research field’s most prestigious conferences, contained a meaty section outlining some of the same potential problems with bias and environmental harm that Gebru and her coauthors highlighted.

OpenAI, arguably, has as much—if not more—financial incentive to sugarcoat any faults in GPT-3. After all, GPT-3 is literally OpenAI’s only commercial product at the moment. Google was making hundreds of billions of dollars just fine before BERT came along.

But then again, OpenAI still functions more like a tech startup than the megacorporation that Google’s become. It may simply be that large corporations are, by their very nature, allergic to paying big salaries to people to publicly criticize their own technology and potentially jeopardize billion-dollar market opportunities.

This story has been updated to include reports that Google CEO Sundar Pichai has promised to investigate Gebru’s departure from the company.
