Robo-cops watching your every move and Matrix-style machines that teach themselves – scariest AI breakthroughs incoming

ARTIFICIAL intelligence may find a niche in industries like policing and teaching – and in some ways it’s already happening.

As businesses prioritize productivity and strive to cut costs, AI is seen as an easy way to streamline tasks and eliminate the reliance on human workers.

Artificial intelligence is already being used in industries like teaching and legal services – and its influence is only expected to grow in the near future (AFP)

However, there are downsides like the amplification of biases and inaccurate information. Plus, what happens when AI begins to learn from itself?

Here are a few of the biggest scientific breakthroughs that are expected to catch on in the near future.

Self-teaching AI

Media has popularized the idea of AI systems as obedient machines reliant on direction from human beings.

However, developers have learned it is possible for these systems to go mad.

That’s an acronym, MAD, meaning model autophagy disorder. It describes the process by which AI learns from its own output, producing increasingly nonsensical results.

The term “autophagy” comes from the Greek for “self-devouring,” aptly capturing the way a system trains itself on AI-synthesized content like a snake eating its own tail.

Researchers at Rice University and Stanford University were among the first to discover that models decline in the quality and diversity of their responses without a constant stream of new, real data.

Complete autophagy occurs when a model is trained solely on its own output, but machines can also train on data published by other AI programs.
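To make the feedback loop concrete, here is a minimal toy sketch in Python. It is not the Rice and Stanford experiment itself, just an illustrative assumption: the “model” is nothing more than a fitted bell curve, re-fitted generation after generation to its own samples.

```python
# Toy illustration of model autophagy: the "model" here is just a fitted
# bell curve (a mean and a standard deviation), re-trained each generation
# on samples drawn from the previous generation's model, with no real data.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: the "real" data the first model is fitted to.
real_data = rng.normal(loc=0.0, scale=1.0, size=20)
mu, sigma = real_data.mean(), real_data.std()

for generation in range(1, 101):
    # Each new model sees only what the previous model produced.
    synthetic = rng.normal(loc=mu, scale=sigma, size=20)
    mu, sigma = synthetic.mean(), synthetic.std()
    if generation % 20 == 0:
        print(f"generation {generation:3d}: mean={mu:+.3f}  std={sigma:.3f}")

# With nothing real to anchor it, estimation noise compounds: the fitted
# mean tends to wander and the spread tends to collapse over many
# generations, a toy analogue of the quality and diversity loss above.
```

It is only a caricature, but it captures why researchers stress the need for a steady supply of fresh, human-made data.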

This introduces a problem as more and more AI-generated content floods the web. It is increasingly likely that such material is being scraped and used in training datasets for other models.

It is difficult to gauge how much internet data is generated by artificial intelligence, but that amount is growing quickly.

NewsGuard, a platform that rates the credibility of news sites, has been tracking “AI-enabled misinformation” online.

By the end of 2023, the group had identified 614 unreliable AI-generated news and information websites, dubbed “UAINS.” As of this week, the number has swelled to 987.

The websites span a whopping 16 languages and carry generic names designed to make them look like legitimate news sites. Some contain political falsehoods, while others fabricate celebrity deaths and other events.

It is expected that the scale of the issue will only increase in years to come as more and more content is generated by machines.

Models may begin to show signs of MAD more frequently as they feed on this content, amplifying lies and distorting facts.

Researchers have identified a phenomenon known as model autophagy disorder, aka MAD, where AI systems train themselves on synthetic data (Getty)

The tipping point comes when people fail to distinguish between human output and that of machines, taking the misleading content as fact.

AI police

Proponents argue that AI could supplement certain areas of police work, from investigating crimes to answering 911 calls.

And some industry experts believe artificial intelligence will soon play a key role in crime analysis, reducing the strain on an industry facing widespread staff shortages.

A 2019 survey conducted by the Police Executive Research Forum revealed that 86% of police agencies reported an officer shortage.

A survey two years later found that hiring rates had fallen by 5% across agencies of all sizes, while retirement rates had risen by nearly 50%.

For this reason, some insiders are eager to adopt AI tech. So where could it fit in?

Industry experts say AI could be used in police work to reduce the reliance on human officers and streamline tasks like data analysis (Getty)

AI could highlight key information to be used in investigations, pulling details from social media logs or financial statements.

New technology could automate data collection and interpretation, leaving crime analysis teams with more time for complex tasks.

Similarly, the systems can be used for fraud detection, identifying patterns in documents that are indicative of illegal activities.
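As one illustration of the kind of pattern-spotting involved, the sketch below runs an off-the-shelf anomaly detector (scikit-learn’s IsolationForest) over a made-up table of transactions. The data and features are invented for the example and do not reflect any agency’s actual pipeline.

```python
# Hypothetical fraud-screening sketch: flag transactions whose pattern
# (amount, hour of day) looks unlike the rest, using IsolationForest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "financial statements": each row is (amount, hour of day).
normal = np.column_stack([
    rng.normal(80, 20, size=500),   # typical purchase amounts
    rng.normal(14, 3, size=500),    # mostly daytime activity
])
suspicious = np.array([[5_000, 3], [7_500, 4], [6_200, 2]])  # large, late-night
transactions = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(transactions)  # -1 marks outliers

flagged = transactions[labels == -1]
print(f"Flagged {len(flagged)} of {len(transactions)} transactions for review:")
print(flagged)
```

The point is not the specific algorithm but the division of labour: the software surfaces candidates, and human analysts decide what they mean.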

One of the most likely applications is facial recognition technology – a tool that is already widely in use.

Facial recognition software could become more widely used, identifying suspects and missing persons by comparing their faces to a database (Getty)

The software locates a face in surveillance footage and maps its key features like the distance between the eyes and the shape of the lips.

The machine then compares the facial template to a database of known faces to make a match.
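In practice, that comparison step usually amounts to a nearest-neighbour search over numerical face “templates”. The sketch below assumes such templates already exist (the vectors are random placeholders standing in for the output of a real face encoder) and shows only the matching logic.

```python
# Template-matching sketch: each face is a fixed-length embedding vector,
# and a match is the database entry with the highest cosine similarity
# above a threshold. The embeddings below are random placeholders, not
# the output of any real face-recognition model.
import numpy as np

rng = np.random.default_rng(1)
EMBED_DIM = 128

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "Database of known faces": name -> template vector.
database = {name: rng.normal(size=EMBED_DIM) for name in ("alice", "bob", "carol")}

# Probe template from surveillance footage: here, a noisy copy of one
# database entry, standing in for a new photo of the same person.
probe = database["bob"] + rng.normal(scale=0.1, size=EMBED_DIM)

MATCH_THRESHOLD = 0.6  # tuning this trades false matches against misses
best_name, best_score = max(
    ((name, cosine_similarity(probe, emb)) for name, emb in database.items()),
    key=lambda pair: pair[1],
)
if best_score >= MATCH_THRESHOLD:
    print(f"Possible match: {best_name} (similarity {best_score:.2f})")
else:
    print("No match above threshold")
```

Where that threshold is set, and how well the encoder performs across demographic groups, is exactly where the bias concerns below come in.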

Proponents say this could be used to identify suspects or missing persons, but concerns about privacy and civil liberties continue to present a challenge.

For starters, there’s the possibility of algorithmic discrimination, where an AI system amplifies harmful stereotypes.

The technology could intensify racial and gender biases, leading to unfair treatment.

Moreover, there’s the possibility that tools like facial recognition could be weaponized against journalists or political opponents.

With the positives come the negatives, and it is easy to see how facial recognition tech could be used to target journalists and political opponents (Getty)

AI lawyers

Analysts predicted the legal industry would see AI-driven job losses in 2023, but no such thing happened.

But is it too soon to assume the storm has blown over?

A report from Goldman Sachs estimated that nearly half of legal work could be automated.

And a study by researchers at Princeton University, the University of Pennsylvania, and New York University concluded that law is the industry most exposed to the new technology.

It is easy to see how chatbots that specialize in mimicking speech could intrude on legal work, especially as technology continues to get better at analysis and human-like language.

Steven Schwartz, a New York lawyer, admitted to using ChatGPT to research a brief, which cited six nonexistent court decisions (Getty)

OpenAI even announced earlier this month that it had trained a model called CriticGPT to catch errors in ChatGPT’s output.

“We found that when people get help from CriticGPT to review ChatGPT code they outperform those without help 60% of the time,” the company wrote in a press release. 

However, the technology has its flaws, notably its tendency to make up information, or “hallucinate” in AI parlance.

And while proponents insist these defects can be fixed, they carry serious weight in an industry that hinges on the interpretation and analysis of facts.

Schwartz, his partner, and their firm were all hit with sanctions and ordered to pay thousands of dollars (AFP)

Stoking the naysayers’ fears are examples like the case of New York lawyer Steven Schwartz.

Schwartz admitted in May 2023 that he had used ChatGPT to help research a brief, which cited six fake court decisions, in a client’s personal injury case.

Schwartz said at a hearing the following month that he “never” imagined ChatGPT could lie and did not intend to mislead the court.

Prompted in part by Schwartz’s case, a federal judge in Texas began requiring lawyers to certify either that they did not use AI to draft their filings or that a human had checked any AI-drafted text for accuracy.

A judge later imposed sanctions on Schwartz, his partner Peter LoDuca (whose name was on the brief), and their firm, ordering them to pay a $5,000 fine.

Legal tech startups are attempting to curtail hallucinations by creating software, like Casetext, that runs on top of chatbots.

These programs can comb through legal documents, draft deposition questions, and even propose contract revisions.
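One simple way such a layer might catch a Schwartz-style mistake is to cross-check every citation in a draft against a database of verified cases before anything is filed. The sketch below is an illustrative assumption, not how Casetext or any real product works.

```python
# Hypothetical hallucination check: extract case citations from a draft
# and flag any that do not appear in a verified database. The regex and
# the tiny "database" are placeholders for a real citator.
import re

VERIFIED_CITATIONS = {
    "550 U.S. 544",  # Bell Atlantic Corp. v. Twombly
    "556 U.S. 662",  # Ashcroft v. Iqbal
}

CITATION_RE = re.compile(r"\b\d{1,3}\s+(?:U\.S\.|F\.3d|F\. Supp\. 2d)\s+\d{1,4}\b")

def flag_unverified(draft: str) -> list[str]:
    """Return citations in the draft that are not in the verified set."""
    return [c for c in CITATION_RE.findall(draft) if c not in VERIFIED_CITATIONS]

# Example draft; the second citation is deliberately made up.
draft = (
    "Under 550 U.S. 544 the complaint must be plausible, "
    "and 123 F.3d 456 squarely supports dismissal here."
)
for citation in flag_unverified(draft):
    print(f"WARNING: could not verify {citation}; check before filing")
```

A check like this only confirms that a citation exists, not that it says what the draft claims, which is why human review remains the backstop.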

Morehouse College plans to adopt AI teaching assistants that will be available any time to answer students’ questions (Morehouse College)

AI professors

While concern about AI taking jobs is nothing new, it may be happening sooner than you think.

This fall, Morehouse College plans to introduce AI teaching assistants – three-dimensional avatars complete with digital whiteboards.

Unlike actual professors, these robots don’t have to eat, sleep, or take personal time, meaning they will be available for students 24/7.

Students will access the program through the Google Chrome browser and type their questions into a box or speak aloud.

The virtual assistant will return a verbal response in the student’s native language to mirror the classroom experience.
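The pipeline behind that experience, which takes a typed or spoken question, generates an answer, translates it and speaks it back, can be sketched with trivial stand-ins as below. None of these placeholders reflect Morehouse’s or Google’s actual implementation; they only show how the stages chain together.

```python
# Question-and-answer loop with every stage reduced to a stand-in:
# speech-to-text, the tutoring model, translation and text-to-speech
# are all placeholders, not any real service.
from dataclasses import dataclass
from typing import Optional

@dataclass
class StudentQuery:
    text: Optional[str] = None     # typed into the box
    audio: Optional[bytes] = None  # or spoken aloud
    native_language: str = "es"

def transcribe(audio: bytes) -> str:
    return audio.decode("utf-8")   # stand-in for speech-to-text

def answer_question(question: str) -> str:
    return f"Here is an explanation for: {question}"  # stand-in for the tutor model

def translate(text: str, target_language: str) -> str:
    return f"[{target_language}] {text}"  # stand-in for translation

def speak(text: str) -> str:
    return f"(spoken) {text}"      # stand-in for text-to-speech

def handle_query(query: StudentQuery) -> str:
    question = query.text if query.text is not None else transcribe(query.audio)
    reply = answer_question(question)
    return speak(translate(reply, query.native_language))

print(handle_query(StudentQuery(text="What is photosynthesis?")))
```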

The endeavor is spearheaded by Muhsinah Morris, a senior professor in education at Morehouse.

Morris has denied that students’ questions will be used to train any large language model.

The school has experimented with emergent technology in the past, launching the nation’s first so-called metaversity three years ago (Alamy)

Every professor is expected to adopt an AI assistant within three to five years.

And this won’t be the school’s first foray into cutting-edge tech.

Morehouse partnered with VictoryXR, a VR education software company, to launch the country’s first “Metaversity” in 2021.

Students donned Meta Quest VR headsets and attended virtual lessons as the Covid-19 pandemic raged.

According to Morris, this experiment made Morehouse the blueprint for virtual reality classrooms at other historically Black colleges and universities.

What are the arguments against AI?

Artificial intelligence is a highly contested issue, and it seems everyone has a stance on it. Here are some common arguments against it:

Loss of jobs – Some industry experts argue that AI will create new niches in the job market, and as some roles are eliminated, others will appear. However, many artists and writers insist the issue is an ethical one, as generative AI tools are being trained on their work and wouldn’t function otherwise.

Ethics – When AI is trained on a dataset, much of the content is taken from the Internet. This is almost always, if not exclusively, done without notifying the people whose work is being taken.

Privacy – Content from personal social media accounts may be fed to language models to train them. Concerns have cropped up as Meta unveils its AI assistants across platforms like Facebook and Instagram. There have been legal and legislative responses: the EU passed legislation to protect personal data in 2016, and similar laws are in the works in the United States.

Misinformation – As AI tools pull information from the Internet, they may take things out of context or suffer hallucinations that produce nonsensical answers. Tools like Copilot on Bing and Google’s generative AI in search are always at risk of getting things wrong. Some critics argue this could have lethal effects – such as AI dispensing incorrect health advice.
