Making sense of science: Using LLMs to help reporters understand complex research

During the news gathering process, reporters often struggle to understand complex, jargon-heavy documents, particularly in fields like science and technology. For instance, a tech reporter trying to write a story about a new AI study may have difficulty making sense of specialized terms and concepts from the field. This creates friction while reading the documents to determine which angle on the study may be newsworthy.

To address this challenge, we propose using large language models (LLMs) to identify and define jargon terms within scientific abstracts. These models can be leveraged to wrangle and transform textual input into different formats (e.g., news headlines or summaries based on news article text), and may lend themselves well to the use case of simplifying complex terms, especially if a definition is available but needs rewriting into more easily understood words. Such systems could also support the contextualization of news articles for readers, as demonstrated by BBC News Labs a few years ago.

This blog post walks through how we built and conducted a preliminary evaluation of a prototype to test out this idea, and describes what we learned in the process.[1] We hope this can help others engaged in similar projects, and extend to systems that support journalists’ sense-making of documents in other domains, such as complex legal or medical documents.

Using retrieval-augmented generation

Retrieval-augmented generation (RAG) is an approach that can enable this kind of simplification of complex terms — it allows a user to input a “query” (i.e., a prompt, a question, or even simply a jargon term), which is then matched against text in a reference document that can help derive a simplified definition. For instance, a jargon term like “precision metric” from a scientific abstract about a novel AI model will likely appear in several sentences within the text of the whole scientific article. (Note: In this post we use “article” to refer to a scientific article rather than a news article.) RAG relies on finding these matching sentences or text snippets and supplying them to an LLM, along with a prompt instructing the model how to use the snippets — e.g., to create a summary, or to generate a readable definition.
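To make that flow concrete, here is a minimal sketch in Python of what the definition-generating step of such a pipeline could look like. The function name, prompt wording, and model choice are illustrative assumptions rather than our exact implementation.

```python
# Minimal RAG sketch: take snippets retrieved for a jargon term and ask an LLM
# to turn them into a plain-language definition. Helper names are hypothetical.
from openai import OpenAI

client = OpenAI()

def define_term(term: str, snippets: list[str]) -> str:
    """Generate a reader-friendly definition of `term` from retrieved snippets."""
    context = "\n\n".join(snippets)
    prompt = (
        f"Using only the context below, define the term '{term}' in one or two "
        f"plain-English sentences for a non-expert reader.\n\nContext:\n{context}"
    )
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # assumption: any capable chat model works here
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,
    )
    return response.choices[0].message.content

# `snippets` would come from a retrieval step over the article's full text
# (see the similarity-based retrieval sketch later in this post).
```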

Two assumptions we made when designing this prototype with a RAG approach were that: (1) the retrieved snippets will actually be informative for creating a definition of the jargon term, and (2) even if the RAG output has minor errors, a human will actually verify the supplied definition if it elicits their interest.

By employing such a RAG approach with GPT-4, we designed a prototype system to provide reporters with clear, concise, and accurate definitions of complex terms. We also designed the prototype to personalize the identification of jargon terms based on a reader’s knowledge level, making it easier for journalists with differing levels of scientific knowledge to parse these articles (e.g., a general-interest reporter might need something different than a seasoned science reporter working her specific beat). This prototype was constructed and evaluated in the lab using a sample of 64 peer-reviewed articles published on arXiv Computer Science in March 2024.

Preview of the prototype

The prototype is built as a web app with a list of scientific articles, displaying each article’s metadata and abstract. Jargon terms are highlighted within the abstracts, allowing users to hover over them for instant definitions. Additionally, users can click to access a comprehensive list of all jargon terms from the abstract, along with their definitions. A search bar allows users to find articles of interest, and filters allow users to sift through specific categories of articles as well (for now we focus on topics in AI, Human-Computer Interaction, and Computing and Society).

Building this prototype entailed work on two main problems:

  1. Identifying the jargon, based on a reader’s expertise
  2. Defining the jargon, based on the text of the scientific article

Identifying the jargon

We tackled the challenge of identifying jargon terms in scientific articles using GPT-4, with a prompt template that allowed users to specify their level of scientific expertise. We used this prompt to generate a tailored list of jargon terms for that user.
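As a rough illustration, a prompt of this kind might be assembled as below. The template wording, the JSON output format, and the helper name are hypothetical; our actual prompt template differs in its details.

```python
# Sketch of a jargon-identification prompt that folds in the reader's own
# description of their expertise. Assumes the model returns a JSON array.
import json
from openai import OpenAI

client = OpenAI()

def identify_jargon(abstract: str, expertise: str) -> list[str]:
    prompt = (
        "You are helping a reporter parse a scientific abstract.\n"
        f"The reporter describes their background as: {expertise}\n\n"
        "List every term in the abstract below that this reporter would likely "
        "consider jargon. Return a JSON array of strings and nothing else.\n\n"
        f"Abstract:\n{abstract}"
    )
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return json.loads(response.choices[0].message.content)

terms = identify_jargon(
    abstract="...",  # the article's abstract text
    expertise="General-interest reporter; comfortable with basic statistics, new to machine learning.",
)
```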

This was evaluated with two annotators who manually identified jargon terms in the articles in our sample dataset, and who also described their own levels of expertise in natural language within the prompt templates. The overlap between the resulting sets of jargon terms — for each individual article and each individual annotator — was used to understand how well GPT-4 performs at this task. Worth noting here is that the two annotators had differing levels of scientific expertise, and this was visible in how one annotator consistently identified more jargon terms per abstract than the other.

We find that GPT-4 shows promise in identifying jargon terms, but it tends to identify more terms than the human annotators. It captured most human-identified terms, but it also labeled many words as jargon that the annotators did not consider jargon. In information-retrieval terms, this equates to high recall but low precision.
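For readers less familiar with these metrics, a toy example makes the framing clear: recall is the share of human-identified terms the model also found, while precision is the share of model-identified terms the humans agreed with. The terms below are made up for illustration.

```python
# Toy recall/precision illustration for a single abstract.
model_terms = {"precision metric", "transformer", "ablation", "baseline", "dataset"}
human_terms = {"precision metric", "transformer", "ablation"}

true_positives = model_terms & human_terms
recall = len(true_positives) / len(human_terms)     # 3/3 = 1.0  -> high recall
precision = len(true_positives) / len(model_terms)  # 3/5 = 0.6  -> lower precision

print(f"recall={recall:.2f}, precision={precision:.2f}")
```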

Interestingly, GPT-4 does maintain the relative difference in expertise between the two annotators, identifying more terms as jargon for the less expert annotator. This suggests potential for personalization based on readers’ knowledge levels, and aligns with similar recent findings as well.

Defining the jargon

Once the jargon terms were identified, the next challenge was to provide clear, concise, and accurate definitions. To support our RAG approach, we sourced relevant snippets of text for a given jargon term from the complete text of the source article. These were obtained by calculating the similarity of the jargon term to individual snippets of text from the article and returning the snippets that exceeded a certain similarity threshold. We used cosine similarity to capture semantic relatedness and chose a threshold of 0.3 (low to medium) to capture a wide range of relevant snippets while excluding irrelevant ones.
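A minimal sketch of this retrieval step, assuming an off-the-shelf embedding model and the 0.3 threshold described above, could look like the following. The specific embedding model and helper names are illustrative, not necessarily what we used.

```python
# Sketch of the snippet-retrieval step: embed the jargon term and candidate
# snippets, keep the snippets whose cosine similarity clears a threshold.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data])

def retrieve_snippets(term: str, snippets: list[str], threshold: float = 0.3) -> list[str]:
    term_vec = embed([term])[0]
    snippet_vecs = embed(snippets)
    # cosine similarity between the term and every snippet
    sims = snippet_vecs @ term_vec / (
        np.linalg.norm(snippet_vecs, axis=1) * np.linalg.norm(term_vec)
    )
    return [s for s, sim in zip(snippets, sims) if sim >= threshold]
```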

We then used GPT-4 [2] with a query prompt to generate simple and understandable definitions from the retrieved snippets. For comparison, we also generated another set of definitions by providing only the abstract to GPT-4 and asking it to infer a definition of a given jargon term from the text of the abstract. Annotators rated both definitions for a given term on their accuracy, and then recorded a preference for one or the other (or a tie) based on their clarity and informativeness. Pairwise preferences such as these are often used in LLM evaluations.

The accuracy is calculated as a percentage over all definitions generated by a model. The preferences are measured by calculating a win percentage, based on the number of times one approach (Abstract vs. RAG) is preferred over the other.
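As a toy illustration of how these two metrics are tallied (the ratings below are made up, not our actual data):

```python
# Accuracy: share of definitions judged correct for each method.
# Win percentage: share of pairwise comparisons a method wins (ties counted separately).
ratings = [
    {"abstract_ok": True,  "rag_ok": True,  "preferred": "abstract"},
    {"abstract_ok": True,  "rag_ok": False, "preferred": "abstract"},
    {"abstract_ok": True,  "rag_ok": True,  "preferred": "tie"},
    {"abstract_ok": False, "rag_ok": True,  "preferred": "rag"},
]

accuracy_abstract = sum(r["abstract_ok"] for r in ratings) / len(ratings)
accuracy_rag = sum(r["rag_ok"] for r in ratings) / len(ratings)
win_abstract = sum(r["preferred"] == "abstract" for r in ratings) / len(ratings)
win_rag = sum(r["preferred"] == "rag" for r in ratings) / len(ratings)

print(accuracy_abstract, accuracy_rag, win_abstract, win_rag)
```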

Surprisingly, GPT-4 with the abstracts performs a little bit better than GPT-4 with RAG over the article text, both in terms of accuracy (96.6% vs. 93.5%) and win percentage (29.2% vs. 27.8% — the rest were ties). This suggests that more context from the scientific article did not necessarily lead to higher accuracy or better understandability (i.e. our first assumption about using RAG here as described above was not met). A deeper investigation of the retrieved snippets and their actual relevance to the jargon term may help understand if this is an issue with the quality of the context, or if there may be other causes such as the similarity threshold we used. To apply RAG successfully, it’s essential to explore and test different parameters. Without this kind of careful experimentation, RAG on its own might not provide the desired results.

It may also be the case that the large size of GPT-4’s pre-training dataset enables it to draw from other sources to generate definitions. This can be as much a concern as a benefit, though — it can make it harder to override irrelevant information from the pre-training data, or to avoid limitations stemming from the model’s training cutoff date.

We also found that the effectiveness of these approaches varied based on the reader’s expertise. For instance, the less experienced annotator found similar value in both methods (higher tie percentage), while the more expert reader noticed more differences. Further evaluation with a larger set of annotators may help to replicate and understand these differences.

Closing notes

This blog post demonstrates a small, scoped experiment in using GPT-4 with RAG to generate definitions of jargon terms, in service of improving the experience of reading complex documents during the news gathering process. We find that GPT-4 performs fairly well at identifying jargon based on a reader’s expertise, although it does tend to over-predict a bit.

Contrary to expectations, we also find that GPT-4 with RAG over an article’s text performs a little worse, in terms of both accuracy and clarity/informativeness of the generated definitions, than GPT-4 given just the article’s abstract as context. This finding raises further questions worth considering when using LLMs to support text generation and transformation in the newsroom, including how to evaluate RAG-oriented systems. In addition, our exploration of including expertise in the prompt to identify jargon suggests a potentially valuable pattern for journalists seeking to emulate this experience in their own workflow: providing a natural language description of their expertise and knowledge in the domain can help steer the model to be more helpful in identifying jargon.

Ultimately, the utility of such a prototype is contingent on how users actually incorporate it into their workflows, and if it actually saves time and improves readers’ comprehension in practice. We would love to know if you have tested out such RAG-based tools in your own newsroom, and how they have been perceived, received, and used!

Sachita Nishal is a Ph.D. student in human-computer interaction and AI at Northwestern University. Eric Lee, an undergraduate computer science student at Northwestern, also contributed to this article. This piece originally ran on Generative AI in the Newsroom.

Illustration by Anton Grabolle used under a Creative Commons license.
  1. Our dataset and web-app are publicly available to support others interested in experimenting with our approach.
  2. Specifically, we used GPT-4 Turbo in May 2024.
