Anaconda's CEO says Python can make on-premises AI solutions 'easier and safer' as companies seek stronger data protection

  • As the CEO of Anaconda, Peter Wang helps the programming-language distributor improve AI solutions.
  • This includes supporting on-prem IT and running AI models locally to boost security and efficiency.
  • This article is part of "Build IT," a series about digital-tech trends disrupting industries.

Peter Wang is the CEO of Anaconda, a company he cofounded in 2012 with the goal of democratizing business-data analytics by making Python tools easier to use. Under Wang's leadership, Anaconda grew alongside Python's rise to prominence as one of the world's most popular programming languages.

With Python established as a leading language for AI workloads, Anaconda is broadening its focus from data science to artificial intelligence, with the goal of becoming a common software layer for high-performance AI.

Anaconda has introduced tools to help companies and individuals get started with large language models (LLMs). One such tool is AI Navigator, a desktop app that can run AI models locally on Windows and Mac, with Linux support coming soon. The company has 300 full-time employees and 40 million users worldwide.

Business Insider spoke with Wang to learn more about how AI workloads are prompting companies to think about bringing more of their IT infrastructure on-premises — and the hardware and software challenges that come with making such a move.

The following has been edited for clarity and length.

Can you talk about on-premises infrastructure and what that term means to you in 2024?

On-premises initially meant servers within the company's physical location. Now it's more about the governance of infrastructure, including data, networking, and servers. Who manages it? Who gets to say, "No, you absolutely can't do that," or, "Yes, you can do that"?

At Anaconda, we see customers across the spectrum. Some have what we call air-gapped systems, not connected to the internet at all. These might be a box in a building somewhere, oftentimes guarded by people with machine guns, where you go in with a flash drive. That's the hardcore level of secure on-prem.

On the other end, we see businesses that use a lot of cloud resources. But even they need stricter boundaries. They work with cloud providers to set up virtual private clouds or to provision resources with specific governance rules and policies.

Why are companies interested in on-premises solutions for AI and large language models?

We see a lot of interest from companies wanting control over their own destiny.

They want to fine-tune models on their own data, connect them to internal databases for retrieval-augmented generation (RAG), and use agent-based models. If a company can only consume AI as a cloud endpoint, it has to tie all its internal systems to that cloud AI service, which is difficult.
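For readers unfamiliar with the pattern Wang mentions, here is a minimal sketch of retrieval-augmented generation, assuming scikit-learn for retrieval and stubbing out the generation step. The documents and query are invented for illustration; production systems would use a vector database and LLM embeddings rather than TF-IDF.

```python
# Minimal RAG sketch: retrieve the most relevant internal document,
# then stuff it into a prompt for a (stubbed-out) local model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Enterprise support tickets are answered within four hours.",
    "On-prem deployments require recent GPU drivers and 64 GB of RAM.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    ranked = scores.argsort()[::-1][:k]
    return [documents[i] for i in ranked]

def build_prompt(query: str) -> str:
    """Combine retrieved context with the question for a local LLM."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# In practice the prompt would be sent to a locally hosted model.
print(build_prompt("What hardware does an on-prem deployment need?"))
```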

And many of these cloud AI companies, while they are well capitalized, are still relatively new as enterprise-software players. There are a lot of concerns about data leakage and compliance.

Running locally gives more control and reduces the risk of accidental data exposure. You don't have to worry about a junior IT person at a cloud startup accidentally misconfiguring something and causing a data breach.

The risks of a data breach are clear, but many companies seem concerned about any external use of their data for AI. Why is that?

People have said that data is the new oil. If your data is oil, LLMs are like the internal-combustion engine: they provide a much more interesting way to use that oil.

Companies want to use their sensitive "crown jewel" data with LLMs to gain insights and improve predictive analytics. These use cases are central to their business, so they're protective of this data. They don't trust putting it on external systems where it could leak valuable information like customer insights or product preferences.

When people think of "on-prem," they tend to focus on hardware. But much of what you are describing seems like software. Can you explain that in more detail?

The hardware for AI work is often similar across setups. It's typically high-end Nvidia GPUs, though not always the latest or most expensive versions. On top of that hardware stack, the software needed to run an LLM isn't exotic if you know what you're doing. But there's a big asterisk here.

The challenges often come from internal IT policies, organizational competencies, and the dynamic nature of AI workloads. For example, if your organization is familiar with Docker or Kubernetes, great. But if you're a Java shop used to deploying with Maven, or a Ruby shop unfamiliar with Python, that creates hurdles. When these companies want to start an internal LLM, that's where they can use help.
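As a rough illustration of what "not exotic" looks like in practice, here is a minimal sketch of running a model locally with the llama-cpp-python library, one common option and not necessarily what Anaconda ships. The model filename is a placeholder assumption.

```python
# Minimal sketch: load a quantized GGUF model from local disk and
# generate a completion, with no data leaving the machine.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-3.1-8b-instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,  # context window size
)

result = llm(
    "Summarize our internal data-retention policy in one sentence.",
    max_tokens=64,
)
print(result["choices"][0]["text"])
```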

AI workloads require varying amounts of computing power at different times. When you're training, you might need a lot of GPUs; at other stages you might need fewer GPUs, different kinds, or even just CPUs.

This dynamic set of hardware requirements, sometimes very bursty in terms of when and how long you need it, creates an orchestration challenge. That becomes a software challenge, and then an organizational one.
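At the application level, a small part of that variability is handled by code that adapts to whatever accelerators are present. Here is a minimal sketch, assuming PyTorch; the real orchestration Wang describes happens in schedulers like Kubernetes, not in application code.

```python
# Minimal sketch: prefer a GPU when one is visible to the process,
# fall back to CPU otherwise, so the same code runs on varied hardware.
import torch

def pick_device() -> torch.device:
    """Select the best available compute device for this workload."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

device = pick_device()
print(f"Running on {device}; {torch.cuda.device_count()} GPU(s) visible")
```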

Are you saying the challenge is optimizing software for efficient and dynamic use of hardware?

I actually think it's more of a broad competency challenge for a company.

In traditional software development, the IT group talks to the software-development group. The developers specify their needs for memory, bandwidth, and storage, and IT provisions for them.

But data scientists and machine-learning teams have dynamic needs. They require newer, more advanced hardware, and the software they run is Python with many dependencies, like specific GPU-driver versions.
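To make the driver-version point concrete, here is a minimal sketch of a startup check that fails fast when the GPU stack is too old, assuming PyTorch. The version threshold is illustrative only, not a real requirement from the interview.

```python
# Minimal sketch: verify the CUDA runtime version before launching a
# workload, so dependency mismatches surface early and clearly.
import torch

def check_environment(min_cuda: tuple[int, int] = (12, 1)) -> None:
    """Fail fast when the GPU software stack won't support the workload."""
    if not torch.cuda.is_available():
        raise RuntimeError("No CUDA device visible to this process")
    runtime = torch.version.cuda  # e.g. "12.1"
    if tuple(int(p) for p in runtime.split(".")[:2]) < min_cuda:
        raise RuntimeError(f"CUDA runtime {runtime} is older than required")
    print(f"torch {torch.__version__}, CUDA runtime {runtime}: OK")

check_environment()
```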

The challenges organizations face in on-premises AI relate to the dynamic nature of server and machine orchestration and the open-source-software ecosystem they're tapping into. This is especially true for compliance and security requirements.

Speaking of open source, what are your thoughts on open versus closed LLMs?

I've tried to stay out of the fray on social media, but my take is that current LLMs, especially the frontier models, have a lot of overlap in capabilities. Some are better than others in certain aspects, but at their core, once you throw enough data at them, these models start to become similar to each other.

Making open models like Meta's Llama freely available is a game changer. The original Llama release was significant, and the latest Llama 3.1 model, with 405 billion parameters, is a massive step forward. It will increase interest in running models on-premises, especially for fine-tuning on sensitive data.

But while these models are often called open, they're not open in the traditional open-source sense. You can use them freely, but you can't rebuild them from scratch or modify them at will. The training data, scripts, and hyperparameters are often not disclosed. It's a complex issue that involves considerations of safety and licensing. The data used for training is a particularly big issue that no one really talks about.

How should a company looking to implement on-premises AI get started?

Anaconda has an AI Navigator tool that is a great way to get started. It's a simple graphical interface where you can download appropriate models for your computer. We're currently running this in beta, and we're eager to get feedback from users.

Our tool connects to our curated model repository. We've quantized models to make them smaller and more efficient for different machines. Curation also matters because downloading models from public repositories can pose security risks.
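For context on what quantizing a model means, here is a minimal sketch of symmetric int8 weight quantization in NumPy. The interview doesn't describe Anaconda's actual pipeline, and production formats use more sophisticated schemes; this only shows the core idea of trading a little precision for roughly 4x less storage than float32.

```python
# Minimal sketch: map float32 weights to int8 plus a per-tensor scale,
# then reconstruct an approximation to measure the precision cost.
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Quantize a float tensor to int8 with a single symmetric scale."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

w = np.random.randn(4096, 4096).astype(np.float32)
q, s = quantize_int8(w)
error = np.abs(w - dequantize(q, s)).mean()
print(f"storage: {w.nbytes} -> {q.nbytes} bytes, mean abs error {error:.5f}")
```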

For example, we've seen attacks where someone uploads a fine-tuned code-generation model that hallucinates nonexistent Python packages. It generates code that tries to import or install these fake packages, and then the attacker creates malicious versions of these packages in the real world. When users try to run the generated code, they install these malicious packages.
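One possible mitigation for the attack Wang describes, sketched here with Python's standard-library ast module, is to vet the imports in generated code against a curated allowlist before installing anything. The allowlist contents are illustrative; a real check would query an internal, curated package index.

```python
# Minimal sketch: statically extract top-level imports from generated
# code and flag any package that isn't on a curated allowlist.
import ast

ALLOWED_PACKAGES = {"numpy", "pandas", "requests", "sklearn"}

def unknown_imports(generated_code: str) -> set[str]:
    """Return imported top-level names that aren't on the allowlist."""
    tree = ast.parse(generated_code)
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found - ALLOWED_PACKAGES

code = "import numpy\nimport totally_real_utils  # hallucinated package"
print(unknown_imports(code))  # {'totally_real_utils'}
```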

Our tool helps users get past many initial hurdles in setting up and running AI models locally. It accelerates the process of getting the software running correctly for a given machine, making it easier and safer for businesses to start exploring on-premises AI solutions.

Read the original article on Business Insider
