Exploring the Future of AI with Nvidia’s Chat with RTX

Nvidia enters the generative AI game with a demo

Nvidia has entered the realm of generative artificial intelligence with its latest technology demo, Chat with RTX. The demo, currently in beta at version 0.2, offers a glimpse of AI running directly on consumer devices. Unlike chatbots that rely on cloud servers for computational power, Chat with RTX runs entirely on the user's own computer. This approach not only allows for faster response times but also improves privacy, since your data never leaves the machine.

To try Chat with RTX, you need an Nvidia GPU from the RTX 30 or 40 series with at least 8GB of video memory, and Windows 11. As our partner site PCMag discovered, the download weighs in at roughly 35GB, with an additional 100GB of disk space required for installation. The installation process takes anywhere from 30 minutes to an hour, so it's best left to run unattended. Note that Chat with RTX does not support older GPUs from the RTX 20 series, possibly because their first-generation tensor cores, which are vital for generative AI workloads, fall short of the demo's requirements.
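Before starting the lengthy download, it is worth confirming that the disk can actually hold the demo. The short sketch below checks free space against the rough figures cited above (~35GB download plus ~100GB for installation); the function and constant names are illustrative, not part of any Nvidia tooling.

```python
import shutil

# Approximate footprint reported by PCMag: ~35 GB download
# plus ~100 GB of additional space needed during installation.
DOWNLOAD_GB = 35
INSTALL_GB = 100

def enough_disk_space(path: str = ".", required_gb: float = DOWNLOAD_GB + INSTALL_GB) -> bool:
    """Return True if the volume containing `path` has at least `required_gb` free."""
    free_gb = shutil.disk_usage(path).free / 1024**3
    return free_gb >= required_gb

if __name__ == "__main__":
    if enough_disk_space():
        print("Enough free space for the Chat with RTX demo.")
    else:
        print(f"Need roughly {DOWNLOAD_GB + INSTALL_GB} GB free; clear some space first.")
```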

Similar to ChatGPT or Microsoft's Copilot, Chat with RTX lets users converse with a large language model, in this case the open-source Mistral model (Nvidia did not develop it, but accelerates it on RTX hardware). Users can pose questions, and the model quickly generates responses drawing on user-supplied files, including PDFs, as well as YouTube videos supplied via links. While Nvidia says Chat with RTX will also handle language translation, that functionality is not yet available.
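The reason a local chatbot can answer questions about your own files is a technique called retrieval-augmented generation (RAG): relevant passages are found first and then handed to the model as context. The sketch below illustrates the retrieval step with simple word overlap as a stand-in for the embedding-based search a real system would use; all names and the sample documents are illustrative, not Nvidia's implementation.

```python
import re

def tokenize(text: str) -> set[str]:
    """Lowercase word set; a production system would use vector embeddings."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank document chunks by word overlap with the query; return the top k."""
    q = tokenize(query)
    return sorted(chunks, key=lambda c: len(q & tokenize(c)), reverse=True)[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    """Prepend the retrieved context so the model answers from local data."""
    context = "\n".join(retrieve(query, chunks))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Illustrative document chunks standing in for a user's local files.
docs = [
    "The RTX 4090 scored highest in the rasterization benchmark.",
    "Installation takes between 30 and 60 minutes.",
    "Mistral is an open-weight large language model.",
]
print(build_prompt("Which GPU scored highest?", docs))
```

The assembled prompt, not the raw question, is what goes to the language model, which is why answers stay grounded in the user's own documents.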

PCMag's assessment found that Chat with RTX functions much like ChatGPT but lacks some of its polish. For example, Chat with RTX occasionally skips parts of queries, an issue ChatGPT does not share. The current standout feature is its ability to summarize text files (PDF, DOC, TXT) and YouTube videos and answer questions based on their content. An 18-minute GPU video is enjoyable, but a text summary is a welcome option for readers who prefer to skim.

According to PCMag's evaluation, Chat with RTX excels at summarizing YouTube videos, but its performance on PDF files containing GPU test results was mixed: the model produced incorrect answers and struggled with follow-up questions. For now, Chat with RTX is best viewed as an experimental tool, though one that puts Nvidia ahead of its competitors in this space. With AMD and Intel CPUs now shipping with neural processing units (NPUs), the question becomes whose hardware will handle the AI we want to run on our computers. Frankly, it's shaping up to be quite a competition.

The source of the article is from the blog radardovalemg.com