Title
Catching the AI Train
Description
Official Website: http://2023.rubyconf.tw

Large Language Models (LLMs) have taken application development by storm. Ruby has libraries that help build LLM applications, e.g. Langchain.rb. We need to understand what kinds of LLM applications can be built, how to build them, and what the common pitfalls are when building them. We're going to look at building vector (semantic) search, chatbots, and business-process automation solutions. We're going to learn AI/LLM-related concepts, terms, and ideas, learn what AI agents are, and look at some of the latest cutting-edge AI research.
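For illustration, a minimal semantic-search sketch with Langchain.rb might look like the following. This is a sketch rather than a definitive implementation: the class and method names follow the langchainrb gem circa 2023 and may differ between versions, and the Qdrant URL, API keys, index name, and sample documents are placeholder assumptions.

```ruby
require "langchain"

# LLM client, used both to generate embeddings and to synthesize answers
llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])

# Vector search database (Qdrant here; langchainrb supports several stores)
search = Langchain::Vectorsearch::Qdrant.new(
  url: ENV["QDRANT_URL"],
  api_key: ENV["QDRANT_API_KEY"],
  index_name: "docs",
  llm: llm
)
search.create_default_schema

# Index a few documents; embeddings are generated via the LLM client
search.add_texts(texts: [
  "RubyConf Taiwan 2023 was held in Taipei.",
  "Langchain.rb helps Ruby developers build LLM applications."
])

# Retrieve by meaning rather than by keyword
results = search.similarity_search(query: "Ruby conference in Taiwan", k: 2)

# RAG-style question answering over the indexed documents
# (depending on the gem version, `ask` returns a string or a response object)
answer = search.ask(question: "Where was RubyConf Taiwan 2023 held?")
puts answer
```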
Summary
The presentation titled "Catching the AI Train" by Andrei Bondarev at RubyConf Taiwan 2023 explores the integration of Large Language Models (LLMs) into Ruby development and the opportunities they present for application development. Bondarev, a long-time Ruby enthusiast and creator of the Langchain.rb library, argues for the Ruby community to embrace AI technologies, acknowledging that Ruby currently lags behind other languages in this area.

Key points from the presentation include:

- **Context of AI Development**: The rise of generative AI is expected to add $2.0 to $4.5 trillion in value to the global economy, significantly impacting productivity across various sectors such as customer service, marketing, and software engineering.
- **Generative AI and LLMs**: Bondarev defines generative AI and LLMs, describing how these models generate content, including text, and outlining their capabilities and limitations, such as hallucinations or inaccuracies in generated content.
- **Retrieval-Augmented Generation (RAG)**: RAG is introduced as a technique to enhance accuracy in LLM applications. The RAG process involves generating embeddings, querying a vector search database, and synthesizing answers using relevant documents (sketched in code after this summary).
- **Vector Embeddings and Similarity Search**: The significance of vector embeddings in representing data, the concept of similarity search, and how they facilitate retrieving information based on meaning rather than keywords were discussed (see the toy cosine-similarity example after this summary).
- **AI Agents**: Bondarev elaborates on the potential of AI agents to automate workflows and streamline business processes by utilizing LLMs to execute multi-step tasks (an illustrative agent loop follows this summary). A brief demonstration showcased an AI agent that answers questions by integrating search API outputs and processing data effectively.
- **Challenges and Metrics**: The complexities of building RAG systems, the importance of continuous evaluation, and the emergence of metrics like RAGAS for assessing RAG pipeline effectiveness were addressed.

In conclusion, Bondarev emphasizes that the Ruby community has a unique opportunity to adapt and leverage AI technologies. Embracing these advancements is crucial to remaining relevant in the evolving tech landscape. The session encourages developers to explore generative AI applications and implement strategies to integrate LLMs into Ruby-based systems while being mindful of potential pitfalls.
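To make the vector-embedding idea concrete, here is a toy, dependency-free Ruby example of cosine similarity, the measure typically used when retrieving by meaning. The three-dimensional vectors and their values are made up purely for illustration; real embeddings have hundreds or thousands of dimensions.

```ruby
# Cosine similarity between two embedding vectors: semantically similar texts
# map to nearby vectors, so a higher score means "closer in meaning".
def cosine_similarity(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  dot / (Math.sqrt(a.sum { |x| x * x }) * Math.sqrt(b.sum { |x| x * x }))
end

# Made-up 3-dimensional "embeddings" for illustration only
cat    = [0.90, 0.10, 0.00]
kitten = [0.85, 0.15, 0.05]
stocks = [0.00, 0.20, 0.95]

cosine_similarity(cat, kitten) # => ~0.996 (close in meaning)
cosine_similarity(cat, stocks) # => ~0.02  (unrelated)
```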
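The three RAG steps named above (generate embeddings, query the vector store, synthesize an answer) can also be spelled out by hand. The sketch below assumes the `llm` and `search` objects from the earlier Langchain.rb example; method names such as `embed`, `similarity_search_by_vector`, and `chat` follow langchainrb's 2023-era API, and the exact response shapes vary by gem version and vector store.

```ruby
question = "Where was RubyConf Taiwan 2023 held?"

# 1. Generate an embedding for the question
query_embedding = llm.embed(text: question).embedding

# 2. Query the vector search database for the most relevant documents
docs = search.similarity_search_by_vector(embedding: query_embedding, k: 3)

# 3. Synthesize an answer, grounding the LLM in the retrieved documents
context  = docs.map(&:to_s).join("\n") # document shape depends on the store
prompt   = "Answer using only the context below.\n\nContext:\n#{context}\n\nQuestion: #{question}"
response = llm.chat(messages: [{ role: "user", content: prompt }])
puts response.chat_completion
```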
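Finally, an AI agent is essentially a loop in which the LLM decides whether to call a tool or return a final answer, with tool output fed back into the conversation. The loop below is an illustrative, hand-rolled sketch and not a specific Langchain.rb API; the `TOOLS` hash, the stubbed search tool, and the "TOOL:"/"FINAL:" reply convention are assumptions made for the example (production agents would typically rely on the provider's function-calling support instead).

```ruby
# Hypothetical tools the agent may call; the search tool is stubbed out here
TOOLS = {
  "search" => ->(query) { "RubyConf Taiwan 2023 took place in Taipei, Taiwan." }
}

def run_agent(llm, question, max_steps: 5)
  history = [
    { role: "system", content: 'Reply with "TOOL: <name>: <input>" to use a tool, or "FINAL: <answer>".' },
    { role: "user",   content: question }
  ]

  max_steps.times do
    reply = llm.chat(messages: history).chat_completion

    if reply.start_with?("TOOL:")
      _, name, input = reply.split(":", 3).map(&:strip)
      observation = TOOLS.fetch(name).call(input)     # execute the requested tool
      history << { role: "assistant", content: reply }
      history << { role: "user", content: "Observation: #{observation}" }
    else
      return reply.sub(/\AFINAL:\s*/, "")             # model produced its final answer
    end
  end

  "No answer within #{max_steps} steps"
end

puts run_agent(llm, "Where did RubyConf Taiwan 2023 take place?")
```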