What can I expect from this course?

  • Practical, tested solutions for getting higher accuracy out of your POC apps.

  • Systematic RAG evaluation techniques.

  • Best practices for consistent and reliable outputs while minimizing hallucination.

  • Cohere credits to run course notebooks.

In collaboration with Cohere and Weaviate

Course curriculum

    1. Welcome to the course

    2. Setting up Weave and Cohere credits

    1. Chapter goals

    2. Notebook 1: Baseline RAG pipeline

    3. From basic to advanced RAG

    4. Wandbot

    5. The 80/20 rule

    6. RAG best practices

    7. Challenges and solutions

    1. Chapter goals

    2. Evaluation basics

    3. Notebook 1: Baseline RAG pipeline

    4. Notebook 2: Evaluation

    5. Evaluating the retriever

    6. LLM as a judge

    7. Assertions

    8. Limitations of traditional NLP

    9. LLM evaluation in practice

    10. Re-evaluating the model

    11. Limitations of LLM evaluation

    12. Pairwise evaluation

    13. Conclusion

    1. Chapter goals

    2. Notebook 3: Data preparation, chunking, and BM25 retrieval

    3. Notebook 3: Chunking in practice

    4. Notebook 3: BM25 retrieval

    5. Data ingestion

    6. Data parsing

    7. Chunking

    8. Metadata management

    9. Data ingestion challenges

    10. Best practices

    11. Conclusion

    1. Chapter goals

    2. Notebook 4: Query enhancement

    3. Four key techniques for query enhancement

    4. Context enhancement

    5. LLMs for query enhancement

    6. Query enhancement case study: Wandbot

    1. Chapter goals

    2. Limitations

    3. Comparing evaluations

    4. Query translation

    5. Retrieval with CoT

    6. Metadata filtering

    7. Logical routing

    8. Context stuffing

    9. Cross-encoders

    10. Notebook 5: Retrieval and re-ranking

    11. Reciprocal rank fusion

    12. Hybrid retriever

    13. Weaviate vector database

    14. Weaviate hybrid search

    15. Conclusion

About this course

  • Free
  • 76 lessons
  • 2 hours of video content
  • $15 Cohere credits

Guest instructors

Meor Amer

Developer Advocate at Cohere

Meor is a Developer Advocate at Cohere, a platform optimized for enterprise generative AI and advanced retrieval. He helps developers build cutting-edge applications with Cohere’s Large Language Models (LLMs).

Charles Pierse

Head of Weaviate Labs

Charles Pierse is an ML Engineer at Weaviate on the Weaviate Labs team. His work focuses on putting the latest AI research into production. The Labs team builds AI-native services that build upon and complement Weaviate's existing core offering.

Recommended prerequisites

This course is for people with:

  • familiarity with Python

  • basic understanding of RAG

Course instructors

Bharat Ramanathan

ML Engineer @ Weights & Biases

Bharat is a Machine Learning Engineer at Weights & Biases, where he built and manages Wandbot, a technical support bot that runs in Discord, Slack, ChatGPT, and Zendesk. He is also pursuing a Master's in Data Science at Harvard Extension School. Bharat is an outdoor enthusiast who enjoys reading, rock climbing, swimming, and biking.

Ayush Thakur

ML Engineer @ Weights & Biases

Ayush Thakur is an ML Engineer at Weights & Biases and a Google Developer Expert in Machine Learning (TensorFlow). He is interested in everything related to computer vision and representation learning. For the past two years he has been working with LLMs, covering RLHF and the how and what of building LLM-based systems.

“Very broad view on many levers to increase RAG performance. And grounded with concrete examples and notebooks to apply these techniques... I highly recommend!”

Gabriel Grandamy

“I've just started chapter 3, it is a really engaging course with great depth and breadth. Really appreciate you guys sharing your journey and the fantastic resources. I highly recommend starting if you have not yet.”

Elle

“This free course has everything you need to know to bring your RAG prototype to production.”

Leonie

“I really enjoyed the RAG++ course.”

Alec

“I really like the fact that this course comes with a stronger curriculum and covers many topics to go from PoC to prod (topics like data ingestion, query enhancements, and optimizing for latency and efficiency etc.).”

Aishwarya

Learning outcomes

  • Get better performance out of your RAG apps using practical and tested solutions

    Spend 1.5 hours learning what we spent 12 months debugging, testing in real-life scenarios, and evaluating.

  • Increase the consistency and reliability of your outputs

    Achieve reliable outputs with fewer hallucinations, higher accuracy, and improved query relevance.

  • Save costs while improving performance

    Optimize your RAG applications to achieve higher performance at a lower cost.

If you would like to start with a more introductory course, get started with Building LLM-Powered Applications.