AI Memory

Memory, RAG, MCP & Real-World AI Systems

AI Memory
- Short-term Memory
- Long-term Memory
- Context Window
- Conversation ID

RAG – Deep Dive
- Steps:

Knowledge Files vs Tools

| Knowledge Files | Tools |
| --- | --- |
| Full context | Targeted retrieval |
| Static data | Dynamic access |

Direct File Processing

MCP Architecture
- Core Elements
- MCP Communication
- Uses:
- Security

Real-World AI Agent Examples
1. Customer Invoice Agent
2. Meeting Assistant
3. Sales Assistant
4. Security Agent

Why Agentic Automation?

Final Thought
- AI is evolving from:
- Understanding LLMs and systems like MCP and RAG gives you the ability to:
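The retrieval-then-augment idea behind RAG can be sketched in a few lines. This is a minimal sketch, not a production pipeline: the document corpus, the word-overlap scoring (a stand-in for real embedding similarity), and the prompt template are all illustrative assumptions, not part of the original notes.

```python
# Minimal RAG sketch: retrieve the most relevant document for a query,
# then build an augmented prompt for the LLM. Word-overlap scoring is
# a toy stand-in for embedding similarity (an assumption here).

def score(query: str, doc: str) -> int:
    """Count shared words between query and document (toy relevance)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str]) -> str:
    """Return the highest-scoring document for the query."""
    return max(docs, key=lambda d: score(query, d))

def build_prompt(query: str, context: str) -> str:
    """Augment the user question with the retrieved context."""
    return f"Context: {context}\n\nQuestion: {query}\nAnswer using only the context."

docs = [
    "Invoices are processed by the finance agent every Monday.",
    "The meeting assistant summarises calls and extracts action items.",
]
query = "which agent processed the invoices"
context = retrieve(query, docs)
prompt = build_prompt(query, context)
```

A real system would replace `score` with vector similarity over embeddings and send `prompt` to an LLM; the structure (retrieve, then augment, then generate) stays the same.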


AI Agents, Automation & Make Platform

What Are AI Agents?
- AI Agents:
- Agent Capabilities

Types of Agents
- Simple Reflex
- Model-Based
- Goal-Based

Agents vs Assistants vs Bots

| Feature | Agents | Assistants | Bots |
| --- | --- | --- | --- |
| Autonomy | High | Medium | Low |
| Learning | Yes | Limited | No |
| Complexity | High | Medium | Low |

AI Agent Components

Make AI Toolkit
- Capabilities:
- AI Content Extractor
  - Works with:
  - Functions:

AI Agents in Make – Setup Process
- Agent Settings
- Tools & Knowledge
  - Tools
  - Knowledge

MCP (Model Context Protocol)
- What is MCP? A standard to connect AI with tools → “USB-C for AI”
- Components
- Benefits
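The "connect AI with tools" idea that MCP standardises can be illustrated with a tiny tool registry and dispatcher. This is a hedged sketch of the general tool-calling pattern, not the MCP protocol itself: the tool names, decorator, and dispatch function below are invented for illustration.

```python
# Sketch of the tool-calling pattern behind MCP-style setups: tools are
# registered under names, and a (here simulated) model decision selects
# one by name. Tool names and the dispatcher are illustrative assumptions.

from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {}

def tool(name: str):
    """Register a function as an agent tool."""
    def wrap(fn):
        TOOLS[name] = fn
        return fn
    return wrap

@tool("lookup_invoice")
def lookup_invoice(invoice_id: str) -> str:
    # In a real agent this would query an invoicing system.
    return f"Invoice {invoice_id}: status=paid"

@tool("summarise_meeting")
def summarise_meeting(transcript: str) -> str:
    # In a real agent this would call an LLM with the transcript.
    return f"Summary: {transcript[:30]}"

def run_agent(tool_name: str, argument: str) -> str:
    """Dispatch a (model-chosen) tool call; the choice is simulated here."""
    if tool_name not in TOOLS:
        return f"Unknown tool: {tool_name}"
    return TOOLS[tool_name](argument)
```

The "USB-C for AI" analogy is about exactly this seam: the agent only needs the registry interface, not the internals of each tool.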


Practical Applications of AI & Prompting Mastery

Real-World Applications of AI
- Virtual Personal Assistants
- Chatbots
- Robotic Process Automation (RPA) – Examples:
- Recommendation Systems – Examples:
- Recognition Systems – Used in:

Prompts – The Core of AI Interaction
A prompt is the input you give to the AI; the output depends heavily on prompt quality.
- Good Prompting Principles
- Prompting Methods
  - Prompt Completion
  - Chat Completion
- Roles in AI Communication: User, Assistant, System

AI Parameters – Controlling Output
- Temperature – controls creativity:
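The three roles and the temperature parameter come together in the payload of a chat-completion request. The sketch below shows the widely used request shape (a list of role/content messages plus a temperature); the model name is a placeholder assumption, and no real API is called.

```python
# Sketch of a chat-completion request illustrating the three roles
# (system, user, assistant) and the temperature parameter. The model
# name is a placeholder, not a real model id.

request = {
    "model": "example-model",   # placeholder assumption
    "temperature": 0.2,         # low temperature = more deterministic output
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain RPA in one sentence."},
        {"role": "assistant", "content": "RPA automates repetitive, rule-based tasks."},
    ],
}

roles = [m["role"] for m in request["messages"]]
```

Raising `temperature` makes the sampling more varied and "creative"; lowering it makes answers more repeatable, which is usually what you want for extraction-style tasks.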


Understanding Large Language Models (LLMs) – The Foundations

What is a Large Language Model?
A Large Language Model (LLM) is a type of AI model designed to understand and generate human language. At their core, LLMs are pattern-recognition systems, not thinking entities.

Neural Networks – The Brain Behind AI
LLMs are powered by neural networks. These networks learn by identifying patterns in data and forming connections.

Machine Learning – How AI Learns
Machine Learning allows systems to:
Flow: Training Examples → Algorithm → Input Data → Prediction/Output
During training, the model builds a mathematical representation of the relationships within the data.

Transformers – The Key Breakthrough
A Transformer is a special type of neural network designed for language. It works by:
This is what makes LLMs powerful.

Tokens – How AI Sees Language
LLMs don't see words; they see tokens.
- Token Types:
- Key Concepts:

Vectors & Embeddings
Tokens are converted into numbers:
- Each dimension represents:

Training Process of LLMs
1. Pre-training
2. Fine-tuning

How LLMs Generate Text
Flow: Input → Tokens → Neural Network → Relationships → Predicted Token

Interacting with LLMs
1. GUI (Graphical User Interface)
2. API (Application Programming Interface)
3. CLI (Command Line Interface)

AI Agents – Introduction
AI Agents are systems that:
They can:
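The "tokens become vectors" idea can be made concrete with a toy embedding table and cosine similarity. The 3-dimensional vectors below are invented for illustration (real embeddings have hundreds or thousands of dimensions), but the mechanism is the same: related tokens end up close together in vector space.

```python
# Toy embeddings: tokens mapped to small vectors, with cosine similarity
# as the "relatedness" measure. The 3-d vectors are illustrative
# assumptions; real models use far higher-dimensional embeddings.

import math

EMBEDDINGS = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Related tokens sit closer together than unrelated ones.
king_queen = cosine(EMBEDDINGS["king"], EMBEDDINGS["queen"])
king_apple = cosine(EMBEDDINGS["king"], EMBEDDINGS["apple"])
```

This geometric closeness is what lets the network treat "king" and "queen" as similar even though the character strings share nothing.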


Part 4: Generative AI and Agents

The Transformer
The Transformer is a neural network architecture in ML especially suited for language. It allows the system to look at all “words” in a sequence at once. Once trained, these models generate new content by predicting what should come next, based on the patterns they have learned.

GenAI Applications
Generative AI can be applied across various business needs:

AI Agents
Unlike standard tools, AI Agents act, make decisions, and carry out tasks without direct and continuous input from the user. Key examples include virtual assistants, customer service (CS) chatbots, and autonomous vehicles.
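The "predict what should come next" behaviour can be mimicked at a tiny scale with a bigram model: count which token follows which in the training text, then predict the most frequent follower. This is a deliberately simplified sketch of next-token prediction, not how a Transformer works internally; the training sentence is invented for illustration.

```python
# Toy next-token predictor: count token-follows-token pairs in training
# text, then predict the most frequent follower. A miniature version of
# the "predict what comes next" idea, not a real Transformer.

from collections import Counter, defaultdict

def train(text: str) -> dict[str, Counter]:
    """Build a table of how often each token follows each other token."""
    follows: dict[str, Counter] = defaultdict(Counter)
    tokens = text.split()
    for current, nxt in zip(tokens, tokens[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(model: dict[str, Counter], token: str) -> str:
    """Return the most frequent token seen after `token` in training."""
    return model[token].most_common(1)[0][0]

model = train("the cat sat on the mat the cat ran")
```

A Transformer does the same job with learned attention over the whole context rather than one-step counts, which is why it can handle long-range dependencies that a bigram model cannot.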


Part 3: Learning Paradigms and Methodology

How Machine Learning Works
The process follows a specific flow: Training Examples lead to an Algorithm (a set of rules), which then uses Input Data to create a Prediction or Output. During training, the algorithm analyses data for patterns to create a mathematical model of the relationships within that data.

Three Types of Learning Paradigms (LP)
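The flow above (Training Examples → Algorithm → Input Data → Prediction/Output) can be shown in miniature with a least-squares line fit: the training examples produce a fitted model, which then turns new input data into a prediction. The hours-to-score data points are invented for illustration.

```python
# The ML flow in miniature: training examples -> fitted model (w, b)
# -> prediction on new input. Data points are illustrative assumptions.

def fit_line(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Closed-form least-squares fit of y = w*x + b."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    w = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - w * mean_x
    return w, b

# Training examples: hours studied -> test score (invented data).
w, b = fit_line([1, 2, 3, 4], [10, 20, 30, 40])

# Input data -> prediction: what score does 5 hours of study suggest?
prediction = w * 5 + b
```

The fitted pair `(w, b)` is the "mathematical model of the relationships within the data" that the paragraph describes, just in its simplest possible form.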
