
OUR SPEAKERS


Michal Efrati is an AI researcher and software engineer with hands-on experience building and deploying data-driven products across fintech, cybersecurity, and taxtech. She has developed ML pipelines, agent-based self-correcting architectures, and computer vision and NLP models that process complex data at scale and feed real-world workflows. Her work includes schema-aware systems, natural language interfaces for structured data, and intelligent document understanding pipelines built on transformer-based models. Michal specializes in bridging research and engineering to turn LLM prototypes into production-ready tools, helping teams move beyond experimentation to reliability in settings where precision, context, and control are essential. She has led cross-functional teams and collaborated closely with product managers, engineers, and domain experts to align AI solutions with real-world needs, blending deep technical expertise with strong product thinking.

Michal Efrati

Applied AI Researcher & Machine Learning Engineer
Languages:
English
Location:
Tel Aviv, Israel
Can also give an online talk/webinar
Paid only. Contact speaker for pricing!

MY TALKS

Smart Models, Stupid Mistakes: Hitting the Context Sweet Spot for LLMs in Production

Data / AI / ML, Software Engineering


Imagine walking into the world’s largest library with a mission to answer a single question. You’re surrounded by thousands of books: some helpful, most irrelevant, and a few totally misleading. This is exactly what working with Large Language Models (LLMs) feels like: without the right context, it’s easy to get lost in the noise.
Too little context, and the model guesses. Too much, and it fixates on irrelevant details. Like a smart but over-eager assistant, an LLM needs just enough clearly framed information to respond accurately. But how much is “just enough”? The truth is, there’s no one-size-fits-all sweet spot; every use case demands a different balance.
In this talk, I’ll focus on the art of context refinement when integrating LLMs into production and show how avoiding common, costly mistakes can make your models smarter and your applications more reliable. You’ll learn practical techniques to balance input size, design effective prompts, minimize hallucinations, and build feedback-aware systems that thrive under ambiguity.
Whether you're scaling prototypes or building robust applications, I will share actionable insights for making LLMs more reliable, efficient, and aligned by giving them exactly what they need, and nothing more.
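To make the "just enough context" idea concrete, here is a minimal sketch (my own illustration, not material from the talk) of one common refinement technique: ranking candidate context chunks by a crude relevance score and keeping only those that fit a fixed budget, rather than stuffing every retrieved chunk into the prompt. The function names, the word-overlap scoring, and the word-count budget are all simplifying assumptions; a production system would use embeddings and a real tokenizer.

```python
def score(chunk: str, question: str) -> float:
    """Crude relevance score: fraction of question words present in the chunk.
    (Stand-in for embedding similarity in a real retrieval pipeline.)"""
    q = set(question.lower().split())
    c = set(chunk.lower().split())
    return len(q & c) / max(len(q), 1)

def build_context(chunks: list[str], question: str, budget: int = 50) -> str:
    """Keep the most relevant chunks that fit a rough word budget.
    `budget` stands in for a token budget counted by a real tokenizer."""
    ranked = sorted(chunks, key=lambda ch: score(ch, question), reverse=True)
    picked, used = [], 0
    for ch in ranked:
        words = len(ch.split())
        if used + words > budget:
            continue  # skip chunks that would overflow the budget
        picked.append(ch)
        used += words
    return "\n\n".join(picked)
```

The point of the sketch is the shape of the decision, not the scoring: the model sees the highest-relevance material first, and anything that would push the prompt past its budget is dropped rather than squeezed in.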

