OUR SPEAKERS


Liji Thomas is an accomplished technologist and leader with over 15 years of experience building cutting-edge digital experiences. She is a Microsoft MVP in AI, a recognition of her expertise and contributions to the artificial intelligence community. As Gen AI Manager at H&R Block, Liji oversees the development of solutions that leverage data and artificial intelligence to drive business value. Her expertise spans a wide range of technologies, including machine learning, natural language processing, and conversational AI. In addition to her professional work, Liji is an active participant in the broader data and AI community and is a frequent speaker at industry events and conferences.
Liji Thomas
GenAI Manager
Languages: English
Location: Kansas City, USA
Can also give an online talk/webinar
Paid only. Contact speaker for pricing!
MY TALKS
What I Learned About Talking to People from Talking with AI
Women in Tech, Innovation, Data / AI / ML, Diversity and Inclusion



Interacting with AI has sharpened the way I communicate and collaborate. What started as simple queries evolved into a process that refined my ability to express ideas clearly, structure thoughts logically, and engage in more thoughtful discussions. AI has reinforced the importance of precision—vague prompts lead to weak responses, just as unclear communication creates confusion in human conversations. It has also taught me to ask better questions, synthesize information effectively, and adapt my messaging to different audiences.
This session explores ways AI has improved my communication skills, from handling ambiguity and reducing bias to balancing brevity with context and refining how I frame ideas for impact. AI has become more than a tool. It’s a mirror that reflects and refines human communication. As AI continues to evolve, its influence on how we engage with one another will only grow. Let’s dive into the lessons it offers and how they translate into stronger, more effective collaboration.
Evaluation-Driven Development: Turning AI Demos into Real Products
Data / AI / ML, Innovation



If you want to move proofs of concept (POCs) into production, they have to do more than impress. They have to work at scale.
Generative AI demos can feel powerful: fast, fluent, and full of potential. But capability alone doesn’t scale. Without measurement, prototypes stall, trust erodes, and systems never make it to production. The gap between a compelling demo and a reliable product is rarely the model. It’s the absence of evaluation.
To build enterprise-grade AI, you have to measure what you build.
This session introduces evaluation as a first-class part of Gen AI applications. Modern evaluation tooling provides a practical foundation for assessing what matters in real systems: relevance, truthfulness, coherence, completeness, and safety. It includes built-in quality, NLP, and safety evaluators, with the flexibility to extend or tailor them to your domain.
And as agentic AI takes hold — systems that plan, reason, and take multi-step actions — evaluation becomes even more critical. We’ll explore how evaluation extends beyond static responses to cover agent workflows, action orchestration, and decision chains. When AI can act, understanding why it acted is as important as the outcome.
By the end, one principle should be clear:
You can’t scale AI on intuition. You scale it by measuring it.
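To make the idea concrete, here is a minimal Python sketch of an evaluation harness in the spirit of this talk: it runs an application under test over a small case set and reports groundedness and completeness scores. This is not the session's material and not any specific framework's API; the evaluators are deliberately naive keyword-overlap placeholders that a real system would replace with built-in or LLM-as-judge evaluators.

```python
# Minimal evaluation-harness sketch for a Gen AI application.
# All names are illustrative assumptions, not a specific framework's API.

from dataclasses import dataclass
from typing import Callable


@dataclass
class TestCase:
    question: str
    context: str                 # grounding text the answer should rely on
    expected_points: list[str]   # facts a complete answer should mention


def groundedness(answer: str, case: TestCase) -> float:
    """Fraction of answer sentences with word overlap against the grounding context."""
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    if not sentences:
        return 0.0
    context_words = set(case.context.lower().split())
    supported = sum(
        1 for s in sentences
        if len(set(s.lower().split()) & context_words) >= 3
    )
    return supported / len(sentences)


def completeness(answer: str, case: TestCase) -> float:
    """Fraction of expected key points that the answer mentions."""
    answer_lower = answer.lower()
    hits = sum(1 for p in case.expected_points if p.lower() in answer_lower)
    return hits / max(len(case.expected_points), 1)


def evaluate(app: Callable[[str], str], cases: list[TestCase]) -> dict[str, float]:
    """Run the app over the test set and average each metric's score."""
    metrics = {"groundedness": groundedness, "completeness": completeness}
    totals = {name: 0.0 for name in metrics}
    for case in cases:
        answer = app(case.question)
        for name, metric in metrics.items():
            totals[name] += metric(answer, case)
    return {name: total / len(cases) for name, total in totals.items()}


if __name__ == "__main__":
    # Stand-in for the real application under test.
    def toy_app(question: str) -> str:
        return "Standard deductions reduce taxable income. Filing status matters."

    cases = [
        TestCase(
            question="How do standard deductions work?",
            context="The standard deduction reduces taxable income and varies by filing status.",
            expected_points=["taxable income", "filing status"],
        )
    ]
    print(evaluate(toy_app, cases))  # average score per metric across the test set
```

The same harness shape extends to agentic systems: instead of scoring a single response, each metric can inspect the recorded chain of tool calls and intermediate decisions, which is where evaluation of "why it acted" comes in.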