OUR SPEAKERS

Bitya is a Data Scientist at Intel, specializing in feature engineering and data exploration. She holds a B.Sc. in Computer Science and Cognitive Science as well as an MBA, and is currently a Master's student in Statistics at the Hebrew University. Bitya has extensive experience in organizing professional events and courses, enjoys spending time with her family and friends, and loves cooking.

Bitya Neuhof

Data Scientist
Languages: English, Hebrew
Location: Jerusalem, Israel
Can also give an online talk/webinar
Paid only. Contact speaker for pricing!

MY TALKS

Automation of feature engineering: pros and cons

Data / AI / ML, Software Engineering


Data scientists spend over 60% of their time getting familiar with data, understanding features and the relationships between them, and ultimately creating new features from the data. This process is called feature engineering. It is a fundamental step before using predictive models and directly affects a model's predictive power. Traditional feature engineering is often described as an art: it requires both domain knowledge and data manipulation skills. The process is problem-dependent and may be biased by personal skills, loss of patience during data analysis, prior experience in the field, and more. Featuretools is an open-source Python library for automated feature engineering. In my talk, I will present the Featuretools library and address an important question: to what extent can feature engineering be fully automated? I will discuss different scenarios, presenting pros and cons. Finally, we will implement automated feature engineering and explore code examples.
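To make the idea concrete, here is a minimal pure-Python sketch of the kind of hand-written aggregation features (COUNT, MEAN, MAX over a related table) that Featuretools' Deep Feature Synthesis generates automatically; the toy `transactions` data and the feature names are illustrative, not from the talk itself.

```python
from statistics import mean

# Toy transaction log: the raw, row-level data a data scientist starts from.
transactions = [
    {"customer_id": 1, "amount": 20.0},
    {"customer_id": 1, "amount": 35.0},
    {"customer_id": 2, "amount": 10.0},
    {"customer_id": 1, "amount": 5.0},
    {"customer_id": 2, "amount": 40.0},
]

def engineer_features(rows):
    """Hand-written aggregation features per customer — the kind of
    COUNT / MEAN / MAX primitives that automated feature engineering
    tools such as Featuretools produce without manual coding."""
    by_customer = {}
    for row in rows:
        by_customer.setdefault(row["customer_id"], []).append(row["amount"])
    return {
        cid: {
            "COUNT(transactions)": len(amounts),
            "MEAN(transactions.amount)": mean(amounts),
            "MAX(transactions.amount)": max(amounts),
        }
        for cid, amounts in by_customer.items()
    }

features = engineer_features(transactions)
# features[1] -> {"COUNT(transactions)": 3, "MEAN(transactions.amount)": 20.0,
#                 "MAX(transactions.amount)": 35.0}
```

Writing such aggregations by hand is easy here, but the number of candidate features explodes with more tables and primitives — which is exactly the trade-off between manual and automated feature engineering the talk examines.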


Behind the Scenes: Explainable AI With SHAP

Data / AI / ML


Machine learning prediction models have become a widespread tool for applications in diverse areas, including healthcare and finance. A model may score well on standard metrics: it is accurate and generalizes impressively to unseen data. Despite all that, it sometimes makes intolerably wrong decisions. Why? Does it use relevant data? Is it free from biases? Data scientists use Explainable AI (XAI) tools to answer these questions: Why does the model produce wrong predictions? Which features influence a model's decision? Are the decisions made by the model fair?
In my talk, I will discuss the motivation for using XAI tools, and show how to find the most important features in your data, recognize the features with the biggest impact on the model's predictions, and explain your model's decisions. I'll present the SHAP XAI algorithm and how it works behind the scenes. I'll go through a detailed Python SHAP example and explain how to read its output graphs.
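As a taste of the "behind the scenes" part: SHAP is built on Shapley values, which average a feature's marginal contribution over all coalitions of the remaining features. The brute-force sketch below (pure Python, toy linear model — all names are illustrative, not the SHAP library's API) computes exact Shapley values; SHAP's algorithms approximate this efficiently for real models.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one instance x: average feature i's
    marginal contribution over every coalition S of the other features.
    Features outside the coalition are set to their baseline values.
    Exponential in the number of features, so toy-sized only."""
    n = len(x)
    players = range(n)

    def value(coalition):
        z = [x[i] if i in coalition else baseline[i] for i in players]
        return predict(z)

    phi = [0.0] * n
    for i in players:
        others = [j for j in players if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy linear model: for linear models, phi_i = w_i * (x_i - baseline_i).
predict = lambda z: 2.0 * z[0] - 1.0 * z[1] + 0.5 * z[2]
x, base = [3.0, 1.0, 4.0], [0.0, 0.0, 0.0]
phi = shapley_values(predict, x, base)

# Efficiency property: the contributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (predict(x) - predict(base))) < 1e-9
```

This efficiency property is what makes SHAP plots readable: each prediction decomposes exactly into a baseline plus per-feature contributions.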

