Journey Behind Einstein Analytics’ New Data Platform

By now you might have heard that the new Data Platform in Einstein Analytics is going to change the way you work with data. You might have read an overview of what the new Data Platform is all about, as well as a deep dive into its features, because after all, it is a BIG change! And while it may be exciting for some, it can also be scary for others. That is why the Data Manager teams behind the new Data Platform put an enormous amount of care, thought, and hard work into it over the last two years to make sure we built it right. In this blog, we want to share our story of why we decided to invest in building the new Data Platform and the steps we took to shape it the way it is today.

It all started with a dilemma…

In mid-2018, user research showed that preparing data was the most challenging and frustrating part of using Einstein Analytics. It was taking users 3-6 months to become successful at preparing data and get started with Einstein Analytics.

The Einstein Analytics team knew that it was critical to improve the data prep experience to decrease the onboarding time, but it faced a dilemma about where to start because of the conflicting recommendations it was hearing from different customer-facing teams. For example:

  • Should we prioritize improving the Dataflows or the Recipes?
  • Should we prioritize providing a better onboarding experience for new users or more capabilities for power users?
  • Should we prioritize providing more connectors, richer transformations, or better scalability to handle larger data?

To overcome this dilemma, we decided to take a user-centric approach

This involved taking the following steps:

  1. Understand Our Users
  2. Understand Their Pain Points
  3. Identify Opportunities
  4. Translate Opportunities Into Solutions
  5. Iterate Till We Get It Right

Understand our users

And so we decided to go back to the basics and understand our users – more specifically the analysts who were mainly responsible for preparing data at their companies. And we realized there were three different types of such analysts (depending on one’s experience and background with other analytics tools): Salesforce Expert, BI Expert, and ETL Expert.

Experience with other analytics tools was important because it shaped one’s mental model and expectations around preparing data. In other words, whenever you try a new tool, you expect it to be similar to other tools you might have used in the past to do the same job (which in this case is preparing data). And depending on which analytics tools you used in the past, you will need different things in a data prep tool to make it seem approachable, learnable & intuitive to use!

And once we understood our users and their mental models, we realized that the old Data Platform did not align with any of the three mental models – this was why it was taking almost all users around 3-6 months to become successful and confident in preparing data and getting started with Einstein Analytics.

Understand users’ pain points

Next, we decided to map the user journey of preparing data to understand where exactly the Data Platform was not aligning with their mental model and causing pain points. And based on the journey map, we noticed some big patterns (as shown in the picture below).

  1. We noticed that, in the beginning, both Salesforce Experts and BI Experts preferred using Recipes to prepare data in Einstein Analytics because it was a much more user-friendly tool to get started with. ETL Experts, on the other hand, preferred starting with Dataflows because they felt it was a much more powerful and complete tool for preparing data.
  2. After some time, users working with Recipes would realize that they lacked some core data prep functionality (e.g. case statements, branching, etc.) and caused data duplication and data sync issues, which would prompt them to switch to Dataflows (see the conceptual sketch after this list). However, using Dataflows came with its own set of troubles. For example, users often struggled with validation and troubleshooting issues and had to spend hours manually debugging and monitoring their Dataflows.
  3. Finally, irrespective of whether they used Recipes or Dataflows, users were unable to track changes or collaborate with other team members and had to find innovative workarounds.
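To make the “case statement” gap in point 2 concrete, here is a minimal, hypothetical sketch in pandas of what such a transform does: it derives a labeled column from conditions on an existing one. This is purely conceptual and is not Recipe or Dataflow syntax; the column names and revenue tiers are illustrative assumptions.

```python
import pandas as pd

# Conceptual illustration only: pandas, not Recipe/Dataflow syntax.
# A "case statement" transform derives a labeled column from
# conditions on another column. Column names and tiers are made up.
df = pd.DataFrame({
    "Account": ["Acme", "Globex", "Initech"],
    "AnnualRevenue": [1_200_000, 250_000, 4_000],
})

def revenue_tier(revenue: int) -> str:
    # Equivalent of CASE WHEN ... THEN ... ELSE ... END
    if revenue >= 1_000_000:
        return "Enterprise"
    elif revenue >= 100_000:
        return "Mid-Market"
    return "SMB"

df["RevenueTier"] = df["AnnualRevenue"].apply(revenue_tier)
print(df)
```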

Identify opportunities

As we did deep dives with all three types of users to understand their needs around preparing data in Einstein Analytics, the challenges they faced, and the workarounds they used, we also explored what their ideal data prep experience would look like. In the end, we identified 11 user needs that were common across all the user types and translated them into opportunities that could broadly be grouped into four categories:

  1. Help me understand my data and flow easily and quickly
  2. Help me create a flow & iterate on it easily and quickly
  3. Help me validate my flow easily and quickly
  4. Help me manage and scale my flows easily and quickly

Translate opportunities into solutions

Based on these opportunities, the Einstein Analytics Data Prep team came up with over 200 ideas to not just bring the best of Recipes and Dataflows together but also create a new Data Platform that addressed the above user needs. We even had these ideas and priorities validated by ~80 Einstein Analytics users to make sure we were heading in the right direction. And that is how we ended up with the different features you saw in the blog that deep-dived into the Data Platform’s features:

  1. Understand
     • New layout with live data preview
     • Persisted node placement
     • Track lineage
  2. Create & Iterate
     • Easy join, branching & smart transforms
     • Undo & Versioning
     • Security Inheritance
  3. Validate & Troubleshoot
     • Instant Validation
     • Smart suggestions
     • Monitor entire data pipelines
  4. Manage & Scale
     • Orchestrate flows
     • More connectors with 10B rows
     • Integrate with Tableau

Iterate till we get it right

All this was just the beginning of the journey! Once we knew what we wanted to build, we spent the next year-plus iterating on the designs and implementations to get the new Data Platform and Data Prep 3.0 just right (see how the designs progressed in the image below). We have run over 11 studies in the last 15 months, speaking with over 130 users, and with the Summer ’20 release we are starting an open beta for Data Prep 3.0 where you have the opportunity to provide feedback!

So what’s next?

The Data Manager teams still have a long way to go as we bridge the functionality gap with Dataflows, build richer features like flattening a role hierarchy, custom case statements, data partitioning, etc., and provide scheduling and orchestration capabilities. To learn more about the new Data Platform or Data Prep 3.0, check out this blog. If you’d like to provide feedback on any of them or be part of defining the future of Salesforce products, please sign up for the Salesforce Research Program!

If you enjoyed reading this blog or would like to share your thoughts or provide feedback, then please leave your comments on this page. We are looking forward to learning from you!

*Forward-looking statement

This content contains forward-looking statements that involve risks, uncertainties, and assumptions. If any such uncertainties materialize or if any of the assumptions proved incorrect, the results of salesforce.com, inc. could differ materially from the results expressed or implied by the forward-looking statements we make.

Any unreleased services or features referenced in this document or other presentations, press releases or public statements are not currently available and may not be delivered on time or at all. Customers who purchase our services should make the purchase decisions based upon features that are currently available. Salesforce.com, inc. assumes no obligation and does not intend to update these forward-looking statements.


9 thoughts on “Journey Behind Einstein Analytics’ New Data Platform”

  • itzik sadeh on July 13, 2020

    Great article and a great improvement to the current recipe solution.

    • Richa Prajapati on July 19, 2020

      Thank you Itzik! Glad to know we are heading in the right direction. 🙂

  • Richard Wilson on July 13, 2020

    A very interesting journey that I had no idea the team was on, thank you for the write-up.

    • Richa Prajapati on July 19, 2020

      Thank you Richard! Customer success is critical to everything we do at Salesforce, and with this article we just wanted to give a sneak peek into how we manifest it in our work every day…

  • Anil on July 14, 2020

    Thank you, Richa, for the thoughtful and insightful article. It really shows the timelapse of the Einstein Analytics evolution. I have been using it for 3 months and can relate to a few of the pain points.
    I like the Schema Builder approach in Salesforce, which helps us map the entity-relationship diagram between objects; is there a similar way in Einstein data prep to map and show the relationships between objects? Secondly, summarization is still only at a numeric level. We don’t have summarization at the String level, like summarizing case comments data at the case level; this would help us do prediction or sentiment analysis at the parent level (case) using child data (comments)…

    • Richa Prajapati on July 19, 2020

      Thank you for the feedback Anil! Those are some great ideas and I will pass them on to the team. You can also sign up for the Research Program to continue getting insights into what’s coming next and even shape their future!

  • Ankita Dutta on July 15, 2020

    Great article. Very insightful and well written.

    • Richa Prajapati on July 19, 2020

      Thank you for your kind words Ankita! 😀
