Einstein Analytics Recipes: Second Round

Earlier this year Salesforce released recipes in Wave: a simple way for you to transform your data. I liked the idea of recipes and decided to use them in one of my projects. As you could read in one of my blog posts, there were good and bad elements to recipes, so with the Summer '17 release I thought it would be good to see where we stand with recipes.

Add data sources

When you add data sources to your recipe, you can use replicated data as well as datasets (the former, of course, only if you have replicated data available). When mapping two data sources you can now add multiple keys, in other words a composite key. This is particularly useful if you do not have a unique id to match the two data sources; instead you can, for instance, combine name and company as a key. That said, I would still recommend a unique id if possible.
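
Outside the recipe UI, a composite key is simply a join on multiple columns at once. A minimal pandas sketch (the column names and data are just illustrative):

```python
import pandas as pd

# Two sources with no shared unique id; name + company together act as the key.
contacts = pd.DataFrame({
    "name": ["Ada", "Ada", "Grace"],
    "company": ["Acme", "Initech", "Acme"],
    "email": ["ada@acme.com", "ada@initech.com", "grace@acme.com"],
})
orders = pd.DataFrame({
    "name": ["Ada", "Grace"],
    "company": ["Acme", "Acme"],
    "amount": [100, 250],
})

# Composite-key join: rows match only where BOTH columns agree,
# so the two different "Ada" rows are kept apart.
merged = contacts.merge(orders, on=["name", "company"], how="left")
```

Note how a join on name alone would wrongly match both Adas; the second key column is what keeps them distinct.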

Bucket fields

I’ve been a fan of bucket fields from the start, both in Salesforce operational reports and Einstein Analytics (Wave). With the new release it’s now possible to bucket date fields, using both “absolute” and “relative” dates. With absolute dates you define to and from dates, whereas relative dates allow you to bucket by year, quarter, month, week and day. I see this as a very powerful way to group data based on business-specific dates; I already know a few of my customers who could use this.
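
The absolute-versus-relative distinction can be illustrated outside the recipe UI; a small pandas sketch (dates and labels invented for the example):

```python
import pandas as pd

dates = pd.Series(pd.to_datetime(["2017-01-15", "2017-05-02", "2017-11-30"]))

# Absolute: explicit to/from boundaries define each bucket.
absolute = pd.cut(
    dates,
    bins=[pd.Timestamp("2017-01-01"),
          pd.Timestamp("2017-07-01"),
          pd.Timestamp("2018-01-01")],
    labels=["H1 2017", "H2 2017"],
)

# Relative: bucket by calendar period (here quarter); no fixed boundaries needed.
relative = dates.dt.to_period("Q").astype(str)
```

Absolute buckets stay fixed as new data arrives, while relative buckets move with the calendar — which is exactly why relative dates suit recurring business reporting.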

Transform data

There are two improvements when it comes to the transformation of data: “convert to measure” and “extract date fields”.

Convert to measure does what it says; it takes a dimension and converts it into a measure. This can be handy if you want to do calculations with dimensions. For instance, if you want to calculate age, you can convert a year to a measure and use the new field/column in a calculation. Now we just need a NOW function for formulas… That being said, I’ll probably still do this calculation in Salesforce.

Extract date fields allows you to generate a new field/column that extracts a year, quarter, month, week, day, hour, minute, second, day epoch or second epoch. This will be handy if you have to do the calculation from above: first extract a year from a date field/column, then convert it into a measure and do your calculation.
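
The two steps combined — extract the year, then treat it as a measure — sketched outside the recipe UI in pandas (the column names are illustrative, and since the recipe formula has no NOW function, the reference year is hard-coded):

```python
import pandas as pd

df = pd.DataFrame({"birth_date": pd.to_datetime(["1980-06-01", "1992-03-15"])})

# Step 1: extract the year as a new text column (a dimension in recipe terms).
df["birth_year"] = df["birth_date"].dt.year.astype(str)

# Step 2: convert the dimension to a number (a measure) and calculate age.
reference_year = 2017  # stand-in for the missing NOW() in recipe formulas
df["age"] = reference_year - df["birth_year"].astype(int)
```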

Adjust names

This is great! I can now change the names of my columns! So when Einstein Analytics automatically names a column after a transformation, bucket field or similar, I can now choose what the end user will see as well as what the user doesn’t see (the API name). This was something I was rather annoyed with before, as I would have to modify the XMD of the dataset. Note that only the API name of fields created by the recipe can be edited.

Dynamic scheduling

One thing I actually hadn’t prioritized, but still find quite useful, is that the scheduling of the recipe can now be dynamic. You are no longer stuck with every day or every hour; in fact, you can set it to run every Monday, every month on the 1st, or maybe the first Wednesday of the month. If you are looking at situations where external data is brought into Einstein Analytics at a specific time every month, you can time your recipes to generate the dataset at that specific time.
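
For illustration, pinning down one of those dynamic schedules — say, the first Wednesday of the month — takes only a few lines of date math; a minimal Python sketch, independent of the recipe UI:

```python
import calendar
from datetime import date

def first_weekday(year: int, month: int, weekday: int) -> date:
    """Return the date of the first given weekday (Mon=0 .. Sun=6) in a month."""
    first = date(year, month, 1)
    offset = (weekday - first.weekday()) % 7  # days from the 1st to the target weekday
    return first.replace(day=1 + offset)

# First Wednesday of September 2017.
run_date = first_weekday(2017, 9, calendar.WEDNESDAY)
```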

What I am still waiting for…

I know you cannot get everything in one release, but I am still missing five things that would make recipes more powerful and user-friendly:

  • Allow me to easily deploy my recipes! It seems like an absolute must, so I am disappointed it hasn’t been prioritized more. Best practice is generally to develop in a sandbox environment, but if I can’t deploy… well, that makes things complicated.
  • Allow me to rename my datasets! Yes, I am happy I can change the field names, but I would love to be able to do the same for the actual dataset when I clone it. The new dataset is still named exactly the same as the master, making it rather hard to tell the difference.
  • It would be great if I could search for blanks or write formulas checking blank values… frankly, there are always a lot of blank values, and it would be great to be able to set some processes up around that.
  • It would also be great if recipes didn’t create a new field every time I set up a search and replace for a field. I guess for now it is still better to use bucket fields. Nonetheless, I will be waiting for this feature…
  • Advanced formulas would be great! For those of us who are used to Salesforce formula fields, it would be great to have the same flexibility in the recipe’s formula feature. The functions are still rather limited.


I will be waiting patiently for the Winter '18 release; maybe my wishes will come true…


6 thoughts on “Einstein Analytics Recipes: Second Round”

  • 1
    Liu Yongyan on February 8, 2018

    I can’t agree more!
    When I want to copy datasets for backup (data in datasets is only retained for 30 days), a recipe can be scheduled to clone the dataset, but it cannot rename the copy to something like ‘DatasetXXX_20171201’ on each execution. That means I have to download the dataset after each copy in case it gets overridden. I really hope there is some tool to help me out.

    • 2
      Rikke on February 8, 2018

      I’m not sure what your use case is? Creating a new dataset every time it runs would not be beneficial; you would have to manually point your dashboards to the new dataset every time.

      • 3
        Liu Yongyan on February 8, 2018

        Hi Rikke.
        Thank you for your prompt reply!
        My use case has two purposes.
        One is to create dashboards from the dataset like normal users do.
        The other is to back up (download locally, etc.) the data in the shape of a dataset.
        What can I do to meet the other purpose?
        As a beginner with salesforce.com, I can only think of using recipes to clone datasets regularly, rename them like ‘DatasetXXX_20171201’, and then download them manually.
        What is your idea?
        Thank you so much.

  • 5
    Rocco on August 29, 2018

    Hi Rikke,
    learning more from your posts, I hope you can help me.
    Probably it’s very easy, but I have a big problem. In my dataflow, I have used “Default value” to insert a value for nulls in a date field. (My first question: is it possible to dynamically insert today’s date?)
    For now I am inserting a static value that I then use as a condition to create a new flag attribute field. Using a computeExpression I now want to replace the static value with today’s date; is that possible?
    I have also tried search and replace in a recipe, but I can’t replace with a dynamic value.
    Many thanks in advance

    • 6
      Rikke on September 19, 2018

      Hi Rocco,

      No, unfortunately not. However, you could have a computeExpression that looks to find your default date and replace it with now().
