A Look to the Future with Timeseries – Part 2

In my last post, I introduced one of my highlights from the Einstein Analytics Winter 19 release: timeseries – the ability to predict future trends based on your data. We looked at the basics of the SAQL 'timeseries' statement as well as the 'fill' statement. Now, I am not done with this topic, because there is more we can do! So let's get to it!

The difference between organic and conventional avocados

In the first part of this blog series, I introduced a dataset around avocado prices; in this example, we will continue looking into the price trends of this amazing fruit! So make sure you have the avocado dataset available in Einstein Analytics. If you don't have it just yet, check out the link in the first post. Regardless, I recommend you read that post first, as there are certain concepts I have explained there that I will skip in this post.

Okay, so we have already looked at how the average avocado prices are developing, and we have also seen the trend with a prediction interval. Now wouldn't it be great to see the price trend for organic and conventional avocados separately? Regardless of your answer, that's what I will be showing now.

First, of course, we need the basics, so go ahead and explore your avocado dataset: choose Average of 'AveragePrice' as the measure and group by 'Date' (Year-Month) plus 'type'. You will end up with a chart like the one below. It is worth knowing that you can currently only group by one additional field besides your date when using timeseries.

Just as we did in the previous blog post on timeseries, we need to switch to SAQL mode to use this new statement. So in the top right corner select 'SAQL Mode'.

You should now have something like the query below, which basically just loads the dataset, groups by 'Date' and 'type', and then projects and orders the result by 'Date' and 'type'.

q = load "avocado";
q = group q by ('Date_Year', 'Date_Month', 'type');
q = foreach q generate 'Date_Year' + "~~~" + 'Date_Month' as 'Date_Year~~~Date_Month', 'type' as 'type', avg('AveragePrice') as 'avg_AveragePrice';
q = order q by ('Date_Year~~~Date_Month' asc, 'type' asc);
q = limit q 2000;

The first thing we want to add is a new 'foreach' statement after our grouping (line 2). It's almost a copy of the existing 'foreach' statement (line 3); however, we are not joining our date parts together.

q = load "avocado";
q = group q by ('Date_Year', 'Date_Month', 'type');
q = foreach q generate 'Date_Year', 'Date_Month', 'type', avg('AveragePrice') as 'avg_AveragePrice';
q = foreach q generate 'Date_Year' + "~~~" + 'Date_Month' as 'Date_Year~~~Date_Month', 'type' as 'type', avg('AveragePrice') as 'avg_AveragePrice';
q = order q by ('Date_Year~~~Date_Month' asc, 'type' asc);
q = limit q 2000;

As I mentioned in the previous blog, we may realize that there are gaps in our dates so we can use the ‘fill’ statement to fill in those missing dates. Please feel free to look at the documentation here.

The 'fill' statement will be added after our new 'foreach' statement. Looking back to when we used the 'fill' statement in part 1, there is one major difference: we now want to group by 'type'. This means we need to add the 'partition' parameter to our 'fill' statement, as you see below.

q = load "avocado";
q = group q by ('Date_Year', 'Date_Month', 'type');
q = foreach q generate 'Date_Year', 'Date_Month', 'type', avg('AveragePrice') as 'avg_AveragePrice';
q = fill q by (dateCols=('Date_Year', 'Date_Month', "Y-M"), partition='type');
q = foreach q generate 'Date_Year' + "~~~" + 'Date_Month' as 'Date_Year~~~Date_Month', 'type' as 'type', avg('AveragePrice') as 'avg_AveragePrice';
q = order q by ('Date_Year~~~Date_Month' asc, 'type' asc);
q = limit q 2000;

With that done we can now add the actual ‘timeseries’ statement to our SAQL query. I’ve kept my ‘timeseries’ statement simple and only included the parameters ‘length’, ‘dateCols’ and of course ‘partition’. We need ‘partition’ since we want to differentiate between organic and conventional avocado prices. Note that you can, of course, add more parameters should you wish to – have a look at the documentation for possible parameters.

q = load "avocado"; 
q = group q by ('Date_Year', 'Date_Month', 'type'); 
q = foreach q generate 'Date_Year', 'Date_Month', 'type', avg('AveragePrice') as 'avg_AveragePrice'; 
q = fill q by (dateCols=('Date_Year', 'Date_Month', "Y-M"), partition='type'); 
q = timeseries q generate 'avg_AveragePrice' as 'Forecasted_Avg_Avocado_Price' with (length=12, dateCols=('Date_Year', 'Date_Month', "Y-M"), partition='type'); 
q = foreach q generate 'Date_Year' + "~~~" + 'Date_Month' as 'Date_Year~~~Date_Month', 'type' as 'type', avg('AveragePrice') as 'avg_AveragePrice'; 
q = order q by ('Date_Year~~~Date_Month' asc, 'type' asc); 
q = limit q 2000;

Next, we need to modify the last 'foreach' statement. All I want to do in this scenario is change the projection of the measure to either take 'avg_AveragePrice' from the data or, if that is null, the predicted value from the timeseries calculation. I have used the coalesce function for this – exactly the same as in part 1 of this blog series.

q = load "avocado";
q = group q by ('Date_Year', 'Date_Month', 'type');
q = foreach q generate 'Date_Year', 'Date_Month', 'type', avg('AveragePrice') as 'avg_AveragePrice';
q = fill q by (dateCols=('Date_Year', 'Date_Month', "Y-M"), partition='type');
q = timeseries q generate 'avg_AveragePrice' as 'Forecasted_Avg_Avocado_Price' with (length=12, dateCols=('Date_Year', 'Date_Month', "Y-M"), partition='type');
q = foreach q generate 'Date_Year' + "~~~" + 'Date_Month' as 'Date_Year~~~Date_Month', 'type', coalesce('avg_AveragePrice', 'Forecasted_Avg_Avocado_Price') as 'Avg Price';
q = order q by ('Date_Year~~~Date_Month' asc, 'type' asc);
q = limit q 2000;

You can now go ahead and click 'Run Query' and see the result update. If you look at your chart, it should look similar to mine. Pretty cool, right? We can now see that organic prices are generally higher than conventional ones, and that the predictions for organic and conventional avocados are slightly different: one trends slightly up while the other trends slightly down.

Filtering out old data

Now, this particular chart is fine to look at, but what if you had data for the last 8 years? Your chart would be rather compact and you may even have to scroll to see all the data points. But do you really care to see data that is 8 years old? Probably not; you are really just interested in what's going to happen. So let's try to filter out data that is older than last year.

Normally when we work with filters we would add them right after the load statement – you can try to explore a dataset, add a filter and switch to SAQL mode to see this. The reason we do this is that we don't want to group data we don't need, as that has an impact on query performance. But in this case, if we filter right after our 'load' statement, then our 'timeseries' statement will not use all the available data, the confidence in the prediction would be lower, and hence the prediction would be less accurate. So in this particular scenario, we want to apply a post-projection filter. This basically just means we will add the filter after our last 'foreach' statement.
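
To make the contrast concrete, a normal pre-aggregation filter would sit right after the 'load' statement and before the grouping – exactly what we want to avoid here, because the timeseries calculation would then only see the filtered rows. A rough sketch of that placement, showing only the first statements (the date range is just an example):

q = load "avocado";
q = filter q by date('Date_Year', 'Date_Month', 'Date_Day') in ["1 year ago".."current month"];
q = group q by ('Date_Year', 'Date_Month', 'type');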

Now there is a little trick to this. If you notice, a date filter contains all the date parts, like this:

q = filter q by date('Date_Year', 'Date_Month', 'Date_Day') in ["current year" .. "current month"];

However, timeseries does not support the day part. This means we have to be a little creative and create a static day part that we can use. We do this by modifying the last 'foreach' statement: we first separate our date parts, since we cannot use 'Date_Year~~~Date_Month' in a date filter, and we also add the static day part "01" and call it 'Date_Day'.

q = load "avocado"; 
q = group q by ('Date_Year', 'Date_Month', 'type'); 
q = foreach q generate 'Date_Year', 'Date_Month', 'type', avg('AveragePrice') as 'avg_AveragePrice'; 
q = fill q by (dateCols=('Date_Year', 'Date_Month', "Y-M"), partition='type'); 
q = timeseries q generate 'avg_AveragePrice' as 'Forecasted_Avg_Avocado_Price' with (length=12, dateCols=('Date_Year', 'Date_Month', "Y-M"), partition='type'); 
q = foreach q generate 'Date_Year', 'Date_Month', "01" as 'Date_Day', 'type', coalesce('avg_AveragePrice', 'Forecasted_Avg_Avocado_Price') as 'Avg Price'; 
q = order q by ('Date_Year~~~Date_Month' asc, 'type' asc); 
q = limit q 2000;

With that done, we can now filter out anything older than last year. We also want to make sure that our filter includes our predictions, so if you have defined 'length' as 12 in the 'timeseries' statement, you want to reflect those 12 months in the filter. Meaning, if you modified the length to be 6 or 20, you of course want your filter to match.

q = load "avocado"; 
q = group q by ('Date_Year', 'Date_Month', 'type'); 
q = foreach q generate 'Date_Year', 'Date_Month', 'type', avg('AveragePrice') as 'avg_AveragePrice';
q = fill q by (dateCols=('Date_Year', 'Date_Month', "Y-M"), partition='type');
q = timeseries q generate 'avg_AveragePrice' as 'Forecasted_Avg_Avocado_Price' with (length=12, dateCols=('Date_Year', 'Date_Month', "Y-M"), partition='type');
q = foreach q generate 'Date_Year', 'Date_Month', "01" as 'Date_Day', 'type', coalesce('avg_AveragePrice', 'Forecasted_Avg_Avocado_Price') as 'Avg Price'; 
q = filter q by date('Date_Year', 'Date_Month', 'Date_Day') in ["1 year ago".."12 months ahead"];  
q = order q by ('Date_Year~~~Date_Month' asc, 'type' asc); 
q = limit q 2000;
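
As a quick sanity check on that range: if you had, for example, set 'length' to 6 in the 'timeseries' statement, the filter should only look 6 months ahead. A hypothetical variation, showing only the two statements that change:

q = timeseries q generate 'avg_AveragePrice' as 'Forecasted_Avg_Avocado_Price' with (length=6, dateCols=('Date_Year', 'Date_Month', "Y-M"), partition='type');
q = filter q by date('Date_Year', 'Date_Month', 'Date_Day') in ["1 year ago".."6 months ahead"];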

Now we want to make one final projection where we combine the year and the month but throw away the day, since we don't need it anymore. We also don't need to repeat the coalesce function; instead, we can just take the 'Avg Price' we projected previously. The final 'foreach' statement and query should look like below.

q = load "avocado";
q = group q by ('Date_Year', 'Date_Month', 'type');
q = foreach q generate 'Date_Year', 'Date_Month', 'type', avg('AveragePrice') as 'avg_AveragePrice';
q = fill q by (dateCols=('Date_Year', 'Date_Month', "Y-M"), partition='type');
q = timeseries q generate 'avg_AveragePrice' as 'Forecasted_Avg_Avocado_Price' with (length=12, dateCols=('Date_Year', 'Date_Month', "Y-M"), partition='type');
q = foreach q generate 'Date_Year', 'Date_Month', "01" as 'Date_Day', 'type', coalesce('avg_AveragePrice', 'Forecasted_Avg_Avocado_Price') as 'Avg Price';
q = filter q by date('Date_Year', 'Date_Month', 'Date_Day') in ["1 year ago".."12 months ahead"];
q = foreach q generate 'Date_Year' + "~~~" + 'Date_Month' as 'Date_Year~~~Date_Month', 'type', 'Avg Price';
q = order q by ('Date_Year~~~Date_Month' asc, 'type' asc);
q = limit q 2000;

If you run your query and switch back to chart mode, you should have something like this.

Before you go

You can take the learnings from part 1 and part 2 of this blog series, combine the examples and end up with one awesome timeseries query.

q = load "avocado";
q = group q by ('Date_Year', 'Date_Month', 'type');
q = foreach q generate 'Date_Year', 'Date_Month', 'type', avg('AveragePrice') as 'avg_AveragePrice';
q = fill q by (dateCols=('Date_Year', 'Date_Month', "Y-M"), partition='type');
q = timeseries q generate 'avg_AveragePrice' as 'Forecasted_Avg_Avocado_Price' with (length=12, dateCols=('Date_Year', 'Date_Month', "Y-M"), partition='type', predictionInterval=95);
q = foreach q generate 'Date_Year', 'Date_Month', "01" as 'Date_Day', 'type', coalesce('avg_AveragePrice', 'Forecasted_Avg_Avocado_Price') as 'Avg Price', 'Forecasted_Avg_Avocado_Price_high_95' as 'High Forecast', 'Forecasted_Avg_Avocado_Price_low_95' as 'Low Forecast';
q = filter q by date('Date_Year', 'Date_Month', 'Date_Day') in ["1 year ago".."12 months ahead"];
q = foreach q generate 'Date_Year' + "~~~" + 'Date_Month' as 'Date_Year~~~Date_Month', 'type', 'Avg Price', 'High Forecast', 'Low Forecast';
q = order q by ('Date_Year~~~Date_Month' asc, 'type' asc);
q = limit q 2000;

I do have to say many thanks to Pete Lyons and Antonio Scaramuzzino for brainstorming on the post-projection filter.

A Look to the Future with Timeseries – Part 1

It's Winter 19 release time! And that means we get some awesome new features in our production environment! One of the things I am excited about is the new timeseries SAQL function. Why? Well, if you look at trends when it comes to big data, it is all about machine learning and AI. Einstein Analytics hasn't really had any way to look at predictions – that's where Einstein Discovery comes in. With timeseries we can actually start predicting what is going to happen based on patterns in our existing data. Let's have a look at an example.

Prepping your data

You can, of course, use any existing datasets that you have or create a new one with your data from Salesforce. However, I decided to look elsewhere for some data that I could use, so I turned to Kaggle and found some data on avocado prices – because who does not love avocado? Anyway, if you want to do the same as me, you can find the data here. We are not going to use all the measures and dimensions, and frankly, I don't know how accurate this dataset is, but it doesn't matter. Let's have some fun.

Make sure you download the dataset, unzip it and upload it to Einstein Analytics. Notice that the UI has changed a little bit.

I am sure you are a smart cookie so I won’t go through how you upload the csv file, but once it’s uploaded hit “Explore” in the top right corner.

Let’s look to the future

The very first thing we want to do is to choose the measure average of AveragePrice and group by Date (Year-Month).

Now it's time for the fun part: we will move away from chart mode and into SAQL mode. So click the SAQL icon in the top right corner. At any point you can refer to the timeseries documentation here; there are more parameters available than I will be using in my example.

Alright, if you are new to SAQL this may look a little scary, but it really isn't. All it's doing is what you told it to: load the avocado dataset, group by the date field and show the average of AveragePrice. Yes, there is also an order and a limit. Anyway, your query should look something like this:

q = load "avocado";
q = group q by ('Date_Year', 'Date_Month');
q = foreach q generate 'Date_Year' + "~~~" + 'Date_Month' as 'Date_Year~~~Date_Month', avg('AveragePrice') as 'avg_AveragePrice';
q = order q by 'Date_Year~~~Date_Month' asc;
q = limit q 2000;

The first thing we want to add is a new 'foreach' statement after our grouping (line 2). It's almost a copy of the existing 'foreach' statement (line 3); however, we are not joining our date parts together.

q = load "avocado";
q = group q by ('Date_Year', 'Date_Month');
q = foreach q generate 'Date_Year', 'Date_Month', avg('AveragePrice') as 'avg_AveragePrice';
q = foreach q generate 'Date_Year' + "~~~" + 'Date_Month' as 'Date_Year~~~Date_Month', avg('AveragePrice') as 'avg_AveragePrice';
q = order q by 'Date_Year~~~Date_Month' asc;
q = limit q 2000;

Now, when I group by date there may be months where we do not have any data; in order to avoid these gaps we can use the new 'fill' statement to fill in the missing dates. Please feel free to look at the documentation here.

The 'fill' statement will be added after our new 'foreach' statement. For this to work, make sure that if you have grouped by Year-Month you use the same combination in the 'fill' statement and don't switch to, let's say, Year-Quarter. The syntax is highlighted below.

q = load "avocado";
q = group q by ('Date_Year', 'Date_Month');
q = foreach q generate 'Date_Year', 'Date_Month', avg('AveragePrice') as 'avg_AveragePrice';
q = fill q by (dateCols=('Date_Year', 'Date_Month', "Y-M"));
q = foreach q generate 'Date_Year' + "~~~" + 'Date_Month' as 'Date_Year~~~Date_Month', avg('AveragePrice') as 'avg_AveragePrice';
q = order q by 'Date_Year~~~Date_Month' asc;
q = limit q 2000;

With the gaps filled, we have now come to adding our 'timeseries' statement – yay! What we need to do is make room for it after our new 'fill' statement by pressing enter. The syntax is to first call the timeseries and then define the parameters that we want to use; not all of them are mandatory, so please refer to the documentation for details – I will be using 'length' and 'dateCols'. 'dateCols' is the time period and 'length' is how many of those time periods we want to predict. And of course the measure you see before all the parameters is the measure that is being predicted.

q = load "avocado";
q = group q by ('Date_Year', 'Date_Month');
q = foreach q generate 'Date_Year', 'Date_Month', avg('AveragePrice') as 'avg_AveragePrice';
q = fill q by (dateCols=('Date_Year', 'Date_Month', "Y-M"));
q = timeseries q generate 'avg_AveragePrice' as 'Forecasted_Avg_Avocado_Price' with (length=12, dateCols=('Date_Year', 'Date_Month', "Y-M"));
q = foreach q generate 'Date_Year' + "~~~" + 'Date_Month' as 'Date_Year~~~Date_Month', avg('AveragePrice') as 'avg_AveragePrice';
q = order q by 'Date_Year~~~Date_Month' asc;
q = limit q 2000;

So we got this far. One more thing to do before we hit run is to make sure the last 'foreach' statement includes our new field 'Forecasted_Avg_Avocado_Price'; also, we don't need to call the aggregation again. So we just want to replace avg('AveragePrice') with the already projected 'avg_AveragePrice' and add the new measure to the last part of the 'foreach' statement.

q = load "avocado";
q = group q by ('Date_Year', 'Date_Month');
q = foreach q generate 'Date_Year', 'Date_Month', avg('AveragePrice') as 'avg_AveragePrice';
q = fill q by (dateCols=('Date_Year', 'Date_Month', "Y-M"));
q = timeseries q generate 'avg_AveragePrice' as 'Forecasted_Avg_Avocado_Price' with (length=12, dateCols=('Date_Year', 'Date_Month', "Y-M"));
q = foreach q generate 'Date_Year' + "~~~" + 'Date_Month' as 'Date_Year~~~Date_Month','avg_AveragePrice', 'Forecasted_Avg_Avocado_Price';
q = order q by 'Date_Year~~~Date_Month' asc;
q = limit q 2000;

Now you can hit run. Notice that 'avg_AveragePrice' holds the aggregation of your actual data and is blank for the 12 months in the future. However, those future values are filled in the new 'Forecasted_Avg_Avocado_Price' column.

This is not the end; we can do more. Maybe we want to see what the prediction interval looks like. We can add that to our 'timeseries' and last 'foreach' statements. The parameter is 'predictionInterval' and we can set it between 80 and 95, which will be the confidence level for the prediction. Let's add it to the 'timeseries' statement as well as our 'foreach' statement, so it's being projected.

q = load "avocado";
q = group q by ('Date_Year', 'Date_Month');
q = foreach q generate 'Date_Year', 'Date_Month', avg('AveragePrice') as 'avg_AveragePrice';
q = fill q by (dateCols=('Date_Year', 'Date_Month', "Y-M"));
q = timeseries q generate 'avg_AveragePrice' as 'Forecasted_Avg_Avocado_Price' with (length=12, dateCols=('Date_Year', 'Date_Month', "Y-M"), predictionInterval=95);
q = foreach q generate 'Date_Year' + "~~~" + 'Date_Month' as 'Date_Year~~~Date_Month', 'avg_AveragePrice', 'Forecasted_Avg_Avocado_Price', 'Forecasted_Avg_Avocado_Price_high_95', 'Forecasted_Avg_Avocado_Price_low_95';
q = order q by 'Date_Year~~~Date_Month' asc;
q = limit q 2000;

Notice that in my projection (the ‘foreach’ statement) I am calling my predicted measure again but adding ‘_high_95’ and ‘_low_95’. This is just the syntax for these fields. If I had called my measure ‘Price’ and I had used 80 in my prediction interval then it would have been ‘Price_high_80’ and ‘Price_low_80’. Once you have the new parameter added and the two extra fields projected feel free to run the query. You will now see the high and low values as well.
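
As a quick illustration of that naming convention – a hypothetical variation, not the query we are building – a forecast aliased 'Price' with an 80 prediction interval would be generated and projected like this:

q = timeseries q generate 'avg_AveragePrice' as 'Price' with (length=12, dateCols=('Date_Year', 'Date_Month', "Y-M"), predictionInterval=80);
q = foreach q generate 'Date_Year' + "~~~" + 'Date_Month' as 'Date_Year~~~Date_Month', 'avg_AveragePrice', 'Price', 'Price_high_80', 'Price_low_80';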

If I switch over to a timeline chart, it would look something like this – if you made the same changes to the chart of course.

We are not done yet. Let's clean up the chart a little bit – for instance, why not use 'avg_AveragePrice' when we have data and 'Forecasted_Avg_Avocado_Price' when we don't have any actuals? Also, the naming could be a little better. So switch back to SAQL mode.

We will be using the 'coalesce' function to take 'avg_AveragePrice' when we have data and, when that column is blank, take the predicted value instead. The syntax for 'coalesce' is simple:

coalesce('avg_AveragePrice', 'Forecasted_Avg_Avocado_Price') as 'Avg Price'

The part before the comma is the value we use by default, and the value after the comma is the one we use if the first turns out to be null. The trailing "as 'Avg Price'" is the new name for our column.

We are also renaming the high and low forecasts to be more meaningful, as you can see below.

q = load "avocado";
q = group q by ('Date_Year', 'Date_Month');
q = foreach q generate 'Date_Year', 'Date_Month', avg('AveragePrice') as 'avg_AveragePrice';
q = fill q by (dateCols=('Date_Year', 'Date_Month', "Y-M"));
q = timeseries q generate 'avg_AveragePrice' as 'Forecasted_Avg_Avocado_Price' with (length=12, dateCols=('Date_Year', 'Date_Month', "Y-M"), predictionInterval=95);
q = foreach q generate 'Date_Year' + "~~~" + 'Date_Month' as 'Date_Year~~~Date_Month', coalesce('avg_AveragePrice', 'Forecasted_Avg_Avocado_Price') as 'Avg Price', 'Forecasted_Avg_Avocado_Price_high_95' as 'High Forecast', 'Forecasted_Avg_Avocado_Price_low_95' as 'Low Forecast';
q = order q by 'Date_Year~~~Date_Month' asc;
q = limit q 2000;

You can now hit run and switch back to your chart mode and you will have a chart that is similar to mine – I have made a few more adjustments to the formatting of the chart.

If you want to do more with the new ‘timeseries’ statement, then have a look at the second part of this blog series.

Coming full circle with Marketing Cloud & Einstein Analytics

Marketing departments are no different from sales or finance; they too want insight. With all the data marketing is generating, it becomes that much more interesting to have a tool that can give you good insight. Salesforce has two marketing automation tools – Pardot and Marketing Cloud. Pardot already has an app that leverages Einstein Analytics, but Marketing Cloud doesn't; however, we do have a default connector we can use. Therefore, I decided to explore what we could do with Marketing Cloud and Einstein Analytics, which ended up being 5 blogs.

I hope this blog series has inspired you and shown you how to get Marketing Cloud and Einstein Analytics connected.

Marketing Smart with Einstein Analytics – Part 5

By now I have posted four blogs on how you can use Einstein Analytics for Marketing Cloud reporting. In this blog series we have covered how to get Marketing Cloud prepped, how to set up the connector in Einstein Analytics, and how to generate a dataset on open and click tracking data using the dataflow. So at this point, it's really up to you to be creative – and by the way, I would love to hear what you have done, so please do drop a comment if you want to share it. Anyway, there is one thing left to show. One thing that will lift your dashboards and make them even more powerful: send your data back into a journey in Marketing Cloud with a bulk action!

Yup, it is possible! But I cannot take credit for this one, because I have never been a Salesforce developer, and since you need Apex and Visualforce for bulk actions, this one is out of my skill set. Luckily, I know and work with some absolutely brilliant people, one of them being Brenda Campbell, a Marketing Cloud Solution Architect. So we can all thank Brenda for this insight.

Creating a marketing journey

The use case was to have a dashboard in Einstein Analytics where you can segment your contacts. Once you have segmented who you want to target, you can add these contacts to a journey in Marketing Cloud. So the first thing we need to do is, of course, make sure we have a Marketing Cloud journey as well as a data extension that triggers the journey – and hence is where we need to push our contacts to.

So log in to Marketing Cloud and navigate to Email Studio or Contact Builder to create your new data extension.

In Email Studio in the “Subscriber” tab choose “Data Extensions”.

Now click “Create” in the top right corner, choose “Standard Data Extension” and click “Ok”.

Now give your data extension a name; it doesn't really matter what you call it. Also, make sure you check the sendable checkbox and click "Next".

For the data retention policy, you can just click “next” without making any changes.

We need to add four fields to the data extension:

  • ContactID – type text – length 20 – primary key is true
  • EmailAddress – type email
  • Date_Added – type date – nullable is true – default current date
  • Sent_Email – type boolean – nullable is true – default true

Also, make sure the “Send Relationship” is “ContactID” before you click “Create”.

Note that you can add more fields but if you want them populated from Salesforce then you need to make sure you later modify the Apex.
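
If you do add extra fields to the data extension – say a hypothetical FirstName field – and want them populated from Salesforce, the 'Data' part of the request body built in the Apex class shown later in this post would need to carry them too. A rough sketch:

// Hypothetical extra field – 'firstName' would also need to be passed in from the bulk action query
String reqBody = '{"ContactKey": "' + contactId + '", "EventDefinitionKey": "APIEventXXXXXXXXXXXXXXXXXXXXX", "Data":{"ContactID": "' + contactId + '", "EmailAddress": "' + emailAddress + '", "FirstName": "' + firstName + '"}}';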

Next, we need to create a journey that uses this data extension. Before we get started on that, make sure that you have an email ready to use for this journey. However, in this guide I am not going to walk through how to create an email.

Now navigate to the Journey Builder by hovering over "Email" and choosing "Journey Builder" – twice.

In the top right corner go ahead and click “Create New Journey” and “Create Journey from Scratch”.

First up is, of course, to give your journey a name. Next, take the “API Event” and drag it into the “Start with an Entry Source”.

Then drag over the email in the step next to the “API Event”.

Click on the orange wait period to change it from 1 day to 1 minute, so contacts exit shortly after being sent the email.

Now let’s click on the “API Event” in order to define the entry source, which will be our data extension we created. So choose the data extension and click “Next”.

In this scenario, we don't need to apply a contact filter, so just skip this step and click "Next". When you see the summary screen, note and copy the Event Definition Key and paste it into a text document – we will need it later. Once you have that, just hit "Done".

Now we need to define the email we will be using, so click on the “Email” in the canvas, the activity you dragged over earlier. Find and select the email that you want to use and click “Next”.

Click “Next” until you get to the summary slide. You can of course choose to change some of the settings, but I am sticking with the default setup. Finally, make sure you agree to the setup and then hit “Done”.

We now need to define whether contacts can be added multiple times and what should happen if they are. So go to the settings by clicking the gear symbol.

I've chosen "Re-entry only after exiting", but it doesn't really matter what you select here; you just need to make a decision on contact entry.

The final step in the journey creation is to make sure you save and activate your journey.

One more note before we jump to Salesforce: we need to go back into the installed package – you would have created it following the steps in the first blog in this series.

The first thing here is to make sure you have at minimum "Read" and "Write" access to the data extensions. Next, you want to copy the Client Id and Secret and keep them together with your event definition key. If you are unsure how to do this, follow the steps in the first blog.

Prepping Salesforce

The assumption here is that you have Marketing Cloud Connect set up, so the two platforms are connected. One thing the app comes with is a connected app, which we will also use for this solution. Don't worry, we don't actually have to do anything with it; it's just important that it's there.

First up we need to set up two remote sites so we can access Marketing Cloud from Salesforce. So in Salesforce navigate to setup and search for “Remote Site” in the quick find. Click “New Remote Site”, give it a name and enter this URL:

https://www.exacttargetapis.com

Click “Save & New”.

Give the second remote site a name and use the following URL:

https://auth.exacttargetapis.com

Then click “Save”.
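
If you prefer to deploy these remote sites with the Metadata API rather than clicking through Setup, each one can be described as a RemoteSiteSetting. A rough sketch for the first URL (file name and folder follow your own packaging conventions):

<?xml version="1.0" encoding="UTF-8"?>
<RemoteSiteSetting xmlns="http://soap.sforce.com/2006/04/metadata">
    <description>Marketing Cloud REST API</description>
    <disableProtocolSecurity>false</disableProtocolSecurity>
    <isActive>true</isActive>
    <url>https://www.exacttargetapis.com</url>
</RemoteSiteSetting>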

Next up we need to create an Apex class, which we will call “AddToJourneyController”. Again you can use the quick find and search for “Apex classes” and click on “New”.

Now add the following code:

public class AddToJourneyController {
    public String query {get; set;}

    /* To determine the records to perform the bulk action on, extract the SAQL query */
    public PageReference init() {
        query = ApexPages.currentPage().getParameters().get('query');
        return null;
    }

    /* Takes the contact records from the SAQL query, calls the AddToJourney method to add to SFMC data extension and inject into journey */
    @RemoteAction
    public static Map<String, String> create(List<Map<String, String>> contactRecords) {
        Map<String, String> result = new Map<String, String>();
        String accessToken = getAccessToken();
        for (Map<String, String> contactRecord : contactRecords) {
            String email = contactRecord.get('Email');
            String contactId = contactRecord.get('Id');
            System.debug('Adding ' + email + ' and Id: ' + contactId + ' to SFMC using accessToken: ' + accessToken);
            AddContactToJourney(accessToken, contactId, email);
        }
        return result;
    }

    /* Connecting to SFMC with client ID and Secret */
    public static String getAccessToken() {
        String clientId = 'XXXXXXXXXXX';
        String clientSecret = 'XXXXXXXXXX';
        Http http = new Http();
        HttpRequest request = new HttpRequest();
        request.setEndpoint('https://auth.exacttargetapis.com/v1/requestToken');
        request.setMethod('POST');
        request.setHeader('Content-Type', 'application/json;charset=UTF-8');
        request.setBody('{"clientId":"'+ clientId + '","clientSecret":"' + clientSecret + '"}');
        HttpResponse response = http.send(request);
        // Parse the JSON response
        if (response.getStatusCode() != 201) {
            System.debug('The status code returned was not expected: ' +
                response.getStatusCode() + ' ' + response.getStatus());
        } else {
            System.debug(response);
        }
        String accesstoken = null;
        JSONParser parser = JSON.createParser(response.getBody());
        while (parser.nextToken() != null) {
            if (parser.getCurrentToken() == JSONToken.FIELD_NAME) {
                String fieldName = parser.getText();
                parser.nextToken();
                if (fieldName == 'accessToken') {
                    accesstoken = parser.getText();
                }
            }
        }
        System.debug('accesstoken => ' + accesstoken);
        return accesstoken;
    }

    /* Determine which journey and DE to add the contacts to */
    public static HttpResponse AddContactToJourney(String accessToken, String contactId, String emailAddress) {
        System.debug('Inside AddContactToJourney and Adding ' + emailAddress + ' and Id: ' + contactId + ' to SFMC using accessToken: ' + accessToken);
        String bearerToken = 'Bearer ' + accessToken;
        String reqBody = '{"ContactKey": "' + contactId + '", "EventDefinitionKey": "APIEventXXXXXXXXXXXXXXXXXXXXX", "Data":{"ContactID": "' + contactId + '", "EmailAddress": "' + emailAddress + '"}}';
        System.debug('reqBody is: ' + reqBody);
        Http http = new Http();
        HttpRequest request = new HttpRequest();
        request.setEndpoint('https://www.exacttargetapis.com/interaction/v1/events');
        request.setMethod('POST');
        request.setHeader('Content-Type', 'application/json;charset=UTF-8');
        request.setHeader('Authorization', 'Bearer ' + accessToken);
        request.setBody(reqBody);
        HttpResponse response = http.send(request);
        // Parse the JSON response
        if (response.getStatusCode() != 201) {
            System.debug('The status code returned was not expected: ' +
                response.getStatusCode() + ' ' + response.getStatus());
        } else {
            System.debug('The request executed ok ' + response);
        }
        return response;
    }
}

The code does several things: first it takes the contacts from the SAQL query in Einstein Analytics, then it connects to Marketing Cloud and pushes those contacts into the data extension that powers your journey.

As you hopefully can see, there are three things highlighted in this Apex class which we will need to modify. Hopefully you copied the Client Id, Secret, and API event definition key earlier, because we need them now.

Where it says “String clientId” and “String clientSecret” make sure you replace the X’s with the correct values from the installed package.

String clientId = 'XXXXXXXXXXX';
String clientSecret = 'XXXXXXXXXX';

Next, you need to modify the API event definition key. In the next highlighted bit, make sure to replace "APIEventXXXXXXXXXXXXXXXXXXXXX" with the event definition key from the API Event entry source in your journey.

"EventDefinitionKey": "APIEventXXXXXXXXXXXXXXXXXXXXX"

Finally hit “Save”.

A note on the Apex class: you can, of course, modify it so the keys are not hardcoded and instead come from custom settings. This approach would also allow you to make the journey selection more flexible, as right now contacts can only ever be added to this one journey.
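
As a rough sketch of that idea – assuming a hypothetical hierarchy custom setting called SFMC_Settings__c with the fields Client_Id__c, Client_Secret__c and Event_Definition_Key__c – the hardcoded strings could be swapped for something like this:

// Hypothetical custom setting – create SFMC_Settings__c and its fields under Setup > Custom Settings first
SFMC_Settings__c settings = SFMC_Settings__c.getOrgDefaults();
String clientId = settings.Client_Id__c;
String clientSecret = settings.Client_Secret__c;
String eventDefinitionKey = settings.Event_Definition_Key__c;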

Now the Apex class is not enough, we need to make sure we can trigger it as well, so for this, we will use a visualforce page.

Go to the quick find again and search for "Visualforce", choose "Visualforce Pages" and click "New". Give your new page a name; I've called mine "Add To Journey". The API name (or simply Name) is what is important to remember, because we need it later when we connect the final dots.

We, of course, need to add the code, so copy and paste the following code into the visualforce markup.

<apex:page controller="AddToJourneyController" action="{!init}" showheader="false" sidebar="false" standardStylesheets="false" title="Add To Journey" >
    <apex:stylesheet value="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css"/>
    <apex:includeScript value="https://code.jquery.com/jquery-3.1.0.min.js"/>

    <style>
        th {
            width: 50%;
        }
        h4 {
            font-size: 24px;
        }
        table {
            font-size: 20px;
            width: 100%;
        }
    </style>

    <div class="container-fluid">
        <h4 id="message">Querying Contacts...</h4>
        <table name="results" id="results" data-role="table" class="table-bordered table-striped table-responsive">
            <thead><tr><th>Contact</th><th>Journey</th></tr></thead>
            <tbody></tbody>
        </table>
    </div>

    <script>
        $(function() {
            $.ajaxSetup({
                headers: {"Authorization": 'Bearer {!$Api.Session_ID}'}
            });

            setTimeout(executeQuery, 1000);
        });

        function executeQuery() {
            var query = {};
            query.statements = "{!JSENCODE(query)}";
            var queryObj = {query: query.statements};
            $.ajax({
                type: 'POST',
                url: '/services/data/v39.0/wave/query',
                data: JSON.stringify(queryObj),
                contentType: 'application/json',
                success: function(data) {
                    $('#message').html('Adding to journey...');
                    var record = null;
                    var row = null;
                    $('#results tbody').empty();

                    for (var i = 0; i < data.results.records.length; i++) {
                        record = data.results.records[i];
                        row = $('<tr>');
                        row.append($('<td>').html(record['Email']));
                        row.append($('<td class="' + record['Email'] + '">').html('Complete...'));
                        $('#results tbody').append(row);
                    }
                    setTimeout(function() {addToJourney(data.results.records);}, 1000);
                }
            });
        }

        /* Calls the Apex controller method that adds contacts to the journey. */
        function addToJourney(contactRecords) {
            AddToJourneyController.create(contactRecords, function(result, event) {
                console.log(result);
                if (event.status) {
                    for (var i = 0; i < contactRecords.length; i++) {
                        $('td.' + contactRecords[i].Id).html(result[contactRecords[i].Id]);
                    }
                    $('#message').html(contactRecords.length + ' Contact added to journey in Marketing Cloud');
                }
                else {
                    $('#message').html('Error: ' + event.message);
                }
            });
        }
    </script>
</apex:page>

We don’t have to do any modifications here, so just go ahead and click “Save”.

Again, if you want to modify the styling of the Visualforce page, knock yourself out. You could allow the user to make selections or make it look like Lightning. So if you are a Visualforce shark, be creative.

Connecting the dots

Yay! We are now ready to connect the dots and power the journey with contacts from Einstein Analytics using the Apex class and visualforce page we created.

Navigate to Einstein Analytics and make sure you have a dashboard ready to use – of course, powered by a dataset with the contact as the root object. Have whatever you want on this dashboard, but as a minimum we need a values table that contains the Id and the Email. Mine looks like this.

In edit mode, find the widget of your values table. Notice that in the general section there is a checkbox for 'Show custom action'; we need to select that one.

You will now need to give your action a label, meaning what users will see when they look in the drop-down. I've just called mine "Add to Journey", but you could, of course, be more specific about which journey you are adding the contacts to. We also need to add the API name of the Visualforce page we just created, so the action knows which page to load.

And that's it! You can now preview your dashboard, make some selections and test out the bulk action in the values table by clicking on the drop-down menu in the table and selecting the action you just enabled.

Just note that we defined an email send in the Marketing Cloud journey; that does mean an email will be sent to the contacts in your table, so if you are just testing this out, make sure you only have your own email in it.

Connected clouds

Hopefully, you can see the power of bulk actions: we can not only get data from Marketing Cloud into Einstein Analytics, we can also segment our contacts in Einstein Analytics and add them to a marketing journey. Brenda proved that this is possible, but you can, of course, take this solution even further. Maybe you want to be able to select which marketing journey your contacts are added to. You can also make your journey more complex and add different channels within it. The opportunities are endless.

Marketing Smart with Einstein Analytics – Part 4

If you have been following along in this 'Marketing Smart with Einstein Analytics' series, you will have seen how you can set up the connection from Marketing Cloud to Einstein Analytics and, further, how we can access the hidden data extensions. Now, if you send a lot of emails you will have a lot of tracking data, which means queries can be rather slow to run, so let's have a look at how you can deal with vast amounts of data.

Replicating the Hidden Data Extensions

In part 2 of this blog series, we covered how the first step in accessing the tracking data via the connector is to create a copy of the hidden data extensions or at least those that we are interested in using in Einstein Analytics.

For us to create the new data extensions, we need to go to Email Studio (yes, you can also do this in Contact Builder if you'd rather do that). In Marketing Cloud, hover over 'Email Studio' and choose 'Email'.

Now navigate to data extensions by choosing ‘Data Extension’ in the Subscriber tab.

Before we move on and create the data extensions we need, please make sure you have the help section open with the data view you want to use. In this case, I am looking at recreating the Open data view, so we have an idea of how our new data extension should look. Click on the blue 'Create' button, choose 'Standard Data Extension' and click 'Ok'.

Give the new data extension a name and click ‘Next’.

In the next section, we are going to just click ‘Next’ since we don’t care about retention.

Now comes the interesting part: you need to recreate every single field from the hidden Open data view in this new data extension. Make sure that the name, data type, length and nullable setting are an exact match with the data view. Once you are done, you can click 'Create'.

Extracting That Data

Once we have the data extension created, we are ready to populate it. When we are dealing with vast amounts of data, we will use a different way to extract it than we did in part 2. For this setup, you want to make sure you have created your Marketing Cloud FTP account. You also want to have a CSV file with the columns from the data extension ready.

Now let's move to Automation Studio by navigating to the 'Journey Builder' tab in the menu and choosing 'Automation Studio'. Note that the individual steps we will be creating can also be done from Email Studio if you prefer.

Once in the Automation Studio, we need to create a new automation by clicking the blue button in the top right corner saying ‘New Automation’.

Best practice is always to give your automation a name, which you can do in the top left corner where it says ‘Untitled’. Once you have given it a name just remember to click the ‘Done’ button.

The first thing we want to do is define how our automation will start, and since I want this to happen on a schedule, I will drag the 'Schedule' starting source into the starting source part of the canvas.

Next, we need three activities on our canvas in order to move data from the hidden Open data view into the new data extension we created as a replica. The first activity we will perform is a 'Data Extract', so find it on the left under 'Activities' and drag it onto the canvas on the right side.

Once we have this activity, click 'Choose' to set up a new data extract, which opens a dialog box where you can choose to create a new data extract activity.

Make sure to give your data extract activity a name and a file pattern name. The latter will determine the name of the file that is dropped on your ftp account. Finally, you want to make sure that the extract type is ‘Tracking Extract’. Once you have that click ‘Next’.

On the next screen, there is quite a bit to consider. The first thing is the range of your data extract; in my case, I have chosen a rolling range of 90 days. Account ID refers to the business unit (MID) you want to pull data from. The easiest way to get that ID is by hovering over the business unit name to the left of your own name and then hovering over the business unit of your choice.

The last thing is to define the extract you want, which in my case is ‘Extract Open’. Once selected you can click ‘Next’ and ‘Finish’ once you get the summary screen.

The next activity we need to create is a 'File Transfer', which picks up the file we just created, so go ahead and drag that activity onto the canvas as a second step.

We now need to choose and create a new file transfer activity, so click ‘Choose’ to open up the dialog box and click the ‘Create New File Transfer Activity’ in the top right corner of the box – just like we did with the data extract.

Again we must give our activity a name and we also need to choose ‘Move a File From Safehouse’ as the action type. Then click ‘Next’.

On the new screen, we first need to set the file naming pattern for the file we need to pick up, which we defined in the data extract activity before; in my case I just called it 'Open'. We also need to pick the destination for the transfer, which will be our FTP account. We are now ready to click 'Next' and 'Finish'.

The last activity we need is the ‘Import File’, so find it and drag it over as a third step.

Again we need to choose and create a new activity just like we did with the previous two activities. Similar to previous activities we also need to give this one a name and click ‘Next’.

Now choose the destination that you picked in the file transfer activity and again put the name of the file naming pattern you used previously. You will get a warning here as we have not yet run the two previous steps, so no file exists just yet, but go ahead and click ‘Next’.

Now you need to pick the data extension you want to populate, which would be the one we created earlier and click ‘Next’.

We now have to define what type of data import we want to do. Depending on how you set up your automation you may change this, but to illustrate what is possible I will just do an 'Overwrite'. Next, we need to map the columns, which we will do with 'Map Manually'. Once that has been selected, you will see the dialog box change a bit, and we now need to import that CSV file I said was a prerequisite so we can map the columns. Once your file is uploaded, you can start mapping the columns by dragging the column headers on the left to the matching ones on the right. Once done, click 'Next' and 'Finish'.

Your automation should now have three steps and looks somewhat like mine in the picture below.

Now you can save your automation and afterward run it once by clicking the two buttons in the top right corner.

Once you hit 'Run Once', you need to pick which activities to run; we want to make sure all of them work together, so select them all and click 'Run'.

If the run is successful, you can now go ahead and schedule your automation, and every time your automation runs you will have fresh tracking data to grab via the Marketing Cloud connector in Einstein Analytics. Please have a look at part 1 of this blog series to see how you sync the data extension in Einstein Analytics.

How to Make the Gauge Chart Dynamic Again

Several people have asked me "how do we do the dynamic gauge post the Summer18 release?". Why? Well, when the gauge chart was released I wrote a blog on how to make the ranges dynamic – a great use case for actual vs. quota. On top of that, it was a question I often got when I worked with standard reporting in Salesforce; people really like that the ranges in the gauge are dynamic. However, since the introduction of conditional formatting and the asset XMD in Summer18, following the steps in the blog I wrote back in June 2017 will no longer be successful. Therefore I decided to try and solve this puzzle (and the many questions I got), and luckily I was successful, so this blog will demonstrate how you can make the gauge chart dynamic again.

Get the basics down

So the steps that I covered in “How to make your Gauge chart dynamic” still apply. You need to make sure you have the following:

  • Your gauge chart
  • Your breakpoint calculations
  • The breakpoint bindings

So basically you need to follow the whole blog! Once you have that there is one more action to take in the Dashboard JSON.

The secret trick

Before going into the JSON, make a note of the widget name of your gauge. You can do this by clicking on your gauge chart; in the property panel to the right you will see two tabs, "Widget" and "Step". Make sure you are on the "Widget" tab, and one of the first things you see is the name or ID of the widget.

With that memorized now go to your Dashboard JSON by hitting command+E (Mac) or control+E (Windows).

You can now search for the widget that you just looked at; in my case 'chart_1'. In order to search in the Dashboard JSON, hit command+F (Mac) or control+F (Windows) and type in the widget name of the gauge.

The JSON for that widget should look something like this:

 "chart_1": {
 "parameters": {
 "max": "{{cell(BP_1.result,0,\"Max\").asString()}}",
 "showPercentage": true,
 "visualizationType": "gauge",
 "exploreLink": true,
 "medium": "{{cell(BP_1.result,0,\"Mid\").asString()}}",
 "title": {
 "fontSize": 14,
 "subtitleFontSize": 11,
 "label": "",
 "align": "center",
 "subtitleLabel": ""
 },
 "trellis": {
 "flipLabels": false,
 "showGridLines": true,
 "size": [
 100,
 100
 ],
 "enable": false,
 "type": "x",
 "chartsPerLine": 4
 },
 "bands": {
 "high": {
 "color": "#008000",
 "label": "High"
 },
 "low": {
 "color": "#B22222",
 "label": "Low"
 },
 "medium": {
 "color": "#ffa500",
 "label": "Medium"
 }
 },
 "showRange": true,
 "showLabel": true,
 "showValue": true,
 "high": "{{cell(BP_1.result,0,\"High\").asString()}}",
 "columnMap": {
 "trellis": [],
 "plots": [
 "sum_Amount"
 ]
 },
 "min": 0,
 "angle": 240,
 "theme": "wave",
 "step": "Gauge_1",
 "applyConditionalFormatting": true,
 "legend": {
 "show": false,
 "inside": false,
 "showHeader": true,
 "position": "right-top"
 }
 },
 "type": "chart"
 },

You may have noticed that my naming is not consistent with the original blog, but that is alright – the idea is the same. The only real difference is that with Summer18 you can create a column alias in compare tables, which means we don't have to use 'A', 'B' and 'C' when referencing columns; we can just use the alias. Hence my bindings use 'Max', 'High' and 'Mid'.
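
To make those bindings concrete, the breakpoint step ('BP_1' in my case) is just a SAQL step whose projected columns carry the aliases the bindings reference. A minimal sketch – the dataset name, measure and breakpoint maths here are assumptions, so adjust them to your own actual vs. quota logic:

q = load "opportunities";
q = group q by all;
q = foreach q generate sum('Quota') as 'Max', sum('Quota') * 0.75 as 'High', sum('Quota') * 0.5 as 'Mid';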

Now, the other change is that there is a key called applyConditionalFormatting; I've highlighted it in the JSON above. This key is what we need to change. By default it is set to 'true', but we want to change it to 'false'. The JSON for that chart should now look like this:

 "chart_1": {
 "parameters": {
 "max": "{{cell(BP_1.result,0,\"Max\").asString()}}",
 "showPercentage": true,
 "visualizationType": "gauge",
 "exploreLink": true,
 "medium": "{{cell(BP_1.result,0,\"Mid\").asString()}}",
 "title": {
 "fontSize": 14,
 "subtitleFontSize": 11,
 "label": "",
 "align": "center",
 "subtitleLabel": ""
 },
 "trellis": {
 "flipLabels": false,
 "showGridLines": true,
 "size": [
 100,
 100
 ],
 "enable": false,
 "type": "x",
 "chartsPerLine": 4
 },
 "bands": {
 "high": {
 "color": "#008000",
 "label": "High"
 },
 "low": {
 "color": "#B22222",
 "label": "Low"
 },
 "medium": {
 "color": "#ffa500",
 "label": "Medium"
 }
 },
 "showRange": true,
 "showLabel": true,
 "showValue": true,
 "high": "{{cell(BP_1.result,0,\"High\").asString()}}",
 "columnMap": {
 "trellis": [],
 "plots": [
 "sum_Amount"
 ]
 },
 "min": 0,
 "angle": 240,
 "theme": "wave",
 "step": "Gauge_1",
 "applyConditionalFormatting": false,
 "legend": {
 "show": false,
 "inside": false,
 "showHeader": true,
 "position": "right-top"
 }
 },
 "type": "chart"
 },

With this done hit ‘Done’ in the top right corner and your gauge should yet again be dynamic.

Marketing Smart with Einstein Analytics – Part 3

In the last two parts of this blog series we explored how to get data from Marketing Cloud into Einstein Analytics – we even looked at bringing in those hidden data views, because let's face it, that is exactly the detail any marketing organization wants to report on. If you missed it, here's part 1 and part 2. But the big question now is: what do we do with this replicated data? No user can actually use this data yet, since we haven't created any datasets. Also, the replicated data extensions do not make much sense on their own; for instance, we do not know the email subject of an email click just by looking at the replicated click data. This part 3 will explore how we can create a dataset from multiple data extensions with the use of the Einstein Analytics dataflow.

Know the Data Model

When you are working with the dataflow, it is absolutely crucial that you know the data: the grain and the keys you can use when augmenting your data. If you don't understand the data, the chances are that you won't get a meaningful output from your dataflow. The dataset we want to create in this blog is one that illustrates the open and click activities. Remember, if you have a specific use case from the marketing organization, make sure you understand what they want to see in their dashboard and then start looking at the data you have available – you may have to structure your dataset in a different way.

In order to create the open and click activities dataset you need to make sure you have replicated the following data extensions from Marketing Cloud:

  • Open
  • Clicks
  • Job

If you don’t have those data extensions available in Einstein Analytics make sure to follow the steps in Part 1 to set up the connector and part 2 to make the hidden data views available for the connector.

Build the Data Flow

In Einstein Analytics navigate to the Data Manager by clicking the gear icon in the top right corner in Analytics Studio.

Once there navigate to the data flow by clicking ‘Dataflow & Recipes’ on the left menu.

Now choose the data flow you want to use – or you can always create a new data flow as well. In my case, I already have a Marketing Cloud Dataflow that I can use for this purpose. If this is an existing data flow you may want to take a backup of it before making any changes.

Get the Data

The very first thing we want to do is extract our data from the replicated data extensions. Since we are taking data from a connector, we need to use the 'digest' transformation. So click on the digest button to create your first node.

We want to give the digest node a name; I have called it 'Open'. Then choose the Connection Name, which is the name of your connection to Marketing Cloud. From the connection we want to choose the 'Source Object', which is the Open data extension; in Einstein Analytics it has been renamed to 'Ungrouped__Open'. Finally, you need to select the fields from the replicated data extension that you want to include – I've selected all the fields. Once you have all that, hit 'Create'.

We now need to do the same for the Click replicated data extension. Here I will also select all the fields, so it should look something like below.

And finally, we need to create the digest for the Job replicated data extension also including all the fields.
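
For reference, if you prefer to edit the dataflow JSON directly rather than use the editor, a digest node looks roughly like the sketch below. The connection name and field list are assumptions from my setup, so swap in your own:

"Open": {
    "action": "digest",
    "parameters": {
        "connectionName": "SFMC",
        "object": "Ungrouped__Open",
        "fields": [
            { "name": "SubscriberKey" },
            { "name": "JobID" },
            { "name": "EventDate" }
        ]
    }
}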

Define Activity Types

With all the data available, we need to start thinking about bringing the data together. From the data view documentation, I know that my Open and Click data extensions contain roughly the same details. Further, I want my dataset to contain information on activities, so I want each row to represent an open or a click activity, and therefore it makes sense to do an append. However, knowing my data, there is no field in the Open or Click data views that says which type of activity it is, since that is split at the replication level. In other words, when I append my Open and Click data I don't know which is which. In order to distinguish a click activity from an open activity, we need to create two computeExpression transformations that do just that.

In the transformation menu find the computeExpression button and click it.

The first computeExpression we will create has 'Open' as the source, so we can flag these rows as open activities. Call your new computeExpression node 'OpenActivity' and choose 'Open' as the source.

Next, we need to add the new field we want to use. So click on ‘+ Add Field’ and give your new field a name and label; I’ve called mine ‘IsOpens’. You want to keep ‘text’ as the data type as we will be expecting a ‘true’ value in the field.

Finally, we need to add our SAQL expression. We could write a fancy SAQL statement, but in reality we know that every single row from the source is an open activity. So we can pack the fancy SAQL statements away and just type "true". We need the double quotes to indicate that this is a text string. Once done you can hit 'Save' and 'Create'.

Next, we need to do the same but for the Click digest. So create a new computeExpression and call this one ‘ClickActivity’ and choose ‘Click’ as the source.

Similarly to before click ‘+ Add Field’ and give the field the name and label ‘IsClicks’, keep the data type as text and add “true” in the SAQL expression.
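
In the dataflow JSON, the two computeExpression nodes are just as simple. Here is a rough sketch of what they might look like, assuming the default behaviour of merging the computed field with the source fields:

"OpenActivity": {
  "action": "computeExpression",
  "parameters": {
    "source": "Open",
    "mergeWithSource": true,
    "computedFields": [
      { "name": "IsOpens", "label": "IsOpens", "type": "Text", "saqlExpression": "\"true\"" }
    ]
  }
},
"ClickActivity": {
  "action": "computeExpression",
  "parameters": {
    "source": "Click",
    "mergeWithSource": true,
    "computedFields": [
      { "name": "IsClicks", "label": "IsClicks", "type": "Text", "saqlExpression": "\"true\"" }
    ]
  }
}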

We now know what each digest represents, as we have defined that in the computeExpressions, so we can move on and start bringing the data into one place instead of three different digests or data extensions. In the meantime, your dataflow should look like what you see below.

Bring the Data Together

As mentioned in the previous section, we want to append the Open and the Click digests, meaning we want the opens and clicks to become individual rows in the same table. We also want some more details about each activity, for instance the subject of the email the user clicked or opened, which we will get from the 'Job' digest. We will use two transformations, 'Append' and 'Augment', to bring all the data together.

First up we need to append the Open and the Click data by clicking the ‘Append’ transformation in the menu.

All we have to do in this node is, of course, give it a name, 'AppendActivity', and define the data we want to append by choosing the sources, which are our two new computeExpression nodes 'OpenActivity' and 'ClickActivity'. Finally, we need to make sure that 'Allow disjoint schema' is checked, as the two sources are not identical. First of all, remember that our computeExpression generates a field that is unique to its source; on top of that, looking at the data view documentation, the Click data contains information about the URL clicked, which you do not have in the Open activity data. Note that I can use an append transformation because the field names the two sources share are identical.

Once done you can hit ‘Create’.

The final step in bringing all the data sources together is joining our new append node with our Job data. 'AppendActivity' will be the left source, as this is our lowest grain or root information, and Job will be the right source. Again, knowing my data, I know that my Open and Click data has a reference to a 'JobID', which is the same ID you will find in the Job data.

To join the data together click the ‘Augment’ transformation in the menu.

First of all, make sure to name your node ‘AugmentActivityJob’. Then choose ‘AppendActivity’ as your ‘Left Source’ and ‘JobID’ as the ‘Left Key’.

When we have defined the left part of the augment we need to look at the right side. First, the 'Relationship' should be 'JobDetails'; it's basically just a description. The 'Right Source' is, as mentioned, the 'Job' node, and the 'Right Key' is 'JobID'. Finally, you need to pick the fields you want to bring in from the right source. You can pick all or just a few of the fields, but I wouldn't pick the right key, as that would be a duplicate of your left key. Also, if you only select a few, you probably don't need to include the unused fields in the Job digest node, unless you use them for something prior to the augment.

Once all your fields have been selected click ‘Create’. Now you will see all three data sources have been brought together.
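
In the dataflow JSON the augment node ends up looking roughly like the sketch below. The fields listed under right_select are example Job fields, so pick whichever columns you actually brought in with the Job digest:

"AugmentActivityJob": {
  "action": "augment",
  "parameters": {
    "left": "AppendActivity",
    "left_key": [ "JobID" ],
    "relationship": "JobDetails",
    "right": "Job",
    "right_key": [ "JobID" ],
    "right_select": [ "EmailName", "EmailSubject", "DeliveredTime" ],
    "operation": "LookupSingleValue"
  }
}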

Clean up and Register Dataset

Nothing is a dataset just yet. In order to actually generate a dataset, we need to register it. But before we do that, we want to clean up our fields a little. First of all, the activity type is split across two fields, 'IsOpens' and 'IsClicks', which makes grouping in Einstein Analytics a little hard, so we want to combine that information into one field. Second, we probably want to drop a few fields that we no longer need.

In order to have just one field that describes the activity, we want to create another computeExpression. So once again hit the 'computeExpression' button in the menu. Give your new node the name 'Activity' and choose 'AugmentActivityJob' as the 'Source'.

Now hit ‘+ Add Field’ to create a new field. We will give the new field the name ‘ActivityType’ and keep the Data Type as text. This time around we want to use a case statement for our SAQL expression, so add the following.

case when 'IsOpens' == "true" then "Email Open" when 'IsClicks' == "true" then "Email Click" else "Null" end

All we are doing in the case statement is referencing the two computeExpressions we created earlier: if 'IsOpens' has the value true, return 'Email Open'; if 'IsClicks' is true, return 'Email Click'; and finally, if neither is the case, return the value 'Null'. In reality, 'Null' should never appear, as every row from the source data will always have true in one of the computeExpressions we created.

Make sure to save your field and create the computeExpression by hitting ‘Save’ and ‘Create’.
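
In the dataflow JSON this node simply wraps the case statement in a computeExpression action; something along these lines:

"Activity": {
  "action": "computeExpression",
  "parameters": {
    "source": "AugmentActivityJob",
    "mergeWithSource": true,
    "computedFields": [
      {
        "name": "ActivityType",
        "label": "ActivityType",
        "type": "Text",
        "saqlExpression": "case when 'IsOpens' == \"true\" then \"Email Open\" when 'IsClicks' == \"true\" then \"Email Click\" else \"Null\" end"
      }
    ]
  }
}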

Since we have the new ‘ActivityType’ field we no longer need ‘IsOpens’ and ‘IsClicks’, so why keep them in the data set? I have no reason, so let’s add a new transformation that will allow us to drop those two fields. In the menu find and click the ‘sliceDataset’ transformation.

No surprise, we first need to give the node a name, so let's call it 'DropCEFields'. We also need to choose the source, and this time we will pick the 'Activity' node. We can now choose to either keep or drop fields from our source node; in this scenario it's easier to drop fields, since we want to keep everything except the two fields 'IsOpens' and 'IsClicks'. Next, choose those two fields in the 'Fields' section.
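
As a sketch, the sliceDataset node in the dataflow JSON looks something like this, with the mode set to drop so only the listed fields are removed:

"DropCEFields": {
  "action": "sliceDataset",
  "parameters": {
    "source": "Activity",
    "mode": "drop",
    "fields": [
      { "name": "IsOpens" },
      { "name": "IsClicks" }
    ]
  }
}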

With that done we can finally register our dataset, so we can explore our activity data. So click the ‘sfdcRegister’ transformation.

Again we need to give our node a name, so type ‘RegisterMCActivities’ in the ‘Node Name’. The ‘Source Node’ is the slice node we just created. Finally, give your new dataset the ‘Alias’ and ‘Name’ ‘MCActivities’ and hit ‘Create’.
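
And the register node is the simplest of them all; in the dataflow JSON it is roughly:

"RegisterMCActivities": {
  "action": "sfdcRegister",
  "parameters": {
    "source": "DropCEFields",
    "alias": "MCActivities",
    "name": "MCActivities"
  }
}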

Now your dataflow should look something like mine below.

Run Your Dataflow

Before we can use our new dataset we need to hit 'Update Dataflow' in the top right corner and accept that it will overwrite the current dataflow. It's always best practice to have a backup in case something goes wrong.

This action only saves your dataflow, so in order to actually see the result within Analytics Studio, hit the same button, which has now changed to 'Run Dataflow'.

Once your data flow has successfully run you can go to Analytics Studio to explore your new dataset.

Taking Your Data Flow Further

This blog covered how you can combine data from three different data extensions into one dataset. Let's be honest, it's a simple use case, but nothing stops you from taking it further. Personally, I would combine it with my Salesforce data to get richer information about my audience. If you are using the Salesforce Marketing Connector to bring data from core Salesforce into Marketing Cloud, then your Marketing Cloud subscriber key is most likely the same as your Salesforce ID, so you can use that when you augment your Salesforce data. If you come up with some cool use cases, please feel free to share by dropping a comment below; I would love to hear about it.

Marketing Smart with Einstein Analytics – Part 2 http://www.salesforceblogger.com/2018/07/26/marketing-smart-with-einstein-analytics-part-2/ Thu, 26 Jul 2018 06:43:15 +0000

Getting marketing smart with Einstein Analytics includes getting details on how your marketing campaigns are performing. Which emails were opened and which ones had the best click rate? In the first part of this blog series I mentioned that we couldn't access the tracking data through the connector. In fact, we can't even see it in Marketing Cloud, since it lives in hidden data extensions. But as I also mentioned in that blog, there is a workaround to access the data anyway, which is what this post will explore. Note that this solution only really works if you have small amounts of tracking data.

Hidden Data Extensions

If you have worked with Marketing Cloud before, I am sure you are already aware of the hidden data extensions, or data views, that contain tracking data such as sends, opens, and clicks. There are many more, and you can find all the details about them in the Salesforce Help pages.

As I already mentioned these data extensions contain details we want in Einstein Analytics but the connector cannot access. However, we can replicate the information in Marketing Cloud in new data extensions that the connector can access.

Making the Hidden Unhidden

In order to replicate the hidden data extensions in Marketing Cloud we first need to identify which data extensions we are interested in and then create a complete match of that data extension.

To create new data extensions we need to go to Email Studio (yes, you can also do this in Contact Builder if you prefer). In Marketing Cloud hover over 'Email Studio' and choose 'Email'.

Now navigate to data extensions by choosing ‘Data Extension’ in the Subscriber tab.

Before we move on and create the data extensions we need, please make sure you have the Help page open for the data view you want to replicate. In this case, I am recreating the Open data extension, so I now have an idea of how the new data extension should look. Click on the blue 'Create' button, choose 'Standard Data Extension' and click 'Ok'.

Give the new data extension a name and click ‘Next’.

In the next section, we are going to just click ‘Next’ since we don’t care about retention.

Now the interesting part: you need to create every single field from the hidden Open data view in this new data extension. Make sure that the name, data type, length and nullable setting are an exact match to the data view. Once you are done you can click 'Create'.

We now have the new Open data extension but we don’t have any data in it. In order for us to populate the data extension, we need to create a query. In the ‘Interactions’ tab choose ‘Query’.

Next, we need to create the query to populate our new data extension. So hit the ‘Create’ button.

Give the query a name and add the following SQL into the ‘Query’ section.

SELECT *
FROM _Open

All we are doing is saying select all the fields from the hidden Open data extension. I know that this data extension is called ‘_Open’ from the data view documentation.

Next, we need to choose the data extension we want to use as a target. So select the data extension you just created and make sure you still have update type as ‘Overwrite’.

Once that is done hit ‘Save’ at the top of the page. You can now select your query and click ‘Start’ to populate data from the hidden data extension to the newly created data extension.

Automate the Whole Thing

We now have all the components we need in order to make the hidden unhidden. However, it’s a manual process. We now want to automate this process, so we do not have to think about getting data from A to B.

In the top right corner where it says ‘Email’ hover over it to show the main menu, hover over ‘Journey Builder’ and select ‘Automation Studio’.

We want to create a new automation so click the blue ‘New Automation’ button. The very first thing we need to do is create a name.

Next drag the green ‘Schedule’ to the Starting Source section and click ‘Configure’. In the popup window set up the schedule as you see fit and click ‘Done’ when you are satisfied.

In the activity section, find the ‘SQL Query’ and drag it onto the canvas.

Click ‘Choose’ and select the query we just created and then click ‘Done’.

You can now click ‘Save’ in the top right corner. After that, you can hit ‘Run Once’ to populate the data (if you didn’t already do that after you created the query).

If you are satisfied with the automation you can activate the automation by hitting the ‘Activate’ button.

And there you go. Your hidden Open data extension is now visible and accessible for the Marketing Cloud connector in Einstein Analytics. In order to replicate this data in Einstein Analytics, please follow the steps from Part 1 of this blog series.

Get All the Data

In this blog we only looked at surfacing the open data, but as you saw, there are many other hidden data views. You can surface all of them if you need to; just follow the same process to create the data extensions, then create the queries and add them all to your automation.

Marketing Smart with Einstein Analytics – Part 1 http://www.salesforceblogger.com/2018/07/21/marketing-smart-with-einstein-analytics-part-1/ Sat, 21 Jul 2018 00:19:17 +0000

It's no secret that I have a background in implementing Einstein Analytics, Pardot and Marketing Cloud for different companies, so I get a lot of questions about reporting on marketing data. Pardot's and Marketing Cloud's built-in reporting capabilities are limited, but the demand for reporting is great and becomes even greater when you have Einstein Analytics. The Pardot team has already enabled better reporting with the introduction of the Einstein Analytics B2B Marketing App, a templated app that provides default datasets and dashboards giving insight into Pardot data, but what about Marketing Cloud? It may not be as well explored as Pardot, but it is possible to get Marketing Cloud data into Einstein Analytics.

The ABC’s of the Solution

Einstein Analytics has out-of-the-box connectors to different data sources, including Marketing Cloud, so this solution will explore how to leverage that connector to bring data from Marketing Cloud into Einstein Analytics. The setup includes:

  • Enable Replication for Einstein Analytics
  • Create an Installed Package for Marketing Cloud
  • Setup the Marketing Cloud connector in Einstein Analytics

Enable Replication

Most Einstein Analytics instances already have replication enabled; however, I am sure there are still a lot of orgs that do not. First of all, what is it? Well, it's a setting in the Analytics Settings in Salesforce. When enabled, objects in Salesforce are replicated into a cached staging layer, making the dataflow run faster since it doesn't have to do a full extract from Salesforce. Two other benefits of this setting are that you can create up to 30 different dataflows and you get access to the out-of-the-box connectors, including the one for Marketing Cloud.

The way it’s enabled is by navigating to the setup section in Salesforce by clicking the gear icon in the top right corner and choose ‘Setup’.

Once you are in the setup section you can search for ‘Analytics’ in the quick find search box to the left. You need to select ‘Settings’ under the Analytics section.

In the settings, we need to just check the checkbox for ‘Enable Replication’.

Remember to hit ‘Save’ in order to save the changes. Note that the action won’t come with a success or error message, but you can always click on ‘Settings’ in the menu again to check if it was saved.

Note that if you already have running dataflows, you want to make sure your replication is scheduled, or else you will never get new data into Einstein Analytics. I will return to this topic later and show how to schedule the replicated data.

Prepping Marketing Cloud

The next step is to move over to Marketing Cloud. If you just have one business unit it will be easy; you only have to do this once. But if you are using multiple business units and want to grab data from all of them, you will need to set up an installed package and a connector for each of your business units.

Within Marketing Cloud in the top right corner hover over your name and select ‘Administration’.

Now navigate to the ‘Account’ tab and choose ‘Installed Packages’.

It’s now time to create a new installed package so click ‘New’, give it a name and click ‘Save’.

Your installed package has now been created, but we are not done: we need to add a component to the package. So click on 'Add Component', choose the 'API Integration' option and click 'Next'. The next step is to grant the connector access to different parts of Marketing Cloud. In this scenario we only really need the Data Extensions, so make sure you check the boxes for read and write, even though the connector won't actually change the data extensions. Hit 'Save' when the correct settings have been made.

You will now see the different keys for this component which you need when setting up the Marketing Cloud connector in Einstein Analytics (yes, I’ve removed my keys in the screenshot). But note these details down, you will need them later.

Okay, so there are two more things you would need to check before setting up the connector in Einstein Analytics. Firstly, you need to make sure that the user you will be using for the connector has ‘API user’ marked on their profile. Back in the ‘Account’ tab click on ‘Users’.

Find the user you want to leverage for the connection and click on it. Make sure that ‘API User’ is defined as ‘Yes’ like the image below.

The second thing to note is which stack your Marketing Cloud instance is using. The easiest way to find out is to check the URL and look at the number that follows the 's'. In my case it's s4.

Okay with all those details noted down it’s time to move to Einstein Analytics and get that connector established.

Setup the Marketing Cloud connector in Einstein Analytics

Make sure you log in to Salesforce and navigate to Einstein Analytics and the Data Manager. Once you are in Analytics Studio click on the gear in the top right corner and choose ‘Data Manager’.

We now need to set up the connector and replicate the data extensions in Einstein Analytics so we can use this in our data flow. Therefore click the ‘Setup’ button on the left menu to see your replications.

In the setup section, we need to create our new connection. In the top right corner, you will see the blue ‘Set Up Replication’ button, which you should click. You will now get the option to add another remote connection. Note that I already have a Marketing Cloud connection established, but you can create multiple; one for each business unit in Marketing Cloud.

Click on the ‘Add Remote Connection’ and have all the details from Marketing Cloud ready.

You should fill out the settings as below:

  • Connection Name: The name you will see in Einstein Analytics for this connector
  • Developer Name: The API name for this connector
  • Description: Something that describes the connector
  • User Name: The user that you want to use for the connector which has ‘API user’ as ‘yes’
  • Client Secret: This is a key from the Installed Package in Marketing Cloud
  • Client Id: This is a key from the Installed Package in Marketing Cloud
  • Salesforce Marketing Cloud Url: Keep the URL but make sure to update the stack reference to the right one, in my case 's4'
  • UTC Offset: Keep as is.
  • Password: The password for the user you are using for the connector.

Once it’s all filled out you can go ahead and save the details.

The next step somewhat repeats the process: click the 'Set Up Replication' button again, but this time choose your new connector.

Choose the data extension that you want to replicate and click ‘Continue’. You now need to select the fields you want to include in the replication, so check all the fields you want and click ‘Continue’.

Next, we need to define the data type for each field: dimension, measure or date. Click on a column and then the pencil icon.

Once everything is set, click 'Save'. You have now set up everything you need for this data extension, but you will have to repeat this last part for each of the data extensions you wish to replicate.

The final thing you want to do is schedule the replication, so you make sure that your data is always up to date. The schedule is flexible as you can set it up to run every hour or just once a week.

Still in the Setup section, next to the Marketing Cloud connector click on the arrow and select ‘Schedule’.

Choose whatever schedule works for you and hit ‘Save’.

And that’s really all we need to do in terms of replication, the data is now available for you to use in your data flows or recipes.

Tracking Data

I am sure that if you are setting up the connector while reading this blog, you have noticed that tracking data is not available through the connector. So why did you go through all this hard work? Well, there is a workaround, which I will cover in part 2 of this blog series.

Colors, labels, values – oh my! http://www.salesforceblogger.com/2018/07/12/colors-labels-values-oh-my/ Thu, 12 Jul 2018 06:15:20 +0000

You might have heard of 'XMD' while working with Einstein Analytics. But do you know what it is? First of all, XMD stands for 'Extended Metadata', and it is a set of instructions that defines how a user visually sees your dataset. Once you apply these instructions, every lens or dashboard leveraging this dataset will render it accordingly. The XMD can control the colors, labels and values of your dimensions and measures, and it is defined in a JSON file. Previously, making these changes meant either writing the JSON yourself or using an unmanaged app like 'WaveLabs'. I won't link to that app, because as of Summer 18 it is no longer necessary, and I do think that deserves a big 'WHOO HOO'! So with that news, I thought it was time to explain what you can do and where you find it.

The Basics

Since the XMD is related to your dataset, it only makes sense that the changes to the XMD are happening when you are looking at your dataset. Most of the modifications are done in the exploration mode under fields.

So find your dataset and click on it. From the explore mode notice the “Fields” link on the left-hand side, this is where most of the magic happens.

Change labels

Make sure you have clicked the ‘Fields” button, so you see all the fields you once upon a time added to your dataset.

So what are we doing when we change the label? Well, it's basically the name of a field as the user sees it. We still have the API name, which will be referenced in the JSON and SAQL, but most users don't want to see 'AccountId.Name' when looking at their dashboards; it makes more sense to see 'Account Name'. If you want to change every single label you can just start from the top and work your way down, but if you only want to change a few fields, I find the search box very handy.

There are two ways to change the label either hover over your field of choice and click on the arrow to the right of the field and select ‘Rename’ or simply click on the label to change it.

Make sure you save your changes, which will give you a warning that all dashboards that are using this dataset will be affected.
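
If you are curious what the UI actually saves, a label change ends up as an entry in the dataset's XMD JSON. A minimal sketch, trimmed down to the relevant keys and with an example field name, could look like this:

{
  "dimensions": [
    {
      "field": "AccountId.Name",
      "label": "Account Name"
    }
  ]
}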

Change values and colors

Each field contains different values, and sometimes we want to change those values. Maybe our users are not used to referring to customers as such; they would rather see 'Client' instead. Well, we can simply change that. On top of that, we can change the color of the value, meaning that if we have a donut chart, the wedge that represents 'Client' will have the chosen color.

Okay, find ‘Account Type’ or ‘AccountId.Type’, hover over the field and click on the arrow that appears where you choose ‘Edit Values’.

You now get a dialog box where you first have to choose which values to modify, so make sure you click each of the values you want to be changed and click ‘Done’.

Now you see the selected values with a white square and arrow next to it – click it.

You can now modify the label as well as the color. If you are changing several labels and/or colors, make sure to click the first arrow ^ again and not the 'Done' button. I've clicked 'Done' too early myself many times; it basically means you are finished editing the values for those fields, so wait until you actually are.

Again make sure to save your changes.
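
Under the hood, the value labels and colors are stored as members on the dimension in the XMD JSON. A rough sketch, with example values and an example hex color:

{
  "dimensions": [
    {
      "field": "AccountId.Type",
      "label": "Account Type",
      "members": [
        { "member": "Customer", "label": "Client", "color": "#1B96FF" }
      ]
    }
  ]
}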

Customize measures

Sometimes it’s nice to indicate what a measure symbolizes; is it in dollars, is it in euro or is it not even a currency but how hot or cold the weather is and should thereby indicate it’s degrees. This is where custom format for measures comes in.

Again from within our ‘Fields’ section find your measure, hover over it and choose ‘Number Format’.

As you can see, there are quite a few different ones to pick from. If one works, pick it; otherwise you always have the option to choose 'Custom Format'. In this scenario I will choose a custom format, as I want the euro sign in front of my amount.

The easiest way to do this is to select the predefined format that fits best and modify it to your liking. So I will pick '$#,###' by clicking on it.

Once that has been selected I will switch ‘$’ to ‘€’.

The multiplier can be used if you, for instance, have a percentage ‘0.1’ that you want to show as 10%. Just change the multiplier to 100 and change the custom format to ‘#%’.

Hit ‘Done’ and make sure to save your changes.

 

Default Fields in Values Table

When you want to show a values table in your dashboard it will by default select the first five measures and first five dimensions. Now you can easily change the values table to show any fields from your dataset. However, there is a way to set which fields are shown by default. Again under your dataset ‘Fields’ you have an arrow next to the dataset name.

Select ‘Default Fields’ to open up the dialog box where you can define what fields you want to see in a values table.

In the ‘Available Fields’ section choose the fields you want to be selected by default and click on the arrow on the right to transfer them to the ‘Default Fields’ section.

Once done you can hit ‘Update’.

Add derived measures and dimensions

When you create calculations on the fly in compare tables or SAQL, as well as new dimensions, those are what we call a 'derived measure' or a 'derived dimension'. We can add these derived fields to our dataset so we can customize them just like our normal measures and dimensions. All you have to do is, in the derived section, click on '+ Add Derived Measure' or '+ Add Derived Dimension'.

A tip if you want to change ‘Count of Rows’ to say ‘Count of Opportunity’ is to create a new derived measure called ‘*’.

Once it’s created go ahead and click on it to rename it and call it ‘Opportunity’ and save your changes. If you now show ‘Count of Rows’ you will see it has been changed to “Count of Opportunity’.

Enable Actions

The final thing we can enable in the dataset XMD is actions. Unlike all the other features, this is not enabled under the 'Fields' section but under the dataset 'Edit' section. So find your dataset, click the arrow to the right of the name and choose 'Edit'.

On the bottom of the page, you will find the ‘Configure Actions’ button, which you need to click.

You can enable actions to every single field in your dataset if you wish to. Let’s look at an example where we want to open up Opportunities when we click on the ‘Account Name’. First of all find and select the ‘Account Name’ field.

The ‘Record ID Field’ is the field in the dataset that represents the Salesforce ID that we want to open up. In this case, it will be ‘Opportunity ID’.

Since my dataset represents opportunities, each row is an opportunity. An account can have several opportunities, so there may well be more than one opportunity related to the 'Account Name'. In that case Einstein Analytics doesn't know which opportunity to open, and instead a dialog box is shown to the user, who needs to make a choice. Without any details of the related opportunities it can be hard to make that choice, so we can choose which fields to display in the dialog box in the 'Display Fields' section. Here I just want to show 'Opportunity Name', 'Opportunity Owner' and 'Close Date', so I have selected those three fields.

Finally, I need to define which actions should be available. You can allow the user to open the Salesforce record and to perform Salesforce actions.

Note that you can either allow the user to use all global actions from Salesforce or you can choose the specific actions you will allow the user to perform.

Again remember to save your changes.
