You are part of a team of enthusiastic people who are keen on exploring Tableau CRM, formerly known as Einstein Analytics. This vibrant team of yours has come across the need to create a series of analytics solutions that answer key business questions.
But soon after you start developing your analytics apps, questions about creating and managing Tableau CRM assets begin to surface, including:
- Should I make changes directly in production?
- If yes, how do I roll those changes out to end users, and how do I manage these assets in production?
- If not, should I set up sandboxes for the multiple stages of development and testing?
- But wait! Do I need multiple stages of development and testing, as with other Salesforce assets?
- Also, how do I propagate these changes to different Salesforce orgs?
In short, you are thinking about the lifecycle of Tableau CRM assets, which is the focus of this blog series.
Who is this blog series meant for? Is it relevant only for developing and managing Tableau CRM assets, or will it also answer your questions about developing and deploying Einstein Discovery assets? And do all your assets need to go through every stage described here?
Let me start answering these questions from the top. This blog series is not curated with a particular role in mind; rather, it is for anyone who is part of a Tableau CRM project. You could be the product owner, a developer, or wear some other hat, and it will still be relevant to you. How? Well, you will see once you read the set. Moving on to the second question: so far, Einstein Discovery isn't fully supported by the Metadata API; only the Models and Predictions you create are. However, if you're using datasets created in Tableau CRM via dataflows, recipes, external connectors, or CSV uploads, then you'll find this blog relevant. As for the third question, the answer is no. Treat this blog series as a guide rather than a hard set of rules that must be followed.
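Because Tableau CRM assets are exposed through the Metadata API, you can retrieve them from an org with a manifest file. Below is a minimal Python sketch that emits such a package.xml for the common Wave metadata types; treat the type list and API version as assumptions to adjust for your org, and feed the resulting file to a tool such as the Salesforce CLI.

```python
# Sketch: generate a package.xml manifest for retrieving Tableau CRM
# (Wave) assets via the Metadata API, e.g. with a command like:
#   sfdx force:source:retrieve --manifest package.xml
# The type names below are the standard Wave metadata types; adjust the
# list and the API version to match your org and release.

WAVE_TYPES = [
    "WaveApplication",
    "WaveDashboard",
    "WaveDataflow",
    "WaveDataset",
    "WaveLens",
    "WaveRecipe",
    "WaveXmd",
]

def build_manifest(api_version="52.0"):
    """Return a package.xml string retrieving all members of each Wave type."""
    types = "\n".join(
        "    <types>\n"
        "        <members>*</members>\n"
        f"        <name>{t}</name>\n"
        "    </types>"
        for t in WAVE_TYPES
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<Package xmlns="http://soap.sforce.com/2006/04/metadata">\n'
        f"{types}\n"
        f"    <version>{api_version}</version>\n"
        "</Package>"
    )

if __name__ == "__main__":
    print(build_manifest())
```

Checking such a manifest into version control alongside your release artifacts also gives you a record of exactly which asset types each release touches.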
We want you to be the final decision maker on which stages to follow, which deployment methodology to use, where to deploy, and how to proceed with your testing. This is a best-practice blog that we have curated based on the collective experience of the Analytics Center of Excellence (ACE) team.
In this blog, I'll discuss the typical lifecycle your Tableau CRM assets go through. I also know that the question of "who will own what" is lurking somewhere in your mind, so I'll share the typical roles and responsibilities that come with creating and managing Tableau CRM assets. So, let's get started.
What is the Asset lifecycle?
In the simplest terms, it is the lifecycle of an asset from genesis to end. Think of it as the different stages an asset goes through, from idea conception and requirement gathering to deployment to end users and ongoing maintenance. Below is a picture of the Tableau CRM asset lifecycle.
Let's take a quick look at what each stage represents:
- Idea Genesis and Requirement Gathering: This is the stage where your project lives in ideas. You start gathering requirements from the stakeholders whose inputs will shape the project. This is also the stage where some companies convert their ideas into a quick prototype to get funding for the project. By the end of this phase, you'd have a project with a defined scope (though perhaps not a well-defined one) and funds allocated for the teams to start working on it.
- Release Planning and Resource Allocation: This is the stage where you plan the releases your product will go through, and what your product will look like after each release. By the end of this stage, you have allocated resources, prioritized and clearly laid out the requirements, selected a development methodology and a deployment plan, and have a plan ready for maintenance and bug fixes.
- Development and Unit Testing: This is where each individual works on the tasks assigned to them and tests the individual components, which is called unit testing. At this stage, the developer is not concerned with merging her changes with those made by the rest of the team; she is focusing only on her own tasks. This is usually done requirement by requirement so that the merging process is easier.
- Test and Validate: Once your testable features pass unit testing, they are pushed through data validation and performance tests. Some may ask why performance tests are carried out at this stage; the intent is to narrow down the possible causes of validation failures and poor performance as early as possible. Perhaps you made a change in a dataflow and it now runs for an unacceptably long time, or a query you just wrote exceeds the agreed-upon EPT, or a page in your dashboard fires far too many queries (Salesforce recommends no more than 25 widgets on a page). In these cases, among others, it is better to detect performance lapses early. Apart from the performance of your queries, dataflows, and recipes, you'd also want to keep in mind the limits on API calls per hour, API calls per user per hour, maximum concurrent queries per organization, maximum concurrent queries per user, and the scalability of your design.
- Build Release: This is when individual developers' tasks are merged together and the combined functionality is tested as a whole. This stage ensures that the developers' features work together in harmony. Once a release is built, run a quick smoke test to check whether merging your changes with the rest of the release artifacts has broken anything.
- User Acceptance Testing and Staging: This is the phase where your UAT/power users test the release artifacts and provide their feedback. After UAT, release artifacts are pushed to staging, typically to:
- Identify deployment errors, if any.
- Test data security.
- Run end-to-end performance tests.
- Deploy Release: This is the stage where approved release artifacts are pushed to production (or merged into the live assets) and made available to end users.
- Monitor: This is the stage where you’d take care of bug fixes and requirements flowing in through end-user feedback. You’ll also push subsequent releases to production.
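As one concrete illustration of the Test and Validate stage above, you can automate the per-page widget-count check against an exported dashboard definition. The sketch below assumes the dashboard JSON follows the usual Wave shape, where each grid layout holds pages and each page lists its widgets; treat these field names as assumptions and adjust them to the format of your exported assets.

```python
# Sketch: flag dashboard pages that exceed the recommended widget count.
# Assumes a dashboard JSON shaped like state.gridLayouts[].pages[].widgets[];
# verify these field names against your own exported dashboard definitions.

RECOMMENDED_MAX_WIDGETS = 25  # Salesforce's per-page recommendation

def pages_over_limit(dashboard_json, limit=RECOMMENDED_MAX_WIDGETS):
    """Return (layout name, page label, widget count) for each page over the limit."""
    offenders = []
    for layout in dashboard_json.get("state", {}).get("gridLayouts", []):
        for page in layout.get("pages", []):
            count = len(page.get("widgets", []))
            if count > limit:
                offenders.append((layout.get("name"), page.get("label"), count))
    return offenders
```

Running a check like this on every build turns the "far too many queries" problem from something a tester stumbles on into something your pipeline reports automatically.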
With so many different types of testing being done, it's only natural to ask what the feedback loop and testing scenario in Tableau CRM look like. I've tried to capture it in the diagram below:
Roles and Responsibilities
With all these phases in place, it is almost impossible not to think about the ownership of each change. You will typically have these five roles within the team that works on or uses the Tableau CRM product, with the following responsibilities:
Please note that one person can hold multiple roles. We have deliberately emphasized a single Application Builder role because good UX and efficient, elegant queries that answer a question at scale require thoughtful data modeling. These activities go hand in hand, and the Tableau CRM tooling is there to support them; you do not have to be a hard-core code developer to build beautiful applications in Tableau CRM. We don't want to put unnecessary barriers and bottlenecks into delivery based on older, siloed technologies and high skill requirements; it is Salesforce easy.
So the team roles above are really the personas who interact with the Tableau CRM product directly. As with any project, there could be many other actors who have a peripheral impact on the project. For the purposes of this blog, the deployment and lifecycle of these other activities are not addressed. A few examples of other roles:
- Salesforce DX developers building Lightning components or Apex on the Salesforce platform using the Tableau CRM SDK. The SDLC for these follows normal Lightning and Apex conventions.
- External client developers using the Tableau CRM SDK. Conventions are determined by that team.
- Project Sponsor: could be a consumer business user, but the role is really to determine strategic goals and remove blockers.
- Data integration specialists working with external tools and data sources. Since the tools and sources are many and varied, these tend to be quite specifically skilled individuals, and their SDLC depends on those tools. An example would be an on-premises Oracle ERP developer using the Informatica ETL tool to move data into Tableau CRM.
Once you know who does what, the next question to address is "where?". You may find yourself with questions like "Where should I develop my Tableau CRM assets?" or "As it is a click-and-create tool, do I still need to go through the entire process of setting up Tableau CRM-specific orgs?". Let me answer these questions by splitting the development approach into two broad categories:
- Develop in Production
- Develop in a Sandbox
In the table below, I have captured some of the behavioral aspects of developing in production versus developing in a sandbox, against a few (though by no means exhaustive) criteria:
Data and Security
Governance, Resources and Errors
The section above shows you the good and the bad sides of both approaches. And honestly, there's no right or wrong answer to "Where to develop?". However, what we have observed is that developing in production proves to be the right choice, until it's not.
As the number of projects increases, the development team grows, the scope of existing projects expands, and data volume and data refresh times become bottlenecks, developing in a sandbox takes the win. Then again, since there's no right or wrong answer here, we have observed customers use a hybrid model in which they define which kinds of changes should be handled in a sandbox and which can be made directly in production. For example, a change as simple as a spelling correction in a text widget can be made directly in production, but a new KPI is introduced in the sandbox so that it goes through proper testing before it is released to end users. This works! But you'd need to make sure that the changes made directly in production are also incorporated into the ongoing development, or else you'll end up overwriting the production changes once the release is pushed. The changes made directly in production may not have been business-critical, but losing them can lead to a bad and erratic user experience.
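One way to guard against silently overwriting production hotfixes is to compare each production asset's last-modified timestamp against the moment the release artifacts were frozen. The sketch below uses a hypothetical record shape of name plus ISO-8601 timestamp; in practice you might populate these records from Metadata API or Analytics REST API listings for your org.

```python
# Sketch: detect assets changed directly in production after the release
# branch was cut, so a deployment does not silently overwrite hotfixes.
# The asset record shape below is an assumption for illustration; populate
# it from whatever listing your org's tooling provides.

from datetime import datetime

def hotfixed_assets(prod_assets, release_cut):
    """Return names of production assets modified after the release was cut.

    prod_assets: iterable of {"name": str, "lastModifiedDate": ISO-8601 str}
    release_cut: datetime when the release artifacts were frozen
    """
    return [
        asset["name"]
        for asset in prod_assets
        if datetime.fromisoformat(asset["lastModifiedDate"]) > release_cut
    ]
```

Any names this check surfaces become a merge checklist: fold those production changes back into the ongoing development line before you push the release.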