If you’ve been managing multiple data spaces within Salesforce Data 360, you know the “Data Space Tax.” Until now, creating DMO mappings, segment definitions, and process definitions in one data space didn’t help you in the next: you had to manually rebuild those definitions every single time. Not only did this slow down the initial setup of any new data space, it also turned simple updates into a repetitive manual slog. This changes with March ’26.
The Problem: The Manual Rebuild Bottleneck
Many Data 360 customers structure their data spaces according to criteria like brand or geography. This means that whenever a new data space needs to be set up with consistent business definitions, users are currently required to manually replicate the exact same mappings and configurations to make the new space operational.
In organizations utilizing a hub-and-spoke operational model—often with data spaces segmented by geography—a Center of Excellence (CoE) hub typically defines process standards for all spokes. Ensuring that different data spaces adhere to these central definitions requires maintaining identical metadata configurations across them. When this replication is done manually, it results in:
- Slow time-to-value: Launching a new data space takes days of manual configuration.
- Consistency risk: Manual entry increases the chance of human error and “configuration drift.”
- High operational overhead: Updating a single mapping means repeating that update across every active data space.
Imagine a scenario where a central CoE marketing team validates a new data source within the default data space, proving that it significantly enhances campaign outcomes. The goal is now to replicate this successful campaign across different geographical regions. To achieve this, the team needs to do the following in each data space:
- Incorporate a Data Lake Object from the new data stream.
- Map this new Data Lake Object to a new Data Model Object.
- Build a new Calculated Insight on the mapped Data Model Object.
- Use this new Calculated Insight to refine an existing Segment Definition.
The teams in each data space will then be ready to run their respective campaigns.
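Conceptually, the per-data-space rebuild amounts to copying a chain of dependent configuration objects by hand. The sketch below is purely illustrative: every class, field, and object name is a hypothetical stand-in for a Data 360 concept, not a real API entity.

```python
# Illustrative model of the manual rebuild: every step must be repeated,
# in order, for every target data space. All names here are hypothetical
# stand-ins for Data 360 concepts, not actual API objects.
from dataclasses import dataclass, field

@dataclass
class DataSpaceConfig:
    name: str
    dlos: list = field(default_factory=list)        # Data Lake Objects
    mappings: dict = field(default_factory=dict)    # DLO -> DMO mappings
    insights: list = field(default_factory=list)    # Calculated Insights
    segments: list = field(default_factory=list)    # Segment definitions

def replicate_manually(template: DataSpaceConfig,
                       target: DataSpaceConfig) -> DataSpaceConfig:
    target.dlos = list(template.dlos)          # 1. incorporate the new DLO
    target.mappings = dict(template.mappings)  # 2. map the DLO to a DMO
    target.insights = list(template.insights)  # 3. rebuild the Calculated Insight
    target.segments = list(template.segments)  # 4. refine the Segment Definition
    return target

default = DataSpaceConfig(
    "Default",
    dlos=["NewStream"],
    mappings={"NewStream": "Campaign_DMO"},
    insights=["CampaignScore"],
    segments=["HighValueAccounts"],
)
# Four error-prone steps, repeated once per regional data space:
regions = [replicate_manually(default, DataSpaceConfig(name))
           for name in ("EMEA", "APAC", "AMER")]
```

Each target only stays consistent for as long as someone remembers to repeat all four steps after every change, which is exactly the drift risk described above.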
Following the principle of “build once, deploy everywhere,” we can now achieve the same outcome as the steps above in a few clicks via Local Data Kit Deployment.
The Solution: Local Data Kit Deployment
The new Local Data Kit Deployment feature lets you package your Data 360 metadata and process definitions from the default data space and deploy them across other data spaces within the same sandbox org with a single click. Think of it as a “template” system for your internal data architecture. Note that this feature is available only in sandbox environments.
Key Benefits:
- Speed: Faster rollout of new data spaces.
- Consistency: Keep data spaces aligned without manual reconfiguration.
- Simplicity: Simpler data space management, even when managing dozens of data spaces.
How It Works: Build, Localize, Promote
To get the most out of this feature, we recommend a streamlined workflow starting in your sandbox environment:
- Build: Develop and test your mappings and segments in the default data space of a sandbox.
- Package: Create a Standard Data Kit within that sandbox.
- Replicate: Use local deployment to push those definitions to other data spaces in the same environment with one click.
- Promote: Use a DevOps Data Kit to push the finalized configurations to production.
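As a rough sketch, the four stages form an ordered pipeline: each one only makes sense once the previous stage is done. The helper below simply encodes that ordering; the stage names mirror the list above, and everything else is illustrative rather than any Salesforce API.

```python
# Minimal sketch of the recommended workflow order. The stage names mirror
# the Build -> Package -> Replicate -> Promote list above; the checking
# logic is illustrative, not a Salesforce API.
WORKFLOW = ["build", "package", "replicate", "promote"]

def next_stage(completed: list):
    """Given the stages finished so far (in order), return the next stage,
    or None once the whole workflow is complete."""
    if completed != WORKFLOW[:len(completed)]:
        raise ValueError(f"stages must follow the order {WORKFLOW}")
    remaining = WORKFLOW[len(completed):]
    return remaining[0] if remaining else None

print(next_stage(["build", "package"]))  # prints: replicate
```

The point of the ordering is that promotion to production (via a DevOps Data Kit) always comes last, after the configuration has been proven across sandbox data spaces.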

Managing Upgrades
Local deployment also simplifies maintenance. When you need to update your configurations, create a new Standard Data Kit containing the updated metadata and run a local deployment again. If your data kit contains only data stream bundles and DLO-DMO mappings, you can instead use the “update” functionality within the existing Standard Data Kit, which saves you from creating a new one. The deployment of a data kit is always an upsert operation.
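Because deployment is always an upsert, its merge semantics can be sketched with plain dictionaries: entries with matching keys are updated, new entries are inserted, and anything already in the data space but absent from the kit is left untouched. This is an illustrative model, not the actual deployment engine, and the stream names are made up.

```python
# Sketch of upsert semantics for a data kit deployment, modeled with dicts.
# Illustrative only -- not the actual deployment engine; names are made up.
def deploy_upsert(target_config: dict, kit_contents: dict) -> dict:
    merged = dict(target_config)   # start from what the data space already has
    merged.update(kit_contents)    # update matching keys, insert new ones
    return merged                  # nothing is ever deleted by a deployment

germany = {"Account_stream": "v1", "Lead_stream": "v1"}
kit_v2 = {"Account_stream": "v2", "Case_stream": "v1"}  # one update, one addition
result = deploy_upsert(germany, kit_v2)
# Account_stream is updated to v2, Case_stream is inserted,
# and Lead_stream (not in the kit) is left untouched.
```

The practical consequence: redeploying a kit is safe to repeat, since existing configurations are overwritten in place rather than duplicated or removed.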
What can be replicated?
Local deployment currently supports the following Data 360 configurations and process definitions, and we are actively expanding coverage: additional metadata support is already on the roadmap for upcoming releases.
- CRM Datastreams & DMO mappings
- Ingestion API stream
- Streaming App (Web and Mobile)
- Calculated Insights
- Data Action Target
- Segments
Stop paying the “Data Space Tax” and start scaling your data strategy with ease.
Example Flow: Replicating Configurations Across Data Spaces in a Sandbox Org
Below is a snapshot of a sandbox org with two data spaces:
- Default
- India
We will see how you can configure all your central or common metadata and process definitions in the default data space and replicate them in a brand-new data space with the help of data kit local deployment.

Below are a few snapshots of the configurations from default data space.
- Account CRM data stream mapped to the Account Data Model Object (DMO) in the default data space.

- Lead CRM data stream mapped to the Account Data Model Object (DMO) in the default data space.

- Opportunity CRM data stream mapped to the Account Data Model Object (DMO) in the default data space.

- OpportunityContactRole CRM data stream mapped to the Account Data Model Object (DMO) in the default data space.

- User CRM data stream mapped to the Account Data Model Object (DMO) in the default data space.

Now let us create a new data space, “Germany,” and see how easily you can duplicate all your configurations from the default data space to this new one.


Now let us create a new Standard Data Kit and add the CRM data streams (as bundles, which include the mappings) to the data kit.


After you have created a Standard Data Kit, you can deploy the configurations into “Germany” or any other data space in your sandbox org by clicking the “Local Data Kit Deploy” button. You need to provide the target data space as input, along with the org ID (required for CRM data streams).
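Those inputs can be captured in a small validation helper. This is a sketch under the assumption, stated above, that the org ID is mandatory only when the kit contains CRM data streams; the field names are illustrative, since the real inputs are supplied through the UI dialog rather than any documented API.

```python
# Illustrative check of the inputs a local deployment needs. The field
# names are hypothetical; the real inputs go through the UI dialog.
def missing_deploy_inputs(request: dict, contains_crm_streams: bool) -> list:
    """Return the list of inputs still missing from a deploy request."""
    missing = [key for key in ("data_kit", "target_data_space")
               if not request.get(key)]
    if contains_crm_streams and not request.get("org_id"):
        missing.append("org_id")   # org ID is required for CRM data streams
    return missing

request = {"data_kit": "CRM_Core_Kit", "target_data_space": "Germany"}
print(missing_deploy_inputs(request, contains_crm_streams=True))   # ['org_id']
print(missing_deploy_inputs(request, contains_crm_streams=False))  # []
```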

You can check the status of deployment via the Deployment History tab on the data kit page.

After the data kit is deployed successfully, you can verify the deployed configurations in the new data space (Germany, in this case). As you can see in the snapshots below, the Account, Opportunity, Lead, and OpportunityContactRole data streams have been deployed with their DMO mappings in the Germany data space.






Now, let us say you create a new data stream, “Case,” and a new segment, “Account segment,” as shown in the snapshots below.


You can create a new data kit and follow the same local data kit deployment process.




If you add additional mappings to a data stream that was previously deployed via a data kit, you can click the “Update” button on the existing data kit to update it, without creating a new data kit altogether.
Summary
The era of the “Data Space Tax” is over. You no longer have to choose between granular data separation and operational speed. By leveraging Local Data Kit Deployment, you turn your default data space into a blueprint for success. Replicate your winning strategies, maintain total consistency, and finally start scaling your data infrastructure at the speed of your business.
Things to avoid
- When the goal is to replicate configurations within the same org, always use Standard Data Kits and Local Data Kit Deploy (this can be done only in a sandbox org). Never use DevOps Data Kits to replicate configurations across data spaces within the same org.
- When the goal is to move configurations from one sandbox org to another, or to a production org, use a DevOps Data Kit for each data space and follow the DevOps process described in a separate blog post.