This is the 2nd of 7 blog posts in the SAP Business Technology Platform Showcase series of blogs and videos. We invite you to check out the overall blog, so you can understand the full end-to-end story and the context involving multiple SAP BTP solutions.
Here you will see how to create an SAP HANA Deployment Infrastructure (HDI) container on SAP HANA Cloud and persist actual sales data originating from an external system in SAP Data Warehouse Cloud’s existing persistence area. We will show how to provide bidirectional access between SAP Data Warehouse Cloud and SAP HANA Cloud’s managed datasets. You will also see how to expose SAP HANA Cloud and SAP Data Warehouse Cloud artifacts, such as a table or a Graphical View, as OData services.
We encourage you to watch this quick introductory video to get acquainted with this scenario.
Below you can see this is the 2nd step of the “Solution Map” prepared for the journey on the referred overall blog:
Have you ever implemented an SQL data warehouse on top of the very same SAP HANA Cloud database tenant that serves as the platform for SAP Data Warehouse Cloud?
This is possible, and it is referred to as the “SAP SQL data warehouse Integration” approach.
In the picture below you can see the SAP HANA Cloud database tenant highlighted in blue:
SAP Business Application Studio can deploy an HDI container in the same SAP HANA Cloud database tenant where SAP Data Warehouse Cloud is also running, as per the drawing below:
SAP Data Warehouse Cloud serves business users, so they can manage their data with autonomy from IT. IT manages high-volume/complex datasets and deploys a ready-to-consume “HDI SAP SQL data warehouse” in order to meet business users’ expectations.
As all of this data is persisted in the same SAP HANA Cloud database tenant, we leverage an integrated approach that eliminates data duplication and data movement between the data provided by IT and the data managed by business users. Think about it: this is a huge opportunity for simplification and optimization!
Depending on the scenario, this can be a much wiser architecture than the hybrid approach, where a separate SAP HANA Cloud database tenant is necessary to store the SAP SQL data warehouse data, and where agents, remote connections, and the resulting network latency are also involved in communicating with SAP Data Warehouse Cloud. In the picture below, you can check the benefits of the “Integrated SAP SQL data warehouse” approach.
SAP Data Warehouse Cloud provides the ability to assign the HDI container to an SAP Data Warehouse Cloud Space, giving the SAP Data Warehouse Cloud editors immediate access to the objects and content of the HDI container assigned to that Space.
Let’s follow a step-by-step implementation for the “SAP SQL data warehouse Integration” approach, developing a new SAP Cloud Application Programming Model project in SAP Business Application Studio.
Although you will be able to watch and learn everything explained in this blog in a detailed technical video, you may also want to implement this concept yourself. In that case, let’s make sure you have provisioned everything you need to deploy the project.
If you still don’t have an SAP account to start your developments, don’t worry: SAP provides a completely free SAP trial account, so you can join our network and get access to all the SAP BTP solutions (SAP HANA Cloud, SAP Data Warehouse Cloud, SAP Analytics Cloud) required to implement all the projects and artifacts presented in the overview blog, except the steps proposed in this specific blog, which require opening a ticket at SAP, something not yet available for trial accounts.
We will use SAP Business Application Studio for developing this project. If you still do not have access, you can learn how to set up a new subscription here.
The project source code is available in the official GitHub SAP-samples repository and can be cloned in SAP Business Application Studio, so you can easily reproduce this deployment in your own landscape.
We assume you already have SAP HANA Deployment Infrastructure (HDI) skills for deploying this project. You can watch this official SAP HANA Academy video if you want to improve your knowledge, and also take a look at the SAP Help Portal – HDI Reference documentation.
Now, we will split the scenario into five main topics to facilitate your understanding:
In our end-to-end demonstration, we talk about a utility company that wants to optimize its energy production, as already explained in the overall blog. Our data warehouse will have four main source tables/views:
1. Energy Consumption Actual – Quantity of energy consumed by people in Germany, in MWh
2. Energy Production Actual – Quantity of energy produced by utilities
3. Energy Consumption Predicted (calculation view) – Applying a Machine Learning algorithm (presented in Blog 4: Run future sales prediction using SAP HANA Cloud Machine Learning algorithms), SAP HANA Cloud will generate predicted values for the quantity of energy that will be consumed by people in the future
4. Energy Production Planned – Quantity of energy planned for production by utilities
2- SAP Data Warehouse Cloud accessing SAP HANA Cloud’s managed datasets
For this technical use case, five years of Energy Consumption Actual data is exported by a system of record (e.g., SAP ERP) in .csv format, and we will persist this data in an HDI container deployed in the same SAP HANA Cloud database tenant where SAP Data Warehouse Cloud is running.
We understand that there are multiple ways to load data into SAP Data Warehouse Cloud, as already presented in Blog 1: Load data into SAP Data Warehouse Cloud, and that we could use even more sophisticated alternatives for replicating data. However, the intention here is to stick to the technical example of populating the SAP SQL data warehouse integrated with SAP Data Warehouse Cloud.
A possible scenario, for example, is SAP Landscape Transformation Replication Server replicating ERP data to the SAP SQL data warehouse, which is, in fact, an HDI container deployed in the mentioned SAP HANA Cloud database tenant.
We will then create this table, populate it with historical data, and make it available for consumption directly from SAP Data Warehouse Cloud Editors, with no data movement at all.
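As a rough illustration of the design-time artifact involved (the file name, table name, and columns below are hypothetical, not the repository’s actual definitions), the Energy Consumption Actual table could be declared in the HDI container as an .hdbtable file, which the HDI Deployer turns into a runtime table in the container’s schema:

```sql
-- db/src/energyConsumptionActual.hdbtable (hypothetical sketch)
-- HDI deploys this design-time definition into the container's schema;
-- the historical .csv data can then be loaded through a companion
-- .hdbtabledata artifact that maps the file's columns to this table.
COLUMN TABLE "ENERGY_CONSUMPTION_ACTUAL" (
    "CONSUMPTION_DATE" DATE          NOT NULL,
    "COUNTRY"          NVARCHAR(3)   NOT NULL,
    "QUANTITY_MWH"     DECIMAL(15,2),
    PRIMARY KEY ("CONSUMPTION_DATE", "COUNTRY")
)
```

Because the container lives in the same tenant as SAP Data Warehouse Cloud, this table can be read from the assigned Space without any replication.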
3- SAP HANA Cloud accessing SAP Data Warehouse Cloud’s managed datasets
In SAP Data Warehouse Cloud, we loaded Energy Production Actual data, and created a “Graphical View” comparing Consumption and Production Actual values. This comparison provides business users with an interesting analysis.
Now, IT wants the SAP SQL data warehouse to consume the “Consumption vs Production Actual” Graphical View, modeled autonomously by business users in SAP Data Warehouse Cloud. The intention is to expose this artifact as an API, so external applications can consume it.
You will see how to make SAP Data Warehouse Cloud’s artifacts available for consumption in the SAP SQL data warehouse.
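Conceptually, an HDI container reaches an object owned by another schema through a grants file plus a synonym. The two sketches below use assumed placeholder names (“DWC_SPACE” standing in for the Space’s schema, and the service and view names being hypothetical):

```json
// cross-schema-access.hdbgrants (hypothetical sketch)
// Grants the container's object owner read access to the Space schema,
// via a bound user-provided service named "dwc-space-ups".
{
  "dwc-space-ups": {
    "object_owner": {
      "schema_privileges": [
        { "reference": "DWC_SPACE", "privileges": ["SELECT"] }
      ]
    }
  }
}
```

```json
// consumptionVsProduction.hdbsynonym (hypothetical sketch)
// Makes the DWC-managed Graphical View addressable inside the container.
{
  "CONSUMPTION_VS_PRODUCTION": {
    "target": {
      "schema": "DWC_SPACE",
      "object": "Consumption_vs_Production_Actual"
    }
  }
}
```

Once the synonym is deployed, the view can be queried or modeled on top of like any local object, still with no data movement.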
4- Exposing artifacts as OData services
The SAP HANA Cloud database tenant we use for the SAP SQL data warehouse is a powerful technical platform. In this example, we will show how to expose both the Energy Consumption Actual table and the “Consumption vs Production Actual” Graphical View (which is managed by SAP Data Warehouse Cloud) as OData services.
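In a CAP project, such an exposure boils down to a short service definition; the CAP runtime then serves each entity as an OData endpoint. The sketch below is illustrative only (entity and element names are assumptions, not the repository’s actual code):

```cds
// srv/energy-service.cds (hypothetical sketch)
service EnergyService {

  // Proxy entity over the existing HDI table: @cds.persistence.exists
  // tells CAP not to create a new table, but to map to the deployed one.
  @cds.persistence.exists
  @readonly
  entity ConsumptionActual {
    key consumptionDate : Date;
    key country         : String(3);
        quantityMWh     : Decimal(15, 2);
  }
}
```

The same pattern applies to the Graphical View: a proxy entity mapped to the synonym created earlier exposes the DWC-managed view through the very same OData service.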
5- Consuming the exposed OData services
We will now show how easy it is to consume these OData service APIs from any browser, any SAP or non-SAP application, SAP BTP components, or any analytics solution that provides OData support.
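To make the payload shape concrete, here is a small self-contained Python sketch. The response body is a hypothetical example of what an OData (v2-style) entity set such as ConsumptionActual might return; the field names match the assumptions used earlier in this blog, not the actual service:

```python
import json

# A hypothetical OData v2-style response body, as any HTTP client
# might receive it from the exposed ConsumptionActual entity set.
sample_response = json.dumps({
    "d": {
        "results": [
            {"consumptionDate": "2020-01-01", "country": "DEU", "quantityMWh": 1250.5},
            {"consumptionDate": "2020-01-02", "country": "DEU", "quantityMWh": 1301.0},
        ]
    }
})

def total_consumption(body: str) -> float:
    """Sum the consumed quantity across all records in the response."""
    records = json.loads(body)["d"]["results"]
    return sum(r["quantityMWh"] for r in records)

print(total_consumption(sample_response))  # 2551.5
```

Any client that can issue an HTTP GET and parse JSON, from a spreadsheet to an analytics tool, can consume the service in exactly this way.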
Now that you understand the complete scenario, you can watch this detailed technical video and check how this project can be implemented.
All source code and data are available in the official GitHub SAP-samples repository.
In this blog, you learned how to leverage SAP Data Warehouse Cloud’s persistence area, extending it for integration with a pure SAP SQL data warehouse. We also demonstrated how IT can create and populate a source table that will be consumed by business users in SAP Data Warehouse Cloud, with no data duplication or movement.
All of your feedback is appreciated. Enjoy!
Blog 1: Location, Location, Location: Loading data into SAP Data Warehouse Cloud: how to easily consume data from systems of record (e.g., SAP ERP), cloud and on-premise databases (e.g., SAP HANA, SQL Server, Oracle, Athena, Redshift, BigQuery, etc.), OData services, csv/text files available on your own local computer, or any file/object store (e.g., Amazon S3). We will leverage SAP Data Warehouse Cloud’s Replication and Data Flow capabilities, as well as demonstrate how to access remote sources using data virtualization.
Blog 3: SAP Continuous Integration & Delivery (CI/CD) for SAP HANA Cloud: how to develop and trigger a pipeline using either Jenkins or SAP Continuous Integration and Delivery to automate the deployment of the above SAP HANA Cloud application on multi-target (DEV/PRD) landscapes.
Blog 4: Run future sales prediction using SAP HANA Cloud Machine Learning algorithms: how to create an SAP HANA Cloud HDI container, load training and testing historical data, and run Predictive Analytics Library (PAL) procedures for just-in-time predicting future sales (energy consumption) values.
Blog 5: Develop an SAP HANA Cloud native application: how to create an SAP Cloud Application Programming Model project, which will manage additional data values, working on the back-end application (HDI providing OData services) as well as the front-end SAP Fiori/SAPUI5 application, deployed on dedicated services in SAP BTP.
Blog 6: Provide governed business semantics with SAP Data Warehouse Cloud: how to consume all of the multiple data sources referenced in the blog, enabling business users with self-service data modeling, harmonization, transformation and persistence.
Blog 7: Consume SAP Data Warehouse Cloud’s assets using SAP Analytics Cloud: how to provide self-service business insights to the business community. We will also demonstrate how to use SAP Analytics Cloud’s Smart Insights and Smart Discovery augmented analytics features.