Use-case: Data Integration with Apache Kafka & SAP Financial Products Subledger

Dr. Christine Unkmeir has contributed an article about “Data Monitoring with KaDeck in the context of data integration with Apache Kafka and SAP Financial Products Subledger”.

The article talks about how our monitoring solution KaDeck helps to identify corrupt data sets and enables full traceability of the data pipeline for business and IT users.

If you want to learn more about integrating data from Apache Kafka to SAP, you can find our article here.

The rest of this article is about using a data-centric monitoring solution to get full data transparency. Why is monitoring so important?

Why monitoring?

We believe that monitoring, tracing, and analyzing data is an essential and critical capability for businesses.

Establishing control over data for compliance or regulatory reasons alone is important, but it leaves out another crucial point.

Data transparency in software environments not only makes it possible to deliver IT application projects cost-effectively and on time, but also prevents undesirable developments and simplifies the search for erroneous data. Enabling business users to trace the data behind their calculation results is an important aspect and leads to greater efficiency.

Common use-cases of KaDeck

KaDeck is designed to meet multiple requirements of today’s data-driven companies when working with Apache Kafka. How so? KaDeck is the product of years of experience in Apache Kafka projects, built to successfully master the challenges that emerge in them.

KaDeck is used in various scenarios. To name a few:

  • Development of polyglot streaming applications
  • Integrating with other streaming applications
  • Testing and analyzing Java data-mapping components (e.g. as a custom codec)
  • Data integration with Apache Kafka and SAP
  • Monitoring production environments
  • Data access for business departments for data analysis and in case of system failures

We talk to our customers and partners on a regular basis to learn about their use cases and to continually improve and enhance our products and solutions. And we are always happy to hear their success stories.

Orchestration of SAP data

Data integration scenarios involving SAP and Apache Kafka are very common for us. Typical challenges include referential and temporal dependencies, semantic correctness, and full transparency in case of failure. Contact us if you want to learn more about how KaDeck can help you solve these challenges.

Follow us on Twitter for updates and feel free to contact us.