KaDeck use-case: Data Integration with Apache Kafka & SAP Financial Products Subledger
Dr. Christine Unkmeir has contributed an article titled “Data Monitoring with KaDeck within Apache Kafka in the context of integrating SAP Financial Products Subledger”.
KaDeck, a data-centric Kafka UI & monitoring tool, is our first released product besides customized enterprise solutions.
We believe that monitoring and analyzing data is an essential and critical aspect for businesses.
Establishing control over data for compliance or regulatory reasons alone is important, but that leaves out another crucial point.
Data transparency in software environments not only makes it possible to implement IT application projects cost-effectively and on schedule, but also helps avoid undesirable developments and simplifies troubleshooting in the event of a breakdown.
Common use-cases of KaDeck
KaDeck is designed to meet multiple requirements of today’s data-driven companies when working with Apache Kafka. How so? KaDeck is the product of years of experience in Apache Kafka projects, built to successfully master the challenges that emerge there.
KaDeck is used in various scenarios. To name a few:
- Development of polyglot streaming applications
- Integrating with other streaming applications
- Testing and analyzing Java data-mapping components (e.g. as a custom codec)
- Data integration with Apache Kafka and SAP
- Monitoring production environments
- Data access for business departments for data analysis and in case of system failures
We talk to our customers and partners on a regular basis to learn about their use cases and to continually improve and enhance our products and solutions. And we are always happy to hear their success stories.
Orchestration of SAP data
Data integration scenarios involving SAP and Apache Kafka are very common for us. To support these use-cases, we have announced Xeotek SAAPIX, a new data integration pipeline for integrating data into SAP FPSL.