TRANSCRIPT
Copyright © 2017, Oracle and/or its affiliates. All rights reserved. |
Building Agile and Resilient Schema Transformations using Apache Kafka and ESBs: Transformation-free Data Pipelines by Combining the Power of Apache Kafka and the Flexibility of ESBs
Ricardo Ferreira, Principal Solutions Architect, Cloud Solution Architects Team (A-Team) | March 08, 2017 | @jricardoferreir
Building Agile and Resilient Schema Transf. using Kafka and ESB's
Safe Harbor Statement
The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.
Who am I?
• Ricardo Ferreira – Principal Solutions Architect, Oracle
• Freaking nerd, proud husband and father
• I have been writing code since 1997
• Currently working in the Oracle A-Team
• Author of a couple of Kafka-based projects:
– Service Bus Transport for Kafka
– Stream Explorer Adapter for Kafka
@jricardoferreir
Agenda
1. API Changes, Transformations and Decoupling
2. Hands-on Demonstration using the Oracle Cloud
3. Why Apache Kafka instead of other Options?
Wait… did you say ESB's? Aren't they evil?
• Modern Cloud-Native applications are built on top of design principles that don't include ESBs. One of them is Smart Endpoints and Dumb Pipes.
[Diagram: Services A through F all coupled through a central Service Bus]
Smart Endpoints and Dumb Pipes
• Let the endpoints handle the transformation logic required to expose and/or invoke other endpoints. Pipes must only carry the messages in and out.
[Diagram: Service A talking to Service B directly over REST or Messaging]
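As an illustration of that split, here is a minimal Python sketch (all service and field names are hypothetical, not from the talk's demo): the pipe only moves messages, while each endpoint owns its own mapping logic.

```python
from queue import Queue

# The "dumb pipe": it only carries messages, no transformation logic inside.
pipe = Queue()

def service_a_send(pipe, order_id):
    # Smart endpoint: Service A shapes the message before it enters the pipe.
    pipe.put({"orderId": order_id, "source": "service-a"})

def service_b_receive(pipe):
    # Smart endpoint: Service B adapts the message to its own internal model.
    msg = pipe.get()
    return {"id": msg["orderId"], "received_from": msg["source"]}

service_a_send(pipe, 42)
print(service_b_receive(pipe))  # {'id': 42, 'received_from': 'service-a'}
```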
Wait… but what if the API changes?
We might have some options. And along with them… trade-offs:

• Service Versioning: Widely accepted among the developer community, but it is not bullet-proof and creates a lot of code and operational overhead. To paraphrase Jamie Zawinski: "I'll use versioning. Now you have 2.1.0 problems."
• Deployment Pipelines: Effective, but relies on the assumption that developers will coordinate to handle the API changes. This might work for certain cases, but it is reactive and creates organizational overhead.
• Design for Change Principle: It creates a more complex architecture upfront, but handles changes almost on-the-fly and allows services to keep running with higher uptime while teams coordinate to evolve their systems.
Design for Change Principle
• Whenever possible, exchange messages using messaging technologies. This decouples the communication while providing a way to filter and correct message schemas. Use REST for synchronous use cases only.
Phase 1 :: Leveraging messaging between microservices
[Diagram: Service A → Topic → Service B, messaging using topics]

Benefits of this design:
• Messages will never be lost
• Reliability during maintenance
• Possibility of schema handling
• UI-ready using reactive coding
• Scalability using less hardware
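The "messages will never be lost" benefit can be sketched with a plain list standing in for a durable topic (a simplification I am making; real topics are persisted, partitioned logs):

```python
# A list stands in for a durable topic: records survive consumer downtime.
topic = []

def publish(topic, record):
    # The producer is never blocked by a slow or offline consumer.
    topic.append(record)

# Service B is down for maintenance while Service A keeps publishing...
for supply in ["water", "food", "ammo"]:
    publish(topic, {"supply": supply})

# ...and when Service B comes back, nothing was lost.
consumed = [record["supply"] for record in topic]
print(consumed)  # ['water', 'food', 'ammo']
```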
Design for Change Principle
• API providers hook up directly while API consumers hook up indirectly. By using different message channels you can plug in schema transformations on-the-fly. By default, messages simply pass through.
Phase 2 :: Allow schema transformation by foreseeing message channels
[Diagram: Service A and Service B exchanging messages through separate inbound and outbound channels (two topics per service)]
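With two topics per service, a default pass-through stage connects them, and a real transformation can later be plugged into that same seam. A minimal sketch (channel names and the `transform` hook are my own illustration):

```python
# Two channels per service: producers write to "in", consumers read from "out".
service_b_in, service_b_out = [], []

def passthrough(source, sink, transform=lambda m: m):
    # Default behavior: just move messages across. A schema transformation
    # can be plugged into `transform` later without touching either service.
    for message in source:
        sink.append(transform(message))
    source.clear()

service_b_in.append({"requestorName": "Rick"})
passthrough(service_b_in, service_b_out)
print(service_b_out)  # [{'requestorName': 'Rick'}]
```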
Design for Change Principle
• When the time comes and your service API needs to change, simply make the change and let a third-party service handle the schema evolution. This service's sole responsibility is handling schemas. Ring any bells?
Phase 3 :: Plug in schema transformation engines on-the-fly
[Diagram: Service A and Service B with two topics per service, and a Transformation Service (A.K.A. "Service C") plugged in between the topics]
This can be implemented in a variety of ways, but using ESBs seems to be a natural fit. If this is a one-time thing, then maybe Serverless could work.
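The transformation service's job can be shown with a hypothetical schema change (the field rename below is my example, not from the talk): it maps old-schema records to the new schema and passes new-schema records through untouched.

```python
# Hypothetical schema change: v1 used "name", v2 expects "requestorName".
def transform_v1_to_v2(record):
    if "name" in record:            # only touch old-schema records
        record = dict(record)       # don't mutate the caller's record
        record["requestorName"] = record.pop("name")
    return record

in_topic = [{"name": "Carl"}, {"requestorName": "Michonne"}]
out_topic = [transform_v1_to_v2(r) for r in in_topic]
print(out_topic)  # [{'requestorName': 'Carl'}, {'requestorName': 'Michonne'}]
```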
Agenda
1. API Changes, Transformations and Decoupling
2. Hands-on Demonstration using the Oracle Cloud
3. Why Apache Kafka instead of other Options?
Understanding the "Use Case"
More specifically... this use case is about the Saviors/Negan
Collecting Supplies from the Communities
Supplies Collection:
• Is it time to collect?
• What do we need?
• Half of their assets?

Scavenge:
• Check inventory
• Check their demands
• Do scavenging…

Lesson Teaching:
• Is it 50% of their assets?
• Was there disrespect?
• Is Lucille angry?
Services Design :: First Iteration
[Diagram: Initiator Service invoking the Supplies Collection Service, Scavenge Service and Lesson Teaching Service directly]
Services Design :: Second Iteration
[Diagram: Initiator Service connected to the Supplies Collection, Scavenge and Lesson Teaching Services through topics]
Services Design :: Third Iteration
[Diagram: Initiator Service and the Supplies Collection, Scavenge and Lesson Teaching Services connected through pairs of topics, with a Service Bus sitting between each pair of topics: the Design for Change Principle applied]
Services Design :: Fourth Iteration
[Diagram: the same topology on the Oracle Cloud: services running on Oracle Java Cloud Service, Service Bus on Oracle SOA Cloud Service, Kafka topics hosted on Oracle IaaS Cloud Service, wired together via the Kafka Transport]
Enough slides, show me code
Do you want to win a Negan Souvenir?
http://141.144.29.149:8080/oracle-code-2017/collectSupplies?name=XYZ
{
"timestamp" : "MM/dd/yyyy hh:mm:ss",
"requestorName" : "XYZ",
"outcome" : true
}
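Client-side, the response shape above can be checked before use. A minimal sketch (the field names come from the payload on this slide; the sample timestamp value and the validation logic are my own):

```python
import json

# Sample payload mirroring the response schema shown above.
raw = '{"timestamp": "03/08/2017 10:15:00", "requestorName": "XYZ", "outcome": true}'

def validate_response(raw):
    body = json.loads(raw)
    # Check that exactly the documented fields are present.
    assert set(body) == {"timestamp", "requestorName", "outcome"}
    assert isinstance(body["outcome"], bool)
    return body

print(validate_response(raw)["outcome"])  # True
```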
Agenda
1. API Changes, Transformations and Decoupling
2. Hands-on Demonstration using the Oracle Cloud
3. Why Apache Kafka instead of other Options?
What is Apache Kafka?
• Simply put, Kafka is a Distributed Streaming Platform:
– Allows ingestion and consumption of streams of records
– Allows streams of records to be persisted with fault tolerance
– Allows streams of records to be processed as they happen
• It is comprised of six main modules:
– Kafka Cluster
– Producer API
– Consumer API
– Connector API
– Streams API
– REST Proxy *
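As a taste of the Producer API, here is a sketch of a typical producer configuration. The property names are standard Kafka client settings; the broker addresses are placeholders, and no broker is actually contacted here.

```python
# Minimal Kafka producer configuration (property names from the standard
# Kafka client configuration; host names are placeholders).
producer_config = {
    "bootstrap.servers": "broker1:9092,broker2:9092",  # placeholder hosts
    "acks": "all",    # wait for all in-sync replicas to acknowledge
    "retries": 3,     # retry transient send failures
    "key.serializer": "org.apache.kafka.common.serialization.StringSerializer",
    "value.serializer": "org.apache.kafka.common.serialization.StringSerializer",
}
print(sorted(producer_config))
```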
Kafka Design "Crash Course"
• For implementation simplicity, Kafka does not support the concept of many destination types; topics are the only abstraction. However, both Queuing and Pub/Sub scenarios are supported.
• In Kafka, Queuing is just a matter of how consumers are grouped together.
• Each consumer has a property called group id. When two (or more) consumers have the same group id value, they belong to the same group.
• Consumers belonging to the same group load balance among themselves to fetch records from the topic's partitions.
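The group-id mechanics can be simulated: partitions are divided among the consumers that share a group id (Queuing), while each distinct group receives every partition (Pub/Sub). This round-robin version is a simplification of Kafka's real partition assignors.

```python
from collections import defaultdict

def assign(partitions, consumers):
    """Round-robin partitions across the consumers of one group (simplified)."""
    assignment = defaultdict(list)
    for i, partition in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(partition)
    return dict(assignment)

partitions = ["topic-0", "topic-1", "topic-2", "topic-3"]
# Queuing: consumers share a group id, so they split the partitions.
print(assign(partitions, ["c1", "c2"]))
# Pub/Sub: every group gets its own full assignment of the topic.
print(assign(partitions, ["other-group-c1"]))
```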
Kafka Design "Crash Course"
• The basic abstraction in Kafka is called a topic.
• Queuing and Pub/Sub scenarios are supported.
• Topics are broken down into multiple partitions.
• Partitions are spread over the Kafka cluster.
• Each partition is an ordered, immutable sequence of records that is continuously appended to a structured commit log.
• Each committed record has an offset that uniquely identifies it within its partition.
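The ordered, immutable sequence with per-record offsets can be sketched as a tiny in-memory model (an illustration only, not Kafka's on-disk format):

```python
class Partition:
    """An append-only commit log: records get monotonically increasing offsets."""
    def __init__(self):
        self._log = []

    def append(self, record):
        offset = len(self._log)   # next offset = current length of the log
        self._log.append(record)
        return offset

    def read(self, offset):
        return self._log[offset]  # an offset uniquely identifies a record

p = Partition()
assert p.append("a") == 0
assert p.append("b") == 1
assert p.read(1) == "b"           # existing records are never mutated
```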
Kafka Design "Crash Course"
• Each partition has an ever-growing commit log.
• A log can simply be viewed as a list of linked files.
• Partition logs always carry the topic name and the identifier of the partition (e.g., "topic-0").
• Kafka's commit log has been designed for maximum efficiency, therefore:
– File journaling (O(1) structures) may be faster than RAM
– There is no JVM caching; the OS page cache is used instead
– Zero-copy transfer: CPU offloading using sendfile()
Kafka Design "Crash Course"
• As mentioned before, Kafka supports fault tolerance. That is achieved with the concept of replication, which happens on a per-partition basis.
• Partition backups are spread over the cluster based on the number defined in the replication factor.
• When replication is in place, one broker is elected to be the leader of a given partition. Only leaders serve reads and writes of records. The remaining brokers (hosting replicas) are called followers, or ISRs (in-sync replicas).
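Per-partition replication can be modeled in a few lines. This is a simplified layout (first replica in each list plays the leader), not Kafka's actual replica assignment algorithm:

```python
def replica_assignment(num_partitions, brokers, replication_factor):
    """Spread each partition's replicas over the cluster.
    Simplified: the first broker in each list acts as the partition leader."""
    plan = {}
    for p in range(num_partitions):
        plan[p] = [brokers[(p + r) % len(brokers)]
                   for r in range(replication_factor)]
    return plan

plan = replica_assignment(num_partitions=3, brokers=["b1", "b2", "b3"],
                          replication_factor=2)
print(plan)  # partition -> [leader, follower]
```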
Kafka Design "Crash Course"
• Any Kafka deployment must include a Zookeeper setup. That is mandatory. Therefore, it is important to understand what role Zookeeper plays in Kafka.
• Zookeeper is used to store lots of metadata:
– Controller election. Keeps track of leaders and followers.
– Cluster details. Which brokers are alive? Dead? Got stuck?
– Topic configuration. Which topics, partitions, replicas, etc.
– Quotas. How much is each client allowed to read/write?
– ACLs. Who is allowed to read and/or write to which topic?
– Old 0.8 Consumers. Offset storage and rewind operations.
[Diagram: Producers writing to a topic in the Kafka Cluster and Consumers reading from it; the Zookeeper Cluster tracks leaders, membership, metadata and quotas]