Join us for our next Elastic meetup, co-hosted with the Kafka Bay Area User Group. Thanks to Lyft for providing the venue!
The agenda for the evening is:
6:00pm: Doors open
6:00pm - 6:30pm: Pizza, Drinks & Networking
6:30pm - 7:15pm: Streaming DynamoDB changelogs to Elasticsearch using Apache Kafka & Flink by Ying Xu & Dan Fan
7:15pm - 7:45pm: Integrating Kafka into your Elasticsearch use case by Andrew Selden
7:45pm - 8:00pm: Microservices Integration Patterns with Kafka by Kasun Indrasiri
8:00pm: Additional Q&A & Networking
Abstract Information:
Microservices Integration Patterns with Kafka
Microservice composition, or integration, is probably the hardest part of a microservices architecture. Unlike conventional centralized ESB-based integration, we need to follow the smart-endpoints-and-dumb-pipes principle when integrating microservices.
There are two main microservices integration patterns: service orchestration (active integration) & service choreography (reactive integration).
In this talk, we will explore microservice orchestration, microservice choreography, event sourcing, CQRS & how Kafka can be leveraged to implement microservice composition.
Kasun Indrasiri is the Director of Integration Architecture at WSO2 & an architect with over nine years of experience in enterprise integration & microservices. He is an author & evangelist on microservices architecture, and has written the books Microservices for Enterprise (Apress, 2018 Q4) & Beginning WSO2 ESB (Apress, 2017).
He was also an architect & the product lead of WSO2 ESB, & is an Apache committer.
Integrating Kafka into your Elasticsearch use case
Andrew will provide an introduction to how users are integrating Kafka into their Elasticsearch use cases, including best practices, architecture overviews & more.
Andrew is a Senior Solutions Architect at Elastic.
Streaming DynamoDB changelogs to Elasticsearch using Apache Kafka & Flink
In this talk, we will present the architecture of Lyft's changelog data ingestion pipeline, which allows for real-time ingestion of DynamoDB changelogs into Elasticsearch. Our system uses Apache Kafka as a core pub-sub component storing all the changelog data. Apache Flink jobs are employed as connectors linking data sources & destinations. By virtue of state-of-the-art streaming technology, the whole data pipeline achieves low latency, strong message durability & ordering guarantees, with scalability & extensibility built into the design.
https://www.meetup.com/KafkaBayArea/events/254248245/?isFirstPublish=true