When we integrate different components and services in our software architecture, the first step is to select a good orchestration framework. In this opinionated article I will present my criteria for deciding which framework is the right one.
Riding the Enterprise Service Bus
As you compose services, you will notice the need for an Enterprise Service Bus (ESB) so they can communicate with each other. But an ESB can be useless if you don’t have good ETL (extract, transform, load) tools along with it to manipulate your data. In the same way, an ETL tool without a proper routing system can leave us orphaned.
We need to route messages and events and at the same time make sure data transformations take place so different endpoints with varied protocols and formats can interact with each other. That’s where integration frameworks come in.
The Enterprise Integration Patterns can help developers with both tasks: providing data transformations between outputs and inputs, and offering different routing strategies.
Orchestration Functionality
Our framework should be able to support not only the Enterprise Integration Patterns, but also a wide range of protocols and data formats.
Ease of use, especially if we have complex use cases to maintain, is relevant to keep our architecture clean. This usability should never come at the expense of the functionality or the extensibility of the framework. We want our framework to interact with a wide range of components and services.
Sacrificing features in favor of user experience will make you hit the ceiling of what you can do sooner.
Multiple Orchestration Languages
Related to ease of use is the language we need to use to build the integration routes. We don’t want to be tied to a specific language like Java or Python. There are frameworks that allow you to build the integration route in different languages.
When our orchestration builders come from different backgrounds, or are not very tech savvy, being able to integrate in different languages may come in handy. We want them to feel comfortable in whatever language they use to orchestrate.
Tutorials and documentation
Good usability has never made up for bad documentation. We need tutorials, manuals, helpers,… both for users and developers. And of course, we will want some reference to fall back on when something doesn’t work as expected and we need to find out why.
Customization
Sometimes we need to use a data format or connect to a service not currently supported by the framework. Whether we do it ourselves or hire someone to do it, can the existing functionality be extended to support them?
Dependency size
We don’t want to drag a heavy dependency into our architecture. And beyond size on disk, we also want a framework with a light footprint on our hardware that, at the same time, doesn’t drag down our performance.
A smaller dependency usually also means less source code that can introduce bugs into your software.
Technical Support
Maybe we don’t need help, but what if we have a problem? Can we hire someone for technical support? Are there developers available for hire to implement our custom features?
Having a wide range of companies offering services around our framework will greatly improve our experience in the long run.
License
It doesn’t matter how many companies support the framework if the license is restrictive. Only software with a FOSS license guarantees you will not be tied to the whims or misfortunes of any external force or private company.
If you have been following me, this will not come as a surprise. You know I already have a preferred choice that scores high on all these criteria: Apache Camel.
I would like to present the ETL and integration editor that Rachel and I have been working on for the past year, with initial help from Zineb: Kaoto.
What is Kaoto?
Kaoto is an integration editor to create and deploy integrations in a low-code and no-code way, based on Apache Camel. It combines a source code editor and a drag-and-drop graphical space, synchronized with each other. It can run both standalone and as a service (SaaS).
With the no-code mode, the user can build the entire integration orchestration using drag and drop. Kaoto has a step catalog listing all the available building blocks users may want in order to transform data or integrate with services.
The source code will be available for users curious to know what they are going to deploy. But they never have to understand, or even see, that code.
Example of building block drag and drop
With the low-code mode, users can learn how to create integrations and, at the same time, control what they are deploying. They can use both the drag and drop and the source code editor, which will autocomplete what the user can’t or doesn’t know how to write. Meanwhile, the graphical space will show the integration being built, and drag and drop will still be available to complete or accelerate the development.
Example of low code integrations.
Kaoto can help users start with Apache Camel and slowly build their knowledge. All the source code generated is clean and viewable in any IDE.
Customizing Kaoto
Each building block type can have its own microfrontend. This is useful when you add your own building blocks to your Kaoto instance. But it can also help adapt Kaoto to different kinds of users, hiding or extending the details that matter for each use case. Extensions can also serve as manuals and helpers inside Kaoto.
When used as a service, the extensions and the list of available building blocks are settings that can be stored in the cloud. Administrator users can therefore modify this configuration, which will refresh live in the users’ playgrounds. In addition, as this configuration can live in the cloud, different users can share it. This can help organizations accommodate different groups of users, offering different configurations depending on the profile.
What is on the roadmap?
We started the development focused on Kamelet Bindings, but we soon realized we could add more languages. Editing of Apache Camel routes (in YAML form) and Kamelet definitions is next in the development queue. We are also working on translating from other integration languages to the Camel DSL. This can help users migrate to Apache Camel.
We will soon have one-click support for cloud-native Apache Camel deployments via Camel K. Matej is close to having an operator for Kubernetes clusters, which will simplify the installation of Kaoto in the cloud even further.
You can quickly test it via Docker as described in the quickstart. Make sure your Docker images have access to the internet so they can reach the default remote configuration!
You may have heard some or all of these keywords before: middleware, integration, orchestration. And you may be wondering why and when to use them. Take a walk with me to understand when and how integration frameworks are useful.
Imagine you are in charge of solving a new need at your company. There is no complete software stack for what you need. You will have to involve your team to create something new. Even if you reuse some components, you have to make them interact and talk to each other.
You are an experienced software engineer and have previously solved many of the requirements with components you are already familiar with. But now you have to orchestrate all these components together and make them work like clockwork. Now you need a proper integration. You want all of them to cooperate smoothly in your architecture.
The first thing any good developer thinks about is building custom software that acts as the glue between all these components. Maybe adding some fancy extra functionality. And (why not?), as you are at the beginning of an exciting new project, you probably want to try all those new technologies you have been reading and hearing about. Whatever the fad buzzword is right now, you are willing to try it.
Although this may be appealing, your inner experienced engineer tells you to stop. There’s something else you have read about: those integration frameworks. Could they be useful here?
The Integration Paradigm
As much as we would like to start a clean new project from scratch and throw all our ingenuity into it, we shouldn’t reinvent the wheel. Let’s take a look at what this middleware or integration software is.
Middleware, or integration software, can help us orchestrate and automate the interaction between different applications, APIs, third-party services, or any other software piece we may have to connect.
A proper integration tool should provide us with the following features: transformations, integration patterns, and connectors to existing protocols and components.
Transformations
When we connect different components of an architecture, they rarely speak the same languages or, in this case, data formats. Some components will output XML that has to be fed to the following component in JSON form. Maybe we even need to add or remove some attributes in that JSON data.
We need some way to easily transform the data traveling from one component to the next so it fits properly.
If we want to do this with our own script, there are many libraries that can help us, like Jackson or the built-in XML libraries in Python. We even have the XSLT language to transform XML. But to use any of these properly, we would first have to learn them. And any code we write will have to be maintained and upgraded properly.
An integration framework allows us to define the mapping between the output of one component and the input of the next, so we can forget about the explicit implementation. The less code we have to maintain, the better.
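To make this concrete, here is a minimal sketch in Apache Camel’s Java DSL (Camel is presented further below). The Kafka topics and the internalId attribute are hypothetical placeholders: the route parses incoming JSON into a map, drops an attribute the next component does not expect, and serializes the result back.
// camel-k: language=java dependency=camel:gson
import java.util.Map;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.dataformat.JsonLibrary;

public class CleanOrders extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Hypothetical input topic
        from("kafka:orders?brokers=my-cluster-kafka-bootstrap:9092")
            // Parse the incoming JSON into a Map so we can manipulate it
            .unmarshal().json(JsonLibrary.Gson)
            // Remove an attribute the next component does not expect
            .process(exchange -> {
                Map<?, ?> body = exchange.getMessage().getBody(Map.class);
                body.remove("internalId");
            })
            // Serialize back to JSON for the next component
            .marshal().json(JsonLibrary.Gson)
            .to("kafka:orders-clean?brokers=my-cluster-kafka-bootstrap:9092");
    }
}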
Enterprise Integration Patterns
Not all workflows in the architecture will be linear. Some of the steps will require broadcasting; some will require conditional flows. Some will require waiting for the output of several components to conflate the data. These action patterns have been studied for a long time. And as with software development patterns, you can classify and study them to create better integrations.
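As a sketch of two classic patterns, here is what a splitter (breaking a batch apart) and a multicast (broadcasting to several consumers) can look like in Camel’s Java DSL; all the endpoints here are hypothetical placeholders.
// camel-k: language=java
import org.apache.camel.builder.RouteBuilder;

public class PatternSketch extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Splitter: break an incoming file apart and handle each line independently
        from("file:batches/in")
            .split(body().tokenize("\n"))
            .to("kafka:lines?brokers=my-cluster-kafka-bootstrap:9092");

        // Multicast: broadcast the same message to several consumers
        from("kafka:events?brokers=my-cluster-kafka-bootstrap:9092")
            .multicast()
            .to("log:audit", "kafka:analytics?brokers=my-cluster-kafka-bootstrap:9092");
    }
}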
Connectors
All of the above is useless if we can’t connect to (and from) the specific components we need.
Our ideal integration framework should offer support for common protocols like FTP, HTTP, JDBC,… It should also offer support for connecting to common components like a mail server, messaging services, Atom feeds,… We could even claim that no integration tool is good unless it also supports specific well-known services, like sending a message through a Telegram bot or storing information in Elasticsearch.
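To illustrate how little glue code such connectors need, here is a minimal sketch that polls an FTP folder and announces every new file through a Telegram bot; the host, credentials, token and chat ID are placeholders.
// camel-k: language=java
import org.apache.camel.builder.RouteBuilder;

public class ConnectorSketch extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Poll a (hypothetical) FTP folder for new files
        from("ftp://user@ftp.example.com/reports?password=secret")
            // Build a human-readable notification from the file name
            .setBody().simple("New report available: ${header.CamelFileName}")
            // Send it through a (hypothetical) Telegram bot
            .to("telegram:bots?authorizationToken=my-bot-token&chatId=12345");
    }
}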
Integration Frameworks as Lego building blocks
Being able to seamlessly connect one component to the next, without having to worry about the specifics of their interfaces, is what distinguishes an average integration tool from a good one.
Apache Camel
Let’s talk about something less abstract. At this point you may be wondering where you can find a good integration framework.
Apache Camel is not only one of the most active projects inside the Apache Software Foundation; it is also the lightest and most complete integration framework available. And on top of that, it is Free and Open Source Software!
Camel is already an old actor in the integration world. It has support for hundreds of components, protocols and formats. Some of these components come in very handy, allowing the user, for example, to connect to any REST API they need.
Camel uses its own DSL, a simplified language to define workflows easily, step by step.
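As a first taste of that DSL before the larger examples below, here is a minimal sketch of a route that fires every ten seconds and writes a greeting to the log, using only core Camel components.
// camel-k: language=java
import org.apache.camel.builder.RouteBuilder;

public class HelloRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Fire every 10 seconds
        from("timer:hello?period=10s")
            // Replace the (empty) message body with a greeting
            .setBody().constant("Hello from Camel!")
            // Write the body to the log
            .to("log:info");
    }
}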
Camel K
Camel is also available on Knative. This means we can use it in a serverless environment, making sure the orchestration between services runs and scales properly.
Camel K Orchestration Example
This example demonstrates how to orchestrate integrations using Camel K and Kafka as a messaging service. We are going to implement two integrations that interact through a database to simulate how cat adoptions work.
Two integration workflows that simulate how cat adoptions work
One integration will store cats coming from Kafka in the database, waiting for a person to adopt them. The second integration will receive people interested in adopting and will match them with cats.
Cat Input from Kafka to Database
First, we are going to implement the storage of incoming cat messages in the database.
As the code below shows, the Camel DSL is very intuitive: this integration listens to the proper Kafka topic and, for every message that arrives, unmarshals the JSON to extract the data and pushes it to the database. The Cat class is just a simple bean with getters and setters for the attributes.
// camel-k: language=java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.dataformat.JsonLibrary;

import model.Cat;

public class CatInput extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Listen to the Kafka cat topic
        from("kafka:cat?brokers=my-cluster-kafka-bootstrap:9092")
            .log("Message received from Kafka : ${body}")
            .unmarshal().json(JsonLibrary.Gson, Cat.class)
            // Store it in the database with a null person
            .setBody().simple("INSERT INTO cat (name, image) VALUES ('${body.name}', '${body.image}')")
            .to("jdbc:postgresBean?")
            // Write some log to know it finished properly
            .log("Cat stored.");
    }
}
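The Cat bean itself is not part of the original listing; a minimal sketch with just the name and image attributes used by these routes could look like this:
// Minimal bean matching the JSON attributes used by the routes
package model;

public class Cat {
    private String name;
    private String image;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getImage() { return image; }
    public void setImage(String image) { this.image = image; }
}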
Person Input from Kafka to Adopt
Now we are going to implement the reception of people wanting to adopt a cat.
This integration is a bit more complex, as we are going to introduce a conditional choice: if there is a cat available in the database, it will be assigned to the person. If there is no cat (the otherwise branch), a message will be logged saying no cat is available.
// camel-k: language=java
import org.apache.camel.builder.RouteBuilder;

public class PersonInput extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Listen to the Kafka person topic
        from("kafka:person?brokers=my-cluster-kafka-bootstrap:9092")
            .log("Message received from Kafka : ${body}")
            .log("${body} wants to adopt a cat")
            // Store the name of the person
            .setProperty("person", simple("${body}"))
            // Search for a lonely cat
            .log("...looking for available cats...")
            .setBody().simple("SELECT id, name, image FROM cat WHERE person is NULL LIMIT 1;")
            .to("jdbc:postgresBean?")
            .choice()
                .when(header("CamelJdbcRowCount").isGreaterThanOrEqualTo(1))
                    .setProperty("catname", simple("${body[0][name]}"))
                    .setProperty("catimage", simple("${body[0][image]}"))
                    .setProperty("catid", simple("${body[0][id]}"))
                    .log("Cat found called ${exchangeProperty.catname} with ID ${exchangeProperty.catid}")
                    // There's a cat available, adopt it!
                    .setBody().simple("UPDATE cat SET person='${exchangeProperty.person}' WHERE id=${exchangeProperty.catid}")
                    .to("jdbc:postgresBean?")
                    // Write some log to know it finished properly
                    .setBody().simple("Congratulations! ${exchangeProperty.catname} adopted ${exchangeProperty.person}. See how happy they are on ${exchangeProperty.catimage}.")
                    .to("log:info")
                .otherwise()
                    // Write some log to know it finished properly
                    .setBody().simple("We are sorry, there's no cat looking for a family at this moment.")
                    .to("log:info")
            .end();
    }
}
Feeding data automatically
As an extra step in this exercise, we are going to implement a final job that sends random new cat data to the Kafka “cat” topic on a timer.
The complexity in this class is not on the Camel side, but in the random generator of cat names.
// camel-k: language=java dependency=camel:gson
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.dataformat.JsonLibrary;

public class AutoCat extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Builds a random cat name and prepares the JSON payload
        Processor processor = new Processor() {
            String[] title = new String[] { "", "Lady", "Princess", "Mighty", "Your Highness", "Little", "Purry", "Empress", "Doctor", "Professor" };
            String[] firstname = new String[] { "Dewey", "Butter", "Merlin", "Epiphany", "Blasfemy", "Metaphor", "Fuzzy",
                    "Whity", "Astro", "Salty", "Smol", "Whiskers", "Scully" };
            String[] lastname = new String[] { "", "Luna", "Wild", "Dragonis", "Firefly", "Puff", "Purrcy", "Priss",
                    "Catsie" };
            Random r = new Random();

            @Override
            public void process(Exchange exchange) throws Exception {
                Map<String, String> map = new HashMap<String, String>();
                map.put("image", exchange.getProperty("catimage").toString());
                StringBuilder name = new StringBuilder();
                name.append(title[r.nextInt(title.length)]);
                name.append(" ");
                name.append(firstname[r.nextInt(firstname.length)]);
                name.append(" ");
                name.append(lastname[r.nextInt(lastname.length)]);
                // Trim in case the optional title or last name is empty
                String catname = name.toString().trim();
                exchange.setProperty("catname", catname);
                map.put("name", catname);
                exchange.getMessage().setBody(map);
            }
        };
        // Fire every 10 seconds
        from("timer:java?period=10s")
            // Take a random image
            .to("https://api.thecatapi.com/v1/images/search")
            .unmarshal().json(JsonLibrary.Gson)
            .log("A new cat arrived today ${body[0][url]}")
            .setProperty("catimage", simple("${body[0][url]}"))
            // Name the cat and prepare the JSON payload
            .process(processor)
            .log("${body}")
            .marshal().json(JsonLibrary.Gson)
            .log("We named them ${exchangeProperty.catname}")
            // Send it to the Kafka cat topic
            .to("kafka:cat?brokers=my-cluster-kafka-bootstrap:9092")
            // Write some log to know it finishes properly
            .log("Cat is looking for a family.");
    }
}
Now you are ready to implement your own orchestrations with Kafka and Camel K.
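If you deploy with Camel K, each of these files can be pushed to your cluster with the kamel CLI, for example kamel run CatInput.java; this assumes the Camel K operator, the Kafka cluster and the postgresBean data source referenced above are already in place.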