What you need to know about system integration

System integration is the process of linking different IT systems, services, and software together so that they function as a single system. It’s a way to automate back-office functions, manage business processes, and bring all aspects of a company’s operations together behind a unified user interface.

Setting up the various components involved in operating a business to communicate with each other behind the scenes, and presenting them through a single interface, saves time, creates a more user-friendly experience, and makes life easier for anyone who needs to use any combination of the components in the system.

There are several different ways to integrate systems. One example is horizontal integration, where a separate subsystem, the enterprise service bus (ESB), is used as a common interface layer between all of the other component subsystems. This reduces the number of connections needed compared with something like star integration, because system components are connected indirectly through the ESB rather than interconnected directly. Fewer connections to maintain makes system management and troubleshooting easier.
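With ten subsystems, for example, direct point-to-point (star) integration can require up to 10 × 9 / 2 = 45 separate connections, while an ESB needs only ten, one per subsystem.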

Other ways to integrate include API (Application Programming Interface) management, event streaming platforms, and message queues (MQ).

System integration is valuable for businesses because it connects the business with suppliers, customers, and shareholders in a simple, straightforward way. This reduces operational costs, improves the customer experience, and facilitates better and faster internal and external communication. Faster information flow improves productivity and ultimately, the quality of the products or services that the business provides.

Investing in system integration seems like an obvious choice for a business to make given all of the potential advantages, but there are challenges to overcome too. There can be disagreement among collaborators who use some of the components of a system about how the integrated system should look and operate. The integrated system can require ongoing management and troubleshooting, especially if it contains many different components, and there can be a lack of clear communication about whose responsibility this is in the absence of a designated system manager.

Lastly, a lack of sufficient expertise can often be an issue in creating and maintaining system integration. Skill in setting up integrated systems efficiently in a way that makes sense to all parties who will be working with them is critical for avoiding potential problems in the future.

Contents & Chapters

1. BUILDING BLOCKS FOR AN API-DRIVEN APPROACH
2. HOW TO CHOOSE BETWEEN REST APIs AND EVENT STREAMING
3. #1 FAIL IN DIGITAL TRANSFORMATION
4. ACCESS TO DATA SETS THE LIMIT FOR SUCCESS IN DIGITALIZATION
5. WHAT IS GRAPHQL?
6. ADVANTAGES OF API-LED CONNECTIVITY
7. HOW TO ACHIEVE EXCELLENCE IN YOUR INTEGRATION PROJECT DELIVERIES
8. OPERABILITY OF END-TO-END INTEGRATIONS

Building blocks for an API-driven approach

 

by Gustav Rosén

Application Programming Interfaces (APIs) play a significant role in the functionality of modern applications, and an API-driven approach to application development involves designing and building the APIs as the first step.

 

Benefits of this approach include:

  • Easy and broad accessibility of different application services
  • Highly distributed and modular application architectures
  • Ease of updating each module separately using the CI/CD pipeline
  • Avoidance of incompatibilities between an API and other applications
  • Avoidance of app and API obsolescence
  • Cloud friendliness
  • A central point of reference for the entire software development team

To reap these benefits and design your app with the needs of your customers in mind, it’s important to focus on APIs not only after software development, but by taking an API-driven approach to development from the beginning. In reality, people usually work on the delivery process and set up the architecture of the application before deciding to migrate to an API-driven development model. However, that retrofit is much more difficult, if not impossible, to accomplish on the first attempt!


So, why not make it easy, optimized, and customer-focused by starting with an API-driven development approach? To get started with this approach, focus on these building blocks:

Building Block 1: Brainstorming

The first step for being API-driven is to clearly define the goal of the API. Determine the purpose of the API, for example, to enhance third-party system compatibility, to provide easy access to services from a mobile application, or to gain a competitive advantage. Then brainstorm and identify the key services your business offers and figure out your business capabilities. Next, determine which APIs are currently being used and brainstorm which new APIs can potentially be implemented. Consider APIs for both internal and external resource integration.

Building Block 2: Know your ecosystem

Now it's time to review your enterprise ecosystems. Assess the API architectures that can be used for the developed application, for example REST, GraphQL, SOAP, gRPC, Falcor, or RPC, and determine whether it’s most advantageous to use the existing architecture or switch to another one. It’s also vital to identify any existing application silos. Another question to consider at this step is whether to use a single API or multiple APIs and/or to consolidate them for better integration.

Building Block 3: Testing Virtually

Once the design of the API and the architecture are finalized, it’s best to test it by creating a virtual API. For example, when designing a RESTful API, the validity of the REST contract must be tested. This will evaluate the functionality of the API before the actual API is launched. It’s also smart to demo the API for customers and other key stakeholders as a mock API run. This will provide valuable feedback that can be incorporated into improving the design in the early stages of development.
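To make this concrete, here is a minimal sketch of what such a virtual (mock) API could look like, assuming a Node.js/TypeScript stack with Express; the /customers resource and its fields are placeholders, not part of any real contract.

```typescript
// Minimal mock of a planned REST endpoint, assuming Node.js with Express.
// The /customers resource and its fields are illustrative placeholders.
import express from "express";

const app = express();

// Serve canned data matching the drafted contract so consumers can exercise
// the API shape before the real implementation exists.
app.get("/api/v1/customers/:id", (req, res) => {
  res.json({
    id: req.params.id,
    name: "Example Customer",
    tier: "gold",
  });
});

app.listen(3000, () => {
  console.log("Mock API listening on http://localhost:3000");
});
```

Demoing against a mock like this gives the same kind of early feedback from customers and stakeholders that the text describes, before any backend code is written.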

Building Block 4: Now it’s time for integration

After identifying the APIs and their architecture, the API design and implementation must be set at the beginning of the Continuous Integration/Continuous Delivery (CI/CD) pipeline. In other words, this is when the real API is launched. By doing it this way, your development is API-driven. This step is usually done by determining the API’s accessibility with components of the app and coding in advance, as well as examining and determining the location of the new code in the context of the entire API architecture.

Building Block 5: It’s not over yet

Starting the CI/CD pipeline with the APIs doesn’t mean that all the work is done. It’s equally important to ensure that the stages of software testing and software monitoring address the APIs. API functionality must be tested before deployment of the application. Moreover, detection of potential API problems needs to be tracked by routine software monitoring.

Building Block 6: Don’t underestimate the power of feedback

Create a feedback loop in the CI/CD process so that the API designers can receive downstream information about the API from the pretesting and post-deployment processes in the previous steps. Access to this information is important for improvement of the overall architecture and strategy of the implemented API.

To achieve success in API-driven application/software development, consider the following:

  • Provide sufficient training to the people on the team who do not have comprehensive knowledge of the APIs.
  • As you go on to build additional APIs in the future, it is critical to ensure that there is good communication between and collaboration among team members. Otherwise, consistency and compatibility between developed APIs tends to be lost over time.
  • Security is often a major issue when working with APIs, since applications that are centrally based on APIs can potentially be attacked through those same APIs. It’s important for the team to review and implement security best practices regularly while keeping the APIs flexible and scalable.
  • When your first or second API proves to be successful and you want to create a large number of APIs to gain a competitive advantage, plan their management, performance tracking, and monitoring in advance. It’s a good idea to use automated API management tooling for this.

How to choose between REST APIs and event streaming

by Gustav Rosén

You’ve reached the point in your application development where you need to build connectivity, and you feel a little stuck. Would event streaming work better for you, or REST APIs? What’s the difference between the two, what are the benefits of each option, and most importantly, which is the best choice to optimally serve your customers? Actually, the right answer is both. Keep reading to find out why.

 

Push or pull communication?


REST APIs are perfect for synchronous pull communication that works in a request/reply pattern. They provide an interface between systems that use HTTP to perform actions such as GET, POST, PUT, and PATCH on data in formats such as JSON and XML. REST APIs can be channel-specific and can also be combined in higher-level process APIs that use data from multiple systems. They are used both internally and for external, customer-facing applications. They also have the benefit of increased efficiency through caching and concurrency.

When you need asynchronous push communication that works in a publish/subscribe pattern, event streams are the right choice. This method for connectivity is mostly for internal use, for receiving replayable information within a structure that’s optimized for scale, performance, and operability. Event streams can be used for notifications, event-carried state transfers, event sourcing, and CQRS.


REST APIs or event streams? - Use both!

When you’re creating a connectivity solution for your application, the best approach is to use both REST APIs and event streams. They complement each other so well. REST APIs allow the user to connect with the application over HTTP and event streams allow the user to receive automatic updates. This combination meets the needs of most applications today and is comprehensive enough that it is likely to meet the needs of emerging technologies too.

What’s the best way to combine these two complementary connectivity solutions? If you make all of your REST APIs publish events, you get the best of both methods. That way, when a user updates information through the REST API, it results in publishing an event to send notifications to other applications with this information. This works the other way around, too – when your applications publish notifications, your REST APIs can be queried to retrieve all of the relevant data. For batches of complete data sets, you can either use paginated REST APIs or use a REST API to trigger an asynchronous event stream publication of the entire data set. A good application includes the capability of calling APIs as needed plus the capability of receiving information that’s published as it occurs. 
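As a rough sketch of this pattern, assuming an Express REST API and Apache Kafka accessed through the kafkajs client (the orders topic and payload fields are purely illustrative), an update endpoint might persist the change and then publish an event:

```typescript
// Sketch of a REST update that also publishes an event, using Express and kafkajs.
// The "orders" topic and the payload fields are illustrative assumptions.
import express from "express";
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "order-api", brokers: ["localhost:9092"] });
const producer = kafka.producer();

const app = express();
app.use(express.json());

// The PUT endpoint updates the resource over HTTP (the pull side) ...
app.put("/orders/:id", async (req, res) => {
  const order = { id: req.params.id, ...req.body };
  // ...persist the order here...

  // ...and publishes an "order updated" event so subscribers are pushed the change.
  await producer.send({
    topic: "orders",
    messages: [{ key: order.id, value: JSON.stringify(order) }],
  });

  res.status(200).json(order);
});

producer.connect().then(() => app.listen(3000));
```

Consumers that need a complete data set can still page through the REST API, while subscribers to the topic receive each change as it happens.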

One last thing – connectivity is certainly important, but the quality of your content is the key to success with any application. The information you’re sharing with your team or your customers should be correct, interesting, and relevant.

Now that you know what to do for optimized application connectivity, the next step is implementation. There are plenty of free open-source solutions available to accomplish this. You can build REST APIs using microservices in Java or .Net, and you can build event streaming platforms using Apache Kafka, for example. If you have the technical skills to do this yourself, you’re now ready to get started. If not, the best solution is to hire an integration partner with the right expertise.

#1 FAIL IN DIGITAL TRANSFORMATION

by Gustav Rosén

To digitally transform is a challenge – and it’s now on the table in virtually every business worldwide. This post will help you avoid pitfalls in your digital transformation.

I’m going to break it to you right away:  The #1 reason for failure in digital transformation is failure to reorganize your information flows. Huh? Say what?! Okay, let me explain.

“Digital transformation” implies:

  1. transforming a business - i.e., creating new business flows…
  2. ...by digitalization

Let’s make this clearer by separating business transformation (on the y-axis) from digitalization (on the x-axis).


To further clarify:  When we refer to information, we mean the information that flows through a business. This can be a product or service (e.g., a book, movie, weather forecast, or bank loan). Information can also directly or indirectly power the experience of a service or product (e.g., a user guide, order form, contract, invoice, stock info, price tag, schedule, employee record, or financial statement).

As we go along, we will look at how to accomplish stepwise transformation by reorganizing information flows - aided by digitalization. Reorganization of information flows is described in terms of three characteristics that undergo stepwise change:

  • Information producers and consumers
  • Information character
  • Information timeliness

 



LEVEL 1 - ANALOGUE BUSINESS

Now, let’s assume you are running an analogue business powered by typewriters. Your business flows have three characteristics:

  • One to many: by one producer for many consumers
  • Specialization: information is specialized and sorted
  • Static info: hard to change and therefore not updated frequently

 

At this level we have a non-complex organization of producers (blue) and consumers (purple). 

An example is printing a book from one author to multiple readers. The book needs to be categorized as belonging to a special area (since print books are not electronically searchable). It is seldom revised because a print book revision is costly.


 

LEVEL 2 - INSIDE-OUT DIGITAL TRANSFORMATION

Let’s take a step on the digitalization axis by replacing the typewriters with word processors. 

Digitalization gives you the potential to reorganize information flows to transform your business. However, this potential is only realized if you make a conscious decision to reorganize the information flows. Digitalization without reorganization of information flow will at most give you some gain in efficiency without a business transformation.

The transformation enabled by word processors can be described as reorganizing information flow to:

  • Some to many: a set of co-producers to many consumers
  • Collaboration: searchable information that can be found in more than one context
  • Revised info: easier to update and republish information from producers to consumers

Reorganization of information flows has introduced some complexity by adding co-producers (blue) and more consumers (purple). In addition, the information is now searchable by consumers and can be revised by producers more frequently.

Going back to our book example, we can now have several authors producing the book together. Because the book is electronically searchable, we can find it in many different contexts (no need to sort it into a category). The book can easily be revised.


 

LEVEL 3 - OUTSIDE-IN DIGITAL TRANSFORMATION

At the next level we digitalize on the consumer side of our information flows by introducing consumer computers. As before, this gives you the potential to reorganize information flows further to transform your business. Again, if you fail to do this reorganization, you miss out on business transformation. 


The transformation enabled by consumer computers can be described as reorganizing information flow to:

  • Bidirectional: a set of co-producers to many consumers, with consumers giving feedback to the producers
  • Context driven: information is delivered to unique channels and consumer experiences
  • Iterative info: information is continuously updated and can be published in sections

Reorganization of information flows has now introduced complexity by allowing consumers (purple) to also act as producers (blue) as information is fed back to the business from consumers. 

Allowing consumers to act as producers of information enables a “learning organization” that uses external knowledge to impact its products and services. This feedback loop will, over time, outperform competitors who are still on level 1.


Continuing with our book example, we can now say that we have moved on to a type of business more similar to Netflix, where our publishing is based not only on our own editorial competence but also on the consumer feedback we are continuously gathering to impact our content. Because revisions are easy to do, we can start releasing content chapter by chapter or episode by episode with consumer feedback influencing the content of our next release.

Note: This feedback does not have to come directly from consumers, e.g., by voting or liking content, but can be more elaborate analytics such as “most watched episode,” “time viewers dropped off while watching episode,” “minutes watched,” “time viewers most paused while watching episode,” etc. These consumer insights will allow us to improve our content episode by episode.

 

LEVEL 4 - NETWORKED DIGITAL TRANSFORMATION

At this level we digitalize by introducing various producer and consumer devices that can access information in an “omni channel” experience, i.e., the information produced and consumed is channel agnostic and the specific technology being used matters less. The phrase “bring your own device” emerged as a result of a more open and interconnected world. As before, this gives you the potential to reorganize information flows further to transform your business. Again, if you fail to do this reorganization, you miss out on business transformation.


The transformation enabled by omni channel technology can be described as reorganizing information flow to:

  • Any to any: interconnected parties interchangeably acting as consumers and producers
  • Networked: information is tailored for a specific purpose, based on the rest of the network
  • Flowing info: information is instant and real-time



The organization of information flows is now highly sophisticated. All participants in the business model are networked and interchanging information in solutions powered by APIs for data access and/or streams of events for instant access to real-time updates of information. This is called “the application network” of a business. One or more interconnected businesses form a “network of networks”.

This leap in transformation has been rare in the past but is now increasing in a new category of businesses called “disruptors” with a level of business sophistication that outperforms the lower-level alternatives. We have now moved from a Netflix type of business to two-sided marketplaces (e.g., YouTube, Uber, and Airbnb), interconnected, intelligent Internet of Things businesses (e.g., self-driving cars with decision making based on large-scale dataflow and AI with machine learning of driver community behaviors), and any other information-driven businesses that have moved fully into the information economy by understanding that “data is the new oil.”

 

SUMMARY

Avoid the pitfall of digitalization without transformation by conceptualizing your digital transformation as deliberate reorganization of your information flows into an application network as described in steps 1-4.

Use a tool to organize and visualize your information flows, which equips your developer community to take your business to higher levels. Check out Starlify and get your information flows organized for you.


If you are already on the path of digitalization, make sure to accelerate it by enabling your developer community! Read more about how we enabled Volvo Cars' developer community here. 

Organize and visualize your application network


Collaborate throughout your organization and enable your developer community. Starlify is a tool for scoping, organizing, and visualizing your integrations, applications, and services. This brings warp speed to your team’s integration delivery, saving both time and money.



ACCESS TO DATA SETS THE LIMIT FOR SUCCESS IN DIGITALIZATION

 

 


Entiros CEO Gustav speaking at CIO Best Practice (in Swedish).


No matter how much data, or "big data," you have, it's useless if it's not accessible within your organization. Connectivity is the fourth industrial revolution. We have plenty of good technologies, systems, and data, and whoever optimizes using them together will be the winner.

With over two decades of experience in system integration for enterprise companies, Gustav provides insight into how to connect people and systems to share data effectively within and between organizations.

You'll get methods and tools for how to:

• organize and visualize critical data flows and applications
• increase the reuse of available data both internally and externally
• minimize the time integration projects take by working in a distributed way to a specific standard

Shared knowledge is doubled knowledge!

In today's world, knowledge is increasingly important both for the individual and for strengthening a company's competitiveness. There is a strong drive to continuously develop and acquire new knowledge. To maximize this knowledge, we need to find good ways to share it both internally and externally. Shared knowledge is doubled knowledge! 

WHAT IS GRAPHQL?

 

by Jonas Törnblad Sandell

GraphQL is basically a query language that retrieves and modifies data on a server and subscribes to changes. Instead of making several different API calls like REST does, a GraphQL query asks for all the data needed.

With typical use, a web browser first retrieves and fills a web page with user-adapted information, then sends back changes based on user input, and then continuously receives reports from the server when relevant information is changed.

The main unique feature of GraphQL is that the query language can request hierarchical data, which is delivered in the same structure as it is requested.
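As a hypothetical illustration of that hierarchical shape, the TypeScript snippet below posts a query over HTTP; the endpoint URL and field names are invented, but the JSON response mirrors the structure of the query:

```typescript
// Hypothetical hierarchical query: the response JSON mirrors the shape of the query.
// The endpoint URL and all field names are illustrative only.
const query = `
  query {
    customer(id: "42") {
      name
      orders(last: 3) {
        id
        total
        items { sku quantity }
      }
    }
  }
`;

async function main() {
  const response = await fetch("https://example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });

  // One round trip returns exactly the requested fields, nested as requested.
  const { data } = await response.json();
  console.log(data.customer.orders);
}

main();
```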

Origin

GraphQL was first developed by Facebook in 2012 and was published in 2015 as open source. Its development is ongoing. For example, schema definitions were added in January 2018.

What GraphQL is not

GraphQL is not a single client or server implementation: There are, however, several implementations of GraphQL for multiple languages and platforms.

GraphQL is not a programming language: It contains a query language, a schema definition language, and a type-defining/data-modeling language.

GraphQL vs. REST

As REST is a well-known architecture for web services, it is worthwhile to describe GraphQL compared with REST. GraphQL was developed by Facebook because REST was insufficient for their needs. The main difference is GraphQL's emphasis on flexibility and efficient use of resources. There is also a big difference in terms of where and when the most development effort is required. Both REST and GraphQL can be used in applications other than the web, but both were developed primarily for that purpose.

When and for whom is REST best?

  • When the API is created for a known client: The client's data requirements are known and a tailor-made API streamlines development and usage.
  • When the client or their developer needs a simple interface: A flexible query language is not needed and is unnecessarily complicated for some developers or types of clients.
  • When the client's need for data is controlled by one or only a few known parameters, such as date, sort order, and number: The domain and its data can be so simple in nature that a few simple REST calls are the optimal solution.
  • When the client's need for data does not change based on the data's structure, for example, the same type of object is retrieved and most of its properties are used: Certain services and data are not rooted in standards or enforced and therefore have a low degree of change.
  • When data types other than text need to be retrieved directly: GraphQL does not support media types other than text, such as images, video, audio, binary data, etc.

When and for whom is GraphQL best?

  • When the client's selection of objects varies or is advanced, for example, selection through relationships between objects: The domain is complex to structure and has potentially high data turnover, such that active server- and client-side development may be inhibited by fixed APIs.
  • When the client needs to query the server with follow-up questions based on an earlier download: The data have deep structure and/or relationships between them, or the selection criteria are numerous or have complex conditions.
  • When the client's needs for the parameters of the objects vary, for example, only a few parameters in large objects are needed: If data needs to be gathered from multiple types of objects, a REST API must be tailored to this to avoid unnecessary load on the server and network.


Combining the two

A larger integration project or web service can, of course, fit into several of the above descriptions, so a combination of GraphQL and REST can be the best overall API solution.

Migration between the two

Successful migration from REST to GraphQL requires some preliminary analysis of call and load statistics from the existing API. For example, which of the usage cases listed under "When and for whom is XXX best?" above apply? Typical goals are to reduce the number of calls and the complexity of processing for the client. An effective caching solution for the client may require identification of reusable objects and structures and an adaptation to this in both the data model and the queries.

Successful migration from GraphQL to REST could be done if a usage study reveals that the client's needs really only fall under "When and for whom is REST best?" above. However, since a GraphQL solution has the highest cost at the beginning of development, such migration seems difficult to justify.

PUBLISHING GRAPHQL APIS

Documenting GraphQL

With the introduction of the GraphQL Schema Definition Language (SDL), APIs can be visualized and viewed more easily. There are several tools for graphically visualizing them, including interactive ones:

  • GraphQL Visualizer
  • GraphQL Voyager

The query language also supports querying the schema itself.

Analysis and statistics

Collection and analysis of more than basic information such as the number of calls, call rate per client dataset, and public server load requires that the server that implements GraphQL has special support for detailed analysis and statistics, similar to database servers.

Implementations for clients

Apollo Client: A production-ready GraphQL client mainly for React, but also for JavaScript, iOS, and Android. MIT License.

Relay / Relay Modern: A JavaScript library for building React applications based on GraphQL. Developed and used by Facebook. Published under MIT License.

React is a library for building user interfaces and underlies both Apollo and Relay/Relay Modern.

Implementations for servers

GraphQL libraries for the server side are available for most popular programming languages, including Java, Ruby, Python, Elixir, Scala, and JavaScript (in Node.js).

MuleSoft's Anypoint Exchange supports GraphQL.

Resources online

A good and comprehensive review of GraphQL's many aspects can be found at How To GraphQL https://www.howtographql.com/

ADVANTAGES OF API-LED CONNECTIVITY

 

by Nils Kanevad

 

How will your business keep up with the ongoing digital transformation?

Digital transformation is changing markets at an accelerated pace. It's time to embrace digital transformation to avoid the risk of losing market share to competitors who are able to adapt more quickly.

Digital transformation leads to companies adjusting their relationships with their customers, suppliers, and employees. The ability to utilize new technology enables companies to reach, engage, and communicate with their customers in ways that were not previously possible.

New technologies including SaaS, mobile, and Internet of Things (IoT) require new levels of fast and flexible connectivity that cannot be achieved with yesterday's integration methods.

 

WHAT IS API-LED CONNECTIVITY?

Since the beginning of IT, companies have accumulated many different systems, e.g., ERP, SAP Customer System, CRM, and Logistics Systems. Large companies that use these different systems often have complex, intertwining connectivity and point-to-point integration solutions, which often results in a tangle of integrations. The solution is an integration platform based on API-led connectivity. Its architecture is built on three layers:

 Experience APIs guide information and data safely to all the channels and devices.

 Process APIs apply logic, alter, and refine information and data, which enables them to flow between the System and Experience layers.

 System APIs make it possible to access the source data from, e.g., ERP systems, physical systems, or external services.

API-led connectivity

An integration solution is designed with the data usage in focus. APIs provide an open and governed method for connectivity in systems, with the role of securing and managing access to all connections in the architecture.
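One way to picture the three layers, reduced to a few hypothetical TypeScript functions (the system names and data shapes are invented for illustration), is:

```typescript
// Compressed sketch of the three API layers as plain functions.
// The ERP source, the customer/order shapes, and the totals are hypothetical.

// System API: exposes raw source data from a backend system such as an ERP.
async function getOrdersFromErp(customerId: string): Promise<{ id: string; total: number }[]> {
  // ...call the ERP system's interface here...
  return [{ id: "1001", total: 250 }];
}

// Process API: applies logic across one or more System APIs.
async function getCustomerSpend(customerId: string): Promise<number> {
  const orders = await getOrdersFromErp(customerId);
  return orders.reduce((sum, order) => sum + order.total, 0);
}

// Experience API: tailors the result for a specific channel, e.g. a mobile app.
async function getMobileDashboard(customerId: string) {
  return { customerId, totalSpend: await getCustomerSpend(customerId) };
}
```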


WHAT ARE THE BENEFITS OF API-LED CONNECTIVITY?

 To clarify the advantages of API-led connectivity, here is a comparison between the most common integration methods.


Point-to-point integration (P2P)

Point-to-point based solutions work just as the name sounds, by connecting one business operation to another, point to point. For a company that needs to integrate multiple operations, this will quickly become a chaotic nightmare scenario. Integrations need to be made repeatedly to connect more components, which takes time and costs a lot of money. Eventually, the integration landscape becomes very complex, with many dependencies.

  • Hard to change
  • Expensive long-term
  • High operations risk
     

End-to-end integration (E2E)

The end-to-end approach is based on centralizing as much as possible. Unlike the P2P solution, which links all the business operations with each other, the E2E solution is based on an integration platform where information is collected. The integration platform processes the information and passes it on to the right receiver. E2E solutions centralize and reuse components such as logging, monitoring, transaction handling, and error handling. These don't need to be redeveloped for each new integration, reducing integration costs by approximately 30%.

  •  Managed change
  • 30% cost savings through reuse of centralized integration components
  • Managed operations risk

 

 

API-led connectivity

An API-led connectivity approach has two main purposes: to enable integration flows in the platform to be reused by many parties, and to reuse integration flows within the integration platform itself. Logic is distilled into its constituent parts and reused in different applications. Reuse makes it possible for developers to build on already completed integrations and APIs. The result is increased developer productivity through reuse.

APIs are created in multiple layers, external and internal, and the main advantage compared with end-to-end solutions is that more components can be reused, which makes it easier to implement new systems and services.

By exposing data assets and services to a wider audience, IT becomes a platform, a center for enablement, for the company and enables self-service of new features for its various business areas. This prevents integration from becoming a bottleneck for the IT department and the company as a whole.

An API-led connectivity strategy can make development time three times faster, decreasing the time-to-market for new services significantly. Reduced development time reduces the integration costs by about 70%, which together with fast deployment are very important for a company's competitiveness and profitability.

  • APIs enable innovation possibilities for the whole company
  • 3x faster and 70% cost savings through extensive reuse
  • No central operations risk


A FLEXIBLE AND SCALABLE SOLUTION FOR KEOLIS

How does it work in practice?

A uniform API-based solution enables Keolis to rapidly scale IT support up and down for function and performance.

Why was API-led connectivity the way to go? Read more in our Case Study with Keolis


 

Reduce the time you spend on your integration projects by 50%!

Download 5 essentials for a landscape of integrations

Using best practice methods and following the Certified Integrator quality standard can reduce the time you spend on your integration projects by 50%.

This lean way of carrying out integration projects also results in less risk in meeting project targets and ensures that the integrations your organization is building are reusable.



HOW TO ACHIEVE EXCELLENCE IN YOUR INTEGRATION PROJECT DELIVERIES

by Andreas Bogatic

An in-depth look at how to set non-functional requirements and achieve excellent deliveries from your integration suppliers.

Apart from the everyday challenges of running agile deliveries from teams of external suppliers, making sure specified business requirements are met when handling integration deliveries brings its own unique set of challenges.

As a technically specific branch of IT, integration deliveries are more demanding than other kinds of deliveries in many ways. In order to gain control over functional and non-functional requirements, there is a set of best practices to understand and master.

When initiating an integration development, you start with the original business requirements and the functional requirements derived from them. Depending on the nature of your business, an integration development project may vary in how it is carried out, or even be part of a sprint-based delivery, but the initial business requirements and connected functional requirements will be the same. Likewise, there is a set of non-functional requirements that needs to be addressed in all integration development projects, regardless of the nature of the overall business and functional requirements at hand.

Architecture, versioning, and operability

Starting from a strategic approach when covering the various non-functional requirements that need to be addressed, three distinct areas can be identified:

  • How to manage versioning
  • Shaping the overall architecture of the integration landscape to facilitate reuse
  • The desired level of operability

Since these three areas have broad implications for integration development in terms of affecting the design of individual integrations, achieving a unified approach will facilitate operations and maintenance of the entire integration landscape - so spending time to address them in depth is a good investment.

Architecture

There are innumerable ways to build an architecture for an integration landscape. Factors such as the basic systems involved, the nature of your business, and legacy inherent from previous decisions on architecture in the overall information landscape set unique limits and conditions for each integration landscape. Individual variation in these factors makes it impossible to apply specific guidelines that are applicable for all integration landscapes, so focus needs to be on general guidelines that can be adapted to any situation.

Identifying possible reuse

Designing or reviewing an integration landscape requires a strategy to help maximize opportunities for reuse of integrations. Having a registry of all integrations and some type of chart of the landscape as part of your integration landscape documentation is vital for facilitating possible reuse by helping to identify candidates for reuse. Consider applying a layered architecture for APIs and integrations in your integration landscape. The most common type is a three-layered model with a System layer, a Process layer, and an Experience layer.

Structured lifecycle

Make sure that development will follow a structured versioning. This is fundamental for lifecycle management of an integration landscape. By using X.Y.Z versioning of integration changes as major, minor, or patch, you establish a lifecycle management strategy and corresponding lifecycle plans for each integration. Minor and patch versions refer to non-contract breaking changes or updates, effectively leaving the integration fully operational for all concerned stakeholders without any separate changes to any of the relevant systems in the integration. A major change would constitute a breaking change for the integration, requiring adaptations or changes to one or more of the relevant systems to uphold the business requirements set for the integration. The lifecycle plans for the integrations mentioned above will facilitate control over the entire integration landscape.
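As a small illustration only, with TypeScript interfaces standing in for the integration's message contract (the field names are invented), a minor version adds something optional while a major version breaks existing consumers:

```typescript
// Illustration of X.Y.Z versioning applied to a message contract. Field names are invented.

// Version 1.0.0 of the contract.
interface OrderMessageV1 {
  orderId: string;
  amount: number;
}

// 1.1.0 - minor: an optional field is added; existing consumers keep working unchanged.
interface OrderMessageV1_1 {
  orderId: string;
  amount: number;
  currency?: string;
}

// 2.0.0 - major: a field is renamed and retyped; consumers must adapt before upgrading.
interface OrderMessageV2 {
  orderId: string;
  amountMinorUnits: bigint; // breaking change: "amount" replaced
  currency: string;
}
```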

Operability - from good to excellent

Planning the desired level of operability for your integration landscape will provide important input into the design of the individual integrations to be developed. Establishing the required logging, manual handling, and monitoring will determine the level of operability that Operations and Support can achieve after deployment when maintaining the integration landscape's functionality. What might appear to be a purely operational issue can in fact be a deeply embedded strategic issue, since it’s difficult and expensive to add features for operability into the design of an integration after its initial development and deployment. Decisions about operability will also be key factors in the maintenance costs of the integration landscape.

See blog post 'Operability of End-to-end Integrations' for a further look into operability.

Summarizing strategic approaches

Make sure to address the strategic non-functional requirements:

  • Establishing an architecture for the integration landscape
  • Structuring a lifecycle management based on versioning
  • Achieving excellent operability through proper error handling

 

The above measures have the same general and significant impact on all integrations being designed and therefore need to be considered in advance.

Tactical planning

With strategic approaches in place, it’s time to focus on tactical planning. Three areas of interest need to be addressed - standard patterns for the design, achieving scalability, and covering essentials of security.

Transforming business requirements into a conceptual integration pattern

Going into the design phase, the use of a standard pattern for integrations by the integration teams is critical. The pattern should describe the integration according to a conceptual model where systems provide services and are consumed as contracts, with one or more contracts constituting an integration. Consider how to define an integration based on the business process. Start by identifying a logical function in the business process with a clearly defined input, process, and output. Defining the integration based on the business process will make it easier to connect ownership of the integration lifecycle management and operational costs to the stakeholders for the business process.
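A rough sketch of that conceptual model, expressed as hypothetical TypeScript types (all names are invented), might look like this:

```typescript
// Sketch of the conceptual model: a system provides a service, a contract binds a
// consumer to that service, and one or more contracts make up an integration.
interface ServiceDescription {
  providingSystem: string;   // e.g. "ERP"
  name: string;              // e.g. "order-export"
  inputSchema: string;       // reference to the expected input format
  outputSchema: string;      // reference to the produced output format
}

interface Contract {
  service: ServiceDescription;
  consumingSystem: string;   // e.g. "web-shop"
  sla: { maxLatencyMs: number; availability: string };
}

interface Integration {
  id: string;                // unique identifier (see the operability chapter)
  businessProcess: string;   // e.g. "order-to-cash"
  contracts: Contract[];
}
```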


 

Scaling

During the design phase, make sure that two areas are properly addressed and included in the integration design before the team starts development - capacity limits and scaling. Meet the business requirements by specifying the performance level the integration must reach. Make sure that capacity limits are clearly defined, with the business implications described for when the limits are exceeded. By requiring possible scenarios for scaling the solution to be included in the design, possibilities for known future business scenarios can be determined, thereby not limiting future strategic choices in terms of scaling.

Security

Ensuring that the proper level of security is applied to the integration is imperative. Using the triad of Confidentiality, Integrity, and Availability and balancing these three, make sure that business requirements for security are met in the integration design. Remember that choices of where in the OSI model the security solution is applied can affect future development of the solution to meet new business needs.


To summarize this blog post, identifying how the above series of non-functional requirements for integration development apply to your integration projects and your business is vital for achieving excellent deliveries. For those who wish to gain more knowledge about this and other best practices for integrations, please visit www.certifiedintegrator.com 

OPERABILITY OF END-TO-END INTEGRATIONS

by Andreas Bogatic

When designing and building an integration solution, a key factor is to address operability in a structured manner, not only to deliver a viable service but also to deliver an efficient and easily operated service. The term operability in this context refers to support and operations staff's ability to handle unexpected or unwanted execution of the integration. No matter how much effort goes into design and coding, there will always be scenarios with unexpected results that need to be handled.

BE PREPARED

Quality delivery of an integration service will encompass a solution that addresses all requested functions, in effect giving the system owners a service that fulfills all business needs. However, true excellence in the delivery of an integration service can only be achieved when there is also a viable solution for easy, fast, and intuitive handling of unexpected executions. These can be automated responses, but most often they are also manual procedures: a set of actions to make the integration service deliver the desired results and remedy any faulty ones.

In order to achieve excellence in integration services, some prerequisites need to be in place. I will only briefly mention them here since this blog post focuses on operability, but I will probably return to this topic in later posts if there is interest.

BE UNIQUE

Starting from the top, unique identifiers for all end-to-end integrations and APIs are essential to build proper monitoring with full traceability and logging of events and non-events. It might seem simple and self-evident, but the value of unique identifiers cannot be overstated. To those starting work on a solution where "only one integration will ever be needed," thus making a unique identifier seem unnecessary, I ask one thing - when have you ever received a set of requirements from the business side that did not change at all before the final deploy?
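As an illustrative sketch only (assuming a Node.js/TypeScript runtime; the IDs and step names are invented), structured log entries can carry a fixed integration ID plus a per-execution correlation ID so every event is traceable end to end:

```typescript
// Structured logging keyed on a unique integration ID plus a per-run correlation ID.
import { randomUUID } from "node:crypto";

const INTEGRATION_ID = "INT-0042"; // unique, never reused for another integration

function logEvent(correlationId: string, step: string, detail: object = {}) {
  // In practice these entries would go to an external log/monitoring system.
  console.log(JSON.stringify({
    integrationId: INTEGRATION_ID,
    correlationId,
    step,
    timestamp: new Date().toISOString(),
    ...detail,
  }));
}

// Each execution gets its own correlation ID.
const correlationId = randomUUID();
logEvent(correlationId, "received", { source: "web-shop" });
logEvent(correlationId, "transformed");
logEvent(correlationId, "delivered", { target: "ERP" });
```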

BE ON GUARD

With a unique identifier in place, you're able to build a way to monitor all events in the integration. The next step is to make sure that this monitoring is robust. It’s also imperative that the monitoring is accessible through external channels, enabling logging and traceability regardless of the health of the integration services or the server where the deployables are located.

BE TRACEABLE

Logging should be built to catch various activities as well as certain key inactivities. Here it’s important to really get close to the business side of the relevant systems, to fully understand the various reasons WHY certain functionalities of the integration service are requested by the business. By achieving this level of understanding of the business process that the integration service should uphold and support, it’s easier for the developer to identify potential error scenarios and - with a healthy dose of imagination - better foresee which key issues will arise frequently.

BE FRIENDS WITH OPS

After briefly covering these prerequisites for operability, let’s dig into operability itself. While the requirements for a new integration normally come from the business side, for example from the system owner or system administrator, the requirements for operability have a different source. The first place to go is Operations and the second is Support. You will probably have already spoken with Operations to go over the above prerequisites - if not, you’re in trouble - but return here to have a chat about the existing integration landscape, or if that’s a blank, the landscape you just started to build…

INCLUDE OPS

Operations will definitely have a list for you with must-haves and some nice-to-haves when it comes to basic functionalities they want to have in place to make things easier for them at work on those days when nothing seems to be working properly.

The same goes for Support: they are the first line of defense, and when those alarms go off and it’s only a matter of minutes until the manager calls, they really want to have access to some good tools.

The items on these lists are directly relevant to the business requirements for the functions of the integration itself, even if a few will be more generic in nature. If you find yourself pretty much repeating what you heard in the Important Meeting at the very beginning of the project where everyone was invited - take this as a lesson! Always require Operations and Support to be present at the first Important Meeting, since these important participants always seem to be forgotten when the meeting invites are sent out. First of all, so you don’t need to repeat yourself; secondly, so that it doesn't become your responsibility to make sure that Operations and Support really understand what Business needs, but that this responsibility stays with Business - where it belongs!

So after this little tip, let’s look at some common operability issues that you will likely encounter:

HAVE A RETRY POLICY

A retry is basically a trigger of the integration itself. It’s manual, giving Support a tool to execute the integration if nothing happened when something was supposed to happen or to run the integration ahead of the scheduled time for some particular reason. It’s also an excellent tool for Operations or Second Line (often the same department) to use when working with more advanced troubleshooting to get fresh log data of the problem by triggering the integration and thereby recreating the problem.

A retry is often built in as an automated response for failures to execute the integration due to inability to reach a specified source, not receiving confirmation on execution, etc.
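A minimal sketch of such a retry policy (TypeScript, with invented function names), where the same entry point serves both the automated backoff and the manual trigger, could look like this:

```typescript
// Retry policy sketch: automatic retries with backoff for transient failures, and the
// same function exposed so Support can trigger the integration manually.
async function runIntegration(): Promise<void> {
  // ...fetch from source, transform, deliver to target...
}

async function runWithRetry(maxAttempts = 3, delayMs = 5_000): Promise<void> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await runIntegration();
      return; // success
    } catch (err) {
      console.error(`Attempt ${attempt} of ${maxAttempts} failed`, err);
      if (attempt === maxAttempts) throw err; // give up and alert
      await new Promise((resolve) => setTimeout(resolve, delayMs * attempt));
    }
  }
}

// A manual retry is simply the same entry point, callable from an admin endpoint or CLI.
export const manualRetry = runWithRetry;
```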

GET RID OF PROBLEMS

Dequeueing clears a queue of actions that have been stacked and are awaiting execution. Many integrations are designed to stack executions in a queue, for example as a crude way to scale the integration to handle temporary high loads or intermittent connectivity. Being able to manually clear this queue is a simple but powerful tool to use when something goes wrong.

One reason to dequeue is to remove stacked triggers that would otherwise be executed when you don't want them to be. Another is to erase data that are corrupted for some reason and would cause damage if executed. When the target system for the data is unavailable for a prolonged period, this often has a negative impact on the business, so you need to export the data to a file and manually load it into the target system. Being able to clear that queue will be absolutely necessary to avoid duplicate data.
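As a hedged sketch (the in-memory queue and file export are stand-ins for whatever queueing technology is actually in place), a manual dequeue that exports the stacked messages to a file before clearing them might look like this:

```typescript
// Manual dequeue sketch: drain stacked messages to a file so they can be loaded into
// the target system by hand, then leave the queue empty to prevent duplicate deliveries.
import { writeFileSync } from "node:fs";

interface QueuedMessage {
  id: string;
  payload: unknown;
}

function dequeueToFile(queue: QueuedMessage[], path: string): number {
  // Export everything that is waiting, so nothing is silently lost ...
  writeFileSync(path, JSON.stringify(queue, null, 2));
  // ... then clear the queue before the target system comes back online.
  const drained = queue.length;
  queue.length = 0;
  return drained;
}

// Example: const count = dequeueToFile(pendingOrders, "stacked-orders.json");
```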

It's impossible to create a complete list of operability tools, but I’ll mention a few more to point you in the right direction.

  • Call up the last sent message content

  • Call up the date/time of the last successful execution

  • List performed executions for a given time period

  • List all sent messages for a given time period

  • Call up the number of stacked messages in a queue

  • Send a test shot for a whole or partial integration, without actual data content


To summarize this blog post, achieving proper operability for an integration is imperative if you want to go from delivering a good integration to delivering an excellent integration. By making sure that Support and Operations are included in the delivery project from day one, you make friends with the pillars of the IT department in charge of operating your integration and increase your chance of delivering a truly excellent service to the business. For those who want to gain more knowledge about this and other best practices for integrations, please visit www.certifiedintegrator.com, register for an upcoming webinar, or register for an upcoming one-day course to be certified,  https://www.certifiedintegrator.com/events.