The Importance of Scalability In Software Design

Scalability is an essential component of enterprise software. Prioritizing it from the start leads to lower maintenance costs, better user experience, and higher agility.

Software design is a balancing act where developers work to create the best product within a client’s time and budget constraints.

There’s no avoiding the necessity of compromise. Tradeoffs must be made in order to meet a project’s requirements, whether those are technical or financial.

Too often, though, companies prioritize cost over scalability or even dismiss its importance entirely. This is unfortunately common in big data initiatives, where scalability issues can sink a promising project.

Scalability isn’t a “bonus feature.” It’s the quality that determines the lifetime value of software, and building with scalability in mind saves both time and money in the long run.

What is Scalability?

A system is considered scalable when it doesn’t need to be redesigned to maintain effective performance during or after a steep increase in workload.

“Workload” could refer to simultaneous users, storage capacity, the maximum number of transactions handled, or anything else that pushes the system past its original capacity.

Scalability isn’t a basic requirement of a program; unscalable software can run well within its original, limited capacity.

However, it does reflect the ability of the software to grow or change with the user’s demands.

Any software that may expand past its base functions (especially if the business model depends on that growth) should be configured for scalability.

The Benefits of Scalable Software

Scalability has both long- and short-term benefits.

At the outset, it lets a company purchase only what it immediately needs, not every feature that might be useful down the road.

For example, a company launching a data intelligence pilot program could choose a massive enterprise analytics bundle, or they could start with a solution that just handles the functions they need at first.

A popular choice is a dashboard that pulls in results from their primary data sources and existing enterprise software.

When they grow large enough to use more analytics programs, those data streams can be added into the dashboard instead of forcing the company to juggle multiple visualization programs or build an entirely new system.

Building this way prepares for future growth while creating a leaner product that suits current needs without extra complexity.

It requires a lower up-front financial outlay, too, which is a major consideration for executives worried about the size of big data investments.

Scalability also leaves room for changing priorities. That off-the-shelf analytics bundle could lose relevance as a company shifts to meet the demands of an evolving marketplace.

Choosing scalable solutions protects the initial technology investment. Businesses can continue using the same software for longer because it was designed to grow along with them.

When it comes time to change, building onto solid, scalable software is considerably less expensive than trying to adapt less agile programs.

There’s also a shorter “ramp up” time to bring new features online than to implement entirely new software.

As a side benefit, staff won’t need much training or persuasion to adopt that upgraded system. They’re already familiar with the interface, so working with the additional features is viewed as a bonus rather than a chore.

The Fallout from Scaling Failures

So, what happens when software isn’t scalable?

In the beginning, the weakness is hard to spot. The workload is light in the early stages of an app. With relatively few simultaneous users there isn’t much demand on the architecture.

When the workload increases, problems arise. The more data stored or simultaneous users the software collects, the more strain is put on the software’s architecture.

Limitations that didn’t seem important in the beginning become a barrier to productivity. Patches may alleviate some of the early issues, but patches add complexity.

Complexity makes diagnosing problems on an ongoing basis more tedious (translation: pricier and less effective).

As the workload rises past the software’s ability to scale, performance drops.

Users experience slow loading times because the server takes too long to respond to requests. Other potential issues include decreased availability or even lost data.

All of this discourages future use. Employees will find workarounds for unreliable software in order to get their own jobs done.

That puts the company at risk for a data breach or worse.

[Read our article on the dangers of “shadow IT” for more on this subject.]

When the software is customer-facing, unreliability increases the potential for churn.

Google found that 61% of users won’t give an app a second chance if they had a bad first experience. 40% go straight to a competitor’s product instead.

Scalability issues aren’t just a rookie mistake made by small companies, either. Even Disney ran into trouble with the original launch of their Applause app, which was meant to give viewers an extra way to interact with favorite Disney shows. The app couldn’t handle the flood of simultaneous streaming video users.

Frustrated fans left negative reviews until the app had a single star in the Google Play store. Disney officials had to take the app down to repair the damage, and the negative publicity was so intense it never went back online.

Setting Priorities

Some businesses fail to prioritize scalability because they don’t see the immediate utility of it.

Scalability gets pushed aside in favor of speed, shorter development cycles, or lower cost.

There are actually some cases when scalability isn’t a leading priority.

Software that’s meant to be a prototype or low-volume proof of concept won’t become large enough to cause problems.

Likewise, internal software for small companies with a low fixed limit of potential users can set other priorities.

Finally, when ACID compliance is absolutely mandatory, scalability takes a backseat to reliability.

As a general rule, though, scalability is easier and less resource-intensive when considered from the beginning.

For one thing, database choice has a huge impact on scalability. Migrating to a new database is expensive and time-consuming. It isn’t something that can be easily done later on.

Principles of Scalability

Several factors affect the overall scalability of software:

Usage

Usage measures the number of simultaneous users or connections possible. There shouldn’t be any artificial limits on usage.

Increasing it should be as simple as making more resources available to the software.

Maximum stored data

This is especially relevant for sites featuring a lot of unstructured data: user uploaded content, site reports, and some types of marketing data.

Data science projects fall under this category as well. The amount of data stored by these kinds of content could rise dramatically and unexpectedly.

Whether the maximum stored data can scale quickly depends heavily on database style (SQL vs NoSQL servers), but it’s also critical to pay attention to proper indexing.

Code

Inexperienced developers tend to overlook code considerations when planning for scalability.

Code should be written so that it can be added to or modified without refactoring the old code. Good developers aim to avoid duplication of effort, reducing the overall size and complexity of the codebase.

Applications do grow in size as they evolve, but keeping code clean will minimize the effect and prevent the formation of “spaghetti code”.

Scaling Out Vs Scaling Up

Scaling up (or “vertical scaling”) involves growing by using more advanced or more powerful hardware: more disk space or a faster central processing unit (CPU) to handle the increased workload.

Scaling up can offer better performance than scaling out. Everything is contained in one place, allowing for faster communication between components and fewer points of vulnerability.

The problem with scaling up is that there’s only so much room to grow. Hardware gets more expensive as it becomes more advanced. At a certain point, businesses run up against the law of diminishing returns on buying advanced systems.

It also takes time to implement the new hardware.

Because of these limitations, vertical scaling isn’t the best solution for software that needs to grow quickly and with little notice.

Scaling out (or “horizontal scaling”) is much more widely used for enterprise purposes.

When scaling out, software grows by using more (not more advanced) hardware and spreading the increased workload across the new infrastructure.

Costs are lower because the extra servers or CPUs can be the same type currently used (or any compatible kind).

Scaling happens faster, too, since nothing has to be imported or rebuilt.

There is a slight tradeoff in speed, however. Horizontally-scaled software is limited by the speed with which the servers can communicate.

The difference isn’t large enough to be noticed by most users, though, and there are tools to help developers minimize the effect. As a result, scaling out is considered a better solution when building scalable applications.

Guidelines for Building Highly Scalable Systems

It’s both cheaper and easier to consider scalability during the planning phase.  Here are some best practices for incorporating scalability from the start:

Use load balancing software

Load balancing software is critical for systems with distributed infrastructure (like horizontally scaled applications).

This software uses an algorithm to spread the workload across servers to ensure no single server gets overwhelmed. It’s an absolute necessity to avoid performance issues.
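The round-robin algorithm is one of the simplest ways a load balancer spreads work across servers. Here is a minimal sketch (the server names are made-up illustration values):

```javascript
// Minimal round-robin load balancer sketch (hypothetical server names).
class RoundRobinBalancer {
  constructor(servers) {
    this.servers = servers;
    this.index = 0;
  }

  // Return the next server in rotation so no single one is overwhelmed.
  next() {
    const server = this.servers[this.index];
    this.index = (this.index + 1) % this.servers.length;
    return server;
  }
}

const balancer = new RoundRobinBalancer(['app-1', 'app-2', 'app-3']);
console.log(balancer.next()); // app-1
console.log(balancer.next()); // app-2
```

Real load balancers use more sophisticated strategies (least connections, weighted distribution), but the principle of cycling requests across equivalent servers is the same.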

Location matters

Scalable software does as much near the client (in the app layer) as possible. Reducing the number of times apps must navigate the heavier traffic near core resources leads to faster speeds and less stress on the servers.

Edge computing is something else to consider. As more applications become resource-intensive, keeping as much work as possible on the device lowers the impact of low-signal areas and network delays.

Cache where possible

Be conscious of security concerns, but caching is a good way to keep from having to perform the same task over and over.
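As an illustration, here is a minimal in-memory cache that does the expensive work once and reuses the stored result afterward (the report-fetching function is hypothetical):

```javascript
// Simple in-memory cache sketch: compute once, reuse the stored result.
const cache = new Map();

function cachedFetchReport(id, computeFn) {
  if (cache.has(id)) return cache.get(id); // cache hit: skip the work
  const result = computeFn(id);            // cache miss: do the work once
  cache.set(id, result);
  return result;
}

let calls = 0;
const compute = (id) => { calls++; return `report-${id}`; };
cachedFetchReport(42, compute);
cachedFetchReport(42, compute);
console.log(calls); // the expensive computation ran only once
```

A production cache would also need an expiry policy and, per the security note above, care about what data is safe to keep around.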

Lead with API

Users connect through a variety of clients, so leading with APIs that don’t assume a specific client type lets one backend serve all of them.

Asynchronous processing

Asynchronous processing separates work into discrete steps that don’t need to wait for the previous one to complete before running.

For example, a user can be shown a “sent!” notification while the email is still technically processing.

Asynchronous processing removes some of the bottlenecks that affect performance for large-scale software.
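The email example above can be sketched with a promise: the caller gets its “sent!” status immediately while delivery finishes in the background (the delay here just simulates a slow mail server):

```javascript
// The "sent!" pattern: respond immediately, finish the work asynchronously.
function deliverEmail(message) {
  // setTimeout stands in for a slow mail-server round trip.
  return new Promise((resolve) =>
    setTimeout(() => resolve(`delivered: ${message}`), 50)
  );
}

function sendEmail(message) {
  const pending = deliverEmail(message); // kicks off delivery
  return { status: 'sent!', pending };   // user sees "sent!" right away
}

const { status, pending } = sendEmail('hello');
console.log(status);                 // "sent!" appears before delivery completes
pending.then((r) => console.log(r)); // delivery resolves later
```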

Limit concurrent access to limited resources

Don’t duplicate efforts. If more than one request asks for the same calculation from the same resource, let the first finish and just use that result. This adds speed while reducing strain on the system.
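One common way to implement this is request coalescing: identical concurrent requests share a single in-flight computation instead of each running it. A rough sketch:

```javascript
// Request coalescing sketch: concurrent identical requests share one result.
const inFlight = new Map();

function coalesced(key, computeFn) {
  if (inFlight.has(key)) return inFlight.get(key); // reuse the pending work
  const promise = Promise.resolve()
    .then(() => computeFn(key))
    .finally(() => inFlight.delete(key)); // clear the slot once finished
  inFlight.set(key, promise);
  return promise;
}

let runs = 0;
const slowDouble = (k) => { runs++; return k * 2; };
Promise.all([coalesced(7, slowDouble), coalesced(7, slowDouble)])
  .then(([a, b]) => console.log(a, b, runs)); // both callers get 14; the work ran once
```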

Use a scalable database

NoSQL databases tend to be more scalable than SQL. SQL does scale read operations well enough, but when it comes to write operations it conflicts with restrictions meant to enforce ACID principles.

Scaling NoSQL requires less stringent adherence to those principles, so if ACID compliance isn’t a concern a NoSQL database may be the right choice.

Consider PaaS solutions

Platform-as-a-service relieves a lot of scalability issues since the PaaS provider manages scaling. Scaling can be as easy as upgrading the subscription level.

Look into FaaS

Function-as-a-service evolved from PaaS and is very closely related. Serverless computing provides a way to only use the functions that are needed at any given moment, reducing unnecessary demands on the back-end infrastructure.

FaaS is still maturing, but it could be worth looking into as a way to cut operational costs while improving scalability.

Don’t forget about maintenance

Set software up for automated testing and maintenance so that when it grows, the work of maintaining it doesn’t get out of hand.

Build with an Eye to the Future

Prioritizing scalability prepares your business for success. Consider it early, and you’ll reap the benefits in agility when it’s most needed.

Are you looking for software that can grow with your company? Set up a free appointment with one of our developers to talk about where you need to go and how we can get you there!

Everything Executives Need to Know About NodeJS


NodeJS is a rising star in the enterprise software world. It’s being used by everyone from fledgling chains to entertainment giants. For those tasked with leading software projects, though, popularity is the least important aspect of a technology.

They’re more concerned with tangible benefits – what NodeJS is, why developers love it, and how it can boost their digital initiatives.

Read on for answers to the most common executive questions about NodeJS.

What is NodeJS?

NodeJS is an open source platform for developing server-side and networking applications. Written in JavaScript, it’s quick to build with and scales extremely well.

What do people actually use NodeJS for?

NodeJS may be best known as a tool for real-time applications with a large number of concurrent users, but it also sees use for:

  • Backends and servers
  • Frontends
  • Developing API
  • Microservices
  • Scripting and automation

Why do developers like NodeJS?

Being easy to work with makes a tool popular with developers, and NodeJS is both lightweight and efficient. JavaScript written for Node tends to be clear and easy to read.

Because the same language can be used throughout the project, developers find working with teammates assigned to other areas of the stack less disruptive.

The Node Package Manager (NPM) is another major draw. With half a million NPM packages available for use, developers can find something to suit all but the most specific needs.

There’s also the fact that technical tasks that are usually difficult – for example, communicating between workers or sharing cache state – are incredibly simple with NodeJS.

Finally, many developers just like using NodeJS. Creating performant apps can be fast, easy, and fun. There’s an active and engaged community full of peers to share ideas or coordinate with on a tough problem.

When a tool makes their job more enjoyable, developers are going to want to use it whenever possible.

How does NodeJS benefit enterprise?

When it comes to business value, NodeJS brings a lot to the table.

  • Faster development: NPM packages help reduce the amount of code that must be written from scratch. On top of that, using the same language end to end makes for a smoother, more productive development process, often faster than building in Ruby, Python, or Perl. Testing goes faster, as well.
  • Scalability: NodeJS uses non-blocking I/O. Processes aren’t held up by others that are taking too long. Instead, the system handles the next request in line while waiting for the return of the previous request. This lets apps handle thousands of concurrent connections.
  • High quality user experience: Applications built with NodeJS are fast and responsive, handling real-time updating and streaming data smoothly. They provide the kind of user experience that makes a positive impression on customers.
  • Less expensive development: Open source tools are a great way to lower development costs. The productivity offered by NodeJS pushes savings even farther; developers spend less time building the same quality app as they would with other tools. NodeJS can be hosted nearly anywhere, too.
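The non-blocking behavior behind the scalability point can be seen in a small sketch: a fast request completes while a slower, earlier one is still waiting, so neither holds the other up.

```javascript
// Non-blocking I/O sketch: the event loop serves new work while earlier requests wait.
const order = [];

function handleRequest(name, delayMs) {
  order.push(`${name} received`);
  // setTimeout stands in for a slow database or network call.
  setTimeout(() => order.push(`${name} completed`), delayMs);
}

handleRequest('slow-query', 100);
handleRequest('fast-query', 10);

// "fast-query" finishes first even though it arrived second:
setTimeout(() => console.log(order), 200);
```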

How are companies using NodeJS now?

  • Netflix: The largest and best-known streaming media provider in the world reduced their startup time by 70% by integrating NodeJS.
  • Walmart: As their online store gained popularity, Walmart experienced problems handling the flood of requests. NodeJS’ non-blocking IO improved their ability to manage concurrent requests for a better user experience.
  • PayPal: Originally there were separate teams for browser-specific code and app layer-specific code, which caused a lot of miscommunication. Switching their backend to NodeJS meant everyone was speaking the same language. More cohesive development allows the PayPal team to respond faster to issues.

Are there times when NodeJS should not be used?

Although it’s a powerful tool, there are times when NodeJS doesn’t fit. Using it for CPU-intensive operations basically cancels out all its benefits.

Its single-threaded execution model doesn’t suit heavy parallel computation, meaning it’s not the best choice for games and similar compute-heavy applications.

The best use cases for NodeJS are when there will be a high volume of concurrent requests and when real-time updating is key. Other benefits – low costs, smoother development – can also be found with other tools, but performance at scale is a serious advantage.

Is NodeJS the right tool for your next project? Talk through your options with one of Concepta’s development team to find out!

Request a Consultation

 

Building Custom Software Ready for Integration


Building custom software solves a lot of problems that businesses face while pushing digital transformation. Whether it’s an analytics dashboard that unites data from different platforms or a social media chatbot that boosts customer service ratings, custom software can be built to a company’s exact specifications without the compromises that come with third party solutions.

Custom software does present one problem: how to integrate it smoothly with existing systems. Often those systems contain third-party software which requires special design consideration.

It doesn’t have to be a roadblock, though. Keep these core concepts in mind during development and the result will be custom software ready for integration.

Think Modular

Adding to an existing monolith is complicated and makes the deployment process harder than it needs to be. For a more forward-thinking approach, lean towards a microservice architecture. Microservices isolate a specific function into its own module which can operate independently of other functions.

There’s a lot to be gained through microservices. They’re highly scalable and easy to integrate into a stack using APIs. When one microservice needs maintenance, it can be worked on or replaced without taking the entire system offline. It’s also possible to modernize outdated legacy systems by adding new functions via microservices.

Use API to Connect Necessary Resources

Application programming interfaces, or API, are software “middlemen” that allow unrelated software to communicate with each other.

The most visible API are the public ones that extend functionality to third parties for mutual benefit, like social media API. However, using private in-house API is an excellent way to integrate new software.

API reduce the risk of affecting existing software when adding new features. They position the system as a whole for agility and consistency. If several components communicate through the same API, they must also share data formats, rules for mandatory and optional parameters, and dependencies between fields.

This simplifies data governance and makes it simple to scale or add new functionalities.

Adopt Continuous Integration and Delivery

Integrating custom software into the existing stack is often left as the last step of a build.

Left that late, bugs or gaps in functionality that previously went unseen can derail a launch at the last minute. Adopting continuous integration and delivery helps surface such problems before they affect delivery timelines.

Continuous integration means automating the build and testing of code every time a change is committed to version control. It merges all changes into a shared version control repository, encouraging developers to share their code and unit tests.

This results in fewer merge conflicts and earlier identification of bugs.

Continuous delivery, which is often used in tandem with continuous integration, involves automating the release process. Changes can be pushed straight to customers at the press of a button.

Some developers take this a step further with continuous deployment, where changes get sent out as soon as they’re committed. A failed test will prevent deployment, but otherwise changes are pushed straight to users.

Why does this make integration easier? Frequent, productive communication is essential for building quality software on time and within budget. Continuous integration and delivery provides the basis for that communication.

It allows for smoother collaboration and frequent client feedback, letting developers fine-tune their approach for a seamless integration.

On top of that, frequent testing and validation leads to faster discovery of errors. This is especially important when integrating with existing systems, because bugs can affect those systems if they aren’t caught before integration.

Bugs are more easily repaired earlier in the process than at later stages, which lowers the overall cost of development.

Custom Software, Custom Integration

While there are challenges when integrating custom software with an existing stack, those challenges apply to any software integration. Taken as a whole the opportunities outweigh the risks. With custom solutions, companies can guide the integration process and minimize the disruption to their daily business.

Concepta’s software development team has 12 years of experience with building custom solutions. If you’re looking for guidance on how new custom software can fit into your stack, set up your free consultation today!

Request a Consultation

Why You Should be Leveraging API for Your Software


API aren’t just a catchy tech trend. They are valuable components of modern digital strategy that can boost the scalability, performance, and flexibility of software.

There’s a lot of debate these days over the best types of API to use, but many times the business case for this technology gets glossed over. That’s unfortunate, since there’s a strong argument to be made for why every company needs (or will eventually need) to leverage API while undergoing digital transformation.

API Definition

An application programming interface, or API, acts as an intermediary between software components. It allows for controlled access to internal data and operations by specifying what software accessing the target component does and doesn’t have permission to do.

What API Bring To The Table

API have more to offer than easy social media logins and mobile payment options. The technology’s applications are nearly endless, so when making a business case for API it’s more impactful to highlight the potential enterprise benefits first.

A solid API strategy can:

Boost Customer Experience and Retention

Customers want a rich, uninterrupted digital experience; 83% of them agree that a seamless experience across devices and platforms is important. Customers expect to be able to use their favorite software in the most convenient manner, meaning in tandem with complementary tools.

This is where API come in. By exposing select services to third-party use, API make the platform as a whole more functional and interactive. That translates into a richer customer experience. With their need for personalization and interactivity met, customers aren’t motivated to seek other services. After all, why should they go to the effort when they can handle all their product-specific needs in one place?

A great example of this effect in action is the Goodreads – Amazon partnership. Goodreads uses Amazon’s API to provide highly detailed product data. The platform’s users can make purchases or add items directly to Amazon wish lists from Goodreads. The end result is happier, more loyal customers with favorable impressions of both platforms.

There’s also a “fear of loss” effect in play that encourages customer retention. When an API is used in several different ways it becomes an integral part of the customer’s routine. Leaving the original platform disrupts their daily habits, which is a hassle most customers don’t want to handle.

That provides a cushion of tolerance that companies can lean on while fixing issues that might otherwise send churn skyrocketing.

Enrich Interactions With Partners

There are structural barriers preventing perfect cooperation between a company and its business partners. Partners must request data when they need it, causing delays when unexpected requirements come up or misinformation is accidentally passed.

An API takes out the middleman. Partners have controlled access to all the information and processes needed for smooth operations without being privy to more sensitive information.

The risk of expensive misunderstandings is reduced since everyone is working from the same data. It’s possible to allow partners limited access to update information and take part in joint processes, too, so data is always current.

Power Mobile Strategy

The future of digital enterprise lies heavily in mobile. 80% of adults own a smartphone, and they spend nearly four hours a day on mobile devices. That time is valuable from a business standpoint, too: mobile devices have higher conversion rates than desktops.

However, no company can develop their own extensions for every possible mobile device. There’s too much territory to cover. Even when companies choose hybrid apps to speed up smartphone coverage, the growing IoT trend means there are potential applications for smartwatches, fitness wearables, and more. It isn’t cost-effective to try and service them all.

API allow software to be adapted for use in a wider variety of devices. Market demand can determine where connectivity is wanted without additional investment by the parent company.

For example, a smart home platform might use a cleaning company’s API to allow customers to set up and oversee services while on vacation.

The company doesn’t need to develop the software themselves; the smart home company does that in order to provide their own customers with better service.

Modernize Legacy Systems

Outdated legacy systems present a challenge to digital transformation efforts. Often formed as rigid monoliths, they’re complex, hard to scale, and don’t connect easily with new tools and processes.

Internal API can be used to expose portions of a monolith architecture. They let existing functions interact with more modern tools or pull them out into more independent microservices.

Using API in this way has two main benefits. It increases the system’s performance and scalability by reducing the strain on its overall structure. Plus, internal systems that weren’t previously connected can talk to each other using the API.

This streamlines internal operations and breaks down data silos between departments.

Making the Call

The applications of API are so diverse and produce such marked results that it’s hard to find reasons not to develop them. In fact, as a company grows so does the social pressure to provide interconnectivity and data portability through public API. Those who don’t risk being passed by in favor of more tech-ready competitors.

How can API improve your business? Set up a free appointment with one of Concepta’s experienced developers to learn what this technology can do for you.

Request a Consultation

API Design and Development Best Practices

Application Programming Interfaces, or API, offer a set level of access to a company’s digital resources. Other software can leverage these API to create an expanded range of services for mutual customers which benefits both companies.

User experience is key for modern companies. A big part of providing better experience is offering ways for customer-facing software to interact with other popular services: social media, analytics programs, mobile wallets, and more.

API are most valuable when they’re widely adopted. It’s important to build something functional and elegant that makes it simple for other developers to use it. With that in mind, here are some best practices for creating secure, effective, popular API.

Always Use Versioning

Never change a published API. Changing the structure could disrupt software created by API consumers, which can lead to anything from reputation damage to pricey liability claims.

Instead, label every version sequentially to provide a clear signal of which is most recent. Incorporate versioning into URLs (for example, https://concepta.com/v4/projects).

Always provide plenty of notice when stopping support for old versions to allow developers time to update their software.

There’s no downside to versioning. It’s free and easy to do, so there’s no real reason to skip this step even if the API isn’t intended for public release.
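A URL scheme like /v4/projects can be parsed with a few lines of code. This is only an illustrative sketch of version extraction, not a full router:

```javascript
// Hypothetical versioned-route parser: pull the API version out of a request path.
function parseVersionedPath(path) {
  const match = path.match(/^\/v(\d+)\/(.+)$/);
  if (!match) return null; // unversioned paths are rejected
  return { version: Number(match[1]), resource: match[2] };
}

console.log(parseVersionedPath('/v4/projects')); // { version: 4, resource: 'projects' }
```

A real API would route each version number to the matching handler, keeping old versions intact while new ones are added alongside them.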

Secure Your Endpoints

Endpoints are the only access points to the outside world, making them major vulnerabilities. At the same time they must be accessible, since the point of API is for other organizations to access a company’s resources.

To resolve this dilemma, focus on securing endpoints against unauthorized intrusion using authentication. There are three primary options:

  • Basic Authentication: Authentication is provided using a base64 encoded string.
  • Token-based authentication: The user signs in with a username and password, then receives a token for authentication to further resources.
  • Hash-based Message Authentication Code (HMAC): Server and client each have a unique cryptographic key. The client uses this key to create a hash with the request data before sending it.

In addition, always apply timestamps to API requests and responses. It’s also a good idea to consider role-based access control to allow tiered access to different levels of users.

Validate Everything

Bad data can throw the entire system off, so don’t automatically accept user input. Run all incoming data through a validation protocol.

The parameters here should be simple yet detailed enough to establish whether the data “looks right”. Some questions to ask:

  • Is it the right number or type of numbers?
  • Is it an acceptable scheme?
  • Are all required request parameters included?

Validation is a basic level of defense that keeps obviously wrong data out of the system.
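Those questions translate naturally into a validation function. The field names and rules below are hypothetical, just to show the shape of the check:

```javascript
// Minimal request-parameter validation sketch (hypothetical field names).
function validateRequest(params) {
  const errors = [];

  // Is it the right type of number?
  if (typeof params.userId !== 'number') errors.push('userId must be a number');

  // Is it an acceptable scheme?
  if (!['http', 'https'].includes(params.scheme)) errors.push('scheme not acceptable');

  // Are all required request parameters included?
  for (const required of ['userId', 'scheme', 'query']) {
    if (!(required in params)) errors.push(`missing required parameter: ${required}`);
  }

  return errors; // an empty array means the data "looks right"
}

console.log(validateRequest({ userId: 7, scheme: 'https', query: 'reports' })); // []
```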

Use HTTP Correctly

HTTP is a serious asset when it’s used properly. Resist the urge to be “more efficient and precise” by ignoring error handling or using obscure codes.

Two areas to focus on are CRUD operations and response codes.

Common CRUD operations should be used according to common practice:

  • GET- Retrieve a representation of a resource.
  • POST- Create new resources or sub-resources
  • PUT/PATCH- Update existing resources
  • DELETE- Delete existing resources

Response codes provide feedback to help developers understand how to use the API.

  • 2xx Successful operation (OK, Created, Accepted, No Content)
  • 4xx Client-side failure (Bad Request, Unauthorized, Too Many Requests, Request Timeout)
  • 5xx Server-side failure (Bad Gateway, Gateway Timeout, HTTP Version Not Supported)

Favor specific codes when possible. It’s very hard to use an API that only returns general failure or success codes.

Linking to the appropriate section of documentation when returning error codes will make API more user-friendly, too.
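A simple lookup table is enough to keep status codes specific and consistent. The outcome names here are illustrative:

```javascript
// Sketch of mapping common API outcomes to specific HTTP status codes.
function statusFor(outcome) {
  const codes = {
    ok: 200,            // successful read
    created: 201,       // new resource created
    unauthorized: 401,  // missing or invalid credentials
    notFound: 404,      // resource doesn't exist
    rateLimited: 429,   // too many requests
    serverError: 500,   // something failed on our side
  };
  return codes[outcome] ?? 500; // fall back to a generic server error
}

console.log(statusFor('rateLimited')); // 429
```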

Favor Descriptive Nouns For Collections/Resources

Using nouns as opposed to verbs in the URL serves two purposes. First, it keeps the URL simple and easy to read and use.

Second, verbs are used to operate on the resources described by the URL; having them in the URL as well can be confusing. Notice that POST /postNewAddresses is much less elegant than POST /addresses.

Be descriptive with nouns so users don’t have to play guessing games. The biggest problem here is when developers favor style over efficiency.

Instead of using /photos they use something cute like /memories or /favorites that doesn’t tell users what the resource actually is.

A last note: it’s common practice to use plural nouns, so do so in the interests of consistency and expected behavior.
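Put together, a noun-based URL scheme might look like the hypothetical route table below: plural nouns name the collections, and the HTTP verb carries the action, so it never needs to appear in the path.

```python
# Illustrative route table — resource paths are plural nouns; the method,
# not the path, says what happens to the resource.
ROUTES = {
    ("GET", "/addresses"): "list all addresses",
    ("POST", "/addresses"): "create an address",       # not POST /postNewAddresses
    ("GET", "/addresses/{id}"): "retrieve one address",
    ("PUT", "/addresses/{id}"): "update one address",
    ("DELETE", "/addresses/{id}"): "delete one address",
}
```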

Provide Thorough Documentation

Documentation can make or break an API. Provide everything users can conceivably want to know to smooth their development process as much as possible.

Keep info in context since users will very rarely be reading documentation in order.

The best documentation addresses these elements:

  • Index or navigation aid
  • Quickstart guide
  • Authentication
  • HTTP requests
  • Functions of every call in the API
  • Each parameter and its possible values (type, formatting, rules, whether it’s required)
  • Error handling and error codes used (with easily understood explanations to aid in troubleshooting)
  • Tutorials that cover a specific task thoroughly without adding extraneous information
  • Version numbers

Be Consistent and Predictable

The goal is to create an API that’s easy and intuitive to use. Make paths easy to follow and use consistent behavior logic throughout.

Use codes how people will expect, favoring common methods when there’s no pressing reason to change something.

In short, don’t try to reinvent the wheel. Prioritize functionality and simplicity. If the API is too complex it creates a barrier to adoption.

APIs aren’t only for public use. Private APIs are a powerful tool for exposing outdated legacy systems to new technology. To learn how to update your stack using APIs, schedule a consultation with the experienced developers at Concepta!


Request a Consultation

How API Gateways Enable Agile Applications

The growing popularity of microservices has led to some interesting technical challenges for developers.

One of those is how to give all users access to the same microservices regardless of device.

The average digital consumer owns 3.64 connected devices and switches between them throughout the day.

For example, a user may access the same content via a laptop at work and a smartphone while travelling.

An application that splits its components into microservices needs to provide consistent performance across devices.

To do that, developers funnel incoming traffic through a single “point of entry”: an API gateway.

Microservice-Based Applications

Monolithic applications may be faster to build, but otherwise microservice architecture offers a lot of advantages.

Rather than designing one large application with everything in one data store, components of the application are divided into self-contained web services that communicate through HTTP or REST.

Developers tend to prefer building large applications with microservices.

They’re simpler to test and refine, and since individual components can be scaled independently, scaling the application as a whole is much easier.

Challenges of a Microservices Architecture

While it is an incredibly functional architecture style for building scalable apps, using microservices does add a level of complexity.

The client has to fetch data from several different services, which requires knowing where those services are in the first place.

The central problem is the one mentioned earlier, that users come to the application through a wide variety of devices.

Network performance varies widely due to device capabilities and connection type.

Each device needs a different amount and style of data.

A desktop would get more details on a product landing page than a smartphone.

Also, some services may use protocols that aren’t usually accessible by all clients.

Consider an enterprise app that technicians use on service calls.

The app needs to access customer records, but that system could have a non-mobile friendly interface.

Benefits of an API gateway

API gateways solve most of these problems.

An API gateway is a “wrapper” that provides a single point of access to all clients.

It shouldn’t be confused with API management; a gateway can incorporate a management layer, but one isn’t necessary.

The most obvious benefit of API gateways is that they insulate clients from internal partitioning.

Upon reaching the gateway, each client is met with the right API.

Clients don’t need to know the location of every microservice, and they can retrieve data from multiple microservices in one round trip.

If engineers need to rearrange or make changes to the microservices they can do so without affecting how clients interact with the app.

Microservices are popular because they allow developers to use their preferred technology or experiment with new technology.

API gateways are what make that possible.

Engineers use whatever protocols work best internally, then the gateway translates between the client-facing, web-friendly API and those internal protocols.

API gateways also let microservices stay simpler and more focused.

They handle common concerns like access control enforcement, so individual microservices are less complex.
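The pattern can be sketched in a few lines: one client-facing entry point that fans a request out to internal services and tailors the response per device. The service functions below are hypothetical stand-ins for what would be network calls to real microservices.

```python
# Stand-in for the product microservice (would be an HTTP call in a real system).
def product_service(product_id: str) -> dict:
    return {"id": product_id, "name": "Widget"}

# Stand-in for the reviews microservice.
def reviews_service(product_id: str) -> list:
    return [{"rating": 5}, {"rating": 4}]

def gateway(path: str, client: str) -> dict:
    """Single point of entry: aggregate internal services in one round trip
    and tailor the payload to the requesting device."""
    product_id = path.rsplit("/", 1)[-1]
    response = {"product": product_service(product_id)}
    if client == "desktop":  # desktops get more detail than smartphones
        response["reviews"] = reviews_service(product_id)
    return response
```

Because clients only ever talk to `gateway`, the internal services behind it can be rearranged without changing the client-facing API.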

Limitations of API Gateways

API gateways are necessary for microservice architecture, but they do come with some limitations.

They need expert configuration for scalable applications, and (while users of most devices won’t notice) there’s a slight increase in response time due to the extra network hop.

Greater overall structural complexity can lead to increased development time.

There are more moving pieces involved, meaning additional points of failure.

This is generally considered a good tradeoff for later scalability and performance.

Future-focused Apps

API gateways enable agile, future-focused applications.

Scalability and flexibility are core values of strong digital strategy; using microservices leads to apps that are flexible enough to meet the ever-changing needs of modern enterprise.

Are you having trouble getting your field techs and the office on the same digital page? Concepta has experience building mobile fleet management and field service solutions that will save your leaders time and resources. Contact us for a free consultation!


Request a Consultation