The Importance of Scalability In Software Design

Scalability is an essential component of enterprise software. Prioritizing it from the start leads to lower maintenance costs, better user experience, and higher agility.

Software design is a balancing act where developers work to create the best product within a client’s time and budget constraints.

There’s no avoiding the necessity of compromise. Tradeoffs must be made in order to meet a project’s requirements, whether those are technical or financial.

Too often, though, companies prioritize cost over scalability or even dismiss its importance entirely. This is unfortunately common in big data initiatives, where scalability issues can sink a promising project.

Scalability isn’t a “bonus feature.” It’s the quality that determines the lifetime value of software, and building with scalability in mind saves both time and money in the long run.

What is Scalability?

A system is considered scalable when it doesn’t need to be redesigned to maintain effective performance during or after a steep increase in workload.

“Workload” could refer to simultaneous users, storage capacity, the maximum number of transactions handled, or anything else that pushes the system past its original capacity.

Scalability isn’t a basic requirement of a program; unscalable software can still run well within its limited capacity.

However, it does reflect the ability of the software to grow or change with the user’s demands.

Any software that may expand past its base functions, especially if the business model depends on its growth, should be configured for scalability.

The Benefits of Scalable Software

Scalability has both long- and short-term benefits.

At the outset, it lets a company purchase only what it immediately needs, not every feature that might be useful down the road.

For example, a company launching a data intelligence pilot program could choose a massive enterprise analytics bundle, or they could start with a solution that just handles the functions they need at first.

A popular choice is a dashboard that pulls in results from their primary data sources and existing enterprise software.

When they grow large enough to use more analytics programs, those data streams can be added into the dashboard instead of forcing the company to juggle multiple visualization programs or build an entirely new system.

Building this way prepares for future growth while creating a leaner product that suits current needs without extra complexity.

It requires a lower up-front financial outlay, too, which is a major consideration for executives worried about the size of big data investments.

Scalability also leaves room for changing priorities. That off-the-shelf analytics bundle could lose relevance as a company shifts to meet the demands of an evolving marketplace.

Choosing scalable solutions protects the initial technology investment. Businesses can continue using the same software for longer because it was designed to grow along with them.

When it comes time to change, building onto solid, scalable software is considerably less expensive than trying to adapt less agile programs.

There’s also a shorter “ramp up” time to bring new features online than to implement entirely new software.

As a side benefit, staff won’t need much training or persuasion to adopt that upgraded system. They’re already familiar with the interface, so working with the additional features is viewed as a bonus rather than a chore.

The Fallout from Scaling Failures

So, what happens when software isn’t scalable?

In the beginning, the weakness is hard to spot. The workload is light in the early stages of an app. With relatively few simultaneous users there isn’t much demand on the architecture.

When the workload increases, problems arise. The more data the software stores and the more simultaneous users it serves, the more strain is put on its architecture.

Limitations that didn’t seem important in the beginning become a barrier to productivity. Patches may alleviate some of the early issues, but patches add complexity.

Complexity makes diagnosing problems on an ongoing basis more tedious (translation: pricier and less effective).

As the workload rises past the software’s ability to scale, performance drops.

Users experience slow loading times because the server takes too long to respond to requests. Other potential issues include decreased availability or even lost data.

All of this discourages future use. Employees will find workarounds for unreliable software in order to get their own jobs done.

That puts the company at risk for a data breach or worse.

[Read our article on the dangers of “shadow IT” for more on this subject.]

When the software is customer-facing, unreliability increases the potential for churn.

Google found that 61% of users won’t give an app a second chance if they had a bad first experience. 40% go straight to a competitor’s product instead.

Scalability issues aren’t just a rookie mistake made by small companies, either. Even Disney ran into trouble with the original launch of their Applause app, which was meant to give viewers an extra way to interact with favorite Disney shows. The app couldn’t handle the flood of simultaneous streaming video users.

Frustrated fans left negative reviews until the app had a single star in the Google Play store. Disney officials had to take the app down to repair the damage, and the negative publicity was so intense it never went back online.

Setting Priorities

Some businesses fail to prioritize scalability because they don’t see the immediate utility of it.

Scalability gets pushed aside in favor of speed, shorter development cycles, or lower cost.

There are actually some cases when scalability isn’t a leading priority.

Software that’s meant to be a prototype or low-volume proof of concept won’t become large enough to cause problems.

Likewise, internal software for small companies with a low fixed limit of potential users can set other priorities.

Finally, when ACID compliance is absolutely mandatory, scalability takes a backseat to reliability.

As a general rule, though, scalability is easier and less resource-intensive when considered from the beginning.

For one thing, database choice has a huge impact on scalability. Migrating to a new database is expensive and time-consuming. It isn’t something that can be easily done later on.

Principles of Scalability

Several factors affect the overall scalability of software:


Usage

Usage measures the number of simultaneous users or connections possible. There shouldn’t be any artificial limits on usage.

Increasing it should be as simple as making more resources available to the software.

Maximum stored data

This is especially relevant for sites featuring a lot of unstructured data: user-uploaded content, site reports, and some types of marketing data.

Data science projects fall under this category as well. The amount of data generated by these kinds of content could rise dramatically and unexpectedly.

Whether the maximum stored data can scale quickly depends heavily on database style (SQL vs NoSQL servers), but it’s also critical to pay attention to proper indexing.


Code considerations

Inexperienced developers tend to overlook code considerations when planning for scalability.

Code should be written so that it can be added to or modified without refactoring the old code. Good developers aim to avoid duplication of effort, reducing the overall size and complexity of the codebase.

Applications do grow in size as they evolve, but keeping code clean will minimize the effect and prevent the formation of “spaghetti code”.
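The duplication-avoidance point can be illustrated with a small sketch (the function and field names here are hypothetical): shared clean-up logic lives in one helper instead of being repeated in every output path.

```python
def normalize_user(user):
    # Single source of truth for user clean-up, shared by every output format.
    return {"name": user["name"].strip().title(),
            "email": user["email"].strip().lower()}

def format_user_csv(user):
    u = normalize_user(user)
    return f"{u['name']},{u['email']}"

def format_user_json(user):
    return normalize_user(user)

print(format_user_csv({"name": "  ada lovelace", "email": "Ada@Example.com "}))
# Ada Lovelace,ada@example.com
```

If the clean-up rules change later, only `normalize_user` needs editing, which keeps the codebase smaller and safer to modify as it grows.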

Scaling Out Vs Scaling Up

Scaling up (or “vertical scaling”) involves growing by using more advanced or stronger hardware. More disk space or a faster central processing unit (CPU) handles the increased workload.

Scaling up offers better performance than scaling out. Everything is contained in one place, allowing for faster response times and fewer points of vulnerability.

The problem with scaling up is that there’s only so much room to grow. Hardware gets more expensive as it becomes more advanced. At a certain point, businesses run up against the law of diminishing returns on buying advanced systems.

It also takes time to implement the new hardware.

Because of these limitations, vertical scaling isn’t the best solution for software that needs to grow quickly and with little notice.

Scaling out (or “horizontal scaling”) is much more widely used for enterprise purposes.

When scaling out, software grows by using more (not more advanced) hardware and spreading the increased workload across the new infrastructure.

Costs are lower because the extra servers or CPUs can be the same type currently used (or any compatible kind).

Scaling happens faster, too, since nothing has to be imported or rebuilt.

There is a slight tradeoff in speed, however. Horizontally-scaled software is limited by the speed with which the servers can communicate.

The difference isn’t large enough to be noticed by most users, though, and there are tools to help developers minimize the effect. As a result, scaling out is considered a better solution when building scalable applications.

Guidelines for Building Highly Scalable Systems

It’s both cheaper and easier to consider scalability during the planning phase.  Here are some best practices for incorporating scalability from the start:

Use load balancing software

Load balancing software is critical for systems with distributed infrastructure (like horizontally scaled applications).

This software uses an algorithm to spread the workload across servers to ensure no single server gets overwhelmed. It’s an absolute necessity to avoid performance issues.
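The core idea can be sketched in a few lines. This toy round-robin dispatcher is a stand-in for production balancers such as NGINX or HAProxy; the server names are hypothetical:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy load balancer: hand each incoming request to the next server in turn."""

    def __init__(self, servers):
        self._servers = cycle(servers)  # endless rotation over the server pool

    def route(self, request):
        # A real balancer would also health-check servers and forward the request.
        return next(self._servers)

balancer = RoundRobinBalancer(["app-1", "app-2", "app-3"])
assignments = [balancer.route(f"req-{i}") for i in range(6)]
print(assignments)  # ['app-1', 'app-2', 'app-3', 'app-1', 'app-2', 'app-3']
```

Round-robin is only the simplest algorithm; production balancers also weight servers by capacity or current load.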

Location matters

Scalable software does as much near the client (in the app layer) as possible. Reducing the number of times apps must navigate the heavier traffic near core resources leads to faster speeds and less stress on the servers.

Edge computing is something else to consider. With more applications requiring resource-intensive processing, keeping as much work as possible on the device lowers the impact of low-signal areas and network delays.

Cache where possible

Be conscious of security concerns, but caching is a good way to keep from having to perform the same task over and over.

Lead with APIs

Users connect through a variety of clients, so leading with APIs that don’t assume a specific client type lets one back end serve all of them.

Asynchronous processing

Asynchronous processing refers to processes that are separated into discrete steps which don’t need to wait for the previous one to complete before running.

For example, a user can be shown a “sent!” notification while the email is still technically processing.

Asynchronous processing removes some of the bottlenecks that affect performance for large-scale software.
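The email example can be sketched with Python’s `asyncio`; the function names and delay are illustrative:

```python
import asyncio

async def deliver_email(log):
    await asyncio.sleep(0.05)  # stands in for the slow SMTP hand-off
    log.append("delivered")

async def send_email(log):
    # Start delivery in the background instead of blocking on it...
    task = asyncio.create_task(deliver_email(log))
    # ...and acknowledge the user right away.
    log.append("sent! notification shown")
    await task  # a real server would simply let the event loop keep running

events = []
asyncio.run(send_email(events))
print(events)  # ['sent! notification shown', 'delivered']
```

The user sees the confirmation immediately, while the actual delivery finishes on its own time.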

Limit concurrent access to limited resources

Don’t duplicate efforts. If more than one request asks for the same calculation from the same resource, let the first finish and just use that result. This adds speed while reducing strain on the system.
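This “let the first request finish and reuse its result” idea is sometimes called single-flight. A minimal threaded sketch, with a hypothetical expensive function, might look like:

```python
import threading

class SingleFlight:
    """Let the first request compute a result; later requests reuse it."""

    def __init__(self):
        self._lock = threading.Lock()
        self._results = {}

    def do(self, key, fn):
        with self._lock:
            # Whoever arrives first computes; everyone else reuses the result.
            if key not in self._results:
                self._results[key] = fn()
            return self._results[key]

calls = []

def expensive():
    calls.append(1)  # stands in for a heavy calculation or query
    return 42

sf = SingleFlight()
threads = [threading.Thread(target=sf.do, args=("tax-report", expensive))
           for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(calls), sf.do("tax-report", expensive))  # 1 42
```

Holding the lock during the computation keeps the sketch short; production single-flight implementations let requests for unrelated keys proceed in parallel.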

Use a scalable database

NoSQL databases tend to be more scalable than SQL databases. SQL scales read operations well enough, but write operations run up against the restrictions meant to enforce ACID principles.

NoSQL databases relax those principles to scale more easily, so if strict ACID compliance isn’t a concern, a NoSQL database may be the right choice.

Consider PaaS solutions

Platform-as-a-service relieves a lot of scalability issues since the PaaS provider manages scaling. Scaling can be as easy as upgrading the subscription level.

Look into FaaS

Function-as-a-service evolved from PaaS and is very closely related. Serverless computing provides a way to only use the functions that are needed at any given moment, reducing unnecessary demands on the back-end infrastructure.

FaaS is still maturing, but it could be worth looking into as a way to cut operational costs while improving scalability.

Don’t forget about maintenance

Set software up for automated testing and maintenance so that when it grows, the work of maintaining it doesn’t get out of hand.

Build with An Eye to the Future

Prioritizing scalability prepares your business for success. Consider it early, and you’ll reap the benefits in agility when it’s most needed.

Are you looking for software that can grow with your company? Set up a free appointment with one of our developers to talk about where you need to go and how we can get you there!

Using React with GraphQL: An Apollo Review

As enterprise AI and the Internet of Things (IoT) expand, flexibility is crucial in the software development world.

Developers need tools that help them manage a shifting network of technology while creating products that are economical to maintain.

One of the newest, with a stable release in 2016, is GraphQL. This open-source tool created by Facebook has some developers calling it “the future of APIs”.

What Is GraphQL?

GraphQL is Facebook’s query language for APIs.

It’s a syntax that outlines how to request specific data, and it’s most often used by a server to load data to a client.

In simple terms, GraphQL serves as an intermediate layer between the client and a collection of data sources.

It receives requests from the client, fetches data according to its instructions, and returns what was requested by the call.

Flexibility and specificity set GraphQL apart from other options like REST APIs.

The client can ask for the data it really wants and draw only that data from multiple sources. It pulls many resources in one call, all organized by types.

What problem does GraphQL solve?

REST APIs were a huge step forward, but they have some baggage.

Much of the data pulled never gets used, wasting time and potentially slowing an application with no payoff. REST APIs also require multiple calls to access separate resources.

With GraphQL, the server can query data from several hard-to-connect sources from a single endpoint and deliver it in an expected format.

It’s a standardized, straightforward way to ask for exactly what is needed.
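For illustration, a GraphQL call is just an HTTP POST whose body carries the query text; the schema and field names below are hypothetical:

```python
import json

def build_graphql_body(query, variables=None):
    # Every request targets a single endpoint; the query says what to return.
    return json.dumps({"query": query, "variables": variables or {}})

# Ask for exactly the fields needed, and nothing more.
query = """
query UserSummary($id: ID!) {
  user(id: $id) {
    name
    lastOrder { total }
  }
}
"""
body = build_graphql_body(query, {"id": "42"})
print(json.loads(body)["variables"])  # {'id': '42'}
```

The server resolves each requested field, possibly from different data sources, and returns a response shaped exactly like the query.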

It also solves the problem of backward compatibility. With REST APIs, any change to an endpoint necessitates a version change to prevent compatibility issues.

New requirements don’t necessitate a new endpoint when using GraphQL.

React + GraphQL = Apollo

Apollo Client is a small, flexible, fully-featured client often used with GraphQL.

It has integrations for many tools including Angular and React, the latter being very popular with developers right now.

Apollo has several useful advantages. It’s simple to learn and use, so bringing teams up to speed is easy.

It can be used as the root component for state management. The client passes query results to components as props.

Plus, developers can make changes and see them reflected in the UI immediately. Apollo also features a helpful client library and good developer tools.

One of the biggest operational benefits is that Apollo is incrementally adoptable. Developers can drop it into part of an existing app without having to rebuild the entire thing.

It works with any build set-up and any GraphQL server or schema, too.

Strengths of Apollo

Being able to fetch complex relations with a single call, plus avoiding problems with types, are major benefits.

Apollo also offers multiple filter types, can be used as state management, and removes the need to handle HTTP calls.

With Apollo, subscriptions are usually implemented with WebSockets, which is an advantage over competing clients.

Most importantly from an operations standpoint, Apollo is easy to learn and use.

It’s painless for team members to add it to their toolkits.

Limitations of Apollo

APIs are still needed for authorization and security (including tokens, JSON Web Tokens, and session management).

It’s also true that Apollo can’t go as deeply as Redux does, so when building complex apps, the tools have to be combined.


GraphQL vs REST

GraphQL is often compared to REST APIs, though they aren’t exactly the same thing.

REST is an API design architecture that decouples the underlying system from the API response. Like GraphQL, it serves as an intermediary, but it takes a different approach.

There are multiple endpoints compared to GraphQL’s single endpoint philosophy. That adds complexity as the application scales.

REST also suffers from under- and over-fetching. With Apollo, GraphQL queries only what is needed at the time, which eliminates the problem.

Some developers like to use Relay instead of Apollo. Relay is Facebook’s open-sourced GraphQL client.

It’s heavily optimized for performance, working to reduce network traffic wherever possible. The tradeoff is that Relay is complex and hard to learn. Many developers find Apollo simpler to use.

Future Outlook

Once considered a niche technology, GraphQL is now proving its worth.

Major companies are using it in production, including Facebook, Airbnb, GitHub, and Twitter. With this much growth over just a few years, it’s a safe bet GraphQL has a long functional life ahead of it.

Wondering if GraphQL would work for your company’s next project? Set up a complimentary meeting to review your needs and find out what kind of solution we could build for you!

Request a Consultation

What Is the Difference Between Front-End and Back-End Development?

Originally published February 9, 2017, updated Feb. 27, 2019.

Front-end developers work on what the user can see while back-end developers build the infrastructure that supports it.

Both are necessary components for a high-functioning application or website.

It’s not uncommon for companies to get tripped up by the “front-end versus back-end” divide when trying to navigate the development of new software.

After all, there are a growing number of tools on the market aimed at helping developers become more “full stack” oriented, so it’s easy for non-technicians to assume there isn’t a big difference between front-end and back-end specialists.

Front-end and back-end developers do work in tandem to create the systems necessary for an application or website to function properly. However, they have opposite concerns.

The term “front-end” refers to the user interface, while “back-end” means the server, application and database that work behind the scenes to deliver information to the user.

The user enters a request through the interface.

It’s then verified and communicated to the server, which pulls the necessary data from the database and sends it back to the user.

Here’s a closer look at the difference between front-end and back-end development.

What is Front-End Development?

The front-end is built using a combination of technologies such as Hypertext Markup Language (HTML), JavaScript and Cascading Style Sheets (CSS).

Front-end developers design and construct the user experience elements on the web page or app, including buttons, menus, pages, links, graphics and more.


HTML

Hypertext Markup Language is the core of a website, providing the overall design and functionality.

The most recent version was released in late 2017 and is known as HTML5.2.

The updated version includes more tools aimed at web application developers as well as adjustments made to improve interoperability.


CSS

Cascading Style Sheets give developers a flexible, precise way to create attractive, interactive website designs.


JavaScript

This event-based language is useful for creating dynamic elements on static HTML web pages.

It allows developers to access elements separate from the main HTML page, as well as respond to server-side events.

Front-end frameworks such as Angular, Ember, Backbone, and React are also popular.

These frameworks let developers keep up with the growing demand for enterprise software without sacrificing quality, so they’re earning their place as standard development tools.

One of the main challenges of front-end development – which also goes by the name “client-side development” – is the rapid pace of change in the tools, techniques and technologies used to create the user experience for applications and websites.

The seemingly simple goal of creating a clear, easy-to-follow user interface is complicated by the wide variation in screen sizes and resolutions across mobile devices and computers.

Things get even more complicated when the Internet of Things (IoT) is considered.

Screen sizes and network connections now vary more widely, so developers have to balance those concerns when working on their user interfaces.

What is Back-End Development?

The back-end, also called the server side, consists of the server which provides data on request, the application which channels it, and the database which organizes the information.

For example, when a customer browses shoes on a website, they are interacting with the front end.

After they select the item they want, put it in the shopping cart, and authorize the purchase, the information is kept inside the database which resides on the server.

A few days later when the client checks on the status of their delivery, the server pulls the relevant information, updates it with tracking data, and presents it through the front-end.

Back-end Tools

The core concern of back-end developers is creating applications that can find and deliver data to the front end.

Many of them use reliable enterprise-level databases like Oracle, Teradata, Microsoft SQL Server, IBM DB2, EnterpriseDB and SAP Sybase ASE.

There are also a number of other popular databases, including MySQL and PostgreSQL, as well as NoSQL options.

There are a wide variety of frameworks and languages used to code the application, such as Ruby on Rails, Java, C++/C/C#, Python and PHP.

Over the last several years Backend-as-a-Service (BaaS) providers have been maturing into a viable alternative.

They’re especially useful when developing mobile apps and working within a tight schedule.

What is Full-Stack Development?

The development of both the back- and front-end systems has become so specialized that it’s most common for a developer to specialize in only one.

As a general rule, full-stack development by a single programmer isn’t a practical solution.

However, at times a custom software development company will have developers who are proficient with both sides; these are known as full-stack developers.

They’re powerful team players because they have the breadth of knowledge to see the big picture, letting them suggest ways to optimize the process or remove roadblocks that might be slowing down the system.

To find out which database and framework to use on your next project, read our article “What is the Best Front-End/Back-End Combo for an Enterprise App.”

If you’re ready to see how we can put our knowledge to work for you, set up a free consultation today!


The Hottest Web Development Trends of 2019

Originally published January 6, 2017, updated Feb. 5, 2019.

Web developers are focusing on the customer this year.

There’s been a growing emphasis on the customer journey over the last few years, and 2019 will see more focus on providing a responsive, customized experience to every visitor.

To that end, the leading web development trends for 2019 are those that help developers engage visitors and provide personalized service.

Motion UI

Visitors won’t spend much time on a site if they can’t find what they need.

Simple, interactive design keeps users engaged and makes navigation straightforward.

One of the most common ways to add this functionality in 2019 is motion UI.

The trend includes both the concept of featuring simplified motion effects in web design and a specific tool for doing so.

Motion UI is a standalone Sass library for creating CSS transitions and animations.

It offers interactive motion effects that visually guide site visitors towards popular features. Developers like Motion UI for its customizable components and flexibility.

Adding effects is simple, so it’s an easy way to add interest to a site without throwing off development schedules.

Whether it’s done using Motion UI or another tool, dynamic visual effects are showing promise as a way to improve user engagement.

Expect wider adoption as developers explore its value in an enterprise context.

Adaptive Design

The line between mobile and home computing is so faint it’s practically invisible.

Consumers own just under 4 connected devices each (including smartphones, tablets, laptops, and other devices) and switch between them regularly during an average of 5.9 hours of daily media usage.

Companies who can provide consistent user experience regardless of how visitors reach their site will enjoy greater engagement and more return traffic.

It’s not enough for web design to be responsive anymore. Responsive design alone can leave sites looking awkward or unattractive on some devices.

Now design needs to be adaptive, able to rearrange itself to suit different device classes while providing a high-quality user experience for each.

This year, developers will explore optimized templates that translate content to a variety of device classes.

Artificial Intelligence and Chatbots

Artificial intelligence is everywhere. It’s already being used to improve search results, upsell products, power facial recognition programs on social media, and sort articles on sites like Wikipedia.

Now, it’s making a place in the customer service arena.

More than 40% of organizations worldwide plan to launch customer-facing artificial intelligence technology this year.

Chatbots are leading the charge. Within the next five years they’re set to become the most common AI application across all consumer applications.

Natural language processing (NLP) has matured enough that chatbots offer real value instead of frustrating customers.

In fact, over half of consumers like having the constant point of access to businesses that chatbots provide.

Look for more chatbots, virtual agents, and NLP-based form filling tools throughout 2019.

Progressive Web Apps

Progressive web apps are still generating excitement in 2019.

Developers view them as a serious competitor for native apps, especially as more browsers support their full suite of features.

There are a lot of benefits to using PWAs. Development is often shorter and less costly. They offer excellent performance even on poor devices and in low signal areas.

Dropping below the three seconds most users wait before leaving a slow site helps lower bounce rates and increase time spent on-site.

PWA service workers provide limited offline functionality. With 70% of world economic growth over the next several years expected to come from emerging markets, where connections are often unreliable, that’s a significant advantage.

There are still some problems with browser compatibility, but those will fade away as browsers catch up to the latest W3C standards.

Looking forward

Some of these trends should grow in popularity as 2019 proceeds.

Artificial intelligence, for example, is making strides in proving its worth as an enterprise tool.

It would be hard to imagine anyone abandoning it right as it begins to realize its full potential.

Others aren’t as easy to predict. Motion UI may be exciting, but there aren’t any numbers on its practical impact yet.

For now, these are all solid tools for developers looking to boost performance and improve the customer experience.

Questions? Concepta’s team stays up to date on the latest web development trends. Drop us a line to talk about which ones are best for your next project!



Using Docker to Increase Developer Efficiency

Docker is a cross-platform program for building and deploying containerized software. It enables faster, more efficient development while reducing maintenance complexity in the long run.

As technology, especially enterprise technology, races forward at breakneck speed, it’s both a good and a bad time to be in the software business.

On one hand, there’s plenty of work for skilled developers. On the other, there may be too much work.

The enterprise software market is expected to grow 8.3% this year, and experts suggest it would grow faster if there were enough developers to meet the demand.

Faced with this pressure to produce more and better software, developers’ toolkits are expanding.

The priority now is technology that improves development speed and efficiency, such as Docker.

What is Docker?

Docker is a cross-platform virtualization program used to create containers: lightweight, portable, self-contained environments where software runs independently of other software installed on the host machine.

Containers are largely isolated from each other and communicate through specific channels.

They contain their own application, tools, libraries and configuration files, but they’re still more lightweight than virtual machines.

Though container technology has been around since 2008, Docker’s release in late 2013 boosted their popularity. The program featured simple tooling that created an easy path for adoption.

Now, it’s a favorite DevOps tool which facilitates the work of developers and system administrators alike.

The Power of Containers

Containerization provides a workaround for some irritating development hurdles. For instance, running several different applications in a single environment causes complexity.

The individual components don’t always work well together, and managing updates gets complicated fast.

Containers solve these problems by separating applications into independent modules.

They feed into the enterprise-oriented microservice architecture style, letting developers work on different parts of an application simultaneously.

This increases the speed and efficiency of development while making applications that are easier to maintain and update.

Taken as a whole, it’s obvious why both software developers and IT teams like containers.

The technology enables the rapid, iterative development and testing cycles which lie at the core of Agile methodologies.

It also takes the burden of dependency management off system administrators, who can then focus on runtime tasks (such as logging, monitoring, lifecycle management and resource utilization).

Why Docker Is the Right Choice

Docker isn’t the only containerization software around, but it is the industry standard.

It offers a robust, easy-to-use API and ecosystem that make containers more approachable to developers and more enterprise-ready.

The program has an edge on previous solutions when it comes to portability and flexibility.

Using Docker simplifies the process of coordinating and chaining together container actions, and it can be done faster than on virtual machines.

Docker removes dependencies and allows code to interact with the container instead of the server (Docker handles server interactions).

Plus, there’s a large repository of pre-built images available through Docker Hub.

Getting up to speed with Docker doesn’t take long. The documentation is thorough, and there are plenty of tutorials online for self-taught developers.

Docker In Action: The Financial Times

The Financial Times is a London newspaper founded in 1888. Their online portal provides current business and economic news to an international audience.

The media outlet was one of the earlier adopters of Docker back in 2015. Docker containers helped cut their server costs by 80%.

Additionally, they were able to increase their release rate from 12 releases per year to 2,200.

Looking Forward

Last year, the median container density per host rose 50% from the previous year.

In fact, the application container market is poised to explode over the next five years.  Experts predict that annual revenue will quadruple, rising from $749 million in 2016 to over $3.4 billion by 2021.

Docker specifically still leads the pack despite the niche popularity of emerging tools.

83% of developers use Docker. CoreOS trails well behind at 12%, with Mesos Containerizer at 4%.

Overall, Docker is a highly enterprise-oriented solution.

Other tools are emerging to add functionality (like the container orchestration platform Kubernetes), so there’s no reason Docker shouldn’t continue growing in popularity.

Concepta focuses on enterprise-ready tools like Docker that let us target our clients’ specific needs. To explore solutions for your own business goals, set up a complimentary appointment today!

Request a Consultation

Why Less (Code) Is More


Writing less code helps developers build clean, functional software that’s easy to maintain over time.

Ask any industry expert what makes a good developer and they’ll offer a variety of answers.

Broad technical experience, good communication skills, and excellent time management head the list. Those are all useful characteristics.

However, there is one trait that usually gets overlooked, something that has an enormous impact on both the development process and final quality: the ability to write lean, concise code.

The best developers know how to get more mileage from less code.

It’s an especially important skill in this era of reusable code, when the availability of ready-made components provides so many shortcuts for busy developers.

Those components represent a huge step forward by cutting the amount of tedious programming required in the early stages of a project.

The downside is that these development tools make it easy for inexperienced developers to write bulky code.

By flipping the script – focusing on writing less code instead of faster code – developers can build reliable software with low technical debt.

What Do Developers Do (Really)?

Developers are subject matter experts with the technical skills to build and maintain software.

They work to understand technology in order to create technology-based solutions for real-world problems.

Nowhere in that description does it say, “write code”.

Code is obviously how the technology gets built, but it should be seen as a means to an end rather than the whole job description.

Developers need to combine business sense, problem-solving skills, and technical knowledge if they want to deliver the best value to their clients.

Too often, developers forget their true purpose.

They write code according to habit, personal style, or excitement over a new tool they’ve been hoping to use instead of prioritizing ease of maintenance and scalability.

The result is software with a huge code base and a shortened usable life.

The Code Economy Advantage

When it comes to code, more is actually less. The odds of project failure go up as the code size increases.

After all, more code translates into more chances for something to go wrong.

One area of concern is bugs that make it into the final software.  The industry average ranges from 15-50 errors per 1000 lines of code delivered.

Projects with sizable code bases are going to have more flaws as a flat statistical reality.

Denser code is less likely to be read thoroughly, too, which means the ratio of errors to code will land toward the higher end of that scale.
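To see what that range means in practice, here’s a quick back-of-the-envelope sketch using the 15-50 defects per 1,000 lines figure above (the code base sizes are hypothetical, chosen only to illustrate the scaling):

```python
# Rough defect estimates using the industry range of 15-50 errors
# per 1,000 lines of code. The code base sizes are hypothetical.

def expected_defects(lines_of_code, defects_per_kloc):
    """Estimate total defects for a code base of a given size."""
    return lines_of_code / 1000 * defects_per_kloc

# A lean 20,000-line code base vs. a bloated 80,000-line one
for loc in (20_000, 80_000):
    low = expected_defects(loc, 15)
    high = expected_defects(loc, 50)
    print(f"{loc:>6} LoC: {low:.0f}-{high:.0f} expected defects")
```

At the low end of the scale, quadrupling the code base quadruples the expected defect count, which is exactly the “flat statistical reality” described above.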

Having more lines of code also leads to higher technical debt.

Future maintainers (and anyone trying to update the software) must navigate that code to fix bugs, add features, and integrate the software with future systems.

Software development is a place where labor is a significant expense. When time spent on development and maintenance rises with a program’s size, it spurs an equal rise in IT spending.

There’s another increase in developer overhead from additional meetings and longer on-boarding processes when new team members are added.

Considering all of this, there are clear advantages to emphasizing concise code, both in cost and quality.

Code written efficiently and directly is:

  • Simple to maintain
  • Easy to understand
  • More flexible
  • Ages better
  • Easier to update and build onto
  • Reusable & elegant

Developers should work to write as much code as they need to get the job done correctly, and no more.

Why Developers Get “Code Happy”

If writing less code has such a powerful effect, why do developers continue to write far more code than is actually needed?

There are a few typical motivations:

Desire for Productivity

In some agencies, lines of code are used as a measure of productivity. The thinking goes that more lines of code (often abbreviated as LoC) equals more work done.

This is particularly common when running distributed teams who typically work without in-person direction.

The problem is that measuring productivity by LoC completed leads to sloppy writing and a focus on quantity over quality.

It’s like measuring a hotel’s success by how many towels it uses; the figure has some bearing on success but can be very misleading.

Misaligned Priorities

Software development always involves trade-offs: simplicity versus feature richness, security versus performance, speed versus space.

Sometimes the value of writing less code gets lost in the shuffle.

There’s no unimportant component of development.

Every project has different demands that require a tailored approach.

However, brevity is important enough that it should always be a high priority.

Personal Preference

Developers tend to be “code geeks”. They like to write lots of code and try new things purely for the sake of trying them.

It’s a great quality when they’re learning or experimenting.

However, it’s not the best idea when working on enterprise software. That approach can produce four times as many lines of code as a task needs.

Developers need to direct their talent towards building software that meets the product owner’s goals even when that conflicts with their personal preferences.

Lack of Skill

Writing clean, concise code takes skill and practice.

Often less-experienced developers don’t know how to reduce LoC without cutting features or impacting performance.

Almost everyone does this in the beginning, but getting past it is part of honing developer skills.


Adherence to Convention

Developers all learned their trade somewhere.

Every school of thought and development philosophy imposes certain ideas about how to make code more readable.

The issue arises when holding to convention comes at the expense of code economy.

As much as 54% of LoC are inspired by convention as opposed to utility.

Extra whitespace, intentionally skipped lines, and verbose linguistic keywords are examples.

There are ways to improve readability without conventions that pad out the code base.

How to Write Less Code?

Making something complex look simple is hard, but it’s very easy to complicate simple things. Code economy is like that.

These are a few straightforward guidelines that can help developers get past complexity and write less code.

Build on an Existing Foundation

There’s no reason to reinvent the wheel. Use libraries instead of recreating what others have done well countless times.

Innovation should be saved for problems that don’t already have good solutions.
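As an illustrative sketch in Python, counting word frequencies by hand takes a loop and manual bookkeeping, while the standard library already does the same job in one line:

```python
from collections import Counter

words = "the quick brown fox jumps over the lazy dog the end".split()

# Reinventing the wheel: manual counting with bookkeeping
counts = {}
for word in words:
    if word not in counts:
        counts[word] = 0
    counts[word] += 1

# Building on an existing foundation: the standard library
library_counts = Counter(words)

assert counts == library_counts  # same result, far less code
```

The library version is not just shorter; it has already been tested by countless users, so it carries none of the defect risk of the hand-rolled loop.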

The Right Black Boxes are Good

Choose tools that enable code efficiency.

The Chrome V8 JavaScript engine powers Node.js, React Native, Electron, and the Chrome browser with 1,884,670 lines of code. Developers who build on a black box like that get all of its functionality without writing a line of it themselves.

Be Careful Selecting Dependencies & Frameworks

Anything used should be mature, well-supported, and preferably have already proven its worth in practice.

Prioritize lean and simple frameworks whenever possible.

It’s also important to consider the size of the community and strength of company backing so future developers will have an easier time working with the code.

Reframe “LoC written” as “LoC spent”

Break down the connection between LoC and assumed productivity by changing the way it’s measured.

Instead of counting how many lines a developer was able to write, measure how many they needed to get the job done.

Compare it to a golf score: the fewer LoC that can be written while still delivering a good product and meeting sprint deadlines, the better.

Spend some time during the planning phase to brainstorm how less code can be written.

Planning ahead instead of charging in allows more opportunities for code economy.

The Code Economy Mindset

A huge part of writing less code is maintaining a direct, economical mindset.

Optimize code for correctness, simplicity, and brevity.

Don’t depend on assumptions that aren’t contained in the code, and never use three lines where one is just as readable and effective.

Make consistent style choices that are easy to understand. Avoid “run-on coding sentences” by breaking different thoughts and concepts into separate LoC.
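A small Python sketch of the same logic written two ways (the function and field names are hypothetical):

```python
# Verbose: several statements where one will do
def get_active_names_verbose(users):
    names = []
    for user in users:
        if user["active"]:
            names.append(user["name"])
    return names

# Concise: one readable expression, same behavior
def get_active_names(users):
    return [user["name"] for user in users if user["active"]]

users = [{"name": "Ada", "active": True}, {"name": "Bob", "active": False}]
assert get_active_names(users) == get_active_names_verbose(users) == ["Ada"]
```

The concise version is just as readable, and there are fewer lines for a bug to hide in.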

Only add code that will be used now, too.

There’s a tendency to try and prepare for the future by guessing what might be needed and adding in the foundations of those tools.

It seems like a smart approach, but in practice it causes problems, mainly:

  • Guesses about what may be useful in the future may be wrong.
  • Time spent writing code that won’t be used yet can delay the product’s launch.
  • Extra work translates into a higher investment before the product has even proven its worth or begun generating ROI.

Abstracting is another practice that should only be done according to present need. Do not abstract for the future.
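A hypothetical Python sketch of the difference, contrasting abstraction built for an imagined future with abstraction built for present need (all class and function names are invented for illustration):

```python
# Over-abstracted "for the future": layers nobody needs yet
class ExporterBase:
    def export(self, data):
        raise NotImplementedError

class CsvExporter(ExporterBase):
    def export(self, data):
        return ",".join(str(x) for x in data)

class CsvExporterFactory:
    def create(self):
        return CsvExporter()

# Abstracted for present need: one function that does today's job
def export_csv(data):
    return ",".join(str(x) for x in data)

# Identical output; the second version is trivial to read and maintain
assert CsvExporterFactory().create().export([1, 2, 3]) == export_csv([1, 2, 3])
```

If a second export format genuinely arrives later, the abstraction can be introduced then, informed by a real requirement instead of a guess.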

Along these same lines, don’t litter the code with TODOs or comments that compensate for unclear logic.

This messy habit encourages sloppy code that can’t be understood on its own.

When explanation is genuinely needed, inline documentation keeps it where everyone can read it, right alongside the code it describes.
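In Python, for example, inline documentation takes the form of a docstring, which travels with the code and is available to any reader via `help()` (the function here is a hypothetical example):

```python
def normalize(values):
    """Scale a list of numbers so they sum to 1.0.

    Inline documentation like this lives with the code it explains
    and is available to every reader via help(normalize).
    """
    total = sum(values)
    return [v / total for v in values]

assert normalize([1, 1, 2]) == [0.25, 0.25, 0.5]
```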

Reusable code is a major asset, but make sure it’s fully understood instead of blindly copying and pasting to save time.

Try to choose options that follow good code economy guidelines.

Finally, don’t rush through development with the idea of refactoring later on.

The assumption that refactoring is “inevitable” leads to software that already needs work at launch.

It is possible to create a solid product with low technical debt by writing clean, concise code up front.

Keep A Hand on The Reins

Most importantly, don’t go too far by prioritizing the absolute minimum LoC over more practical concerns.

As discussed earlier, a developer’s job is to solve problems.

When minimalist code is forced to override other needs, it becomes part of the problem instead of a solution.

Writing less code helps our developers create technology-based enterprise solutions with a long shelf life. Set up a free consultation to find out how we can solve your company’s most urgent business problems!

Request a Consultation