GraphQL is a query syntax that outlines how to request specific data, and it’s most often used by a client to load data from a server.
In simple terms, GraphQL serves as an intermediate layer between the client and a collection of data sources.
It receives requests from the client, fetches data according to its instructions, and returns what was requested by the call.
Flexibility and specificity set GraphQL apart from other options like REST APIs.
The client can ask for exactly the data it wants and draw only that data from multiple sources, pulling many resources in one call, all organized by types.
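The idea can be sketched in plain JavaScript. This is an illustration of the concept rather than real GraphQL, and the user and order services and their fields are hypothetical:

```javascript
// Two separate "data sources" the server can reach.
const userService = { 42: { name: "Ada", email: "ada@example.com", age: 36 } };
const orderService = { 42: [{ id: 1, total: 99.5 }, { id: 2, total: 12.0 }] };

// "fields" plays the role of a GraphQL selection set: the caller names
// exactly the fields it wants, and nothing else comes back.
function resolveUser(id, fields) {
  const user = userService[id];
  const result = {};
  for (const field of fields) {
    if (field === "orders") {
      result.orders = orderService[id]; // second source, same call
    } else {
      result[field] = user[field]; // only what was asked for
    }
  }
  return result;
}

// One call, two sources; email and age are never fetched.
const reply = resolveUser(42, ["name", "orders"]);
```

A real GraphQL server does this resolution for every type in its schema, but the shape of the exchange is the same: the response mirrors the query.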
What problem does GraphQL solve?
REST APIs were a huge step forward, but they have some baggage.
Much of the data pulled never gets used, wasting time and potentially slowing an application with no payoff. REST APIs also require multiple calls to access separate resources.
With GraphQL, a single endpoint can query data from several hard-to-connect sources and deliver it in a predictable format.
It’s a standardized, straightforward way to ask for exactly what is needed.
It also solves the problem of backward compatibility. With REST APIs, any change to an endpoint necessitates a version change to prevent compatibility issues.
New requirements don’t necessitate a new endpoint when using GraphQL.
React + GraphQL = Apollo
Apollo Client is a small, flexible, fully featured GraphQL client.
It has integrations for many tools including Angular and React, the latter being very popular with developers right now.
Apollo has several useful advantages. It’s simple to learn and use, so bringing teams up to speed is easy.
It can be used at the root of an application for state management, and the client delivers query results to components as props.
Plus, developers can make changes and see them reflected in the UI immediately. Apollo also features a helpful client library and good developer tools.
One of the biggest operational benefits is that Apollo is incrementally adoptable. Developers can drop it into part of an existing app without having to rebuild the entire thing.
It works with any build set-up and any GraphQL server or schema, too.
Strengths of Apollo
Being able to fetch complex relations with a single call, plus avoiding problems with types, are major benefits.
Apollo also offers multiple filter types, can be used as state management, and removes the need to handle HTTP calls.
With Apollo, subscriptions are usually implemented over WebSockets, which is an advantage over many competing clients.
Most importantly from an operations standpoint, Apollo is easy to learn and use.
It’s painless for team members to add it to their toolkits.
Limitations of Apollo
APIs are still needed for authorization and security (including tokens, JSON Web Tokens, and session management).
It’s also true that Apollo can’t go as deeply as Redux does, so when building complex apps, the tools have to be combined.
GraphQL is often compared to REST APIs, though they aren’t exactly the same thing.
REST is an API design architecture that decouples the underlying system from the API response. Like GraphQL, it serves as an intermediary, but it takes a different approach.
There are multiple endpoints compared to GraphQL’s single endpoint philosophy. That adds complexity as the application scales.
REST also suffers from under- and over-fetching. A GraphQL query pulls only what is needed at the time, which eliminates the problem.
Some developers like to use Relay instead of Apollo. Relay is Facebook’s open-sourced GraphQL client.
It’s heavily optimized for performance, working to reduce network traffic wherever possible. The tradeoff is that Relay is complex and hard to learn; many find Apollo simpler to use.
Once considered a niche technology, GraphQL is now proving its worth.
Major companies are using it in production, including Facebook, Airbnb, GitHub, and Twitter. With this much growth over just a few years, it’s a safe bet GraphQL has a long functional life ahead of it.
Wondering if GraphQL would work for your company’s next project? Set up a complimentary meeting to review your needs and find out what kind of solution we could build for you!
Originally published February 9, 2017, updated Feb. 27, 2019.
Front-end developers work on what the user can see while back-end developers build the infrastructure that supports it.
Both are necessary components for a high-functioning application or website.
It’s not uncommon for companies to get tripped up by the “front-end versus back-end” divide when trying to navigate the development of new software.
After all, there are a growing number of tools on the market aimed at helping developers become more “full stack” oriented, so it’s easy for non-technicians to assume there isn’t a big difference between front-end and back-end specialists.
Front-end and back-end developers do work in tandem to create the systems necessary for an application or website to function properly. However, they have opposite concerns.
The term “front-end” refers to the user interface, while “back-end” means the server, application and database that work behind the scenes to deliver information to the user.
The user enters a request through the interface.
It’s then verified and communicated to the server, which pulls the necessary data from the database and sends it back to the user.
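That round trip can be sketched in a few lines of JavaScript, with the database reduced to an in-memory object. The names and fields here are hypothetical stand-ins:

```javascript
// A stand-in for the database living on the server.
const database = {
  "order-7": { status: "shipped", tracking: "ZX123" },
};

// The back end verifies the request before touching any data.
function verify(request) {
  return typeof request.orderId === "string" && request.orderId in database;
}

// The server pulls the necessary data and sends it back to the user.
function handleRequest(request) {
  if (!verify(request)) {
    return { ok: false, error: "invalid request" };
  }
  return { ok: true, data: database[request.orderId] };
}

const reply = handleRequest({ orderId: "order-7" });
```

In a real application the verification step would include authentication, and the database lookup would be a query over the network, but the flow from interface to server to database and back is the same.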
Here’s a closer look at the difference between front-end and back-end development.
What is Front-End Development?
Front-end developers design and construct the user experience elements on the web page or app including buttons, menus, pages, links, graphics and more.
Hypertext Markup Language (HTML) is the core of a website, providing its overall structure and functionality.
The most recent version was released in late 2017 and is known as HTML5.2.
The updated version includes more tools aimed at web application developers as well as adjustments made to improve interoperability.
Cascading style sheets (CSS) give developers a flexible, precise way to create attractive, interactive website designs.
JavaScript, an event-based language, is useful for creating dynamic elements on static HTML web pages.
It allows developers to access elements separately from the main HTML page, as well as respond to server-side events.
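The event-based model can be simulated in a few lines, with the browser’s element object mocked so the sketch runs anywhere. The `makeElement` helper is hypothetical, mirroring the DOM’s `addEventListener`:

```javascript
// A minimal element with addEventListener/dispatch, so the event flow
// can run outside a browser. In a real page, the DOM supplies this object.
function makeElement() {
  const handlers = {};
  return {
    addEventListener(type, fn) {
      (handlers[type] = handlers[type] || []).push(fn);
    },
    dispatch(type) {
      (handlers[type] || []).forEach((fn) => fn());
    },
  };
}

let clicks = 0;
const button = makeElement();
button.addEventListener("click", () => { clicks += 1; }); // register a handler
button.dispatch("click"); // simulates a user click; the handler runs
```

The page stays static until an event fires; the registered handler is what makes the element dynamic.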
Front-end frameworks such as React and Angular let developers keep up with the growing demand for enterprise software without sacrificing quality, so they’re earning their place as standard development tools.
One of the main challenges of front-end development (also known as “client-side development”) is the rapid pace of change in the tools, techniques and technologies used to create the user experience for applications and websites.
The seemingly simple goal of creating a clear, easy-to-follow user interface is difficult due to sometimes widely different mobile device and computer screen resolutions and sizes.
Things get even more complicated when the Internet of Things (IoT) is considered.
Screen size and network connection now have a wider variety, so developers have to balance those concerns when working on their user interfaces.
What is Back-End Development?
The back-end, also called the server side, consists of the server which provides data on request, the application which channels it, and the database which organizes the information.
For example, when a customer browses shoes on a website, they are interacting with the front end.
After they select the item they want, put it in the shopping cart, and authorize the purchase, the information is kept inside the database which resides on the server.
A few days later, when the customer checks on the status of their delivery, the server pulls the relevant information, updates it with tracking data, and presents it through the front end.
The core concern of back-end developers is creating applications that can find and deliver data to the front end.
Many of them use reliable enterprise-level databases like Oracle, Teradata, Microsoft SQL Server, IBM DB2, EnterpriseDB and SAP Sybase ASE.
There are also a number of other popular options, including MySQL, PostgreSQL and various NoSQL databases.
There are a wide variety of frameworks and languages used to code the application, such as Ruby on Rails, Java, C++/C/C#, Python and PHP.
Over the last several years Backend-as-a-Service (BaaS) providers have been maturing into a viable alternative.
They’re especially useful when developing mobile apps and working within a tight schedule.
What is Full-Stack Development?
The development of back-end and front-end systems has become so specialized that it’s most common for a developer to focus on only one.
However, at times a custom software development company will have developers who are proficient with both sides; these are known as full-stack developers.
They’re powerful team players because they have the breadth of knowledge to see the big picture, letting them suggest ways to optimize the process or remove roadblocks that might be slowing down the system.
Originally published December 7, 2017, updated Feb. 21, 2019.
Scalability is an essential component of enterprise software. Prioritizing it from the start leads to lower maintenance costs, better user experience, and higher agility.
Software design is a balancing act where developers work to create the best product within a client’s time and budget constraints.
There’s no avoiding the necessity of compromise. Tradeoffs must be made in order to meet a project’s requirements, whether those are technical or financial.
Too often, though, companies prioritize cost over scalability or even dismiss its importance entirely. This is unfortunately common in big data initiatives, where scalability issues can sink a promising project.
Scalability isn’t a “bonus feature.” It’s the quality that determines the lifetime value of software, and building with scalability in mind saves both time and money in the long run.
What is Scalability?
A system is considered scalable when it doesn’t need to be redesigned to maintain effective performance during or after a steep increase in workload.
“Workload” could refer to simultaneous users, storage capacity, the maximum number of transactions handled, or anything else that pushes the system past its original capacity.
Scalability isn’t a baseline requirement, in that unscalable software can run well at limited capacity.
However, it does reflect the ability of the software to grow or change with the user’s demands.
Any software that may expand past its base functions, especially if the business model depends on its growth, should be configured for scalability.
The Benefits of Scalable Software
Scalability has both long- and short-term benefits.
At the outset it lets a company purchase only what they immediately need, not every feature that might be useful down the road.
For example, a company launching a data intelligence pilot program could choose a massive enterprise analytics bundle, or they could start with a solution that just handles the functions they need at first.
A popular choice is a dashboard that pulls in results from their primary data sources and existing enterprise software.
When they grow large enough to use more analytics programs, those data streams can be added into the dashboard instead of forcing the company to juggle multiple visualization programs or build an entirely new system.
Building this way prepares for future growth while creating a leaner product that suits current needs without extra complexity.
It requires a lower up-front financial outlay, too, which is a major consideration for executives worried about the size of big data investments.
Scalability also leaves room for changing priorities. That off-the-shelf analytics bundle could lose relevance as a company shifts to meet the demands of an evolving marketplace.
Choosing scalable solutions protects the initial technology investment. Businesses can continue using the same software for longer because it was designed to grow along with them.
When it comes time to change, building onto solid, scalable software is considerably less expensive than trying to adapt less agile programs.
There’s also a shorter “ramp up” time to bring new features online than to implement entirely new software.
As a side benefit, staff won’t need much training or persuasion to adopt that upgraded system. They’re already familiar with the interface, so working with the additional features is viewed as a bonus rather than a chore.
The Fallout from Scaling Failures
So, what happens when software isn’t scalable?
In the beginning, the weakness is hard to spot. The workload is light in the early stages of an app. With relatively few simultaneous users there isn’t much demand on the architecture.
When the workload increases, problems arise. The more data the software stores or the more simultaneous users it serves, the more strain is placed on its architecture.
Limitations that didn’t seem important in the beginning become a barrier to productivity. Patches may alleviate some of the early issues, but patches add complexity.
Complexity makes diagnosing problems on an on-going basis more tedious (translation: pricier and less effective).
When the software is customer-facing, unreliability increases the potential for churn.
Google found that 61% of users won’t give an app a second chance if they had a bad first experience. 40% go straight to a competitor’s product instead.
Scalability issues aren’t just a rookie mistake made by small companies, either. Even Disney ran into trouble with the original launch of their Applause app, which was meant to give viewers an extra way to interact with favorite Disney shows. The app couldn’t handle the flood of simultaneous streaming video users.
Frustrated fans left negative reviews until the app had a single star in the Google Play store. Disney officials had to take the app down to repair the damage, and the negative publicity was so intense it never went back online.
Some businesses fail to prioritize scalability because they don’t see the immediate utility of it.
Scalability gets pushed aside in favor of speed, shorter development cycles, or lower cost.
There are actually some cases when scalability isn’t a leading priority.
Software that’s meant to be a prototype or low-volume proof of concept won’t become large enough to cause problems.
Likewise, internal software for small companies with a low fixed limit of potential users can set other priorities.
Finally, when ACID compliance is absolutely mandatory, scalability takes a backseat to reliability.
As a general rule, though, scalability is easier and less resource-intensive when considered from the beginning.
For one thing, database choice has a huge impact on scalability. Migrating to a new database is expensive and time-consuming. It isn’t something that can be easily done later on.
Principles of Scalability
Several factors affect the overall scalability of software:
Usage
Usage measures the number of simultaneous users or connections possible. There shouldn’t be any artificial limits on usage.
Increasing it should be as simple as making more resources available to the software.
Maximum stored data
This is especially relevant for sites featuring a lot of unstructured data: user-uploaded content, site reports, and some types of marketing data.
Data science projects fall under this category as well. The amount of data stored by these kinds of content could rise dramatically and unexpectedly.
Whether the maximum stored data can scale quickly depends heavily on database style (SQL vs NoSQL servers), but it’s also critical to pay attention to proper indexing.
Code
Code should be written so that it can be added to or modified without refactoring the old code. Good developers aim to avoid duplication of effort, reducing the overall size and complexity of the codebase.
Applications do grow in size as they evolve, but keeping code clean will minimize the effect and prevent the formation of “spaghetti code”.
Scaling Out Vs Scaling Up
Scaling up (or “vertical scaling”) involves growing by using more advanced or stronger hardware. More disk space or a faster central processing unit (CPU) is used to handle the increased workload.
Scaling up offers better performance than scaling out. Everything is contained in one place, allowing for faster returns and less vulnerability.
The problem with scaling up is that there’s only so much room to grow. Hardware gets more expensive as it becomes more advanced. At a certain point, businesses run up against the law of diminishing returns on buying advanced systems.
It also takes time to implement the new hardware.
Because of these limitations, vertical scaling isn’t the best solution for software that needs to grow quickly and with little notice.
Scaling out (or “horizontal scaling”) is much more widely used for enterprise purposes.
When scaling out, software grows by using more hardware, not more advanced hardware, and spreading the increased workload across the new infrastructure.
Costs are lower because the extra servers or CPUs can be the same type currently used (or any compatible kind).
Scaling happens faster, too, since nothing has to be imported or rebuilt.
There is a slight tradeoff in speed, however. Horizontally-scaled software is limited by the speed with which the servers can communicate.
The difference isn’t large enough to be noticed by most users, though, and there are tools to help developers minimize the effect. As a result, scaling out is considered a better solution when building scalable applications.
Load balancing software is critical for systems with distributed infrastructure (like horizontally scaled applications).
This software uses an algorithm to spread the workload across servers to ensure no single server gets overwhelmed. It’s an absolute necessity to avoid performance issues.
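A minimal round-robin balancer shows the idea. The server names are hypothetical, and production balancers also weigh health checks and current load:

```javascript
// Round-robin load balancing: hand each incoming request to the next
// server in the pool, wrapping around so work spreads evenly.
function makeBalancer(servers) {
  let next = 0;
  return function pick() {
    const server = servers[next];
    next = (next + 1) % servers.length; // rotate through the pool
    return server;
  };
}

const pick = makeBalancer(["app-1", "app-2", "app-3"]);
const assignments = ["req-a", "req-b", "req-c", "req-d"].map(() => pick());
// assignments → ["app-1", "app-2", "app-3", "app-1"]
```

With four requests and three servers, no server handles more than one request until every server has work, which is exactly the property that prevents a single node from being overwhelmed.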
Scalable software does as much near the client (in the app layer) as possible. Reducing the number of times apps must navigate the heavier traffic near core resources leads to faster speeds and less stress on the servers.
Edge computing is something else to consider. With more applications requiring resource-intensive processing, keeping as much work as possible on the device lowers the impact of low-signal areas and network delays.
Cache where possible
Be conscious of security concerns, but caching is a good way to keep from having to perform the same task over and over.
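A memoization helper is the simplest form of this kind of caching. The report function below is a hypothetical stand-in for any repeatable task:

```javascript
// Wrap a function so each distinct input is computed once, then served
// from an in-memory cache on every later call.
function memoize(fn) {
  const cache = new Map();
  return function (key) {
    if (!cache.has(key)) {
      cache.set(key, fn(key)); // compute once
    }
    return cache.get(key); // later calls hit the cache
  };
}

let computations = 0;
const expensiveReport = (region) => {
  computations += 1; // stands in for slow, repeated work
  return `report for ${region}`;
};

const cachedReport = memoize(expensiveReport);
cachedReport("east");
cachedReport("east"); // cache hit: the work runs only once
```

The same shape applies at larger scales, whether the cache is in-process, a shared store like Redis, or an HTTP cache; the security caveat above is about deciding what is safe to keep in it.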
Lead with API
Users connect through a variety of clients, so leading with APIs that don’t assume a specific client type can serve all of them.
Use asynchronous processing
Asynchronous processing refers to processes that are separated into discrete steps which don’t need to wait for the previous one to be completed before proceeding.
For example, a user can be shown a “sent!” notification while the email is still technically processing.
Asynchronous processing removes some of the bottlenecks that affect performance for large-scale software.
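The email example can be sketched with a promise. Here `sendEmail` is a hypothetical stand-in for the slow step:

```javascript
// The slow step, simulated with a timer instead of a real mail server.
function sendEmail(message) {
  return new Promise((resolve) => {
    setTimeout(() => resolve("delivered"), 50);
  });
}

function submit(message, notify) {
  const pending = sendEmail(message); // started, but not awaited
  notify("sent!"); // shown to the user immediately
  return pending; // actual delivery completes later
}
```

Calling `submit` fires the notification right away, while the returned promise resolves once the background work finishes; the user never waits on the slow step.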
Limit concurrent access to limited resources
Don’t duplicate efforts. If more than one request asks for the same calculation from the same resource, let the first finish and just use that result. This adds speed while reducing strain on the system.
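One common way to implement this is to share the in-flight promise between callers. This is a sketch, and the names are hypothetical:

```javascript
// If a second caller asks for the same calculation while the first is
// still running, hand back the same pending promise instead of
// starting the work again.
const inFlight = new Map();
let calls = 0;

function slowSquare(n) {
  calls += 1; // counts how many times the real work runs
  return new Promise((resolve) => setTimeout(() => resolve(n * n), 20));
}

function dedupedSquare(n) {
  if (!inFlight.has(n)) {
    const p = slowSquare(n).finally(() => inFlight.delete(n));
    inFlight.set(n, p);
  }
  return inFlight.get(n); // every concurrent caller shares this promise
}

// Two simultaneous requests share one underlying computation.
const both = Promise.all([dedupedSquare(7), dedupedSquare(7)]);
```

Both callers receive the same result, and the resource only did the calculation once.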
Use a scalable database
NoSQL databases tend to be more scalable than SQL. SQL does scale read operations well enough, but when it comes to write operations it conflicts with restrictions meant to enforce ACID principles.
Scaling NoSQL requires less stringent adherence to those principles, so if ACID compliance isn’t a concern a NoSQL database may be the right choice.
Consider PaaS solutions
Platform-as-a-service relieves a lot of scalability issues since the PaaS provider manages scaling. Scaling can be as easy as upgrading the subscription level.
Look into FaaS
Function-as-a-service evolved from PaaS and is very closely related. Serverless computing provides a way to only use the functions that are needed at any given moment, reducing unnecessary demands on the back-end infrastructure.
FaaS is still maturing, but it could be worth looking into as a way to cut operational costs while improving scalability.
Don’t forget about maintenance
Set software up for automated testing and maintenance so that when it grows, the work of maintaining it doesn’t get out of hand.
Build with An Eye to the Future
Prioritizing scalability prepares your business for success. Consider it early, and you’ll reap the benefits in agility when it’s most needed.
Are you looking for software that can grow with your company? Set up a free appointment with one of our developers to talk about where you need to go and how we can get you there!
Chatbots
Chatbots are leading the charge. Within the next five years, they’re set to become the most common AI application across consumer products.
Natural language processing (NLP) has matured enough that chatbots offer real value instead of frustrating customers.
In fact, over half of consumers like having the constant point of access to businesses that chatbots provide.
Look for more chatbots, virtual agents, and NLP-based form filling tools throughout 2019.
Progressive Web Apps
Progressive web apps are still generating excitement in 2019.
Developers view them as a serious competitor for native apps, especially as more browsers support their full suite of features.
There are a lot of benefits to using PWAs. Development is often shorter and less costly. They offer excellent performance even on poor devices and in low signal areas.
Dropping below the three seconds most users wait before leaving a slow site helps lower bounce rates and increase time spent on-site.
PWA service workers provide limited offline functionality. With 70% of world economic growth over the next several years expected to come from emerging markets, that’s a significant advantage.
There are still some problems with browser compatibility, but those will fade away as browsers catch up to the latest W3C standards.
Some of these trends should grow in popularity as 2019 proceeds.
Artificial intelligence, for example, is making strides in proving its worth as an enterprise tool.
It would be hard to imagine anyone abandoning it right as it begins to realize its full potential.
Others aren’t as easy to predict. Motion UI may be exciting, but there aren’t any numbers on its practical impact yet.
For now, these are all solid tools for developers looking to boost performance and improve the customer experience.
Questions? Concepta’s team stays up to date on the latest web development trends. Drop us a line to talk about which ones are best for your next project!
Docker is a cross-platform program for building and deploying containerized software. It enables faster, more efficient development while reducing maintenance complexity in the long run.
As technology, especially enterprise technology, races forward at breakneck speed, it’s both a good and a bad time to be in the software business.
On one hand, there’s plenty of work for skilled developers. On the other, there may be too much work.
The enterprise software market is expected to grow 8.3% this year, and experts suggest it would grow faster if there were enough developers to meet the demand.
Faced with this pressure to produce more and better software, developers’ toolkits are expanding.
The priority now is technology that improves development speed and efficiency: tools like Docker.
What is Docker?
Docker is a cross-platform virtualization program used to create containers: lightweight, portable, self-contained environments where software runs independently of other software installed on the host machine.
Containers are largely isolated from each other and communicate through specific channels.
They contain their own application, tools, libraries and configuration files, but they’re still more lightweight than virtual machines.
Though container technology has been around since 2008, Docker’s release in late 2013 boosted their popularity. The program featured simple tooling that created an easy path for adoption.
Now, it’s a favorite DevOps tool which facilitates the work of developers and system administrators alike.
The Power of Containers
Containerization provides a workaround for some irritating development hurdles. For instance, running several different applications in a single environment causes complexity.
The individual components don’t always work well together, and managing updates gets complicated fast.
Containers solve these problems by separating applications into independent modules.
They feed into the enterprise-oriented microservice architecture style, letting developers work on different parts of an application simultaneously.
This increases the speed and efficiency of development while making applications that are easier to maintain and update.
Taken as a whole, it’s obvious why both software developers and IT teams like containers.
The technology enables the rapid, iterative development and testing cycles which lie at the core of Agile methodologies.
It also takes the burden of dependency management off system administrators, who can then focus on runtime tasks (such as logging, monitoring, lifecycle management and resource utilization).
Why Docker Is the Right Choice
Docker isn’t the only containerization software around, but it is the industry standard.
It offers a robust, easy-to-use API and an ecosystem that make containers more approachable for developers and more enterprise-ready.
The program has an edge on previous solutions when it comes to portability and flexibility.
Using Docker simplifies the process of coordinating and chaining together container actions, and it can be done faster than on virtual machines.
Docker removes dependencies and allows code to interact with the container instead of the server (Docker handles server interactions).
Plus, there’s a large repository of pre-built images available through Docker Hub.
Getting up to speed with Docker doesn’t take long. The documentation is thorough, and there are plenty of tutorials online for self-taught developers.
Docker In Action: The Financial Times
The Financial Times is a London newspaper founded in 1888. Their online portal, FT.com, provides current business and economic news to an international audience.
The media outlet was one of the earlier adopters of Docker back in 2015. Docker containers helped cut their server costs by 80%.
Additionally, they were able to increase their productivity from 12 releases per year to 2,200.
Last year, the median container density per host rose 50% from the previous year.
In fact, the application container market is poised to explode over the next five years. Experts predict that annual revenue will quadruple, rising from $749 million in 2016 to over $3.4 billion by 2021.
Docker specifically still leads the pack despite the niche popularity of emerging tools.
83% of developers use Docker. CoreOS trails well behind it at 12% with Mesos Containerizer at 4%.
Overall, Docker containers are a highly enterprise-oriented solution.
Other tools are emerging to add functionality (like container orchestration platform Kubernetes), so there’s no reason it shouldn’t continue growing in popularity.
Concepta focuses on enterprise-ready tools like Docker that let us target our clients’ specific needs. To explore solutions for your own business goals, set up a complimentary appointment today!
A huge part of writing less code is maintaining a direct, economical mindset.
Optimize code for correctness, simplicity, and brevity.
Don’t depend on assumptions that aren’t contained in the code, and never use three lines where one is just as readable and effective.
Make consistent style choices that are easy to understand. Avoid “run-on coding sentences” by breaking different thoughts and concepts into separate lines of code (LoC).
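A small illustration of that guideline: both functions below behave identically, but the second states the intent in one line.

```javascript
// The long way: several lines of ceremony around a boolean.
function isEligibleVerbose(user) {
  if (user.age >= 18) {
    return true;
  } else {
    return false;
  }
}

// The direct way: the condition already is the answer.
function isEligible(user) {
  return user.age >= 18;
}
```

Neither version is clever; the shorter one simply has less surface area for bugs and less for the next reader to parse.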
Only add code that will be used now, too.
There’s a tendency to try and prepare for the future by guessing what might be needed and adding in the foundations of those tools.
It seems like a smart approach, but in practice it causes problems, mainly:
Guesses about what may be useful in the future may be wrong.
Time spent writing code that won’t be used yet can delay the product’s launch.
Extra work translates into a higher investment before the product has even proven its worth or begun generating ROI.
Abstracting is another practice that should only be done according to present need. Do not abstract for the future.
Along these same lines, don’t leave commented-out code or TODOs behind.
This messy habit encourages sloppy code that can’t be understood on its own.
Notes are more easily accessed when kept as inline documentation, placed where everyone can read them.
Reusable code is a major asset, but make sure it’s fully understood instead of blindly copying and pasting to save time.
Try to choose options that follow good code economy guidelines.
Finally, don’t rush through development with the idea of refactoring later on.
The assumption that refactoring is “inevitable” leads to software that already needs work at launch.
It is possible to create a solid product with low technical debt by writing clean, concise code up front.
Keep A Hand on The Reins
Most importantly, don’t go too far by prioritizing the absolute minimum LoC over more practical concerns.
As discussed earlier, a developer’s job is to solve problems.
When minimalist code is forced to override other needs, it becomes part of the problem instead of a solution.
Writing less code helps our developers create technology-based enterprise solutions with a long shelf life. Set up a free consultation to find out how we can solve your company’s most urgent business problems!