Distributed computing, in simple words, can be defined as a group of computers working together behind the scenes while appearing as a single system to the end-user. The individual computers in such a group operate concurrently and allow the whole system to keep working even if one or some of them fail.

In a distributed system, multiple computers can host different software components, but all of them work to accomplish a common goal. The computers in a distributed system can be physically located in the same place and connected via a local network, or spread across locations and connected by a wide area network.

Distributed systems can also consist of different hardware configurations, or a mix of them, such as personal computers, workstations and mainframes.

Why Distributed Computing?

Deploying, maintaining and troubleshooting distributed systems can be a complex and challenging task. The main reason behind their growing acceptance is perhaps necessity: they allow scaling horizontally. For example, traditional databases that run on a single machine require users to upgrade the hardware to handle increasing traffic (vertical scaling).

The biggest issue with vertical scaling is that even the best and most expensive hardware eventually proves insufficient. Horizontal scaling, on the other hand, allows managing increasing traffic and performance demands by adding more computers instead of constantly upgrading a single system.

The initial costs of horizontal scaling might be higher, but past a certain threshold it becomes far more efficient, because the costs of vertical scaling start to rise sharply at that point. Vertical scaling might not be suitable for tech companies dealing with big data and very high workloads.

How Does Distributed Computing Work?

Let’s take the example of a web application that is experiencing twice as much workload as it did just a month ago. Since the database now has to handle twice as many requests as before, performance starts to decline, and end-users notice. One way of dealing with such an increase in workload is to upgrade the hardware: add more memory, more bandwidth and so on. But what if the workload keeps increasing? At a certain point it becomes technically and financially impractical to keep upgrading the system.

That’s where distributed computing can help users meet their growing demands. In a typical web app, information is read far more frequently than it is inserted or modified. Distributed computing allows arranging computers in different configurations, including master-slave replication, which helps increase read performance.

In the above example, we can create new database servers (slaves) that sync with the primary server (master) and are only meant to ‘read’ information. Whenever a user tries to access or read information, one of the two new servers handles the request, while the master server handles inserts and modifications. The master server keeps the slave servers updated about new changes and entries (which isn’t instantaneous in most cases).
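To make the routing concrete, here is a minimal Python sketch of the idea (the ReplicatedDatabase and FakeNode classes are hypothetical stand-ins, not a real database driver): writes go to the master, while reads are spread round-robin across the replicas.

```python
import itertools

class ReplicatedDatabase:
    """Route writes to the master and spread reads across read-only replicas."""

    def __init__(self, master, replicas):
        self.master = master                       # handles INSERT/UPDATE/DELETE
        self.replicas = itertools.cycle(replicas)  # round-robin over read-only slaves

    def write(self, statement):
        # All modifications go to the single master node.
        return self.master.execute(statement)

    def read(self, query):
        # Reads are served by the next replica in the rotation.
        return next(self.replicas).execute(query)


class FakeNode:
    """Stand-in for a real database connection, used only for demonstration."""
    def __init__(self, name):
        self.name = name
    def execute(self, sql):
        return f"{self.name} executed: {sql}"


db = ReplicatedDatabase(FakeNode("master"),
                        [FakeNode("replica-1"), FakeNode("replica-2")])
print(db.write("INSERT INTO users VALUES ('alice')"))   # master
print(db.read("SELECT * FROM users"))                   # replica-1
print(db.read("SELECT * FROM users"))                   # replica-2
```

In a real deployment the stand-ins would be connections to actual database servers, and because replication lags slightly behind the master, a read issued immediately after a write may briefly return stale data.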

In a nutshell, distributed computing allows different machines (also known as sites or nodes) to communicate and coordinate to accomplish common goals. A distributed system is designed to tolerate the failure of individual computers, so that the remaining computers keep working and providing services to users. Some examples of distributed systems include:

  • Telecommunication networks
  • The internet
  • Peer-to-peer networks
  • Airline reservation systems
  • Distributed databases
  • Scientific computing
  • Distributed rendering

Distributed vs. Parallel Computing

The term distributed computing is often used interchangeably with parallel computing, as the two have a lot of overlap. While there is no clear distinction between them, parallel computing is generally considered a more tightly coupled form of distributed computing. For example, in distributed computing each processor usually has its own private memory, while processors in parallel computing typically have access to a shared memory.
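As a rough illustration of the difference, the sketch below uses Python’s multiprocessing module on a single machine as a stand-in for real hardware: the first set of workers updates one shared memory location directly (parallel-computing style), while the second worker keeps its memory private and communicates only by passing messages (distributed-computing style).

```python
from multiprocessing import Process, Queue, Value

# Parallel-style: workers share one memory location and update it directly.
def shared_worker(counter):
    with counter.get_lock():
        counter.value += 1

# Distributed-style: the worker keeps private state and communicates by messages.
def message_worker(inbox, outbox):
    n = inbox.get()       # receive work over a channel (stand-in for the network)
    outbox.put(n + 1)     # send the result back as a message

if __name__ == "__main__":
    counter = Value("i", 0)
    procs = [Process(target=shared_worker, args=(counter,)) for _ in range(4)]
    for p in procs: p.start()
    for p in procs: p.join()
    print("shared memory result:", counter.value)        # 4

    inbox, outbox = Queue(), Queue()
    worker = Process(target=message_worker, args=(inbox, outbox))
    worker.start()
    inbox.put(41)
    print("message passing result:", outbox.get())       # 42
    worker.join()
```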

Distributed Computing Environment

Developed by the OSF (Open Software Foundation), DCE is a software technology for deploying and managing data exchange and computing in a distributed system. Typically used in large network computing systems, DCE provides underlying services such as remote procedure calls, security and directory services; some of its major users include Microsoft (DCOM, ODBC) and Encina.

Advantages and Benefits of Distributed Computing

Scalability and Modular Growth

Distributed systems are inherently scalable because they work across multiple machines and scale horizontally. This means a user can add another machine to handle increasing workload instead of having to upgrade a single system over and over again. There is virtually no cap on how far a user can scale. A system under high demand can run each machine at full capacity and take machines offline when workload is low.
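A minimal sketch of how new capacity absorbs load, assuming hypothetical node names and simple modulo sharding (real systems typically use consistent hashing so that adding a node re-maps fewer keys):

```python
import hashlib

def pick_node(key, nodes):
    """Map a key to one of the available nodes using a stable hash (modulo sharding)."""
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return nodes[digest % len(nodes)]

nodes = ["node-1", "node-2", "node-3"]
print(pick_node("user:1042", nodes))

# Scaling out is just growing the list; new requests now spread over four machines.
nodes.append("node-4")
print(pick_node("user:1042", nodes))
```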

Fault Tolerance and Redundancy

Distributed systems are also inherently more fault tolerant than single machines. If a business runs a cluster of eight machines across two data centers, its apps keep working even if one data center goes offline. This translates into more reliability, since with a single machine everything goes down with it. Distributed systems keep running even if one or more nodes/sites stop working (though the load on the remaining nodes goes up).
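The failover behaviour can be sketched in a few lines; call_node and the node names below are hypothetical, with one node marked offline to stand in for a real outage:

```python
DOWN = {"dc1-node-1"}             # pretend this node is offline

class NodeDown(Exception):
    pass

def call_node(node, request):
    """Hypothetical remote call to one replica of the service."""
    if node in DOWN:
        raise NodeDown(node)
    return f"{node} served {request}"

def call_with_failover(nodes, request):
    """Try each replica in turn; the request only fails if every node is down."""
    for node in nodes:
        try:
            return call_node(node, request)
        except NodeDown:
            continue              # this node is unreachable, move on to the next one
    raise RuntimeError("all replicas are down")

print(call_with_failover(["dc1-node-1", "dc1-node-2", "dc2-node-1"], "GET /orders/7"))
# -> dc1-node-2 served GET /orders/7
```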

Low Latency

Since nodes can be placed in multiple geographical locations, distributed systems allow traffic to hit the node closest to the user, resulting in lower latency and better performance. However, the software also has to be designed to run on multiple nodes at the same time, which can mean higher cost and more complexity.
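In practice this often amounts to something as simple as sending each request to the region with the lowest measured round-trip time; the region names and latencies below are made up for illustration:

```python
# Hypothetical round-trip times (in ms) from one user to each regional node.
latencies = {"us-east": 120, "eu-west": 25, "ap-south": 210}

nearest = min(latencies, key=latencies.get)
print(f"route traffic to {nearest} ({latencies[nearest]} ms)")   # -> eu-west (25 ms)
```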

Cost Effectiveness

Distributed systems are much more cost effective than very large centralized systems. Their initial cost is higher than that of standalone systems, but beyond a certain point they benefit from economies of scale. A distributed system made up of many smaller computers can be more cost effective than a single mainframe machine.

Efficiency

Distributed systems allow breaking complex problems or large datasets into smaller pieces and having multiple computers work on them in parallel, which can significantly cut down the time needed to solve or compute them.
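The split-and-combine idea can be sketched on a single machine with a process pool; distributed frameworks apply the same shape of computation across many machines. The word-counting task and the text chunks below are made up for illustration.

```python
from multiprocessing import Pool

def count_words(chunk):
    """Work on one piece of the problem independently of the others."""
    return len(chunk.split())

if __name__ == "__main__":
    # Pretend each string is a large chunk of a document stored on a different node.
    chunks = ["first block of text", "second block of text", "third block of text"]
    with Pool(processes=3) as pool:
        partial_counts = pool.map(count_words, chunks)   # pieces processed in parallel
    print(sum(partial_counts))                           # combine the partial results -> 12
```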

Disadvantages of Distributed Computing

Complexity

Distributed computing systems are more difficult to deploy, maintain and troubleshoot/debug than their centralized counterparts. The increased complexity is not limited to the hardware: distributed systems also need software capable of handling security and communication between nodes.

Higher Initial Cost

The deployment cost of a distributed system is higher than that of a single system. The processing overhead of additional computation and exchange of information between nodes also adds to the overall cost.

Security Concerns

Data access can be controlled fairly easily in a centralized computing system, but managing the security of a distributed system is no easy job. Not only does the network itself have to be secured, users also need to control access to data replicated across multiple locations.

Conclusion

Distributed computing helps improve performance of large-scale projects by combining the power of multiple machines. It’s much more scalable and allows users to add computers according to growing workload demands. Although distributed computing has its own disadvantages, it offers unmatched scalability, better overall performance and more reliability, which makes it a better solution for businesses dealing with high workloads and big data.