- 01
Low entry barrier
Running code in a serverless environment is relatively simple. To get an idea of how to make it work at an industrial scale, it is often enough to pick up a good guide and follow its recommendations. Learning the basics of deploying to a serverless environment is much easier than acquiring a full set of DevOps skills; moreover, many traditional operations skills, such as server management, configuration, and patching, are not needed at all to build a serverless architecture. A low barrier to entry is one of the defining features of serverless computing.
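To illustrate how little is needed to get started, here is a minimal sketch of an AWS Lambda handler in Python; the function name and the "name" event field are illustrative assumptions, not part of any particular project.

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler: echoes a greeting back to the caller.

    There is no server, container, or web framework to configure; the
    platform invokes this function whenever an event arrives.
    """
    name = event.get("name", "world")  # "name" is an assumed event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```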
Because the barrier is low, developers need less time to become productive with serverless than with other IT architectures, and the learning curve flattens quickly: each new concept is easier to pick up than the last. As a result, more developers can join serverless projects and contribute effectively, and the ability to pick up the required skills quickly is one of the reasons such projects reach the market faster.
At the same time, IT infrastructure becomes more complex every year, filling up not only with physical equipment but also with virtual resources, including virtual machines, containers, and cloud applications, and tying all of this together is not simple. Traditional practices such as infrastructure as code, log management, monitoring, and networking do not lose their significance, and you still have to decide how each of these elements will interact with the serverless environment. A developer coming from a different background will therefore still have to master some serverless-specific features.
- 02
Hostless
The absence of direct interaction with server clusters is the essence of a serverless architecture. Today there is a wide choice of hosts on which a service can be installed and run, whether physical machines, virtual machines, or containers, but in a serverless setup you do not manage any of them. To denote this property, the term "serverless" has a close synonym, "hostless". One of its advantages is significantly lower operating costs for maintaining a server fleet; they can be an order of magnitude less than the amounts required to maintain a traditional IT environment.
The enterprise does not need to worry about updating servers: security patches are installed automatically. A hostless environment changes which characteristics of an application you track, because most of the underlying managed services no longer expose CPU, memory, or disk metrics, so administrators no longer need to understand the low-level details of the infrastructure. Instead, they have to learn how to configure each service. For example, AWS DynamoDB provides basic capabilities for monitoring and configuring read and write capacity.
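As a rough sketch of that kind of configuration, provisioned read and write capacity for a DynamoDB table can be set with boto3; the table name, key schema, and capacity numbers below are illustrative assumptions.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Create a table with explicitly provisioned read/write capacity.
# Table name, key schema, and capacity units are illustrative values.
dynamodb.create_table(
    TableName="orders",
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)

# Later, adjust the provisioned capacity up or down without touching servers.
dynamodb.update_table(
    TableName="orders",
    ProvisionedThroughput={"ReadCapacityUnits": 20, "WriteCapacityUnits": 10},
)
```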
As for AWS Lambda, it imposes a limit on the number of concurrent executions rather than on the number of processor cores. The peculiarities of this serverless service do not end there: for example, when you change the amount of allocated memory, Lambda also changes the CPU share available to the function. If you use a single AWS account for both performance testing and production, and a performance test unexpectedly consumes the pool of concurrent executions, production functions can be throttled. AWS documents the limits for each of its services well, so before making architectural decisions you should check them against your assumptions.
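Both knobs mentioned above can be adjusted through the Lambda API; a hedged sketch, where the function name and the concrete numbers are assumptions:

```python
import boto3

lam = boto3.client("lambda")

# Increasing the memory allocation also increases the CPU share Lambda
# assigns to the function; the function name and size are illustrative.
lam.update_function_configuration(
    FunctionName="order-processor",
    MemorySize=1024,  # MB
)

# Cap concurrent executions so a noisy workload (e.g. a load test)
# cannot exhaust the account-wide concurrency pool.
lam.put_function_concurrency(
    FunctionName="order-processor",
    ReservedConcurrentExecutions=50,
)
```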
Although a hostless environment significantly reduces operating costs, in some cases you still have to deal with the underlying runtime. This is because an application may rely on its own native libraries, and after the platform updates the operating system you need to make sure those libraries still work.
- 03
Inability to save state
Functions as a service (FaaS) are ephemeral: the platform creates and destroys the compute containers automatically, so they cannot be relied on to keep application state in memory. Statelessness is therefore another hallmark of serverless architecture, and it is a big plus for scaling applications horizontally.
The essence of statelessness is that intermediate application state cannot be kept inside a function instance. This lets you scale out to more instances without worrying about the state each one holds, and it noticeably reduces the risk of errors propagating between requests. The caveat is that compute containers may be reused between invocations, so any state accidentally left in memory can leak into the next call; be careful here.
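A common pattern is to keep all state outside the function, for example in DynamoDB. A minimal sketch, assuming a hypothetical "visit-counters" table and a "user_id" event field:

```python
import boto3

table = boto3.resource("dynamodb").Table("visit-counters")  # assumed table name

def handler(event, context):
    """Stateless handler: nothing is kept in the container between calls.

    The visit counter lives in DynamoDB, so any instance of the function
    can serve any request and instances can be destroyed at will.
    """
    user_id = event["user_id"]  # assumed event field
    response = table.update_item(
        Key={"user_id": user_id},
        UpdateExpression="ADD visits :one",
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )
    return {"visits": int(response["Attributes"]["visits"])}
```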
- 04
Elasticity
A hostless system is elastic. Most serverless services are designed for highly variable workloads: they can scale from zero to the maximum allowed capacity and back again, and this scaling is managed almost entirely automatically. Elasticity is perhaps the main trump card of serverless architecture, all the more valuable because the vast majority of scaling operations require no manual control.
For example, administrators do not need to pre-allocate computing resources, and, no less important, the customer pays only for the resources actually consumed, which can significantly reduce operating costs, provided the baseline resource consumption stays low.
The elasticity of a serverless architecture significantly reduces the impact of denial-of-service attacks, but it makes the application more vulnerable to "denial of wallet" attacks: an attacker floods the application with requests, forcing it to scale and consume paid cloud resources until the account's budget or limits are exhausted.
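One hedged mitigation sketch is a CloudWatch billing alarm that notifies an SNS topic when estimated charges cross a threshold; the alarm name, threshold, and topic ARN below are illustrative, and billing metrics must already be enabled for the account.

```python
import boto3

# Billing metrics are published in us-east-1 regardless of where the app runs.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alert when the month-to-date estimated bill crosses a chosen threshold,
# which helps catch a "denial of wallet" scenario early.
cloudwatch.put_metric_alarm(
    AlarmName="estimated-charges-guard",            # illustrative name
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                                    # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=100.0,                                 # USD, illustrative
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # assumed topic
)
```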
- 05
Distribution
As already mentioned, a serverless architecture does not keep application state itself: anything that needs to be stored is delegated to BaaS offerings, so the deployment units, the functions, become smaller than usual. As a result, a serverless architecture is distributed by default and integrates with many components over the network. The ready-made infrastructure includes related services such as authentication, databases, distributed queues, and so on.
In addition to elasticity, distributed systems bring other advantages, including high availability and geographically distributed data: if a cloud provider loses one availability zone, the architecture can keep running in the remaining zones. As always, the choice of architecture is a trade-off.
Typically, each serverless service in the cloud has its own consistency model. For example, AWS S3 provides read-after-write consistency for PUTs of new objects, so a newly written object is immediately visible to all clients of the S3 bucket. When choosing a BaaS, you need to check how it handles data consistency.
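A small sketch of the behaviour described above; the bucket name, key, and payload are assumptions:

```python
import boto3

s3 = boto3.client("s3")

# PUT a new object, then read it back; read-after-write consistency for
# new objects means the GET sees the data immediately.
s3.put_object(
    Bucket="example-invoices",
    Key="2019/11/invoice-42.json",
    Body=b'{"total": 100}',
)

obj = s3.get_object(Bucket="example-invoices", Key="2019/11/invoice-42.json")
print(obj["Body"].read())  # b'{"total": 100}'
```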
- 06
Event-driven
Serverless systems are inherently event-driven, which changes the approach to development, operations, and architecture. Many BaaS offerings on serverless platforms can emit events to customer code, which partially compensates for the fact that users have no control over the code base of those services. Development teams tend to like event-driven architecture, but that does not mean it has to be used everywhere; as with elasticity, you can opt out of it where it does not fit.
An event-driven design brings many benefits. First, it leads to loose coupling between the components of the architecture: in a serverless system it is easy to add a new function B that reads changes from the blob store without modifying function A at all. Second, it improves cohesion: the operation performed by function B can be retried after a failure without re-running the costly function A. Cloud providers offer integration services between FaaS and BaaS, so functions are triggered directly by event notifications.
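As a hedged sketch of "function B": a Lambda handler subscribed to S3 event notifications reads each changed object without function A knowing it exists. The bucket wiring is assumed to be configured separately; only the standard S3 notification event shape is relied on here.

```python
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Function B: invoked by S3 event notifications for new or changed objects.

    Function A, which wrote the objects, does not know this function exists;
    if this handler fails, the event can be retried without re-running A.
    """
    for record in event["Records"]:  # standard S3 notification structure
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        print(f"processing {key} from {bucket}: {len(body)} bytes")
```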
The disadvantage of an event-driven architecture is that it does not give a holistic view of the system as a whole, which complicates troubleshooting. Distributed tracing, a relatively new area for hostless architectures, can alleviate this problem. To try it out, you can use AWS X-Ray, a ready-made service available out of the box: it traces requests and their routes through the application and shows a map of the application's internal components. X-Ray can be used to analyze applications both in development and in production, from simple three-tier setups to complex applications consisting of thousands of microservices, helping developers find bugs and evaluate performance.
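A minimal sketch of enabling X-Ray inside a Python Lambda function with the aws_xray_sdk library; the subsegment and table names are illustrative, and active tracing also has to be switched on for the function itself.

```python
import boto3
from aws_xray_sdk.core import patch_all, xray_recorder

patch_all()  # instrument boto3 calls so they show up in the trace

dynamodb = boto3.resource("dynamodb")

def handler(event, context):
    # Wrap application logic in a custom subsegment; AWS SDK calls made
    # inside it are recorded and appear on the X-Ray service map.
    with xray_recorder.in_subsegment("load-order"):   # illustrative name
        table = dynamodb.Table("orders")              # assumed table name
        item = table.get_item(Key={"order_id": event["order_id"]})
    return item.get("Item")
```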
Conclusions
Analysts, users, and skeptics argue about how much serverless technology matters. Is it evolutionary or revolutionary? Will it run most applications or only a part of them? The market is still in its infancy, and there are no answers yet, but the attention around the technology and its potential benefits cannot be ignored. The most important trait of serverless platforms is that you pay only for the processor time actually consumed, billed with roughly 100 ms granularity. At the same time, you do not need to wait for servers to start or tune load balancing: tasks simply run automatically as many times as needed. Experts say such platforms let developers test ideas and bring products to market faster than the alternatives.
CTO at Geniusee
2019-11-08 / Modified on 2019-11-08