Hyperledger Fabric performance is a question that comes up frequently as users try to compare it to transactional databases or other blockchain platforms in terms of maximum TPS (transactions per second). Performance in a Hyperledger Fabric network is complex to characterize because an appropriately deployed network has many participating organizations, each with its own hardware and networking infrastructure, along with different solution characteristics such as the number of channels, chaincode implementations, and policies.
Hyperledger Fabric 2.x has performance improvements over Hyperledger Fabric 1.4. Fabric 1.4 is now out of LTS and should not be used in production environments. Fabric 2.5 is the latest LTS version and includes the new peer Gateway service. When used with the new gateway SDKs, applications will demonstrate improved performance relative to applications based on the legacy SDKs.
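For readers unfamiliar with the Gateway programming model, the sketch below shows the general shape of a client application using the Fabric Gateway SDK for Go. The endpoint, MSP ID, file paths, channel name, and chaincode name are placeholders, and error handling is abbreviated; treat it as a minimal illustration rather than a production client.

```go
package main

import (
	"crypto/x509"
	"fmt"
	"os"

	"github.com/hyperledger/fabric-gateway/pkg/client"
	"github.com/hyperledger/fabric-gateway/pkg/identity"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

func main() {
	// Load the peer's TLS CA certificate and create a gRPC connection to the
	// gateway peer (endpoint and file paths are placeholders).
	tlsCertPEM, _ := os.ReadFile("tls-ca-cert.pem")
	certPool := x509.NewCertPool()
	certPool.AppendCertsFromPEM(tlsCertPEM)
	transport := credentials.NewClientTLSFromCert(certPool, "peer0.org1.example.com")

	conn, err := grpc.Dial("peer0.org1.example.com:7051", grpc.WithTransportCredentials(transport))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Build the client identity and signer from the user's enrollment
	// certificate and private key (errors omitted for brevity).
	certPEM, _ := os.ReadFile("signcert.pem")
	cert, _ := identity.CertificateFromPEM(certPEM)
	id, _ := identity.NewX509Identity("Org1MSP", cert)

	keyPEM, _ := os.ReadFile("key.pem")
	key, _ := identity.PrivateKeyFromPEM(keyPEM)
	sign, _ := identity.NewPrivateKeySign(key)

	// Connect to the peer's Gateway service; the peer collects endorsements
	// and submits to ordering on the client's behalf.
	gw, err := client.Connect(id, client.WithSign(sign), client.WithClientConnection(conn))
	if err != nil {
		panic(err)
	}
	defer gw.Close()

	contract := gw.GetNetwork("mychannel").GetContract("fixed-asset")
	result, err := contract.SubmitTransaction("createAsset", "asset1", "100")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(result))
}
```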
Hyperledger Fabric documentation now has a section on Performance Considerations that can be found in the 2.5 and latest versions of the Hyperledger Fabric documentation. This content provides useful guidance for achieving a performant Fabric network. It doesn't include information about certain scalability concerns such as large numbers of private data collections (PDCs), channels, organizations, etc.
In conjunction with the new Performance Considerations documentation, the community has leveraged Hyperledger Caliper for some initial performance benchmarks for Hyperledger Fabric 2.5. This post shares the benchmark results for a sampling of Hyperledger Fabric 2.5 builds to give an example of transactions per second (TPS) achieved, based on a very specific chaincode running on dedicated hardware.
The Hyperledger Fabric topology used was two peer Organizations (PeerOrg0, PeerOrg1) with a single peer node in each and one Ordering Service Organization (OrdererOrg0) with a single orderer service node configured for Raft. TLS was enabled for each node.
Each node ran on identical hardware.
The machines were all on the same switch.
Hyperledger Fabric was deployed natively to three physical machines (i.e., the native binaries were installed and executed; no container technology such as Docker or Kubernetes was used).
Hyperledger Caliper 0.5.0 was used as the load generator and for the report output. Caliper was bound to Fabric 2.4, which means it used the peer Gateway Service to invoke and evaluate transactions. All transactions were generated from the same organization to its gateway peer.
The workload itself was the fixed-asset benchmark from the Hyperledger Caliper-Benchmarks repository.
Caliper used four bare metal machines to host remote Caliper workers as well as a single Caliper manager to generate the load on the Hyperledger Fabric network. In order to generate enough workload on the Fabric network, we had to use multiple Caliper workers; the worker count corresponds to the number of concurrent clients connecting to the network. In the results section, the number of Caliper workers is provided.
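For reference, a Caliper run of this style is described by a benchmark configuration file that declares the number of workers, the rate controller, and the workload module for each round. The sketch below is illustrative only; the worker count, TPS target, module path, and arguments are placeholders, not the settings used for the results in this post.

```yaml
# Illustrative Caliper 0.5 benchmark configuration (values are examples only).
test:
  name: fixed-asset
  description: Create fixed-size assets via the peer Gateway service
  workers:
    number: 100              # number of concurrent Caliper worker clients
  rounds:
    - label: create-asset-100-bytes
      txDuration: 300        # run the round for 300 seconds
      rateControl:
        type: fixed-rate
        opts:
          tps: 2000          # target send rate across all workers
      workload:
        module: workload/create-asset.js   # placeholder path; see caliper-benchmarks
        arguments:
          byteSize: 100
          chaincodeID: fixed-asset
```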
These results were generated against the latest builds of Hyperledger Fabric 2.5 and utilized the default node and channel config values. Specifically, the following block cutting parameters were used:
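Because the defaults were used, these parameters come from the Orderer section of the channel configuration (configtx.yaml at channel creation, or a channel config update afterwards). The sketch below shows that section with the values from the Fabric sample configtx.yaml at the time of writing; confirm the defaults against the exact release you deploy.

```yaml
# Orderer section of configtx.yaml controlling block cutting
# (values shown are the Fabric sample defaults, for illustration only).
Orderer: &OrdererDefaults
  OrdererType: etcdraft
  BatchTimeout: 2s           # max time to wait before cutting a block
  BatchSize:
    MaxMessageCount: 500     # max transactions per block
    AbsoluteMaxBytes: 10 MB  # hard cap on serialized block size
    PreferredMaxBytes: 2 MB  # preferred block size
```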
In order to push enough workload through without hitting concurrency limits, the peer's gateway concurrency limit was raised to 20,000.
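This limit is configured on each peer in core.yaml, under peer.limits.concurrency.gatewayService in Fabric 2.4+. A minimal sketch of the relevant section:

```yaml
# core.yaml (peer) — raise the Gateway service concurrency limit so the
# load generator is not throttled by the much lower default value.
peer:
  limits:
    concurrency:
      gatewayService: 20000
```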
In summary, the following benchmarks are presented here:
The following should also be noted:
A blind write is a transaction that performs a single write to a key, regardless of whether that key already exists and contains data. This is a "create asset" type of scenario.
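To illustrate what a blind write looks like at the chaincode level, here is a minimal sketch using the Go contract API. The function and parameter names are illustrative and are not taken from the actual fixed-asset chaincode.

```go
package main

import (
	"github.com/hyperledger/fabric-contract-api-go/contractapi"
)

// SmartContract provides the asset-creation transaction.
type SmartContract struct {
	contractapi.Contract
}

// CreateAsset performs a blind write: it writes the value to the key without
// reading it first, so the transaction's read set is empty and it cannot be
// invalidated by concurrent updates to the same key.
func (s *SmartContract) CreateAsset(ctx contractapi.TransactionContextInterface, key string, value string) error {
	return ctx.GetStub().PutState(key, []byte(value))
}

func main() {
	chaincode, err := contractapi.NewChaincode(&SmartContract{})
	if err != nil {
		panic(err)
	}
	if err := chaincode.Start(); err != nil {
		panic(err)
	}
}
```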
Caliper test configuration:
The TPS here is the peak. Trying to push beyond this resulted in unexpected failures and a drop in overall throughput.
Caliper test configuration:
Here we see that we can achieve roughly the same throughput (i.e., the peak), but latency increases.
This is a test where each transaction randomly picks an existing key that already contains data, reads it, and then modifies it. The world state was loaded with 1 million assets for this test to reduce the chance of two concurrent transactions using the same key and producing MVCC_READ_CONFLICT validation errors. In this example, the TPS rate was low enough that, fortunately, no MVCC_READ_CONFLICT validation errors were observed.
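At the chaincode level, a read-modify-write transaction reads the key before writing it, which records the key's version in the transaction's read set; if another transaction in the same block updates that key first, this one fails validation with MVCC_READ_CONFLICT. A minimal sketch follows, again with illustrative names rather than the actual fixed-asset chaincode.

```go
package main

import (
	"fmt"

	"github.com/hyperledger/fabric-contract-api-go/contractapi"
)

// SmartContract provides the read-modify-write transaction.
type SmartContract struct {
	contractapi.Contract
}

// UpdateAsset reads an existing key and then writes a new value to it. The
// GetState call puts the key's current version into the read set, so a
// concurrent update to the same key in the same block causes this
// transaction to be invalidated with MVCC_READ_CONFLICT.
func (s *SmartContract) UpdateAsset(ctx contractapi.TransactionContextInterface, key string, newValue string) error {
	existing, err := ctx.GetStub().GetState(key)
	if err != nil {
		return fmt.Errorf("failed to read key %s: %w", key, err)
	}
	if existing == nil {
		return fmt.Errorf("asset %s does not exist", key)
	}
	return ctx.GetStub().PutState(key, []byte(newValue))
}

func main() {
	chaincode, err := contractapi.NewChaincode(&SmartContract{})
	if err != nil {
		panic(err)
	}
	if err := chaincode.Start(); err != nil {
		panic(err)
	}
}
```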
Caliper test configuration:
Note that the above results were generated with an expectation of no failures. We see that the Fabric network was not reaching capacity in this test, as latency remained very low.
The above test was repeated using a 1,000-byte asset size.
Caliper test configuration:
Note that the above results were generated with an expectation of no failures. We see that the Fabric network was not reaching capacity in this test, as latency remained very low.
I would like to thank Shivdeep Singh for running the Hyperledger Caliper benchmarks to get the results presented in this blog and Dave Enyeart for reviewing the content.