Benchmarks & Performance

Query Benchmarking

For our benchmarks, we are tracking a number of query types which measure performance via the various paths through the library: multi-controller queries, queries with variables, etc. These are executed against an in-memory data store without an attached database. The goal of our benchmarks is to measure the library's ability to process a query in isolation — to perform the query and execute user code — not how long an action method takes to query a database for data and send a response over the wire.

As you can see, all query types execute in sub-millisecond timeframes. Obviously, real-world workloads are going to be slower than these theoretical values, but the faster we can make the benchmarks, the faster all other scenarios will be. If there is a specific query type or scenario with which you are seeing a significant performance degradation, please open an issue on github and let us know!

Load Testing

These performance tests are not intended to be used as data points when determining scaling requirements for your own production workloads. Your use cases will be different and affected by factors not present in our lab environment (e.g. database connections, service orchestration, business logic etc.). Be sure to execute your own load tests using queries indicative of your expected user base and act accordingly.

We periodically execute tests against the library to measure throughput and stability for a single server instance under load. Our goal is to measure the theoretical limits in a multi-user, "production like" scenario.

Test Configuration and Specs

GraphQL ASP.NET Server: src/ancillary-projects/benchmarking/graphql-aspnet-load-server/

These tests are executed in a controlled setting with the following conditions:

- Garbage collection executing in server mode.
- Simple queries to fetch or mutate a single, in-memory object.

Memory profiling workload:

- JMeter script: graphql-memory-profiling.jmx.
- 15 concurrent users executing a graphql query to fetch a single object.
- Each user executes 10,000 requests at most 30ms apart.

Load generation workload:

- JMeter script: graphql-load-generator.jmx.
- 300 concurrent users executing a graphql query to fetch a single object.
- 300 concurrent users executing a graphql mutation (and raising a subscription event).
- A custom console app that registers subscribers to receive subscription events generated via the Load Generating Workstation mutations.
- 2 registered subscriptions per client (1,000 total client subscriptions).
- 300 concurrent users executing a REST query that fetches a single object.
- 300 concurrent users executing a REST query that mutates and returns a single object.
- Each user executes 10,000 requests at most 15ms apart.
- The REST workload acts as a control to compare performance of the baseline web api against the overhead of the graphql library.

Max load workload:

- JMeter script: graphql-max-load-generator.jmx.
- A qualitative test executed with the server instance running in release mode, harnessed via dotMemory.
- Starts 20 new users each second until failure.
- Each user executes a query every 0-15ms until failure.

Memory Profiling Results

The aim of this test is to ensure acceptable memory pressure and GC cycles on the server instance in a controlled usage scenario, and to ensure no memory leaks occur. Given the artificial environment restrictions this imposes, it is difficult to pin down exact KPIs, but in general this test is used to monitor the following metrics:

- Gen0 allocations: expect to see steady Gen0 allocation over time, with no extreme spikes.
- Object survival: expect to see little to no objects surviving to Gen1 and Gen2 heap collections per GC cycle.

Execution is consistent. When the test completes, the server returns to the steady state of memory usage it held prior to the test beginning. While no objects make it to Gen1 or Gen2, a GC cycle occurs about every 20 seconds.

A test with the server executing in release mode, WITHOUT the subscription server attached, and with monitoring via passive dotnet-counters, shows similar results to the v0.13.1-beta test in terms of generational memory allocations. There is a notable decrease in memory pressure; time between GC cycles has improved to once every 33 seconds, a 65% increase in duration.

Throughput and CPU Results

This test measures the throughput of queries and mutations through the runtime, as well as the load those queries place on the server CPU. Using GraphQL (as opposed to REST) will generate some additional overhead to parse and execute the query on top of the REST request which invokes it. As a result, the metrics of this test are expressed in terms of % increases over a comparable REST workload served by a baseline ASP.NET web api controller:

- GC time: using the metrics obtained via dotnet-counters, expect the GC % time to be within 1% of the REST control load.
- CPU utilization: using Process Explorer to measure utilization, expect no more than a 5% increase when compared to the REST control load.
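The load tests themselves are driven by JMeter, but the shape of the workload (N concurrent users, each issuing a bounded number of requests with a short random pause between them) is easy to see in code. The sketch below is a minimal, hypothetical Python illustration of that profile — it is not part of the test harness, and the stubbed request function stands in for a real HTTP POST of a graphql query against the load server.

```python
# Minimal sketch of the JMeter-style load profile described in this article.
# The real tests use the .jmx scripts listed above; this is illustrative only.
import random
import threading
import time
from collections import Counter

def run_user(send_request, max_requests=10_000, max_gap_ms=30):
    """Simulate one virtual user: issue requests with a random 0..max_gap_ms pause."""
    for _ in range(max_requests):
        send_request()
        time.sleep(random.uniform(0, max_gap_ms) / 1000.0)

def run_load(send_request, users=15, max_requests=10_000, max_gap_ms=30):
    """Simulate `users` concurrent virtual users (cf. graphql-memory-profiling.jmx)."""
    threads = [
        threading.Thread(target=run_user, args=(send_request, max_requests, max_gap_ms))
        for _ in range(users)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

if __name__ == "__main__":
    # Stub request so the sketch runs without a server; swap in a real HTTP
    # call (e.g. POSTing a graphql query) to exercise an actual endpoint.
    counts = Counter()
    lock = threading.Lock()
    def fake_request():
        with lock:
            counts["sent"] += 1
    run_load(fake_request, users=5, max_requests=20, max_gap_ms=1)
    print(counts["sent"])  # 5 users * 20 requests = 100
```

Scaled down here to 5 users and 20 requests so it finishes instantly; the configurations above use 15-300 users and 10,000 requests per user.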
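The throughput KPIs boil down to two simple comparisons against the REST control run. The helper below illustrates that arithmetic; the sample readings are hypothetical, not measured results, and "within 1%" is interpreted here as percentage points of GC time (an assumption, since the article does not spell out points vs. relative percent).

```python
# Illustrative pass/fail check for the throughput KPIs; sample numbers are
# hypothetical, and the 1-point GC interpretation is an assumption.

def within_gc_threshold(graphql_gc_time_pct, rest_gc_time_pct, max_delta=1.0):
    """GC % time must be within `max_delta` percentage points of the REST control."""
    return abs(graphql_gc_time_pct - rest_gc_time_pct) <= max_delta

def within_cpu_threshold(graphql_cpu_pct, rest_cpu_pct, max_increase_pct=5.0):
    """CPU utilization may exceed the REST control by at most `max_increase_pct` percent."""
    increase = (graphql_cpu_pct - rest_cpu_pct) / rest_cpu_pct * 100.0
    return increase <= max_increase_pct

# Hypothetical sample readings (dotnet-counters GC time, Process Explorer CPU):
print(within_gc_threshold(1.8, 1.2))     # True: 0.6 point delta <= 1.0
print(within_cpu_threshold(42.0, 40.0))  # True: 5.0% increase <= 5.0%
```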