Sinobu

Overview

This section presents benchmark results comparing the performance of Sinobu with other popular libraries for various common tasks. The goal is to provide objective data on Sinobu's efficiency in terms of execution speed, memory allocation, and garbage collection impact.

🔬 Methodology

Benchmarks were conducted using a custom benchmarking framework inspired by JMH (Java Microbenchmark Harness). This includes dedicated warm-up phases to allow for JIT compilation and stabilization, and techniques such as blackholes to prevent dead code elimination and ensure that the intended operations are actually measured. Each major benchmark suite (e.g., JSON, Logging) is typically run in a separate JVM process to ensure isolation and prevent interference between tests. Benchmarks were run under specific hardware and software configurations. The results shown in the graphs typically represent throughput (operations per second), execution time, or memory allocation; interpretation depends on the metric shown in each graph (e.g., higher throughput is better, while lower time and allocation are better).
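
The sketch below illustrates the warm-up and blackhole ideas with a minimal hand-rolled loop. It is a simplified illustration of the technique, not the actual harness used to produce these results.

    // Illustrative only: a minimal hand-rolled loop showing the warm-up and
    // blackhole ideas; this is not the actual framework used for these results.
    public class MiniBench {

        // A volatile sink acts as a blackhole so the JIT cannot eliminate the work.
        private static volatile Object sink;

        public static void main(String[] args) {
            Runnable operation = () -> sink = Integer.toHexString((int) System.nanoTime());

            // Warm-up phase: give the JIT a chance to compile and stabilize.
            for (int i = 0; i < 100_000; i++) operation.run();

            // Measured phase.
            long start = System.nanoTime();
            int count = 1_000_000;
            for (int i = 0; i < count; i++) operation.run();
            long elapsed = System.nanoTime() - start;

            System.out.printf("%.1f ops/ms%n", count / (elapsed / 1_000_000.0));
        }
    }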

📈 Comparisons and Metrics

Comparisons are often made against well-known libraries relevant to each domain (e.g., Jackson/Gson for JSON, Logback/Log4j2 for Logging). The latest stable versions of competitor libraries available at the time of measurement were typically used.

Operations specific to each domain (e.g., JSON parsing, logging throughput, template rendering) are performed to measure key performance indicators such as:

  • Execution speed (throughput or time per operation)
  • Garbage collection load (allocation rate)
  • Memory consumption (footprint and retained size, though these appear less frequently in the graphs)

Lower values for time and allocation generally indicate better performance, while higher values for throughput are better.
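
As an illustration of the allocation metric, bytes allocated per operation can be estimated on HotSpot JVMs through com.sun.management.ThreadMXBean, as sketched below. This shows the general technique only; it is not necessarily the mechanism used by the benchmark harness here.

    import java.lang.management.ManagementFactory;

    public class AllocationProbe {
        public static void main(String[] args) {
            // HotSpot exposes per-thread allocation counters via this MXBean subtype.
            com.sun.management.ThreadMXBean threads =
                    (com.sun.management.ThreadMXBean) ManagementFactory.getThreadMXBean();
            long threadId = Thread.currentThread().getId();

            long before = threads.getThreadAllocatedBytes(threadId);
            StringBuilder builder = new StringBuilder();
            for (int i = 0; i < 10_000; i++) builder.append(i);   // the operation under test
            long after = threads.getThreadAllocatedBytes(threadId);

            System.out.println("allocated bytes: " + (after - before));
        }
    }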

⚠️ Disclaimer

Benchmark results can vary depending on the execution environment (JVM version, OS, hardware). These results should be considered indicative rather than absolute measures of performance in all scenarios.

Logging

Compares the performance of Sinobu's logging framework against other logging libraries. Focuses on throughput (operations per second) and garbage generation under different scenarios, highlighting Sinobu's garbage-less design advantage.
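
A throughput scenario generally takes the shape sketched below, shown with the standard SLF4J Logger API on the Logback/Log4j2 side; the corresponding Sinobu call is left as a comment because its exact logging entry point is not reproduced here.

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class LoggingThroughput {

        static final Logger LOG = LoggerFactory.getLogger(LoggingThroughput.class);

        public static void main(String[] args) {
            int count = 1_000_000;

            long start = System.nanoTime();
            for (int i = 0; i < count; i++) {
                // Logback/Log4j2 side via SLF4J.
                LOG.info("user {} logged in", i);
                // The Sinobu side would call its own logging entry point here
                // (assumed API, check the actual documentation).
            }
            long elapsed = System.nanoTime() - start;

            System.out.printf("%.0f ops/s%n", count / (elapsed / 1_000_000_000.0));
        }
    }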

JSON

Compares Sinobu's JSON processing capabilities (parsing, traversing, mapping) against other well-known Java JSON libraries like FastJSON, Jackson, and Gson. Results highlight performance across various operations and document sizes.
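
For reference, parsing the same document with the compared libraries looks roughly like the sketch below. The Jackson and Gson calls are their standard tree-model entry points; the commented Sinobu call is an assumed placeholder rather than a confirmed API.

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import com.google.gson.JsonElement;
    import com.google.gson.JsonParser;

    public class ParseComparison {
        public static void main(String[] args) throws Exception {
            String text = "{\"name\":\"sinobu\",\"version\":3}";

            // Jackson: parse into its tree model.
            JsonNode jackson = new ObjectMapper().readTree(text);

            // Gson: parse into its tree model.
            JsonElement gson = JsonParser.parseString(text);

            // Sinobu: parse into its own JSON tree (assumed entry point, not confirmed here).
            // kiss.JSON sinobu = kiss.I.json(text);

            System.out.println(jackson.get("name").asText() + " / " + gson.getAsJsonObject().get("version"));
        }
    }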

Parse Small

Measures the time and resources required to parse small JSON documents.

Parse Large

Measures the performance of parsing larger JSON documents, testing scalability.

Parse Huge

Measures the performance of parsing very large JSON documents, stressing memory and CPU usage.

Traversing

Evaluates the efficiency of navigating and accessing data within a parsed JSON structure (DOM-like access).
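
The kind of DOM-like access being measured can be illustrated with Jackson's tree model, as below; Sinobu navigates its own parsed JSON tree in an analogous way, though its exact method names are not shown here.

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class TraverseComparison {
        public static void main(String[] args) throws Exception {
            String text = "{\"repo\":{\"name\":\"sinobu\",\"tags\":[\"java\",\"di\"]}}";

            JsonNode root = new ObjectMapper().readTree(text);

            // DOM-like navigation: walk nested objects and arrays without binding to a class.
            String name = root.get("repo").get("name").asText();
            String firstTag = root.get("repo").get("tags").get(0).asText();

            // Sinobu navigates its parsed JSON tree in a comparable chained style;
            // the exact method names are not reproduced here.

            System.out.println(name + " / " + firstTag);
        }
    }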

Mapping

Benchmarks the process of mapping JSON data directly to Java objects (POJOs/Records).
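
A minimal mapping example is sketched below with Jackson and Gson binding the same JSON text to a plain Java class; the equivalent Sinobu call is only referenced in a comment as an assumption.

    import com.fasterxml.jackson.databind.ObjectMapper;
    import com.google.gson.Gson;

    public class MappingComparison {

        // A simple target type; public fields keep the example library-agnostic.
        public static class Person {
            public String name;
            public int age;
        }

        public static void main(String[] args) throws Exception {
            String text = "{\"name\":\"alice\",\"age\":30}";

            Person fromJackson = new ObjectMapper().readValue(text, Person.class);
            Person fromGson = new Gson().fromJson(text, Person.class);

            // Sinobu would bind the same text to the model type as well
            // (assumed call shape, not confirmed here).

            System.out.println(fromJackson.name + " / " + fromGson.age);
        }
    }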

HTML

Compares the performance of Sinobu's HTML/XML parser (including tag soup handling) against other Java parsers. Focuses on parsing speed and memory usage for typical web documents.
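
The tag-soup aspect can be illustrated with Jsoup, a commonly used Java HTML parser: malformed markup is repaired into a well-formed document. The note about a Sinobu equivalent is kept as a comment because its exact API is not reproduced here.

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;

    public class HtmlParseComparison {
        public static void main(String[] args) {
            // Deliberately sloppy "tag soup": unclosed tags that a lenient parser must repair.
            String html = "<html><body><p>Hello <b>world<p>Second paragraph";

            Document document = Jsoup.parse(html);
            System.out.println(document.select("p").size());   // normalized into a well-formed DOM

            // Sinobu's own HTML/XML parser would accept the same markup;
            // its exact entry point is not reproduced here.
        }
    }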

Template Engine

Compares the performance of Sinobu's Mustache template engine implementation against other Java template engines. Measures rendering speed and overhead for template processing with context data.
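
A rendering scenario of this kind is sketched below with Mustache.java as a representative Mustache implementation; the corresponding Sinobu call is only referenced in a comment, since its exact entry point is an assumption.

    import com.github.mustachejava.DefaultMustacheFactory;
    import com.github.mustachejava.Mustache;
    import com.github.mustachejava.MustacheFactory;

    import java.io.StringReader;
    import java.io.StringWriter;
    import java.util.Map;

    public class TemplateComparison {
        public static void main(String[] args) {
            String template = "Hello {{name}}, you have {{count}} new messages.";
            Map<String, Object> context = Map.of("name", "alice", "count", 3);

            // Mustache.java: compile once, then render with the context data.
            MustacheFactory factory = new DefaultMustacheFactory();
            Mustache compiled = factory.compile(new StringReader(template), "greeting");
            StringWriter out = new StringWriter();
            compiled.execute(out, context);

            System.out.println(out);

            // Sinobu renders the same template text with a context object through its own
            // API; the exact entry point is an assumption and is not reproduced here.
        }
    }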

Persistence