2026-02-26 13:05:16 +01:00
commit 4db60e822e
11 changed files with 2034 additions and 0 deletions

2
.gitignore vendored Normal file

@@ -0,0 +1,2 @@
.zig-cache
zig-out

11
LICENSE Normal file

@@ -0,0 +1,11 @@
Copyright 2026 Pascal Zittlau
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

58
README.md Normal file

@@ -0,0 +1,58 @@
# HashMapConcurrent
A thread-safe, fixed-capacity, open-addressing hash map for Zig.
This implementation combines *Robin Hood hashing* (to minimize probe lengths) with *Sequence
Locking* (to provide wait-free-like read performance) and *Shard-Level Locking* for writers.
Deletions use *Backward-Shift* to maintain table compactness without the performance degradation
of tombstones.
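
The seqlock read path mentioned above can be sketched in a few lines. This is an illustrative sketch of the general pattern, not this map's actual internals (`seqlockRead` and its parameters are invented for illustration): a reader snapshots the sequence counter, copies the slot, and retries if the counter was odd (writer in progress) or changed between the two loads.

```zig
fn seqlockRead(seq: *const u64, slot: *const u64) u64 {
    while (true) {
        const s1 = @atomicLoad(u64, seq, .acquire);
        if (s1 & 1 != 0) continue; // odd counter: a writer is mid-update
        const value = @atomicLoad(u64, slot, .acquire);
        const s2 = @atomicLoad(u64, seq, .acquire);
        if (s1 == s2) return value; // counter unchanged: consistent snapshot
    }
}
```

Writers bump the counter to an odd value before mutating and back to an even value afterwards, so readers never block writers and uncontended reads complete without any stores.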
## Quick Start
```zig
const std = @import("std");
const HashMap = @import("hashmap_concurrent.zig").AutoHashMapConcurrent;
pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // Capacity must be a power of two.
    // num_shards balances writer contention vs. reader retry probability.
    var map = try HashMap(u64, u64).init(allocator, 1024, 64);
    defer map.deinit(allocator);

    map.put(42, 1337);
    const val = map.get(42);
    std.debug.print("Value: {any}\n", .{val});
}
```
## Iteration
There are two ways to iterate over entries, depending on your consistency requirements:

- `lockingIterator()`: Uses *lock coupling* so that entries are neither missed nor seen twice when concurrent writers move them across shard boundaries. Because a shard lock is held while iterating, you must call `it.deinit()` if you break or return from the loop early, to release the held lock.
- `approximateIterator()`: Optimistic and approximate, relying only on *sequence locks*. It may miss entries or see the same entry twice if concurrent writers move elements. It is lock-free, safe to use on `const` references, and safe to break out of early without cleanup.
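
A hedged usage sketch of the locking iterator (the entry field name `key_ptr` and the exact `next()` signature are assumptions; the real API lives in `hashmap_concurrent.zig`):

```zig
var it = map.lockingIterator();
// Deferring deinit() covers early exits; it releases the held shard lock.
defer it.deinit();
while (it.next()) |entry| {
    // Breaking early is safe because the deferred deinit() still runs.
    if (entry.key_ptr.* == 42) break;
}
```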
## Usage & Safety
For a detailed explanation of the concurrency model, deadlock safety, and memory reclamation, please
refer to the documentation at the top of `hashmap_concurrent.zig`.
## License
BSD 3-Clause. See `hashmap_concurrent.zig` and `LICENSE` for the full text.
## Benchmark Results
Measured with 100 million iterations per run on an otherwise idle AMD Ryzen AI 5 340.
![Throughput Results](throughput_results.png)
![Speedup Results](speedup_results.png)

73
benchmark_results.csv Normal file

@@ -0,0 +1,73 @@
implementation,load_factor,workload,threads,time_ns,ops_per_sec,speedup
Concurrent,0.50,Read-Heavy,1,2594978833,38535959,1.00
Concurrent,0.50,Read-Heavy,2,1326809363,75368777,1.96
Concurrent,0.50,Read-Heavy,4,827656094,120823130,3.14
Concurrent,0.50,Read-Heavy,8,495779507,201702568,5.23
Concurrent,0.50,Balanced,1,3146544458,31780895,1.00
Concurrent,0.50,Balanced,2,1778935887,56213380,1.77
Concurrent,0.50,Balanced,4,1245658739,80278809,2.53
Concurrent,0.50,Balanced,8,830033664,120477041,3.79
Concurrent,0.50,Write-Heavy,1,3333508883,29998420,1.00
Concurrent,0.50,Write-Heavy,2,2085712406,47945248,1.60
Concurrent,0.50,Write-Heavy,4,1612342392,62021565,2.07
Concurrent,0.50,Write-Heavy,8,1304384182,76664529,2.56
Concurrent,0.80,Read-Heavy,1,3366065688,29708273,1.00
Concurrent,0.80,Read-Heavy,2,1728374611,57857827,1.95
Concurrent,0.80,Read-Heavy,4,1060776557,94270559,3.17
Concurrent,0.80,Read-Heavy,8,635564332,157340484,5.30
Mutex,0.80,Read-Heavy,1,1233203046,81089647,1.00
Mutex,0.80,Read-Heavy,2,2360541025,42363169,0.52
Mutex,0.80,Read-Heavy,4,3807812714,26261795,0.32
Mutex,0.80,Read-Heavy,8,5135552906,19472100,0.24
Concurrent,0.80,Balanced,1,4435896174,22543359,1.00
Concurrent,0.80,Balanced,2,2397903683,41703092,1.85
Concurrent,0.80,Balanced,4,1598169912,62571569,2.78
Concurrent,0.80,Balanced,8,999038161,100096276,4.44
Mutex,0.80,Balanced,1,4708858885,21236567,1.00
Mutex,0.80,Balanced,2,8139654453,12285533,0.58
Mutex,0.80,Balanced,4,12237717880,8171458,0.38
Mutex,0.80,Balanced,8,18573464579,5384025,0.25
Concurrent,0.80,Write-Heavy,1,5069543262,19725642,1.00
Concurrent,0.80,Write-Heavy,2,2894152061,34552434,1.75
Concurrent,0.80,Write-Heavy,4,2078008185,48123005,2.44
Concurrent,0.80,Write-Heavy,8,1639219952,61004625,3.09
Mutex,0.80,Write-Heavy,1,7725405995,12944303,1.00
Mutex,0.80,Write-Heavy,2,13195595046,7578286,0.59
Mutex,0.80,Write-Heavy,4,19729207049,5068627,0.39
Mutex,0.80,Write-Heavy,8,27973750545,3574779,0.28
Concurrent,0.90,Read-Heavy,1,4033587298,24791827,1.00
Concurrent,0.90,Read-Heavy,2,2053063859,48707690,1.96
Concurrent,0.90,Read-Heavy,4,1251686308,79892221,3.22
Concurrent,0.90,Read-Heavy,8,773497586,129282885,5.21
Concurrent,0.90,Balanced,1,6811306267,14681471,1.00
Concurrent,0.90,Balanced,2,3618463836,27636036,1.88
Concurrent,0.90,Balanced,4,2445776487,40886810,2.78
Concurrent,0.90,Balanced,8,1560194793,64094560,4.37
Concurrent,0.90,Write-Heavy,1,9030716397,11073318,1.00
Concurrent,0.90,Write-Heavy,2,4865192757,20554170,1.86
Concurrent,0.90,Write-Heavy,4,3185281920,31394395,2.84
Concurrent,0.90,Write-Heavy,8,2085262651,47955589,4.33
Concurrent,0.95,Read-Heavy,1,5434471374,18401053,1.00
Concurrent,0.95,Read-Heavy,2,2859545884,34970587,1.90
Concurrent,0.95,Read-Heavy,4,1855366830,53897697,2.93
Concurrent,0.95,Read-Heavy,8,1176508461,84997263,4.62
Concurrent,0.95,Balanced,1,15019825711,6657866,1.00
Concurrent,0.95,Balanced,2,8227069541,12154996,1.83
Concurrent,0.95,Balanced,4,5613794177,17813264,2.68
Concurrent,0.95,Balanced,8,3676663020,27198576,4.09
Concurrent,0.95,Write-Heavy,1,23277408496,4296010,1.00
Concurrent,0.95,Write-Heavy,2,12697585928,7875512,1.83
Concurrent,0.95,Write-Heavy,4,8735850145,11447082,2.66
Concurrent,0.95,Write-Heavy,8,5679183896,17608163,4.10
Concurrent,0.98,Read-Heavy,1,11941995261,8373810,1.00
Concurrent,0.98,Read-Heavy,2,7209174145,13871214,1.66
Concurrent,0.98,Read-Heavy,4,5688360374,17579758,2.10
Concurrent,0.98,Read-Heavy,8,3948386161,25326803,3.02
Concurrent,0.98,Balanced,1,67323105198,1485374,1.00
Concurrent,0.98,Balanced,2,38678509614,2585415,1.74
Concurrent,0.98,Balanced,4,29027638159,3444992,2.32
Concurrent,0.98,Balanced,8,19293002197,5183226,3.49
Concurrent,0.98,Write-Heavy,1,121332903848,824178,1.00
Concurrent,0.98,Write-Heavy,2,69752601907,1433638,1.74
Concurrent,0.98,Write-Heavy,4,46248969309,2162210,2.62
Concurrent,0.98,Write-Heavy,8,31756104338,3149000,3.82

36
build.zig Normal file

@@ -0,0 +1,36 @@
const std = @import("std");

pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    const mod = b.addModule("hashmap_concurrent", .{
        .root_source_file = b.path("hashmap_concurrent.zig"),
        .target = target,
    });

    const exe = b.addExecutable(.{
        .name = "benchmark",
        .root_module = b.createModule(.{
            .root_source_file = b.path("hashmap_concurrent.zig"),
            .target = target,
            .optimize = optimize,
        }),
    });
    b.installArtifact(exe);

    const run_step = b.step("bench", "Run the benchmark");
    const run_cmd = b.addRunArtifact(exe);
    run_step.dependOn(&run_cmd.step);
    run_cmd.step.dependOn(b.getInstallStep());
    if (b.args) |args| {
        run_cmd.addArgs(args);
    }

    const mod_tests = b.addTest(.{
        .root_module = mod,
    });
    const run_mod_tests = b.addRunArtifact(mod_tests);
    const test_step = b.step("test", "Run tests");
    test_step.dependOn(&run_mod_tests.step);
}

13
build.zig.zon Normal file

@@ -0,0 +1,13 @@
.{
    .name = .hashmap_concurrent,
    // This is a [Semantic Version](https://semver.org/).
    .version = "0.1.0",
    .fingerprint = 0x2614796ee2b381e, // Changing this has security and trust implications.
    .minimum_zig_version = "0.15.2",
    .paths = .{
        "build.zig",
        "build.zig.zon",
        "hashmap_concurrent.zig",
        "LICENSE",
    },
}

1706
hashmap_concurrent.zig Normal file

File diff suppressed because it is too large.

68
plot_speedup.gp Normal file

@@ -0,0 +1,68 @@
# Output Settings
set terminal pngcairo size 1200,1600 enhanced linewidth 2
set output 'speedup_results.png'
set datafile separator ","
set multiplot layout 3,1 title "Concurrent HashMap Benchmark Results\nSpeedup vs Thread Count" offset 0, -0.05 scale 1, 0.95
# Axis Settings
set grid y
set grid x
set xlabel "Threads"
set ylabel "Speedup"
set logscale x 2
set format y "%.1f"
set yrange [0:*]
# Key (Legend) Settings
set key outside right top box
set key title "Configuration"
# Colors
c_050 = "#17becf" # Cyan
c_080 = "#2ca02c" # Green
c_090 = "#1f77b4" # Blue
c_095 = "#ff7f0e" # Orange
c_098 = "#d62728" # Red
# Point Types (Shapes)
# 13=Diamond, 7=Circle, 5=Square, 9=Triangle Up, 11=Triangle Down
pt_conc_050 = 13
pt_conc_080 = 7
pt_conc_090 = 5
pt_conc_095 = 9
pt_conc_098 = 11
# Open version for Mutex
pt_mutex_080 = 6
# Helper function to filter data
# Col 1: Impl, Col 2: LF, Col 3: Workload, Col 7: Speedup
filter(workload, impl, lf) = (strcol(3) eq workload && strcol(1) eq impl && abs($2 - lf) < 0.001) ? $7 : 1/0
set title "Workload: Read-Heavy (3% Put, 2% Remove, 95% Get)"
plot \
'benchmark_results.csv' every ::1 using 4:(filter("Read-Heavy", "Concurrent", 0.50)) w lp lc rgb c_050 pt pt_conc_050 t "Concurrent (LF 0.50)", \
'benchmark_results.csv' every ::1 using 4:(filter("Read-Heavy", "Concurrent", 0.80)) w lp lc rgb c_080 pt pt_conc_080 t "Concurrent (LF 0.80)", \
'benchmark_results.csv' every ::1 using 4:(filter("Read-Heavy", "Concurrent", 0.90)) w lp lc rgb c_090 pt pt_conc_090 t "Concurrent (LF 0.90)", \
'benchmark_results.csv' every ::1 using 4:(filter("Read-Heavy", "Concurrent", 0.95)) w lp lc rgb c_095 pt pt_conc_095 t "Concurrent (LF 0.95)", \
'benchmark_results.csv' every ::1 using 4:(filter("Read-Heavy", "Concurrent", 0.98)) w lp lc rgb c_098 pt pt_conc_098 t "Concurrent (LF 0.98)", \
'benchmark_results.csv' every ::1 using 4:(filter("Read-Heavy", "Mutex", 0.80)) w lp lc rgb c_080 dt 2 pt pt_mutex_080 t "Mutex (LF 0.80)",
set title "Workload: Balanced (25% Put, 25% Remove, 50% Get)"
plot \
'benchmark_results.csv' every ::1 using 4:(filter("Balanced", "Concurrent", 0.50)) w lp lc rgb c_050 pt pt_conc_050 t "Concurrent (LF 0.50)", \
'benchmark_results.csv' every ::1 using 4:(filter("Balanced", "Concurrent", 0.80)) w lp lc rgb c_080 pt pt_conc_080 t "Concurrent (LF 0.80)", \
'benchmark_results.csv' every ::1 using 4:(filter("Balanced", "Concurrent", 0.90)) w lp lc rgb c_090 pt pt_conc_090 t "Concurrent (LF 0.90)", \
'benchmark_results.csv' every ::1 using 4:(filter("Balanced", "Concurrent", 0.95)) w lp lc rgb c_095 pt pt_conc_095 t "Concurrent (LF 0.95)", \
'benchmark_results.csv' every ::1 using 4:(filter("Balanced", "Concurrent", 0.98)) w lp lc rgb c_098 pt pt_conc_098 t "Concurrent (LF 0.98)", \
'benchmark_results.csv' every ::1 using 4:(filter("Balanced", "Mutex", 0.80)) w lp lc rgb c_080 dt 2 pt pt_mutex_080 t "Mutex (LF 0.80)",
set title "Workload: Write-Heavy (45% Put, 45% Remove, 10% Get)"
plot \
'benchmark_results.csv' every ::1 using 4:(filter("Write-Heavy", "Concurrent", 0.50)) w lp lc rgb c_050 pt pt_conc_050 t "Concurrent (LF 0.50)", \
'benchmark_results.csv' every ::1 using 4:(filter("Write-Heavy", "Concurrent", 0.80)) w lp lc rgb c_080 pt pt_conc_080 t "Concurrent (LF 0.80)", \
'benchmark_results.csv' every ::1 using 4:(filter("Write-Heavy", "Concurrent", 0.90)) w lp lc rgb c_090 pt pt_conc_090 t "Concurrent (LF 0.90)", \
'benchmark_results.csv' every ::1 using 4:(filter("Write-Heavy", "Concurrent", 0.95)) w lp lc rgb c_095 pt pt_conc_095 t "Concurrent (LF 0.95)", \
'benchmark_results.csv' every ::1 using 4:(filter("Write-Heavy", "Concurrent", 0.98)) w lp lc rgb c_098 pt pt_conc_098 t "Concurrent (LF 0.98)", \
'benchmark_results.csv' every ::1 using 4:(filter("Write-Heavy", "Mutex", 0.80)) w lp lc rgb c_080 dt 2 pt pt_mutex_080 t "Mutex (LF 0.80)",
unset multiplot

67
plot_throughput.gp Normal file

@@ -0,0 +1,67 @@
# Output Settings
set terminal pngcairo size 1200,1600 enhanced linewidth 2
set output 'throughput_results.png'
set datafile separator ","
set multiplot layout 3,1 title "Concurrent HashMap Benchmark Results\nThroughput vs Thread Count" offset 0, -0.05 scale 1, 0.95
# Axis Settings
set grid y
set grid x
set xlabel "Threads"
set ylabel "Throughput (Operations / Sec)"
set logscale x 2
set format y "%.1s%c"
# Key (Legend) Settings
set key outside right top box
set key title "Configuration"
# Colors
c_050 = "#17becf" # Cyan
c_080 = "#2ca02c" # Green
c_090 = "#1f77b4" # Blue
c_095 = "#ff7f0e" # Orange
c_098 = "#d62728" # Red
# Point Types (Shapes)
# 13=Diamond, 7=Circle, 5=Square, 9=Triangle Up, 11=Triangle Down
pt_conc_050 = 13
pt_conc_080 = 7
pt_conc_090 = 5
pt_conc_095 = 9
pt_conc_098 = 11
# Open version for Mutex
pt_mutex_080 = 6
# Helper function to filter data
# Col 1: Impl, Col 2: LF, Col 3: Workload, Col 6: Ops/Sec
filter(workload, impl, lf) = (strcol(3) eq workload && strcol(1) eq impl && abs($2 - lf) < 0.001) ? $6 : 1/0
set title "Workload: Read-Heavy (3% Put, 2% Remove, 95% Get)"
plot \
'benchmark_results.csv' every ::1 using 4:(filter("Read-Heavy", "Concurrent", 0.50)) w lp lc rgb c_050 pt pt_conc_050 t "Concurrent (LF 0.50)", \
'benchmark_results.csv' every ::1 using 4:(filter("Read-Heavy", "Concurrent", 0.80)) w lp lc rgb c_080 pt pt_conc_080 t "Concurrent (LF 0.80)", \
'benchmark_results.csv' every ::1 using 4:(filter("Read-Heavy", "Concurrent", 0.90)) w lp lc rgb c_090 pt pt_conc_090 t "Concurrent (LF 0.90)", \
'benchmark_results.csv' every ::1 using 4:(filter("Read-Heavy", "Concurrent", 0.95)) w lp lc rgb c_095 pt pt_conc_095 t "Concurrent (LF 0.95)", \
'benchmark_results.csv' every ::1 using 4:(filter("Read-Heavy", "Concurrent", 0.98)) w lp lc rgb c_098 pt pt_conc_098 t "Concurrent (LF 0.98)", \
'benchmark_results.csv' every ::1 using 4:(filter("Read-Heavy", "Mutex", 0.80)) w lp lc rgb c_080 dt 2 pt pt_mutex_080 t "Mutex (LF 0.80)",
set title "Workload: Balanced (25% Put, 25% Remove, 50% Get)"
plot \
'benchmark_results.csv' every ::1 using 4:(filter("Balanced", "Concurrent", 0.50)) w lp lc rgb c_050 pt pt_conc_050 t "Concurrent (LF 0.50)", \
'benchmark_results.csv' every ::1 using 4:(filter("Balanced", "Concurrent", 0.80)) w lp lc rgb c_080 pt pt_conc_080 t "Concurrent (LF 0.80)", \
'benchmark_results.csv' every ::1 using 4:(filter("Balanced", "Concurrent", 0.90)) w lp lc rgb c_090 pt pt_conc_090 t "Concurrent (LF 0.90)", \
'benchmark_results.csv' every ::1 using 4:(filter("Balanced", "Concurrent", 0.95)) w lp lc rgb c_095 pt pt_conc_095 t "Concurrent (LF 0.95)", \
'benchmark_results.csv' every ::1 using 4:(filter("Balanced", "Concurrent", 0.98)) w lp lc rgb c_098 pt pt_conc_098 t "Concurrent (LF 0.98)", \
'benchmark_results.csv' every ::1 using 4:(filter("Balanced", "Mutex", 0.80)) w lp lc rgb c_080 dt 2 pt pt_mutex_080 t "Mutex (LF 0.80)",
set title "Workload: Write-Heavy (45% Put, 45% Remove, 10% Get)"
plot \
'benchmark_results.csv' every ::1 using 4:(filter("Write-Heavy", "Concurrent", 0.50)) w lp lc rgb c_050 pt pt_conc_050 t "Concurrent (LF 0.50)", \
'benchmark_results.csv' every ::1 using 4:(filter("Write-Heavy", "Concurrent", 0.80)) w lp lc rgb c_080 pt pt_conc_080 t "Concurrent (LF 0.80)", \
'benchmark_results.csv' every ::1 using 4:(filter("Write-Heavy", "Concurrent", 0.90)) w lp lc rgb c_090 pt pt_conc_090 t "Concurrent (LF 0.90)", \
'benchmark_results.csv' every ::1 using 4:(filter("Write-Heavy", "Concurrent", 0.95)) w lp lc rgb c_095 pt pt_conc_095 t "Concurrent (LF 0.95)", \
'benchmark_results.csv' every ::1 using 4:(filter("Write-Heavy", "Concurrent", 0.98)) w lp lc rgb c_098 pt pt_conc_098 t "Concurrent (LF 0.98)", \
'benchmark_results.csv' every ::1 using 4:(filter("Write-Heavy", "Mutex", 0.80)) w lp lc rgb c_080 dt 2 pt pt_mutex_080 t "Mutex (LF 0.80)",
unset multiplot

BIN
speedup_results.png Normal file

Binary file not shown (PNG, 208 KiB).

BIN
throughput_results.png Normal file

Binary file not shown (PNG, 226 KiB).