L1VM benchmark 04
I ran benchmark 4, the double-number adding loop, on the new Python 3.11 and on my L1VM.
Here are the results:
$ time python3.11 double-test.py
start...
8800000881.0
python3.11 double-test.py  13,76s user 0,01s system 99% cpu 13,813 total

$ time lua double-test.lua
start...
8800000881.0
lua double-test.lua  5,32s user 0,00s system 99% cpu 5,333 total

$ time node double-test.js
start...
8800000881
node double-test.js  3,19s user 0,04s system 97% cpu 3,301 total

$ time l1vm double-test -q
8800000881.0000000000
l1vm double-test -q  5,92s user 0,01s system 99% cpu 5,946 total
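The benchmark source itself isn't reproduced in this post (it is in the benchmark download). Purely for illustration, the hot path is a double-adding loop of roughly this shape; the iteration count and increment below are made up, not the real benchmark's values:

```python
# Hypothetical sketch of a double-adding benchmark loop.
# The real double-test.py ships with the benchmark; the
# iteration count and step value here are invented.
def run(iterations: int, step: float) -> float:
    total = 0.0
    for _ in range(iterations):
        total += step  # pure double addition, the hot path
    return total

print("start...")
print(run(10_000_000, 88.0))
```

A loop like this measures almost nothing but interpreter dispatch overhead plus one floating-point add per iteration, which is why the runtimes spread so widely across languages.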
Can we go faster? YESSS!!! Fasten your seatbelts!
$ time l1vm double-test-optimized -q
8800000881.0000000000
l1vm double-test-optimized -q  1,41s user 0,01s system 99% cpu 1,432 total
This is insane!!! It’s running at lightspeed.
And my L1VM is running in interpreted mode here, without any JIT compiler!
I added a new opcode in Brackets: (x load). It loads an int64 or double number into a register.
This can be done before a JIT-compiler code block: the JIT compiler itself can’t load/pull variables, so you load them into registers before such a code block.
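To illustrate the idea (this is a toy sketch in Python, not L1VM's actual C implementation; the opcode names and encoding are made up): an immediate-load opcode puts a constant straight into a register, so the hot block that follows only ever touches registers and never has to pull variables from memory.

```python
# Toy register VM illustrating an immediate-load opcode.
# NOT L1VM's implementation; opcode names/encoding are invented.
LOAD_IMM, ADD, HALT = 0, 1, 2

def run(program, num_regs=4):
    regs = [0.0] * num_regs
    pc = 0
    while True:
        op = program[pc]
        if op == LOAD_IMM:            # like (x load): constant -> register
            _, reg, value = program[pc:pc + 3]
            regs[reg] = value
            pc += 3
        elif op == ADD:               # regs[a] += regs[b], registers only
            _, a, b = program[pc:pc + 3]
            regs[a] += regs[b]
            pc += 3
        elif op == HALT:
            return regs

# Constants are loaded up front; the "hot" part uses only registers.
regs = run([LOAD_IMM, 0, 0.0,
            LOAD_IMM, 1, 2.5,
            ADD, 0, 1,
            ADD, 0, 1,
            HALT])
print(regs[0])  # 5.0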
Here is the runtime of the JIT-compiled program:
$ time l1vm double-test-optimized-jit -q
8800000881.0000000000
l1vm double-test-optimized-jit -q  1,18s user 0,00s system 99% cpu 1,189 total
Here is the benchmark 04.
And for all who wanted to know how fast a C program is:
$ time ./double-test
8800000881.0000000000
./double-test  0,30s user 0,00s system 99% cpu 0,306 total
So my optimized Brackets program is about a factor of 4.7 slower than C.
Here is the normalized table:
language       | version | factor
---------------+---------+-------
C              |         | 1
L1VM JIT comp  | 2.0.5   | 4
L1VM optimized | 2.0.5   | 4.7
Node.js        | 19.0.0  | 10.6
Lua            | 5.4.4   | 17.7
L1VM           | 2.0.5   | 19.7
Python         | 3.11    | 45.9
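The factors are just the measured user times normalized to the C baseline (0,30 s), rounded to one decimal (the JIT entry is rounded up to 4). A quick check:

```python
# Normalize each measured user time against the C baseline (0.30 s).
times = {
    "C": 0.30,
    "L1VM JIT comp": 1.18,
    "L1VM optimized": 1.41,
    "Node.js": 3.19,
    "Lua": 5.32,
    "L1VM": 5.92,
    "Python": 13.76,
}
for name, t in times.items():
    print(f"{name:15s} {t / times['C']:5.1f}")
```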
There is also a JIT-compiled version with inline assembly in the ZIP archive: it runs as fast as the C program. So with inline assembly you can write very fast programs.