BitOps.Net is a high-performance C# library for bitwise operations.
It bridges the gap between generic mathematical flexibility and high-throughput, low-level memory manipulation.
- **Zero-Allocation:** Operations on buffers occur strictly in place using `Span<byte>`, preventing heap allocations and reducing pressure on the Garbage Collector.
- **Safety-First:** Unlike raw loops, the library includes defensive validation (buffer length checks) to prevent `IndexOutOfRangeException` and silent data corruption.
- **Unified API:** Whether you are doing math on primitives or shifting bits across a 1 GB memory buffer, the API is consistent and intuitive.
- **BitOps.Net (Generic Math):** A fluent, generic API for bitwise math using `IBinaryInteger<T>`.
- **BitOps.Net.Buffers (Memory Library):** A suite of extensions for `Span<byte>` that treats memory as a bitstream. Ideal for high-throughput network programming, file parsing, and encryption protocols.
### Generic Math

Write bitwise logic once, use it for any number.

```csharp
using BitOps.Net;

int value = 0xAA;
int result = value.BitwiseAnd(0x0F); // 0x0A
```
### High-Performance Buffers

Perform complex bit-shifting and masking without creating temporary arrays.

```csharp
using BitOps.Net.Buffers;

Span<byte> buffer = stackalloc byte[] { 0xAA, 0xBB };
ReadOnlySpan<byte> mask = stackalloc byte[] { 0x0F, 0x0F };

// In-place modification: no allocations, no garbage generated
buffer.BitwiseAndInPlace(mask);
```
**Logical Operations**

| Operation | Generic Method | Buffer (In-Place) Method |
|---|---|---|
| AND | `.BitwiseAnd(val)` | `.BitwiseAndInPlace(span)` |
| OR | `.BitwiseOr(val)` | `.BitwiseOrInPlace(span)` |
| XOR | `.BitwiseXor(val)` | `.BitwiseXorInPlace(span)` |
| NOT | `.BitwiseNot()` | `.BitwiseNotInPlace()` |
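Because the generic methods are built on `IBinaryInteger<T>`, the calls in the table work the same way across integer widths. A minimal sketch — the specific overloads shown (e.g. a `byte` argument to `BitwiseAnd`) are assumptions inferred from the table, not verified signatures:

```csharp
using BitOps.Net;

// Illustrative only: assumes the generic extensions accept any
// IBinaryInteger<T>, so byte and ulong behave like int.
byte flags = 0b1010_0110;
byte lowNibble = flags.BitwiseAnd((byte)0b0000_1111); // 0b0000_0110

ulong pattern = 0xFF00FF00FF00FF00UL;
ulong inverted = pattern.BitwiseXor(ulong.MaxValue);  // same effect as BitwiseNot()
```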
**Shift Operations**

| Operation | Generic Method | Buffer (In-Place) Method |
|---|---|---|
| Left Shift | `.ShiftLeft(count)` | `.ShiftLeftInPlace(count)` |
| Arithmetic Right | `.ShiftRight(count)` | `.ShiftRightInPlace(count)` |
| Logical Right | `.ShiftRightUnsigned(count)` | `.ShiftRightUnsignedInPlace(count)` |
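Since BitOps.Net.Buffers "treats the memory as a bitstream," a buffer shift presumably carries bits across byte boundaries rather than shifting each byte independently. A hedged sketch of that behavior — the bit ordering and the expected result are assumptions, not verified output:

```csharp
using BitOps.Net.Buffers;

// Sketch: assumes ShiftLeftInPlace treats the whole span as one
// contiguous MSB-first bitstream, so bits carry across byte boundaries.
Span<byte> buffer = stackalloc byte[] { 0x01, 0x80 }; // 0000_0001 1000_0000
buffer.ShiftLeftInPlace(1);
// Under that assumption, buffer is now { 0x03, 0x00 }: the second
// byte's high bit carried into the first byte's low bit.
```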
BitOps.Net is designed for high-throughput, zero-allocation scenarios. The benchmarks compare BitOps.Net against "manual" implementations (raw `for` loops).

**Zero-Allocation Guarantee:** All library operations benchmarked below allocate zero bytes (0 B).

(Benchmarks performed on .NET 10)

See the full, detailed Benchmark Report for all data points, including different `ShiftCount` values.
You may notice a slight performance difference (a few nanoseconds) between the library methods and manual `for` loops on smaller buffers. This is expected: it is the "Safety Tax." Unlike raw loops, the library performs bounds checking and length validation (`ThrowIfLengthsMismatch`) to ensure the application remains stable under heavy load.

I believe this minimal overhead is a worthwhile trade-off for the robustness, maintainability, and clean API that BitOps.Net provides.
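To make the trade-off concrete, here is roughly what the "manual" baseline with the same guard looks like. This is a sketch of equivalent user code, not the library's actual implementation — the body of `ThrowIfLengthsMismatch` is not shown in this README:

```csharp
using System;

static void BitwiseAndInPlaceManual(Span<byte> destination, ReadOnlySpan<byte> mask)
{
    // This length check is the per-call cost behind the "Safety Tax":
    // it prevents out-of-range reads and silent partial writes.
    if (destination.Length != mask.Length)
        throw new ArgumentException("Buffer lengths must match.");

    for (int i = 0; i < destination.Length; i++)
        destination[i] &= mask[i];
}
```

A raw loop that skips the guard saves one branch per call, which is only measurable on very small buffers; as the buffer grows, the loop body dominates and the guard's cost vanishes in the noise.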
As buffer sizes grow, the optimized in-place implementation scales better than standard manual approaches, providing significant performance gains in high-throughput scenarios.
I believe code should be both safe and fast. I optimize for the common case and strictly maintain zero-allocation behavior across all buffer operations.