# ⚡ ml-math-kernels-c


High-performance mathematical kernels for ML inference — pure C, zero dependencies


## 📖 Overview

This library provides hand-optimized C implementations of the core math operations required for neural network inference: matrix multiplication, convolution, activation functions, softmax, and more. Designed for embedded systems and edge ML where Python runtimes are unavailable.
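To give a feel for the style of kernel involved, here is a minimal sketch of an in-place ReLU, the simplest of the activations listed. The function name `relu_inplace_sketch` is illustrative; it mirrors the `relu_inplace` signature shown in the Key API section below, not the library's tuned implementation.

```c
#include <stddef.h>

/* In-place ReLU: x[i] = max(x[i], 0).
 * Illustrative sketch only; the library's activations.c
 * is written to be auto-vectorisable by the compiler. */
void relu_inplace_sketch(float *x, size_t n)
{
    for (size_t i = 0; i < n; i++)
        x[i] = x[i] > 0.0f ? x[i] : 0.0f;
}
```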


## 🏗️ Structure

```
ml-math-kernels-c/
├── include/
│   ├── linalg.h          # Matrix / vector types
│   ├── activations.h     # ReLU, GELU, Sigmoid, Softmax
│   └── conv.h            # 1D/2D convolution
├── src/
│   ├── matmul.c          # GEMM — O(n³) + loop-unrolled fast path
│   ├── activations.c     # Vectorisable activation impls
│   ├── conv2d.c          # im2col + batched matmul
│   └── softmax.c         # Numerically stable log-softmax
├── tests/
│   ├── test_matmul.c
│   └── test_activations.c
├── benchmarks/
│   └── bench_gemm.c
├── Makefile
└── README.md
```
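For reference, the O(n³) baseline noted for `matmul.c` amounts to the classic triple loop below. This is a sketch assuming row-major storage and the `gemm` signature from the Key API section; the library's fast path layers loop unrolling on top of this pattern.

```c
#include <stddef.h>

/* Reference O(n^3) GEMM: C = A * B, row-major.
 * A is M x K, B is K x N, C is M x N.
 * Layout and argument order assumed from the Key API section;
 * shown as a baseline, not the library's optimized code path. */
void gemm_reference(const float *A, const float *B, float *C,
                    int M, int N, int K)
{
    for (int i = 0; i < M; i++) {
        for (int j = 0; j < N; j++) {
            float acc = 0.0f;
            for (int k = 0; k < K; k++)
                acc += A[i * K + k] * B[k * N + j];
            C[i * N + j] = acc;
        }
    }
}
```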

## 🚀 Quick Start

```shell
git clone https://github.com/gitisabel/ml-math-kernels-c
cd ml-math-kernels-c
make          # builds libmlkernels.a
make test     # runs unit tests
make bench    # runs GEMM benchmark
```

## 🔑 Key API

```c
#include "linalg.h"
#include "activations.h"

// Dense matrix multiply: C = A × B
// A is M×K, B is K×N, C is M×N
void gemm(const float *A, const float *B, float *C,
          int M, int N, int K);

// In-place ReLU
void relu_inplace(float *x, size_t n);

// Numerically stable softmax
void softmax(const float *in, float *out, size_t n);
```

## 📊 Benchmarks (Apple M2, single core)

| Operation      | Size    | Throughput  |
|----------------|---------|-------------|
| GEMM (float32) | 512×512 | 18.4 GFLOPS |
| Conv2D (3×3)   | 64ch/128 | 12.7 GFLOPS |
| Softmax        | 50 000  | 3.1 GB/s    |

## 🧪 Tests

```shell
make test
# test_matmul   .......... OK
# test_activations ....... OK
```

## 📄 License

MIT © Isabel
