An interactive Python-based tool for solving systems of linear equations with multiple numerical methods. It compares the accuracy and computational efficiency of NumPy's built-in solver, PLU decomposition, matrix inversion, and a manual RREF implementation.
- Explore built-in Python functions for solving systems of linear equations.
- Compare different numerical options (NumPy, PLU, Inverse, and RREF) in terms of accuracy (residuals) and efficiency (time).
- Analyze the performance of algorithms as the system dimension $n$ increases.
- Interactive GUI: A Tkinter-based interface for easy data entry.
- Natural Language Parsing: Input systems of equations directly (e.g., 2x + 3y = 10).
- Multiple Solvers:
- NumPy linalg.solve: The industry-standard direct solver.
- PLU Decomposition: Factors matrix $A$ into $P$, $L$, and $U$ matrices.
- Matrix Inverse: Solves via $x = A^{-1}b$.
- RREF: Manual Gauss-Jordan elimination implementation.
- Detailed Analytics: Real-time display of residual vectors ($r = Ax - b$) and execution times.
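The solver comparison behind these analytics can be sketched as follows. This is a minimal illustration, not the GUI's actual code: it builds a random, well-conditioned test system and times three of the methods while recording their residual norms.

```python
import time
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Build a random, diagonally dominant (hence well-conditioned) test system.
rng = np.random.default_rng(0)
n = 100
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)

# Three of the solver options compared by the tool.
methods = {
    "numpy.linalg.solve": lambda: np.linalg.solve(A, b),
    "PLU (scipy lu_factor)": lambda: lu_solve(lu_factor(A), b),
    "matrix inverse": lambda: np.linalg.inv(A) @ b,
}

for name, solve in methods.items():
    start = time.perf_counter()
    x = solve()
    elapsed = time.perf_counter() - start
    residual = np.linalg.norm(A @ x - b)  # r = Ax - b
    print(f"{name:24s} time={elapsed:.2e}s  ||r||={residual:.2e}")
```

On well-conditioned systems all three residual norms are tiny; the differences in timing and accuracy only become pronounced as $n$ grows or the matrix becomes ill-conditioned.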
solving-linear-systems-in-python/
├── src/
│ ├── Solver_gui.py # Main GUI Application
│
├── requirements.txt # Dependencies (numpy, scipy)
└── README.md # Project documentation
The tool provides deep insight into how different algorithms handle the same matrix:
- Residual Calculation: Computes $r = Ax - b$ for each method to verify numerical precision.
- RREF Analysis: Unlike standard solvers that raise errors on singular matrices, this manual implementation classifies systems:
  - Unique Solution: Full-rank matrix.
  - Inconsistent: Rows of the form $[0\ 0\ \dots\ 0 \mid k]$ where $k \neq 0$.
  - Dependent: Fewer pivots than variables, indicating free variables exist.
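A classification along these lines can be sketched with a small Gauss-Jordan routine. This is an illustrative reimplementation, not the project's `Solver_gui.py` code; the tolerance `eps` is an assumed value.

```python
import numpy as np

def classify_system(A, b, eps=1e-10):
    """Classify the system [A|b] via Gauss-Jordan elimination (sketch)."""
    M = np.hstack([np.asarray(A, float), np.asarray(b, float).reshape(-1, 1)])
    rows, cols = M.shape
    pivot_row = 0
    for col in range(cols - 1):
        # Partial pivoting: pick the largest remaining entry in this column.
        pivot = pivot_row + np.argmax(np.abs(M[pivot_row:, col]))
        if abs(M[pivot, col]) < eps:
            continue  # no pivot in this column
        M[[pivot_row, pivot]] = M[[pivot, pivot_row]]
        M[pivot_row] /= M[pivot_row, col]
        for r in range(rows):
            if r != pivot_row:
                M[r] -= M[r, col] * M[pivot_row]
        pivot_row += 1
        if pivot_row == rows:
            break
    # A contradictory row [0 ... 0 | k] with k != 0 means no solution.
    for r in range(rows):
        if np.all(np.abs(M[r, :-1]) < eps) and abs(M[r, -1]) >= eps:
            return "inconsistent"
    # Fewer pivots than variables means free variables exist.
    return "unique" if pivot_row == cols - 1 else "dependent"
```

For example, `classify_system([[1, 1], [1, 1]], [1, 2])` reports an inconsistent system, while `classify_system([[1, 1], [2, 2]], [1, 2])` reports a dependent one.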
Follow these steps to run the solver on your local machine:
1. Environment Setup: Ensure you have Python installed, then install the necessary numerical libraries:

   pip install -r requirements.txt
2. Launching the Application: Run the graphical interface script directly from your terminal:

   python src/Solver_gui.py
3. Solving a System:
   - Set Dimension: Enter the number of variables (e.g., 3).
   - Choose Input Method:
     - Equation Mode: Type your system (e.g., 1x + 2y + 3z = 1) and click "Solve".
     - Matrix Mode: Type the coefficients as a grid and the constants on the last line, then click "Solve".
4. Analyze Results: View the generated Matrix A, Vector b, and the comparison of results from the different numerical solvers in the output log.
When using Matrix Mode, ensure your input is strictly numerical. When using Equation Mode, use standard variable notation (x, y, z) so the parser can correctly build the coefficient matrix.
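The Equation Mode parsing described above can be approximated with a short regex-based routine. This is a simplified sketch of the idea, not the application's actual parser; the function name and variable-list argument are hypothetical.

```python
import re
import numpy as np

def parse_equations(lines, variables):
    """Parse equations like '2x + 3y = 10' into A and b (illustrative sketch)."""
    A, b = [], []
    for line in lines:
        lhs, rhs = line.replace(" ", "").split("=")
        row = [0.0] * len(variables)
        # Match an optional signed coefficient followed by a variable name, e.g. '-3y'.
        for coef, var in re.findall(r"([+-]?\d*\.?\d*)([a-z])", lhs):
            if coef in ("", "+"):
                coef = "1"
            elif coef == "-":
                coef = "-1"
            row[variables.index(var)] += float(coef)
        A.append(row)
        b.append(float(rhs))
    return np.array(A), np.array(b)

A, b = parse_equations(["2x + 3y = 10", "x - y = 1"], ["x", "y"])
print(A)  # [[ 2.  3.] [ 1. -1.]]
print(b)  # [10.  1.]
```

A real parser would also need to handle repeated variables, missing terms, and malformed input, which this sketch glosses over.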
- System Classification (The RREF Test): The Reduced Row Echelon Form (RREF) determines the system's nature based on its augmented matrix $[A|b]$:
  - Unique Solution: The RREF reveals an identity matrix on the left, indicating exactly one point of intersection.
  - No Solution (Inconsistent): Look for a contradictory row like $[0\ 0\ 0 \mid 1]$, representing the impossible equation $0 = 1$.
  - Infinite Solutions (Dependent): Occurs when there are no contradictions but the number of non-zero rows is less than the number of variables ($n$).
- Method Comparison:
- Accuracy: Compare NumPy Solve against RREF results; matching values indicate high numerical precision.
- Efficiency: Check the Execution Time; NumPy is typically fastest due to its optimized back-end.
- Decomposition: Review the L and U matrices to see how the system was factored for calculation.
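The factorization reviewed in the last point can be reproduced directly with SciPy. This small sketch (with an assumed example matrix) shows the $P$, $L$, and $U$ factors the tool displays:

```python
import numpy as np
from scipy.linalg import lu

# A small example matrix; scipy.linalg.lu returns A = P @ L @ U.
A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
P, L, U = lu(A)

print("L =\n", L)  # unit lower-triangular factor
print("U =\n", U)  # upper-triangular factor
assert np.allclose(P @ L @ U, A)
```

Once $A$ is factored, solving for a new right-hand side $b$ only requires cheap triangular solves, which is why PLU pays off when the same system is solved repeatedly.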
This project demonstrates that while there are multiple mathematical paths to solve a linear system, their real-world application depends on the nature of the matrix and the need for computational speed.
- Algorithmic Superiority: NumPy’s solver is the most efficient for large-scale systems due to its optimized underlying libraries.
- The Power of RREF: Manual Gauss-Jordan elimination (RREF) remains the best diagnostic tool for identifying singular matrices (Infinite or No Solution cases) where standard "black-box" solvers might fail.
- Factorization Insight: PLU Decomposition illustrates how computers simplify complex matrices into triangular forms to save memory and time when solving repeatedly.
- Numerical Accuracy: By analyzing residuals, we confirm that floating-point arithmetic requires a tolerance threshold ($\epsilon$) to distinguish between true zeros and infinitesimal "noise".
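The tolerance idea in the last point can be demonstrated in a few lines. The system and the $\epsilon$ value here are illustrative choices, not fixed standards:

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)

r = A @ x - b  # residual: rarely exactly zero in floating point
eps = 1e-9     # tolerance threshold; an illustrative choice of epsilon
print("residual:", r)
print("effectively zero:", np.all(np.abs(r) < eps))
```

The residual entries are typically on the order of machine precision rather than exactly zero, so comparing against $\epsilon$ (rather than against 0) is what lets the tool report a solution as correct.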