[Table legend: * Residual Norm (backward error); * Error Norm. Both norms are written out after Section C.]

Observations:

A. Direct Methods

Among the direct methods, the LU (Doolittle) algorithm was the most efficient at producing the approximate solution vector, compared with the Cholesky and QR methods, for 𝑛 = 10, 𝑛 = 100, and 𝑛 = 1000. This observation is based on the computed residual norms, the LU (Doolittle) method obtaining the smallest of the three. It is also noticeable that the computed error norms are almost the same for 𝑛 = 100 and 𝑛 = 1000, which indicates that the direct methods converge on nearly the same solution vector and, given the small residual norms, need only a small adjustment to reach the exact solution vector. (A sketch of the Doolittle factorization is given after these observations.)

B. Iterative Methods

The Symmetric Successive Over-Relaxation (SSOR) method obtained the most precise solution vector, its computed residual norm being the smallest among the JOR, SOR, and BSOR methods. In addition, at 𝜔 = 1.0 all of the over-relaxation methods obtained smaller residual norms than at 𝜔 = 0.5. It is also observed that the JOR method alone needed 669 iterations to arrive at the approximate solution vector for 𝑛 = 10. As 𝑛 increases, the computed residual norms grow larger, since approximation errors accumulate over the successive iterations. Among the gradient methods, the Conjugate Residual method required the fewest iterations and produced the smallest computed residual norms (except for 𝑛 = 1000), making it the best of the gradient methods. (Sketches of the over-relaxation and Conjugate Residual iterations follow below.)

C. Methods in Solving the Linear System 𝑨𝒙 = 𝒃

Based on the computed residual norms and error norms, the Conjugate Residual method is the best among the methods discussed: it requires the least adjustment to reach the exact solution vector. Overall, the gradient methods outperform the over-relaxation methods in both efficiency and precision.
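For reference, the residual and error norms underlying these comparisons can be written as follows, with 𝑥* the exact solution and 𝑥̂ the computed one. The use of the 2-norm here is an assumption; the source does not state which vector norm was computed.

```latex
r = b - A\hat{x}, \qquad \text{residual norm (backward error)} = \lVert b - A\hat{x} \rVert_2,
\qquad
e = x^{*} - \hat{x}, \qquad \text{error norm} = \lVert x^{*} - \hat{x} \rVert_2.
```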
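The following is a minimal sketch of the Doolittle variant of LU factorization discussed in Section A (unit diagonal on L, no pivoting), together with the forward and back substitutions that produce the solution vector. The function names, the test matrix, and the residual check are illustrative, not taken from the paper's actual code.

```python
import numpy as np

def doolittle_lu(A):
    """Doolittle LU factorization: A = L @ U with unit diagonal on L.
    No pivoting, so every leading principal minor must be nonsingular."""
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for k in range(n):
        U[k, k:] = A[k, k:] - L[k, :k] @ U[:k, k:]                    # row k of U
        L[k+1:, k] = (A[k+1:, k] - L[k+1:, :k] @ U[:k, k]) / U[k, k]  # column k of L
    return L, U

def lu_solve(A, b):
    """Solve Ax = b: forward substitution on L, then back substitution on U."""
    L, U = doolittle_lu(A)
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                       # L y = b (unit diagonal on L)
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):           # U x = y
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# Illustrative test: a diagonally dominant 10x10 system (safe without pivoting).
rng = np.random.default_rng(0)
A = rng.random((10, 10)) + 10.0 * np.eye(10)
b = rng.random(10)
x = lu_solve(A, b)
print(np.linalg.norm(b - A @ x))             # residual norm of the computed solution
```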
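To make the 𝜔 comparison in Section B concrete, here is a minimal sketch of the SSOR iteration (a forward SOR sweep followed by a backward sweep) and of JOR, each stopped on the residual norm. The zero starting vector, tolerance, and iteration cap are assumptions made for illustration.

```python
import numpy as np

def ssor(A, b, omega=1.0, tol=1e-10, max_iter=10_000):
    """Symmetric SOR: one forward SOR sweep followed by one backward sweep
    per iteration; omega is the relaxation factor (e.g. 0.5 or 1.0)."""
    n = len(b)
    x = np.zeros(n)
    for it in range(1, max_iter + 1):
        for i in range(n):                      # forward sweep (in place)
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] += omega * ((b[i] - s) / A[i, i] - x[i])
        for i in range(n - 1, -1, -1):          # backward sweep
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] += omega * ((b[i] - s) / A[i, i] - x[i])
        if np.linalg.norm(b - A @ x) < tol:     # residual-norm stopping test
            return x, it
    return x, max_iter

def jor(A, b, omega=1.0, tol=1e-10, max_iter=10_000):
    """Jacobi over-relaxation: every component is updated from the previous
    iterate, damped by omega."""
    d = np.diag(A)
    x = np.zeros(len(b))
    for it in range(1, max_iter + 1):
        x = x + omega * (b - A @ x) / d
        if np.linalg.norm(b - A @ x) < tol:
            return x, it
    return x, max_iter
```

Running `ssor(A, b, omega=1.0)` and `ssor(A, b, omega=0.5)` on the same system reproduces the kind of 𝜔 comparison described above; the second return value is the iteration count.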
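For the gradient methods in Section B, a sketch of the Conjugate Residual iteration: it minimizes the residual norm over a growing Krylov subspace, which is consistent with the small residual norms reported. The formulation below is the standard one for symmetric 𝑨; the tolerance and iteration cap are again illustrative.

```python
import numpy as np

def conjugate_residual(A, b, tol=1e-10, max_iter=1000):
    """Conjugate Residual method for symmetric A: at each step the iterate
    minimizes ||b - Ax||_2 over the current Krylov subspace."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    p = r.copy()
    Ar = A @ r
    Ap = Ar.copy()
    rAr = r @ Ar
    for it in range(1, max_iter + 1):
        alpha = rAr / (Ap @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:        # residual-norm stopping test
            return x, it
        Ar = A @ r                         # the single matvec per iteration
        rAr_new = r @ Ar
        beta = rAr_new / rAr
        p = r + beta * p
        Ap = Ar + beta * Ap                # A @ p via the same recurrence as p
        rAr = rAr_new
    return x, max_iter
```

Each iteration needs only one matrix-vector product with 𝑨 (the `A @ r` line), since `A @ p` is maintained by the same recurrence as `p` itself; this economy is one reason the gradient methods compare favorably with the over-relaxation sweeps.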