
Solving systems of linear equations in Haskell

Haskell isn’t normally used for things like this, but it’s quite possible to solve systems of linear equations with Haskell.

There are already several libraries for doing this, and for other, more advanced matrix manipulation. But here, I'm going to start simple.

In mathematics, systems of linear equations are usually represented by an augmented matrix. A system of n linear equations in n unknowns is represented by an augmented matrix with n rows and n+1 columns.

For example, we have this system of equations:

x + 2y - z = -4
2x + 3y - z = -11
-2x - 3z = 22

This would be represented as an augmented matrix:

[  1   2  -1 |  -4 ]
[  2   3  -1 | -11 ]
[ -2   0  -3 |  22 ]

In Haskell we represent this as a list of lists, like this:

[ [1,2,-1,-4], [2,3,-1,-11], [-2,0,-3,22] ]

Here I'll store each entry not as an integer, but as a floating-point number. You could also use Rational from Data.Ratio; either is fine for now.

The advantage of using Rational over Float is that you sometimes end up with fractions that floating-point numbers cannot represent exactly. However, I've found that printing a list of lists of Rational values is difficult to read, unless you implement a custom show function for it.
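If you do want exact arithmetic, here is a minimal sketch of what that could look like; the names RationalRow, RationalMatrix, showEntry and showRow are mine, not part of the code that follows. The helper prints entries as fractions instead of the default "3 % 4" form.

import Data.Ratio (denominator, numerator)

type RationalRow    = [Rational]
type RationalMatrix = [RationalRow]

--print a Rational as "p/q", or just "p" when the denominator is 1
showEntry :: Rational -> String
showEntry x
  | denominator x == 1 = show (numerator x)
  | otherwise          = show (numerator x) ++ "/" ++ show (denominator x)

--print a whole row on one line
showRow :: RationalRow -> String
showRow = unwords . map showEntry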

So this is how we define our matrix types in Haskell:

type Row = [Float]
type Matrix = [Row]
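For reference, here is the example system from above as a value of these types (the binding name exampleMatrix is just for illustration):

exampleMatrix :: Matrix
exampleMatrix = [ [1,2,-1,-4], [2,3,-1,-11], [-2,0,-3,22] ]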

The approach to solving this problem is rather simple. First we reduce the matrix to REF, or Row Echelon Form, and then get the actual solutions with some back substitution.

The algorithm used to transform a matrix into its Row Echelon Form is known as Gaussian Elimination. Here's what a matrix should look like after Gaussian Elimination (a represents any value):

[ 1  a  a | a ]
[ 0  1  a | a ]
[ 0  0  1 | a ]

Our matrix should look like this after Gaussian Elimination:

[ 1  2  -1 | -4 ]
[ 0  1  -1 |  3 ]
[ 0  0   1 | -2 ]

The REF form is not unique, so this is only one of the possible valid outputs of Gaussian Elimination.

Why do we want to have the matrix in REF form? A matrix in this form can easily be solved using back substitution. Consider this matrix as a series of linear equations, as we did before:

x + 2y - z = -4
y - z = 3
z = -2

Now it would be very clear how to solve for the three variables.
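Concretely, reading the rows from the bottom up:

z = -2
y - (-2) = 3, so y = 1
x + 2(1) - (-2) = -4, so x = -8

which is exactly the back substitution the code will perform later.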

The Gaussian Elimination Algorithm

Here is how Gaussian Elimination proceeds. On each iteration, one element on the diagonal is considered the pivot element, while the entries below it in the same column are the ones we intend to remove (zero out) on that iteration.

Zeroing those elements is actually quite simple. Consider how you would eliminate x in the first two equations of our example:

x + 2y - z = -4
2x + 3y - z = -11

Probably you would multiply the first equation by 2, giving 2x + 4y - 2z = -8, then subtract the second equation from it, giving y - z = 3, eliminating x.

We can also write that as 2*(equation 1) - (equation 2). Basically, to eliminate a variable, just multiply a row so it matches up, and subtract. This is middle school algebra.
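Just to connect this to the Haskell we are about to write: that row operation is a zipWith over the two augmented rows. Here it is in GHCi, purely as an illustration:

*Main> zipWith (\a b -> 2*a - b) [1,2,-1,-4] [2,3,-1,-11]
[0,1,-1,3]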

To make things easier for us, we divide the row we are on so that the pivot is always 1. We do this now because we need the pivots to be 1 anyway, and this avoids an unnecessary division in the next step.

We could, of course, not have the pivot always be 1, but we would have to do the divisions later when substituting to get the solutions. More on this later.
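For instance, if a row started with a leading coefficient of 2, dividing every entry by 2 makes the pivot 1. This is just an illustrative GHCi snippet, not a step taken verbatim on our example matrix:

*Main> map (/ 2) [2,3,-1,-11]
[1.0,1.5,-0.5,-5.5]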

So to eliminate the entry under the pivot, multiply the (now normalized) pivot row by that entry and subtract, exactly as in the example above.

We simply repeat this for all elements under the pivot.

An edge case

This is where it gets a little bit tricky. What if the pivot is 0?

We have no way of making it 1 by any kind of multiplication. Further, we cannot eliminate any elements below the pivot. What do we do now?

Simple. We swap the current row with any row below it whose entry in the pivot column is not zero. Any such row will do, so we'll just pick the first one that fits.

If no entry at or below the pivot position is nonzero, the system is either under-determined or singular; in either case it has no unique solution.
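As an aside (not part of the code below), the pivot search could be made total with find from Data.List, returning Nothing for the singular case instead of crashing, which is what the head-based lookup in the code below does on such input. The helper name findPivotRow is mine:

import Data.List (find)

--index of the first row at or below r whose entry in column r is nonzero,
--or Nothing if the system is singular / under-determined
findPivotRow :: Matrix -> Int -> Maybe Int
findPivotRow m r = find (\i -> m !! i !! r /= 0) [r .. length m - 1]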

Here is my Haskell code on what I just covered:

gaussianReduce :: Matrix -> Matrix
gaussianReduce matrix = fixlastrow $ foldl reduceRow matrix [0..length matrix-1] where

  --swaps element at position a with element at position b.
  swap xs a b
    | a > b = swap xs b a
    | a == b = xs
    | a < b = let
        (p1,p2) = splitAt a xs
        (p3,p4) = splitAt (b-a-1) (tail p2)
      in p1 ++ [xs!!b] ++ p3 ++ [xs!!a] ++ (tail p4)

  reduceRow matrix1 r = let
      --first non-zero element on or below (r,r).
      firstnonzero = head $ filter (\x -> matrix1 !! x !! r /= 0) [r..length matrix1-1]

      --matrix with row swapped (if needed)
      matrix2 = swap matrix1 r firstnonzero

      --row we're working with
      row = matrix2 !! r

      --make it have 1 as the leading coefficient
      row1 = map (\x -> x / (row !! r)) row

      --subtract nr from row1 while multiplying
      subrow nr = let k = nr!!r in zipWith (\a b -> k*a - b) row1 nr

      --apply subrow to all rows below
      nextrows = map subrow $ drop (r+1) matrix2

      --concat the lists and repeat
    in take r matrix2 ++ [row1] ++ nextrows

  fixlastrow matrix' = let
      a = init matrix'; row = last matrix'; z = last row; nz = last (init row)
    in a ++ [init (init row) ++ [1, z / nz]]

Edit: There was a bug in the above code, found by Alan Zimmerman. I think it’s been fixed.

This may be a bit difficult to read because there is no syntax highlighting. I'll provide a link to the full source code at the end.

Admittedly Haskell may not have been the best language to implement this algorithm this particular way because there is so much state changing. Any language that allows mutable state would probably perform better than this code.

Notice the fixlastrow function at the end: it normalizes the last row, forcing its pivot to 1 and dividing the constant term accordingly, in case the main fold leaves it unnormalized.

Let’s test this code:

*Main> gaussianReduce [ [1,2,-1,-4], [2,3,-1,-11], [-2,0,-3,22] ]
[[1.0,2.0,-1.0,-4.0],[0.0,1.0,-1.0,3.0],[-0.0,0.0,1.0,-2.0]]

Excellent.

Finishing up

The next step of the algorithm is to solve for the variables by back substitution. This is pretty easy, I think. My code keeps a list of already-found solutions. Folding from the right, each step substitutes the solutions found so far into the current row, then multiplies and subtracts to get the next solution, adding it to the solution list.

--Solve a matrix (must already be in REF form) by back substitution.
substitute :: Matrix -> Row
substitute matrix = foldr next [last (last matrix)] (init matrix) where
  next row found = let
      subpart = init $ drop (length matrix - length found) row
      solution = last row - sum (zipWith (*) found subpart)
    in solution : found
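For example, feeding it the REF matrix we obtained earlier should give the solutions directly (output shown assuming the session above is loaded):

*Main> substitute [ [1,2,-1,-4], [0,1,-1,3], [0,0,1,-2] ]
[-8.0,1.0,-2.0]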

To get a list of solutions from a matrix, we chain the substitute and gaussianReduce functions:

solve :: Matrix -> Row
solve = substitute . gaussianReduce

*Main> solve [ [1,2,-1,-4], [2,3,-1,-11], [-2,0,-3,22] ]
[-8.0,1.0,-2.0]

This means the solutions are x = -8, y = 1, z = -2. That seems correct, so we're done!

The code is far from practical, though. Although it works, I haven’t really tested its performance (probably not very good), and it doesn’t handle all edge cases.

The code is available for download if anyone wants it.

Reposted from: https://www.cnblogs.com/bailiang/archive/2012/06/29/2570154.html
