Open Access
Solving Linear Programming Problems by Reducing to the Form with an Obvious Answer
Author(s) - Gleb D. Stepanov
Publication year - 2021
Publication title - Modelirovanie i Analiz Informacionnyh Sistem
Language(s) - English
Resource type - Journals
eISSN - 2313-5417
pISSN - 1818-1015
DOI - 10.18255/1818-1015-2021-4-434-451
Subject(s) - mathematics, mathematical optimization, linear programming, simplex algorithm, Gaussian elimination, system of linear equations, coefficient matrix, algebra
The article considers a method for solving a linear programming problem (LPP), i.e. the problem of finding the minimum or maximum of a linear functional on the set of non-negative solutions of a system of linear algebraic equations in the same unknowns. The method is obtained by improving the classical simplex method, which, relying on geometric considerations, in effect generalizes the Gaussian complete elimination method for solving systems of equations. The proposed method, like the complete elimination method, proceeds from purely algebraic considerations. It consists in transforming the entire LPP, including the objective functional, into an equivalent problem with an obvious answer. For the convenience of transforming the objective functional, the equations are written with linear functionals on the left-hand side and zeros on the right. The coefficients of these functionals form a matrix called the LPP matrix: row zero contains the coefficients of the objective functional, and $a_{00}$ is its free term. The algorithms are described and justified in terms of transformations of this matrix; in computations the matrix serves as the calculation table. Like the simplex method, the method under consideration consists of three stages. At the first stage the LPP matrix is reduced to a special 1-canonical form. For such matrices one basic solution of the system is obvious, and the objective functional on it equals $a_{00}$, which is very convenient. At the second stage the resulting matrix is transformed into a similar matrix whose column-zero elements (except $a_{00}$) are non-positive, which ensures the non-negativity of the basic solution. At the third stage the matrix is transformed into a matrix that guarantees both non-negativity and optimality of the basic solution. For the second stage, whose analogue in the simplex method uses an artificial basis and is the most time-consuming, two variants without artificial variables are given. In describing the first of them, an easy-to-understand and easy-to-remember analogue of the well-known Farkas lemma is obtained along the way. The second variant is quite simple to use, but its full justification is involved and will be published separately.
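
As a minimal sketch of the tableau bookkeeping described in the abstract (not the article's algorithm: the sign conventions, the toy problem and the choice of pivots below are my own assumptions, and the 1-canonical form and the stage-specific pivot-selection rules are not reproduced), the following Python snippet shows an LPP matrix with the objective functional in row zero and the constraints written as functionals equated to zero, together with the Gauss-Jordan pivot that such matrix transformations rest on; under these conventions $a_{00}$ remains, after every pivot, the value of the objective on the current basic solution.

import numpy as np

def pivot(T, r, c):
    """One Gauss-Jordan pivot on the LPP matrix T at row r, column c.

    Row 0 holds the objective functional (T[0, 0] plays the role of
    a_00); rows 1.. hold constraints written as a_i0 + sum_j a_ij*x_j = 0.
    The pivot turns column c into a unit column with its 1 in row r,
    i.e. x_c becomes the basic variable of row r.
    """
    T = T.astype(float)                   # work on a fresh float copy
    T[r] /= T[r, c]                       # scale so the pivot entry is 1
    for i in range(T.shape[0]):
        if i != r:
            T[i] -= T[i, c] * T[r]        # zero out column c in other rows
    return T

# Toy problem (my own example, not taken from the article):
#   minimize  f = x1 + 2*x2
#   subject to x1 + x2 + x3 = 4,  x1 - x2 + x4 = 2,  x >= 0.
# The constraints are rewritten as functionals equal to zero, so the
# free terms -4 and -2 go into column 0.
T = np.array([
    [ 0.0, 1.0,  2.0, 0.0, 0.0],   # f  = 0 + x1 + 2*x2
    [-4.0, 1.0,  1.0, 1.0, 0.0],   # -4 + x1 + x2 + x3 = 0
    [-2.0, 1.0, -1.0, 0.0, 1.0],   # -2 + x1 - x2 + x4 = 0
])

# Columns 3 and 4 are already unit columns with zeros in row 0, so one
# basic solution is obvious: x3 = 4, x4 = 2, x1 = x2 = 0, and the
# objective on it equals a_00 = T[0, 0] = 0.

# Bringing x1 into the basis in row 2 gives another basic solution;
# a_00 is updated by the same row operations applied to row 0.
T = pivot(T, 2, 1)
print(T[0, 0])     # 2.0      -> objective value at x = (2, 0, 2, 0)
print(-T[1:, 0])   # [2. 2.]  -> values of the basic variables x3, x1

Note how non-negativity of the basic solution corresponds to non-positivity of the column-zero entries (except $a_{00}$), as the abstract states: with this convention each basic variable equals minus the free term of its row.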
