Optimization Engineering
Dr. Lipika Panigrahi
Associate Professor in Mathematics
Gandhi Institute for Education and Technology
Outlines
LINEAR PROGRAMMING – Segment 1
Definitions
Feasibility and optimality
Linear programs in standard form
“Economic” interpretation/application
LP in two dimensions – geometric interpretation
Linear programs in slack form
Linear Programming
Optimize (maximize or minimize) a linear objective function subject to a set of linear constraints (equalities and inequalities).
A problem of this form is called a linear program.
LP:
Feasibility and optimality
(Assuming the LP is a maximization program.)
An LP is feasible if it has a feasible solution.
An LP has an optimal solution iff it is feasible and bounded.
Importance of Linear Programming
Linear Programming has a lot of applications.
LP can be solved efficiently both in theory and in practice.
Many problems, including some of the problems considered in this course, are LP problems.
Many software packages can solve LP problems.
We shall learn the simplex algorithm, and use it to prove the duality theorem for linear programming.
The simplex algorithm behaves very well in practice, but there are pathological examples on which it is not polynomial.
There are other, more complicated, LP algorithms that run in polynomial time (ellipsoid, interior-point, etc.).
LPs in standard form
All variables are constrained to be non-negative. (Very special linear inequalities.)
LPs in standard form
transpose
“Economic” application
How many units of each product should we produce to maximize our profit?
Converting an LP to standard form
Every LP can be easily converted to standard form.
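As a sketch of the conversion (assuming free variables have already been split as x = x⁺ − x⁻, which is not shown here), the mechanical rules are: negate a minimization objective, flip each ≥-constraint to ≤, and split each equality into a ≤ pair:

```python
def to_standard_form(sense, c, constraints):
    """Convert an LP to standard form: maximize c.x s.t. Ax <= b, x >= 0.

    `sense` is "max" or "min"; each constraint is (coeffs, op, rhs) with
    op in {"<=", ">=", "="}.
    """
    if sense == "min":                       # min c.x  ==  -max (-c).x
        c = [-ci for ci in c]
    A, b = [], []
    for coeffs, op, rhs in constraints:
        if op in ("<=", "="):
            A.append(list(coeffs)); b.append(rhs)
        if op in (">=", "="):                # a.x >= r  <=>  -a.x <= -r
            A.append([-a for a in coeffs]); b.append(-rhs)
    return c, A, b

# min x1 - x2  s.t.  x1 + x2 = 3,  x1 - x2 >= 1,  x >= 0
c, A, b = to_standard_form("min", [1, -1],
                           [([1, 1], "=", 3), ([1, -1], ">=", 1)])
```

The equality becomes the pair x1 + x2 ≤ 3 and −x1 − x2 ≤ −3, and the ≥-constraint becomes −x1 + x2 ≤ −1.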
LP in two dimensions
Figure 29.2 from CLRS
Each constraint �defines a half-plane.
The intersection of all these half-planes is a polygon, called the feasibility polygon.
The objective function �specifies a direction.
The goal is to find the furthest feasible point in this direction, i.e., the point with the largest inner product with the direction.
LP in two dimensions
Figure 29.2 from CLRS
Each constraint defines a half-space.
The intersection of the constraints is the feasibility polyhedron.
The extreme points of the polyhedron are called vertices.
The finite intersection of half-spaces is a polyhedron.
There is always an optimal solution which is a vertex.
Equational Form
Standard form → Slack form
The resulting LP is said to be in slack form.
All constraints, except the non-neg constraints are equalities.
Slack form
Standard form
Standard form → Slack form
For those interested, this is the corresponding equational form: (We will not use it explicitly.)
Standard form
Slack form
END OF SEGMENT 1
LINEAR PROGRAMMING – Segment 2
The simplex algorithm by example
The simplex algorithm by example
max
s.t.
max
s.t.
Start with an LP in standard form:
Convert it into slack form:
Basic and non-basic variables
max
s.t.
In a basic solution, all non-basic variables have value 0.
Pivoting step (1)
max
s.t.
Pivoting step (2)
max
s.t.
For brevity, from now on we do not�explicitly show the non-negativity constraints.
It is important to remember that they are still there!
Pivoting step (3)
max
s.t.
Pivoting step (4)
max
s.t.
Pivoting step (5)
max
s.t.
An equivalent slack form
max
s.t.
The new form is completely equivalent to the previous form.
The objective function is the same.
We have a new basic feasible solution of value 27.
Another pivoting step
max
s.t.
We do row operations to get a new equivalent form.
Another equivalent slack form
max
s.t.
Final slack form
max
s.t.
It achieves a value of 28.
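The run above (the CLRS example, with bfs values 0 → 27 → 28) can be reproduced with a compact tableau simplex. This is a sketch that uses Bland's rule to pick the entering variable, so its pivot order is not necessarily the slides' exact sequence:

```python
def simplex(c, A, b):
    """Simplex for: maximize c.x s.t. Ax <= b, x >= 0, assuming b >= 0.
    Uses Bland's rule for the entering variable (anti-cycling)."""
    m, n = len(A), len(c)
    # Tableau rows: [A | I | b]; last row holds negated reduced costs.
    T = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
         for i in range(m)]
    T.append([-ci for ci in c] + [0.0] * (m + 1))
    basis = list(range(n, n + m))          # start from the slack basis
    while True:
        # Entering variable: smallest index with negative reduced cost.
        e = next((j for j in range(n + m) if T[m][j] < -1e-9), None)
        if e is None:
            break                          # all reduced costs >= 0: optimal
        # Leaving variable: minimum ratio test (ties broken by basis index).
        ratios = [(T[i][-1] / T[i][e], basis[i], i)
                  for i in range(m) if T[i][e] > 1e-9]
        if not ratios:
            raise ValueError("LP is unbounded")
        _, _, l = min(ratios)
        piv = T[l][e]
        T[l] = [v / piv for v in T[l]]     # normalize the pivot row
        for i in range(m + 1):             # eliminate column e elsewhere
            if i != l and abs(T[i][e]) > 1e-12:
                f = T[i][e]
                T[i] = [vi - f * vl for vi, vl in zip(T[i], T[l])]
        basis[l] = e
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, T[m][-1]
```

On c = (3, 1, 2), A = [[1, 1, 3], [2, 2, 5], [4, 1, 2]], b = (30, 24, 36) this passes through the intermediate objective value 27 and returns the optimum 28 at x = (8, 4, 0), matching the numbers in these slides.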
END OF SEGMENT 2
LINEAR PROGRAMMING – Segment 3
The simplex algorithm
The simplex algorithm [Dantzig (1947)]
Start with a problem in standard form.
Try to find an equivalent slack form that yields a basic feasible solution (bfs).
If no such slack form is found, the problem is infeasible.
If all coefficients in the objective function are non-positive, then the current bfs is optimal.
The simplex algorithm – more details
If no initial bfs can be found, the problem is infeasible.
If there are several non-basic variables with positive coefficients (also called reduced costs), we need to choose one of them. This is done using a pivoting rule.
If there are no degenerate steps, then the objective value strictly increases, and the algorithm must eventually terminate.
Basic and non-basic variables
The slack form, also known as the simplex tableau:
The basic solution corresponding to the slack form is:
If it is feasible, it is called a basic feasible solution (bfs).
If the set is empty, the problem is unbounded.
A pivoting step – updating the table (1)
A pivoting step – updating the table (2)
A pivoting step – updating the table (3)
Finally, we update the objective function:
All simple algebraic operations.
END OF SEGMENT 3
THE SIMPLEX ALGORITHM – Segment 4
Pseudocode
Termination
Degeneracy
Cycling
Pivoting rules
Complexity
The simplex algorithm – pseudocode
A pivoting step
Convert the slack form:
into the slack form:
Termination
There may, however, be degenerate iterations in which the objective function stays the same.
If there are no degenerate iterations, then the objective function strictly increases, so we can never revisit a bfs.
Thus, if there are no degenerate iterations, the simplex algorithm terminates after a finite number of iterations.
Note, however, that the upper bound on the number of iterations is not polynomial.
Degeneracy
Example of degeneracy:
degenerate step
Degeneracy occurs a lot when solving combinatorial problems.
Cycling
In the example given, the degenerate step �was followed by a non-degenerate step.
Can we have several consecutive degenerate steps?
Can the degenerate steps form a cycle, i.e., return to a previously visited bfs?
Unfortunately, cycling can happen, if we do not �carefully choose the entering and leaving variables.
(The smallest known cycling example is an LP with 7 variables that gives a cycle of length 6.)
Avoiding cycles
A crude way of avoiding cycles is to slightly perturb the linear program. Cycles then become unlikely.
But this slightly changes the optimal solution. (This can be fixed by doing the perturbations ‘symbolically’.)
A more satisfying solution is to use Bland’s rule.
Bland’s rule: (Use the smallest index)
Theorem: The simplex algorithm with Bland’s rule cannot cycle.
(We do not prove the theorem in this course.)
Pivoting rules
If cycles are avoided, the simplex algorithm returns an optimal solution no matter which pivoting rule is used.
However, the pivoting rule can have a significant effect on the efficiency of the simplex algorithm.
Note: We can revert to Bland’s rule if we see that we are about to perform a degenerate step.
Common pivoting rules
Unfortunately, the simplex algorithm with these rules, and many other rules, is not polynomial in the worst case.
It is a major open problem whether there is a pivoting rule that makes the simplex algorithm polynomial.
END OF SEGMENT 4
THE SIMPLEX ALGORITHM – Segment 5
Finding an initial bfs
The auxiliary LP
Existence of optimal basic solutions
Checking feasibility and finding an initial basic feasible solution
Standard form
Slack form
We need to check whether the LP is feasible, and if so, find a bfs to start the simplex algorithm from.
Auxiliary LP
Original LP
Auxiliary LP
To check feasibility, we introduce an auxiliary LP.
Running simplex on the Auxiliary LP
To solve the auxiliary LP we use the simplex algorithm.
We first transform it into slack form.
But, don’t we have exactly the same problem?
But, we can easily manipulate the slack form to get a bfs.
Finding a bfs of the auxiliary LP
The new basic solution is feasible!
Solving the auxiliary LP
We found a bfs of the auxiliary LP, i.e., a slack form that defines a bfs.
We can thus use the simplex algorithm to solve the auxiliary LP.
We do not present pseudo-code.
The auxiliary LP is bounded, as the value of the objective function is at most 0.
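One common form of the auxiliary LP (the CLRS-style construction, which these slides appear to follow) can be written as:

```latex
\max\ -x_0 \quad\text{s.t.}\quad \sum_{j} a_{ij}x_j - x_0 \le b_i \ \ (1 \le i \le m),
\qquad x_j \ge 0,\ \ x_0 \ge 0 .
```

The original LP is feasible iff the optimum of the auxiliary LP is 0: setting x = 0 and x₀ large enough is always feasible, and a single pivot on x₀ (against the most violated constraint) turns this into a bfs from which the simplex algorithm can start.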
Optimal basic solutions
Corollaries of the correctness of the simplex algorithm:
Other implications of the simplex algorithm will be discussed in the next segment when we consider LP duality.
(The theorems say that there is always an optimal solution which is a vertex (extreme point) of the feasibility polyhedron.)
END OF SEGMENT 5
LINEAR PROGRAMMING – Segment 6
LP duality
“Deriving” the dual program.
Weak duality
Defining the dual program.
LP duality theorem
LP duality
primal
To every LP in standard form we define a dual:
dual
Maximization becomes minimization.
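In symbols, the primal-dual pair for a standard-form LP (with constraint matrix A) is:

```latex
\begin{aligned}
\textbf{(P)}\quad & \max\ c^{\mathsf T}x
  && \text{s.t. } Ax \le b,\ x \ge 0 \\
\textbf{(D)}\quad & \min\ b^{\mathsf T}y
  && \text{s.t. } A^{\mathsf T}y \ge c,\ y \ge 0
\end{aligned}
```

Each dual variable yᵢ corresponds to the i-th primal constraint, and each dual constraint corresponds to a primal variable xⱼ.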
LP duality theorem
Theorem: If the primal and dual linear programs are both feasible, then their optimal values are equal.
primal
dual
Deriving the dual
We want to get an upper bound on the optimal value of the primal.
The theorem, which needs a proof, says that the best bound is tight!
primal
Weak duality
In the previous slide we proved the following weaker version of the duality theorem, known as weak duality:
Here is the same proof written in matrix notation:
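In matrix notation, the weak-duality argument is a one-line chain: for any primal-feasible x and dual-feasible y,

```latex
c^{\mathsf T}x \;\le\; (A^{\mathsf T}y)^{\mathsf T}x \;=\; y^{\mathsf T}(Ax) \;\le\; y^{\mathsf T}b ,
```

where the first inequality uses Aᵀy ≥ c together with x ≥ 0, and the second uses Ax ≤ b together with y ≥ 0.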
END OF SEGMENT 6
LINEAR PROGRAMMING – Segment 7
LP duality
Economic interpretation of the dual
Interpretation of the dual (1)
Recall:
primal
dual
Interpretation of the dual (2)
Primal: How do we maximize our profit?
primal
dual
Interpretation of the dual (3)
Dual: What is the minimal total worth of an offer we accept?
Instead of producing products, we try to sell the raw materials.
primal
dual
“Shadow prices”
END OF SEGMENT 7
LINEAR PROGRAMMING – Segment 8
LP duality
Proof of the duality theorem
Lemma
Before proving the lemma we use it to prove strong duality.
Consider an initial slack form:
and the objective function of an equivalent slack form:
Then:
Reduced costs
Proof of duality theorem
We show that the simplex algorithm produces not only a primal solution, but also a dual solution, and they both have the same value. By weak duality, they are both optimal.
Proof of lemma
Simple calculations similar to those we saw before.
Claim of lemma
END OF SEGMENT 8
LINEAR PROGRAMMING – Segment 9
LP duality
A “full” version of the duality theorem
Duals of LPs that are not in standard form
Farkas’ Lemma
Full LP duality theorem
(P)
(D)
(1) If (P) is feasible and bounded, then so is (D), and vice versa. In this case (P) and (D) both have optimal values, and their optimal values are equal.
(2) If (P) is feasible but unbounded, then (D) is infeasible.
(It is possible that both (P) and (D) are infeasible.)
(3) If (D) is feasible but unbounded, then (P) is infeasible.
Primal/Dual pairs
Standard form
Inequalities form
Equational form
The LP duality theorem holds for all these forms.
The dual of the dual is the primal.
Farkas’ Lemma (1902)
Proof: Consider the following primal-dual pair of programs:
(P)
(D)
(1) says that (P) is feasible and bounded.
(P) is always bounded. (D) is always feasible. By the duality theorem:
If (P) is feasible (and bounded), then (D) is bounded. Only (1) holds.
(2) says that (D) is feasible and unbounded.
Dualization Recipe
| Max problem (P) | Min problem (D) |
variables | | |
matrix | | |
constants | | |
objective | | |
-constraint in (P) | | |
-constraint in (D) | | |
(You do not need to memorize it.)
END OF SEGMENT 9
LINEAR PROGRAMMING – Segment 10
Linear Programming vs. Integer Programming
“Economic” application revisited
(LP)
Can we force the variables to assume only integral values?
Integer (Linear) Programs
(IP)
The simplex algorithm does not solve IP problems.
Solving IP problems is NP-hard.
Sometimes, we can ignore the integrality constraints, solve the resulting LP relaxation, and then round the solution, hoping that the solution obtained is close to optimal.
This gives us an approximation algorithm.
In some “lucky” cases, the LP relaxation always has optimal integral solutions, so we get an exact result.
IP for Maximum Independent Set
(IP)
Finding an independent set of maximum size is NP-hard (the decision version is NP-complete).
We can easily get an IP for the MIS problem.
Thus, IP is NP-hard as well.
END OF SEGMENT 10
LINEAR PROGRAMMING – Segment 11
Formulating problems as Linear Programs
Single Source Shortest Paths
Formulating problems as linear programs
A wide variety of problems can be formulated as linear programs.
We focus in this course on graph/network related problems.
We consider LP formulations of the following problems:
Shortest paths
Maximum flow in a network
Minimum cost flow
Multi-commodity flow
For Shortest Paths we saw more efficient direct algorithms.
For Maximum flow we shall see more efficient direct algorithms.
The power of LP is its versatility and robustness.
For specific problems, more efficient algorithms can be found.
Shortest Paths – reminder
Proof: Review exercise on shortest paths.
Shortest Paths as an LP problem
(LP)
Shortest Paths as an LP problem
Shortest Paths is a minimization problem.
(LP)
Why did we formulate it as a maximization LP problem?
We can get an alternative formulation of Shortest Paths as a minimization LP problem by taking the dual. (Stay tuned!)
(LP) is infeasible.
(LP) is unbounded.
Shortest Paths as a maximization problem
(LP)
Each edge is represented by a piece of rope.
The ropes of the edges incident on a vertex are tied together.
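With distance variables d_v, the rope picture corresponds to the standard maximization formulation (a sketch, with s the source and t the target vertex):

```latex
\max\ d_t \quad\text{s.t.}\quad d_s = 0, \qquad d_v \le d_u + w(u,v)\ \ \forall (u,v)\in E .
```

Pulling s and t apart stretches every rope only as far as its length (the edge weight) allows, which is exactly the constraint system above.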
Shortest Paths as an LP problem
What did we gain from this?
(LP)
We know much more efficient algorithms for finding shortest paths directly.
First, it is an instructive example.
Second, if we change the problem definition, our shortest paths algorithms are likely to stop working, while the LP approach may remain valid.
Examples, beyond the scope of this course, are Stochastic Shortest Paths (SSPs) and Markov Decision Processes (MDPs).
END OF SEGMENT 11
LINEAR PROGRAMMING – Segment 12
Formulating problems as Linear Programs: Network flow problems
Maximum flow
Minimum cost flow
Multicommodity flow
Network Flow
capacities
source
sink
Maximum flow
capacity
flow
Maximum Flow is an LP problem
Maximize the total net amount of flow entering the sink. (We show later that all this flow must leave the source.)
Subject to the conservation constraints,
capacity constraints,
and non-negativity constraints.
The problem is always feasible and bounded, so it has an optimal solution.
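As a sketch, with capacities c(e) and a flow variable f(e) per edge, the LP just described is:

```latex
\max \sum_{(v,t)\in E} f(v,t)
\quad\text{s.t.}\quad
\sum_{(u,v)\in E} f(u,v) = \sum_{(v,w)\in E} f(v,w)\ \ \forall v \ne s,t,
\qquad 0 \le f(e) \le c(e)\ \ \forall e\in E .
```

The first family of constraints is conservation, and the bounds on f(e) are the capacity and non-negativity constraints.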
Maximum Flow and relatives
As Maximum Flow is an LP problem, we immediately get fairly efficient (polynomial time) algorithms for solving it.
This demonstrates the versatility of the LP problem.
As with shortest paths, we can get even more efficient maximum flow algorithms by studying the problem directly.
We next consider several variants of the maximum flow problem:
Using LP, we immediately get algorithms for them too.
More efficient algorithms can also be obtained for some of these problems, but we will not do it in this course.
Minimum cost flow
Multicommodity flow
Minimum cost multicommodity flow
Maximum flow with losses
Minimum cost Flow
capacity
source
sink
cost
Minimum cost Flow is an LP problem
If the problem is feasible, then it is bounded and has an optimal solution.
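One common formulation (a sketch, assuming a required flow value d must be shipped from s to t) is:

```latex
\min \sum_{e\in E} \mathrm{cost}(e)\, f(e)
\quad\text{s.t.}\quad
\text{conservation at every } v \ne s,t, \qquad
\sum_{(s,v)\in E} f(s,v) = d, \qquad 0 \le f(e) \le c(e) .
```

Compared with the maximum-flow LP, the objective changes from flow value to cost, and the flow value becomes a constraint.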
Minimum cost Flow with general supplies and demands
Multicommodity Flow
Can this be done?
Multicommodity Flow is an LP problem
The problem is always feasible and bounded, and hence has an optimal solution.
Note how quickly we were able to adapt the LP formulations of the various flow problems we considered.
END OF SEGMENT 12
LINEAR PROGRAMMING – Segment 13
LP duals of:
Maximum flow
Minimum cost flow
Shortest Paths
The LP dual of Maximum Flow
Before taking the dual, we perform a “trick”. (Not essential, but it slightly simplifies things.)
Maximum Flow – Primal LP
We assume that no edges enter the source, and that no edges leave the sink.
For “convenience”, we measure the flow going out of the source, rather than the flow going into the sink.
Maximum Flow – Primal and dual LPs
Maximum Flow – Primal and dual LPs
(P)
(D)
(D)
Maximum Flow – dual LP
(D)
What does it mean?
The capacity of a cut is the sum of the capacities of the edges that cross the cut.
Maximum Flow – Minimum Cut
(D)
We shall soon prove the max-flow min-cut theorem, which is a special case of the LP duality theorem.
The primal LP for Shortest Paths
The primal LP for the shortest paths problem is a minimum cost flow problem with no capacities!
An optimal solution of (P) is easily obtained from a shortest paths tree.
Corollary: (P) always has an integral optimal solution.
(P)
END OF SEGMENT 13
END OF LINEAR PROGRAMMING
Transportation Problem (TP) and Assignment Problem (AP)
(special cases of Linear Programming)
1. Transportation Problem (TP)
Distributing any commodity from any group of supply centers, called sources, to any group of receiving centers, called destinations, in such a way as to minimize the total distribution cost (shipping cost).
1. Transportation Problem (TP)
Total supply must equal total demand.
If total supply exceeds total demand, a dummy destination is created whose demand equals the difference between total supply and total demand. Similarly, if total supply is less than total demand, a dummy source is created whose supply equals the difference.
All unit shipping costs into a dummy destination or out of a dummy source are 0.
Example 1:
Example 2:
| Source \ Destination | D1 | D2 | D3 | D4 | Supply |
| S1 | 50 | 75 | 35 | 75 | 12 |
| S2 | 65 | 80 | 60 | 65 | 17 |
| S3 | 40 | 70 | 45 | 55 | 11 |
| (D) | 0 | 0 | 0 | 0 | 10 |
| Demand | 15 | 10 | 15 | 10 | |
Transportation Tableau:
1. Northwest Corner Starting Procedure
1. Select the remaining variable in the upper left (northwest) corner and note the supply remaining in the row, s, and the demand remaining in the column, d.
2. Allocate the minimum of s or d to this variable. If this minimum is s, eliminate all variables in its row from future consideration and reduce the demand in its column by s; if the minimum is d, eliminate all variables in the column from future consideration and reduce the supply in its row by d.
REPEAT THESE STEPS UNTIL ALL SUPPLIES HAVE BEEN ALLOCATED.
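The two steps above can be sketched in code. The data below are the Example 2 tableau from the earlier slide, including the zero-cost dummy source row:

```python
def northwest_corner(supply, demand, cost):
    """Northwest-corner starting procedure for a balanced transportation
    problem.  Returns the allocation matrix and its total shipping cost."""
    s, d = list(supply), list(demand)
    m, n = len(s), len(d)
    alloc = [[0] * n for _ in range(m)]
    i = j = 0
    while i < m and j < n:
        q = min(s[i], d[j])        # allocate as much as possible at (i, j)
        alloc[i][j] = q
        s[i] -= q
        d[j] -= q
        if s[i] == 0:              # supply in row i exhausted: move down
            i += 1
        else:                      # demand in column j exhausted: move right
            j += 1
    total = sum(cost[a][b] * alloc[a][b] for a in range(m) for b in range(n))
    return alloc, total

# Example 2 tableau (dummy source row of zero costs included)
cost = [[50, 75, 35, 75], [65, 80, 60, 65], [40, 70, 45, 55], [0, 0, 0, 0]]
alloc, total = northwest_corner([12, 17, 11, 10], [15, 10, 15, 10], cost)
```

On this data the procedure walks the staircase x11 = 12, x21 = 3, x22 = 10, x23 = 4, x33 = 11, x44 = 10, giving a feasible starting allocation of cost 2330.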
Initial Solution Procedure:
Total shipping cost = 2250
2. Least Cost Starting Procedure
1. For the remaining variable with the lowest unit cost, determine the remaining supply left in its row, s, and the remaining demand left in its column, d (break ties arbitrarily).
2. Allocate the minimum of s or d to this variable. If this minimum is s, eliminate all variables in its row from future consideration and reduce the demand in its column by s; if the minimum is d, eliminate all variables in the column from future consideration and reduce the supply in its row by d.
REPEAT THESE STEPS UNTIL ALL SUPPLIES HAVE BEEN ALLOCATED.
Total shipping cost = 2065
3. Vogel’s Approximation Method Starting Procedure
1. For each remaining row and column, determine the difference between the lowest two remaining costs; these are called the row and column penalties.
2. Select the row or column with the largest penalty found in step 1 and note the supply remaining for its row, s, and the demand remaining in its column, d.
3. Allocate the minimum of s or d to the variable in the selected row or column with the lowest remaining unit cost. If this minimum is s, eliminate all variables in its row from future consideration and reduce the demand in its column by s; if the minimum is d, eliminate all variables in the column from future consideration and reduce the supply in its row by d.
REPEAT THESE STEPS UNTIL ALL SUPPLIES HAVE BEEN ALLOCATED.
Total shipping cost = 2030
Solving TP – Transportation Simplex Method
1. Find the current Cij–Zij values for each nonbasic variable and select the one with the most negative Cij–Zij value as the entering variable; if all Cij–Zij values are nonnegative, the current solution is optimal.
2. Determine which basic variable reaches 0 first when the entering variable is increased.
3. Determine a new basic solution and repeat the steps.
Step 1: Determine the Cij–Zij Values for the Nonbasic Variables
1. If Ui is the dual variable associated with the i-th supply constraint, and Vj is the dual variable associated with the j-th demand constraint, then for shipments from node i to node j, one can find the corresponding Zij value by Zij = Ui - Vj. Thus the Cij–Zij value for variable Xij is found by
Cij - Zij = Cij - (Ui - Vj) = Cij - Ui + Vj
2. Given that there is a redundant equation among the m + n constraints (and any of the m + n constraints can be considered the redundant one), one can show that the Ui or Vj associated with the redundant equation is 0. Thus one Ui or Vj can arbitrarily be selected and set to 0. Arbitrarily choose U1 = 0.
3. Since the Cij–Zij values for basic variables are 0 (i.e., Cij - Ui + Vj = 0 for basic variables), we can easily solve for the remaining values of the Ui’s and Vj’s from the m + n - 1 equations for the basic variables.
4. Once the Ui’s and Vj’s have been determined, the Cij–Zij values for the nonbasic variables can be calculated by
Cij - Zij = Cij - Ui + Vj
Non-basic cells:
Note: X13 is the entering variable.
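Step 1 can be sketched in code. The basis below is an assumption for illustration (a northwest-corner basis for the Example 2 data, including one degenerate cell); the slides' own tableau may use a different basis, so the numbers need not match theirs:

```python
def compute_duals(costs, basis):
    """Solve Cij - Ui + Vj = 0 over the basic cells, with U1 = 0,
    following the sign convention Zij = Ui - Vj used in these slides."""
    U, V = {0: 0}, {}
    changed = True
    while changed:                  # propagate along the basis "tree"
        changed = False
        for i, j in basis:
            if i in U and j not in V:
                V[j] = U[i] - costs[i][j]
                changed = True
            elif j in V and i not in U:
                U[i] = costs[i][j] + V[j]
                changed = True
    return U, V

# Example 2 costs; hypothetical northwest-corner basis (one degenerate cell)
costs = [[50, 75, 35, 75], [65, 80, 60, 65], [40, 70, 45, 55], [0, 0, 0, 0]]
basis = [(0, 0), (1, 0), (1, 1), (1, 2), (2, 2), (3, 2), (3, 3)]
U, V = compute_duals(costs, basis)

def reduced_cost(i, j):
    return costs[i][j] - U[i] + V[j]
```

For this basis, every basic cell has reduced cost 0 by construction, and X13 has reduced cost 35 − 0 − 45 = −10 < 0, so it would be chosen as the entering variable, consistent with the slides' note that X13 enters.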
Step 2: Determine Which Current Basic Variable Reaches 0 First
Note: 1. Cycle property
2. X12 is the leaving variable
Step 3: Determine the Next Transportation Tableau
Total shipping cost = 2020
{Improvement = 2 × (−5) = −10; the total cost drops by 10, from 2030 to 2020}
Transportation Simplex Method
Find an initial basic feasible solution by some starting procedure. Then,
1. Set U1 = 0. Solve for the other Ui’s and Vj’s by:
Cij – Ui + Vj = 0 for basic variables.
Then calculate the Cij–Zij values for nonbasic variables by:
Cij – Zij = Cij – Ui + Vj
Choose the nonbasic variable with the most negative Cij–Zij value as the entering variable. If all Cij–Zij values are nonnegative, STOP; the current solution is optimal.
2. Find the cycle that includes the entering variable and some of the BASIC variables. Alternating positive and negative changes on the cycle, determine the “change amount” as the smallest allocation on the cycle at which a subtraction will be made.
3. Modify the allocations to the variables of the cycle found in step 2 by the “change amount” and return to step 1.
Note: there must be m + n - 1 basic variables for the transportation simplex method to work!
=> Add dummy source or dummy destination, if necessary
(m=# of sources and n=# of destinations)
2. The Assignment Problem (AP) —
a special case of TP with m = n and si = dj = 1 for all i and j.
The Hungarian Algorithm
=> solves the problem of finding a least-cost assignment of m workers to m jobs
Assumptions:
1. There is a cost assignment matrix for the m “people” to be assigned to m “tasks.” (If necessary dummy rows or columns consisting of all 0’s are added so that the numbers of people and tasks are the same.)
2. All costs are nonnegative.
3. The problem is a minimization problem.
The Hungarian Algorithm
Initialization
1. For each row, subtract the minimum number from all numbers in that row.
2. In the resulting matrix, subtract the minimum number in each column from all numbers in the column.
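The two initialization steps can be sketched as follows (the 3×3 matrix in the example call is made up for illustration, not taken from the slides):

```python
def reduce_matrix(cost):
    """Hungarian-algorithm initialization: subtract each row's minimum,
    then each resulting column's minimum."""
    m = [row[:] for row in cost]
    for row in m:                               # step 1: row reduction
        r = min(row)
        for j in range(len(row)):
            row[j] -= r
    for j in range(len(m[0])):                  # step 2: column reduction
        c = min(row[j] for row in m)
        for row in m:
            row[j] -= c
    return m

reduced = reduce_matrix([[4, 1, 3], [2, 0, 5], [3, 2, 2]])
```

After both steps every row and every column of the reduced matrix contains at least one 0, which is where the zero-cost assignments of the iterative steps are sought.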
Iterative Steps
1. Make as many 0 cost assignments as possible. If all workers are assigned, STOP; this is the minimum cost assignment. Otherwise draw the minimum number of horizontal and vertical lines necessary to cover all 0’s in the matrix. (A method for making the maximum number of 0 cost assignments and drawing the minimum number of lines to cover all 0’s follows.)
2. Find the smallest value not covered by the lines; this number is the reduction value.
3. Subtract the reduction value from all numbers not covered by any lines. Add the reduction value to any number covered by both a horizontal and vertical line.
GO TO STEP 1.
For small problems, one can usually determine the maximum number of zero cost assignments by observation. For larger problems, the following procedure can be used:
Determining the Maximum Number of Zero-Cost Assignments
1. For each row, if only one 0 remains in the row, make that assignment and eliminate the row and column from consideration in the steps below.
2. For each column, if only one 0 remains, make that assignment and eliminate that row and column from consideration.
3. Repeat steps 1 and 2 until no more assignments can be made. (If 0’s remain, this means that there are at least two 0’s in each remaining row and column. Make an arbitrary assignment to one of these 0’s and repeat steps 1 and 2.)
Again, for small problems, the minimum number of lines required to cover all the 0’s can usually be determined by observation. The following procedure, based on network flow arguments, can be used for larger problems:
Drawing the Minimum Number of Lines to Cover All 0’s
1. Mark all rows with no assignments (with a “‧”).
2. For each row just marked, mark each column that has a 0 in that row (with a “‧”).
3. For each column just marked, mark each row that has an assignment in that column (with a “‧”).
4. Repeat steps 2 and 3 until no more marks can be made.
5. Draw lines through unmarked rows and marked columns.
Example:
Minimum uncovered
number
Minimum uncovered
number
CONVERSION OF A MAXIMIZATION PROBLEM TO A MINIMIZATION PROBLEM
The Hungarian algorithm works only if the matrix is a cost matrix. A maximization assignment problem can be converted to a minimization problem by creating a lost opportunity matrix. The problem then is to minimize the total lost opportunity.
Profit Matrix:
J1 J2 J3 J4
W1 67 58 90 55
W2 58 88 89 56
W3 74 99 80 22
(D) 0 0 0 0
The lost opportunity matrix given below is derived by subtracting each number in the J1 column from 74, each number in the J2 column from 99, each number in the J3 column from 90, and each number in the J4 column from 56.
J1 J2 J3 J4
W1 7 41 0 1
W2 16 11 1 0
W3 0 0 10 34
(D) 74 99 90 56
The Hungarian algorithm can now be applied to this lost opportunity matrix to determine the maximum profit set of assignments.
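The column-maximum subtraction can be checked in code against the two matrices above:

```python
def lost_opportunity(profit):
    """Convert a profit (maximization) matrix into a lost-opportunity
    (minimization) matrix by subtracting each entry from its column maximum."""
    col_max = [max(row[j] for row in profit) for j in range(len(profit[0]))]
    return [[col_max[j] - row[j] for j in range(len(row))] for row in profit]

profit = [[67, 58, 90, 55],   # W1
          [58, 88, 89, 56],   # W2
          [74, 99, 80, 22],   # W3
          [0, 0, 0, 0]]       # dummy worker (D)
lost = lost_opportunity(profit)
```

The result reproduces the lost opportunity matrix shown above, with column maxima 74, 99, 90, 56.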
Integer linear programming
Formulation
Example
job | Pay ($) | Time (min) | Fun index |
1 | 1 | 3 | 3 |
2 | 2 | 5 | 2 |
3 | 1 | 2 | 2 |
4 | 3 | 2 | 1 |
Maximize fun time so that you make at least $75 and do not spend more than two hours.
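From the table, with x_j the number of units of job j, one formulation consistent with the Solver output below is:

```latex
\max\ 3x_1 + 2x_2 + 2x_3 + x_4
\quad\text{s.t.}\quad
x_1 + 2x_2 + x_3 + 3x_4 \ge 75,\qquad
3x_1 + 5x_2 + 2x_3 + 2x_4 \le 120,\qquad x_j \ge 0 .
```

The first constraint is the $75 pay requirement and the second is the 120-minute time budget.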
Example Formulation
job | Pay | Time | Fun |
1 | 1 | 3 | 3 |
2 | 2 | 5 | 2 |
3 | 1 | 2 | 2 |
4 | 3 | 2 | 1 |
Only two variables are likely to be non-zero. Which do you expect them to be?
Continuous solution from Solver
Objective Cell (Max)
| Cell | Name | Original Value | Final Value |
| $E$1 | fun | 80 | 112.4999958 |

Variable Cells
| Cell | Name | Original Value | Final Value | Integer |
| $B$1 | x1 | 10 | 0 | Contin |
| $B$2 | x2 | 10 | 0 | Contin |
| $B$3 | x3 | 10 | 52.49999759 | Contin |
| $B$4 | x4 | 10 | 7.500000626 | Contin |

Constraints
| Cell | Name | Cell Value | Formula | Status | Slack |
| $E$2 | pay | 74.99999946 | $E$2>=75 | Binding | 0 |
| $E$3 | time | 119.9999964 | $E$3<=120 | Not Binding | 3.57628E-06 |
Integer solution
Objective Cell (Max)
| Cell | Name | Original Value | Final Value |
| $E$1 | fun | 112.4999958 | 112.0000005 |

Variable Cells
| Cell | Name | Original Value | Final Value | Integer |
| $B$1 | x1 | 0 | 2.000000156 | Contin |
| $B$2 | x2 | 0 | 0 | Contin |
| $B$3 | x3 | 52.49999759 | 49 | Integer |
| $B$4 | x4 | 7.500000626 | 8 | Integer |

Constraints
| Cell | Name | Cell Value | Formula | Status | Slack |
| $E$2 | pay | 75.00000016 | $E$2>=75 | Binding | 0 |
| $E$3 | time | 120.0000005 | $E$3<=120 | Binding | 0 |
| $B$3=Integer |
| $B$4=Integer |
Integer problems in laminate design
Relaxation: A general optimization technique
600.325/425 Declarative Methods - J. Eisner
Relaxation: A general optimization technique
Integrality constraints: if we drop all of these, we can just use simplex. This is the “LP relaxation” of the ILP problem.
But how can we add integrality constraints back?
(Simplex relies on having dropped them all.)
Rounding doesn’t work
image adapted from Jop Sibeyn
round to nearest int (3,3)? No, infeasible.
round to nearest feasible int (2,3) or (3,2)? No, suboptimal.
round to nearest integer vertex (0,4)? No, suboptimal.
Really do have to add the integrality constraints back somehow, and solve a new optimization problem.
Cutting planes: add new linear constraints
figure adapted from Papadimitriou & Steiglitz
Add new linear constraints: Cutting planes
figure adapted from Papadimitriou & Steiglitz
Example
No integrality constraints!
But optimal solution is the same.
example from H. P. Williams
Chvátal cuts
example from H. P. Williams
even coeffs: divide by 2
Must be integer because of the integrality constraints on x1, x2, x3
round
+ ( )
+ 5 * ( )
7x1 + 21x2 + 7x3 ≥ 57
divide by 7 and round up: x1 + 3x2 + x3 ≥ 9
Gomory cuts
figure adapted from Papadimitriou & Steiglitz
Branch and bound: Disjunctive cutting planes!
figure from H. P. Williams
why?
why?
For each leaf, why is it okay to stop there?
When does solving the relaxed problem ensure integral X2?
Remember branch-and-bound from constraint programming?
figure thanks to Tobias Achterberg
Branch and bound: Pseudocode
pseudocode thanks to Tobias Achterberg
In notation, c^ is the upper bound (feasible but poor objective); it decreases globally.
cv is the lower bound (good objective but infeasible); it increases down the tree.
We can round and do stochastic local search to try to find a feasible solution x^ near xv, to improve the upper bound c^ further.
Branch&cut: add new constraints here (cutting planes, conflict clauses, or pick from a huge set (“row generation”)). Branch&price: add new non-0 vars picked from a huge set (“column generation”).
Simplify if desired by propagation; then relax by dropping integrality constraints.
Or set c^, x^ to a known feasible solution.
May simplify (“presolve”) it first.
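The bound-and-branch idea (solve a relaxation for an optimistic bound, prune when it cannot beat the incumbent) can be illustrated on 0-1 knapsack, where the LP relaxation has a closed-form greedy solution. This is a self-contained illustration of the technique, not the MIP solver sketched in the slides:

```python
def knapsack_bb(values, weights, capacity):
    """Branch and bound for 0-1 knapsack.  The bound is the LP relaxation,
    which for knapsack is solved greedily with one fractional item."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]
    n, best = len(v), 0

    def bound(i, room, value):
        # optimistic bound: fill remaining room greedily, fractionally
        while i < n and w[i] <= room:
            room -= w[i]
            value += v[i]
            i += 1
        if i < n:
            value += v[i] * room / w[i]
        return value

    def branch(i, room, value):
        nonlocal best
        best = max(best, value)            # incumbent = c^ analogue
        if i == n or bound(i, room, value) <= best:
            return                         # prune: relaxation can't beat incumbent
        if w[i] <= room:
            branch(i + 1, room - w[i], value + v[i])   # take item i
        branch(i + 1, room, value)                     # skip item i

    branch(0, capacity, 0)
    return best
```

On the classic instance values (60, 100, 120), weights (10, 20, 30), capacity 50, the optimum is 220 (items 2 and 3), and the fractional bound prunes the hopeless subtrees.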
How do we split into subproblems?
figure thanks to Tobias Achterberg
Where’s the variable ordering?
Where’s the value ordering?
How do we add new constraints?
figure thanks to Tobias Achterberg
Variable & value ordering heuristics (at a given node)
Warning
figures from H. P. Williams
Warning
Intuition for LP: the only way to get LP optima far from the origin is to have constraint slopes that are close but not quite equal … which requires large integer coefficients.
figures from Papadimitriou & Steiglitz
Warning
For ILP: a little trickier. (Could an ILP have a huge finite optimum if its LP relaxation is unbounded? Answer: no – then the ILP is unbounded too.)
figures from Papadimitriou & Steiglitz
Reducing ILP to 0-1 ILP
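The slide's details aren't reproduced here, but the standard reduction replaces each bounded integer variable by its binary digits. A minimal sketch (the function names are mine):

```python
# Sketch of the standard reduction: a bounded integer variable 0 <= x <= U is
# replaced by binary variables b_0..b_{k-1} with x = sum(2**i * b_i), so a
# coefficient c on x becomes coefficients c * 2**i on the 0-1 variables.
# Note the expansion can represent values up to 2**k - 1 > U, so the original
# bound x <= U must be kept as an ordinary linear constraint.

def binary_expansion(upper_bound):
    """Powers-of-two coefficients for one integer variable in [0, upper_bound]."""
    coeffs, power = [], 1
    while power <= upper_bound:
        coeffs.append(power)
        power *= 2
    return coeffs

def encode(x, coeffs):
    """A 0-1 assignment representing integer x (greedy, largest power first)."""
    bits = []
    for c in reversed(coeffs):
        bits.append(1 if x >= c else 0)
        x -= c * bits[-1]
    return list(reversed(bits))
```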
Totally Unimodular Problems
Totally Unimodular Cost Matrix A
Some Totally Unimodular Problems
Some Totally Unimodular Problems
edge scores from drawing
drawing from Edwards M T et al. Nucl. Acids Res. 2005;33:3253-3262 (Oxford Univ. Press)
If we formulate as Ax ≤ b, x ≥ 0, the A matrix is totally unimodular:
Sufficient condition: each column (for edge xij) has at most 2 nonzero entries (for i and j).
These are both +1 (or both -1) and are in different "halves" of the matrix.
(Also okay if they are +1 and -1 and are in the same "half" of the matrix.)
Each top/bottom node has at most one edge.
| xI,A | xI,B | xII,B | xIII,C | xIV,B | xIV,C |
(i=I) | 1 | 1 | 0 | 0 | 0 | 0 |
(i=II) | 0 | 0 | 1 | 0 | 0 | 0 |
(i=III) | 0 | 0 | 0 | 1 | 0 | 0 |
(i=IV) | 0 | 0 | 0 | 0 | 1 | 1 |
(j=A) | 1 | 0 | 0 | 0 | 0 | 0 |
(j=B) | 0 | 1 | 1 | 0 | 1 | 0 |
(j=C) | 0 | 0 | 0 | 1 | 0 | 1 |
Not needed! (just need xij ≥ 0)
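Total unimodularity can also be checked directly from the definition: every square submatrix has determinant -1, 0, or +1. A brute-force sketch, exponential in general but fine at slide scale, using the matching constraint matrix from the table above:

```python
from itertools import combinations

# Brute-force total-unimodularity check: enumerate all square submatrices and
# test that each determinant lies in {-1, 0, +1}.

def det(m):
    """Integer determinant by cofactor expansion (small matrices only)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def totally_unimodular(A):
    rows, cols = len(A), len(A[0])
    for k in range(1, min(rows, cols) + 1):
        for rs in combinations(range(rows), k):
            for cs in combinations(range(cols), k):
                if abs(det([[A[r][c] for c in cs] for r in rs])) > 1:
                    return False
    return True

# The bipartite-matching constraint matrix from the table above.
A = [[1, 1, 0, 0, 0, 0],   # i = I
     [0, 0, 1, 0, 0, 0],   # i = II
     [0, 0, 0, 1, 0, 0],   # i = III
     [0, 0, 0, 0, 1, 1],   # i = IV
     [1, 0, 0, 0, 0, 0],   # j = A
     [0, 1, 1, 0, 1, 0],   # j = B
     [0, 0, 0, 1, 0, 1]]   # j = C
```

The incidence matrix of an odd cycle fails the test, which is why the bipartite structure matters.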
Some Totally Unimodular Problems
edge scores from drawing
Can formulate as Ax = b, x ≥ 0 so that the A matrix is totally unimodular:
| xsA | XsC | xAB | xBC | xCA | xBt | xCt |
(s) | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
(j=A) | -1 | 0 | 1 | 0 | -1 | 0 | 0 |
(j=B) | 0 | 0 | -1 | 1 | 0 | 1 | 0 |
(j=C) | 0 | -1 | 0 | -1 | 1 | 0 | 1 |
(t) | 0 | 0 | 0 | 0 | 0 | -1 | -1 |
(figure: directed graph with source s, intermediate nodes A, B, C, and sink t; edge scores from the drawing: 2, 2, 3, -1, 8, 9, 5)
Q: Can you prove that every feasible solution is a path?
A: No: it could be a path plus some cycles.
But then can reduce cost by throwing away the cycles. So optimal solution has no cycles.
Some Totally Unimodular Problems
edge scores from drawing
Can formulate as Ax = b, x ≥ 0 so that the A matrix is totally unimodular:
Sufficient condition: Each column (for edge xij) has at most 2 nonzero entries (for i and j).
These are +1 and -1 and are in the same “half” of the matrix.
(Also okay to be both +1 or both -1 and be in different “halves.”)
| xsA | XsC | xAB | xBC | xCA | xBt | xCt |
(s) | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
(j=A) | -1 | 0 | 1 | 0 | -1 | 0 | 0 |
(j=B) | 0 | 0 | -1 | 1 | 0 | 1 | 0 |
(j=C) | 0 | -1 | 0 | -1 | 1 | 0 | 1 |
(t) | 0 | 0 | 0 | 0 | 0 | -1 | -1 |
(this "half" is empty; can divide rows into "halves" in any way that satisfies the sufficient condition)
Not needed! (just need xij ≥ 0)
Solving Linear Programs
Canonical form of an LP
Ax ≤ b, where A is an m×n matrix, x an n-vector, and b an m-vector.
So x specifies a linear combination of the columns of A.
Fourier-Motzkin elimination
Thanks to the ∃ properties above.
Warning: P2 may have more edges than P3 has faces. That is, we've reduced the # of variables but perhaps increased the # of constraints. As usual, might choose variable z carefully (cf. induced width).
Remember variable elimination for SAT? Davis-Putnam
Resolution fuses each pair (V ∨ W ∨ ¬X) ∧ (X ∨ Y ∨ Z) – some two clauses in ϕ – into (V ∨ W ∨ Y ∨ Z).
Justification #1: a valid way to eliminate X (reverses the CNF → 3-CNF idea).
Justification #2: we want to recurse on a CNF version of ((ϕ ∧ X) ∨ (ϕ ∧ ¬X)).
Suppose ϕ = α ∧ β ∧ γ, where α is the clauses with ¬X, β those with X, γ those with neither.
Then ((ϕ ∧ X) ∨ (ϕ ∧ ¬X)) = (α′ ∧ γ) ∨ (β′ ∧ γ) by unit propagation, where α′ is α with the ¬X's removed, β′ similarly.
= (α′ ∨ β′) ∧ γ = (α′1 ∨ β′1) ∧ (α′1 ∨ β′2) ∧ … ∧ (α′99 ∨ β′99) ∧ γ, with one clause for every pair α′i, β′j.
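The α/β/γ bookkeeping above can be sketched directly. A minimal, hedged implementation of one elimination step (the clause representation is my own choice, not the lecture's):

```python
# Davis-Putnam variable elimination by resolution. A clause is a frozenset of
# literals: a positive int is a variable, a negative int its negation.

def eliminate_variable(clauses, x):
    """Remove variable x by resolving every X-clause with every ~X-clause."""
    beta  = [c for c in clauses if x in c]                  # clauses with X
    alpha = [c for c in clauses if -x in c]                 # clauses with ~X
    gamma = [c for c in clauses if x not in c and -x not in c]
    resolvents = []
    for b in beta:
        for a in alpha:
            r = (b - {x}) | (a - {-x})                      # fuse the pair
            if not any(-lit in r for lit in r):             # drop tautologies
                resolvents.append(r)
    return gamma + resolvents
```

For instance, resolving (¬1 ∨ 2) with (1 ∨ 3) on variable 1 leaves the single clause (2 ∨ 3).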
Fourier-Motzkin elimination
example adapted from Ofer Strichman
minimize a subject to:
a ≤ x + y + z, a ≥ x + y + z (together: a = x + y + z)
z ≤ x
z ≤ -2
z ≥ y/2 - x/2
x - y ≥ 0 (doesn't mention z; leave alone)

Solve for z. The constraints mentioning z give lower bounds y/2 - x/2 and a - x - y, and upper bounds x, -2, and a - x - y (since a = x + y + z yields both z ≥ a - x - y and z ≤ a - x - y).
∃ a value of z satisfying these constraints iff each of the lower bounds ≤ each of the upper bounds.

Eliminate z:
y/2 - x/2 ≤ -2
y/2 - x/2 ≤ x
y/2 - x/2 ≤ a - x - y
a - x - y ≤ -2
a - x - y ≤ x
a - x - y ≤ a - x - y
x - y ≥ 0 (unchanged)
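The same solve-for-z, pair-the-bounds procedure can be written generically. A hedged sketch assuming integer coefficients, with each constraint represented as (coeffs, bound) meaning Σ coeffs[v]·v ≤ bound:

```python
from fractions import Fraction

# One Fourier-Motzkin elimination step. A constraint (coeffs, b) encodes
# sum(coeffs[v] * v) <= b, with coeffs a dict from variable name to int.

def eliminate(constraints, z):
    """Eliminate z: pair every lower bound on z with every upper bound."""
    lower, upper, rest = [], [], []
    for coeffs, b in constraints:
        c = coeffs.get(z, 0)
        if c == 0:
            rest.append((coeffs, b))       # doesn't mention z; leave alone
            continue
        # Solve for z: z <= (b - others)/c if c > 0, flipped if c < 0.
        expr = ({v: Fraction(-k, c) for v, k in coeffs.items() if v != z},
                Fraction(b, c))
        (upper if c > 0 else lower).append(expr)
    out = list(rest)
    for lo_c, lo_b in lower:               # each lower bound <= each upper
        for up_c, up_b in upper:
            coeffs = {v: lo_c.get(v, 0) - up_c.get(v, 0)
                      for v in set(lo_c) | set(up_c)}
            out.append(({v: k for v, k in coeffs.items() if k != 0},
                        up_b - lo_b))
    return out
```

For example, eliminating z from {z ≤ x, z ≥ 2} leaves the single constraint -x ≤ -2, i.e. x ≥ 2.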
Fourier-Motzkin elimination
Simplex Algorithm: Basic Insight
From Canonical to Standard Form
Convert Ax ≤ b (A is m×n) into equalities Ax = b by adding slack variables, one per constraint; A becomes m×(n+m).
(Sometimes the # of vars is still called n, even in standard form. It's usually > the # of constraints. I'll use n+m to denote the # of vars in a standard-form problem – you'll see why.)
From Canonical to Standard Form
Now Ax = b, where A is an m×(n+m) matrix: m equations in n+m unknowns.
We can solve linear equations!
If A were square, we could try to invert it to solve for x.
But m < n+m, so there are many solutions x.
(To choose one, we min c⋅x.)
We are looking to express b as a linear combination of A’s columns.
x gives the coefficients of this linear combination.
Standard Form
Partition A's columns into an m×m block A′ and the rest, and partition x correspondingly into x′ and the remaining n variables.
If we set those n variables to 0, we can get one solution by setting x′ = (A′)⁻¹ b.
(A′ is invertible provided that the m columns of A′ are linearly independent.)
Standard Form
Choosing a different m×m block A′ of columns (and zeroing the other n variables) gives another solution via x′ = (A′)⁻¹ b.
In fact, we can get a "basic solution" like this for any basis A′ formed from m linearly independent columns of A.
This x is a "basic feasible solution" (bfs) if x ≥ 0 (recall that constraint?).
Notice that the bfs in the picture is optimal when the cost vector is c = (1,1,1,1,0,0,0,0,…). Similarly, any bfs is optimal for some cost vector.
Hmm, sounds like polytope vertices…
Remark: If A is totally unimodular, then the bfs (A′)⁻¹ b will be integral (assuming b is).
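The basis-enumeration view above can be sketched in code: try every set of m columns, solve A′x′ = b exactly, and keep the solutions with x ≥ 0. This is a hedged illustration (exact arithmetic via Fractions; not an efficient method):

```python
from fractions import Fraction
from itertools import combinations

# Enumerate basic feasible solutions of Ax = b, x >= 0, by trying every
# basis of m columns and solving the resulting square system exactly.

def solve_square(Ap, b):
    """Solve an m-by-m system over Fractions; None if columns are dependent."""
    m = len(Ap)
    M = [[Fraction(Ap[i][j]) for j in range(m)] + [Fraction(b[i])]
         for i in range(m)]
    for col in range(m):                          # Gauss-Jordan elimination
        pivot = next((r for r in range(col, m) if M[r][col] != 0), None)
        if pivot is None:
            return None                           # singular basis
        M[col], M[pivot] = M[pivot], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(m):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * c for a, c in zip(M[r], M[col])]
    return [M[i][m] for i in range(m)]

def basic_feasible_solutions(A, b):
    """All bfs, one per basis of m linearly independent columns with x >= 0."""
    m, n_plus_m = len(A), len(A[0])
    out = []
    for basis in combinations(range(n_plus_m), m):
        xp = solve_square([[A[i][j] for j in basis] for i in range(m)], b)
        if xp is not None and all(v >= 0 for v in xp):
            x = [Fraction(0)] * n_plus_m
            for j, v in zip(basis, xp):
                x[j] = v
            out.append(x)
    return out
```

On a made-up 2×4 system such as A = [[1,1,1,0],[1,2,0,1]], b = [4,6], this yields the handful of bfs that are the candidate vertices for optimization.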
Canonical vs. Standard Form
Canonical: Ax ≤ b, x ≥ 0 – m inequalities + n inequalities (n variables).
→ add m slack variables (one per constraint) →
Standard: Ax = b, x ≥ 0 – m equalities + n+m inequalities (n+m variables).
← eliminate the last m vars (how?) ←
Eliminating the last m vars turns the last m "≥ 0" constraints & the m equality constraints ("Ax = b") into m inequalities ("Ax ≤ b").
E.g., we have 2 constraints on xn+m: xn+m ≥ 0, and the last row, namely (h1x1 + … + hnxn) + xn+m = b.
To eliminate xn+m, replace them with (h1x1 + … + hnxn) ≤ b, and change xn+m in the cost function to b - (h1x1 + … + hnxn).
Multiply Ax = b through by A′⁻¹. This gives us the kind of Ax = b that we'd have gotten by starting with Ax ≤ b and adding 1 slack var per constraint. Now we can eliminate the slack vars.
After multiplying through by A′⁻¹, the system reads (A′⁻¹ · rest | I) · x = A′⁻¹ · b, where the last m columns form the m×m identity matrix.
Canonical vs. Standard Form
Pick n of the n+m constraints to be tight:
vertex – defined by intersecting n of the constraints, each of which reduces dimensionality by 1
bfs – defined by selecting n of the variables to be 0 (⇒ n tight constraints; solve for the slack in the other constraints)
Simplex algorithm
In the example below, an unused column C5 is expressed as a linear combination of the basis: C5 = C1 + 2C2 - C3.
Gradually phase in the unused column C5 while phasing out C1 + 2C2 - C3, to keep Ax = b.
It is easy to solve for the max ε (= 2) that keeps x ≥ 0.
We picked C5 because increasing ε improves the cost.
Suppose n+m = 6 and n = 3, and denote A's columns by C1…C6. x = (5,4,7,0,0,0) is the current bfs (n zeroes), so C1, C2, C3 form a basis of Rm, and Ax = b holds for:
x = (5, 4, 7, 0, 0, 0):  5C1 + 4C2 + 7C3 = b
x = (4.9, 3.8, 7.1, 0, 0.1, 0)  …
x = (4.8, 3.6, 7.2, 0, 0.2, 0)  …
x = (5-ε, 4-2ε, 7+ε, 0, ε, 0)  …
x = (3, 0, 9, 0, 2, 0):  3C1 + 9C3 + 2C5 = b, the new bfs
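The pivot above can be checked with concrete numbers. The specific columns below are hypothetical (the slide doesn't give them); all that matters is that C5 = C1 + 2C2 - C3:

```python
from fractions import Fraction

# Concrete numbers for the pivot example. Only the relation
# C5 = C1 + 2*C2 - C3 matters; the columns themselves are invented.

C1, C2, C3 = [1, 0, 0], [0, 1, 0], [0, 0, 1]
C4, C6 = [1, 1, 0], [0, 1, 1]                        # arbitrary filler columns
C5 = [a + 2 * p - q for a, p, q in zip(C1, C2, C3)]  # C5 = C1 + 2*C2 - C3
cols = [C1, C2, C3, C4, C5, C6]

def apply_A(x):
    """Compute Ax as the x-weighted sum of A's columns."""
    return [sum(xj * col[i] for xj, col in zip(x, cols)) for i in range(3)]

b = apply_A([5, 4, 7, 0, 0, 0])            # current bfs: 5C1 + 4C2 + 7C3 = b

# Phasing in eps of C5 gives x(eps) = (5-eps, 4-2*eps, 7+eps, 0, eps, 0),
# which keeps Ax = b. Ratio test: the largest eps with x >= 0 is
# min(5/1, 4/2) = 2, at which point x2 leaves the basis.
eps = min(Fraction(5, 1), Fraction(4, 2))
x_new = [5 - eps, 4 - 2 * eps, 7 + eps, 0, eps, 0]
```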
Canonical vs. Standard Form
The cost of the origin is easy to compute (it's a constant in the cost function). Eliminating a different set of m variables (picking a different basis) would rotate/reflect/squish the polytope & cost hyperplane to put a different vertex at the origin, aligning that vertex's n constraints with the orthogonal x ≥ 0 hyperplanes. This is how the simplex algorithm tries different vertices!
Simplex algorithm: More discussion
Recall: Duality for Constraint Programs
original ("primal") problem: one variable per letter, constraints over up to 5 vars
(figure: crossword example with letter cells 1–13; word constraints over variable sets {1,2,3,4,5}, {5,7,11}, {3,6,9,12}, {8,9,10,11}, {12,13}, {10,13}; dual edges labeled by shared variables such as 3, 5, 9, 10, 11, 12, 13)
slide thanks to Rina Dechter (modified)
transformed ("dual") problem: one var per word, 2-var constraints.
Old constraints → new vars. Old vars → new constraints.
Warning: unrelated to AND-OR duality from SAT.
Duality for Linear Programs (canonical form)
Primal problem: max c⋅x, Ax ≤ b (m constraints), x ≥ 0 (n constraints).
dualize: old constraints → new vars; old vars → new constraints.
Dual problem: min b⋅y, ATy ≥ c (n constraints), y ≥ 0 (m constraints).
Where Does Duality Come From?
example from Rico Zenklusen
positive coefficients, so ≤ doesn't flip
General case: y1C1 + y2C2 + y3C3 + y4C4 with y1, …, y4 ≥ 0, so that the inequalities don't flip.
Dual of dual = Primal. Linear programming duals are "reflective duals" (not true for some other notions of duality).
Primal problem: max c⋅x, Ax ≤ b, x ≥ 0  ← dualize →  Dual problem: min b⋅y, ATy ≥ c, y ≥ 0.
Equivalent to primal (just negate A, b, and c): min (-c)⋅x, (-A)x ≥ -b, x ≥ 0  ← dualize →  Equivalent to dual: max (-b)⋅y, (-AT)y ≤ (-c), y ≥ 0.
Primal & dual “meet in the middle”
Primal: max c⋅x, Ax ≤ b, x ≥ 0.  Dual: min b⋅y, ATy ≥ c, y ≥ 0.
(figure: a number line of objective values – values of c⋅x achievable under the primal constraints run up to the primal max; values of b⋅y achievable under the dual constraints run down to the dual min; beyond each optimum the corresponding problem is infeasible)
c⋅x ≤ b⋅y for all feasible (x, y). (So if one problem is unbounded, the other must be infeasible.)
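This is weak duality, and it follows in one line from the constraints above:

```latex
c \cdot x \;\le\; (A^{\mathsf{T}} y) \cdot x \;=\; y \cdot (A x) \;\le\; y \cdot b
```

The first inequality uses ATy ≥ c together with x ≥ 0; the second uses Ax ≤ b together with y ≥ 0.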
Duality for Linear Programs (standard form): primal and dual are related constrained optimization problems, each in n+m dimensions.
Primal problem: max c⋅x, Ax + s = b, x ≥ 0 (n struct vars), s ≥ 0 (m slack vars).
Dual problem: min b⋅y, ATy - t = c, y ≥ 0 (m struct vars), t ≥ 0 (n surplus vars).
At an optimal pair, x⋅t + s⋅y = 0.
Why duality is useful for ILP
ILP problem at some node of the branch-and-bound tree (includes some branching constraints): max c⋅x, Ax ≤ b, x ≥ 0, x integer.
The max achievable for the ILP at this node is at most the max achievable under the LP relaxation, and we can also find that bound from the dual of the LP relaxation: min b⋅y, ATy ≥ c, y ≥ 0.
Instead of solving the relaxation to optimality, let's find a bound by dual simplex: the min b⋅y achieved so far as dual simplex runs is always an optimistic bound for this node. As soon as that bound is poor enough relative to the best feasible global solution found so far, we can prune this node – prune early!
Multiple perspectives on duality
Drop the names s and t now; use standard form, but call all the variables x and y.
Where Does Duality Come From?
Technically, this is not the method of Lagrange multipliers. Lagrange (18th century) only handled equality constraints; Karush (1939) and Kuhn & Tucker (1951) generalized to inequalities.
If it happens that a(x0) ≤ b, we’re done! But what if not?
Lagrangian term (penalty rate λ is a “Lagrange multiplier”)
Increasing λ gets solutions xλ with a(xλ) = 100, then 90, then 80 …
These are solutions to max c(x) subject to a(x) ≤ 100, 90, 80 …
So λ is essentially an indirect way of controlling b.
Adjust it till we hit the b that we want.
Each yi from LP dual acts like a λ (in fact y is λ upside down!)
The Lagrangian term is ≤ 0, since by assumption a(x′) ≤ b.
The Lagrangian term is 0, since by assumption a(xλ) = b.
(In fact, the Lagrangian actually rewards x′ with a(x′) < b. These x′ didn't win despite this unfair advantage, because they did worse on c.)
Tightest upper bound: min c(xλ) subject to a(xλ) ≥ b. See where this is going?
Problem
B.ramamurthy
221
3/23/2023
Some More Questions
Queuing Analysis
Alternatives
Example
Queuing Models
Queuing Parameters
λ – average arrival rate (of requests)
w – average queue length
Tw – average wait time
Ts – average service time
Tq – average time spent in the system (Tw + Ts)
λmax = 1/Ts – theoretical maximum arrival rate that can be handled by the system
See Table A.1
Queuing structure
(figure: arrivals → queue → dispatch (per the queuing discipline) → server → departures)
w – items waiting
Tw – waiting time
q – items in the queuing system
Tq – queuing time
Multiple servers
λmax = N / Ts for N servers
Let's look at the basic queuing relationships, as well as Little's formulas.
Little’s Formula
| General      | Single server | Multi-server |
| q = λTq      | ρ = λTs       | ρ = λTs/N    |
| w = λTw      | q = w + ρ     | u = ρN       |
| Tq = Tw + Ts |               | q = w + Nρ   |
q -- mean number of items in the system
u -- traffic intensity
w – mean number of items waiting to be served
λ -- arrival rate
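A quick numeric sketch of the formulas above for a hypothetical single-server system (the numbers are invented for illustration):

```python
# Apply Little's formulas to a made-up single-server system: requests arrive
# at lam = 2/s, mean service time Ts = 0.4 s, mean queue wait Tw = 1.6 s.

lam = 2.0    # average arrival rate, requests/second
Ts  = 0.4    # average service time, seconds
Tw  = 1.6    # average wait time in the queue, seconds

rho     = lam * Ts       # server utilization (single server)
Tq      = Tw + Ts        # average total time in the system
w       = lam * Tw       # mean number waiting          (Little's formula)
q       = lam * Tq       # mean number in the system    (Little's formula)
lam_max = 1 / Ts         # theoretical maximum arrival rate

# Consistency check from the table: q = w + rho for a single server.
assert abs(q - (w + rho)) < 1e-9
```

Here the server is 80% utilized, about 4 requests are in the system on average, and arrivals beyond 2.5/s would exceed what the server can handle.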
Queuing Models