The main features of the library are the following:
x + 2*y + 5*z <= 7 when you mean it;

In addition to the basic domains, we also provide generic support for constructing new domains from pre-existing domains. The following domains and domain constructors are provided by the PPL:
In the following sections we describe these domains and domain constructors together with their representations and operations that are available to the PPL user.
In the final section of this chapter (Section Using the Library), we provide some additional advice on the use of the library.
The scalar product of \(\vec{a}, \vec{b} \in \mathbb{R}^n\), denoted \(\langle \vec{a}, \vec{b} \rangle\), is the real number \(\sum_{i=1}^{n} a_i b_i\).
For any \(S_1, S_2 \subseteq \mathbb{R}^n\), the Minkowski's sum of \(S_1\) and \(S_2\) is: \(S_1 + S_2 = \{\, \vec{v}_1 + \vec{v}_2 \mid \vec{v}_1 \in S_1, \vec{v}_2 \in S_2 \,\}\).
Note that each hyperplane \(\{\, \vec{x} \in \mathbb{R}^n \mid \langle \vec{a}, \vec{x} \rangle = b \,\}\) can be defined as the intersection of the two closed affine half-spaces \(\{\, \vec{x} \in \mathbb{R}^n \mid \langle \vec{a}, \vec{x} \rangle \geq b \,\}\) and \(\{\, \vec{x} \in \mathbb{R}^n \mid \langle -\vec{a}, \vec{x} \rangle \geq -b \,\}\). Also note that, when \(\vec{a} = \vec{0}\), the constraint \(\langle \vec{0}, \vec{x} \rangle \bowtie b\) is either a tautology (i.e., always true) or inconsistent (i.e., always false), so that it defines either the whole vector space \(\mathbb{R}^n\) or the empty set \(\emptyset\).
The set \(\mathcal{P} \subseteq \mathbb{R}^n\) is a closed convex polyhedron (closed polyhedron, for short) if and only if either \(\mathcal{P}\) can be expressed as the intersection of a finite number of closed affine half-spaces of \(\mathbb{R}^n\), or \(n = 0\) and \(\mathcal{P} = \emptyset\). The set of all closed polyhedra on the vector space \(\mathbb{R}^n\) is denoted \(\mathbb{CP}_n\).
When ordering NNC polyhedra by the set inclusion relation, the empty set \(\emptyset\) and the vector space \(\mathbb{R}^n\) are, respectively, the smallest and the biggest elements of both \(\mathbb{CP}_n\) and \(\mathbb{P}_n\). The vector space \(\mathbb{R}^n\) is also called the universe polyhedron.

In theoretical terms, \(\mathbb{P}_n\) is a lattice under set inclusion and \(\mathbb{CP}_n\) is a sub-lattice of \(\mathbb{P}_n\).
A bounded polyhedron is also called a polytope.
By definition, each polyhedron is the set of solutions to a constraint system, i.e., a finite number of constraints. By using matrix notation, we have
\[
  \mathcal{P} = \{\, \vec{x} \in \mathbb{R}^n \mid A_1 \vec{x} = \vec{b}_1,\; A_2 \vec{x} \geq \vec{b}_2,\; A_3 \vec{x} > \vec{b}_3 \,\},
\]
where, for all \(i \in \{1, 2, 3\}\), \(A_i \in \mathbb{R}^{m_i \times n}\) and \(\vec{b}_i \in \mathbb{R}^{m_i}\), and \(m_1, m_2, m_3 \in \mathbb{N}\) are the number of equalities, the number of non-strict inequalities, and the number of strict inequalities, respectively.
We denote by (resp.,
,
,
) the set of all the linear (resp., positive, affine, convex) combinations of the vectors in
.
Let , where
. We denote by
the set of all convex combinations of the vectors in
such that
for some
(informally, we say that there exists a vector of
that plays an active role in the convex combination). Note that
so that, if
,
It can be observed that is an affine space,
is a topologically closed convex cone,
is a topologically closed polytope, and
is an NNC polytope.
A point of an NNC polyhedron \(\mathcal{P}\) is a vertex if and only if it cannot be expressed as a convex combination of any other pair of distinct points in \(\mathcal{P}\). A ray \(\vec{r}\) of a polyhedron \(\mathcal{P}\) is an extreme ray if and only if it cannot be expressed as a positive combination of any other pair \(\vec{r}_1\) and \(\vec{r}_2\) of rays of \(\mathcal{P}\), where \(\vec{r} \neq \lambda \vec{r}_1\), \(\vec{r} \neq \lambda \vec{r}_2\) and \(\vec{r}_1 \neq \lambda \vec{r}_2\) for all \(\lambda > 0\) (i.e., rays differing by a positive scalar factor are considered to be the same ray).
where the symbol '\(+\)' denotes the Minkowski's sum.
When \(\mathcal{P}\) is a closed polyhedron, then it can be represented by finite sets of lines \(L\), rays \(R\) and points \(P\) of \(\mathcal{P}\). In this case, the 3-tuple \((L, R, P)\) is said to be a generator system for \(\mathcal{P}\). Thus, in this case, every closure point of \(\mathcal{P}\) is a point of \(\mathcal{P}\).
For any \(\mathcal{P} \in \mathbb{P}_n\) and generator system \(\mathcal{G} = (L, R, C, P)\) for \(\mathcal{P}\), we have \(\mathcal{P} = \emptyset\) if and only if \(P = \emptyset\). Also \(P\) must contain all the vertices of \(\mathcal{P}\), although \(\mathcal{P}\) can be non-empty and have no vertices. In this case, as \(P\) is necessarily non-empty, it must contain points of \(\mathcal{P}\) that are not vertices. For instance, the half-space of \(\mathbb{R}^2\) corresponding to the single constraint \(y \geq 0\) can be represented by the generator system \(\mathcal{G} = (L, R, C, P)\) such that \(L = \{(1, 0)^T\}\), \(R = \{(0, 1)^T\}\), \(C = \emptyset\), and \(P = \{(0, 0)^T\}\). It is also worth noting that the only ray in \(R\) is not an extreme ray of \(\mathcal{P}\).
Similarly, a generator system for an NNC polyhedron
is said to be minimized if there does not exist a generator system
for
such that
,
,
and
.
Such changes of representation form a key step in the implementation of many operators on NNC polyhedra: this is because some operators, such as intersections and poly-hulls, are provided with a natural and efficient implementation when using one of the representations in a DD pair, while being rather cumbersome when using the other.
In the library, the topology of each polyhedron object is fixed once and for all at the time of its creation and must be respected when performing operations on the polyhedron.
Unless it is otherwise stated, all the polyhedra, constraints and/or generators in any library operation must obey the following topological-compatibility rules:
Wherever possible, the library provides methods that, starting from a polyhedron of a given topology, build the corresponding polyhedron having the other topology.
Unless it is otherwise stated, all the polyhedra, constraints and/or generators in any library operation must obey the following (space) dimension-compatibility rules:
While the space dimension of a constraint, a generator or a system thereof is automatically adjusted when needed, the space dimension of a polyhedron can only be changed by explicit calls to operators provided for that purpose.
implies that, for each \(i \in \{1, \ldots, k\}\), \(\lambda_i = 0\).
The maximum number of affinely independent points in \(\mathbb{R}^n\) is \(n + 1\).

A non-empty NNC polyhedron \(\mathcal{P} \in \mathbb{P}_n\) has affine dimension \(k \in \mathbb{N}\), denoted by \(\dim(\mathcal{P}) = k\), if the maximum number of affinely independent points in \(\mathcal{P}\) is \(k + 1\).

We remark that the above definition only applies to polyhedra that are not empty, so that \(0 \leq \dim(\mathcal{P}) \leq n\). By convention, the affine dimension of an empty polyhedron is 0 (even though the ``natural'' generalization of the definition above would imply that the affine dimension of an empty polyhedron is \(-1\)).
The library only supports rational polyhedra. The restriction to rational numbers applies not only to polyhedra, but also to the other numeric arguments that may be required by the operators considered, such as the coefficients defining (rational) affine transformations and (rational) bounding boxes.
In theoretical terms, the intersection and poly-hull operators defined above are the binary meet and the binary join operators on the lattices \(\mathbb{CP}_n\) and \(\mathbb{P}_n\).
In general, even though \(\mathcal{P}_1, \mathcal{P}_2 \in \mathbb{CP}_n\) are topologically closed polyhedra, their poly-difference may be a convex polyhedron that is not topologically closed. For this reason, when computing the poly-difference of two C polyhedra, the library will enforce the topological closure of the result.
Another way of seeing it is as follows: first embed polyhedron \(\mathcal{P}\) into a vector space of dimension \(n + m\) and then add a suitably renamed-apart version of the constraints defining \(\mathcal{Q}\).
The operator add_space_dimensions_and_embed embeds the polyhedron \(\mathcal{P} \in \mathbb{P}_n\) into the new vector space of dimension \(n + m\) and returns the polyhedron defined by all and only the constraints defining \(\mathcal{P}\) (the variables corresponding to the added dimensions are unconstrained). For instance, when starting from a polyhedron \(\mathcal{P} \subseteq \mathbb{R}^2\) and adding a third space dimension, the result will be the polyhedron
\[
  \{\, (x_0, x_1, x_2)^T \in \mathbb{R}^3 \mid (x_0, x_1)^T \in \mathcal{P} \,\}.
\]
In contrast, the operator add_space_dimensions_and_project projects the polyhedron \(\mathcal{P} \in \mathbb{P}_n\) into the new vector space of dimension \(n + m\) and returns the polyhedron \(\mathcal{Q}\) whose constraint system, besides the constraints defining \(\mathcal{P}\), will include additional constraints on the added dimensions. Namely, the corresponding variables are all constrained to be equal to 0. For instance, when starting from a polyhedron \(\mathcal{P} \subseteq \mathbb{R}^2\) and adding a third space dimension, the result will be the polyhedron
\[
  \{\, (x_0, x_1, 0)^T \in \mathbb{R}^3 \mid (x_0, x_1)^T \in \mathcal{P} \,\}.
\]
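For instance, the two operators might be used as follows through the C++ interface (a minimal sketch with variable names of our own choosing):

    #include <ppl.hh>
    using namespace Parma_Polyhedra_Library;

    void add_dimensions_example() {
      Variable x(0), y(1);
      C_Polyhedron ph(2);                // the universe polyhedron in R^2
      ph.add_constraint(x >= 0);
      ph.add_constraint(y >= 0);
      ph.add_constraint(x + y <= 2);

      C_Polyhedron embedded(ph);
      embedded.add_space_dimensions_and_embed(1);    // now in R^3, third dimension unconstrained

      C_Polyhedron projected(ph);
      projected.add_space_dimensions_and_project(1); // now in R^3, third dimension equal to 0
    }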
Given a set of variables, the operator remove_space_dimensions removes all the space dimensions specified by the variables in the set. For instance, letting \(\mathcal{P} \subseteq \mathbb{R}^4\) be the singleton set \(\{(3, 1, 0, 2)^T\}\), then after invoking this operator with the set of variables \(\{x_1, x_2\}\) the resulting polyhedron is the singleton set \(\{(3, 2)^T\} \subseteq \mathbb{R}^2\).
Given a space dimension \(m\) less than or equal to that of the polyhedron, the operator remove_higher_space_dimensions removes the space dimensions having indices greater than or equal to \(m\). For instance, letting \(\mathcal{P} = \{(3, 1, 0, 2)^T\}\) be defined as before, by invoking this operator with \(m = 2\) the resulting polyhedron will be the singleton set \(\{(3, 1)^T\} \subseteq \mathbb{R}^2\).
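The following sketch (the coordinate values simply mirror the example above) shows the two removal operators in the C++ interface:

    #include <ppl.hh>
    using namespace Parma_Polyhedra_Library;

    void remove_dimensions_example() {
      Variable x0(0), x1(1), x2(2), x3(3);
      C_Polyhedron ph(4);
      ph.add_constraint(x0 == 3);
      ph.add_constraint(x1 == 1);
      ph.add_constraint(x2 == 0);
      ph.add_constraint(x3 == 2);        // ph is the singleton {(3, 1, 0, 2)}

      C_Polyhedron q(ph);
      Variables_Set to_be_removed;
      to_be_removed.insert(x1);
      to_be_removed.insert(x2);
      q.remove_space_dimensions(to_be_removed);  // q is now {(3, 2)} in R^2

      C_Polyhedron r(ph);
      r.remove_higher_space_dimensions(2);       // r is now {(3, 1)} in R^2
    }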
map_space_dimensions
provided by the library maps the dimensions of the vector space
If , i.e., if the function
is undefined everywhere, then the operator projects the argument polyhedron
onto the zero-dimension space
; otherwise the result is
given by
expand_space_dimension
provided by the library adds
This operation has been proposed in [GDMDRS04].
fold_space_dimensions
provided by the library, given a polyhedron
where
and, for ,
,
,
and, finally, for ,
,
,
( denotes the cardinality of the finite set
).
This operation has been proposed in [GDMDRS04].
Similarly, we denote by the preimage under
of
, that is
If , then the relation
is said to be space dimension preserving.
The relation is said to be an affine relation if there exists
such that
where ,
,
and
, for each
.
As a special case, the relation is an affine function if and only if there exist a matrix
and a vector
such that,
The set \(\mathbb{P}_n\) of NNC polyhedra is closed under the application of images and preimages of any space dimension preserving affine relation. The same property holds for the set \(\mathbb{CP}_n\) of closed polyhedra, provided the affine relation makes no use of the strict relation symbols \(<\) and \(>\). Images and preimages of affine relations can be used to model several kinds of transition relations, including deterministic assignments of affine expressions, (affinely constrained) nondeterministic assignments and affine conditional guards.
A space dimension preserving relation can be specified by means of a shorthand notation:
As an example, assuming , the notation
, where the primed variable
does not occur, is meant to specify the affine relation defined by
The same relation is specified by , since
occurs with coefficient 0.
The library allows for the computation of images and preimages of polyhedra under restricted subclasses of space dimension preserving affine relations, as described in the following.
where
and the (resp.,
) occur in the
st row in
(resp., position in
). Thus function
maps any vector
to
The affine image operator computes the affine image of a polyhedron under
. For instance, suppose the polyhedron
to be transformed is the square in
generated by the set of points
. Then, if the primed variable is
and the affine expression is
(so that
,
), the affine image operator will translate
to the parallelogram
generated by the set of points
with height equal to the side of the square and oblique sides parallel to the line
. If the primed variable is as before (i.e.,
) but the affine expression is
(so that
), then the resulting polyhedron
is the positive diagonal of the square.
The affine preimage operator computes the affine preimage of a polyhedron under
. For instance, suppose now that we apply the affine preimage operator as given in the first example using primed variable
and affine expression
to the parallelogram
; then we get the original square
back. If, on the other hand, we apply the affine preimage operator as given in the second example using primed variable
and affine expression
to
, then the resulting polyhedron is the stripe obtained by adding the line
to polyhedron
.
Observe that provided the coefficient of the considered variable in the affine expression is non-zero, the affine function is invertible.
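The following sketch (an example of our own, not the square used above) exercises the two operators through the C++ interface; since the coefficient of the considered variable is non-zero, applying the preimage after the image gives back the original polyhedron:

    #include <ppl.hh>
    using namespace Parma_Polyhedra_Library;

    void affine_image_example() {
      Variable x(0), y(1);
      C_Polyhedron ph(2);
      ph.add_constraint(x >= 0);
      ph.add_constraint(x <= 3);
      ph.add_constraint(y >= 0);
      ph.add_constraint(y <= 3);          // a square with side 3

      ph.affine_image(x, x + 2*y);        // maps (x, y) to (x + 2y, y): a parallelogram
      ph.affine_preimage(x, x + 2*y);     // the function is invertible: back to the square
    }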
When and
, then the above affine relation becomes equivalent to the single-update affine function
(hence the name given to this operator). It is worth stressing that the notation is not symmetric, because the variables occurring in expression
are interpreted as primed variables, whereas those occurring in
are unprimed; for instance, the transfer relations
and
are not equivalent in general.
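A hedged sketch of how such a transfer relation might be applied through the C++ interface (the relation x' >= x + 2 is our own example):

    #include <ppl.hh>
    using namespace Parma_Polyhedra_Library;

    void generalized_affine_image_example() {
      Variable x(0), y(1);
      C_Polyhedron ph(2);
      ph.add_constraint(x == 0);
      ph.add_constraint(y == 0);
      // Nondeterministic assignment: x is given any value satisfying x' >= x + 2.
      ph.generalized_affine_image(x, GREATER_OR_EQUAL, x + 2);
      // ph is now the set { (x, y) : x >= 2, y = 0 }.
    }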
Note that, if are closed polyhedra, the above set is also a closed polyhedron. In contrast, when
is not topologically closed, the above set might not be an NNC polyhedron.
Suppose \(\mathcal{P}\) is an NNC polyhedron and \(\mathcal{C}\) an arbitrary constraint system representing \(\mathcal{P}\). Suppose also that \(c\) is a constraint, with relation symbol \(\bowtie\ \in \{=, \geq, >\}\), and \(\mathcal{Q}\) the set of points that satisfy \(c\). The possible relations between \(\mathcal{P}\) and \(c\) are as follows.
The polyhedron \(\mathcal{P}\) subsumes the generator \(g\) if adding \(g\) to any generator system representing \(\mathcal{P}\) does not change \(\mathcal{P}\).
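Both kinds of queries are available through the relation_with methods of the C++ interface, which return Poly_Con_Relation and Poly_Gen_Relation values; the following is a minimal sketch (the polyhedron and the queried constraint and generator are our own example):

    #include <ppl.hh>
    using namespace Parma_Polyhedra_Library;

    void relation_example() {
      Variable x(0);
      C_Polyhedron ph(1);
      ph.add_constraint(x >= 0);
      ph.add_constraint(x <= 1);

      // Relation between the polyhedron and a constraint.
      Poly_Con_Relation rel_c = ph.relation_with(x >= -1);
      bool included = rel_c.implies(Poly_Con_Relation::is_included());

      // Relation between the polyhedron and a generator.
      Poly_Gen_Relation rel_g = ph.relation_with(point(x));  // the point x = 1
      bool subsumed = rel_g.implies(Poly_Gen_Relation::subsumes());
    }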
The polyhedron \(\mathcal{P}\) represents a box \(\mathcal{B}\) in \(\mathbb{R}^n\) if \(\mathcal{P}\) is described by a constraint system in \(\mathbb{R}^n\) that consists of one constraint for each bounded bound (lower and upper) in an interval in \(\mathcal{B}\): letting \(\vec{e}_i\) be the vector in \(\mathbb{R}^n\) with 1 in the \(i\)'th position and zeroes in every other position, if the lower bound of the \(i\)'th interval in \(\mathcal{B}\) is bounded, the corresponding constraint is defined as \(\langle \vec{e}_i, \vec{x} \rangle \bowtie b\), where \(b\) is the value of the bound and \(\bowtie\) is \(\geq\) if it is a closed bound and \(>\) if it is an open bound. Similarly, if the upper bound of the \(i\)'th interval in \(\mathcal{B}\) is bounded, the corresponding constraint is defined as \(\langle \vec{e}_i, \vec{x} \rangle \bowtie b\), where \(b\) is the value of the bound and \(\bowtie\) is \(\leq\) if it is a closed bound and \(<\) if it is an open bound.
If every bound in the intervals defining a box \(\mathcal{B}\) is either closed and bounded or open and unbounded, then \(\mathcal{B}\) represents a closed polyhedron.

The bounding box of an NNC polyhedron \(\mathcal{P}\) is the smallest \(n\)-dimensional box containing \(\mathcal{P}\).
The library provides operations for computing the bounding box of an NNC polyhedron and conversely, for obtaining the NNC polyhedron representing a given bounding box.
The second widening operator, that we call BHRZ03-widening, is an instance of the specification provided in [BHRZ03a]. This operator also requires as a precondition that one of its arguments is contained in the other, and it is guaranteed to provide a result which is at least as precise as the H79-widening.
Both widening operators can be applied to NNC polyhedra. The user is warned that, in such a case, the results may not closely match the geometric intuition which is at the base of the specification of the two widenings. The reason is that, in the current implementation, the widenings are not directly applied to the NNC polyhedra, but rather to their internal representations. Implementation work is in progress and future versions of the library may provide an even better integration of the two widenings with the domain of NNC polyhedra.
For instance, if \(\mathcal{P}\) and \(\mathcal{Q}\), with \(\mathcal{P} \subseteq \mathcal{Q}\), are the polyhedra represented by the PPL objects p and q, respectively, then the call q.H79_widening_assign(p) will assign the widened polyhedron to q. Namely, it is the bigger polyhedron q that gets widened with respect to the smaller polyhedron p (note that the call requires that q.contains(p) holds). Note that, in the above context, a call such as p.H79_widening_assign(q) is likely to result in undefined behavior, since the corresponding precondition p.contains(q) does not hold.
The library also supports an improved widening delay strategy, that we call widening with tokens [BHRZ03a]. A token is a sort of wildcard allowing for the replacement of the widening application by the exact upper bound computation: the token is used (and thus consumed) only when the widening would have resulted in an actual precision loss (as opposed to the potential precision loss of the classical delay strategy). Thus, all widening operators can be supplied with an optional argument, recording the number of available tokens, which is decremented when tokens are used. The approximated fixpoint computation will start with a fixed number of tokens, which will be used if and when needed. When there are no tokens left, the widening is always applied.
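For instance, a widening step with tokens might be coded as follows (a sketch; the variable names and the number of tokens are arbitrary, and in a real fixpoint computation the token counter would be kept across iterations):

    #include <ppl.hh>
    using namespace Parma_Polyhedra_Library;

    void widening_with_tokens(C_Polyhedron& q, const C_Polyhedron& p) {
      // Precondition: q.contains(p) must hold.
      unsigned tokens = 3;                // number of widening delays still available
      q.H79_widening_assign(p, &tokens);  // a token is consumed only if the widening
                                          // would have caused an actual precision loss
    }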
In particular, for each of the two widenings there is a corresponding limited extrapolation operator, which can be used to implement the widening ``up to'' technique as described in [HPR97]. Each limited extrapolation operator takes a constraint system as an additional parameter and uses it to improve the approximation yielded by the corresponding widening operator. Note that a convergence guarantee can only be obtained by suitably restricting the set of constraints that can occur in this additional parameter. For instance, in [HPR97] this set is fixed once and for all before starting the computation of the upward iteration sequence.
The bounded extrapolation operators further enhance each one of the limited extrapolation operators described above, by ensuring that their results cannot be worse than the smallest bounding box enclosing the two argument polyhedra.
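A sketch of the limited and bounded variants for the H79 widening (the ``up to'' constraint is our own illustration):

    #include <ppl.hh>
    using namespace Parma_Polyhedra_Library;

    void limited_widening(C_Polyhedron& q, const C_Polyhedron& p) {
      Variable x(0);
      Constraint_System cs;              // the ``up to'' constraints
      cs.insert(x <= 100);
      // Constraints in `cs' that are satisfied by both arguments
      // are preserved in the widened result.
      q.limited_H79_extrapolation_assign(p, cs);
      // The bounded variant would be invoked as:
      //   q.bounded_H79_extrapolation_assign(p, cs);
    }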
A convex polyhedron is said to be a bounded difference shape (BDS, for short) if and only if either
can be expressed as the intersection of a finite number of bounded difference constraints or
and
.
By construction, a BDS is always topologically closed. Under the usual set inclusion ordering, the set of all BDSs on the vector space is a lattice having the empty set
and the universe
as the smallest and the biggest elements, respectively. In theoretical terms, it is a meet sub-lattice of
, meaning that the intersection of a finite set of BDSs is still a BDS; on the other hand, in general the poly-hull of two BDSs is not a BDS. The smallest BDS containing a finite set of BDSs is said to be their bds-hull.
The PPL provides support for computations on the domain of rational bounded difference shapes that, in selected contexts, can achieve a better precision/efficiency ratio with respect to the corresponding computations on a domain of convex polyhedra. As far as the representation of the rational inhomogeneous term of each bounded difference is concerned, several rounding-aware implementation choices are available, including:
The user interface for BDSs is meant to be as similar as possible to the one developed for the domain of closed polyhedra: in particular, all operators on polyhedra are also available for the domain of BDSs, even though they are typically characterized by a lower degree of precision.
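As an illustration, here is a hedged sketch of the BD_Shape template class (the choice of mpq_class as the underlying numeric type is only one of the available possibilities):

    #include <ppl.hh>
    #include <gmpxx.h>
    using namespace Parma_Polyhedra_Library;

    void bds_example() {
      Variable x(0), y(1);
      BD_Shape<mpq_class> bds(2);        // the universe BDS in R^2
      bds.add_constraint(x - y <= 5);    // a bounded difference
      bds.add_constraint(x >= 0);        // unary constraints are also allowed
      bds.add_constraint(y <= 2);

      BD_Shape<mpq_class> other(2);
      other.add_constraint(x - y <= 3);
      bds.intersection_assign(other);    // the intersection of two BDSs is a BDS
    }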
The library also implements an extension of the widening operator for intervals as defined in [CC76]. The reader is warned that such an extension, even though being well-defined on the domain of BDSs, is not provided with a convergence guarantee and is therefore an extrapolation operator.
A set is called non-redundant with respect to `
' if and only if
and
. The set of finite non-redundant subsets of
(with respect to `
') is denoted by
. The function
, called Omega-reduction, maps a finite set into its non-redundant counterpart; it is defined, for each
, by
where denotes
.
As the intended semantics of a powerset domain element is that of disjunction of the semantics of
, the finite set
is semantically equivalent to the non-redundant set
; and elements of
will be called disjuncts. The restriction to the finite subsets reflects the fact that here disjunctions are implemented by explicit collections of disjuncts. As a consequence of this restriction, for any
such that
,
is the (finite) set of the maximal elements of
.
The finite powerset domain over a domain is the set of all finite non-redundant sets of
and denoted by
. The domain includes an approximation ordering `
' defined so that, for any
and
,
if and only if
Therefore the top element is the singleton \(\{\top\}\) and the bottom element is the empty set.
omega_reduce()
, e.g., before performing the output of a powerset element. Note that all the documented operators automatically perform Omega-reductions on their arguments, when needed or appropriate.

In addition to the operations described for the generic powerset domain in Section Operations on the Powerset Construction, we provide some operations that are specific to this instantiation. Of these, most correspond to the application of the equivalent operation on each of the NNC polyhedra that are in the given set. Here we just describe those operations that are particular to the polyhedra powerset domain.
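As an illustration, the following sketch builds a small powerset element and Omega-reduces it (the class is called Pointset_Powerset in recent versions of the library; older releases used a different name for the same construction):

    #include <ppl.hh>
    using namespace Parma_Polyhedra_Library;

    void powerset_example() {
      Variable x(0);
      // Start from the bottom element (no disjuncts) over NNC polyhedra in R^1.
      Pointset_Powerset<NNC_Polyhedron> ps(1, EMPTY);

      NNC_Polyhedron d1(1);
      d1.add_constraint(x > 0);
      NNC_Polyhedron d2(1);
      d2.add_constraint(x >= 0);

      ps.add_disjunct(d1);
      ps.add_disjunct(d2);  // d1 is entailed by d2, hence redundant
      ps.omega_reduce();    // a single disjunct (x >= 0) remains
    }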
BGP99_extrapolation_assign
is made parametric by allowing for the specification of a base-level extrapolation operator different from the H79 widening (e.g., the BHRZ03 widening can be used). Note that, in the general case, this operator cannot guarantee the convergence of the iteration sequence in a finite number of steps (for a counter-example, see [BHZ04]).

A finite convergence certificate for an extrapolation operator is a formal way of ensuring that such an operator is indeed a widening on the considered domain. Given a widening operator on the base-level domain, together with the corresponding convergence certificate, the BHZ03 framework shows how it is possible to lift this widening so as to work on the finite powerset domain, while still ensuring convergence in a finite number of iterations.
Being highly parametric, the BHZ03 widening framework can be instantiated in many ways. The current implementation provides the templatic operator BHZ03_widening_assign<Certificate, Widening>
which only exploits a fraction of this generality, by allowing the user to specify the base-level widening function and the corresponding certificate. The widening strategy is fixed and uses two extrapolation heuristics: first, the least upper bound is tried; second, the BGP99 extrapolation operator is tried, possibly applying pairwise merging. If both heuristics fail to converge according to the convergence certificate, then an attempt is made to apply the base-level widening to the poly-hulls of the two arguments, possibly improving the result obtained by means of the poly-difference operator. For more details and a justification of the overall approach, see [BHZ03b] and [BHZ04].
The library provides two convergence certificates: while BHRZ03_Certificate is compatible with both the BHRZ03 and the H79 widenings, H79_Certificate is only compatible with the latter. Note that using different certificates will change the results obtained, even when using the same base-level widening operator. It is also worth stressing that it is up to the user to ensure that the widening operator is actually compatible with a given convergence certificate. If such a requirement is not met, then an extrapolation operator (with no convergence guarantee) will be obtained.
The library supports two representations for the grids domain: congruence systems and grid generator systems. We first describe linear congruence relations, which form the elements of a congruence system.
Let . For each vector
and scalars
, the notation
stands for the linear congruence relation in
defined by the set of vectors
when , the relation is said to be proper;
(i.e., when
) denotes the equality
.
is called the frequency or modulus and
the base value of the relation. Thus, provided
, the relation
defines the set of affine hyperplanes
if ,
defines the universe
and the empty set, otherwise.
We also say that is described by
and that
is a congruence system for
.
The grid domain is the set of all rational grids described by finite sets of congruence relations in
.
If the congruence system \(\mathcal{C}\) describes \(\emptyset\), the empty grid, then we say that \(\mathcal{C}\) is inconsistent. For example, the congruence system containing the relation \(0 = 1\) and the congruence system containing, for some expression \(e\), both \(e \equiv_2 0\) and \(e \equiv_2 1\) (meaning that the value of the expression must be both even and odd) are both inconsistent, since both describe the empty grid.
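For instance, the second of these inconsistent systems can be built and checked as follows (a minimal sketch using the congruence syntax of the C++ interface):

    #include <ppl.hh>
    #include <cassert>
    using namespace Parma_Polyhedra_Library;

    void inconsistent_grid_example() {
      Variable x(0);
      Grid gr(1);                       // the universe grid in R^1
      gr.add_congruence((x %= 0) / 2);  // x is even
      gr.add_congruence((x %= 1) / 2);  // x is odd
      // No value can be both even and odd: the grid is empty.
      assert(gr.is_empty());
    }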
When ordering grids by the set inclusion relation, the empty set \(\emptyset\) and the vector space \(\mathbb{R}^n\) (which is described by the empty set of congruence relations) are, respectively, the smallest and the biggest elements of \(\mathbb{G}_n\). The vector space \(\mathbb{R}^n\) is also called the universe grid.

In set theoretical terms, \(\mathbb{G}_n\) is a lattice under set inclusion.
We denote by (resp.,
) the set of all the integer (resp., integer and affine) combinations of the vectors in
.
If are each finite subsets of
and
where the symbol '' denotes the Minkowski's sum, then
is a rational grid (see Section 4.4 in [Sch99] and also Proposition 8 in [BDHMZ05]). The 3-tuple
is said to be a generator system for
and we write
.
Note that the grid if and only if the set of points
. If
, then
where, for some
,
.
Similarly, a minimized generator system for
is such that, if
is another generator system for
, then
and
. Note that a minimized generator system for a grid has no more than a total of \(n + 1\) lines, parameters and points.
As for convex polyhedra, such changes of representation form a key step in the implementation of many operators on grids such as, for example, intersection and grid join.
In theoretical terms, the intersection and grid join operators defined above are the binary meet and the binary join operators on the lattice \(\mathbb{G}_n\).
Another way of seeing it is as follows: first embed grid into a vector space of dimension
and then add a suitably renamed-apart version of the congruence relations defining
.
The operator add_space_dimensions_and_embed embeds the grid \(\mathcal{L} \subseteq \mathbb{R}^n\) into the new vector space of dimension \(n + m\) and returns the grid defined by all and only the congruences defining \(\mathcal{L}\) (the variables corresponding to the added dimensions are unconstrained). For instance, when starting from a grid \(\mathcal{L} \subseteq \mathbb{R}^2\) and adding a third space dimension, the result will be the grid
\[
  \{\, (x_0, x_1, x_2)^T \in \mathbb{R}^3 \mid (x_0, x_1)^T \in \mathcal{L} \,\}.
\]
In contrast, the operator add_space_dimensions_and_project projects the grid \(\mathcal{L} \subseteq \mathbb{R}^n\) into the new vector space of dimension \(n + m\) and returns the grid whose congruence system, besides the congruence relations defining \(\mathcal{L}\), will include additional equalities on the added dimensions. Namely, the corresponding variables are all constrained to be equal to 0. For instance, when starting from a grid \(\mathcal{L} \subseteq \mathbb{R}^2\) and adding a third space dimension, the result will be the grid
\[
  \{\, (x_0, x_1, 0)^T \in \mathbb{R}^3 \mid (x_0, x_1)^T \in \mathcal{L} \,\}.
\]
Given a set of variables, the operator remove_space_dimensions
removes all the space dimensions specified by the variables in the set.
Given a space dimension \(m\) less than or equal to that of the grid, the operator
remove_higher_space_dimensions
removes the space dimensions having indices greater than or equal to \(m\).
map_space_dimensions
provided by the library maps the dimensions of the vector space
with . Dimensions corresponding to indices that are not mapped by
are removed.
If , i.e., if the function
is undefined everywhere, then the operator projects the argument grid
onto the zero-dimension space
; otherwise the result is a grid in
given by
expand_space_dimension
provided by the library adds
fold_space_dimensions
provided by the library, given a grid
where
for ,
,
,
and, for ,
,
,
The affine image operator computes the affine image of a grid under
. For instance, suppose the grid
to be transformed is the non-relational grid in
generated by the set of points
. Then, if the considered variable is
and the linear expression is
(so that
,
), the affine image operator will translate
to the grid
generated by the set of points
which is the grid generated by the point
and parameters
; or, alternatively defined by the congruence system
. If the considered variable is as before (i.e.,
) but the linear expression is
(so that
), then the resulting grid
is the grid containing all the points whose coordinates are integral multiples of 3 and lie on line
.
The affine preimage operator computes the affine preimage of a grid under
. For instance, suppose now that we apply the affine preimage operator as given in the first example using variable
and linear expression
to the grid
; then we get the original grid
back. If, on the other hand, we apply the affine preimage operator as given in the second example using variable
and linear expression
to
, then the resulting grid will consist of all the points in
where the
coordinate is an integral multiple of 3.
Observe that provided the coefficient of the considered variable in the linear expression is non-zero, the affine transformation is invertible.
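The following sketch (an example of our own) shows the grid versions of the two operators; as observed above, when the coefficient of the considered variable is non-zero the transformation is invertible:

    #include <ppl.hh>
    using namespace Parma_Polyhedra_Library;

    void grid_affine_example() {
      Variable x(0), y(1);
      Grid gr(2);                       // the universe grid in R^2
      gr.add_congruence((x %= 0) / 1);  // x is an integer
      gr.add_congruence((y %= 0) / 1);  // y is an integer: gr is the integral grid

      gr.affine_image(x, x + 2*y);      // maps (x, y) to (x + 2y, y)
      gr.affine_preimage(x, x + 2*y);   // invertible: back to the integral grid
    }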
Note that, when and
, so that the transfer function is an equality, then the above operator is equivalent to the application of the standard affine image of
with respect to the variable
and the affine expression
.
Suppose is a grid and
an arbitrary congruence system representing
. Suppose also that
is a congruence relation with
. The possible relations between
and
are as follows.
A grid subsumes a generator \(g\) if adding \(g\) to any generator system representing the grid does not change the grid.
Each bounded interval in determines a congruence
in
. Letting
be the vector in
with 1 in the
'th position and zeroes in every other position; if both the bounds of the interval are closed, then the congruence
is defined as
, where
is the value of the lower bound and
is the (non-negative) difference between the lower and upper bounds. If one of the bounds is open, then
is the congruence
representing the inconsistent equality
.
Let be the set of congruences defined by the bounded intervals in a rational box
; then we say that
represents the rational grid
. Any grid
that can be represented by a box is said to be rectilinear.
A covering box of a grid is a rational box representing the smallest rectilinear grid that contains
.
As for convex polyhedra, the library will provide operations for computing the rectilinear grid corresponding to a given box and, also, a covering box for any given grid.
For instance, if \(\mathcal{L}_1\) and \(\mathcal{L}_2\), with \(\mathcal{L}_1 \subseteq \mathcal{L}_2\), are the grids represented by the PPL objects l_1 and l_2, respectively, then the call l_2.grid_widening_assign(l_1) will assign the widened grid to l_2. Namely, it is the bigger grid l_2 that gets widened with respect to the smaller grid l_1.

In particular, for each grid widening that is provided, there is a corresponding limited extrapolation operator, which can be used to implement the widening ``up to'' technique as described in [HPR97]. Each limited extrapolation operator takes a congruence system as an additional parameter and uses it to improve the approximation yielded by the corresponding widening operator. Note that, as in the case for convex polyhedra, a convergence guarantee can only be obtained by suitably restricting the set of congruence relations that can occur in this additional parameter.
The bounded extrapolation operators further enhance each one of the limited extrapolation operators described above, by ensuring that their results cannot be worse than the smallest rectilinear grid that contains the two argument grids.
In earlier versions of the library, a number of operators were introduced in two flavors: a lazy version and an eager version, the latter having the operator name ending with _and_minimize
. In principle, only the lazy versions should be used. The eager versions were added to help a knowledgeable user obtain better performance in particular cases. Basically, by invoking the eager version of an operator, the user trades laziness for a better exploitation of the incrementality of the inner library computations. Starting from version 0.5, the lazy and incremental computation techniques have been refined to achieve a better integration: as a consequence, the lazy versions of the operators are now almost always more efficient than the eager versions.
One of the cases when an eager computation still makes sense is when the well-known fail-first principle comes into play. For instance, if you have to compute the intersection of several polyhedra and you strongly suspect that the result will become empty after a few of these intersections, then you may obtain a better performance by calling the eager version of the intersection operator, since the minimization process also enforces an emptiness check. Note anyway that the same effect can be obtained by interleaving the calls of the lazy operator with explicit emptiness checks.
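For instance, the interleaving mentioned above might be coded as follows (a sketch; the container and the function name are our own):

    #include <ppl.hh>
    #include <vector>
    using namespace Parma_Polyhedra_Library;

    // Intersect `result' with each polyhedron in `others', stopping as soon
    // as the result is detected to be empty (fail-first).
    void intersect_all(C_Polyhedron& result,
                       const std::vector<C_Polyhedron>& others) {
      for (std::vector<C_Polyhedron>::const_iterator i = others.begin(),
             i_end = others.end(); i != i_end; ++i) {
        result.intersection_assign(*i);  // lazy version: no explicit minimization
        if (result.is_empty())
          break;
      }
    }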
virtual). In practice, this restriction means that the library types should not be used as public base classes to be derived from. A user willing to extend the library types, adding new functionalities, often can do so by using containment instead of inheritance; even when there is the need to override a protected method, non-public inheritance should suffice.

    // Find a reference to the first point of the non-empty polyhedron `ph'.
    const Generator_System& gs = ph.generators();
    Generator_System::const_iterator i = gs.begin();
    for (Generator_System::const_iterator gs_end = gs.end(); i != gs_end; ++i)
      if (i->is_point())
        break;
    const Generator& p = *i;
    // Get the constraints of `ph'.
    const Constraint_System& cs = ph.constraints();
    // Both the const iterator `i' and the reference `p'
    // are no longer valid at this point.
    cout << p.divisor() << endl;  // Undefined behavior!
    ++i;                          // Undefined behavior!

In the code above, the call obtaining the constraint system of ph may have the side effect of modifying its generator system, thereby invalidating both the const iterator i and the reference p. Anyway, if really needed, it is always possible to take a copy of, instead of a reference to, the parts of interest of the polyhedron; in the case above, one may have taken a copy of the generator system by replacing the second line of code with the following:

    Generator_System gs = ph.generators();
polymake
: a framework for analyzing convex polytopes. In G. Kalai and G. M. Ziegler, editors, Polytopes - Combinatorics and Computation, pages 43-74. Birkhäuser, 2000.
polymake
: an approach to modular software design in computational geometry. In Proceedings of the 17th Annual Symposium on Computational Geometry, pages 222-231, Medford, MA, USA, 2001. ACM.
implies that, for each ,
,
,
.
The maximum number of linearly independent points in \(\mathbb{R}^n\) is \(n\). Note that linear independence implies affine independence, but the converse is not true.
Proposition
If \(A\) is an \(m \times n\) matrix, the maximum number of linearly independent rows of \(A\), viewed as vectors of \(\mathbb{R}^n\), equals the maximum number of linearly independent columns of \(A\), viewed as vectors of \(\mathbb{R}^m\).
Proposition
A polyhedron is a convex set.
Informally, this theorem states that, whenever a polyhedron has a vertex, there exists a decomposition such that
The conditions that is not empty and
are equivalent to the condition that
has a vertex. (See also Nemhauser and Wolsey - Integer and Combinatorial Optimization - propositions 4.1 and 4.2 on pages 92 and 93).
Proposition
Under the same hypotheses of Minkowski's theorem, if is a rational polyhedron then all the vertices in
have rational coefficients and we can consider a set
of extreme rays having rational coefficients only.
The second theorem, called Weyl's theorem, states that any system of generators having rational coefficients defines a rational polyhedron:
then is a rational polyhedron.
In fact, since consists of the sum of convex combinations of the rows of
with positive combinations of the rows of
, we can think of
as the matrix of vertices and
as the matrix of rays.
A polyhedral cone is either pointed, having the origin as its only vertex, or has no vertices at all.
and it is denoted by .
where: ;
is the
matrix having, for its first
rows, the submatrix
; and, for the (
)'st row,
where
. We call
the corresponding polyhedral cone for
.
The ()'st row
represents the positivity constraint
.
Note that is contained in
since the intersection of
with the hyperplane defined by the equality
is
. Therefore, it is always possible to transform a polyhedron
to its corresponding polyhedral cone
and then recover
by means of this intersection.
As always includes the origin and, hence, is non-empty, by Minkowski's theorem, it can also be represented by a system of generators.
The systems of generators for and
are such that:
Thus, in the cone , a ray derived from a vertex in
differs from a ray derived from a ray in
only in that, for a vertex, the (
)'st term is different from zero and, for a ray, it is zero.
Note that, in a double description for a non-empty polyhedron, the system of constraints subsumes the positivity constraint while the system of generators (which has only rays and lines corresponding to the vertices, rays and lines for
) implicitly assumes the origin in
as a point so that the cone
represented by the generators is non-empty.
Given a polyhedron generated by
vertices,
rays and
lines, we say that:
Note that, in the PPL representation of a polyhedron , vertices are represented as rays so that this concept of a redundant ray also applies to the vertices of
.
When is non-empty, we say that
supports
.
The empty polyhedron and the universe polyhedron both have no proper faces, because the only face of an empty polyhedron is itself, while the faces of the universe polyhedron are itself and the empty set.
Let be a non-empty polyhedron. The set
where is a point of
and the symbol '
' denotes the Minkowski's sum, is a minimal proper face of the polyhedron if
is a proper face of
.
Proposition
Let \(\mathcal{P}\) be a polyhedron in \(\mathbb{R}^n\). The set of all faces is a lattice under inclusion: the minimal face is the empty set, while the maximal face is the polyhedron itself.
Proposition
Let be a polyhedron in
and
be the polyhedral cone in
obtained from
by homogenization, then:
Thus a polyhedron can always be decomposed into its
and its
.
Note that, since and
are polyhedra, their affine dimensions can be computed using the definition of affine dimension given for polyhedra.
The spaces defined are connected by some consistency rules shown below.
The proofs of these properties can be obtained considering the definitions of affine dimension and the decomposition of a polyhedron.
Similarly, considering an equality :
A constraint (i.e., an equality or an inequality) is satisfied by a ray if the ray saturates or verifies the constraint.
Proposition
Let be a polyhedral cone and
. If the sets
with
are proper faces of
,
is equal to
if and only if the set of constraints that are saturated by
is equal to the set of constraints that are saturated by
.
For instance, in the saturation matrix sat_g, the elements are defined as follows:
For efficiency reasons, the PPL uses both the sat_g and sat_c matrices.
These rules are a consequence of the saturation concept.
Proposition
Let be a polyhedral cone. Then the minimal proper face of
in an
-dimensional space can also be represented as
To see this, note that the minimal proper face of a polyhedral cone is equal to its lineality space. By definition, the lineality space consists of all the vectors \(\vec{x}\) that satisfy
To remove redundant constraints/generators we will use the following characterization:
It is useful to note that:
In order not to depend on the particular family of floating point numbers considered, we consider an abstraction that is parametric in the number of bits in the mantissa and gives no limit to the magnitude of the exponent
. For
let
Let denote the function defined by
Notice that is an odd function, that is, it satisfies
for all
. For
,
with
, we also write
These are the integer division and remainder functions as defined by the C99 standard [ISO/IEC 9899:1999(E), Programming Languages - C (ISO and ANSI C99 Standard)].
Proposition A If ,
and
, then
.
The proof is given in the next three lemmas.
Lemma 1 Let . Then
. Furthermore, if
then there exist
and
such that
.
Proof Let . There is a non negative integer
such that
. Then
with
and
. Here
so that
. The same argument shows that odd integers larger than
do not in fact belong to
, since the corresponding value of
would exceed the bound
in the definition.
For the second part, let . Let
with
odd and
. Then
is an odd integer that belongs to
since
, using the first part. Hence we may take
which is non negative since otherwise
would not be an integer as assumed.
Lemma 2 If ,
and
does not divide
, then
.
Proof By Lemma 1 above we may assume that and
with
,
odd integers, and
,
. Let
. The goal is to prove that
: we may assume that
, that is, that
for otherwise
and there is nothing to prove.
In other words, this integer is and therefore it is smaller than
.
In all cases, we wrote as the product of a power of 2 and an element of
, and this product is another element of
.
Lemma 3 For ,
with
, we have
Proof Throughout the proof we write and
. First, assume that
and that
. Let
, by the property above. We have
Next, assume that and that
. Let
. We have
Finally, assume that and that
. Let
, again by the property above. We have
This completes the proof.
Lemma 4 If ,
then
.
Proof Let and
with
,
odd integers, and
,
. Then
, and therefore it belongs to
, since
so that it belongs to
.
Lemma 5 If ,
, then
.
Proof With the same notation as in the previous Lemma, both and
: but all positive odd integers up to and including
belong to
, so that
does as well. By Lemma 1
.