----------------
The original LP is
min 1*x
s.t. A x = 0
0 != x >= 0
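A numerical sanity check of this LP (a sketch: the extra constraint sum(x) = 1 is my stand-in for the x != 0 condition, and the matrix A below is a made-up 3-cycle example):

import numpy as np
from scipy.optimize import linprog

A = np.array([[ 1., -1.,  0.],
              [ 0.,  1., -1.],
              [-1.,  0.,  1.]])   # made-up: the 3-cycle difference matrix
m, n = A.shape

res = linprog(c=np.ones(n),                          # min 1*x
              A_eq=np.vstack([A, np.ones((1, n))]),  # A x = 0, plus sum(x) = 1
              b_eq=np.append(np.zeros(m), 1.0),
              bounds=[(0, None)] * n)                # x >= 0
print(res.status, res.x)   # status 0 => a nonzero x >= 0 with A x = 0 exists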
----------------
The "SDP" would be
min I*X   // I is the n*n identity matrix; X is an n*n diagonal matrix
s.t. A_i . X >>= 0 for all i = 1..nrows(A)   // A_i is the diagonal matrix with diag(A_i) = the i-th row of A; >>= 0 denotes positive semidefinite
0 != X >= 0
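A CVXPY sketch of this conversion, under my own reading (all assumptions: X = Diag(x), the objective I*X is tr(X), the constraints are taken as the equalities tr(A_i X) = 0 mirroring A x = 0, and tr(X) = 1 stands in for X != 0, which makes the program a pure feasibility check):

import numpy as np
import cvxpy as cp

A = np.array([[ 1., -1.,  0.],
              [ 0.,  1., -1.],
              [-1.,  0.,  1.]])   # same made-up example as above
m, n = A.shape

X = cp.Variable((n, n), PSD=True)                              # X >>= 0
cons = [cp.trace(np.diag(A[i]) @ X) == 0 for i in range(m)]    # A_i . X = 0 mirrors A x = 0
cons.append(cp.trace(X) == 1)                                  # stands in for X != 0
prob = cp.Problem(cp.Minimize(cp.trace(X)), cons)              # I*X = tr(X)
prob.solve()
print(prob.status, np.diag(X.value))   # the diagonal of X is a candidate x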
----------------
Since this conversion is trivial, can we do something more clever?
How can we schedule the points using SDP?
To do: read Lovász's notes, Vazirani's chapter, and the chapter from complexity-approximation. See if SDP is covered in Schrijver's new book.
--
Start with the example in CM-Z
--
Zero-cycle and eigenvalues/vectors: the zero-cycle equation is Ax = 0. Compare this with the eigenvalue equation Ax = (lambda)x. This suggests that, for a graph with zero-cycles, 0 is an eigenvalue of A, and the zero-cycle solutions x lie in the null space of A. (It cannot mean that the eigenvectors are zero vectors: eigenvectors are nonzero by definition.) Is this right? What could be a better classification?
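Concretely: a nonzero solution of A x = 0 exists iff 0 is an eigenvalue of the (square) matrix A. A numpy check on the same toy matrix:

import numpy as np

A = np.array([[ 1., -1.,  0.],
              [ 0.,  1., -1.],
              [-1.,  0.,  1.]])   # made-up singular matrix

evals, evecs = np.linalg.eig(A)
k = np.argmin(np.abs(evals))      # index of the eigenvalue closest to 0
print(evals[k])                   # ~ 0: A is singular, so a zero-cycle vector can exist
x = evecs[:, k].real              # eigenvector for lambda = 0 spans the null space
print(A @ x)                      # ~ (0, 0, 0): a nonzero solution of A x = 0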
--
Approach 1:
What happens if we actually take the ZC equation and solve it for eigenvalues?
Two problems: (1) The block matrix is not necessarily square. Suppose, for now, that we add additional edges/nodes to make it square.
(2) The block matrix is certainly not symmetric. This is not a big problem, as it will just lead to complex eigenvalues/eigenvectors.
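A tiny sketch of Approach 1 (assumption mine: padding with zero rows stands in for the added fictitious nodes/edges):

import numpy as np

B = np.array([[1., -1., 0.],
              [0.,  1., -1.]])    # made-up 2x3 block matrix: not square

n = max(B.shape)
S = np.zeros((n, n))
S[:B.shape[0], :B.shape[1]] = B   # pad with zero rows to make it square

print(np.linalg.eig(S)[0])        # nonsymmetric, so eigenvalues are complex in general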
Approach 2:
The usual method.
From the block matrix, we can make the following observations.
The top part of the block matrix just represents a path in the directed graph. The bottom part represents the weighted sums of the components of the dependence vectors (i.e., the weighted sum of the first component of all the dependence vectors, the weighted sum of the second component of all the dependence vectors, etc.).
Top part.
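A sketch assembling that block matrix from an edge list (layout assumed: top block = node-edge incidence, bottom block = one row per component of the d-dimensional dependence vectors; the PRDG below is made up):

import numpy as np

# hypothetical PRDG: edges (u, v, w) with 2-dimensional dependence vectors
edges = [(0, 1, ( 1,  0)),
         (1, 2, ( 0,  1)),
         (2, 0, (-1, -1))]
n_nodes, d = 3, 2

inc = np.zeros((n_nodes, len(edges)))   # top block: node-edge incidence
W   = np.zeros((d, len(edges)))         # bottom block: weight components
for j, (u, v, w) in enumerate(edges):
    inc[u, j] -= 1                      # edge leaves u
    inc[v, j] += 1                      # edge enters v
    W[:, j] = w

A = np.vstack([inc, W])   # A x = 0, x >= 0, x != 0 iff x selects a zero-cycle
print(A)
print(A @ np.ones(3))     # x = (1,1,1) uses every edge once: a zero-cycle here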
--
To do: what is meant by the trace of a directed graph? Given that we need a square matrix, should we look at the adjacency representation rather than the incidence matrix representation?
The important question may be: what does a zero-cycle mean in terms of the eigenvalues/eigenvectors?
--
Think of a directed graph represented as an adjacency matrix, with each non-zero element annotated by the weight of that particular edge (the weight could be a d-dimensional vector). Now formulate the problem of zero-cycle detection on this matrix.
We have to find an n*n matrix X which has a non-zero entry exactly when that particular edge is "touched" in a zero-cycle. Further, the non-zero entry equals the number of times that edge is touched. Given such a matrix X, it is easy to detect the zero-cycle (find the directed MST?).
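A sketch of the detection step under this reading (assumptions mine: X must be a circulation, i.e. in-flow = out-flow at every node, and the multiplicity-weighted sum of the edge vectors must vanish):

import numpy as np

# made-up adjacency-style weights: weight[(i, j)] = d-dim vector on edge i -> j
weight = {(0, 1): np.array([ 1,  0]),
          (1, 2): np.array([ 0,  1]),
          (2, 0): np.array([-1, -1])}

X = np.zeros((3, 3))
for (i, j) in weight:
    X[i, j] = 1            # each edge touched once: a candidate zero-cycle

balanced = np.allclose(X.sum(axis=0), X.sum(axis=1))          # in-flow = out-flow
total = sum(X[i, j] * w for (i, j), w in weight.items())      # weighted edge-vector sum
print(balanced and np.allclose(total, 0))   # True => X encodes a zero-cycle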
--
Both self-loops and multiple edges can be removed by a simple method of adding fictitious nodes and edges (one possible rule is sketched below). We are left with a simple graph (not a multigraph) whose adjacency matrix has all 0's on the diagonal. Let us call such a matrix A.
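One possible splitting rule, as a sketch (the rule itself is my assumption: each self-loop or parallel edge gets a fresh midpoint node, with the original weight on one half and zero on the other):

# made-up multigraph: (u, v, weight), with one parallel edge and one self-loop
edges = [(0, 1, 2), (0, 1, 3), (1, 1, 5)]
n = 2                                          # nodes 0 and 1

seen, simple = set(), []
for (u, v, w) in edges:
    if u == v or (u, v) in seen:               # self-loop or repeated edge
        mid, n = n, n + 1                      # fresh fictitious node
        simple += [(u, mid, w), (mid, v, 0)]   # split the edge through it
    else:
        seen.add((u, v))
        simple.append((u, v, w))
print(simple)   # a simple digraph: zero diagonal in its adjacency matrix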
Let X be an n*n matrix which has a non-zero element at position (i,j) iff the edge between vertices i and j is part of a zero-cycle. Are we minimizing the program? How do we schedule it?
Now, is the matrix A semi-definite? What does X >>= 0 mean here?
Define the definiteness of directed and undirected graphs.
--
Define a directed graph whose edges are annotated with scalar weights. What can we say about the SDness of the adjacency matrix?
--
The spectrahedra of undirected graphs have been well studied. What about the spectrahedra of directed graphs, and what about the eigenvalues of such graphs?
--
Since we are starting with matrices whose diagonal elements are zero, what can we say about the eigenvalues of each matrix? There are no self-loops => the diagonal elements of the matrix are zeroes => the coefficient of the lambda^(n-1) term of the characteristic polynomial is (up to sign) a11+a22+a33+... = tr(A) = 0, so the eigenvalues sum to zero. Also, ...
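A quick numerical check that zero diagonal forces the eigenvalues to sum to zero (tr(A) = sum of eigenvalues):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
np.fill_diagonal(A, 0)                          # no self-loops => zero diagonal => tr(A) = 0

print(np.trace(A), np.linalg.eig(A)[0].sum())   # both ~ 0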
--
Look up the relationship between unitary PRDGs and Hadamard matrices. Look it up in Stanley's book and in Knuth.
--
min I @ X
s.t. A_1 @ X >>= 0
A_2 @ X >>= 0
A_3 @ X >>= 0
...
A_d @ X >>= 0
X >>= 0
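A CVXPY transcription under one concrete reading (all of it assumed: @ taken as the elementwise product, the objective read as tr(X) as in the earlier formulation, each A_i @ X symmetrized so the semidefinite relation is well-posed, and the X != 0 side condition omitted):

import numpy as np
import cvxpy as cp

# made-up component adjacency matrices A_1, A_2 of a 3-node PRDG
As = [np.array([[0., 1., 0.], [0., 0., 0.], [-1., 0., 0.]]),
      np.array([[0., 0., 0.], [0., 0., 1.], [-1., 0., 0.]])]
n = 3

X = cp.Variable((n, n), PSD=True)          # X >>= 0
cons = []
for A in As:
    E = cp.multiply(A, X)                  # A_i @ X, read elementwise (assumption)
    Y = cp.Variable((n, n), PSD=True)      # Y >>= 0 enforces A_i @ X >>= 0
    cons.append(Y == (E + E.T) / 2)        # symmetrized version of the product
prob = cp.Problem(cp.Minimize(cp.trace(X)), cons)
prob.solve()
print(prob.status)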
--
Now, A_i can be thought of as the adjacency matrix of the PRDG, which is a directed graph, annotated with the weights of the i-th component. It has no self-loops, which means its diagonal elements are zeroes, so the trace of the matrix A_i is zero. Such a matrix is called trace-free (see the trace link); via the characteristic polynomial of A_i, its eigenvalues sum to zero. A useful property of the trace: tr(A^T B) = sum_ij A_ij B_ij (the Frobenius inner product), so for symmetric A, tr(A B) = tr(A^T B).
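A quick check of those trace facts:

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
print(np.isclose(np.trace(A.T @ B), (A * B).sum()))   # Frobenius inner product
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))   # cyclic property of trace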
--
The link on characteristic polynomials is nice.
Question: many things seem to depend on the positive definiteness of the matrix A_i. How can we prove that it is positive definite? If it is not +ve SD, can we apply a transformation to the PRDG so that the resultant graph is +ve SD?
Question: what can you say about the lambda_min of a directed graph? What can you say about the definiteness of a non-symmetric matrix? In particular, of a matrix whose trace is zero?
--
According to this link:
Most of the eigenvalue optimization theory has been developed for real, symmetric matrices. It is known that such matrices have real eigenvalues. Unsymmetric matrices, on the other hand, have complex eigenvalues in general. It is possible, however, to translate the constraint that the real parts of the eigenvalues of a real unsymmetric matrix (say A) be negative into a positive definiteness condition on a real symmetric matrix (P) through Lyapunov's matrix equality (2). Since it is a necessary and sufficient condition for a real symmetric matrix to be positive definite that its eigenvalues be positive, the condition on the eigenvalues of the "difficult" unsymmetric matrix A is translated into another condition on the eigenvalues of the "not-so-difficult" symmetric matrix P. In order to avoid the potential non-smoothness arising in eigenvalue optimization, interior-point / logarithmic-barrier-transformation techniques have been successfully applied (Ringertz, 1997). For a comprehensive reference on interior-point optimization, see Fiacco and McCormick (1990). Making use of logarithmic and matrix determinant properties, it will be shown that the potentially non-smooth constraints on the eigenvalues of matrix P may be expressed in terms of the determinant of matrix P, which is a smooth function of the optimization variables.
Lyapunov's matrix equality is the following: matrices A, P and Q are related through A^T P + P A = -Q. Read Lovász's notes on the same. What does it mean for a directed graph? How can you "make" an undirected graph which is equivalent to it?
According to Horn's Topics in Matrix Analysis,
G A + A* G = H is Lyapunov's equation (A* means the conjugate transpose, i.e. the Hermitian adjoint, not just the transpose: check).
It says that A is positive stable iff there exist a positive definite G of size n*n and a positive definite H satisfying it.
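A numerical version of that criterion (a sketch: scipy's Lyapunov solver, H = I as the chosen positive definite right-hand side, and a made-up positive stable A; real case, so A* is just A^T):

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[2., -1.],
              [1.,  3.]])   # made-up; both eigenvalues have positive real part

G = solve_continuous_lyapunov(A.T, np.eye(2))   # solves A^T G + G A = I
print(np.linalg.eigvalsh(G))                    # all positive <=> A positive stable
print(np.linalg.eigvals(A).real)                # cross-check: positive real parts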
Hurwitz matrix
+ve cycles and stability of systems are equivalent!!!!!