Thursday, September 22, 2005

Cornuejols book: Combinatorial Optimization

I have been reading this book on and off. Very nice presentation, just like the other books in the CBMS-NSF series -- Tarjan's book, for instance.

The interesting part is the material in chapter 6, which talks about {0,+1,-1} matrices. The picture on page 50 gives the relationship between the classes of {0,+1,-1} matrices. The four classes used in the book are PERFECT, IDEAL, BALANCED and TOTALLY UNIMODULAR. A balanced matrix is both perfect and ideal. The set of matrices which are TUM is a (strict) subset of balanced matrices.

I have talked about TUM matrices in a couple of posts. The posts are here. In this post we will look at generalizations of TUM matrices.

Let n(A) be a function which takes a {0,+1,-1} matrix A and returns a column vector whose ith component is the number of -1's in the ith row of A (so that 1 - n(A) has the same dimension as Ax).

Now, compare Ax and 1 - n(A) for some x in {0,1}^n. If

  • the former wins, we have an ideal matrix --> COVERING
  • the latter wins, we have a perfect matrix --> PACKING (how is it related to perfect graphs?)
  • if they are tied, we have a balanced matrix.
    Refer to the matrix (on page 61) which is called the 0,1 extension of a {0,+1,-1} matrix.
    A bipartite graph is a perfect graph. What about directed bipartite graphs?
    The book uses the term bigraph. According to Wolfram, a bigraph is the same as a bipartite graph, which is possibly incorrect here: in the book, a bigraph has signs on its edges.

    Partially integral Polytopes: Our incidence matrix induces a polytope. We do not know if the vertices of this polytope are integral or not. The dimension of this polytope is d. However, this polytope is not as important as the polytope of (n+d) dimensions. The interesting thing is, this (n+d)-dim polytope is partially integral, as the n vertices always induce integral vertices (IS THIS RIGHT???); the remaining question is: what about the d vertices?
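To make the covering/packing comparison above concrete, here is a small sketch of my own (not from the book; the function names are hypothetical) that computes n(A) and evaluates the two sides for a given 0/1 vector x:

```python
import numpy as np

def n_of(A):
    # n(A): column vector whose i-th component is the number of -1's
    # in the i-th row of the {0,+1,-1} matrix A
    return (np.asarray(A) == -1).sum(axis=1)

def compare(A, x):
    # Compare Ax with 1 - n(A) componentwise, as in the generalized
    # covering (Ax >= 1 - n(A)) and packing (Ax <= 1 - n(A)) systems
    A = np.asarray(A)
    lhs = A @ np.asarray(x)
    rhs = 1 - n_of(A)
    return {"covering": bool(np.all(lhs >= rhs)),
            "packing": bool(np.all(lhs <= rhs))}

# Example: a {0,+1,-1} matrix and a 0/1 point
A = [[1, -1, 0],
     [0, 1, 1]]
print(n_of(A))                 # -> [1 0]
print(compare(A, [1, 1, 0]))   # -> {'covering': True, 'packing': True}
```

When both inequalities hold with equality for every row, the point is "tied" in the sense of the balanced case above.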
    Wednesday, September 21, 2005

    Schrijver's book, Tardos result, and {0,+1,-1} matrices

    Schrijver's book has a chapter 15 which is titled Further polynomiality results in linear programming. In that chapter, he covers some additional methods of solution for the LP problem -- as opposed to the well known results by Khachiyan, Karmarkar and others -- namely Megiddo's result and Tardos' result. My post on Megiddo's result is this. The result by Tardos (corollary 15.3a of the book) is significant as it gives a characterization of {0,+1,-1} matrices. It is in pages 195-199 of Schrijver's book. Megiddo's result follows in pages 199-204. The following is the statement we are interested in:

    Corollary 15.3a (Tardos' result): There exists a strongly polynomial time algorithm for rational LP problems with a {0,+1,-1} constraint matrix.
    Intuition: The intuition for this corollary is pretty clear: Khachiyan's method is a polynomial time algorithm, but not a strongly polynomial time algorithm. The running time of Khachiyan's algorithm -- for a problem of the type Ax <= b -- depends not just on the size of the matrix A, meaning the number of rows and columns of A, but also on the number of bits used in encoding the matrix A. OTOH, when we restrict the matrix A to be a {0,+1,-1} matrix, the encoding length of A is bounded by a polynomial in its dimensions, so the running time becomes polynomial in the dimensions alone, i.e. strongly polynomial.

    The other contribution of Tardos is that the algorithm's running time is independent of both the size of b and the number of bits used in encoding b. This is an important contribution, as we can reduce the problem complexity to that of a homogeneous system.
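The point about encoding length can be made concrete. For a {0,+1,-1} matrix, writing A down takes at most a constant number of bits per entry, so any bound that is polynomial in the encoding length of A is automatically polynomial in the dimensions alone. A hypothetical sketch (the function name and bit-counting convention are mine):

```python
def encoding_length(M):
    # Rough bit count to write down an integer matrix:
    # one sign bit plus the magnitude bits of each entry.
    return sum(abs(x).bit_length() + 1 for row in M for x in row)

# A {0,+1,-1} matrix: at most 2 bits per entry, so O(rows * cols) bits
small = [[0, 1, -1], [1, 1, 0]]
# Same dimensions, but huge entries: far more bits for the same shape
big = [[0, 2**100, -(2**100)], [2**100, 2**100, 0]]

print(encoding_length(small))  # 10
print(encoding_length(big))    # 410
```

The two matrices have identical dimensions, yet their encoding lengths differ wildly; restricting entries to {0,+1,-1} collapses that gap.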

    Look up whether Schrijver's new book has any more information on the same.
    What does Schrijver's book say about hypergraphs?

    Schrijver's new book also has a lot of information on these questions, esp. in chapters 21, 24, etc.

    Are we searching for multicommodity flows?

    Saturday, September 17, 2005

    Srimad Bhagavata-anta

    In a previous post, I had briefly compared Datta-Atreya and Suka. In this post we will see how, in the 12th chapter of Srimad-Bhagavatha, Datta-Atreya teaches the essence of Advaita.

    First, an explanation of the title of the post. The interpretation of the Upanishads as Vedanta is quite well known. The title Bhagavatha-anta can likewise be interpreted as follows: Suka and Vyasa have come to a conclusion in the Bhagavatha, and what follows in chapter 12 is the essence of their conclusion.

    Srimad-Bhagavatha is also well known for its many discrete, and sometimes contradicting, schools of thought. More notes are here. As explained in the previous post, the conclusion of the Bhagavatha can be thought of as the highly theistic chapters 10 and 11, where the story of Lord Krishna is told. In chapter 12, however, there is the advaitic conclusion of the Bhagavatha, as explained to Uddhava by Lord Krishna himself. The dialogue seems to happen at the time when Lord Krishna is about to leave his material body.

    The Avadhutha describes how he learnt all he wanted from the world itself. He cites 24 examples (I am sure this 24 has a metaphorical meaning: like the number of letters in the Gayathri-Mantra) of his learnings from the world. He gives simple examples of teachings which emphasize detachment, etc. This is a post about the 24 teachings the Avadhuta explained to Yadu.

    Ray Miller's Memoirs

    The link to a function at UMD to commemorate the career of Ray Miller is this. The following is the introduction to the function:

    Prof. Ray Miller, who retired from our department in 2002, is truly one of the pioneers of the field of computer science. He has had three successful careers – the first at IBM, where he helped build one of the world’s foremost computer science research programs and worked with many of the other leading theoretical computer scientists of his generation – people like Dick Karp, Shmuel Winograd and Nick Pippenger. Ray then had a second career at Georgia Tech,

    This is a picture of Karp and Miller. The following "relevant part" is excerpted from pages 25-26 of memoirs of Ray Miller (a PDF link):

    Upon returning to IBM Research from Cal Tech in the summer of 1963 I was ready to look into some new areas of research. Thus, it was quite appealing when Herman Goldstine, the Director of the Mathematical Sciences Department, suggested to Dick Karp, Sam Winograd, Larry Horwitz and me that looking into parallel computation might be of some interest. Even though this was long before parallel computation was on the minds of many, there was already a clear indication that parallelism could be used to speed up some special purpose computational tasks. IBM had a product developed for the oil industry called the "convolver box". This was a special purpose device that could be attached to the main bus of an IBM computer to do convolutions very rapidly. Convolution was used extensively in oil exploration for the analysis of soundings taken over land and sea to show the deep structure of the earth layers and expose potential locations where oil might be found. With this suggestion we started to look at parallel computation. We designed approaches, algorithms and designs, for many different special purpose computations: parenthesis checking, macro-instruction execution, the Cooley-Tukey convolution algorithm, and others. Our design for the Cooley-Tukey algorithm even rated a footnote in their original paper "An Algorithm for the Machine Computation of Complex Fourier Series" in Mathematics of Computation, April 1965. We published a paper in JACM on uniform recurrence equations that described how parallelism could be used to speed up the numerical computation of systems of differential equations. After a while, the designing of parallel algorithms for these various tasks seemed to be getting quite repetitive to Dick Karp and me, and this led us to realize that there was a hidden approach that we seemed to be using over and over. 
We formulated this underlying structure, which we called "computation graphs" and published the paper, "Properties of a Model of Parallel Computations: Determinacy, Termination, Queueing" in the SIAM Journal of November 1966. In 1966 this was the only SIAM Journal, but now they publish a number of journals in specialized areas. Computation graphs proved to be quite effective for designing inner loops of computations, but were limited in not having more general computation structures, such as conditional branching, that were needed for more general computations. This led us to a more general model that we termed, "Parallel Program Schemata". We wrote a long paper about these schemata which was finally published in JCSS in May 1969. I continued to work on parallelism even though Dick Karp decided to leave IBM to become a faculty member at Berkeley with a joint appointment in computer science and operations research. I thought it was quite surprising that Dick left IBM at that time because within only a few more months he would have had 10 years service with IBM and thus been vested for retirement benefits. I wrote another paper on parallel program schemata that discussed some undecidability results based on a schema not possessing the property that we called "repetition freeness", and this paper was published in the first issue of the SIAM Journal of Computing in March 1972.

    Saturday, September 10, 2005

    Tell a story

    It was the campus of the top graduate school of India. Raj and Ram were walking together, back to their hostel rooms. They were good friends and had endured the hard first semester together. They thought they knew everything about each other.

    The time now was 2am. They had just submitted their programming assignment at 11:59pm the previous day, just before the midnight deadline. Then they started their next assignment, which was due this weekend.

    They were both tired. A cold breeze was blowing at that time in the beautiful campus in Bangalore. They were in the second semester of their stay. They had both had to pass an All India entrance test and get top ranks to get into the computer science department of the top graduate school in India.

    Suddenly Raj said, "I saw that girl again today, she was wearing the same offensive t-shirt 1". Though Ram was too sleepy to respond, he asked, "So you named it t-shirt 1?". They were talking about a girl they used to notice sometimes, a girl who used to sit next to them in the common labs.

    The common labs used to have students from departments other than computer science too. Not many computer science students wanted to go to that lab. One reason was that the machines were slow. It was also a kind of prestige question for CS students to sit in a common lab with students of other departments. Some of these students were noisy and came to the labs only to have fun. Most importantly, many of the machines there were running Microsoft, aka MACRO-SLOW. The CS labs, on the other hand, were quiet and ran the revolutionary OS Linux, and most of the students there were sober.

    That night, both Ram and Raj had started late. So their own labs were full and they had to sit in the common lab.

    Ram asked, "What were you saying about the t-shirt?" Raj responded, "I have named the t-shirt 'Do you think that ' as t-shirt 1. Also I have named 'CS graduates are .... '". Ram said, "I thought you were too busy doing the assignment to notice the t-shirts of others, leave alone of girls."

    Raj said, "I could not help but notice the t-shirt when I was helping fix her code". This was too much for Ram. He was sure Raj was making this up. They both used to talk like drunken guys, without any aim, on their way back to the hostels on such days.

    The next afternoon, they met at the college cafe. They had both missed the morning breakfast. After they were done eating hungrily, Ram asked, "Yesterday, were you saying you helped that girl?" Raj said, "Oh yeah, and I asked her name, and I asked where she was from and ...". Ram said, "OK, OK, we can make up stories later. How is your assignment going?" "It is fine, I have to."

    Ram said, "I am sure you are making this up. I know you and how you behave. I am very sure that you have not talked a single word with that girl till now." Raj said, tongue in cheek, "I know that you still think me to be a studious guy coming from an all-boys engineering college. I bet that I talked to that girl."

    Ram was enraged: "If you prove to me that you talked to that girl, I promise to be cleanly shaven for 1 month". Ram was quite infamous among his classmates for his unshaven face. "And if you cannot prove it within the next week, you will not shave for the next 1 month". Neither of them, nor any of their classmates, minded unshaven faces, as everyone in the school was busy with at least 5 courses a semester.

    The next day, they were discussing their assignment in the common lounge of the library. Ram just then noticed that the girl passed by. He acted as if he had not noticed her. Raj noticed the girl too. He stood up and called softly, "Hello Kusum". Ram was taken aback! "Hi!" said Kusum, coming to them. "Kusum, this is my close friend, Ram". "Hello Ram", said Kusum. "So, you are wearing a new t-shirt today, is it?" said Raj. "Yes", said the girl. "May I know what is behind?" Luckily for Ram, who was still in a state of shock, she was not wearing a coat over the t-shirt, which would have covered her back. She turned around. The t-shirt read "CS graduates are very boring". She turned back again and said, "My day is made, I told the truth at least to the faces of two nerds today". "See you both later. Bye", and she vanished. Raj winked at Ram.

    Next act.

    A year later, the wedding was in Calcutta in the east. Ram was one of the special invitees. Kusum was from the .......

    Bombay in the west, Calcutta in the east, Meerut in the north and Cochin in the extreme south west. Ram and Raj were in a southern city, but quite far from the city Kusum was originally from. This was a truly cross-Indian marriage, with all the different cultures. To satisfy both sets of parents, Raj and Kusum had two weddings. The major one was at Raj's home in the east. The minor one was in the ancestral home of the girl in the south west. The boy and girl, after the marriage, went and stayed for some time in the boy's parents' house in Meerut in the north and for some time with the girl's parents in Bombay in the west.

    "So, what was behind the two shirts?" Ram asked Raj. Well, offensive t-shirt 1 said "CS graduates are very ... boring" and offensive t-shirt 2 said "Am I cute? ... No Shit!"

    Ram smiled and said, "Thanks for letting me know. If I had been less busy when we were in the Institute, I would have understood how all this worked. At least I will be on the watch-out for friends of mine."

    Wednesday, September 07, 2005

    Unitary graph scheduling problem


    All pair shortest paths

    LP formulations of the computability/scheduling problem

    Motivation: We are given a matrix of size (n+d)*m whose entries are in {0,+1,-1}. There are many ways in which we can formulate a linear program with this matrix somehow acting as the constraints. Let us observe the different ways we can do so, for a close look at the linear programs may yield insights into the unitary graph scheduling problem.

  • Standard/state-of-the-art way (as in Darte-Vivien): Associate the variables of the LP problem with the columns of the matrix. We have m variables in the LP. This can be thought of as (i) associating variables with the edges of the hypergraph and also (ii) associating variables with the edge-vertices of the bipartite graph. If a hyperedge has a zero component, we throw it away before sending it to the next level.

  • Dual problem with n variables: What if we associate weights with the vertices of the RDG? Then we can somehow obtain the depth at which a node is disassociated from the rest of the graph. This is the matrix that Darte-Vivien use for scheduling at every level.

  • Dual problem with (n+d) variables: This has not been previously explored. We associate a variable with each of the (n+d) "vertices" of the bipartite graph.

  • Many constraints: What if we associate weights with the edges of the bipartite graph? There will be (n+d)*m values we have to find.

  • Using the decomposition tree: Assume that the decomposition tree is given to us. How do you find the schedule component for each of the variables at that depth?

    Bipartite matching: Tarjan's algorithm, the Hopcroft-Karp matching algorithm, and Vazirani's algorithm. What is the connection between bipartite matching and SURE scheduling?
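As a quick refresher on the augmenting-path idea behind all of these algorithms, here is a minimal bipartite matcher of my own (Kuhn's augmenting-path algorithm, simpler and slower than Hopcroft-Karp; only a sketch):

```python
def max_bipartite_matching(adj, n_right):
    # adj[u] = list of right vertices adjacent to left vertex u
    match_r = [-1] * n_right  # match_r[v] = left vertex matched to v, or -1

    def try_augment(u, seen):
        # Look for an augmenting path starting at left vertex u
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if match_r[v] == -1 or try_augment(match_r[v], seen):
                match_r[v] = u
                return True
        return False

    size = sum(try_augment(u, set()) for u in range(len(adj)))
    return size, match_r

# Left vertices 0,1,2 over right vertices 0,1: maximum matching has size 2
size, match_r = max_bipartite_matching([[0, 1], [0], [1]], 2)
print(size)  # 2
```

Hopcroft-Karp improves on this by finding a maximal set of shortest augmenting paths per phase, giving O(E * sqrt(V)) instead of O(V * E).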

    Hall's theorem and its variants from Khuller's notes. Hypergraph basics from Berge's book.

    What does a zero-weight cycle in the RDG mean in the hypergraph? What does it mean in the bipartite graph?

    What does the EDG of the hypergraph look like? What are the vertices, what are the edges?

    A zero-weight cycle in the EDG corresponds to the computability problem of the usual SURE. When does a "node" in the hypergraph-EDG have to wait for an infinite time before its computation can start?
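On small instances the zero-weight-cycle question can at least be checked by brute force. This is my own illustrative sketch, not the KMW or Darte-Vivien test (realistic instances need an LP-based check): enumerate simple cycles in the weighted dependence graph and test whether any has total weight equal to the zero vector.

```python
def has_zero_weight_cycle(edges, n):
    # edges: list of (u, v, w) with w a d-dimensional weight vector (tuple)
    # Brute-force DFS over simple cycles; fine only for tiny graphs.
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
    d = len(edges[0][2])

    def dfs(start, u, weight, visited):
        for v, w in adj[u]:
            total = tuple(a + b for a, b in zip(weight, w))
            if v == start and all(c == 0 for c in total):
                return True  # found a simple cycle of total weight zero
            if v not in visited and dfs(start, v, total, visited | {v}):
                return True
        return False

    return any(dfs(s, s, (0,) * d, {s}) for s in range(n))

# Weights (1,0) and (-1,0) cancel around the 2-cycle -> zero-weight cycle
print(has_zero_weight_cycle([(0, 1, (1, 0)), (1, 0, (-1, 0))], 2))  # True
```

With weights (1,0) and (0,1) instead, the cycle weight is (1,1) and the function returns False.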

    (taking a cue from KMW in their statement/proof of corollary 1) What if we think of the n vertices also as dimensions? Then the computability problem is a URE in n+d dimensions. How do you label the 'n' to be similar to the 't'? You may want to use this in proving some properties of the URE. Examples are lower/upper bounds for the schedule and lower/upper bounds for the memory allocation.

    When does the hyper-EDG have a node which has an infinite length path ending in it?

    If loops in the EDG are hindering you, forget them for a moment.

    It is clear that the usual EDG has the linearity implicitly embedded in the structure of the polyhedron. We are somehow destroying it by using the hypergraph representation.

  • Possible correspondence: All this is OK, but given a hyper-RDG, how do you reconstruct the EDG? Vertices: for each vertex of the hyper-RDG, we have a corresponding vertex at every point in F_n. Edges: in the hRDG, if vertices v1 and v2 participate in an edge h_e -- which will actually be used as a function, returning a set of vertices of the hypergraph: the V set (meaning v_h and v_t, exactly 2) and the W set, represented by w_i -- then there is an edge between vertices (v_h, z) and (v_t, z - z'), with z' being a d-dimensional vector where z'_i = 1 if d_i belongs to the set W and z'_i = 0 if it does not.

    What does incomputability mean in the hRDG? A vertex v_i of the hRDG is incomputable if there exist arbitrarily long paths ending in it.
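The correspondence above is easy to mechanize. A hypothetical sketch (function names are mine) that, given a hyperedge's W set, produces the offset vector z' and hence the induced EDG edge (v_h, z) -> (v_t, z - z'):

```python
def edge_offset(W, d):
    # z' in the correspondence: z'_i = 1 if dimension i is in W, else 0
    return tuple(1 if i in W else 0 for i in range(d))

def edg_edge(v_h, v_t, z, W):
    # EDG edge induced at lattice point z by a hyperedge with
    # vertex set {v_h, v_t} and dimension set W
    zp = edge_offset(W, len(z))
    return (v_h, z), (v_t, tuple(a - b for a, b in zip(z, zp)))

print(edge_offset({0, 2}, 3))                  # (1, 0, 1)
print(edg_edge("a", "b", (2, 2, 2), {0, 2}))   # (('a', (2, 2, 2)), ('b', (1, 2, 1)))
```

Iterating edg_edge over all hyperedges and all lattice points z in the domain reconstructs the (infinite, in general) EDG described in the bullet above.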
  • How hard is the Software-pipelining problem (If it is NPO, what is the best approximation algorithm possible)?
  • How hard is the SURE scheduling problem? Can we do faster for a special case?
  • How hard is the SARE scheduling problem?
    Is it possible to obtain a Theta((n+d)*m) multi-dimensional scheduling algorithm (or even faster)?
    The algorithm for maximum matching also -- like the max flow problem -- seems to use the idea of augmenting paths. Read this link.
    Ideas for Polynomial schedules (possibly better than linear schedules and are a kind of free schedules)
    If we divide the vertices of the hypergraph into actual-vertices and dependence-vertices, it is clear that, if an AV does not have a path of length 1 from a DV, then that AV can be computed independent of that dependence component. OTOH, if an AV does not have a path of length 1 to a DV, then the AV can be computed....

    Also, if an AV is not connected to a DV, then that component of the AV can be computed independent of that DV. If so, that vertex has a schedule that is a polynomial of better degree than the rest of the vertices. If no vertices exist which satisfy these conditions, then the best polynomial schedule has degree d.