Chapter 15 of Schrijver's book is titled "Further polynomiality results in linear programming". In that chapter, he covers some additional methods of solution for the LP problem, beyond the well-known results of Khachiyan, Karmarkar and others: namely Megiddo's result and Tardos's result. My post on Megiddo's result is this. The result by Tardos (Corollary 15.3a of the book) is significant as it isolates the class of LPs with {0,+1,-1} constraint matrices. It is on pages 195-199 of Schrijver's book; Megiddo's result follows on pages 199-204. The following is the statement we are interested in:
Corollary 15.3a: Tardos's result states that there exists a strongly polynomial time algorithm for rational LP problems with a {0,+1,-1} constraint matrix.
Intuition: The intuition for this corollary is pretty clear. Khachiyan's method is a polynomial time algorithm, but not a strongly polynomial time algorithm: its running time -- for a problem of the type Ax <= b -- depends not just on the size of the matrix A, meaning the number of rows and columns of A, but also on the number of bits used in encoding the matrix A. OTOH, when we restrict A to be a {0,+1,-1} matrix -- effectively operating in unary mode, where each entry takes a constant number of bits -- the encoding length of A is polynomial in its dimensions alone, so this source of non-strong-polynomiality disappears.
The other contribution of Tardos is to make the running time independent of both the size of b and the number of bits used in encoding b. This is an equally important contribution, as it reduces the problem complexity to that of a homogeneous system.
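A small illustration of the encoding-length point above (the functions and the bit-counting convention are my own, not Schrijver's or Tardos's): for a {0,+1,-1} matrix the encoding size depends only on the dimensions, so a bound polynomial in the encoding length of A is automatically polynomial in n and m.

```python
def bit_length(q: int) -> int:
    """Bits needed to encode the integer q (1 bit minimum, plus a sign bit)."""
    return max(abs(q).bit_length(), 1) + 1

def encoding_size(matrix) -> int:
    """Total bits needed to encode all entries of an integer matrix."""
    return sum(bit_length(a) for row in matrix for a in row)

# A 2x3 {0,+1,-1} matrix: its size depends only on the dimensions
# (2 bits per entry under this convention).
A_small = [[0, 1, -1], [1, -1, 0]]
# The same shape with huge entries: the size grows with the magnitudes.
A_big = [[0, 10**9, -(10**9)], [10**9, -(10**9), 0]]

assert encoding_size(A_small) == 2 * 3 * 2   # 12 bits
assert encoding_size(A_big) > encoding_size(A_small)
```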
Look up if Schrijver's new book has any more information on the same.
What does Schrijver's book say about hypergraphs?
Schrijver's new book also has a lot of information on these questions, esp. in chapters 21, 24 etc.
--
Are we searching for multicommodity flows?
Wednesday, September 21, 2005
Saturday, September 17, 2005
Srimad Bhagavata-anta
In a previous post, I had briefly compared Datta-Atreya and Suka. In this post, we will see how, in the 12th chapter of Srimad-Bhagavatha, Datta-Atreya teaches the essence of Advaita.
First an explanation on the title of the post. The interpretation of Upanishads being referred to as Vedanta is quite well known. The title Bhagavatha-anta can also be interpreted as the following: Suka and Vyasa have come to a conclusion in Bhagavatha. What follows in chapter 12 is the essence of their conclusion.
Srimad-Bhagavatha is also well known for its many discrete, and sometimes contradictory, schools of thought. More notes are here. As explained in the previous post, the conclusion of the Bhagavatha can be thought of as the highly theistic chapters 10 and 11, where the story of Lord Krishna is told. In chapter 12, however, there is the advaitic conclusion of the Bhagavatha, as explained to Uddhava by Lord Krishna himself. The dialogue seems to happen at the time when Lord Krishna is about to leave his material body.
Avadhutha refers to how he learnt all he wanted to from the world itself. He cites 24 examples of his learnings from the world (I am sure this 24 has a metaphorical meaning, like the number of letters in the Gayathri-Mantra). He gives simple examples of teachings which emphasize detachment, etc. This is a post about the 24 teachings Avadhuta explained to Yadu.
Ray Miller's Memoirs

The link to a function at UMD to commemorate the career of Ray Miller is this. The following is the introduction to the function:
Prof. Ray Miller, who retired from our department in 2002, is truly one of the pioneers of the field of computer science. He has had three successful careers – the first at IBM, where he helped build one of the world’s foremost computer science research programs and worked with many of the other leading theoretical computer scientists of his generation – people like Dick Karp, Shmuel Winograd and Nick Pippenger. Ray then had a second career at Georgia Tech,
...
This is a picture of Karp and Miller. The following "relevant part" is excerpted from pages 25-26 of memoirs of Ray Miller (a PDF link):
Upon returning to IBM Research from Cal Tech in the summer of 1963 I was ready to look into some new areas of research. Thus, it was quite appealing when Herman Goldstine, the Director of the Mathematical Sciences Department, suggested to Dick Karp, Sam Winograd, Larry Horwitz and me that looking into parallel computation might be of some interest. Even though this was long before parallel computation was on the minds of many, there was already a clear indication that parallelism could be used to speed up some special purpose computational tasks. IBM had a product developed for the oil industry called the "convolver box". This was a special purpose device that could be attached to the main bus of an IBM computer to do convolutions very rapidly. Convolution was used extensively in oil exploration for the analysis of soundings taken over land and sea to show the deep structure of the earth layers and expose potential locations where oil might be found. With this suggestion we started to look at parallel computation. We designed approaches, algorithms and designs, for many different special purpose computations: parenthesis checking, macro-instruction execution, the Cooley-Tukey convolution algorithm, and others. Our design for the Cooley-Tukey algorithm even rated a footnote in their original paper "An Algorithm for the Machine Computation of Complex Fourier Series" in Mathematics of Computation, April 1965. We published a paper in JACM on uniform recurrence equations that described how parallelism could be used to speed up the numerical computation of systems of differential equations. After a while, the designing of parallel algorithms for these various tasks seemed to be getting quite repetitive to Dick Karp and me, and this led us to realize that there was a hidden approach that we seemed to be using over and over. 
We formulated this underlying structure, which we called "computation graphs", and published the paper, "Properties of a Model of Parallel Computations: Determinacy, Termination, Queueing" in the SIAM Journal of November 1966. In 1966 this was the only SIAM Journal, but now they publish a number of journals in specialized areas. Computation graphs proved to be quite effective for designing inner loops of computations, but were limited in not having more general computation structures, such as conditional branching, that were needed for more general computations. This led us to a more general model that we termed "Parallel Program Schemata". We wrote a long paper about these schemata which was finally published in JCSS in May 1969. I continued to work on parallelism even though Dick Karp decided to leave IBM to become a faculty member at Berkeley with a joint appointment in computer science and operations research. I thought it was quite surprising that Dick left IBM at that time because within only a few more months he would have had 10 years of service with IBM and thus been vested for retirement benefits. I wrote another paper on parallel program schemata that discussed some undecidability results based on a schema not possessing the property that we called "repetition freeness", and this paper was published in the first issue of the SIAM Journal on Computing in March 1972.
Saturday, September 10, 2005
Tell a story
It was the campus of the top graduate school of India. Raj and Ram were walking together, back to their hostel rooms. They were good friends and had endured the hard first semester together. They thought they knew everything about each other.
The time now was 2 am. They had just submitted their programming assignment at 11:59 pm of the previous day, before the midnight deadline. Then they started their next assignment, which was due this weekend.
They were both tired. A cold breeze was blowing at that time in the beautiful campus of Bangalore. They were in the second semester of their stay. They had both had to pass an All India entrance test and get top ranks to get into the computer science department of the top graduate school in India.
Suddenly Raj said, "I saw that girl again today; she was wearing the same offensive t-shirt 1". Though Ram was too sleepy to respond, he asked, "So you named it t-shirt 1?". They were talking about a girl who used to sit next to them sometimes in the common labs.
The common labs used to have students from departments other than computer science too. Not many computer science students wanted to go to that lab. One reason was that the machines were slow. It was also a kind of prestige question for CS students to sit in a common lab with students of other departments. Some of these students were noisy and came to the labs only to have fun. Most importantly, many of the machines there were running Microsoft aka MACRO-SLOW. The CS labs, on the other hand, were quiet, ran the revolutionary OS Linux, and most of the students there were sober.
This night, both Ram and Raj had started late. So their own labs were full and they had to sit in the common lab.
Ram asked, "What were you saying about the t-shirt?" Raj responded, "I have named the t-shirt 'Do you think that ' as t-shirt 1. Also I have named 'CS graduates are .... '." Ram said, "I thought you were too busy doing the assignment to notice the t-shirts of others, leave alone of girls."
Raj said, "I could not help but notice the t-shirt when I was helping fix her code". This was too much for Ram. He was sure Raj was making this up. On such days, on their way back to the hostels, they both used to talk like drunken guys, without any aim.
The next afternoon, they met at the college cafe. They had both missed the morning breakfast. After they were done eating hungrily, Ram asked, "Yesterday, were you saying you helped that girl?" Raj said, "Oh yeah, and I asked her name, and I asked where she was from and ...". Ram said, "OK, OK, we can make up stories later. How is your assignment going?" "It is fine, I have to..."
Ram said, "I am sure you are making this up. I know you and how you behave. I am very sure that you have not talked a single word with that girl till now." Raj said, with his tongue in cheek, "I know that you still think me to be a studious guy from an all-boys engineering college. I bet that I talked to that girl."
Ram got enraged: "If you prove to me that you talked to that girl, I promise to be clean-shaven for 1 month". Ram was quite infamous among his classmates for his unshaven face. "And, if you cannot prove it within the next week, you will not shave for the next 1 month". Neither of them, nor any of their classmates, minded unshaven faces, as everyone in the school was busy with at least 5 courses in a semester.
The next day, they were discussing their assignment in the common lounge of the library. Ram noticed that the girl passed by, but acted as if he had not. Raj too noticed the girl. He stood up and called softly, "Hello Kusum". Ram was taken aback! "Hi!" said Kusum, coming to them. "Kusum, this is my close friend, Ram". "Hello Ram", said Kusum. "So, you are wearing a new t-shirt today, is it?" said Raj. "Yes", said the girl. "May I know what is behind?" Luckily for Ram, who was still in a state of shock, she was not wearing a coat over the t-shirt to cover her back. She turned around. The t-shirt read "CS graduates are very boring". She turned back again and said, "My day is made. At least I told the truth to the faces of two nerds today. See you both later. Bye", and she vanished. Raj winked at Ram.
=============
Next act.
=============
A year later, the wedding was in Calcutta in the east. Ram was one of the special invitees. Kusum was from the .......
Bombay in the west, Calcutta in the east, Meerut in the north and Cochin in the extreme south-west. Ram and Raj were in a southern city, but quite far from the city Kusum was originally from. This was a truly cross-Indian marriage, with all the different cultures. To satisfy both sets of parents, Raj and Kusum had two weddings. The major one was at Raj's home in the east. The minor one was in the ancestral home of the girl in the south-west. After the marriage, the boy and girl stayed for some time at the boy's parents' house in Meerut in the north, and for some time with the girl's parents in Bombay in the west.
"So, what was behind the two t-shirts?" Ram asked Raj. Well, offensive t-shirt 1 said "CS graduates are very ... boring" and offensive t-shirt 2 said "Am I cute? ... No Shit!"
Ram smiled and said, "Thanks for letting me know. If I had been less busy when we were in the Institute, I would have understood how all this worked. At least, I will be on the watch out for friends of mine."
Wednesday, September 07, 2005
Unitary graph scheduling problem
Approaches:
All pair shortest paths
max-flow?
--
LP formulations of the computability/scheduling problem
Motivation: Suppose we have a matrix of size (n+d)*m whose entries come from {0,+1,-1}. There are many ways in which we can formulate a linear program with this matrix somehow acting as the constraints. Let us observe the different ways we can do so, for a close enough look at the linear programs may yield insights into the unitary graph scheduling problem.
Standard/state-of-the-art way (as in Darte-Vivien): Associate the variables of the LP problem with the columns of the matrix. We have m variables in the LP program. This can be thought of as (i) associating variables with the edges of the hypergraph, and also (ii) associating variables with the edge-vertices of the bipartite graph. If a hyperedge has a zero component, we throw it away before sending it to the next level.
Dual problem with n variables: What if we associate weights with the vertices of the RDG? Then we can somehow obtain the depth at which a node is disassociated from the rest of the graph. This is the matrix that Darte-Vivien use for scheduling at every level.
Dual problem with (n+d) variables: This has not been previously explored. We associate a variable with each of the (n+d) "vertices" of the bipartite graph.
Many constraints: What if we associate weights with the edges of the bipartite graph? There will be (n+d)*m elements we have to find.
Using the decomposition tree: Assume that the decomposition tree is given to us. How do you find the schedule component for each of the variables at that depth?
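The formulations above differ only in what the LP variables index: the m columns (hyperedges), the n RDG vertices, the (n+d) bipartite vertices, or all (n+d)*m entries. A small sketch of the first, column-indexed view (the shapes, names and the example matrix are mine): one variable per column of the {0,+1,-1} matrix M, one linear constraint per row.

```python
def row_constraints(M, x):
    """Evaluate M x: the value each of the n+d row-constraints sees when
    the m column-variables take the values x."""
    assert all(len(row) == len(x) for row in M)
    return [sum(a * xi for a, xi in zip(row, x)) for row in M]

# A (2+1) x 3 incidence-style {0,+1,-1} matrix and a candidate assignment
# to the three column variables; entry i of the result is what constraint i
# would be compared against its right-hand side b_i.
M = [[1, -1, 0],
     [0, 1, -1],
     [1, 0, 1]]
assert row_constraints(M, [2, 1, 1]) == [1, 0, 3]
```

A feasibility check of a candidate point against any right-hand side b then reduces to comparing this vector with b componentwise.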
--
Bipartite matching: Tarjan's algorithm, Hopcroft-Karp's matching algorithm, and Vazirani's algorithm. What is the connection between bipartite matching and SURE scheduling?
Hall's theorem and its variants from Khuller's notes. Hypergraph basics from Berge's book.
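As a baseline for the matching algorithms mentioned above, here is a minimal augmenting-path bipartite matching (Kuhn's algorithm); Hopcroft-Karp improves on it by augmenting along many shortest paths per phase. The graph format is an assumption of mine: adj[u] lists the right-side vertices adjacent to left vertex u.

```python
def max_bipartite_matching(adj, n_right):
    """Maximum matching size in a bipartite graph via augmenting paths."""
    match_r = [-1] * n_right          # match_r[v] = left vertex matched to v

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                # v is free, or its current partner can be rematched elsewhere
                if match_r[v] == -1 or try_augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    matching = 0
    for u in range(len(adj)):
        if try_augment(u, [False] * n_right):
            matching += 1
    return matching

# A 3x3 example with a perfect matching.
assert max_bipartite_matching([[0, 1], [0], [1, 2]], 3) == 3
```

This is O(V*E); Hopcroft-Karp brings it down to O(E * sqrt(V)).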
--
What does a zero weight cycle in the RDG mean in the hypergraph? What does it mean in the bipartite graph?
What does the EDG of the hypergraph look like? What are the vertices, and what are the edges?
A zero weight cycle in the EDG corresponds to the computability problem of the usual SURE. When does a "node" in the hypergraph-EDG have to wait for infinite time before its computation starts?
(Taking a cue from KMW in their statement/proof of corollary 1.) What if we think of the n vertices also as dimensions? Then the computability problem is a URE in n+d dimensions. How do you label the 'n' to be similar to the 't'? You may want to use this in proving some properties of the URE. Examples are lower/upper bounds for the schedule, and lower/upper bounds for the memory allocation.
When does the hyper-EDG have a node which has an infinite length path ending in it?
If loops in the EDG are hindering you, forget them for a moment.
It is clear that the usual EDG has the linearity implicitly embedded in the structure of the polyhedron. We are somehow destroying it by using the hypergraph representation.
Possible correspondence: All this is OK, but, given a hyper-RDG, how do you reconstruct the EDG? Vertices of the hyper-RDG: for each vertex of the hyper-RDG, we have a corresponding vertex at every point in F_n. Edges of the hyper-RDG: in the hRDG, if vertices v1 and v2 participate in an edge h_e -- which will actually be used as a function returning a set of vertices of the hypergraph: the V set (meaning v_h and v_t, exactly 2) and the W set, represented by the w_i -- then there is an edge between vertices (v_h,z) and (v_t,z-z'), with z' being a d-dimensional vector such that z'_i = 1 if d_i belongs to the set W and z'_i = 0 if d_i does not belong to W.
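The reconstruction rule above can be transcribed directly (a sketch under the notation of the note; the data layout is mine): each hyperedge carries a head vertex v_h, a tail vertex v_t, and a set W of dependence indices, and at every point z it induces the EDG edge (v_h, z) -> (v_t, z - z'), where z' is the 0/1 indicator vector of W.

```python
def edg_edges(hyperedges, d, points):
    """Generate EDG edges from a hyper-RDG.
    hyperedges: list of (v_h, v_t, W), with W a set of indices in 0..d-1.
    points: iterable of d-dimensional integer tuples z."""
    edges = []
    for v_h, v_t, W in hyperedges:
        # z' is the indicator vector of W.
        z_prime = tuple(1 if i in W else 0 for i in range(d))
        for z in points:
            target = tuple(z[i] - z_prime[i] for i in range(d))
            edges.append(((v_h, z), (v_t, target)))
    return edges

# One hyperedge with W = {0} in d = 2 dimensions, at the single point (1, 1):
# it yields the edge (v_h,(1,1)) -> (v_t,(0,1)).
assert edg_edges([("a", "b", {0})], 2, [(1, 1)]) == [(("a", (1, 1)), ("b", (0, 1)))]
```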
What does incomputability mean in the hRDG? A vertex v_i of the hRDG is not computable if there exists a path of infinite length ending in it.
--
How hard is the Software-pipelining problem (If it is NPO, what is the best approximation algorithm possible)?
How hard is the SURE scheduling problem? Can we do faster for a special case?
How hard is the SARE scheduling problem?
--
Is it possible to obtain a Theta((n+d)*m) multi-dimensional scheduling algorithm (or even faster)?
--
The algorithm for maximum matching also -- like the max flow problem -- seems to use the idea of augmenting paths. Read this link.
--
Ideas for polynomial schedules (possibly better than linear schedules, and a kind of free schedules)
If we divide the vertices of the hypergraph into actual-vertices and dependence-vertices, it is clear that, if an AV does not have a path of length 1 from one DV, then that AV can be computed independently of that dependence component. OTOH, if an AV does not have a path of length 1 to one DV, then the AV can be computed....
Also, if an AV is not connected to a DV, then that component of the AV can be computed independently of that DV. If so, that vertex has a schedule that is a polynomial of better degree than the rest of the vertices. If no vertices exist which satisfy these conditions, then the best polynomial schedule has degree d.
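The AV/DV criterion above is a reachability question, and can be sketched as follows (the adjacency-dict format and names are mine): an actual-vertex that no path from a given dependence-vertex reaches is independent of that dependence component.

```python
from collections import deque

def reachable_from(adj, source):
    """Vertices reachable from source by a path of length >= 1 (BFS)."""
    seen, queue = set(), deque(adj.get(source, ()))
    while queue:
        v = queue.popleft()
        if v not in seen:
            seen.add(v)
            queue.extend(adj.get(v, ()))
    return seen

def independent_avs(adj, avs, dv):
    """Actual-vertices that no path from dependence-vertex dv can reach."""
    return set(avs) - reachable_from(adj, dv)

# dv reaches a1 but not a2, so a2 is independent of that dependence component.
adj = {"dv": ["a1"], "a1": [], "a2": ["a1"]}
assert independent_avs(adj, {"a1", "a2"}, "dv") == {"a2"}
```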
Wednesday, August 31, 2005
Berge: Theory of Graphs
It seems that the concept of the incidence matrix originates with Kirchhoff.
Representation of a graph: A graph can be represented as a pair (X, Gamma) or (X, U). Gamma is a function from X to X; this encodes multi-graphs and infinite graphs (if we allow Gamma to be a multi-function). U is a subset of the possible arc set. If U is a symmetric set, then the graph is undirected.
Chapter 3
Core concept: bounded is different from finite, and finite is different from infinite.
If a quantity is bounded, it is certainly finite.
A family of quantities can be unbounded while every member of it is still finite: "finite but unbounded".
A good example is the 1-d graph which starts at the origin and extends unboundedly to the right, where each node has edges to all the nodes to its left. The outdegree of node n is n, so over the whole graph the outdegrees are unbounded. Still, each individual outdegree is finite!
Finite Graphs: A graph is called finite if |X| is finite.
[If the vertices of the graph can be put in one-to-one correspondence with the natural numbers, then the graph is not finite; it is an infinite graph. Also, there seem to be many types of infinite graphs. However, Berge talks about certain types of infinite graphs, meaning ones that satisfy some conditions, esp. in the early theorems, so that the results on finite graphs can be extended to those types of infinite graphs.]
Gamma-finite: A graph is Gamma-finite if |Gamma(x)| < inf for all x in X. (This means that every vertex has a finite number of edges out of it.)
Gamma^-1-finite: A similar definition with Gamma^-1 rather than Gamma (Gamma^-1 is the inverse of the function Gamma).
A graph is called locally finite if it is both Gamma-finite and Gamma^-1-finite. Also, a finite graph is locally finite.
Gamma-bounded: A graph is said to be Gamma-bounded if there exists an m such that |Gamma(x)| <= m for all x in X.
Progressively finite: A vertex v is progressively finite if no path of infinite length starts at v. A graph is progressively finite if it is progressively finite at each of its vertices. [See a note below about progressively finite and having no circuits.]
Progressively bounded: A graph is progressively bounded at vertex v if a number m exists such that the length of all paths starting from v is less than or equal to m. A graph is progressively bounded if it is progressively bounded at each of its points.
The notions regressively finite and regressively bounded are defined similarly to progressively finite and progressively bounded. The latter deal with the graph (X,Gamma), while the former deal with the graph (X,Gamma^-1).
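Berge's (X, Gamma) representation can be sketched directly for finite X (the function names are mine): Gamma maps each vertex to the set of its successors, and the finiteness/boundedness notions above become simple checks on |Gamma(x)|.

```python
def is_gamma_finite(X, Gamma):
    """Every vertex has finitely many successors (automatic for finite X)."""
    return all(len(Gamma(x)) < float("inf") for x in X)

def is_gamma_bounded(X, Gamma, m):
    """The single bound m dominates |Gamma(x)| for all x in X."""
    return all(len(Gamma(x)) <= m for x in X)

# The finite prefix {0..4} of the 1-d example above: node n points to all
# nodes to its left, so its outdegree is n. Every outdegree is finite, and
# on this finite prefix the graph is Gamma-bounded by 4; on the full
# infinite graph no single bound m would work.
X = range(5)
Gamma = lambda n: set(range(n))
assert is_gamma_finite(X, Gamma)
assert is_gamma_bounded(X, Gamma, 4)
assert not is_gamma_bounded(X, Gamma, 3)
```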
Relevant examples
RDG: An RDG is a finite graph, so it is Gamma-finite and Gamma^-1-finite. It is also Gamma-bounded and Gamma^-1-bounded. It usually has cycles, as we deal with a strongly connected component of the RDG. So, it is not progressively bounded; neither is it progressively finite. It is also usually not regressively bounded and not regressively finite.
EDG of an RDG defined across a finite domain: Such a graph is a finite graph. It is Gamma-finite and also Gamma^-1-finite. It is also Gamma-bounded and Gamma^-1-bounded. It is progressively finite if its associated SURE is computable and there exists a schedule. It is progressively bounded if the SURE has a bounded schedule. What about regressively bounded and regressively finite?
[Actually, the graph-theoretic definitions precede the definitions of computability and schedules. Here we will allow circular definitions.]
EDG of a RDG defined across the entire positive orthant: Such an EDG is an infinite graph. It is Gamma-finite and also Gamma^-1 finite. It is also Gamma-bounded and Gamma^-1 bounded. It is not progressively finite and so is not progressively bounded. It is also not regressively finite and so is not regressively bounded.
EDG of a RDG of a SARE defined over a finite domain: Such a graph is a finite graph. It is Gamma-finite, Gamma-bounded and so Gamma^-1-finite and Gamma^-1-bounded. It is progressively bounded, and progressively finite. It is regressively bounded and regressively finite.
EDG of a RDG of a SARE defined across the entire positive orthant: Such a graph is not finite. It is not Gamma-bounded. Neither is it Gamma^-1-bounded. It is Gamma-finite and Gamma^-1-finite.
I do not know the computability conditions for either type of SARE, so no statements about the progressively.* and regressively.* properties for now.
Theorems
Theorem 1: If a graph is finite, the properties progressively finite, progressively bounded and 'without circuits' are equivalent.
[Note the implication of 'without circuits': If a graph is finite and it is progressively finite, it cannot have any circuits, as a circuit would lead to paths of infinite length beginning at any of the vertices participating in it. A similar statement can be made about the progressive boundedness of a finite graph.]
Proof of theorem 1: Direct, from applying the definitions to a finite graph.
[This proof is kind of easy because, for a finite graph, |X| < inf. Also, if the graph does not have any circuits, there exists a bound on the maximum length of a path from each vertex. So, the graph is progressively bounded and is trivially progressively finite. As Berge himself says, infinite graphs are more interesting.]
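A minimal Python sketch of theorem 1 (the 4-vertex graph and the function names are hypothetical, not Berge's): on a finite graph without circuits, a memoized longest-path computation terminates and yields one bound valid for every vertex, so the graph is progressively bounded (and hence progressively finite).

```python
from functools import lru_cache

# a finite graph without circuits, as a successor map (Gamma)
gamma = {0: [1, 2], 1: [3], 2: [3], 3: []}

@lru_cache(maxsize=None)
def longest_path_from(x):
    # 0 for a vertex with no successors, else 1 + best over successors
    return max((1 + longest_path_from(y) for y in gamma[x]), default=0)

# a single bound works for all vertices: progressively bounded
bound = max(longest_path_from(x) for x in gamma)
print(bound)  # 2
```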
Questions on the wording of theorem 1: Theorem 1 talks about the equivalence of 'progressively finite' and 'without circuits' for finite graphs. When are the properties not equivalent for infinite graphs?
[which of the 'progressively finite', 'progressively bounded' and 'without circuits' implies the other? It seems that 'progressively finite' combined with 'the graph is finite' is enough. The other two properties seem a little too strong.]
Answer: My current answer is the following: It does not seem that we can give a statement about all paths of an infinite graph. Why not?
Answer 2: Is one of the conditions 'progressively finite' necessary for the other? And is the other a sufficient condition? Is it that the necessary condition and the sufficient condition are satisfied by finite graphs and only one of them is satisfied by infinite graphs?
Meaning of infinite path: What does an infinite path mean? When is a path infinite and when is it finite? Suppose we had just three vertices and a circuit along them, how many paths are there? Suppose we have an infinite number of vertices and the graph does not have loops and circuits. Does the graph have any infinite paths?? NO????? Are we getting confused between an infinite path [meaning a path of infinite length] and an infinite number of paths?
Infinite sets: Here are two defs of infinite sets. 1 , 2.
A circuit induces an infinite number of paths on each of the vertices participating in it. Each of those infinitely many paths is of finite length. An infinite graph (for example the 1-d graph) has an infinite number of paths, each of finite length. THEN WHAT IS AN INFINITE PATH? A path whose length is neither bounded by any integer nor specifiable by any finite number. What is such an example?
Theorem 2: If a graph is progressively finite and Gamma-finite, we have |Gamma^hat(x)| < inf for all x in X.
The theorem is kind of disturbing because, even for a graph with an infinite number of nodes, it makes a statement about the cardinality of the set Gamma^hat: the cardinality is a number which is possibly unbounded over the vertices; however, at each vertex it is a finite number. See the definition of Gamma^hat.
Proof of theorem 2: Suppose the graph is progressively finite and Gamma-finite and that we had a vertex x such that |Gamma^hat(x)| = inf. As the graph is Gamma-finite, x should be adjacent to a vertex x1 such that |Gamma^hat(x1)| = inf. By similar reasoning, x1 must be adjacent to x2 such that |Gamma^hat(x2)| = inf, etc. The path [x,x1,x2,...] is therefore of infinite length, which contradicts that the graph is progressively finite.
Proof for necessity of theorem 1's premise: The method followed is proof by contradiction. The technique is the following: The theorem statement is of the form A => (B => C), with A being 'the graph is Gamma-finite', B being 'the graph is progressively finite', and C being 'for all x in X, |Gamma^hat(x)| < inf'. B => C is equivalent to NOT(B) OR C. We assume its negation is true, i.e., B AND NOT(C) is true, and derive a contradiction with A. The proof method is: B AND NOT(C) means 'the graph is progressively finite' and 'there exists a vertex x (in X) such that |Gamma^hat(x)| = inf'. To get: a contradiction with 'Gamma-finiteness of any vertex of the graph'.
Let v be the vertex such that |Gamma^hat(v)| = inf. Let k be the number of paths out of v [objective: k is infinite]. The graph is progressively finite, meaning 'there exist no paths of infinite length' => all paths out of v are of finite length => there exists a path of maximum length (this is because, in a set of numbers of which infinity is not a member, we can always select a maximum). Then k*length_of_the_max_length_path is an upper bound on the number of vertices reachable from v. By hypothesis, this number is infinite.
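Theorem 2 can be exercised on a toy infinite graph (my own example, assuming Gamma^hat(x) means x together with everything reachable from x): with gamma(n) = {n-1} for n > 0, the graph on the natural numbers is Gamma-finite and progressively finite, so a breadth-first closure from any x terminates with a finite Gamma^hat(x), even though the vertex set is infinite.

```python
from collections import deque

def gamma(n):
    # each vertex has finitely many successors: the graph is Gamma-finite
    return [n - 1] if n > 0 else []

def gamma_hat(x):
    # breadth-first closure: x together with all vertices reachable from x
    seen, queue = {x}, deque([x])
    while queue:
        for w in gamma(queue.popleft()):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen

print(len(gamma_hat(5)))  # 6 -- finite, though the vertex set is infinite
```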
Corollary 1: A Gamma-finite graph which is progressively finite at a vertex x0 is also progressively bounded at x0.
[The real punch line of this benign looking proof is applying the absolutely harmless looking theorem 1 to the subgraph induced by Gamma^hat(x0), which happens to be finite.]
Proof of Corollary 1: The subgraph determined by the set Gamma^hat(x0) is finite! So, applying theorem 1 to this subgraph, the terms 'progressively finite', 'progressively bounded' and 'without circuits' are equivalent.
[note the equivalence of 'without circuits' and 'cardinality of Gamma^hat(x)' as implied by the statement of this theorem.]
Fillable holes in the proof of Corollary 1: from G to G' and back again. How can we make a statement about the graph G when all we are talking about is the subgraph G' induced by Gamma^hat(x0)? Also, when and how are theorem 2 and theorem 1 used? At what particular locations? How is the result for G' extensible to G? Remember, we are talking about two graphs here, G and G', and about two theorems, 1 and 2. The statement of corollary 1, from the outside, is more general, but this needs to be taken care of.
Proof by contradiction: If
Representation of a graph: A graph can be represented as a pair (X,Gamma) or (X,U). Gamma is a function from X to X; if we allow Gamma to be a multi-valued function, this also encodes multigraphs and infinite graphs. U is a subset of the possible arc set. If U is a symmetric set, then the graph is undirected.
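A small Python sketch of the two representations (the 3-vertex graph is hypothetical): each of (X,U) and (X,Gamma) determines the other.

```python
X = {1, 2, 3}
U = {(1, 2), (2, 1), (2, 3), (3, 1)}   # a set of arcs

# from (X, U) to (X, Gamma): Gamma(x) is the set of successors of x
Gamma = {x: {y for (a, y) in U if a == x} for x in X}

# and back from (X, Gamma) to (X, U)
U_back = {(x, y) for x in X for y in Gamma[x]}

print(Gamma[2])      # {1, 3}
print(U_back == U)   # True
```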
Chapter 3
Core concept: bounded is different from finite. Finite is different from infinite.
If a set has bounded number of elements, its cardinality is finite and not infinite.
If a set has unbounded number of elements, its cardinality could be finite, or infinite.
If a set has a finite but unbounded number of elements:
A good example is the 1-d graph which starts at the origin and extends unboundedly towards the right. Each node on the graph has edges to its immediate left neighbour and all the nodes to its left. For such a graph, the outdegree over the vertices is unbounded. Still, at each given vertex it is finite!!!
Also, a set could have unbounded number of elements and still be finite.
[If there exists a bijection that maps the vertices of the graph onto the natural numbers, then the graph is not finite; further, it is an infinite graph. Also, there seem to be many types of infinite graphs. However, Berge talks about certain types of infinite graphs, meaning ones that satisfy some conditions (esp. in the early theorems), so that the results for finite graphs can be extended to those types of infinite graphs.]
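The 1-d example above can be encoded directly (a Python sketch; the encoding is my own): every vertex has a finite outdegree, yet no single bound covers all vertices.

```python
def gamma(i):
    # vertex i points to every node strictly to its left
    return list(range(i))

# the outdegree at each given vertex is finite...
print(len(gamma(3)))    # 3
# ...but over all vertices it is unbounded
print(len(gamma(100)))  # 100
```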
Saturday, August 27, 2005
Index Set Splitting of PUREs
(We use the notation of Saouter-Quinton-1203)
PURE: a Parametrized Uniform Recurrence Equation.
Index set splitting is a transformation that finds out if splitting a system of equations into individual sub-components can result in a set of SUREs whose scheduling can result in a better schedule.
What can be done better for PUNREs (Parametrized Unitarized Recurrence Equations).
Simple example for index set splitting:
Let the domain be 1 <= i,j <= N (a bounded square in the positive quadrant)
i <= j, X[i,j] = f(X[i,j-1])
i > j, X[i,j] = f(X[i-1,j])
The system can be broken down into two sub-domains
i <= j and i > j. Each of the subdomains has a 1-d schedule.
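As a sanity check (Python; the schedule t(i,j) = i + j is my own choice, not from the post), one can verify that a 1-d affine schedule respects the single dependence of each subdomain:

```python
N = 6

def schedule(i, j):
    # a 1-d affine schedule, valid on both subdomains
    return i + j

def source(i, j):
    # X[i,j] = f(X[i,j-1]) when i <= j, and X[i,j] = f(X[i-1,j]) when i > j
    return (i, j - 1) if i <= j else (i - 1, j)

# valid iff every point is computed strictly after its source
# (sources outside the domain are inputs; the check holds for them too)
ok = all(schedule(*source(i, j)) < schedule(i, j)
         for i in range(1, N + 1) for j in range(1, N + 1))
print(ok)  # True
```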
Saturday, August 20, 2005
Notes on KMW and DV: chapter4
Notation
Notation for graphs: KMW call what we now know as the RDG a dependence graph. KMW do not give a name to the EDG. KMW denote a path by pi. It is used to define cycles and simple cycles. Its length could be inf if the path is infinite; cycles and simple cycles are defined only for paths whose length is not inf. (We have to be a little careful about undefined and inf here.)
Other notations: KMW use the functions T, S, F, F_n (they are different) and phi_s. S is any schedule. T is the free schedule, the fastest possible schedule. F is defined to test for the presence of the free schedule. F_n is the positive orthant in the n-dimensional Euclidean space. In the book, the notations are: T for any schedule, T_f for the free schedule, F defined for the presence of the free schedule, and F_n for the positive orthant. phi_s (see later for a proper definition and a possible intuition).
More notices from the notation department: Gamma and EDG in DV and KMW are (slightly) different: The Gamma that KMW define is an EDG with dependence edges. The EDG that Darte-Vivien define has edges that mean data flows. The difference in definitions changes simple things, like whether a vertex (k,p) of the EDG is the source or the sink of the longest path. Sticking to the notations in the book seems to be useful.
RDG having dependence edges/data-flow edges
The equations being defined by a_j = ... a_k(z - w) ... or a_j = ... a_k(z+w)
Relation between Brent's theorem (1974) and KMW's statement of computability:
Computability
Conditions for computability: KMW give both the necessary and the sufficient conditions for computability of a SURE from its RDG. The intuitive definition: a SURE is computable if there exists a schedule for it. In other words, a SURE is computable if every vertex in its EDG is computable.
Computability of a SURE over a bounded domain: Since we usually deal with bounded domains, the computability problem boils down to testing for acyclicity of the EDG. This works, but it is very costly and input size dependent (ala Ford-Fulkerson algorithm for maxflow). Further, this method cannot be applied to unbounded domains.
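The bounded-domain test can be sketched as follows (Python; the one-equation SURE and its dependence vectors are assumed for illustration): unroll the RDG into the EDG over the domain and test the EDG for acyclicity with Kahn's algorithm.

```python
from itertools import product

N = 4
W = [(1, 0), (0, 1)]  # uniform dependences: X[p] uses X[p - w]
domain = set(product(range(1, N + 1), repeat=2))

# EDG edges follow the data flow: from X[p - w] to X[p]
succ = {p: [] for p in domain}
indeg = {p: 0 for p in domain}
for (i, j) in domain:
    for (a, b) in W:
        q = (i - a, j - b)
        if q in domain:
            succ[q].append((i, j))
            indeg[(i, j)] += 1

# Kahn's algorithm: the EDG is acyclic iff every vertex eventually gets popped
ready = [p for p in domain if indeg[p] == 0]
popped = 0
while ready:
    for q in succ[ready.pop()]:
        indeg[q] -= 1
        if indeg[q] == 0:
            ready.append(q)
    popped += 1
print(popped == len(domain))  # True: the EDG is acyclic, the SURE computable
```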
A note from Quinton-Saouter-1203 (QS): When the domain is allowed to be unbounded, the computability question for the SURE is (in general) undecidable. This result was proved by Joinnault. Most realistic algorithms have bounded domains. QS say that the result may not be that useful and that the interesting cases are the computability questions of SUREs defined over domains of bounded size. Also, QS give the condition for incomputability of Parametrized Uniform Recurrence Equations (PUREs) and PAREs. (See the nice figures at the end of the QS paper for PUREs which are computable and which are not, the difference being a change in the value of a parameter.)
EDG: Assume we use the Darte-Vivien's notation of edges in the EDG meaning data flow. An EDG is computable iff for all points (i,p) in the EDG, all paths ending at (i,p) are of finite length.
Size of domain: unbounded and the entire positive orthant: Darte-Vivien (chapter 4, theorem 17) give the condition for computability of a SURE defined over the entire positive orthant (which is an unbounded domain). It seems that the notion of unboundedness is limited to equations defined over the positive orthant (not the entire space: I do not know why, nor its implications). If a SURE is defined over the unbounded domain (meaning the positive orthant), then it is computable iff there is no nonpositive-weight cycle in its RDG G.
Size of domain is bounded: Darte-Vivien (chapter 4, theorem 18) give the relationship between computability and zero-weight cycles for SUREs defined over bounded domains. Basically, they show that a SURE defined over a bounded domain is computable iff G has no cycle of zero weight. The phrase "All systems of uniform recurrence equations defined from a RDG G" makes more sense once we understand that the definition of a RDG is (quite) independent of the domain of the underlying SURE (the usual definition of a RDG does not encode the domains of the variables). So a RDG can induce many SUREs, each in a bounded domain.
Fat domains: The computability of a SURE defined over bounded domain makes sense iff the domain is sufficiently fat. Read more about this.
Schedule
Schedule: A valid schedule respects the dependences of the SURE. If a SURE does not have such a schedule, then it is said to be incomputable. Examples: a node in the EDG that depends on itself, or a node in the EDG at the end of an infinite path, which never has its operands ready.
Free schedule vs. linear schedule: KMW do not use the concept of a linear schedule (it seems that they do not even use the term: I did a text search). They use the concept of a free schedule. Darte-Vivien show that the free schedule could beat the fastest linear schedule by a constant, independent of the size of the domain.
Questions: What do KMW mean by bounded and unbounded parallelism? What about the notions of tube (which intuitively means bounded along a dimension)? What about the storage requirements?
Questions on the definition of the schedule S(k,p): As defined on page 566, a schedule seems to be similar to a 1-d schedule as we know it now. This is because they talk about every vertex of the EDG getting a 'single time stamp'. But that does not preclude many vertices getting the same time stamp (right?).
Free schedule T(k,p): If a vertex of the EDG is free, we evaluate it at time 1. If all the operands of a vertex of the EDG are available at time t, we evaluate it at time t+1.
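The free schedule can be computed by direct recursion (a Python sketch for an assumed one-variable SURE on an N x N domain; the names are mine): a vertex with no operands inside the domain gets time 1, otherwise 1 + the max over its operands' times.

```python
from functools import lru_cache

N = 4
W = [(1, 0), (0, 1)]  # X[i,j] uses X[i-1,j] and X[i,j-1]

def in_domain(i, j):
    return 1 <= i <= N and 1 <= j <= N

@lru_cache(maxsize=None)
def T(i, j):
    # operands of (i,j) that lie inside the domain
    ops = [(i - a, j - b) for (a, b) in W if in_domain(i - a, j - b)]
    return 1 if not ops else 1 + max(T(*q) for q in ops)

print(T(1, 1), T(4, 4))  # 1 7 -- here the free schedule is T(i,j) = i + j - 1
```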
Machinery on page 566 involving T and F: From the outside, T and F seem very similar. T gives us a recursive definition and F looks like the expansion of the recursive definition given by T. However, there is a critical 'there exists' clause in the definition of F. That is crucial for understanding how the length of a path could go on to undefined values, as given by the definition of T. Is there something we need to act on here??????
=============================
Section 3: Lemma 1 notes
[Also refer to notes from Berge. This is the link.]
Lemma 1: premise: Number of edges directed out of each vertex is finite. C1: If there is no upper bound on the lengths of paths directed out of vertex v, C2: there exists an infinite path directed out of v.
Strategy to prove that the premise is required: P => (A=>B) is the given structure of lemma. Why is the premise required? The necessity of the premise can be proved by assuming that it is false, i.e., NOT(P) is true. In that case, if we find that the clause (A=>B) is false, we are done. in other words, we should prove that NOT(P) is true and NOT(A=>B) is true. or, that NOT(P) is true and NOT(NOT(A) OR B) is true. or, that NOT(P) is true and A AND NOT(B) is true.
Assume, contrary to the premise, that the graph has a vertex w such that outdegree(w) is infinite. There are two interesting cases: either there exists a path from w to v or there exists a path from v to w. Let us prove the case when there is a path from v to w (the other case, when there exists a path from w to v, can be proved with a similar argument; the third case, when there does not exist a path between v and w, blows away the negation of the premise).
There exists a path from v to w. Also, There is no upper bound on the lengths of paths directed out of vertex v. Also, there does not exist an infinite path directed out of v.
Should we give a example or prove the fact? If we give an example, which would be a counter example to the fact that the premise is not required, we are done.
Proof of the Lemma and related notes:
====================
Lemma 1: Let H be a directed graph in which the number of edges directed out of each vertex is finite. Then, if there is no upper bound on the lengths of paths directed out of vertex v, there exists an infinite path directed out of v.
Proof strategy: A => B => C. Assume A. Assume NOT(C). If it leads to NOT(B), then we have a contradiction. which proves that B => C.
Proof: Assume the premise. Assume that there does not exist an infinite path directed out of vertex v. Then the longest path directed out of v is given by the function 1 + max{...}. This function is a recursive function defined with Peano arithmetic. So, there exists an upper bound on the lengths of paths directed out of vertex v. This is a contradiction with the hypothesis.
Proof 2: The graph is progressively finite. Prove that it is progressively bounded. [The key idea is that a finite set has an upper bound.] The set of its descendants is finite.
Proof for the premise: prove that if the graph is progressively finite and not progressively bounded, there exists a vertex whose outdegree is not finite. This will be a contradiction.
The statement actually says that if A (as usual, Gamma-finite), then if it is not progressively bounded, it is not progressively finite. This is the same as saying that if it is progressively finite, it is progressively bounded. (same as if it is (not progressively finite) or (progressively bounded)).
To prove the premise, we have to say that, if it is progressively finite and (not progressively bounded), there exists a vertex whose outdegree is infinite.
Given that there exists no infinite paths and that there is no upper bound on any paths, k*Gamma(x)
=========================
Why do we need the premise? Contrary to the premise, assume that there exists a vertex w which has an infinite number of edges out of it. Case 1: If there exists a path from vertex v to vertex w -- regardless of the number of edges out of vertex v -- there exist an infinite number of paths out of vertex v. This is because a path from vertex v to vertex w is all we need (this is why we started from the assumption that the graph is connected). The set of infinite edges out of vertex w leads to an infinite number of paths out of vertex v, namely: {[v->w,w1],[v->w,w2],[v->w,w3],...,[v->w,wk],...}. Case 2: If there exists no path from vertex v to vertex w (or any vertex u which has an infinite number of edges out of it), then the premise is not needed for the lemma. Is this right?
Counter example: So a counter example would be an infinite graph (finite graphs are not that interesting) with the following properties.
To remove some of these woes, we can assume that the graph is a connected graph.
In all the proofs here, we assume the premise and prove the lemma.
Proof by deduction: Assume the premise. Also assume C1, that there is no upper bound on the lengths of paths directed out of vertex v. For a vertex v, let us define a set S_v, which is the set of all paths from the vertex v. Define L(S_v) as a function which returns a set L_v, each of whose elements is an integer denoting the length of a path in S_v.
Since there is no upper bound, the function must return inf, which is another way of stating C2. Is this right?
Proof by deduction: Assume the premise. Also assume C1, that there is no upper bound on the lengths of paths directed out of vertex v. There exist two cases: the graph is finite (|X| is finite) or not (|X| = inf). Case 1: the graph is finite: As there is no upper bound on the lengths of paths out of vertex v, v participates in a circuit. This means that we can obtain paths of *any* length, without bound, from v. This means that we can obtain a path of infinite length from v. Q.E.D. (???check this proof for circularity???)
Proof by contradiction: Assume the premise, assume C1 and negation(C2): Case 1: the graph is finite (|X| < inf): NOT(C2) => there is no infinite path directed out of v => all paths out of v are of finite (but possibly unbounded) length => there exists a vertex w which is at a maximum distance from v. Let us call this distance k. => all paths out of v are of length <= k. This means that there is an upper bound on the lengths of paths. A contradiction. Case 2 (|X| = inf): If the graph is connected (there exists a path between every two vertices), every vertex participates in a path of unbounded length, which means that the statement is obviously true. If the graph consists of more than one connected component, we can apply the argument of case 1 to the component which contains v.
Assume premise, assume C1, assume NOT(C2): NOT(C2) means that there exists no path of infinite length out of vertex v. Since vertex v has finite outdegree (premise), there does not exist a path of infinite length out of each vertices belonging to the set Gamma(x). This means that ......... ..........
We have to use the fact that v does not participate in a path of infinite length. Let us assume for now that an infinite path length is obtained by a circuit. From the premise, let us assume that w1,...,wn are the vertices adjacent to v. None of w1,...,wn have a path back to v. This means that the set Gamma(W) is strictly greater than Gamma(v). This means that Gamma^hat(v) is an unbounded set. What do we do with these sets...
Proof by contradiction: Assume the premise. Assume negation(C1) and C2: NOT(C1) => there exists an upper bound on the lengths of paths directed out of vertex v => all paths through v are of length <= l. This means that there is no path of length inf through v. A contradiction with C2.
Proof by induction
proving an iff by contradiction of the if and only if part is kind of funny ...
=================================
Explicitly defined vs. computability: KMW use 'explicitly defined' for a variable a_j(p) instead of talking about its computability. A SURE is said to have a schedule if each of its variables is explicitly defined (page 566 in the paper and cor. 3 in the book). This also means that a SURE is computable if each of its variables is explicitly defined. (We are using the relationship between computability and the existence of a schedule.)
Bounded parallelism (page 566): phi_s(S,tau): Given a schedule S, which assigns a time S(k,p) to every vertex (k,p) of the EDG, phi_s(S,tau) is the cardinality of the inverse image of tau under the schedule function S.
Intuitive explanation of phi_s: Given a k and p, it is the number of
Intuition behind phi_s: A given schedule maps the vertices of the EDG into N (as it associates a timestamp with every vertex in the EDG). Given a variable of the SURE, phi_s is the number of vertices of the EDG of that type that are mapped to the same timestamp???? Also read section 4 for another explanation of phi.
It is not clear why phi_s should be parametrized on k also. If what we are interested in is bounded parallelism, why not just talk about the cardinality phi_s(tau) = |{ p : S(k,p) = tau }|, for k = 1...m and tau = 1,2,...?
Maybe they mean that. This opinion is because, on the next line, they say 'for all k and tau'. So the notion of bounded parallelism has nothing to do with the 'type of vertices'. OTOH, the definition of phi_s may be general enough that it can encapsulate other ideas??
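A concrete reading of phi_s (my own toy instance, Python): count, for each type k and timestamp tau, the EDG vertices (k,p) with S(k,p) = tau; the parallelism is bounded iff these counts are bounded over tau.

```python
from collections import Counter
from itertools import product

N = 4
# one variable (k = 1) on an N x N domain, scheduled by S(k,(i,j)) = i + j - 1
phi = Counter((1, i + j - 1) for (i, j) in product(range(1, N + 1), repeat=2))

print(phi[(1, 1)], phi[(1, 4)])  # 1 4
print(max(phi.values()))         # 4 -- bounded for this schedule and domain
```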
=============================================
Section 3:Conditions for a function to be explicitly defined
[With Lemma 1, the stage is set about infinite graphs and the relationship between the existence of an upper bound on the lengths of paths and the presence of an infinite-length path. Theorem 1 gives a definition of computability.]
Notation: non+ve: A vector q is nonpositive if q_i <= 0 for all i.
Difference between nonpositive (defined on page 567) and semi-positive (defined on page 569): A vector in 2 dimensions is nonpositive if it belongs to (i) the third quadrant, or (ii) the -ve y-axis, or (iii) the -ve x-axis, or (iv) is the zero vector. A vector in 2 dimensions is said to be semi-positive if it belongs to (i) the first quadrant, or (ii) the +ve x-axis, or (iii) the +ve y-axis.
Note: The regions are mutually exclusive. There exist vectors which are neither. The following "picture" encapsulates all the above ideas.
                                ^
                                |
                                ssssssssssss
                                ssssssssssss
                                ssssssssssss
                                ssssssssssss
                                ssssssssssss
                                ssssssssssss
<-nnnnnnnnnnnnsssssssssss->
    nnnnnnnnnnnn
    nnnnnnnnnnnn
    nnnnnnnnnnnn
    nnnnnnnnnnnn
    nnnnnnnnnnnn
    nnnnnnnnnnnn
                                |
                                v
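A minimal executable version of the two definitions (toy Python predicates over tuples; the function names are mine):

```python
def nonpositive(q):
    # q_i <= 0 for every coordinate (zero vector included)
    return all(qi <= 0 for qi in q)

def semipositive(q):
    # q != 0 and q_i >= 0 for every coordinate
    return any(qi != 0 for qi in q) and all(qi >= 0 for qi in q)

assert nonpositive((0, 0)) and not semipositive((0, 0))        # zero vector
assert semipositive((1, 0)) and not nonpositive((1, 0))        # +ve x-axis
assert not nonpositive((1, -1)) and not semipositive((1, -1))  # neither
```

The last assertion is the "there exist vectors which are neither" remark above.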
[The computability of a SURE and its relation to the presence of positive weight cycles.]
KMW-Theorem 1 statement: Let G be a dependence graph [meaning RDG] specifying a SURE defined over F_n [the positive orthant]. Then the function a_k(p) is explicitly defined iff G does not have a path from v_k to any vertex v_l contained in a nonpositive cycle.
Ambiguity in the statement of theorem 1 (in KMW): The statement of the theorem is a little ambiguous and cannot be disambiguated easily, in particular by reading the statement without context. The first meaning is obtained by associating the clause "contained in a nonpositive cycle" with the path: "... G does not have a path [contained in a nonpositive cycle] from v_k to any vertex v_l". The other interpretation is to associate the clause with the vertex v_l: "... G does not have a path from v_k to any vertex v_l [that is] contained in a nonpositive cycle".
Disambiguation of the statement of theorem 1 (in KMW): Method 1: just reading the statement: If we associate the clause with "path", then v_l seems to be redundant in the theorem. On the other hand, if we associate the clause with vertex v_l, then everything falls into place, there is no redundancy in the definition, and there is a more precise definition of the path. Method 2:reading the first line of the proof: The first line of the proof says "Suppose there is a path pi from v_k to v_l and a nonpositive cycle C including v_l as shown in ....". This means that (i) v_l is in a nonpositive cycle (ii) there is a path from v_k to such v_l.
A possible better wording for theorem 1: If you do not like the statement of the theorem because of the above ambiguity, and the idea of a cycle being 'out of a vertex' is also kind of ugly, use the word 'reachable'. It is also more precise to talk about the explicit definition of a vertex rather than the computability of the system, since some vertices of the RDG may not be explicitly defined while others are. Rewording: A variable a_k of the SURE is explicitly defined iff, in the RDG of the SURE, no cycle of nonpositive weight is reachable from the vertex corresponding to a_k.
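The 'reachable' rewording suggests a direct, if naive, check. The sketch below (my own toy code, brute-force and exponential, for tiny RDGs only) enumerates simple cycles with componentwise-nonpositive weight vectors and then tests reachability from v_k:

```python
def nonpositive_cycle_vertices(edges, n_vertices):
    """Vertices lying on some simple cycle whose total weight vector is
    componentwise <= 0. Brute-force DFS over simple cycles; exponential,
    so only for tiny RDGs. edges: list of (u, v, weight_vector)."""
    adj = {u: [] for u in range(n_vertices)}
    for u, v, w in edges:
        adj[u].append((v, w))
    dim = len(edges[0][2])
    bad = set()

    def dfs(start, node, path, acc):
        for v, w in adj[node]:
            total = tuple(a + b for a, b in zip(acc, w))
            if v == start and all(x <= 0 for x in total):
                bad.update(path)          # the whole path is the cycle
            elif v not in path:
                dfs(start, v, path + [v], total)

    for s in range(n_vertices):
        dfs(s, s, [s], (0,) * dim)
    return bad

def explicitly_defined(edges, n_vertices, k):
    """a_k is explicitly defined iff no vertex on a nonpositive-weight
    cycle is reachable from v_k (the 'reachable' reading of theorem 1)."""
    bad = nonpositive_cycle_vertices(edges, n_vertices)
    succ = {u: [] for u in range(n_vertices)}
    for u, v, _ in edges:
        succ[u].append(v)
    seen, stack = {k}, [k]
    while stack:
        u = stack.pop()
        if u in bad:
            return False
        for v in succ[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return True

# v_1 sits on a zero-weight self-loop (a nonpositive cycle) reachable from v_0:
print(explicitly_defined([(0, 1, (1,)), (1, 1, (0,))], 2, 0))  # False
# the only cycle 0 -> 1 -> 0 has weight (2,), which is positive:
print(explicitly_defined([(0, 1, (1,)), (1, 0, (1,))], 2, 0))  # True
```

Deciding the existence of a nonpositive cycle efficiently is of course the real content of the paper (the LP machinery of section 4); this sketch only illustrates the statement.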
[A better statement would be the following: Any vertex which 'participates' in a non-positive-weight cycle is not explicitly defined. Any vertex that has a path to another vertex that 'participates' in such a cycle is not explicitly defined. The main crib about this statement is the use of the word 'participate'.]
[Darte-Vivien divide the theorem into two parts, corollary 3 and theorem 17.]
The different kinds of statements of computability from DV/KMW -- the equivalence of three (many?) things: The equivalence of the following is being talked about: (i) a cycle of nonpositive weight in the RDG G, (ii) incomputability of the SURE S, (iii) a path of infinite length directed out of a vertex (i,p) in the EDG Gamma, (iv) not every variable of the SURE being explicitly defined, (v) absence of a schedule.
Darte-Vivien's equivalences: In DV, the equivalence of (ii) and (iii) is by definition (and the existence) of the free schedule, and also by corollary 3. What about the equivalence of the rest in DV?
KMW's equivalences: In KMW, the equivalence of (iv) and (ii) is by definition of computability (remember, KMW do not use the word computability). Also, the equivalence of (v) and (ii) is by definition of computability and schedule. KMW bring lemma 1 into the picture because it gives the equivalence of "absence of explicit definition" and "presence of infinite-length paths": namely, the equivalence of (iv) (or even (v)) and (iii).
[Is all this required? Yes. The equations are a simple description of the input. We have some associated questions on the input which we want to answer in finite time. The computability problem for SUREs maps directly to checking for infinite-length paths in the EDG. Direct checking is not feasible. We map the problem to checking for nonpositive-weight cycles in the RDG.]
DV: Corollary 3: The equivalence between (ii) and (iii) is made. The statement: "A SURE is computable iff, for every vertex (i,p) in Gamma, there is no infinite path in Gamma directed to (i,p)."
Darte-Vivien's statement of theorem 17: Let G be the RDG of a SURE S defined over F_n (the +ve orthant as the domain of all variables). S is computable iff G has no cycle of non+ve weight, i.e., no cycle C with w(C) <= 0.
KMW's proof strategy: The theorem statement talks about equivalence of (i) and (iv). KMW have however stated on page 566, the equivalence between the existence of a schedule and each variable of the SURE being explicitly defined (meaning they have the equivalence of ??? ). So they prove theorem 1 by proving the equivalence of (i) and (iii).
Darte-Vivien's proof strategy: The theorem statement talks about the equivalence of (i) and (ii). Darte-Vivien have however proved, in corollary 3, the equivalence of (ii) and (iii). So they prove theorem 17 by proving the equivalence of (i) and (iii).
KMW's statement of corollaries 1/2: From the outside, it may look like KMW's cor. 1/2 and the statement of theorem 18 from DVR are different. However, they are the same and KMW do not miss any trick.
Corollary 1: In corollary 1, KMW say that
[just to remember: we are proving the equivalence of (i) = "absence of a cycle of nonpositive weight in G" and (iii) = "absence of a path of infinite length directed out of vertex (i,p) in Gamma". Actually, we are proving the contrapositive: (i)="presence of a cycle of nonpositive weight in G" and (iii)="presence of a path of infinite length directed out of vertex (i,p) in Gamma".]
The proof: (i) => (iii): method: induction: If there exists a cycle of nonpositive weight in G, then there exists a path of infinite length directed out of some vertex (i,p) in Gamma. Assumption: p and p - w_j, for all 1 <= j <= k, are in F_n (all these are valid labels of lattice points in the positive orthant). Given: there exists a cycle of nonpositive weight in G. Let i and j be two (not necessarily different) vertices on the cycle. Let the weight along the cycle be w1 from i to j and w2 from j to i; w1 + w2 is a nonpositive vector. Now, ......??? ?????
[Punch line of the proof: w(C)<=0 which means that there exists a ll such that (ll,p-w(C)) is reachable from (i,p). So we can repeat the above procedure ad-infinitum, or give an inductive proof.]
[The statement of the only-if part talks about (iv) => (i). KMW use the definition of explicit definition of a variable to say that (iv) is equivalent to (iii). Also, they use lemma 1 to show that (iii) is equivalent to the existence of a specific infinite path.]
The proof: (iii) => (i): There exists an infinite path. From the labels p along the path we can select an infinite subsequence in which p is non-decreasing, first in the first coordinate, then in the second coordinate, etc. [This subsequence is infinite because, if it were not, the original sequence would be finite.] Some vertex type i must repeat in the subsequence, by the pigeonhole principle, and the path segment between two such occurrences is a cycle in G. Its weight is nonpositive, since the p-values along the subsequence are non-decreasing.
Possible bug in Darte-Vivien's statement/proof of theorem 17: Darte-Vivien use n in two different ways: in the theorem statement, n is the superscript of mathcal{N}, signifying the dimension of the input; in the inductive proof of the "if" part, DV use n again -- this time as the induction variable -- which could result in confusion. A better way is to use alpha, as KMW do.
===========================
My statement of the above theorems: Let S be a SURE, with G as its RDG and Gamma as its EDG (with dependence edges). The following are equivalent: (A) S is computable. (B) G has no vertex from which a cycle of nonpositive weight is reachable (i.e., every vertex of G is explicitly defined). (C) No vertex in the EDG Gamma has a path of infinite length out of it (into it?).
============================
Proof of Darte-Vivien's theorem 17: if part (suppose there is a cycle of non+ve weight; then the SURE has an infinite path ending in ......)
Comment about DV's theorems 17-18 (or even KMW's theorem 1 and corollaries): Though the statement about the incomputability of a SURE defined over F_n is relatively more interesting, the pragmatic view seems to be to talk about the computability of a SURE defined over a bounded domain (so that we restrict our discussion to just for loops here; see the remark about while loops later). To remove the cases where the RDG is computable for some domain sizes and not for others, it is better to talk about the incomputability of a SURE over any bounded domain. DV and KMW differ here a little. DV talk about the computability of a SURE defined over families of finite-size domains. OTOH, KMW talk about the computability of SUREs defined over strips of F_n (intuitive meaning defined below).
[corollary 1 of KMW talks about the extension of theorem1 to strips of F_n]
Meaning of strips: KMW use the notation Q_t * F_{n-t} to mean the following (the implicit assumption is that t <= n): in each of the t dimensions, the coordinate ranges over a finite set; in the other n-t dimensions, the coordinates are not restricted to a finite set.
[idea of KMW's proof of corollary1: Draw a RDG G' in which the vertices are from {1, ... , n} * Q_t. This is a finite graph and can work as the RDG. Now apply theorem1 to the EDG induced by this RDG.]
Corollary 2:
Question about strips and the scheduling of while loops (n-t while loops out of an n-dimensional space): Given an n-dimensional SURE in which only t dimensions range over a finite set, can the loops be scheduled so that the performance of the overall code is improved? What is the best schedule for the code? What is the best memory allocation for the code?
[KMW talk about strips of F_n in corollary 1 and 2. Darte-Vivien's statement of theorem and its proof are the same as that when t = n; or, when the SURE is defined in a finite domain.]
KMW's corollary 1 of theorem 1: Let G be the dependence graph of a SURE defined over R = Q_t * F_{n-t}. Then a_k(p) is explicitly defined iff there do not exist l in {1, ..., m} and p, q, r in R such that (i) (k,p) -> (l,q) -> (l,r) and (ii) the vector q - r is zero in the coordinates corresponding to Q_t and nonpositive in the coordinates corresponding to F_{n-t}.
Ambiguity in the statement of KMW corollary 1: As stated in KMW, corollary 1 talks about the SURE, the EDG and the RDG at the same time. OTOH, corollary 2 talks about only the SURE and the RDG, in the spirit of theorem 1. Darte-Vivien remove the need for this type of reasoning by giving "their corollary 3" a priori.
[See the reasoning above which states that the statement of theorem 1 is ambiguous. In the same way, corollary 2 is also ambiguous.]
Questions about strips: Give a statement of computability of an EDG defined over a skewed strip like {i-j <= 10; i-j >= 5; i >= 0; j <= 0}
Variant of theorem about UREs for bounded domains
=======================================
Section 4: The theorems for URE
Notation: Semi-positive: a vector q is semi-positive if q != 0 and q_i >= 0 for all i.
Theorem 2: It gives the conditions for the computability of a URE. Basically, it frames the computability problem of a URE as an LP problem, and some statements of the theorem are equivalent by Farkas' lemma.
In more detail, (i) and (ii) are equivalent by theorem 1. The rest follow by Farkas' lemma and LP duality.
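As a toy illustration of the LP flavor (this is not KMW's exact formulation): a strictly separating hyperplane h with h . w_j >= 1 for every dependence vector w_j gives a linear schedule t(p) = h . p. The sketch below brute-forces that feasibility question over a small integer grid, where a real implementation would solve the LP:

```python
from itertools import product

def find_separating_hyperplane(deps, bound=5):
    """Toy search for an integer vector h with h . w >= 1 for every
    dependence vector w. Such a strictly separating hyperplane yields a
    linear schedule t(p) = h . p. A real implementation would pose this
    as an LP (the Farkas-lemma machinery); here we brute-force a grid."""
    dim = len(deps[0])
    for h in product(range(-bound, bound + 1), repeat=dim):
        if all(sum(hi * wi for hi, wi in zip(h, w)) >= 1 for w in deps):
            return h
    return None

print(find_separating_hyperplane([(1, 0), (0, 1)]))   # (1, 1)
print(find_separating_hyperplane([(1, 0), (-1, 0)]))  # None: infeasible
```

The infeasible case corresponds to the dependence vectors admitting a nonpositive (here zero) combination, the cycle condition of theorem 1.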
Examples of separating hyperplanes on page 570:
page 571: for a URE, we can index the function T by just T: m(p) and T(p): remember that
Theorem 4:
Amount of Parallelism (the same as phi_s(tau)): There exists a schedule S such that sup_tau phi_s(tau) attains the infimum over all schedules.
Intuitive meaning of amount of parallelism:
Notation: sup and inf: supremum (aka sup) means least upper bound; infimum (aka inf) means greatest lower bound. (sup and inf are defined for sets.) The supremum and infimum of a set need not belong to the set.
Storage Requirements for a URE:
sigma_s (secondary objective function??): For a URE, it is the number of computed values that have to be retained at computation time tau so that the remaining points can be computed.
Theorem 5: Memory optimization for a URE: If there exist two (dependence) vectors w_i and w_j that are linearly independent, then the storage requirement cannot be bounded. This is a strong statement because it applies to every schedule.
What if the dependences are the set (0,1) and (1,0)?
Intuitive meaning of the amount of storage requirements (taken from page 575): Except for special cases, the amount of storage required to compute the function a(p) is unbounded.
=========================
Algorithm not given? (page 580, footnote about the construction of G'): It seems that KMW do not give an explicit algorithm for the construction of G'. They also do not give its running time. Perhaps the practice of measuring algorithms by their running time (Karp/Edmonds/Hartmanis) was not fully established by then?
Page 582: Parallelism: What do KMW mean when they say that they have not solved the problem completely?
Section 5: SUREs
Amount of Parallelism of SUREs when compared with UREs (p577): The main idea is that there exist explicitly defined systems with n > 1 for which the free schedule has bounded parallelism. It is not very clear why only one point of the iteration space can be computed at any time.
Questions about example 6, page 578: Why are self-dependence loops labelled in a strange way? Also, why do KMW seem to imply that fig. 5 can have only sequential schedules? Also, look at their notes for the example when they talk about the free schedule.
Theorem 7: Properties of the subgraph similar to the theorems in DV.
Theorem 8 (page 583):
Storage of SUREs: p 584:
Section 6 (p 588): Implicit schedules
Implicit schedules Vs. Explicit Schedules (page 588):
A simple preprocessing step (using Euler's theorem): The KMW decomposition algorithm is searching for a union of cycles. We have Euler's theorem that a connected directed graph has an Eulerian cycle iff each vertex has equal in- and out-degrees. Can we apply this theorem and remove any node v whose in-degree is not equal to its out-degree? IS THIS RIGHT? What to do with the hypergraph? What to do with the bipartite graph? What to do with non-simple cycles (remember, Euler's theorem applies to simple cycles)?
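As the "IS THIS RIGHT?" suggests, the in-degree = out-degree test is doubtful as a filter. A safe standard trim, which only discards vertices that cannot lie on any cycle, can be sketched as (my own toy code):

```python
def trim_acyclic_parts(edges):
    """Iteratively delete edges touching a vertex with in-degree 0 or
    out-degree 0: such a vertex cannot lie on any cycle, hence on no
    union of cycles. (This trim is always safe, unlike the stronger
    in-degree == out-degree Euler test, which need not hold at a vertex
    lying on a single cycle with extra incident edges.)"""
    edges = set(edges)
    changed = True
    while changed:
        changed = False
        has_in = {v for _, v in edges}    # vertices with an incoming edge
        has_out = {u for u, _ in edges}   # vertices with an outgoing edge
        keep = {(u, v) for (u, v) in edges if u in has_in and v in has_out}
        if keep != edges:
            edges, changed = keep, True
    return edges

print(trim_acyclic_parts({(0, 1), (1, 2), (2, 0), (2, 3)}))
# keeps only the cycle edges {(0, 1), (1, 2), (2, 0)}
```

Extending this to hyperedges (remove the whole hyperedge when any component goes) is the open question above.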
Should we try for free schedule of the hypergraph?
RDG having dependence edges / data-flow edges
The equations being defined by a_j = ... a_k(z - w) ... or a_j = ... a_k(z+w)
Relation between Brent's theorem (1974) and KMW's statement of computability:
Computability
Fat domains: The computability of a SURE defined over bounded domain makes sense iff the domain is sufficiently fat. Read more about this.
Schedule
=============================
Section 3: Lemma 1 notes
[Also refer to notes from Berge.]
An infinite graph has a vertex w such that outdegree(w) is unbounded. There are two interesting cases: either there exists a path from w to v, or there exists a path from v to w. Let us prove the case when there is a path from v to w (the other case, when there exists a path from w to v, can be proved with a similar argument; the third case, when there is no path between v and w, blows away the negation of the premise).
There exists a path from v to w. Also, There is no upper bound on the lengths of paths directed out of vertex v. Also, there does not exist an infinite path directed out of v.
Should we give an example or prove the fact? If we give an example -- which would be a counterexample to the claim that the premise is not required -- we are done.
Proof of the Lemma and related notes:
====================
Lemma 1: Let H be a directed graph in which the number of edges directed out of each vertex is finite. Then, if there is no upper bound on the lengths of paths directed out of vertex v, there exists an infinite path directed out of v.
Proof strategy: A => B => C. Assume A. Assume NOT(C). If it leads to NOT(B), then we have a contradiction, which proves that B => C.
Proof: Assume the premise. Assume that there does not exist an infinite path directed out of vertex v. Then the length of the longest path directed out of v satisfies the recursion L(v) = 1 + max{L(w) : w a successor of v} (with the empty max taken as 0). This is a recursive function definable in Peano arithmetic. So there exists an upper bound on the lengths of paths directed out of vertex v. This is a contradiction with the hypothesis.
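The recursion in this proof can be written down directly (toy Python; termination of the function on the finite reachable part is exactly the claimed upper bound):

```python
from functools import lru_cache

def longest_path_length(adj, v):
    """The recursion from the proof: L(v) = 1 + max of L over the
    successors of v, with 0 at sinks. If no infinite path leaves v,
    every recursive call terminates, and L(v) is the claimed upper
    bound on the lengths of paths directed out of v."""
    @lru_cache(maxsize=None)
    def L(u):
        return max((1 + L(w) for w in adj[u]), default=0)
    return L(v)

# A small DAG: 0 -> {1, 2} -> 3
print(longest_path_length({0: [1, 2], 1: [3], 2: [3], 3: []}, 0))  # 2
```

On a graph with a cycle reachable from v, the recursion would not terminate, matching the "no infinite path" hypothesis being essential.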
Proof 2: The graph is progressively finite. Prove that it is progressively bounded. [The key idea is that a finite set has an upper bound.] The set of its descendants is finite.
Proof for the premise: prove that if the graph is progressively finite and not progressively bounded, there exists a vertex whose outdegree is not finite. This will be a contradiction.
The statement actually says that if A (as usual, Gamma-finite), then if it is not progressively bounded, it is not progressively finite. This is the same as saying that if it is progressively finite, it is progressively bounded. (same as if it is (not progressively finite) or (progressively bounded)).
To prove the premise, we have to say that, if it is progressively finite and (not progressively bounded), there exists a vertex whose outdegree is infinite.
Given that there exists no infinite paths and that there is no upper bound on any paths, k*Gamma(x)
=========================
To remove some of these woes, we can assume that the graph is a connected graph.
In all the proofs here, we assume the premise and prove the lemma.
Since there is no upper bound, the function must return inf. Which is another way of stating C2. Is this right?
Assume premise, assume C1, assume NOT(C2): NOT(C2) means that there exists no path of infinite length out of vertex v. Since vertex v has finite outdegree (premise), there does not exist a path of infinite length out of any of the vertices belonging to the set Gamma(x). This means that ......... ..........
We have to use the fact that v does not participate in a path of infinite length. Let us assume for now that an infinite path length is obtained by a circuit. From the premise, let us assume that w1, ..., wn are the vertices adjacent to v. None of w1, ..., wn have a path back to v. This means that the set Gamma(W) is strictly larger than Gamma(v). This means that Gamma^hat(v) is an unbounded set. What do we do with these sets...
proving an iff by contradiction of the if and only if part is kind of funny ...
=================================
Friday, August 19, 2005
hypergraphs, strongly connected components
In a hypergraph constructed in a way mentioned before, we seem to be finding hyper-articulation points (or hyper-articulation edges). The decomposition algorithm seems to remove some hyperedges. The resultant components are somehow, hyper-strongly connected.
--
Assume we do a DFS on the associated directed bipartite graph, also giving each vertex its [BEGIN,END] times as usual. BEGIN meaning the first time when it is found, and END meaning the time when it is blackened, from grey.
Let V -> u -> W be a hyperedge, with u being the "dummy" node corresponding to the hyperedge. Also, let v \in V and w \in W be two representative elements of the sets V and W.
fact: END(u) > END(w) for all w \in W /*this is anyway true */
--
((END(v) > END(u)) and () for all v \in V, then node u is not part of any zero-weight cycle. /*the usual cycle condition*/
This can be stated as the following: no B-arc of the hypergraph is a backward edge in a DFS
Now, if the node u corresponding to a hyperedge E has the following property: for every vertex v such that v->u is a B-hyperedge(beginning part of a hyperedge), END(u) >
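The [BEGIN, END] timestamps used above can be computed with a standard DFS (a sketch in plain Python over an adjacency-list dict; names are mine):

```python
def dfs_times(adj):
    """Iterative DFS over a directed graph, assigning every vertex its
    [BEGIN, END] timestamps: BEGIN when first discovered, END when
    blackened (all descendants finished)."""
    begin, end, clock = {}, {}, 0
    for root in adj:
        if root in begin:
            continue
        begin[root] = clock = clock + 1
        stack = [(root, iter(adj[root]))]
        while stack:
            u, it = stack[-1]
            pushed = False
            for v in it:
                if v not in begin:
                    begin[v] = clock = clock + 1
                    stack.append((v, iter(adj[v])))
                    pushed = True
                    break
            if not pushed:
                end[u] = clock = clock + 1
                stack.pop()
    return begin, end

b, e = dfs_times({0: [1, 2], 1: [2], 2: []})
print(b, e)  # an ancestor finishes after its descendants: e[0] > e[1] > e[2]
```

This is the standard fact behind "END(u) > END(w) for all w \in W": a vertex is blackened only after everything reachable and unvisited below it is.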
--
Preprocessing step: How do we obtain zero contributions from some edges? I.e., when we have an equation like q1 + q2 + q3 = 0, we immediately write it as q1 = q2 = q3 = 0. What does this mean? So a preprocessing step could remove vertices which have all incoming or all outgoing edges (actually hyperedge components). Then we remove those edges completely (remember, we have to remove the hyperedge completely) from the graph. This is called a preprocessing step, as it can be done quickly and is something we have to do anyway.
--
Can we do index set splitting quickly for unitary graphs?
--
The main
--
There does not seem to be much point in reversing the directions of edges a la Tarjan's strongly connected components algorithm. This is because we are interested in solving Bq = 0, which trivially means that -Bq = 0!
So the information in the DFS of B is contained in the DFS of -B. Or is it? Since we have to satisfy both Bq = 0 and -Bq = 0, what does the undirected hypergraph give us? Maybe nothing: we are losing a lot of information by looking at the undirected hypergraph.
This brings us back to the question: should we look at either Bq = 0 or -Bq = 0 alone? What additional information is present in looking at both (not the undirected hypergraph, of course)?
--
There are three types of nodes in the graph: the v's, the w's, and the e's (which are actually the edges of the hypergraph, but can be recast as vertices of the new graph).
Observation: every edge has to pass through the switch box of the E vertices.
We are looking for a cycle in this graph that passes through at least one v in V and at least one w in W.
Or, we are looking to remove edges that do not participate in any cycle (what does this mean?).
--
Look at the Cormen book for quite good notes on shortest paths (chapters 24, 25). I am not sure what algorithm we should use for the hypergraph. Once a hypergraph has been constructed, or even the equivalent bipartite graph, the edge weights can be thought of as unit (+1). So we may be able to do away with Dijkstra's algorithm????
Tuesday, August 16, 2005
All pair shortest paths on directed graphs
The textbook algorithms for APSP assume a weighted directed graph. The algorithms are the following:
Dijkstra: basically an SSSP from each vertex; assumes no negative-weight edges; running time O(V^3) with a simple array implementation.
Dijkstra with better data structures: using a binary min-heap: O(VE lg V).
Dijkstra with better data structures: using Tarjan's Fibonacci heap: O(V^2 lg V + VE).
Bellman-Ford: also an SSSP from each vertex, but allows negative-weight edges; running time O(V^2 E) (this could be O(V^4) on dense graphs).
Bellman-Ford: improved running time
Floyd-Warshall: dynamic programming over intermediate vertices (the related matrix-multiplication method uses repeated squaring); very similar to transitive closure; running time O(V^3).
Johnson: an improvement for sparse graphs; uses both Dijkstra's and Bellman-Ford's algorithms as subroutines, with a reweighting technique. O(V^2 lg V + VE)
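As a concrete instance, the Floyd-Warshall entry can be sketched as (standard algorithm, dict-of-dicts adjacency):

```python
def floyd_warshall(w):
    """Floyd-Warshall APSP in O(V^3): dynamic programming over the set
    of allowed intermediate vertices. w: dict-of-dicts adjacency with
    w[u][v] = weight of edge u -> v."""
    INF = float('inf')
    verts = list(w)
    d = {u: {v: (0 if u == v else w[u].get(v, INF)) for v in verts}
         for u in verts}
    for k in verts:                  # allow k as an intermediate vertex
        for i in verts:
            for j in verts:
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

g = {'a': {'b': 3, 'c': 8}, 'b': {'c': 2}, 'c': {}}
print(floyd_warshall(g)['a']['c'])  # 5, via b
```

With all edge weights unit (+1), as suggested for the hypergraph above, plain BFS from each vertex would indeed suffice instead of Dijkstra.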
--
Convergence of a PDE??
equivalence of two SUREs
Predicting the final values without computation???
Sunday, August 14, 2005
Arvind Sharma's book on Experiential Dimension of Advaita Vedanta
Reading the book, The Experiential Dimension of Advaita Vedanta, by Arvind Sharma.
This book is dedicated to Eliot Deutsch.
The preface itself is good. Sharma says that no word other than the sanskrit word "Advaita" is necessary to understand the concept of experiential advaita. To prove this, he claims to use only five previously unknown words in the book: Advaita Vedanta, Sankara, Ramana and Nisargadatta.
Sharma answers the question of "why is the concept of experience important or relevant" by saying that experience is something everyone can feel for themselves. This is to differentiate it from the scriptural overemphasis someone may find in such an exposition.
A similar thought is given in his other book, titled "Advaita Vedanta", which seems to have been written later. This is my post on that book (it also contains the Amazon link to the book). That book is divided into three main chapters: the scriptural, rational and experiential aspects of Advaita. This is my post on the third part of that book.
Whose experience are we talking about? To disambiguate the term experience, as to whose experiences and which experiences, Sharma divides experience into four categories:
ordinary experiences of ordinary people,
extraordinary experiences of ordinary people,
ordinary experiences of extraordinary people, and
extraordinary experiences of ordinary people.
In the introduction, Sharma says that Sankara is the leading expositor of doctrinal Advaita and Ramana is the leading expositor of experiential Advaita.
The first chapter titled "what is normal experience". At the end of chapter, Sharma raises 11 points and sub-points about how people give primacy to waking, over those of dream and dreamless sleep. This is even among among people who agree that all three are just states of consciousness and should be comparable.
In the second chapter titled "critique of normal experience" gives the counter arguments that Advaita provides to each of the 11 points and sub-points raised in the first chapter. Some significant conclusions seem to be in point 3.vi, where to answer the question 'If a contradicting experience is superior to a contradicted experience, is not waking state superior to dreaming?'. Sharma admits that a contradicting experience is superior to a contradicted experience. However all three states are capable of contradicting each other [How can dreamless state contradict any of the other? Ans: we experience pain when awake. When asleep, we donot experience it. So pain characterizes not the body but the body-consciousness as it comes and goes with it.]. For example, a rich man may dream that he is poor, which is a contradiction when he is in dream state. So, the fact that a state can be a contradicting some other state enforces our view that the contradicting state can be contradicted.
The third chapter titled "Conclusions on the critique" makes some conclusions. The important being that the three states contradict each other in terms of reality in each of them. None of these three states represents reality by itself. What is common between all three? The being I is common all three states.
The fourth chapter titled "Advaitin Experience and its relationship to Normal Experience", Sharma makes some interesting points. He answers the question, how does an Advaitin experience reality (and dualities like pleasure and pain) different from normal people. He says that
He says the following about the differences in experience of dualities by the realized and ordinary person.
The eight chapter is titled "Some accounts of Advaitin Experience". It is mainly about the experience of Ramana, of his 'disciple' Paul Brunton and of Nisargadatta. [Paul Brunton wrote the book "Search in secret India". This is the Amazon link.]
Sharma concludes the book with the following:
Read the rest of this entry >>
This book is dedicated to Eliot Deutsch.
The preface itself is good. Sharma says that no word other than the sanskrit word "Advaita" is necessary to understand the concept of experiential advaita. To prove this, he claims to use only five previously unknown words in the book: Advaita Vedanta, Sankara, Ramana and Nisargadatta.
Sharma answers the question of "why is the concept of experience important or relevant" by saying that experience is something everyone can feel for themselves. This is to differentiate it from scriptural over emphasis someone may find in such an exposition.
A similar thought is given his other book titled "Advaita Vedanta", which seems to be written later. This is my post on that book (also contains the amazon link to that book). That book is divided into three main chapters: scriptural, rational and experiential aspects of Advaita. This is my post on the third part of that book.
Whose experience are we talking about? To disambiguate the term experience, as to who's experiences and which experiences, Sharma divides the term experience into the four categories:
In the introduction, Sharma says that Sankara is the leading expositor of doctrinal Advaita and Ramana is the leading expositor of experiential Advaita.
The first chapter titled "what is normal experience". At the end of chapter, Sharma raises 11 points and sub-points about how people give primacy to waking, over those of dream and dreamless sleep. This is even among among people who agree that all three are just states of consciousness and should be comparable.
In the second chapter titled "critique of normal experience" gives the counter arguments that Advaita provides to each of the 11 points and sub-points raised in the first chapter. Some significant conclusions seem to be in point 3.vi, where to answer the question 'If a contradicting experience is superior to a contradicted experience, is not waking state superior to dreaming?'. Sharma admits that a contradicting experience is superior to a contradicted experience. However all three states are capable of contradicting each other [How can dreamless state contradict any of the other? Ans: we experience pain when awake. When asleep, we donot experience it. So pain characterizes not the body but the body-consciousness as it comes and goes with it.]. For example, a rich man may dream that he is poor, which is a contradiction when he is in dream state. So, the fact that a state can be a contradicting some other state enforces our view that the contradicting state can be contradicted.
The third chapter titled "Conclusions on the critique" makes some conclusions. The important being that the three states contradict each other in terms of reality in each of them. None of these three states represents reality by itself. What is common between all three? The being I is common all three states.
The fourth chapter titled "Advaitin Experience and its relationship to Normal Experience", Sharma makes some interesting points. He answers the question, how does an Advaitin experience reality (and dualities like pleasure and pain) different from normal people. He says that
...
the realized person, however, in a sense experiences less than the ordinary person; in another sense experiences more than the ordinary person; and in another sense experiences the world differently from an ordinary person.
...
He says the following about the differences in experience of dualities by the realized and ordinary person.
The realized person experiences pain and pleasure but does not experience it in the same way as an ordinary person. ... In a sense it might be said that the realized person feels physical pain but nor mental pain. It could be said that the difference between a realized person and an ordinary person does not lie not so much in what the realized one experiences and the ordinary one does not but rather in what the ordinary person experiences and the realized one does not. The realized person and the ordinary person both experience sugar as sweet and wormwood as bitter, both see and smell and walk and talk. But the ordinary person also experiences anxiety, fear, suffering, hope, diappointment etc. These the realized person does not experience.
The eight chapter is titled "Some accounts of Advaitin Experience". It is mainly about the experience of Ramana, of his 'disciple' Paul Brunton and of Nisargadatta. [Paul Brunton wrote the book "Search in secret India". This is the Amazon link.]
Sharma concludes the book with the following:
There is starkness [emphasis mine] about Advaita Vedanta when presented in its experiential dimension. This starkness some find compelling and some repelling and others remain unaffected by it. All, however, would perhaps want to know: Does it have anything to offer?
The question was put to Ramana whose virtual nakedness symbolized, as it were, the starkness of the experiential dimension of Advaita Vedanta. He was once asked by a somwehat cynical seeker: 'Do you have anything to offer to me?'
'Yes', Ramana is supposed to have said, putting aside the comic book he was reading. 'But do you think you can take it?'
Posted by ramakrishna u at 8/14/2005 03:44:00 AM. Labels: Philosophy Papers and Books, Reviews. 0 comments.
Friday, August 12, 2005
ability to multiplex - duality and non-duality
It seems the ability to multiplex thoughts comes naturally, except for some periods when there is a single thought. It also seems that such a period of uninterrupted flow occurs only after repeatedly trying to do work while, at the same time, keeping the "other" thoughts on God. This is probably what Shankara meant when he said that duality leads us to non-duality. And the quote in PUM: "The Lord's feet lead us to the other side".
Wednesday, August 10, 2005
unitarization and directed hypergraphs: more thoughts
It is clear that unitarization results in a directed hypergraph. Also, a zero-weight cycle is a path from a vertex back to itself whose weights sum to zero.
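To make "zero-weight cycle" concrete in the scalar-weight case: assuming 1-d edge weights and no negative-weight cycle, Floyd-Warshall run over closed walks yields the minimum cycle weight, which we can test against zero. A minimal sketch (function names are mine):

```python
INF = float('inf')

def min_cycle_weight(n, edges):
    # Floyd-Warshall over walks of length >= 1. Diagonal entries start at
    # infinity (not 0), so d[i][i] ends up as the cheapest closed walk
    # through i. With no negative-weight cycle, that equals the minimum
    # weight of a cycle through i.
    d = [[INF] * n for _ in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return min(d[i][i] for i in range(n))

def has_zero_weight_cycle(n, edges):
    # True iff the lightest cycle in the digraph weighs exactly zero.
    return min_cycle_weight(n, edges) == 0
```

For example, the 3-cycle with weights 1, 1, -2 has total weight zero and is detected; a 2-cycle with weights 1, 1 is not.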
Fact: if the given graph has a 1-dimensional schedule, it corresponds to one sequential loop; if it has a 2-d schedule, it corresponds to two nested sequential loops.
So, what can we say about the presence of a k-d schedule, and the limits of doing something in the graph with 1 and d? (à la the sandwich theorem.)
What can we say about the presence of d-dimensional schedules?
Are you sure that the depth is min(n,d)? Or does it have to be d?
Sunday, August 07, 2005
Software Pipelining and unarization
It seems that unarization can be applied to small loops very effectively.
--
It also seems that the technique can be applied to 1-d loops; and since the model is simple, maybe they can be scheduled even with resource constraints in a simple way.
Siddhartha by Hesse: Final Chapter - Govinda
I have nothing to say but to redirect to the chapter "Govinda" from Siddhartha by Herman Hesse.
I am just feeling the way Sanjaya might have felt after he saw the immortal conversation between Lord Krishna and Arjuna in the Bhagavad Gita. My words would echo the same that he used in exclamation at the end of the Celestial Song.
Also the comments by Atanu on searching vs. finding are a worthwhile read.
Wednesday, August 03, 2005
Atri-DattaAtreya and Vyasa-Suka: a comparison
Both Dattaatreya and Suka are sons of well-known sages in the Hindu puranas. The concept of a son itself has a hidden meaning, as in other familial ties. A son, of course, carries forward the school of thought that the father propagates, which is exactly what these two great sons did.
Dattaatreya is the son of Atri Mahamuni. The name of the sage Atri itself -- A-tri: without three, or beyond three -- conveys the spirit of one who has transcended the three levels of consciousness (not just the three characteristics of Sattva, Rajas and Tamas). Datta-Atreya means one who has been adopted by such a sage; and AnaSuya is one who is beyond Asuya. It is said that Brahman himself had been Datta (taken as a son) by such a couple. Datta-Atreya is said to be the sage who formulated the basic statement of Advaita: Aham Brahmasmi (I am Brahman).
Suka is the son of Vyasa. Vyasa is the divine sage who compiled the Vedas and wrote the Mahabharata, which contains the BhagavadGita and the VishnuSahasranamam. Vyasa literally means the divider of the Vedas. Suka is known as his son who had transcended all the stages of consciousness from birth itself. Suka narrated the Bhagavata Purana. The narration has well-known chapters like Gajendra Moksham and Prahlada Charitam, and in it he conceptualized not only the method of Jnana (like the original prayer to nirguna Brahman by Gajendra) but, of course, also the method of Bhakti (where else but the pinnacle: chapter 10?) in the story of Lord Krishna.
An anecdote about Suka: it is said that if Suka heard even the name of Radha, he would go into samadhi. So, in the whole Bhagavatam, there is no mention of Radha-Devi. The questions put forward by Parikshit were such that no mention of Radha had to be made by Suka when he was narrating the story.
It seems that Atri-Dattaatreya are harder for ordinary human beings to grasp, even though the teachings are very simple, like the 24 natural gurus of Lord DattaAtreya. So Vyasa-Suka cast the ideas into a form that is more concrete, and possibly easier or simpler in another way. The result was a concept of Bhakti which could be understood by the multitudes.
It should not, however, be forgotten that the Avadhuta in the Avadhuta-Yadu samvadam, which occurs near the end of the Bhagavatam, is none other than DattaAtreya. So by inserting that dialogue in a later chapter of the Bhagavatam, Vyasa-Suka made the wonderful reader-friendly assumption that a reader who has read or listened to the Bhagavatam up to that chapter is sufficiently mature to understand the Advaitic concepts of DattaAtreya.
Tuesday, August 02, 2005
checking for the validity of a transformation on unitary graphs
Given a d-dimensional vector and a computation on the vector, what are the problems that can be solved quickly? If we are also given that the dependence components in the associated computation are from the range {-1,0,+1}, can we decide the equivalence of two computations? If so, give an algorithm to decide the equivalence quickly.
--
If the size of the computation is much smaller than the size of the array, we can do a reachability analysis and make the calculation fast.
--
Given: a single d-dimensional vector and two loops of dimensions n1 and n2.
--
What if we annotate a point in the data dependence graph by the function at that point?
Do it for 2-dimensional and then 3-dimensional arrays.
--
Given: two polytopes, P1 and P2, of n dimensions, and one polytope P3 of 2n dimensions (which we will call the tile polytope: given a tile size, we have a polytope).
The input to each polytope is the same facet. The output of the 2n-dimensional polytope.
--
Given: two alpha programs with the following characteristics
---------
Program1:
A parametrized n-dimensional polyhedral domain.
A statement defined in all the points of the domain.
A monodimensional time schedule, which is valid.
A memory mapping
---------
Program2:
A parametrized 2n-dimensional Z-polyhedral domain. The domain can be expressed as an affine transformation of the original domain.
A collection of statements defined in all the points of the domain
A mono-dimensional schedule, which is valid (define the schedule function)
A memory mapping, which is an affine function of the one in program1.
-------------
Verify that the two programs do the same computation. Can it be done? Can it be done in a fast way?
So the statements and variables in Program2 are of two types
1. ones that correspond to variables in program1.
2. ones that are additional variables (the buffers).
Consider the following situation. If we remove all the variables of type (2) from Program2, then we are left with a memory mapping that is "similar" to the one in Program1 and, of course, a schedule similar to the one in Program1. Define similarity.
--
Simple question: prove that the transformation is right.
Easy first step: give a transformation from the original n-d space to the transformed 2n-d space, and also its reverse.
Let the orig space be I_n. Let the tile sizes be S. The 2n-d space is given by T .................
--
What are the representations of a Z-polyhedron?
Tuesday, July 26, 2005
hardness of the URE/SURE scheduling
The input to a SURE scheduling problem is an (n+d)*m matrix, and we are to find the minimum running time of the system (with a reasonable agreement on what this statement means).
n has to be >= 1.
If n = 1, we have a URE. The scheduling problem for n = 1 is as hard as an LP problem of size d*m.
When n > 1, the problem is harder.
OTOH, d has to be >= 0.
If d = 0, we have a digraph and the scheduling problem is a DFS!
If d = 1, we have a digraph whose weights are scalars. This means that the scheduling problem is as hard as APSP.
For d > 1, the problem becomes harder.
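The d = 0 case above can be made concrete: scheduling a plain digraph (assumed acyclic here) amounts to a DFS computing the longest chain of vertices, which is the minimum number of sequential steps. A minimal sketch with my own function names:

```python
import sys

def schedule_length(n, adj):
    # Minimum number of sequential steps for a dependence DAG:
    # the number of vertices on a longest path, found by memoized DFS.
    # adj[u] lists the successors of u; the graph must be acyclic,
    # otherwise the recursion below would not terminate.
    sys.setrecursionlimit(max(1000, 2 * n + 100))
    memo = {}
    def chain_from(u):
        # Longest chain (counted in vertices) starting at u.
        if u not in memo:
            memo[u] = 1 + max((chain_from(v) for v in adj[u]), default=0)
        return memo[u]
    return max(chain_from(u) for u in range(n))
```

For the chain 0 -> 1 -> 2 with a shortcut 0 -> 2, the schedule length is 3: the shortcut does not help, since 1 must still run between 0 and 2.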
Sunday, July 24, 2005
A data structural approach to detect cycles
Refer to the previous post about the construction of the hypergraph with ordinary nodes and hypernodes.
The ordinary nodes correspond to the number of times an edge has been selected in the PRDG. A hyperedge goes from a subset of (ordinary) nodes to a subset of nodes. Either the head of the edge or the tail of the edge can be an empty subset.
Now define the arity of a hyperedge as the sum of its indegree and outdegree.
Some simple observations:
for a hypernode v with an edge from S to T, if indegree(v) = outdegree(v) = 1 (or card(S) = card(T) = 1), then we can "club" the two singleton sets into another singleton set. The hyperedges which begin or end at the vertices of S and T can be made to correspondingly begin or end at the new vertex, which can be regarded as its own singleton set.
The hypernodes can be stored in a priority queue, which supports the following operations:
deletion of the hypernode with the minimal arity
Also, a hypernode v such that indegree(v) = 0 or outdegree(v) = 0 can be removed from the queue. This leads to a pruning of the graph:
removal of the hypernode from the priority queue (this hypernode may not be at the top of the priority queue), and
pruning of the hyperedges which begin or end at these vertices.
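Since a binary heap does not directly support deleting an element that is not at the top, one standard way to realize these two queue operations is lazy deletion: mark the removed hypernode and discard it whenever it surfaces. A hedged sketch using Python's `heapq` (class and method names are mine; arity updates are not handled):

```python
import heapq

class ArityQueue:
    # Min-priority queue keyed on hypernode arity, with lazy deletion:
    # removed nodes are only marked dead, and dead entries are skipped
    # when they reach the top of the heap.
    def __init__(self):
        self.heap = []
        self.dead = set()

    def push(self, node, arity):
        heapq.heappush(self.heap, (arity, node))

    def remove(self, node):
        # Arbitrary removal (the node need not be at the top):
        # just mark it; the heap entry is discarded lazily.
        self.dead.add(node)

    def pop_min(self):
        # Delete and return the live hypernode of minimal arity.
        while self.heap:
            arity, node = heapq.heappop(self.heap)
            if node not in self.dead:
                return node, arity
            self.dead.discard(node)
        raise IndexError("empty queue")
```

Each entry is pushed and popped at most once, so both operations stay O(lg n) amortized.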
--
TO DO: try to see some examples of the particular linear programs of unitary form.
Solve the dual linear programs.
Look at the original and dual graphs.
Thursday, July 21, 2005
transitive orientations and separating hyperplanes
Given the unitarized block matrix with (n+d) rows and m columns, we can draw a hypergraph G with (n+d) nodes and m edges in the following way:
(How can you use Euler's theorem in these proofs?)
The PRDG has no self loops. They can be eliminated in a simple way.
---------------------------------------------------------
====================
Alg1: Algorithm to construct G
====================
For every row of the block matrix, there is a node in the hypergraph. /* There are n+d nodes in total in G */
For each column of A, do the following:
If a row has a +1 in this column, we make the corresponding node a tail of the column's hyperedge. If a row has a -1, we make the corresponding node a head. (A 0 does nothing. By a restriction on the input PRDG, a 0 in the incidence matrix of the input PRDG corresponds to nothing.)
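A minimal sketch of Alg1, assuming the block matrix is given as a list of rows with entries in {-1, 0, +1} (the function name is mine):

```python
def build_hypergraph(A):
    # A: (n+d) x m block matrix with entries in {-1, 0, +1}.
    # One node per row, one hyperedge per column: in each column,
    # rows holding +1 form the tail of that column's hyperedge and
    # rows holding -1 form the head; a 0 contributes nothing.
    rows, cols = len(A), len(A[0])
    hyperedges = []
    for j in range(cols):
        tail = [i for i in range(rows) if A[i][j] == +1]
        head = [i for i in range(rows) if A[i][j] == -1]
        hyperedges.append((tail, head))
    return hyperedges
```

For a 3 x 2 block matrix with columns (+1, -1, 0) and (0, +1, -1), this yields the two hyperedges ({0} -> {1}) and ({1} -> {2}).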
---------------------------------------------------------
We can construct another graph G' in the following way:
---------------------------------------------------------
====================
Algorithm to construct hypergraph G'
====================
Notation:
The input PRDG is unitarized by step A_1 of the algorithm (given above). So in the rest of the paper, we refer to the unitarized PRDG G_u simply as a PRDG. We also refer to it as a graph when the context is clear (in such instances, we refer to the other graph as a hypergraph). We also refer to the PRDG in terms of its block matrix. The block matrix is an (n+d)*m matrix, with the columns corresponding to its edges. The top n rows form the incidence matrix of the PRDG, which is just a directed graph. The bottom d rows of the block matrix contain the d-dimensional dependence vector associated with the corresponding edge.
G': A hypergraph constructed from the PRDG. Specifically, it has
a node for each column of the block matrix A (or, a node for each edge of the PRDG)
a hyperedge for each row of the matrix A. This means a hyperedge for each vertex of the PRDG and a hyperedge for each component of the dependence vector. The tail of the hyperedge consists of all the columns of A that have an entry of -1 in that row; the head consists of all the columns that have an entry of +1.
one hypernode per hyperedge of the hypergraph G': these are the additional nodes that come up in the analysis. Refer to section ... for additional notes on the types of hypernodes.
---------------------------------------------------------
Lemma 1: G' = COMP(UNDIRECT(G)). That is, G' is the graph obtained by complementing the underlying undirected graph of G (see proof below).
Lemma 2: In graph G, all the hypernodes are either sources (all edges outgoing) or sinks (all edges incoming).
Proof (by construction):
Corollary: in graph G', for each hypernode v, indegree(v) = outdegree(v).
Theorem: The given system of equations has a solution iff the orientation of G is a transitive orientation.
Proof(if): The orientation of G is a transitive orientation
Corollary: The given system of equations has a zero cycle iff the orientation of G' is a transitive orientation.
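The transitivity condition in the theorem is directly checkable: an orientation is transitive iff a->b and b->c together imply a->c. A naive sketch (my own names; it assumes an orientation of a simple graph, i.e. no self-loops or 2-cycles):

```python
def is_transitive_orientation(n, edges):
    # edges: directed pairs (a, b) giving the orientation.
    # Transitive orientation: whenever a->b and b->c exist, a->c must too.
    E = set(edges)
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
    return all((a, c) in E
               for a, b in edges   # for every oriented edge a->b ...
               for c in adj[b])    # ... and every successor c of b
```

The triangle 0->1, 1->2, 0->2 is transitively oriented; the bare path 0->1, 1->2 is not, since 0->2 is missing.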
--
What is a separating hyperplane? How to do decomposition?
--
The given PRDG has at least one alternating sign in every row and column.
--
Dependence analysis of unitarized PRDGs