Just from experience and knowledge of graph theory ...
Dijkstra's algorithm: Computes shortest paths in a graph with non-negative edge weights.
Floyd-Warshall: Solves the all-pairs shortest-path problem in a weighted, directed graph.
Prim: Finds a minimum spanning tree for a graph.
Kruskal: Finds a minimum spanning tree for a graph.
Boruvka: Finds a minimum spanning tree for a graph.
Johnson: All-pairs shortest-path algorithm for sparse, weighted, directed graphs.
Ford-Fulkerson: Computes the maximum flow in a graph.
Edmonds-Karp: An implementation of Ford-Fulkerson that uses BFS to find augmenting paths.
Bellman-Ford: Computes shortest paths in a weighted graph (where some of the edge weights may be negative).
Hungarian: Algorithm for finding a minimum-weight perfect matching in a bipartite graph (the assignment problem).
Coloring algorithm: Assigns colors to the vertices of a graph so that no two adjacent vertices share a color.
Topological sort: Sort a directed acyclic graph in such a manner that each node comes before all nodes to which it has edges (according to directions).
Nearest neighbour: A greedy heuristic for the travelling salesman problem: repeatedly visit the closest unvisited city.
Tarjan's off-line least common ancestors algorithm: Compute lowest common ancestors for pairs of nodes in a tree.
For more graph theory algorithms, please visit this.
Dijkstra's Algorithm Runtime: O((E + V) log V) with a binary-heap priority queue.
Sunday, April 25, 2010
Saturday, April 24, 2010
Happy 20th Anniversary Hubble!
Twenty years ago today, the NASA space shuttle Discovery launched from Florida carrying what would become one of the most iconic instruments in astronomy: the Hubble space telescope.
Whirling around Earth at 17,500 miles (28,163 kilometers) an hour, the Hubble Space Telescope has captured some of the most detailed pictures yet of space objects and activities.
"Hubble has done what maybe no other scientific experiment before it had done," - astrophysicist Mario Livio.
To commemorate Hubble's 20th anniversary, NASA has released Hubble: A Journey Through Space and Time, a book of images throughout its history.
The latest pic of the Carina Nebula was spectacular:
Here are some additional jaw-dropping, high-resolution images from the past 20 years that, in my opinion, have changed our view of the cosmos:
Astronaut F. Story Musgrave hangs above the world from the robotic arm of the space shuttle Endeavour as he prepares to place protective covers on the Hubble telescope's magnetometers during the space telescope's first servicing mission in 1993.
The multihued clouds of the Orion Nebula swirl before a starry background.
A sunset bathes the space shuttle Discovery in a rosy glow during the second mission to service the Hubble telescope in February 1997.
Hubble revealed a central, dying star blowing out gases and material at high speed. The doomed stellar "face" is surrounded by a disk strewn with cometlike objects, which leave behind tails as they streak away from the sunlike star, in the so-called Eskimo Nebula - which sits about 5,000 light-years away.
The Hubble Deep Field image - The reason why String theory exists.
I think this says a lot about our Imaginative Universe.
Also, happy birthday YouTube! Its first video ever turns five today!
Now I'm off to the University of Toronto for the International Day of Astronomy!
~Cheers.
Thursday, April 22, 2010
CCC 2009 Wireless
The Canadian Computing Competition is a prestigious programming competition for high-school students with considerable programming experience who are willing to take on a challenge.
For over a year now, I have been struggling to solve the last problem from the 2009 senior section, Wireless. The problem was not that my solution produced incorrect outputs, but that the algorithm was not efficient enough to pass the SPOJ online judging system, which has a rather strict time limit.
My original solution, straightforward 2D array processing with O(N log N) complexity, failed to pass the judge as it exceeded the time limit. Mind you, it was implemented in Java, which I now realize is a poor choice for online competitions. Perhaps another time I will post my thoughts on Java vs. C/C++.
Anyways, after several failed attempts, I decided to switch to C++, and while talking with one of my friends, I came up with a rather odd sort-and-count solution that at the time I didn't think would be efficient enough.
Here is an outline of how it works:
On every row, each wireless access point covers a (possibly empty) range of columns.
This is translated into a set of commands for that row, each consisting of a column number and a change in bitrate, with every wireless access point contributing two commands:
An increase in rate at the start of the range, and a similar decrease at the end of the range. By combining the commands and considering them in order of column number, the cumulative bitrates can be calculated over the range of the whole row.
NOTE: With N rows and K wireless access points, the complexity of this implementation is O(N·K·log K), the K·log K factor arising from the fact that a simple quicksort is used to sort the commands, plus the fact that a binary search is used to find the ranges within each row (via Pythagoras). Better algorithms may be possible.
Once I coded this, to my surprise, it finally got accepted!
Time: 11.25 sec
MEM: 2.6 M
Now I have to prepare for the upcoming regional ECOO programming contest held at York University. Usually the problems are fairly trivial, so I'm not worried at all.
My SPOJ account
2009 CCC Wireless Solution C++
Dangerous Knowledge
In this one-off documentary, David Malone looks at four brilliant mathematicians - Georg Cantor, Ludwig Boltzmann, Kurt Gödel and Alan Turing - whose genius has profoundly affected us, but which tragically drove them to insanity or suicide.
The film begins with Georg Cantor, the great mathematician whose work proved to be the foundation for much of 20th-century mathematics. He believed he was God's messenger and was eventually driven insane trying to prove his theories of infinity.
Ludwig Boltzmann's struggle to prove the existence of atoms and probability eventually drove him to suicide.
Kurt Gödel, the introverted confidant of Einstein, proved that there would always be problems which were outside human logic. His life ended in a sanatorium where he starved himself to death.
Finally, Alan Turing, the great Bletchley Park code breaker, father of computer science and homosexual, died trying to prove that some things are fundamentally unprovable.
The film also talks to the latest in the line of thinkers who have continued to pursue the question of whether there are things that mathematics and the human mind cannot know. They include Greg Chaitin, mathematician at the IBM TJ Watson Research Center, New York, and Roger Penrose.
Dangerous Knowledge tackles some of the profound questions about the true nature of reality that mathematical thinkers are still trying to answer today.
Part 1
Part 2
Part 3
Part 4
Part 5
Part 6
Part 7
Part 8
Part 9
Part 10
Enjoy!
Sunday, April 18, 2010
Quantum Philosophy
Quantum Mechanics
Let us consider briefly the two aspects of this problem. As always, one is the philosophical implication for physics, and the other is the extrapolation of philosophical matters to other fields. When philosophical ideas associated with science in general are dragged into another field, they are usually completely distorted. Therefore we shall confine our remarks as much as possible to physics itself.
First of all, the most interesting aspect is the idea of the uncertainty principle: making an observation affects a phenomenon. It has always been known that making observations affects a phenomenon, but the point is that the effect cannot be disregarded or minimized or decreased arbitrarily by rearranging the apparatus. When we look for a certain phenomenon we cannot help but disturb it in a certain minimum way, and the disturbance is necessary for the consistency of the viewpoint. The observer was sometimes important in prequantum physics, but only in a trivial sense. However, there is no doubt that a problem has been raised: if a tree falls in a forest and there is nobody there to hear it, does it make a noise? A real tree falling in a real forest makes a sound, of course, even if nobody is there. Even if nobody is present to hear it, there are other traces left. The sound will shake some leaves, and if we were careful enough we might find somewhere that some thorn had rubbed against a leaf and made a tiny scratch that could not be explained unless we assumed the leaf were vibrating. So in a certain sense, I guess, we would have to admit that there is a sound made. We might ask a question like: was there a sensation of sound? No, sensations have to do, presumably, with consciousness. And whether ants are conscious and whether there were ants in the forest, or whether the tree was conscious, we do not know. Let us just leave that problem in that form.
Another thing that people have emphasized since quantum mechanics was developed is the idea that we should not speak about those things which we cannot measure. Unless a thing can be defined by measurement, it has no place in a theory - simple as that. And since an accurate value of the momentum of a localized particle cannot be defined by measurement, it therefore has no place in the theory. The idea that this is what was the matter with classical theory is a false position. It is a careless analysis of the situation. Just because we cannot measure position and momentum precisely does not a priori mean that we cannot talk about them. It only means that we need not talk about them. The situation in the sciences is this: a concept or an idea which cannot be measured or cannot be referred directly to experiment may or may not be useful. It need not exist in a theory. In other words, suppose we compare the classical theory of the world with the quantum theory of the world, and suppose that it is true experimentally that we can measure position and momentum only imprecisely. The question is whether the ideas of the exact position of a particle and the exact momentum of a particle are valid or not. The classical theory admits the ideas; the quantum theory does not. This does not mean that classical physics is wrong. When the new quantum mechanics was discovered, the classical people - which by the way included everyone except Heisenberg, Schrödinger, and Born - said something like, "Look, your theory is not any good because you cannot answer certain questions like: what is the exact position of a particle? Which slit does it go through?" etc. And Heisenberg's answer would have been something like, "I do not need to answer such questions because you cannot ask such a question experimentally." The point is that we do not have to. Consider two theories, a and b.
a contains an idea that cannot be checked directly but which is used in the analysis, and the other, b, does not contain the idea. If they disagree in their predictions, one could not claim that b is false because it cannot explain this idea that is in a, because that idea is one of the things that cannot be checked directly. It is always good to know (in my opinion) which ideas cannot be checked directly, but it is not necessary to remove them all. It is not true that we can pursue science completely by using only those concepts which are directly subject to experiment.
In quantum mechanics itself there is a probability amplitude, there is a potential, and there are many constructs that we cannot measure directly. The basis of science is its ability to predict. To predict means to tell what will happen in an experiment that has never been done. How can we do that? By assuming that we know what is there, independent of the experiment. We must extrapolate the experiments to a region where they have not yet been checked. If we do not do that, we have no prediction. So it was perfectly sensible for the classical physicists to go happily along and suppose that the position - which obviously means something for a baseball - meant something for an electron. It was a sensible procedure. Today we say that the law of relativity is supposed to be true at all energies, but someday somebody may come along and say how stupid we were. We do not know where we are "stupid" until we "stick our neck out," and so the whole idea is to put our neck out. And the only way to find out that we are wrong is to find out what our predictions are. It is absolutely necessary to make constructs.
We have already made a few remarks about the indeterminacy of quantum mechanics. That is, we are unable now to predict what will happen in physics in a given physical circumstance which is arranged as carefully as possible. If we have an atom that is in an excited state and so is going to emit a photon, we cannot say when it will emit the photon. It has a certain amplitude to emit the photon at any time, and we can predict only a probability for emission; we cannot predict the future exactly. This has given rise to all kinds of nonsense and questions about the meaning of freedom of will, and of the idea that the world is uncertain.
Of course we must emphasize that classical physics is also indeterminate, in a sense. It's usually thought that this indeterminacy, that we cannot predict the future, is an important quantum-mechanical thing, and this is said to explain the behaviour of the mind, feelings of free will, etc. But if the world were classical - if the laws of mechanics were classical - it is not quite obvious that the mind would not feel more or less the same. It is true classically that if we knew the position and the velocity of every particle in the world, or in a box of gas, we could predict exactly what would happen. Therefore the classical world is deterministic. Suppose, however, that we have a finite accuracy and do not know exactly where just one atom is, say to one part in a billion. Then as it goes along it hits another atom, and because we did not know the position better than one part in a billion, we find an even larger error in the position after the collision. If we start with only a tiny error it rapidly magnifies to a very great uncertainty. To give an example: if water falls over a dam, it obviously splashes. If we stand nearby, every now and then a drop will land on our nose. This appears to be completely random, yet such behaviour would be predicted by purely classical laws! The exact position of all the drops depends upon the precise wigglings of the water before it goes over the dam. How? Well, the tiniest irregularities are magnified in falling, so that we get complete randomness. Obviously, we cannot really predict the position of the drops unless we know the motion of the water absolutely exactly.
Given an arbitrary accuracy, no matter how precise, one can find a time long enough that we cannot make predictions valid for that long a time. Now the point is that this length of time is not very large. It is not that the time is millions of years if the accuracy is one part in a billion. The time goes, in fact, only logarithmically with the error, and it turns out that in only a very, very tiny time we lose all our information. If the accuracy is taken to be one part in billions and billions - no matter how many billions we wish, provided we do stop somewhere - then we can find a time less than the time it took to state the accuracy - after which we can no longer predict what is going to happen! It is therefore not fair to say that from the apparent freedom and indeterminacy of the human mind, we should have realized that classical "deterministic" physics could not ever hope to understand it, and to welcome quantum mechanics as a release from a completely mechanistic universe. For already in classical mechanics there was indeterminability from a practical point of view.
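The "logarithmic" claim above can be written out explicitly. This formalization is my own addition, using the standard exponential-divergence model from chaos theory rather than anything in the passage itself: if a small initial error $\delta_0$ grows at some rate $\lambda$, and predictions break down once the error reaches a tolerance $L$, then

```latex
\delta(t) \approx \delta_0 \, e^{\lambda t}
\qquad\Longrightarrow\qquad
t_{\text{predict}} \approx \frac{1}{\lambda}\,\ln\frac{L}{\delta_0}
```

so improving the initial accuracy from one part in $10^9$ to one part in $10^{18}$ merely doubles the prediction horizon, which is exactly the behaviour described above.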
~ Food for thought
Hello World!