Randomness, Computability and Logic

Is 1010101010101010 the result of tossing a coin and writing 1 for each head and 0 for each tail? And what about this one: 1001101111001101? One doubts the first but trusts the second. Although both strings are equally probable (because they have the same length), one feels that the second is more random than the first. The theory of algorithmic randomness gives several precise mathematical definitions of what a random sequence is, using tools from computability theory. But what about our own perception of randomness?
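The intuition behind this asymmetry can be made concrete with compression: a string's compressed length is a crude, practical stand-in for its Kolmogorov complexity, since a highly regular string admits a short description while an irregular one does not. The following Python sketch (not part of the original text; the 16-bit examples above are too short for this, so longer strings of the same flavor are used) compares a periodic string with a pseudo-random one of equal length:

```python
import random
import zlib

# A long version of the periodic pattern 101010...
regular = "10" * 1000

# A pseudo-random 0/1 string of the same length (fixed seed for reproducibility).
random.seed(0)
irregular = "".join(random.choice("01") for _ in range(2000))

# Compressed length as a rough proxy for descriptive (Kolmogorov) complexity.
len_regular = len(zlib.compress(regular.encode()))
len_irregular = len(zlib.compress(irregular.encode()))

print(len_regular, len_irregular)
```

The periodic string compresses to a few dozen bytes, while the pseudo-random one stays close to its information-theoretic size: same length, same probability under fair coin tosses, yet very different complexity.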

We carry out research that aims to understand how our brain classifies randomness and to what extent it can generate random strings. Both to distinguish a more random string from a less random one and to generate random strings, the individual presumably uses some kind of mental algorithm. We expect to find and calibrate a computational model capable of explaining these cognitive tasks. Our hypothesis is that such a model can be grounded in concepts and ideas from algorithmic information theory, an area developed by Kolmogorov, Solomonoff and Chaitin in the late 1960s.
For theoretical research on algorithmic randomness, computability theory, Kolmogorov complexity, algorithmic information theory, model theory, modal logics, see www.glyc.dc.uba.ar.