15-18 Jun 2015 Heidelberg (Germany)

Abstracts

"Algorithmic statistics and useful information"

Nikolay K. Vereshchagin, Moscow State University

Algorithmic statistics is an approach where the quality of a statistical hypothesis is measured by its Kolmogorov complexity. Using this approach, one may try to divide the information present in the observed data into "useful information" and "noise".
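
To recall the standard setup in the usual notation, with $K$ denoting Kolmogorov complexity: a finite set $S \ni x$ is called a sufficient statistic for a string $x$ if

\[
K(S) + \log_2 |S| \approx K(x),
\]

in which case $K(S)$ quantifies the useful information in $x$ and $\log_2 |S|$ the noise.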

 

 

"Kolmogorov structure functions for automatic complexity"

Bjorn Kjos-Hanssen, University of Hawaii

We study an analogue of Kolmogorov's notion of structure function, introduced in 1973, with Kolmogorov complexity replaced by Shallit and Wang's (2001) automatic complexity. We discuss the prospects for using it for model selection in statistics. We prove an upper bound which is piecewise smooth, related to the binary entropy function, and appears to be fairly sharp based on numerical evidence.
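
For orientation, in the classical setting Kolmogorov's structure function of a string $x$ is

\[
h_x(\alpha) = \min\{\log_2 |S| \,:\, x \in S,\ K(S) \le \alpha\},
\]

and the binary entropy function mentioned above is $H(p) = -p \log_2 p - (1-p)\log_2(1-p)$; the analogue studied in this talk is obtained by replacing $K$ with Shallit and Wang's automatic complexity.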

 

 

"Generic, Coarse, Cofinite, and Mod-Finite Reducibilities"

Gregory Igusa, University of Notre Dame

In 2012, Jockusch and Schupp introduced generic computability and coarse computability. These are two computability-theoretic notions designed to capture the idea that, for real-world applications of complexity theory, the worst-case complexity of a problem (exponential time, polynomial time, etc.) is not actually as important as its general-case complexity: if an algorithm occasionally takes exponential time, but usually runs very quickly, then for many purposes it is still a very good algorithm.

A generic computation of a real is a computation that does not always halt, but halts on a set of asymptotic density 1, and is correct wherever it halts. A coarse computation is a total computation that is not always correct, but is correct on a set of asymptotic density 1. Offhand, it might appear that a generically computable real should also be coarsely computable, but in fact it turns out that there are both generically computable reals that are not coarsely computable, and coarsely computable reals that are not generically computable. It is, however, true that by many measures of closeness, generically computable reals can be shown to be much closer to being coarsely computable than coarsely computable reals are to being generically computable.
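
In symbols, writing $\rho(A) = \lim_{n\to\infty} |A \cap \{0,\ldots,n-1\}|/n$ for the asymptotic density of a set $A \subseteq \mathbb{N}$ (when this limit exists): a generic computation of $X$ is a partial computable function $\varphi$ such that $\rho(\operatorname{dom}\varphi) = 1$ and $\varphi(n) = X(n)$ whenever $\varphi(n)$ halts, while a coarse computation of $X$ is a total computable function $f$ such that $\rho(\{n : f(n) = X(n)\}) = 1$.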

When investigating the degree structures for generic and coarse computability, our oracles, much like our computations, are only required to be correct most of the time. In generic reducibility, our oracles do not necessarily give answers everywhere, and in coarse reducibility, our oracles occasionally make mistakes. Both of these reducibilities can be defined either uniformly or nonuniformly, depending on whether we require that a single Turing functional work for all generic/coarse oracles, or whether we are allowed to have our functional change depending on the specific oracle we are given. The Turing degrees embed into each of the four resulting degree structures (uniform and nonuniform generic reducibility, uniform and nonuniform coarse reducibility), and no two of these structures are equivalent.

In recent work, Dzhafarov and Igusa introduce "cofinite reducibility" and "mod-finite reducibility", in part to remove the baggage of asymptotic density, and to directly investigate the issues of uniformity and of partial information vs. flawed information. In cofinite reducibility, oracles and computations halt on a cofinite domain, while in mod-finite reducibility, oracles and computations are correct except at finitely many places. Nonuniformly, both of these reducibilities are equivalent to Turing reducibility, but the uniformity requirement makes both of them strictly stronger than Turing reducibility.

The cofinite degrees embed into the uniform generic degrees, and the mod-finite degrees embed into the uniform coarse degrees. Also, unlike with coarse and generic reducibility, a mod-finite reduction implies a cofinite reduction. The end result of these interactions is that results in the cofinite degrees can reflect into the mod-finite, generic, and coarse degrees. We illustrate these interactions by presenting some recent research of Cholak and Igusa concerning the relationship between randomness and uniform generic reduction, and by tying it to recent research of Hirschfeldt, Jockusch, Kuyper, and Schupp concerning the relationship between randomness and nonuniform coarse reduction.

 

 

"Exact constructive dimension"

Ludwig Staiger, Martin-Luther-Universität, Halle-Wittenberg, Institut für Informatik

The present paper generalises results on constructive dimension to the case of exact constructive dimension. The constructive dimension of (sets of) infinite strings was initially defined by Lutz using a variant of semi-computable (super-)martingales, so-called $s$-(super-)gales, and it assigns to every string or set of strings a real number.

It turns out that constructive dimension can likewise be defined via the lower asymptotic Kolmogorov complexity of infinite strings.
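
In symbols, for an infinite string $\xi$ the constructive dimension satisfies

\[
\operatorname{cdim}(\xi) = \liminf_{n\to\infty} \frac{K(\xi \upharpoonright n)}{n},
\]

where $K(\xi \upharpoonright n)$ is the Kolmogorov complexity of the length-$n$ prefix of $\xi$ (plain and prefix-free complexity give the same value here).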

A major achievement of this approach is Lutz's martingale characterisation of (classical) Hausdorff dimension, which draws a connection between Algorithmic Information Theory and fractal geometry.

Usually, the Hausdorff dimension of a set of infinite strings is a real number which in some sense measures how fractal the set is. In Hausdorff's original paper, however, the dimension is a real function, often referred to as a dimension function or gauge function. It is now common to refer to the original version as exact Hausdorff dimension.

As the exact Hausdorff dimension of a subset is a real function rather than a single number, it allows for a much finer differentiation between fractal subsets of the space of infinite strings than classical Hausdorff dimension does.

In this paper we derive several results which generalise the constructive dimension of (sets of) infinite strings to the case of exact dimension. We start with proving a martingale characterisation of exact Hausdorff dimension. Then using semi-computable super-martingales we introduce the notion of exact constructive dimension of (sets of) infinite strings. This allows us to derive several bounds on the complexity functions of infinite strings, that is, functions assigning to every finite prefix its Kolmogorov complexity.
In particular, it is shown that the exact Hausdorff dimension of a set of infinite strings lower bounds the maximum complexity function of the strings in this set. Furthermore, we show a result bounding the exact Hausdorff dimension of a set of strings whose complexity functions are bounded above by a certain computable function.

Obviously, the Hausdorff dimension of a set of strings alone, without any computability constraints, cannot yield upper bounds on the complexity of strings in the set. If we require, however, the set of strings to be $\Sigma_{2}$-definable, several results upper bounding the complexity by the exact Hausdorff dimension hold true. First we prove that for a $\Sigma_{2}$-definable set with computable dimension function one can construct a computable (not only semi-computable) martingale succeeding on this set. Then, using this result, a tight upper bound on the prefix complexity function for all strings in the set is obtained. Finally, it is shown that a similar bound for $\Pi_{2}$-definable sets cannot be obtained.

 

 

"Algorithmically Random Functions and Effective Capacities"

Doug Cenzer, University of Florida

We continue the investigation of algorithmically random functions and closed sets, and in particular the connection with the notion of capacity. We study notions of random continuous functions given in terms of a family of computable measures called symmetric Bernoulli measures. We isolate one particular class of random functions that we refer to as online random functions $F$, where the value of $y(n)$ for $y = F(x)$ may be computed from the values of $x(0), \ldots, x(n)$. We show that random online functions are neither onto nor one-to-one. We give a necessary condition on the members of the ranges of online random functions in terms of initial segment complexity and the associated computable capacity. Lastly, we introduce the notion of an online partial Martin-Löf random function on Cantor space and give a family of online partial random functions whose ranges are precisely the standard random closed sets introduced by Barmpalias, Cenzer, et al. in 2007. This is joint work with Chris Porter.
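
One natural way to make the online condition precise (a formalisation we supply for concreteness): $F$ is induced by a map $f : 2^{<\omega} \to \{0,1\}$ via

\[
F(x)(n) = f(x(0) x(1) \cdots x(n)),
\]

so that each output bit depends only on the input bits read so far.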

 

 

"Randomness as a type of warranty"

Alexander Shen, CNRS & Université Montpellier 2


When an organization publishes a table of random numbers, what does it mean? Can a reader complain if all the bits in the table are zeros, even taking into account that the all-zero sequence has exactly the same probability as every other one? There have been several attempts to define an individual random object, but the game interpretation of probability theory (Vovk and Shafer) suggests a different approach: randomness is not a property of the object but a type of warranty with which the object is provided. We discuss this approach and its connections with algorithmic information theory.

 

 

"Degrees and difficulty"

Walter Dean, Warwick University

We explore the concept of mathematical problem and the attendant notions of reducibility and degree of difficulty in light of the practice of contemporary computability and complexity theory. I will begin by tracing the former notion back to Kolmogorov's problem interpretation of intuitionistic logic. I will then ask whether various mathematical definitions of the second notion (e.g. Turing, one-one, many-one, truth table, Cook, and Karp reducibility) can be understood as analyzing a stable pre-theoretical notion of reduction in something like the manner of Church's Thesis. I will then consider the extent to which the assignment of problems to degrees induced by these definitions can be understood as measuring a salient pre-theoretical sense of mathematical difficulty. Finally, I will consider the prospects for formulating and proving representation and uniqueness theorems relating our judgements of absolute and relative difficulty to various degree structures.

 

 

"The Randomness of Empirical Data, the Simplicity of Hypotheses, and Extreme Priors"

Jeff Schatz, UC Irvine

Contemporary computability theory has been applied to the philosophical analysis of scientific methodology in two distinct ways. First, Kevin Kelly proposes using tools from mathematical topology and effective descriptive set theory to evaluate scientific methods; representing data as infinite strings of numbers and scientific methods as total functions from data to conjectures, Kelly's project attempts to determine the extent to which methods can be guaranteed to converge to truth. Second, James W. McAllister posits that observed data is causally determined by the combination of the underlying explanatory phenomenon and random noise; the result of these factors is a random string of data, a maximally efficient source of information about the world. Though these two projects have separately been applied to the philosophy of science quite fruitfully, there has been little work on combining these two approaches into a unified model of empirical hypothesis testing.

In this talk, I will present a hitherto unaddressed incompatibility between these approaches: if data is random in the sense of Martin-Löf, then no refutable, highly specific hypothesis can be true. While it might be expected that random data could not validate extremely simple refutable hypotheses, this result seems too strong to hold for all such hypotheses. As a result, I will explore ways of altering the two approaches in order to avoid this undesirable result. In particular, I will focus on using alternative definitions of algorithmic randomness for empirical data and on changing the notion of probability underlying the model of inquiry. By exploring these ways of satisfactorily combining the two applications of computability to the philosophy of science, a fuller understanding of the implications of random data for hypothesis testing can be achieved.

 

 

"Mutual information and the independence postulate"

Christopher Porter, University of Florida


In this talk, I will discuss Levin's argument for closing what he refers to as "Gödel's loophole," which is roughly the possibility that a consistent completion of Peano arithmetic might be produced by some non-mechanical means. A key step in Levin's argument involves what he calls the "independence postulate," which says that for every mathematically definable sequence X and every physically obtainable sequence Y, X and Y have finite mutual information. After reconstructing Levin's argument, I will raise some concerns about the independence postulate. I will conclude by discussing what I take to be the significance of Levin's argument, namely, the identification of an instance of a more general phenomenon involving deep $\Pi^0_1$ classes (a notion developed jointly with Laurent Bienvenu).
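
For finite strings, the mutual information in question is, up to logarithmic precision,

\[
I(x : y) = K(x) + K(y) - K(x, y),
\]

where $K$ is prefix-free Kolmogorov complexity; the independence postulate is stated using Levin's extension of this quantity to infinite sequences.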

 

 

"Effective Prime Uniqueness"

Peter Cholak, University of Notre Dame

Assuming the obvious effectivizations of the classical definitions, we show that a decidable model that is effectively prime is also effectively atomic. This implies that any two effectively prime (decidable) models are computably isomorphic. This is in contrast to the theorem that there are two atomic decidable models which are not computably isomorphic. [Joint work with Charlie McCoy]
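
One plausible reading of these notions (our gloss; the precise definitions are those of Cholak and McCoy): a decidable model $\mathcal{A}$ is effectively atomic if there is a computable function assigning to each tuple from $\mathcal{A}$ a complete formula generating its type, and effectively prime if elementary embeddings of $\mathcal{A}$ into decidable models of $\mathrm{Th}(\mathcal{A})$ can be found computably and uniformly.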

 

 

"The relevance of algebraic properties for the classification of uniform computational content"

Vasco Brattka, Universität der Bundeswehr München

I plan to report on some recent results regarding the classification of uniform computational content in the Weihrauch lattice. I will highlight how such results can be expressed using the algebraic structure of the lattice and how such characterizations can lead to useful lower and upper bound results. Another focus will be related to the observation that one can connect different seemingly unrelated areas of the lattice with the help of algebraic results. The examples will be related to Martin-Löf randomness, cohesiveness and Ramsey's Theorem and they are based on several projects with different co-authors. (joint work with Guido Gherardi, Matthew Hendtlass, Rupert Hölzl, Alexander Kreuzer, Arno Pauly, Tahina Rakotoniaina)
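
Recall the reducibility underlying the lattice: for multi-valued functions $f$ and $g$ on represented spaces, $f \le_{\mathrm{W}} g$ holds if there are computable $H$ and $K$ such that

\[
H\langle \mathrm{id}, G K \rangle \ \text{is a realizer of } f \quad \text{whenever } G \text{ is a realizer of } g.
\]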

 

 

 "The role of randomness in reverse mathematics"

Ludovic Patey, Université Paris 7

Many statements in reverse mathematics can be seen as problems, coming with a natural class of instances and their corresponding solutions. For example, an instance of König's lemma is an infinite, finitely branching tree, and a solution is an infinite path through the tree.

A simple way to show that a statement P is not provable in the base theory RCA$_0$ is to exhibit a computable P-instance which admits no computable solution. When such an instance exists, one may wonder whether it admits a probabilistic solution. In this talk, we classify various statements in reverse mathematics according to whether or not their instances admit probabilistic solutions.
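
One natural formalisation, which we adopt for concreteness: a computable instance $I$ admits a probabilistic solution if

\[
\mu(\{X \in 2^{\omega} : X \text{ computes a solution to } I\}) > 0,
\]

where $\mu$ is the uniform measure on Cantor space; since this set is closed under finite modifications of $X$, the Kolmogorov 0-1 law implies that its measure is either $0$ or $1$.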

 

 

"Stochasticity properties of random graphs"

Jan Reimann, Penn State University

The study of (infinite) random graphs has recently been greatly advanced by the development of the theory of graphons. These continuous structures yield countable random graphs "from above", via sampling, as opposed to approaches "from below", such as via Fraïssé limits.
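
Concretely, a graphon is a symmetric measurable function $W : [0,1]^2 \to [0,1]$, and sampling yields a random graph on $\mathbb{N}$ by drawing $x_1, x_2, \ldots$ independently and uniformly from $[0,1]$ and joining $i$ to $j$ with probability $W(x_i, x_j)$, independently over pairs.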

In this talk, I will compare the two approaches, in particular from the perspective of stochasticity, i.e. the stability of properties under taking subsequences.

 

 

"Algorithmic randomness and constructive/computable mathematics"

Jason Rute, Penn State University

I will present a short history of constructive and computable measure theory, focusing on connections with algorithmic randomness. Constructive measure theory originated in a 1919 paper of Brouwer, which developed, among other things, constructive definitions of "null set", "measurable set", "measurable function", and "integrable function". Brouwer's work has been extended, reformed, and rediscovered in a number of settings: Russian constructive mathematics, Bishop-style constructive mathematics, reverse mathematics, computable analysis, categorical logic, and algorithmic randomness. Moreover, many of the fathers of algorithmic randomness---Martin-Löf, Demuth, and Schnorr---were motivated by constructive concerns.

Recently, a number of results have appeared connecting algorithmic randomness and analysis. In many cases, these results can be phrased as theorems in constructive mathematics. Conversely, the constructive theorems of Bishop, Demuth, and others have direct consequences for algorithmic randomness.

This raises the question, which I will answer: "What randomness notion is most associated with constructive mathematics?" (Hint: it is not Martin-Löf randomness!)

 

 

"Universality, optimality, and randomness deficiency"

Paul Shafer, Ghent University

A Martin-Löf test $U$ is universal if it captures all non-Martin-Löf random sequences, and it is optimal if for every ML-test $V$ there is a natural number $c$ such that for every natural number $n$, $V_{n+c} \subseteq U_n$. We study the computational differences between universal and optimal ML-tests, as well as the effects that these differences have on both the notion of layerwise computability and the Weihrauch degree of LAY, the function that produces a bound for a given Martin-Löf random sequence's randomness deficiency. We prove several robustness and idempotence results concerning the Weihrauch degree of LAY, and we show that layerwise computability is more restrictive than Weihrauch reducibility to LAY. [Joint work with Rupert Hölzl]
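
In the standard notation: a Martin-Löf test is a uniformly c.e. sequence $(U_n)_{n\in\mathbb{N}}$ of open sets with $\mu(U_n) \le 2^{-n}$, and optimality of $U$ says

\[
\forall V\ \exists c\ \forall n \quad V_{n+c} \subseteq U_n.
\]

The randomness deficiency of a Martin-Löf random sequence $Z$ with respect to $U$ may then be taken to be the least $n$ with $Z \notin U_n$.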

 

 

"Algorithmic randomness and geometric measure theory" 

Willem Fouché, University of South Africa (UNISA)

In mathematical logic we can imagine and represent, with the aid of subtle combinatorics, recursions, proof theories, and forcing arguments, phenomena of incredible complexity, even within the finite world.

Mathematics is growing to the stage where these ``logical" phenomena become manifest in, and are essential to, other areas of mathematics, for example, in differential geometry (Nabutovsky, Weinberger, in collaboration with Soare), harmonic analysis (Fouché), non-archimedean extremely amenable groups (Blass and Kechris, Pestov, Todorcevic) and Ramsey theory (Paris-Harrington, Harvey Friedman), to mention just a few examples. I shall discuss some recent results in geometric measure theory that were developed from insights from algorithmic randomness, and conversely, I shall discuss certain phenomena in geometric measure theory, which have close links with the theory of algorithmic randomness. For the former, I will discuss new results in harmonic analysis (sets of multiplicity), and for the latter, I will address the problem of identifying non-archimedean amenable groups in terms of the presence of algorithmically random elements (relative to the Glasner-Weiss measure) in the universal minimal flows that are associated to these groups. 
