Abstracts"Algorithmic statistics and useful information" Nikolay K. Vereshchagin, Moscow State University Algorithmic statistics is an approach where the quality of a statistical hypothesis is measured by its Kolmogorov complexity. Using this approach one may try to divide the information present in the observed data into the "useful information" and the "noise".
"Kolmogorov structure functions for automatic complexity" Bjorn Kjos-Hanssen, University of Hawaii We study an analogue of Kolmogorov's notion of structure function, introduced in 1973, with Kolmogorov complexity replaced by Shallit and Wang's (2001) automatic complexity. We discuss the prospects for using it for model selection in statistics. We prove an upper bound which is piecewise smooth, related to the binary entropy function, and appears to be fairly sharp based on numerical evidence.
"Generic, Coarse, Cofinite, and Mod-Finite Reducibilities" Gregory Igusa, University of Notre Dame In 2012, Jockusch and Schupp defined and introduced generic computability, and coarse computability. These are two computability-theoretic notions designed to mimic the idea that in terms of real world applications of complexity theory, the worst-case complexity of a problem (exponential time, polynomial time, etc) is not actually as important as the general-case complexity of a problem -- if an algorithm occasionally takes exponential time, but usually runs very quickly, then for many purposes, it is still a very good algorithm. A generic computation of a real is a computation that does not always halt, but that halts on asympotic density 1, and that is correct wherever it halts. A coarse computation is total computation that is not always correct, but is correct on asymptotic density 1. Offhand, it might appear that a generically computable real should also be coarsely computable, but in fact, out that there are both generically computable reals that are not coarsely computable, and coarsely computable reals that are not generically computable. It is, however, true that by many measures of closeness, generically computable reals can be shown to be much closer to being coarsely computable than coarsely computable reals are to being generically computable. When investigating the degree structures for generic and coarse computability, our oracles, much like our computations, are only required to be correct most of the time. In generic reducibility, our oracles do not necessarily give answers everywhere, and in coarse reducibility, our oracles occasionally make mistakes. Both of these reducibilities can be defined either uniformly or nonuniformly, depending on whether we require that a single Turing functional work for all generic/coarse oracles, or whether we are allowed to have our functional change depending on the specific oracle we are given. The Turing degrees can be embedded into any of the four degree structures: uniform or nonuniform generic or coarse reducibility, none of which are equivalent to each other. In recent work, Dzhafarov and Igusa introduce ``cofinite reducibility'' and ``mod-finite reducibility'' in part to remove the baggage of asymptotic density, and to directly investigate the issues of uniformity and of partial information vs flawed information. In cofinite reducibility, oracles and computations halt on cofinite domain, while in mod-finite reducibility, oracles and computations are both correct mod-finite. Non-uniformly, both of these reducibilities are equivalent to Turing reducibility, but the uniformity requirement makes both of them strictly stronger than Turing reducibility. The cofinite degrees embed into the uniform generic degrees and the mod-finite degrees embed into the uniform coarse degrees. Also, unlike with coarse and generic reducibility, a mod-finite reducibility implies a cofinite reducibility. The end result of these interactions is that results in the cofinite degrees can reflect into the mod-finite, generic and coarse degrees. We illustrate these interactions by presenting some recent research of Cholak and Igusa concerning the relationships between randomness and uniform generic reduction, and tying it to recent research by Hirschfeldt, Jockusch, Kuyper, and Schupp concerning the relationships between randomness and nonuniform coarse reduction.
"Exact constructive dimension" Ludwig Staiger, Martin-Luther-Universität, Halle-Wittenberg, Institut für Informatik The present paper generalises results on constructive dimension to the case of exact constructive dimension. The constructive dimension of (sets of) infinite strings was initially defined by Lutz using a variant of semi-computable (super-)martingales, so-called $s$-(super-)gales and it assigns to every string or set of strings a real number.
"Algorithmically Random Functions and Effective Capacities" Doug Cenzer, University of Florida We continue the investigation of algorithmically random functions and closed sets, and in particular the connection with the notion of capacity. We study notions of random continuous functions given in terms of a family of computable measures called symmetric Bernoulli measures. We isolate one particular class of random functions that we refer to as online random functions F, where the value of y(n) for y = F(x) may be computed from the values of x(0),...,x(n). We show that random online functions are neither onto nor one-to-one. We give a necessary condition on the members of the ranges of online random functions in terms of initial segment complexity and the associated computable capacity. Lastly, we introduce the notion of online partial Martin-Löf random function on Cantor space and give a family of online partial random functions the ranges of which are precisely the standard random closed sets introduced by Barmpalias, Cenzer et a in 2007. This is joint work with Chris Porter.
"Randomness as a type of warranty" Alexander Shen, CNRS & Université Montpellier 2
"Degrees and difficulty" Walter Dean, Warwick University
"The Randomness of Empirical Data, the Simplicity of Hypotheses, and Extreme Priors" Jeff Schatz, UC Irvine Contemporary computability theory has been applied to the philosophical analysis of scientific methodology in two distinct ways. First, Kevin Kelly proposes using tools from mathematical topology and effective descriptive set theory to evaluate scientific methods; representing data as infinite strings of numbers and scientific methods as total functions from data to conjectures, Kelly's project attempts to determine the extent to which methods can be guaranteed to converge to truth. Secondly, James W. McAllister posits that observed data is causally determined by the combination of the underlying explanatory phenomenon and random noise; the result of these factors is a random string of data, a maximally efficient source of information about the world. Though these two projects have separately been applied to the philosophy of science quite fruitfully, there has been little work on combining these two approaches into a unified model of empirical hypothesis testing. In this talk, I will present a hitherto unaddressed incompatibility between these approaches: if data is random in the sense of Martin-Lӧf, then no refutable, highly-specific hypothesis can be true. While it might be expected that random data could not validate extremely simple refutable hypotheses, this result seems too strong to hold for all such hypotheses. As a result, I will explore ways of altering the two approaches in order to avoid this undesirable result. In particular, I will focus on using alternative definitions of algorithmic randomness for empirical data and changing the notion of probability underlying the model of inquiry. Exploring these ways of satisfactorily combining the two applications of computability to the philosophy of science, a fuller understanding of the implications of random data for hypothesis testing can be achieved.
"Mutual information and the independence postulate" Christopher Porter, University of Florida
"Effective Prime Uniqueness" Peter Cholak, University of Notre Dame Assuming the obvious definitions below we show the a decidable model that is effectively prime is also effectively atomic. This implies that two effectively prime (decidable) models are computably isomorphic. This is in contrast to the theorem that there are two atomic decidable models which are not computably isomorphic. [Joint work with Charlie McCoy]
"The relevance of algebraic properties for the classification of uniform computational content" Vasco Brattka, Universität der Bundeswehr München I plan to report on some recent results regarding the classification of uniform computational content in the Weihrauch lattice. I will highlight how such results can be expressed using the algebraic structure of the lattice and how such characterizations can lead to useful lower and upper bound results. Another focus will be related to the observation that one can connect different seemingly unrelated areas of the lattice with the help of algebraic results. The examples will be related to Martin-Löf randomness, cohesiveness and Ramsey's Theorem and they are based on several projects with different co-authors. (joint work with Guido Gherardi, Matthew Hendtlass, Rupert Hölzl, Alexander Kreuzer, Arno Pauly, Tahina Rakotoniaina)
"The role of randomness in reverse mathematics" Ludovic Patey, Université Paris 7 Many statements in reverse mathematics can be seen as problems, coming with a natural class of instances and their corresponding solutions. For example, an instance of König's lemma is an infinite, finitely branching tree, and a solution is an infinite path through the tree.
"Stochasticity properties of random graphs" Jan Reimann, Penn State University
"Algorithmic randomness and constructive/computable mathematics" Jason Rute, Penn State University I will present a short history of constructive and computable measure theory, focusing on connections with algorithmic randomness. Constructive measure theory originated in a 1919 paper of Brouwer, which developed, among other things, constructive definitions of "null set", "measurable set", "measurable function", and "integrable function". Brouwer's work has been extended, reformed, and rediscovered in a number of settings: Russian constructive mathematics, Bishop-style constructive mathematics, reverse mathematics, computable analysis, categorical logic, and algorithmic randomness. Moreover, many of the fathers of algorithmic randomness---Martin-Löf, Demuth, and Schnorr---were motivated by constructive concerns. Recently, a number of results have appeared connecting algorithmic randomness and analysis. In many cases, these results can be phrased as theorems in constructive mathematics. Conversely, the constructive theorems of Bishop, Demuth, and others have direct consequences to algorithmic randomness. This raises the the question, which I will answer. "What randomness notion is most associated with constructive mathematics?"(Hint: It is not Martin-Löf randomness!)
"Universality, optimality, and randomness deficiency" Paul Shafer, Ghent University A Martin-Löf test U is universal if it captures all non-Martin-Löf random sequences, and it is optimal if for every ML-test V there is a natural number c such that for every natural number n V_{n+c} is a subset of U_n. We study the computational differences between universal and optimal ML-tests as well as the effects that these differences have on both the notion of layerwise computability and the Weihrauch degree of LAY, the function that produces a bound for a given Martin-Löf random sequence's randomness deficiency. We prove several robustness and idempotence results concerning the Weihrauch degree of LAY, and we show that layerwise computability is more restrictive than Weihrauch reducibility to LAY. [Joint work with Rupert Hölzl].
"Algorithmic randomness and geometric measure theory" Willem Fouché, University of South Africa (UNISA) In mathematical logic we can imagine and represent , with the aid of subtle combinatorics, recursions, proof theories, forcing arguments, phenomena of incredible complexity, even within the finite world. Mathematics is growing to the stage where these ``logical" phenomena become manifest in, and are essential to, other areas of mathematics, for example, in differential geometry (Nabutovsky, Weinberger, in collaboration with Soare), harmonic analysis (Fouché), non-archimedean extremely amenable groups (Blass and Kechris, Pestov, Todorcevic) and Ramsey theory (Paris-Harrington, Harvey Friedman), to mention just a few examples. I shall discuss some recent results in geometric measure theory that were developed from insights from algorithmic randomness, and conversely, I shall discuss certain phenomena in geometric measure theory, which have close links with the theory of algorithmic randomness. For the former, I will discuss new results in harmonic analysis (sets of multiplicity), and for the latter, I will address the problem of identifying non-archimedean amenable groups in terms of the presence of algorithmically random elements (relative to the Glasner-Weiss measure) in the universal minimal flows that are associated to these groups. |