An important part of arithmetic combinatorics is concerned with the extent to which structure must occur in every sufficiently large subset of a commutative group, and in particular the extent to which arithmetic patterns must occur in sufficiently large sets of integers. In this sense, arithmetic combinatorics illustrates the deep principle often stated to summarise the area of combinatorics known as Ramsey theory, namely that within every large amount of disorder one can always find regions of order. The arithmetic patterns in question are mostly solutions to prescribed systems of linear equations (more general systems, involving polynomials for example, are also considered). Classical Fourier analysis is a very useful tool in arithmetic combinatorics, yet there are key questions for which it is not sufficiently refined. This becomes visible in particular when one moves from the study of patterns consisting of one linear equation, such as arithmetic progressions of length three, to that of patterns involving two or more equations, such as progressions of length four. For the latter type of pattern, the basic functions of classical Fourier analysis, the characters, are not suitable for detecting whether a subset of an abelian group contains the (in some sense) expected number of occurrences of the pattern. Observations of this kind of limitation of classical Fourier analysis are at the origin of work initiated a decade ago by W. T. Gowers, which has been advancing rapidly ever since thanks to important contributions from several mathematicians, and which now gives strong evidence that, in this connection with arithmetic combinatorics, classical Fourier analysis is but the simplest level of a much larger theory. This theory has come to be known as higher-order Fourier analysis.
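For concreteness, the following standard identity (stated here as an illustrative sketch, with normalisations chosen for exposition rather than taken from the text above) makes precise why characters suffice for three-term progressions: for a subset A of the cyclic group Z_N, the normalised count of three-term arithmetic progressions in A is expressible purely in terms of the Fourier coefficients of the indicator function of A.

```latex
% Fourier transform on \mathbb{Z}_N = \mathbb{Z}/N\mathbb{Z}, with the
% normalisation \hat{f}(r) = \mathbb{E}_{x \in \mathbb{Z}_N} f(x)\, e(-rx/N),
% where e(t) = e^{2\pi i t}.
% Counting 3-APs (x,\, x+d,\, x+2d) inside A via characters:
\mathbb{E}_{x,d \in \mathbb{Z}_N}\,
  1_A(x)\, 1_A(x+d)\, 1_A(x+2d)
  \;=\; \sum_{r \in \mathbb{Z}_N} \widehat{1_A}(r)^2\, \widehat{1_A}(-2r).
% No analogous identity expresses the 4-AP average
% \mathbb{E}_{x,d}\, 1_A(x)\, 1_A(x+d)\, 1_A(x+2d)\, 1_A(x+3d)
% in terms of the coefficients \widehat{1_A}(r) alone: this is one precise
% sense in which characters fail for patterns governed by two linear equations.
```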
A family of norms introduced by Gowers, known as the uniformity norms, is central to the theory and can in fact be used to stratify the theory into layers of increasing complexity, with each layer corresponding to one of the uniformity norms. In recent years, higher-order Fourier analysis has benefited from a vibrant interaction with ergodic theory. In a nutshell, ergodic theory consists in the study of the structure and the long-term behaviour of measure-preserving systems. Recently, analogues in ergodic theory of concepts from arithmetic combinatorics such as the uniformity norms have been discovered, and the resulting research within ergodic theory is shedding light on questions that are crucial for the development of higher-order Fourier analysis. The project I propose lies at the interface between ergodic theory and combinatorics constituted by this development. One major direction of the project aims to combine ideas from ergodic theory with ideas from my PhD thesis to further develop higher-order counterparts of some of the most useful tools from classical Fourier analysis. Among these higher-order counterparts are the key results known as decomposition theorems, which allow one, roughly speaking, to decompose a function on a commutative group into a sum of an easy-to-handle structured component and a remainder, the remainder being negligible in applications, usually by being small in the appropriate uniformity norm. The project will also be concerned with a recent classification question, which asks for a general notion of complexity enabling us, given any (non-degenerate) arithmetic pattern, to choose the optimal layer of higher-order Fourier analysis to use in the study of the occurrence of the pattern.
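As a brief sketch of the objects just mentioned (standard definitions, with notation chosen here for exposition): the base case of the hierarchy of uniformity norms is the U^2 norm on functions on Z_N, defined by averaging over two-dimensional parallelepipeds, and a decomposition theorem then takes the schematic form below.

```latex
% Gowers U^2 norm (the U^k norm averages analogously over
% k-dimensional parallelepipeds):
\|f\|_{U^2}^{4}
  = \mathbb{E}_{x,h_1,h_2 \in \mathbb{Z}_N}
    f(x)\,\overline{f(x+h_1)}\,\overline{f(x+h_2)}\,f(x+h_1+h_2).
% With the normalisation \hat{f}(r) = \mathbb{E}_x f(x) e(-rx/N), this
% recovers classical Fourier analysis:
%   \|f\|_{U^2} = \bigl(\textstyle\sum_r |\hat{f}(r)|^4\bigr)^{1/4}.
% Schematic form of a decomposition theorem (the parameter \varepsilon and
% the meaning of ``structured'' depend on the layer of the theory):
f = f_{\mathrm{str}} + f_{\mathrm{err}},
  \qquad \|f_{\mathrm{err}}\|_{U^k} \le \varepsilon,
% with f_{\mathrm{str}} structured: for k = 2, essentially a sum of a
% bounded number of characters; for higher k, an object of higher-order
% structure in the sense of the corresponding layer.
```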
Another major objective of the project is to extend the applicability of higher-order Fourier analysis, especially by using it to address known questions in number theory and combinatorics which, while related to questions accessible with classical Fourier analysis, are known to require the higher-order theory.
