Mathematics (Faculty of)
https://hdl.handle.net/10012/9924

dc.identifier.uri: https://hdl.handle.net/10012/20962
dc.date.issued: 2024-09-05
dc.title: Tubings of Rooted Trees
dc.contributor.author: Cantwell, Amelia
dc.description.abstract: A tubing of a rooted tree is, broadly, a way of splitting the tree into induced connected subtrees. Tubings are useful for computing series expansion coefficients. This thesis discusses two different definitions of tubings: one helps us understand Dyson-Schwinger equations, and the other helps us understand the Magnus expansion. Chord diagrams are combinatorial objects that relate points on a circle. We give an explicit bijection from rooted connected chord diagrams to tubings of rooted trees, and we explore further combinatorial properties arising from this map. Furthermore, this thesis describes how re-rooting a tubed tree changes the chord diagram, and we present an algorithm that finds the new chord diagram by switching certain chords. Finally, a different notion of tubings of rooted trees is introduced, originally developed by Mencattini and Quesney [27]. They defined two sub-types of tubings, vertical and horizontal, which are used to find coefficients in the Magnus expansion. These two types of tubings have an interesting relationship when the forests are viewed as plane posets.

dc.identifier.uri: https://hdl.handle.net/10012/20956
dc.date.issued: 2024-09-04
dc.title: Tight Multi-Target Security for Key Encapsulation Mechanisms
dc.contributor.author: Glabush, Lewis
dc.description.abstract: The use of symmetric encryption schemes requires that the communicating parties have access to a shared secret key. A key encapsulation mechanism (KEM) is a cryptographic tool for the secure establishment of such a key. The KEMs most commonly used at this time are vulnerable to adversaries with access to a large quantum computer. This project concerns KEMs that are resistant to all known quantum attacks, such as lattice-based schemes.
A desirable property for any KEM is multi-target security, capturing the idea that security does not degrade below the targeted level as the number of users of a protocol or the amount of communication per user scales to a certain threshold. For schemes based on prime-order groups, multi-ciphertext security can be trivially reduced to single-ciphertext security using self-reducibility arguments, but these arguments are not available for lattice-based schemes. Indeed, one of the alternates in NIST's post-quantum cryptography standardization project, FrodoKEM, was susceptible to simple attacks in the multi-target setting.
The standard approach to building IND-CCA secure KEMs has been to start with an IND-CPA secure public key encryption scheme (PKE) and apply the Fujisaki-Okamoto transform (FO). In this work, we introduce a new variant of the FO transform, called the salted FO transform (SFO), which adds a uniformly random salt to the generation of ciphertexts. We then show that the resulting KEMs have much tighter security bounds compared to their generic counterparts. Finally, we apply our results to FrodoKEM to resolve the aforementioned simple attacks.
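The role of the salt can be illustrated with a toy sketch. This is not the thesis's actual SFO construction, and nothing here is cryptographically secure; the PKE stand-in `toy_encrypt`, the hash-based derivation `h`, and the function names are all hypothetical. The sketch only shows where the salt enters: into the derandomized encryption coins and into the final key derivation, so that two encapsulations under the same public key are never identical.

```python
import hashlib
import os

def h(*parts: bytes) -> bytes:
    """Hash-based key derivation (a stand-in for a random oracle)."""
    d = hashlib.sha256()
    for p in parts:
        d.update(len(p).to_bytes(2, "big") + p)  # length-prefix to avoid ambiguity
    return d.digest()

def toy_encrypt(pk: bytes, m: bytes, coins: bytes) -> bytes:
    """Placeholder for a deterministic IND-CPA PKE encryption (NOT secure)."""
    return h(pk, m, coins)

def encaps(pk: bytes):
    """Salted-FO-style encapsulation: a fresh salt enters key derivation."""
    m = os.urandom(32)        # random message
    salt = os.urandom(32)     # the salt added by the salted transform
    coins = h(m, pk, salt)    # derandomized encryption coins, bound to the salt
    ct = toy_encrypt(pk, m, coins)
    key = h(m, ct, salt)      # shared secret bound to salt and ciphertext
    return (ct, salt), key
```

Repeated encapsulations under one key yield independent salts and therefore independent shared secrets, which is the mechanism behind the tighter multi-target bounds.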

dc.identifier.uri: https://hdl.handle.net/10012/20934
dc.date.issued: 2024-08-30
dc.title: Price-setting Problems and Matroid Bayesian Online Selection
dc.contributor.author: DeHaan, Ian
dc.description.abstract: We study a class of Bayesian online selection problems with matroid constraints. Consider a seller who has several items they wish to sell, where the set of sold items is subject to some structural constraint, e.g., it must be independent with respect to some matroid. Each item has an offer value drawn independently from a known distribution. Items arrive in a known order, and their drawn values are presented to the seller. If the seller chooses to sell an item, they gain its drawn value as revenue. Given distribution information for each item, the seller wishes to maximize their expected revenue by carefully choosing which offers to accept as they arrive.
Such problems have been studied extensively when the seller's revenue is compared with the offline optimum, referred to as the "prophet". In this setting, a tight 2-competitive algorithm is known when the seller is limited to selling independent sets of a matroid [KW12]. We turn our attention to the online optimum, or "philosopher", and ask how well the seller can do with polynomial-time computation, compared to a seller with unlimited computation but with the same limited distribution information about offers.
We show that when the underlying constraints are laminar and the arrival of buyers follows a natural "left-to-right" order, there is a polynomial-time approximation scheme for maximizing the seller's revenue. We also show that such a result is impossible when the underlying constraints correspond to a graphic matroid. In particular, it is PSPACE-hard to approximate the philosopher's expected revenue within some fixed constant factor α < 1; moreover, for graphic matroids this cannot be alleviated by requirements on the arrival order. We also show similar hardness results for transversal and cographic matroids.
We then turn our attention to a related problem where the arrival order of items is unknown and uniformly random. In this setting, we show that there is a polynomial-time approximation scheme whenever the rank of the matroid is bounded above by a constant and all probabilities in the input are bounded below by a constant.
We additionally examine the computational complexity of computing the prophet's expected revenue. We show that this problem is #P-hard, and give a fully polynomial-time randomized approximation scheme for the problem.
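The flavor of this selection process can be sketched in its simplest special case: a single item to sell (a rank-1 matroid), where the classic prophet-inequality threshold rule guarantees the seller at least half of the prophet's expected revenue. The distributions, the threshold choice, and the function name below are illustrative assumptions, not the thesis's algorithms; the sketch only simulates the accept/reject dynamic the abstract describes.

```python
import random

def simulate(dists, threshold, trials=20000, seed=0):
    """Monte-Carlo estimate of a single-threshold online rule vs. the
    prophet (offline optimum) for a single-item instance with
    independent offer values."""
    rng = random.Random(seed)
    online_total, prophet_total = 0.0, 0.0
    for _ in range(trials):
        draws = [d(rng) for d in dists]
        prophet_total += max(draws)      # prophet sells to the best offer
        for v in draws:                  # online: accept first offer >= threshold
            if v >= threshold:
                online_total += v
                break
    return online_total / trials, prophet_total / trials

# three independent offers with different scales (illustrative choice)
dists = [lambda r: r.uniform(0, 1),
         lambda r: r.uniform(0, 2),
         lambda r: r.uniform(0, 4)]
online, prophet = simulate(dists, threshold=1.0)  # threshold ~ half of E[max]
```

On this instance the threshold rule recovers well over half of the prophet's expected revenue, consistent with the 2-competitive guarantee mentioned above.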

dc.identifier.uri: https://hdl.handle.net/10012/20931
dc.date.issued: 2024-08-30
dc.title: Policy Learning under Uncertainty and Risk
dc.contributor.author: Luo, Yudong
dc.description.abstract: Recent years have seen rapid growth in reinforcement learning (RL) research. In 2015, deep RL achieved superhuman performance in Atari video games. In 2016, AlphaGo, developed by Google DeepMind, beat Lee Sedol, one of the top Go players in South Korea. In 2022, OpenAI released ChatGPT, a powerful large language model fine-tuned with RL algorithms. Traditional RL considers the problem of an agent interacting with an environment to acquire a good policy. The performance of the policy is usually evaluated by the expected value of the total discounted reward (also called the return) collected in the environment. However, the most commonly studied domains (including the three mentioned above) are largely deterministic or contain little randomness. In many real-world applications, the domains are highly stochastic, so agents need to make decisions under uncertainty. Due to the randomness of the environment, another natural consideration is to minimize risk, since maximizing the expected return alone may not be sufficient. For instance, we want to avoid huge financial losses in portfolio management, which motivates the mean-variance trade-off.
In this thesis, we focus on the problem of policy learning under uncertainty and risk. This requires the agent to quantify the intrinsic uncertainty of the environment and to be risk-averse where appropriate, instead of caring only about the mean of the return.
To quantify the intrinsic uncertainty, this thesis adopts the distributional RL approach. Due to the stochasticity of the environment dynamics and of stochastic policies, the future return an agent can obtain from a state is naturally a random variable. Distributional RL aims to learn the full distribution of this random variable, which is usually represented by its quantile function. However, the quantile functions learned by existing algorithms suffer from limited representation ability or from the quantile crossing issue, which has been shown to hinder policy learning and exploration. We propose a new learning algorithm that directly learns a monotonic, smooth, and continuous quantile representation, which provides much more flexibility for value distribution learning in distributional RL.
For risk-averse policy learning, we study two common types of risk measures: measures of variability, e.g., variance, and tail risk measures, e.g., conditional value at risk (CVaR). 1) The mean-variance trade-off is a classic yet popular problem in RL. Traditional methods directly restrict the variance of the total return; recent methods restrict the per-step reward variance as a proxy. We thoroughly examine the limitations of these variance-based methods in the policy gradient approach, and propose an alternative measure of variability, the Gini deviation, as a substitute. We study various properties of this new risk measure and derive a policy gradient algorithm to minimize it. 2) CVaR is another popular risk measure for risk-averse RL. However, RL algorithms that use policy gradients to optimize CVaR face significant sample inefficiency, hindering their practical application. This inefficiency stems from two main causes: a focus on tail-end performance that overlooks many sampled trajectories, and vanishing gradients when the lower tail of the return distribution is overly flat. To address these challenges, we start from the insight that in many scenarios risk-averse behavior is required only in a subset of states, and propose a simple mixture policy parameterization. This method integrates a risk-neutral policy with an adjustable policy to form a risk-averse policy. Under this strategy, all collected trajectories can be used for policy updating, and the vanishing gradient issue is counteracted by the higher returns stimulated by the risk-neutral component, significantly improving sample efficiency.
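The mixture parameterization described above can be sketched schematically for a discrete action space. This is only the structural idea, not the thesis's implementation: the thesis's mixing weight and component policies are learned, while here `lam`, the logits, and the function names are illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D array of logits."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def mixture_policy(neutral_logits, adjustable_logits, lam):
    """Mixture policy: pi = (1 - lam) * pi_neutral + lam * pi_adjustable.

    lam in [0, 1] interpolates between the risk-neutral component (which
    keeps gradients alive on all trajectories) and the adjustable
    component that shapes risk-averse behavior.
    """
    p_n = softmax(np.asarray(neutral_logits, dtype=float))
    p_a = softmax(np.asarray(adjustable_logits, dtype=float))
    return (1.0 - lam) * p_n + lam * p_a
```

Because both components are proper distributions, the mixture is one as well, so every sampled trajectory has nonzero probability under the combined policy and can contribute to the update.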

dc.identifier.uri: https://hdl.handle.net/10012/20926
dc.date.issued: 2024-08-30
dc.title: The roughness exponent and its application in finance
dc.contributor.author: Han, Xiyue
dc.description.abstract: Rough phenomena and trajectories are prevalent across mathematics, engineering, physics, and the natural sciences. In quantitative finance, sparked by the observation that the historical realized volatility is rougher than a Brownian martingale, rough stochastic volatility models were recently proposed and studied extensively. Unlike classical diffusion volatility models, the volatility process in a rough stochastic volatility model is driven by an irregular process. The roughness of the volatility process plays a pivotal role in governing the behavior of such models.
This thesis aims to explore the concept of roughness and estimate the roughness of financial time series in a strictly pathwise manner. To this end, we introduce the notion of the roughness exponent, which is a pathwise measure of the degree of roughness. A substantial portion of this thesis focuses on the model-free estimation of this roughness exponent for both price and volatility processes. Towards the end of this thesis, we study the Wiener–Young Φ-variation of classical fractal functions, which can be regarded as a finer characterization than the roughness exponent.
Chapter 2 introduces the roughness exponent and establishes a model-free estimator for it based on direct observations. We say that a continuous real-valued function x admits the roughness exponent R if the pth variation of x converges to zero for p > 1/R and to infinity for p < 1/R. The main result of this chapter provides a mild condition on the Faber–Schauder coefficients of x under which the roughness exponent exists and is given as the limit of the classical Gladyshev estimates. This result can be viewed as a strong consistency result for the Gladyshev estimator in an entirely model-free setting, because no assumption whatsoever is made on the possible dynamics of the function x. Nonetheless, we show that the condition of our main result is satisfied for the typical sample paths of fractional Brownian motion with drift, and we provide almost-sure convergence rates for the corresponding Gladyshev estimates. In this chapter, we also discuss the connections between our roughness exponent, Besov regularity, and weighted quadratic variation. Since the Gladyshev estimators are not scale-invariant, we construct several scale-invariant modifications of our estimator. Finally, we extend our results to the case where the pth variation of x is defined over a sequence of unequally spaced partitions.
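The scaling idea behind Gladyshev-type estimates can be illustrated numerically. The sketch below is not the chapter's estimator (which is formulated via Faber–Schauder coefficients and scale-invariant modifications); it only uses the fact that the dyadic quadratic variation of a path with roughness exponent R scales like 2^(n(1-2R)), and checks it on an ordinary Brownian path, whose roughness exponent is 1/2.

```python
import math
import random

def roughness_estimate(x, n):
    """Gladyshev-style estimate of the roughness exponent from the
    dyadic quadratic variation of a path sampled at k / 2**n.

    For roughness R the quadratic variation over the 2**n-point dyadic
    grid scales like 2**(n * (1 - 2 * R)), giving the estimate
    R_n = (1 - log2(qv) / n) / 2.
    """
    qv = sum((x[k + 1] - x[k]) ** 2 for k in range(2 ** n))
    return 0.5 * (1.0 - math.log2(qv) / n)

# simulate a Brownian path (roughness exponent 1/2) on a dyadic grid
rng = random.Random(42)
n = 14
dt = 2.0 ** -n
path = [0.0]
for _ in range(2 ** n):
    path.append(path[-1] + rng.gauss(0.0, math.sqrt(dt)))

r = roughness_estimate(path, n)
```

For the Brownian path the dyadic quadratic variation concentrates near 1, so the estimate lands close to 1/2; rougher paths inflate the quadratic variation and push the estimate down.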
Chapter 3 considers the problem of estimating the roughness exponent of the volatility process in a stochastic volatility model that arises as a nonlinear function of a fractional Brownian motion with drift. To this end, we build on the Gladyshev estimator to construct a new estimator that estimates the roughness exponent of a continuous function x from observations of its antiderivative y. We identify conditions on the underlying trajectory x under which our estimates converge in a strictly pathwise sense, and we verify that these conditions are satisfied by almost every sample path of fractional Brownian motion with drift. As a consequence, we obtain strong consistency of our estimator for a large class of rough volatility models. Numerical simulations show that our estimation procedure performs well after passing to a scale-invariant modification of the estimator.
Chapter 4 highlights the rationale behind constructing our estimator from the Gladyshev estimator. In this chapter, we study the problem of reconstructing the Faber–Schauder coefficients of a continuous function from discrete observations of its antiderivative. This problem arises in the task of estimating the roughness exponent of the volatility process of financial assets, but it is also of independent interest. Our approach starts by formulating this problem through piecewise quadratic spline interpolation. We then provide a closed-form solution and an in-depth error analysis of the difference between the actual and approximated Faber–Schauder coefficients. These results lead to some surprising observations, which also shed new light on the classical topic of quadratic spline interpolation itself: they show that the well-known instabilities of this method can be located exclusively within the final generation of estimated Faber–Schauder coefficients, which suffer from non-locality and strong dependence on the initial value and the given data. By contrast, all other Faber–Schauder coefficients depend only locally on the data, are independent of the initial value, and admit uniform error bounds. We thus conclude that a robust and well-behaved estimator for our problem can be obtained by simply dropping the final-generation coefficients from the estimated Faber–Schauder coefficients.
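For reference, when the function itself (rather than its antiderivative) can be evaluated directly, its Faber–Schauder coefficients are given by a standard second-difference formula over dyadic points. The chapter's harder problem, recovering these coefficients from the antiderivative, needs the spline-based reconstruction described above; the snippet below only records the direct-observation formula.

```python
def faber_schauder_coeff(x, m, k):
    """Faber–Schauder coefficient of a function x on [0, 1]:

    theta_{m,k} = 2**(m/2) * (2*x((2k+1)/2**(m+1))
                              - x(k/2**m) - x((k+1)/2**m)),

    i.e. a scaled second difference over the dyadic interval
    [k/2**m, (k+1)/2**m]; it vanishes whenever x is affine there.
    """
    left = x(k / 2 ** m)
    mid = x((2 * k + 1) / 2 ** (m + 1))
    right = x((k + 1) / 2 ** m)
    return 2 ** (m / 2) * (2 * mid - left - right)
```

For instance, x(t) = t^2 has theta_{0,0} = 2*(1/4) - 0 - 1 = -1/2, while any affine function has all coefficients equal to zero.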
Chapter 5 studies the Wiener–Young Φ-variation of classical fractal functions with a critical degree of roughness. In this case, the functions have vanishing pth variation for all p > q but infinite pth variation for all p < q, for some q ≥ 1. We partially resolve this apparent puzzle by showing that these functions have finite, nonzero, and linear Wiener–Young Φ-variation along a sequence of certain partitions. For instance, functions of bounded variation have vanishing pth variation for any p > 1. On the other hand, the Weierstrass and Takagi–van der Waerden functions have vanishing pth variation for p > 1 but are nowhere differentiable and hence not of bounded variation. As a result, power variation and the roughness exponent fail to distinguish the degrees of roughness of these functions. However, we can separate them by showing that the Weierstrass and Takagi–van der Waerden functions admit a nontrivial and linear Φ-variation along the sequence of b-adic partitions, where Φ_q(x) = x/(-log x)^{1/2}. Moreover, for q > 1, we develop a probabilistic approach to identify functions in the Takagi class that have linear and nontrivial Φ_q-variation for a prescribed Φ_q. Furthermore, for each fixed q > 1, the collection of functions Φ_q forms a wide class of increasing functions that are regularly varying at zero with index of regular variation q.

dc.identifier.uri: https://hdl.handle.net/10012/20922
dc.date.issued: 2024-08-30
dc.title: Contracts for Density and Packing Functions
dc.contributor.author: Skitsko, Jacob
dc.description.abstract: We study contracts for combinatorial problems in multi-agent settings. In this problem, a principal designs a contract with several agents whose actions the principal is unable to observe; the principal sees only the outcome of the agents' collective actions. Every agent who decides to exert effort incurs a cost, and so naturally expects a fraction of the principal's reward as compensation. The principal must decide what fraction of their reward to give each agent so that the principal's expected utility is maximized. One of our focuses is the case where the principal's reward function is supermodular and based on a graph. Recently, Deo-Campo Vuong et al. showed that for this problem it is impossible to provide any finite multiplicative approximation or additive FPTAS unless P=NP. On the positive side, they provided an additive PTAS for the case when all agents have the same cost, and asked whether an additive PTAS can be obtained in general, i.e., when agents may have different costs. In this thesis, we answer this open question in the affirmative. Additionally, we provide multiplicative approximation algorithms for functions that are based on hypergraphs and encode packing constraints; this family of functions generalizes XOS functions.

dc.identifier.uri: https://hdl.handle.net/10012/20906
dc.date.issued: 2024-08-29
dc.title: Solitons with continuous symmetries
dc.contributor.author: Lang, Christopher James
dc.description.abstract: In this thesis, we develop a framework for classifying symmetric points on moduli spaces using representation theory. We utilize this framework in a few case studies, but it has applications well beyond these cases.
As a demonstration of the power of this framework, we use it to study various symmetric solitons: instantons as well as hyperbolic, singular, and Euclidean monopoles. Examples of these objects are hard to come by due to non-linear constraints. However, applying this framework, we introduce a linear constraint whose solutions greatly simplify the non-linear constraints. This simplification not only allows us to easily find a plethora of novel examples of these solitons, but also provides a framework for classifying such symmetric objects. As an example, applying this method, we prove that the basic instanton is essentially the only instanton with two particular kinds of conformal symmetry.
Additionally, we study the symmetry breaking of monopoles, a part of their topological classification. We derive a straightforward method for determining the symmetry breaking of a monopole and explicitly identify the symmetry breaking for all Lie groups with classical, simple Lie algebras. We also identify methods for doing the same for the exceptional simple Lie groups.

dc.identifier.uri: https://hdl.handle.net/10012/20899
dc.date.issued: 2024-08-28
dc.title: Performance Evaluation of Women's Volleyball Players Using Bayesian Data Analysis
dc.contributor.author: Awosoga, David
dc.description.abstract: Understanding player contribution is an important component of lineup construction, advance scouting, and performance evaluation in volleyball. However, traditional methods rely on oversimplified percentages that fail to acknowledge latent variables and situational nuance. This work addresses these shortcomings with a holistic framework mirroring the structure of a story, with each sub-component representing an angle from which player contribution can be investigated. The emphasis here is on modelling player contribution via player presence, using a regularized Bayesian logistic regression with a horseshoe prior and a custom model structure. This approach incorporates player roles, lineup matchups, and additional context-specific information into its estimates. The resulting model outputs are tested against substantive knowledge and posterior predictive checks, with direct application to volleyball coaches, players, and the community at large. Extensions to this analysis consider other factors of player contribution, such as player action, event sequences, and player intent, and discuss applications arising from advances in computer vision, sports science technology, and data acquisition.
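The horseshoe prior mentioned above is a scale-mixture prior that shrinks most coefficients toward zero while leaving large effects nearly untouched. The sketch below only shows that standard scale-mixture form (local scales from a half-Cauchy, then Gaussian coefficients); the thesis's model adds a custom structure on top that is not reproduced here, and the function name and parameters are illustrative.

```python
import numpy as np

def sample_horseshoe(n_coef, tau=1.0, rng=None):
    """Draw regression coefficients from a horseshoe prior:

        lambda_j ~ HalfCauchy(0, 1)          (local shrinkage scales)
        beta_j | lambda_j ~ Normal(0, tau * lambda_j)

    tau is the global shrinkage scale.
    """
    rng = np.random.default_rng(rng)
    lam = np.abs(rng.standard_cauchy(n_coef))   # half-Cauchy local scales
    return rng.normal(0.0, tau * lam)           # per-coefficient Gaussian draws
```

The heavy Cauchy tails of the local scales are what allow a few coefficients to escape the otherwise aggressive shrinkage toward zero.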

dc.identifier.uri: https://hdl.handle.net/10012/20894
dc.date.issued: 2024-08-28
dc.title: Projection Geometric Methods for Linear and Nonlinear Filtering Problems
dc.contributor.author: Ahmed, Ashraf
dc.description.abstract: In this thesis, we review the infinite-dimensional space containing the solution of a broad range of stochastic filtering problems, and outline the substantive differences between the foundations of finite-dimensional information geometry and Pistone's extension to infinite dimensions. We characterize the differences between the two geometries with respect to the structures needed for projection theorems: a dually flat affine manifold preserving the affine and convex geometry of the set of all probability measures with the same support, the notion of orthogonal complement between the different tangent-space representations, which is key for the generalized Pythagorean theorem, and the notion of exponential and mixture parallel transport needed for projecting a point onto a submanifold. We also explore the projection method proposed by Brigo and Pistone for reducing measure-valued evolution equations from the infinite-dimensional space in which they are written, namely Pistone's infinite-dimensional statistical manifold, onto a finite-dimensional exponential subfamily, using a local generalized projection theorem that is a non-parametric analog of the generalized projection theorem proposed by Amari. Using standard arguments, we then explore the projection idea in a discrete state space, with a focus on building intuition and on computational examples that illustrate the properties of the projection method. We establish two novel results concerning the impact of the boundary and of choosing a subfamily that does not contain the initial condition of the problem. We demonstrate that when the evolution process approaches the boundary of the space, the projection method fails completely, owing to the vanishing of the tangent spaces at the boundary. We also show that choosing a subfamily that does not contain the initial condition causes the projection approximation to deviate from the true value in certain directions, since we solve a different differential equation than if we had started from within the low-dimensional manifold. Finally, using computational experiments, we study the importance of having the sufficient statistics of the exponential subfamily lie in the span of the left eigenfunctions of the infinitesimal generator of the process we wish to project.
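A static, discrete-state analog of projecting onto an exponential subfamily can be sketched as follows. This is only the building block, not the Brigo-Pistone method itself (which projects the vector field of an evolution equation rather than a single distribution): the KL projection of a distribution p onto the family q_theta(i) proportional to exp(theta . T(i)) is characterized by moment matching, E_q[T] = E_p[T], which simple gradient ascent recovers. Function name and step sizes are illustrative.

```python
import numpy as np

def project_to_exponential_family(p, T, iters=2000, lr=1.0):
    """KL (moment-matching) projection of a discrete distribution p onto
    the exponential family q_theta(i) ∝ exp(theta · T[i]).

    Gradient ascent on E_p[log q_theta]; the gradient is
    E_p[T] - E_q[T], so the fixed point matches the moments of T.
    """
    T = np.asarray(T, dtype=float)   # shape (n_states, n_statistics)
    p = np.asarray(p, dtype=float)
    theta = np.zeros(T.shape[1])
    target = p @ T                   # E_p[T]: the moments to match
    for _ in range(iters):
        logits = T @ theta
        q = np.exp(logits - logits.max())
        q /= q.sum()
        theta += lr * (target - q @ T)
    return q, theta

# sufficient statistics rich enough to span the simplex directions,
# so the projection recovers p exactly
p = [0.4, 0.3, 0.2, 0.1]
T = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]]
q, theta = project_to_exponential_family(p, T)
```

With a lower-dimensional T the projection instead returns the closest member of the subfamily, which is the source of the approximation error the computational examples above investigate.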

dc.identifier.uri: https://hdl.handle.net/10012/20885
dc.date.issued: 2024-08-27
dc.title: Control over the KKR bijection with respect to the nesting structure on rigged configurations and a CSP instance involving Motzkin numbers
dc.contributor.author: Chan, William
dc.description.abstract: This thesis covers two disjoint main projects. The Motzkin numbers are a sort of "relaxed version" of the Catalan numbers: for example, Catalan numbers count perfect non-crossing matchings, while Motzkin numbers count not-necessarily-perfect non-crossing matchings. The first project deals with instances of the cyclic sieving phenomenon involving Motzkin numbers and their standard q-analogue. We also show that the standard q-analogue of the Motzkin numbers satisfies a generating series interpretation similar to that of the q-Catalan numbers. The second project deals with understanding the Kerov-Kirillov-Reshetikhin (KKR) bijection between semistandard tableaux and rigged configurations, with a particular emphasis on the standard case. In particular, we seek to induce perturbations on the corresponding rigged configuration via direct operations on the tableau. We develop a technique that takes a standard tableau T and outputs a new standard tableau T' which has the same corresponding rigged configuration up to a rigging on any desired row of the first rigged partition. We also develop an alternative to the technique of Kuniba, Okado, Sakamoto, Takagi, and Yamada that "unwraps" the natural nesting structure on rigged configurations. The primary operation from which all of the above follows is "raise", which we introduce and for which we give various combinatorial models. The raise operation induces a very simple, controlled perturbation on the corresponding rigged configuration. Our results are formulated in terms of paths, or (classically) highest weight elements of tensor products of Kirillov-Reshetikhin crystals.
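The plain Motzkin numbers (the thesis works with their q-analogue, which is not reproduced here) can be computed from one standard recurrence that reflects the matching interpretation: the last point is either unmatched or matched to some earlier point, splitting the remaining points into two independent blocks.

```python
def motzkin(n):
    """First n+1 Motzkin numbers via
    M_{m+1} = M_m + sum_{k=0}^{m-1} M_k * M_{m-1-k}.

    M_m counts the (not necessarily perfect) non-crossing matchings of
    m points on a line: the last point is unmatched (M_m term) or
    matched to an earlier point, splitting the rest into two blocks.
    """
    M = [1] * (n + 1)
    for m in range(n):
        M[m + 1] = M[m] + sum(M[k] * M[m - 1 - k] for k in range(m))
    return M
```

The sequence begins 1, 1, 2, 4, 9, 21, 51, in contrast with the Catalan numbers 1, 1, 2, 5, 14, 42, which count only the perfect non-crossing matchings (on an even number of points).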