Lukas Taus

PhD Candidate in the CSEM program at the Oden Institute for Computational Engineering & Sciences at the University of Texas at Austin. My research interests include machine learning, numerical analysis, computer vision, optimal decision making, and optimization.

Contact

l.taus@utexas.edu

Education

University of Texas at Austin, Austin, Texas

August 2021 - Present

PhD Student in Computational Science, Engineering and Mathematics

GPA: 3.9492

Research interests: Scientific Machine Learning from a numerical analysis perspective with applications in computer vision and optimal control.

Course Work

Methods of Applied Mathematics I

Methods of Applied Mathematics II

Numerical Analysis: Linear Algebra

Introduction to Mathematical Modeling in Science & Engineering I

Introduction to Mathematical Modeling in Science & Engineering II

Foundational Techniques in Machine Learning and Data Science

Tools and Techniques in Computational Science

Deep Learning I

Deep Learning II

Computation and Variational Methods for Inverse Problems

Predictive Machine Learning

Linear Systems Analysis

Graz University of Technology, Graz, Austria

October 2018 - July 2021

MSc in Financial Mathematics. Studied state-of-the-art stochastic models for financial markets.

Thesis: Processes with Free Increments
Generalization of classical probability theory to non-commuting random variables, with applications in signal processing and random matrices.

Course Work

Advanced Analysis

Advanced Financial Mathematics

Statistical Methods in Actuarial Science

Risk Theory and Management in Actuarial Science

Actuarial Modeling

Non-Life Insurance Mathematics

Financial Management

Mathematical Statistics

Advanced Probability

Stochastic Analysis

Life and Health Insurance Mathematics

Selected Chapters in Analysis (Special Functions)

Probability and Analysis on Graphs and Groups

Markov Processes

Project in Finance and Insurance

Discrete and Algebraic Structures

Regression Analysis

Japanese A2/1

Japanese B1/1

Japanese B1/2

Graz University of Technology, Graz, Austria

October 2018 - July 2021

BSc in Mathematics. Received a broad education covering a wide range of core fields in the subject.

Thesis: The Monte Carlo Method and Pseudo-Random Number Generators
Explored the theoretical foundations of random number generation and Monte Carlo integration methods.
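The thesis topic admits a minimal illustration (a sketch for context, not code from the thesis): estimating an integral by averaging a function at points drawn from a seeded pseudo-random number generator.

```python
import random

def monte_carlo_integrate(f, a, b, n=100_000, seed=0):
    """Estimate the integral of f over [a, b] with plain Monte Carlo:
    average f at n pseudo-random points and scale by the interval length."""
    rng = random.Random(seed)  # a seeded pseudo-random number generator
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# The integral of x^2 over [0, 1] is 1/3; the Monte Carlo error
# shrinks at the rate O(1/sqrt(n)).
estimate = monte_carlo_integrate(lambda x: x * x, 0.0, 1.0)
```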

Course Work

Linear Algebra I

Linear Algebra II

Analysis I

Analysis II

Analysis III

Discrete Mathematics

Computer Mathematics

C++

Fundamentals of Mathematics

Ordinary Differential Equations

Computational Mathematics

Introduction to Functional Analysis

Optimization I

Introduction to Algebra

Partial Differential Equations

Stochastic Processes

Introduction to Complex Analysis

Statistics

Data Structures and Algorithms

Financial and Insurance Mathematics

Mathematics for Finance and Insurance

Personal Actuarial Science

Optimization Problems in Mathematics of Finance

Bachelor's Thesis

Probability Theory

Advanced Probability

Selected Chapters in Analysis (Elliptic Differential Equations)

Advanced Actuarial Mathematics

Integration and Measure Theory

Japanese A1/1

Japanese A1/2

Research Projects

Rendezvous Planning from Sparse Observations of Optimally Controlled Targets

We present a probabilistic framework for rendezvous planning with fast targets under uncertainty. We estimate trajectories using kernel-based MAP estimation and Gaussian processes, then optimize coordinates via a sequential greedy algorithm that maintains a statistically consistent belief space. Crucially, this approach enables successful coordination with targets significantly faster than the seeking agents, a task infeasible with existing methods.

Optimizing Sensor Network Design for Multiple Coverage

We introduce a new objective function for the greedy algorithm to design efficient and robust sensor networks and derive theoretical bounds on the network's optimality. We further introduce a Deep Learning model to accelerate the algorithm for near real-time computation, and show that understanding the geometric properties of the training data set provides important insights into performance and training.

Fast End-to-End Generation of Belief Space Paths for Minimum Sensing Navigation

We propose a deep learning-based motion planner to accelerate navigation in high-dimensional Gaussian belief spaces. By leveraging a U-Net architecture, we learn to predict optimal path candidates directly from image-encoded problem descriptions, including start/goal states and obstacles. This approach replaces computationally expensive sampling with a fast inference step that reconstructs paths from learned distributions. Our method significantly reduces computation time compared to traditional sampling-based baselines while maintaining high solution quality.

Publications

Published Papers

Rendezvous Planning from Sparse Observations of Optimally Controlled Targets

Authors: Thomas A. Scott, Lukas Taus, Yen-Hsi Richard Tsai, Tan Bui-Thanh, Justin G.R. Delva

Preprint • April 2026

We develop a probabilistic framework for rendezvous planning: given sparse, noisy observations of a fast-moving target, plan spatiotemporal rendezvous coordinates for a set of significantly slower seeking agents. The unknown target trajectory is estimated under uncertain dynamics using a filtering approach that combines kernel-based maximum a posteriori estimation with Gaussian process correction, producing a mixture over trajectory hypotheses. This estimate is used to select spatiotemporal rendezvous points that maximize the probability of successful rendezvous. Points are chosen sequentially by greedily minimizing failure probability in the current belief space, which is updated after each step by conditioning on unsuccessful rendezvous attempts. We show that the failure-conditioned update correctly captures the posterior belief for subsequent decisions, ensuring that each step in the greedy sequence is informed by a statistically consistent representation of the remaining search space, and derive the corresponding Bayesian updates incorporating temporal correlations intrinsic to the trajectory model. This result provides a systematic framework for planning under uncertainty in autonomous rendezvous applications such as unmanned aerial vehicle refueling, spacecraft servicing, autonomous surface vessel operations, search and rescue missions, and missile defense. In each, the motion of the target can be modeled by a system of differential equations under optimal control for a chosen objective; in our example case, Hamilton-Jacobi-Bellman solutions for the minimum arrival time of a Dubins car with uncertain turning radius and destination.
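The failure-conditioned belief update has a simple discrete analogue (a toy sketch with hypothetical names; the paper works with Gaussian-process mixtures over trajectories): conditioning on a failed attempt reweights each trajectory hypothesis by its probability of producing that failure.

```python
def update_on_failure(belief, p_success):
    """Bayes update after an unsuccessful rendezvous attempt:
    P(h | fail) is proportional to P(fail | h) * P(h),
    with P(fail | h) = 1 - p_success[h]."""
    posterior = [b * (1.0 - ps) for b, ps in zip(belief, p_success)]
    z = sum(posterior)  # normalizing constant
    return [p / z for p in posterior]

# Two trajectory hypotheses, equally likely a priori; an attempt that
# would very likely have succeeded under hypothesis 0 fails, so the
# belief shifts toward hypothesis 1.
posterior = update_on_failure([0.5, 0.5], [0.8, 0.2])
```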

Fast End-to-End Generation of Belief Space Paths for Minimum Sensing Navigation

Authors: Lukas Taus, Vrushabh Zinage, Takashi Tanaka, Richard Tsai

Preprint • September 2024

We revisit the problem of motion planning in the Gaussian belief space. Motivated by the fact that most existing sampling-based planners suffer from high computational costs due to the high-dimensional nature of the problem, we propose an approach that leverages a deep learning model to predict optimal path candidates directly from the problem description. Our proposed approach consists of three steps. First, we prepare a training dataset comprising a large number of input-output pairs: the input image encodes the problem to be solved (e.g., start states, goal states, and obstacle locations), whereas the output image encodes the solution (i.e., the ground truth of the shortest path). Any existing planner can be used to generate this training dataset. Next, we leverage the U-Net architecture to learn the dependencies between the input and output data. Finally, a trained U-Net model is applied to a new problem encoded as an input image. From the U-Net's output image, which is interpreted as a distribution of paths, an optimal path candidate is reconstructed. The proposed method significantly reduces computation time compared to the sampling-based baseline algorithm.
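The image encoding of a problem instance might look like the following sketch (the channel layout and function name are illustrative assumptions, not the paper's exact format):

```python
import numpy as np

def encode_problem(grid_size, start, goal, obstacles):
    """Encode a planning problem as a 3-channel image: one channel each
    for the start state, the goal state, and the obstacle map."""
    img = np.zeros((3, grid_size, grid_size), dtype=np.float32)
    img[0][start] = 1.0        # channel 0: start state
    img[1][goal] = 1.0         # channel 1: goal state
    for cell in obstacles:
        img[2][cell] = 1.0     # channel 2: obstacle occupancy
    return img

# A 64x64 instance with a short wall of obstacle cells; a U-Net would
# map this tensor to an output image whose intensities are interpreted
# as a distribution over path locations.
x = encode_problem(64, start=(2, 3), goal=(60, 58),
                   obstacles=[(10, j) for j in range(20, 40)])
```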

Optimizing Sensor Network Design for Multiple Coverage

Authors: Lukas Taus, Yen-Hsi Richard Tsai

Preprint • May 2024

Sensor placement optimization methods have been studied extensively. They can be applied to a wide range of applications, including surveillance of known environments, optimal locations for 5G towers, and placement of missile defense systems. However, few works explore the robustness and efficiency of the resulting sensor network concerning sensor failure or adversarial attacks. This paper addresses this issue by optimizing for the least number of sensors to achieve multiple coverage of non-simply connected domains by a prescribed number of sensors. We introduce a new objective function for the greedy (next-best-view) algorithm to design efficient and robust sensor networks and derive theoretical bounds on the network's optimality. We further introduce a Deep Learning model to accelerate the algorithm for near real-time computations. The Deep Learning model requires the generation of training examples. Correspondingly, we show that understanding the geometric properties of the training data set provides important insights into the performance and training process of deep learning techniques. Finally, we demonstrate that a simple parallel version of the greedy approach using a simpler objective can be highly competitive.
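A minimal discrete sketch of the greedy multi-coverage idea (set-valued visibility supplied by the caller; the paper works on non-simply connected continuous domains and accelerates the objective with a learned surrogate):

```python
def greedy_multicoverage(visibility, points, k):
    """Greedily place sensors until every point is covered by at least
    k distinct sensors. visibility[s] is the set of points visible from
    candidate site s; each step adds the site with the largest gain in
    still-needed coverage (a next-best-view step)."""
    need = {p: k for p in points}   # remaining coverage demand per point
    remaining = set(visibility)     # unused candidate sites
    chosen = []
    while any(v > 0 for v in need.values()):
        if not remaining:
            raise ValueError("multi-coverage infeasible with these sites")
        best = max(remaining,
                   key=lambda s: sum(need[p] > 0 for p in visibility[s]))
        remaining.discard(best)
        chosen.append(best)
        for p in visibility[best]:
            if need[p] > 0:
                need[p] -= 1
    return chosen

# Three points, each visible from two of three sites; double coverage
# (k = 2) forces all three sites to be selected.
sites = {"a": {0, 1}, "b": {1, 2}, "c": {0, 2}}
placed = greedy_multicoverage(sites, points={0, 1, 2}, k=2)
```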

Efficient and Robust Sensor Placement in Complex Environments

Authors: Lukas Taus, Yen-Hsi Richard Tsai

Preprint • September 2023

We address the problem of efficient and unobstructed surveillance or communication in complex environments. On one hand, one wishes to use a minimal number of sensors to cover the environment. On the other hand, it is often important to consider solutions that are robust against sensor failure or adversarial attacks. This paper addresses these challenges of designing minimal sensor sets that achieve multi-coverage constraints -- every point in the environment is covered by a prescribed number of sensors. We propose a greedy algorithm to achieve the objective. Further, we explore deep learning techniques to accelerate the evaluation of the objective function formulated in the greedy algorithm. The training of the neural network reveals that the geometric properties of the data significantly impact the network's performance, particularly at the end stage. By taking into account these properties, we discuss the differences in using greedy and ϵ-greedy algorithms to generate data and their impact on the robustness of the network.
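The greedy versus ε-greedy data-generation variants compared above differ only in the selection rule; a sketch of the rule (function name and signature are hypothetical):

```python
import random

def epsilon_greedy_index(scores, epsilon, rng=None):
    """Select a candidate index: with probability epsilon pick uniformly
    at random (adding diversity to the generated training data),
    otherwise pick the best-scoring candidate as plain greedy would."""
    rng = rng or random.Random()
    if rng.random() < epsilon:
        return rng.randrange(len(scores))
    return max(range(len(scores)), key=scores.__getitem__)

# With epsilon = 0 this reduces to the purely greedy choice.
best = epsilon_greedy_index([0.1, 0.9, 0.4], epsilon=0.0)
```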

Presentations

Poster Presentation at CAMDA, College Station, US

May 2023

Presented recent results about efficient and robust sensor placement in complex environments at the Center for Approximation and Mathematical Data Analytics.

Work Experience

Raiffeisen-Landesbank Steiermark AG, Raaba, Austria

August 2019 - July 2021

Risk Management Intern

  • Collaborated with a team of 6 people from different backgrounds to provide quantitative measures for risk surveillance.
  • Automated and optimized the data handling and analysis of frequent statistical tests using Python.
  • Developed a machine learning framework for early detection of defaulting loans. The model extended the horizon at which 80% accuracy is reached from 3 months to 6 months, helping identify struggling businesses during the COVID-19 pandemic.

Skills

Programming Python (Pandas, PyTorch, TensorFlow, NumPy, Scikit-learn), R, C/C++, SQL, MATLAB,...

Miscellaneous Linux, Shell (Bash/Zsh), LaTeX, Git, HTML,...

Languages

English Professional proficiency

German Native proficiency

Japanese Intermediate proficiency