
Erik Derner

Toward Physically Plausible Data-Driven Models

A Novel Neural Network Approach to Symbolic Regression

Many real-world systems can be described by mathematical models that are human-comprehensible and easy to analyze, and that help explain the system's behavior. Symbolic regression is a method that can automatically generate such models from data. Historically, symbolic regression has been ...

ViewFormer

NeRF-Free Neural Rendering from Few Images Using Transformers

Novel view synthesis is a long-standing problem. In this work, we consider a variant of the problem where we are given only a few context views sparsely covering a scene or an object. The goal is to predict novel viewpoints in the scene, which requires learning priors. The curren ...

SymFormer

End-to-End Symbolic Regression Using Transformer-Based Architecture

Many real-world systems can be naturally described by mathematical formulas. The task of automatically constructing formulas to fit observed data is called symbolic regression. Evolutionary methods such as genetic programming have been commonly used to solve symbolic regression t ...

Reinforcement learning algorithms can solve dynamic decision-making and optimal control problems. With continuous-valued state and input variables, reinforcement learning algorithms must rely on function approximators to represent the value function and policy mappings. Commonly ...

Continual model learning for nonlinear dynamic systems, such as autonomous robots, presents several challenges. First, it tends to be computationally expensive as the amount of data collected by the robot quickly grows in time. Second, the model accuracy is impaired when data fro ...

Virtually all dynamic system control methods benefit from the availability of an accurate mathematical model of the system. This includes also methods like reinforcement learning, which can be vastly sped up and made safer by using a dynamic system model. However, obtaining a suf ...

Visual navigation is essential for many applications in robotics, from manipulation through mobile robotics to automated driving. Deep reinforcement learning (DRL) provides an elegant map-free approach integrating image processing, localization, and planning in one module, which ...

Autonomous mobile robots are becoming increasingly important in many industrial and domestic environments. Dealing with unforeseen situations is a difficult problem that must be tackled to achieve long-term robot autonomy. In vision-based localization and navigation methods, one ...

Developing mathematical models of dynamic systems is central to many disciplines of engineering and science. Models facilitate simulations, analysis of the system's behavior, decision making and design of automatic control algorithms. Even inherently model-free control techniques ...

In symbolic regression, the search for analytic models is typically driven purely by the prediction error observed on the training data samples. However, when the data samples do not sufficiently cover the input space, the prediction error does not provide sufficient guidance tow ...

Autonomous mobile robots are becoming increasingly important in many industrial and domestic environments. Dealing with unforeseen situations is a difficult problem that must be tackled in order to move closer to the ultimate goal of life-long autonomy. In computer vision-based m ...

Relying on static representations of the environment limits the use of mapping methods in most real-world tasks. Real-world environments are dynamic and undergo changes that need to be handled through map adaptation. In this work, an object-based pose graph is proposed to solve t ...

Deep reinforcement learning (RL) has been successfully applied to a variety of game-like environments. However, the application of deep RL to visual navigation with realistic environments is a challenging task. We propose a novel learning architecture capable of navigating an age ...

Reinforcement learning (RL) is a suitable approach for controlling systems with unknown or time-varying dynamics. RL in principle does not require a model of the system, but before it learns an acceptable policy, it needs many unsuccessful trials, which real robots usually cannot ...

The ability to search for objects is a precondition for various robotic tasks. In this paper, we address the problem of finding objects in partially known indoor environments. Using the knowledge of the floor plan and the mapped objects, we consider object-object and object-room ...

Virtually all robot control methods benefit from the availability of an accurate mathematical model of the robot. However, obtaining a sufficient amount of informative data for constructing dynamic models can be difficult, especially when the models are to be learned during robot ...

It is well known that reinforcement learning (RL) can benefit from the use of a dynamic prediction model which is learned on data samples collected online from the process to be controlled. Most RL algorithms are formulated in the state-space domain and use state-space models. Ho ...