Publications
For a comprehensive list of publications and stats, please visit:
2020
Michael Neunert, Abbas Abdolmaleki, Markus Wulfmeier, Thomas Lampe, Jost Tobias Springenberg, Roland Hafner, Francesco Romano, Jonas Buchli, Nicolas Heess, Martin A. Riedmiller: Continuous-Discrete Reinforcement Learning for Hybrid Control in Robotics. CoRR abs/2001.00449 (2020)
Noah Y. Siegel, Jost Tobias Springenberg, Felix Berkenkamp, Abbas Abdolmaleki, Michael Neunert, Thomas Lampe, Roland Hafner, Nicolas Heess, Martin A. Riedmiller: Keep Doing What Worked: Behavioral Modeling Priors for Offline Reinforcement Learning. CoRR abs/2002.08396 (2020)
2019
Jan M. Wülfing, Sreedhar S. Kumar, Joschka Boedecker, Martin A. Riedmiller, Ulrich Egert: Adaptive long-term control of biological neural networks with Deep Reinforcement Learning. Neurocomputing 342: 66-74 (2019)
Devin Schwab, Jost Tobias Springenberg, Murilo Fernandes Martins, Michael Neunert, Thomas Lampe, Abbas Abdolmaleki, Tim Hertweck, Roland Hafner, Francesco Nori, Martin A. Riedmiller: Simultaneously Learning Vision and Feature-Based Control Policies for Real-World Ball-In-A-Cup. Robotics: Science and Systems 2019
Carlos Florensa, Jonas Degrave, Nicolas Heess, Jost Tobias Springenberg, Martin A. Riedmiller: Self-supervised Learning of Image Embedding for Continuous Control. CoRR abs/1901.00943 (2019)
Devin Schwab, Jost Tobias Springenberg, Murilo F. Martins, Thomas Lampe, Michael Neunert, Abbas Abdolmaleki, Tim Hertweck, Roland Hafner, Francesco Nori, Martin A. Riedmiller: Simultaneously Learning Vision and Feature-based Control Policies for Real-world Ball-in-a-Cup. CoRR abs/1902.04706 (2019)
Daniel J. Mankowitz, Nir Levine, Rae Jeong, Abbas Abdolmaleki, Jost Tobias Springenberg, Timothy A. Mann, Todd Hester, Martin A. Riedmiller: Robust Reinforcement Learning for Continuous Control with Model Misspecification. CoRR abs/1906.07516 (2019)
Markus Wulfmeier, Abbas Abdolmaleki, Roland Hafner, Jost Tobias Springenberg, Michael Neunert, Tim Hertweck, Thomas Lampe, Noah Y. Siegel, Nicolas Heess, Martin A. Riedmiller: Regularized Hierarchical Policies for Compositional Transfer in Robotics. CoRR abs/1906.11228 (2019)
H. Francis Song, Abbas Abdolmaleki, Jost Tobias Springenberg, Aidan Clark, Hubert Soyer, Jack W. Rae, Seb Noury, Arun Ahuja, Siqi Liu, Dhruva Tirumala, Nicolas Heess, Dan Belov, Martin A. Riedmiller, Matthew M. Botvinick: V-MPO: On-Policy Maximum a Posteriori Policy Optimization for Discrete and Continuous Control. CoRR abs/1909.12238 (2019)
Arunkumar Byravan, Jost Tobias Springenberg, Abbas Abdolmaleki, Roland Hafner, Michael Neunert, Thomas Lampe, Noah Y. Siegel, Nicolas Heess, Martin A. Riedmiller: Imagined Value Gradients: Model-Based Policy Optimization with Transferable Latent Dynamics Models. CoRR abs/1910.04142 (2019)
Jonas Degrave, Abbas Abdolmaleki, Jost Tobias Springenberg, Nicolas Heess, Martin A. Riedmiller: Quinoa: a Q-function You Infer Normalized Over Actions. CoRR abs/1911.01831 (2019)
2018
Jan Wülfing, Sreedhar S. Kumar, Joschka Boedecker, Martin A. Riedmiller, Ulrich Egert: Controlling biological neural networks with deep reinforcement learning. ESANN 2018
Abbas Abdolmaleki, Jost Tobias Springenberg, Yuval Tassa, Rémi Munos, Nicolas Heess, Martin A. Riedmiller: Maximum a Posteriori Policy Optimisation. ICLR (Poster) 2018
Karol Hausman, Jost Tobias Springenberg, Ziyu Wang, Nicolas Heess, Martin A. Riedmiller: Learning an Embedding Space for Transferable Robot Skills. ICLR (Poster) 2018
Martin A. Riedmiller, Roland Hafner, Thomas Lampe, Michael Neunert, Jonas Degrave, Tom Van de Wiele, Vlad Mnih, Nicolas Heess, Jost Tobias Springenberg: Learning by Playing - Solving Sparse Reward Tasks from Scratch. ICML 2018: 4341-4350
Alvaro Sanchez-Gonzalez, Nicolas Heess, Jost Tobias Springenberg, Josh Merel, Martin A. Riedmiller, Raia Hadsell, Peter W. Battaglia: Graph Networks as Learnable Physics Engines for Inference and Control. ICML 2018: 4467-4476
Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, Timothy P. Lillicrap, Martin A. Riedmiller: DeepMind Control Suite. CoRR abs/1801.00690 (2018)
Martin A. Riedmiller, Roland Hafner, Thomas Lampe, Michael Neunert, Jonas Degrave, Tom Van de Wiele, Volodymyr Mnih, Nicolas Heess, Jost Tobias Springenberg: Learning by Playing - Solving Sparse Reward Tasks from Scratch. CoRR abs/1802.10567 (2018)
Alvaro Sanchez-Gonzalez, Nicolas Heess, Jost Tobias Springenberg, Josh Merel, Martin A. Riedmiller, Raia Hadsell, Peter W. Battaglia: Graph networks as learnable physics engines for inference and control. CoRR abs/1806.01242 (2018)
Abbas Abdolmaleki, Jost Tobias Springenberg, Yuval Tassa, Rémi Munos, Nicolas Heess, Martin A. Riedmiller: Maximum a Posteriori Policy Optimisation. CoRR abs/1806.06920 (2018)
Abbas Abdolmaleki, Jost Tobias Springenberg, Jonas Degrave, Steven Bohez, Yuval Tassa, Dan Belov, Nicolas Heess, Martin A. Riedmiller: Relative Entropy Regularized Policy Iteration. CoRR abs/1812.02256 (2018)
2017
Ivaylo Popov, Nicolas Heess, Timothy P. Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerík, Thomas Lampe, Yuval Tassa, Tom Erez, Martin A. Riedmiller: Data-efficient Deep Reinforcement Learning for Dexterous Manipulation. CoRR abs/1704.03073 (2017)
Rico Jonschkowski, Roland Hafner, Jonathan Scholz, Martin A. Riedmiller: PVEs: Position-Velocity Encoders for Unsupervised Learning of Structured State Representations. CoRR abs/1705.09805 (2017)
Nicolas Heess, Dhruva TB, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, S. M. Ali Eslami, Martin A. Riedmiller, David Silver: Emergence of Locomotion Behaviours in Rich Environments. CoRR abs/1707.02286 (2017)
Matej Vecerík, Todd Hester, Jonathan Scholz, Fumin Wang, Olivier Pietquin, Bilal Piot, Nicolas Heess, Thomas Rothörl, Thomas Lampe, Martin A. Riedmiller: Leveraging Demonstrations for Deep Reinforcement Learning on Robotics Problems with Sparse Rewards. CoRR abs/1707.08817 (2017)
2016
Alexey Dosovitskiy, Philipp Fischer, Jost Tobias Springenberg, Martin A. Riedmiller, Thomas Brox: Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks. IEEE Trans. Pattern Anal. Mach. Intell. 38(9): 1734-1747 (2016)
S. S. Kumar, J. Wülfing, S. Okujeni, J. Boedecker, M. Riedmiller, U. Egert (2016) Autonomous Optimization of Targeted Stimulation of Neuronal Networks. PLoS Comput Biol 12 (8) pp. e1005054.
Nicolas Heess, Gregory Wayne, Yuval Tassa, Timothy P. Lillicrap, Martin A. Riedmiller, David Silver: Learning and Transfer of Modulated Locomotor Controllers. CoRR abs/1610.05182 (2016)
2015
Wendelin Böhmer, Jost Tobias Springenberg, Joschka Boedecker, Martin Riedmiller, Klaus Obermayer (2015) Autonomous Learning of State Representations for Control: An Emerging Field Aims to Autonomously Learn State Representations for Reinforcement Learning Agents from Their Real-World Sensor Observations. KI - Künstliche Intelligenz pp. 1-10. Springer Berlin Heidelberg.
J. T. Springenberg, A. Dosovitskiy, T. Brox, M. Riedmiller (2015) Striving for Simplicity: The All Convolutional Net. In arXiv:1412.6806, also appeared at ICLR 2015 Workshop Track.
A. Eitel, J. T. Springenberg, L. Spinello, M. Riedmiller, W. Burgard (2015) Multimodal Deep Learning for Robust RGB-D Object Recognition. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Manuel Watter, Jost Springenberg, Joschka Boedecker, Martin Riedmiller (2015) Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images. In Advances in Neural Information Processing Systems 28. pp. 2728–2736.
2014
Thomas Lampe, Lukas D. J. Fiederer, Martin Voelker, Alexander Knorr, Martin Riedmiller, Tonio Ball (2014) A Brain-Computer Interface for High-Level Remote Control of an Autonomous, Reinforcement-Learning-Based Robotic System for Reaching and Grasping. In International Conference on Intelligent User Interfaces (IUI 2014). Haifa, Israel.
David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, Martin Riedmiller (2014) Deterministic Policy Gradient Algorithms. In The 31st International Conference on Machine Learning (ICML 2014). Beijing, China.
Thomas Lampe, Martin Riedmiller (2014) Approximate Model-Assisted Neural Fitted Q-Iteration. In IEEE International Joint Conference on Neural Networks (IJCNN 2014). Beijing, China.
Joschka Boedecker, Jost Tobias Springenberg, Jan Wülfing, Martin Riedmiller (2014) Approximate Real-Time Optimal Control Based on Sparse Gaussian Process Models. In Adaptive Dynamic Programming and Reinforcement Learning (ADPRL).
Alexey Dosovitskiy, Jost Tobias Springenberg, Martin Riedmiller, Thomas Brox (2014) Discriminative Unsupervised Feature Learning with Convolutional Neural Networks. In 28th Annual Conference on Neural Information Processing Systems (NIPS).
M. Dümpelmann, S. Ewing, M. Blum, R. Rostek, P. Woias, M. Riedmiller, A. Schulze-Bonhage (2014) Investigation of low complexity seizure detection algorithm for closed loop devices in epilepsy treatment. Clinical Neurophysiology pp. S80.
2013
Manuel Blum, Martin Riedmiller (2013) Electricity Demand Forecasting using Gaussian Processes. In The AAAI-13 Workshop on Trading Agent Design and Analysis (TADA-13).
Thomas Gabel, Martin Riedmiller (2013) The Cooperative Driver: Multi-Agent Learning for Preventing Traffic Jams. International Journal of Traffic and Transportation Engineering 1 (4) pp. 67–76. Scientific & Academic Publishing.
Manuel Blum, Martin Riedmiller (2013) Optimization of Gaussian Process Hyperparameters using Rprop. In European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning.
Thomas Lampe, Martin Riedmiller (2013) Acquiring Visual Servoing Reaching and Grasping Skills using Neural Reinforcement Learning. In IEEE International Joint Conference on Neural Networks (IJCNN 2013). Dallas, TX.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin Riedmiller (2013) Playing Atari with Deep Reinforcement Learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning.
Joschka Boedecker, Thomas Lampe, Martin Riedmiller (2013) Modeling effects of intrinsic and extrinsic rewards on the competition between striatal learning systems. Frontiers in Psychology 4 (739)
Jost Tobias Springenberg, Martin Riedmiller (2013) Improving Deep Neural Networks with Probabilistic Maxout Units. In arXiv:1312.6116, also appeared at ICLR 2014 Workshop Track.
2012
Martin Riedmiller, Sascha Lange, Arne Voigtlaender (2012) Autonomous reinforcement learning on raw visual input data in a real world application. In International Joint Conference on Neural Networks. Brisbane, Australia.
Manuel Blum, Jost Tobias Springenberg, Jan Wülfing, Martin Riedmiller (2012) A Learned Feature Descriptor for Object Recognition in RGB-D Data. In IEEE International Conference on Robotics and Automation (ICRA). St. Paul, Minnesota, USA.
Oliver Obst, Martin Riedmiller (2012) Taming the Reservoir: Feedforward Training for Recurrent Neural Networks. In International Joint Conference on Neural Networks. Brisbane, Australia.
Jan Wülfing, Martin Riedmiller (2012) Unsupervised Learning of Local Features for Music Classification. In Proceedings of the 13th International Society for Music Information Retrieval Conference (ISMIR). Porto, Portugal.
Jan Mattner, Sascha Lange, Martin Riedmiller (2012) Learn to Swing Up and Balance a Real Pole Based on Raw Visual Input Data. In Proceedings of the 19th International Conference on Neural Information Processing (5) (ICONIP 2012). pp. 126–133. Doha, Qatar.
Jost Tobias Springenberg, Martin Riedmiller (2012) Learning temporal coherent features through life-time sparsity. In International Conference on Neural Information Processing (ICONIP). pp. 347–356.
Martin Riedmiller (2012) 10 Steps and Some Tricks to Set up Neural Reinforcement Controllers. In Neural Networks: Tricks of the Trade (2nd ed.). pp. 735-757.
2011
T. Gabel, C. Lutz, M. Riedmiller (April 2011) Improved Neural Fitted Q Iteration Applied to a Novel Computer Gaming and Learning Benchmark. In Proceedings of the IEEE Symposium on Approximate Dynamic Programming and Reinforcement Learning (ADPRL 2011). IEEE Press. Paris, France.
Andreas Witsch, Roland Reichle, Sascha Lange, Martin Riedmiller, Kurt Geihs (April 2011) Enhancing the Episodic Natural Actor-Critic Algorithm by a Regularization Term to Stabilize Learning of Control Structures. In Proceedings of the IEEE Symposium on Approximate Dynamic Programming and Reinforcement Learning (ADPRL 2011). IEEE Press. Paris, France.
Thomas Gabel, Martin Riedmiller (2011) Distributed Policy Search Reinforcement Learning for Job-Shop Scheduling Tasks. International Journal of Production Research 50 (1), available online from May 2011. Taylor & Francis.
Roland Hafner, Martin Riedmiller (2011) Reinforcement learning in feedback control. Machine Learning 84 (1-2) pp. 137-169. Available online at http://dx.doi.org/10.1007/s10994-011-5235-x or upon request at riedmiller@informatik.uni-freiburg.de. Springer Netherlands.
Manuel Blum, Jost Tobias Springenberg, Jan Wülfing, Martin Riedmiller (2011) On the Applicability of Unsupervised Feature Learning for Object Recognition in RGB-D Data. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning. Granada, Spain.
Roland Hafner, Martin Riedmiller (2011) Learning (Near) Time Optimal Control for Flexible Robot Joints. In Workshop on Comparison of Reinforcement Learning and Optimal Control Methods for Real-World Robotic Tasks, RSS 2011. Los Angeles, CA.
S. Lange, T. Gabel, M. Riedmiller (2011) Batch Reinforcement Learning. In Reinforcement Learning: State of the Art. Springer, in press.
Manuel Blum, Martin Riedmiller (2011) Q-function Approximation in Batch Mode Reinforcement Learning. In International Workshop on Bio-Inspired Robots.
2010
Martin Lauer, Roland Hafner, Sascha Lange, Martin Riedmiller (2010) Cognitive concepts in autonomous soccer playing robots. Cognitive Systems Research 11 (3) pp. 287-309.
Thomas Gabel, Martin Riedmiller (2010) On Progress in RoboCup: The Simulation League Showcase. In RoboCup 2010: Robot Soccer World Cup XIV, LNCS. Springer. Singapore.
Sascha Lange, Martin Riedmiller (2010) Deep Learning of Visual Control Policies. In European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2010). Brugge, Belgium.
Sascha Lange, Martin Riedmiller (2010) Deep Auto-Encoder Neural Networks in Reinforcement Learning. In International Joint Conference on Neural Networks (IJCNN 2010). Barcelona, Spain.
Sascha Lange, Martin Riedmiller (2010) A Vision for Reinforcement Learning and its Implications for Neural Computation. In NC2 Workshop, German Conference on Artificial Intelligence (KI2010), accepted. Karlsruhe, Germany.
2009
T. Kietzmann, M. Riedmiller (Dec 2009) The Neuro Slot Car Racer: Reinforcement Learning in a Real World Setting. In Proceedings of the Int. Conference on Machine Learning Applications (ICMLA09). Springer. Miami, Florida.
M. Riedmiller, T. Gabel, R. Hafner, S. Lange (2009) Reinforcement Learning for Robot Soccer. Autonomous Robots 27 (1) pp. 55–74. Springer.
T. Kietzmann, S. Lange, M. Riedmiller (2009) Computational Object Recognition: A Biologically Motivated Approach. Biological Cybernetics 100 (1) pp. 55–73. Springer.
S. Timmer, M. Riedmiller (2009) Efficient Identification of State in Reinforcement Learning. Künstliche Intelligenz. BöttcherIT Verlag.
Martin Lauer, Martin Riedmiller (2009) Participating in Autonomous Robot Competitions: Experiences from a Robot Soccer Team.
2008
T. Gabel, M. Riedmiller (September 2008) Joint Equilibrium Policy Search for Multi-Agent Scheduling Problems. In Proceedings of the 6th Conference on Multiagent System Technologies (MATES 2008). Springer. Kaiserslautern.
T. Gabel, M. Riedmiller (September 2008) Gradient-Descent Policy Search for Job-Shop Scheduling Problems. In Online Proceedings of the 18th International Conference on Automated Planning and Scheduling (ICAPS 2008). AAAI Press. Sydney, Australia.
Thomas Gabel, Martin Riedmiller (June 2008) Evaluation of Batch-Mode Reinforcement Learning for Solving DEC-MDPs with Changing Action Sets. In Proceedings of the 8th Biennial European Workshop on Reinforcement Learning (EWRL 2008). Springer. Lille, France.
T. Kietzmann, S. Lange, M. Riedmiller (2008) Incremental GRLVQ: Learning Relevant Features for 3D Object Recognition. Neurocomputing 71 (13-15) pp. 2868–2879. Elsevier.
S. Riedmiller, M. Riedmiller (2008) Konzepte und Anwendungen lernfähiger Systeme (in German). Medizinisch Orthopädische Technik 5 (December) Verlagsgesellschaft Tischler GmbH.
T. Gabel, M. Riedmiller (2008) Increasing Precision of Credible Case-Based Inference. In Proceedings of the 9th European Conference on Case-Based Reasoning (ECCBR 2008). Springer. Trier, Germany.
Thomas Gabel, Martin Riedmiller (2008) Reinforcement Learning for DEC-MDPs with Changing Action Sets and Partially Ordered Dependencies. In Proceedings of the 7th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2008). Springer. Estoril, Portugal.
Thomas Gabel, Martin Riedmiller, Florian Trost (2008) A Case Study on Improving Defense Behavior in Soccer Simulation 2D: The NeuroHassle Approach. In RoboCup 2008: Robot Soccer World Cup XII, LNCS. Springer. Suzhou, China.
Martin Riedmiller, Roland Hafner, Sascha Lange, Martin Lauer (2008) Learning to Dribble on a Real Robot by Success and Failure. In Proceedings of the 2008 International Conference on Robotics and Automation (ICRA 2008). Springer. Pasadena CA.
2007
Thomas Gabel, Martin Riedmiller (2007) An Analysis of Case-Based Value Function Approximation by Approximating State Transition Graphs. In Proceedings of the 7th International Conference on Case-Based Reasoning (ICCBR 2007). Springer. Belfast, UK.
Heiko Müller, Martin Lauer, Roland Hafner, Sascha Lange, Martin Riedmiller (2007) Making a Robot Learn to Play Soccer Using Reward and Punishment. In Proceedings of the German Conference on Artificial Intelligence, KI 2007. Osnabrück, Germany.
Stephan Timmer, Martin Riedmiller (2007) Safe Q-Learning on Complete History Spaces. In Proceedings of the 18th European Conference on Machine Learning, ECML 07. Springer. Warsaw, Poland.
Martin Riedmiller, Mike Montemerlo, Hendrik Dahlkamp (2007) Learning to Drive in 20 Minutes. In Proceedings of the FBIT 2007 conference. Springer. Jeju, Korea.
T. Gabel, M. Riedmiller (2007) Adaptive Reactive Job-Shop Scheduling with Reinforcement Learning Agents. International Journal of Information Technology and Intelligent Computing 24 (4) IEEE Press.
Arne Voigtländer, Sascha Lange, Martin Lauer, Martin Riedmiller (2007) Real-time 3D Ball Recognition using Perspective and Catadioptric Cameras. In Proceedings of the European Conference on Mobile Robots ECMR 2007. Freiburg, Germany.
Roland Hafner, Martin Riedmiller (2007) Neural Reinforcement Learning Controllers for a Real Robot Application. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 07). Rome, Italy.
T. Gabel, M. Riedmiller (2007) Scaling Adaptive Agent-Based Reactive Job-Shop Scheduling to Large-Scale Problems. In Proceedings of the IEEE Symposium on Computational Intelligence in Scheduling (CI-Sched 2007). Honolulu, USA.
T. Gabel, M. Riedmiller (2007) On Experiences in a Complex and Competitive Gaming Domain: Reinforcement Learning Meets RoboCup. In Proceedings of the IEEE Symposium on Computational Intelligence and Games (CIG 2007). Honolulu, USA.
S. Timmer, M. Riedmiller (2007) Fitted Q Iteration with CMACs. In Proceedings of the IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning (ADPRL 07). Honolulu, USA.
Martin Riedmiller, Jan Peters, Stefan Schaal (2007) Evaluation of Policy Gradient Methods and Variants on the Cart-Pole Benchmark. In Proceedings of the IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning (ADPRL 07). Honolulu, USA.
T. Gabel, M. Riedmiller (2007) On a Successful Application of Multi-Agent Reinforcement Learning to Operations Research Benchmarks. In Proceedings of the IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning (ADPRL 07). Honolulu, USA.
2006
T. Gabel, M. Riedmiller (2006) Learning a Partial Behavior for a Competitive Soccer Agent. Künstliche Intelligenz 2 pp. 18-23. Springer.
M. Riedmiller, T. Gabel, R. Hafner, S. Lange, M. Lauer (2006) Die Brainstormers: Entwurfsprinzipien lernfähiger autonomer Roboter (in German). Informatik-Spektrum 20(3) pp. 175-190. Springer.
M. Lauer, S. Lange, M. Riedmiller (2006) Motion Estimation of Moving Objects for Autonomous mobile Robots. Künstliche Intelligenz 1 pp. 11-17. Springer.
T. Gabel, M. Riedmiller (2006) Multi-Agent Case-Based Reasoning for Cooperative Reinforcement Learners. In Proceedings of the 8th European Conference on Case-Based Reasoning (ECCBR 2006). Fethiye, Turkey.
S. Lange, M. Riedmiller (2006) Appearance Based Robot Discrimination using Eigenimages. In Proceedings of the RoboCup Symposium 2006. Bremen, Germany.
T. Gabel, M. Riedmiller (2006) Reducing Policy Degradation in Neuro-Dynamic Programming. In Proceedings of the 14th European Symposium on Artificial Neural Networks (ESANN 2006). Bruges, Belgium.
S. Timmer, M. Riedmiller (2006) Abstract State Spaces with History. In Proceedings of the NAFIPS Conference, the North American Fuzzy Information Processing Society, Montreal, Canada.
T. Gabel, R. Hafner, S. Lange, M. Lauer, M. Riedmiller (2006) Bridging the Gap: Learning in the RoboCup Simulation and Midsize League. In Proceedings of the 7th Portuguese Conference on Automatic Control (Controlo 2006). Portuguese Society of Automatic Control. Porto, Portugal.
2005
D. Withopf, M. Riedmiller (October 2005) Comparing Different Methods to Speed-Up Reinforcement Learning in a Complex Domain. In Proc. of the Int. Conference on Systems, Man and Cybernetics, 2005. Big Island, USA.
S. Timmer, M. Riedmiller (October 2005) Learning Policies for Abstract States. In Proc. of the Int. Conference on Systems, Man and Cybernetics, 2005. Big Island, USA.
M. Riedmiller (October 2005) Neural Fitted Q Iteration - First experiences with a data efficient neural Reinforcement Learning Method. In Lecture Notes in Computer Science: Proc. of the European Conference on Machine Learning, ECML 2005. pp. 317-328. Porto, Portugal.
M. Riedmiller (October 2005) Neural Reinforcement Learning to Swing-Up and Balance a Real Pole. In Proc. of the Int. Conference on Systems, Man and Cybernetics, 2005. Big Island, USA.
T. Gabel, M. Riedmiller (August 2005) CBR for State Value Function Approximation in Reinforcement Learning. In Proceedings of the International Conference on Case Based Reasoning 2005 (ICCBR 2005). Springer. Chicago, USA.
D. Withopf, M. Riedmiller (2005) Effective Methods for Reinforcement Learning in Large Multi-Agent Domains. it - Information Technology Journal 5 (47) pp. 241-249. Oldenbourg.
R. Hafner, M. Riedmiller (2005) Case study: control of a real world system in CLSquare. In Proceedings of the NIPS Workshop on Reinforcement Learning Comparisons, Whistler, British Columbia, Canada.
M. Lauer, S. Lange, M. Riedmiller (2005) Calculating the Perfect Match: An Efficient and Accurate Approach for Robot Self-Localisation. In RoboCup-2005: Robot Soccer World Cup IX, LNCS. Springer.
A. Sung, A. Merke, M. Riedmiller (2005) Reinforcement Learning using a Grid-Based Function Approximator. In Biomimetic Neural Learning for Intelligent Robots.
2004
M. Riedmiller (September 2004) Machine Learning for Autonomous Robots. Keynote Speech. In Proceedings of the KI '04, Ulm, Germany.
R. Schoknecht, M. Spott, M. Riedmiller (May 2004) Fynesse: An Architecture for Integrating Prior Knowledge in Autonomously Learning Agents. Soft Computing 8 (6) pp. 397-408. Springer.
Enrico Pagello, Emanuele Menegatti, Daniel Polani, Ansgar Bredenfeld, Paulo Costa, Thomas Christaller, Adam Jacoff, Martin Riedmiller, Alessandro Saffiotti, Elizabeth Sklar, Takashi Tomoichi (2004) RoboCup-2003: New Scientific and Technical Advances. AI Magazine. American Association for Artificial Intelligence (AAAI).
M. Lauer, M. Riedmiller (2004) Reinforcement Learning for Stochastic Cooperative Multi-Agent Systems. In Proceedings of the AAMAS '04, New York.
A. Merke, S. Welker, M. Riedmiller (2004) Line Based Robot Localisation under Natural Light Conditions. In ECAI Workshop on Agents in Dynamic and Real-Time Environments.
S. Lange, M. Riedmiller (2004) Evolution of Computer Vision Subsystems in Robot Navigation and Image Classification Tasks. In RoboCup-2004: Robot Soccer World Cup VIII, LNCS. Springer.
2003
R. Schoknecht, M. Riedmiller (November 2003) Reinforcement Learning on Explicitly Specified Time-scales. Neural Computing & Applications 12 (2) pp. 61-80. Springer.
M. Arbatzat, S. Freitag, M. Fricke, R. Hafner, C. Heermann, K. Hegelich, A. Krause, J. Krüger, M. Lauer, M. Lewandowski, A. Merke, H. Müller, M. Riedmiller, J. Schanko, M. Schulte-Hobein, M. Theile, S. Welker, D. Withopf (2003) Creating a Robot Soccer Team from Scratch: the Brainstormers Tribots. In CD attached to RoboCup 2003 Proceedings, Padua, Italy.
M. Riedmiller, A. Merke, W. Nowak, M. Nickschas, D. Withopf (2003) Brainstormers 2003 - Team Description. In CD attached to RoboCup 2003 Proceedings, Padua, Italy.
Roland Hafner, Martin Riedmiller (2003) Reinforcement Learning on an omnidirectional mobile robot. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), Las Vegas.
R. Schoknecht, M. Riedmiller (2003) Learning to Control at Multiple Time Scales. In Proceedings of the Thirteenth International Conference on Artificial Neural Networks (ICANN). pp. 479–487. Springer.
Martin Lauer, Martin Riedmiller, Thomas Ragg, Walter Baum, Michael Wigbers (2003) The Smaller the Better: Comparison of Two Approaches for Sales Rate Prediction. In Advances in Intelligent Data Analysis V. pp. 451-461. Springer.
M. Riedmiller, A. Merke (2003) Using machine learning techniques in complex multi-agent domains. In Adaptivity and Learning. Springer.
2002
R. Schoknecht, M. Riedmiller (2002) Speeding up Reinforcement Learning with Multi-Step Actions. In Proceedings of International Conference on Artificial Neural Networks, ICANN'02. Madrid, Spain.
M. Lauer, M. Riedmiller (2002) Generalisation in Reinforcement Learning and the Use of Observation-Based Learning. In Proceedings of the FGML Workshop 2002. pp. 100-107.
2001
R. Schoknecht, M. Riedmiller (2001) Using Multi-step Actions for Faster Reinforcement Learning. In Fifth European Workshop on Reinforcement Learning.
W. Hunger, M. Riedmiller (2001) Scheduling with adaptive agents - an empirical evaluation. In Fifth European Workshop on Reinforcement Learning.
A. Merke, M. Riedmiller (2001) Karlsruhe Brainstormers - A Reinforcement Learning Way to Robotic Soccer II. In RoboCup-2001: Robot Soccer World Cup V, LNCS. pp. 322-327. Springer.
Martin Riedmiller (2001) Wie Roboter von Gehirnen lernen. In Kosmos Gehirn. BMBF.
Martin Riedmiller, Andrew Moore, Jeff Schneider (2001) Reinforcement Learning for Cooperating and Communicating Reactive Agents in Electrical Power Grids. In Balancing Reactivity and Social Deliberation in Multi-agent Systems. pp. 137-149. Springer, LNAI 2103.
2000
M. Riedmiller (2000) Concepts and facilities of a neural reinforcement learning control architecture for technical process control. Neural Computing & Applications 8 pp. 323-338. Springer.
M. Riedmiller, A. Merke, D. Meier, A. Hoffmann, A. Sinner, O. Thate, C. Kill, R. Ehrmann (2000) Karlsruhe Brainstormers - A Reinforcement Learning Way to Robotic Soccer. In RoboCup-2000: Robot Soccer World Cup IV, LNCS. Springer.
S. Buck, M. Riedmiller (2000) Learning Situation Dependent Success Rates Of Actions In A RoboCup Scenario. In Proceedings of PRICAI '00. pp. 809. Melbourne, Australia.
M. Lauer, M. Riedmiller (2000) An Algorithm for Distributed Reinforcement Learning in Cooperative Multi-Agent Systems. In Proceedings of International Conference on Machine Learning, ICML '00. pp. 535-542. Stanford, CA.
R. Schoknecht, M. Spott, M. Riedmiller (2000) Design of self-learning controllers using Fynesse. In Deep Fusion of Computational and Symbolic Processing. Physica.
1999
M. Spott, R. Schoknecht, M. Riedmiller (1999) Approaches for the integration of a priori knowledge into an autonomously learning control architecture. In Proc. of EUFIT99. Aachen, Germany.
J. Schneider, W. Wong, A. Moore, M. Riedmiller (1999) Distributed Value Functions. In Proceedings of International Conference on Machine Learning, ICML'99. pp. 371-378. Bled, Slovenia.
R. Schoknecht, M. Spott, F. Liekweg, M. Riedmiller (1999) Search Space Reduction for Strategy Learning in Sequential Decision Processes. In Proceedings of the International Conference on Neural Information Processing (ICONIP '99). Perth, Australia.
M. Riedmiller, S. Buck, A. Merke, R. Ehrmann, O. Thate, S. Dilger, A. Sinner, A. Hofmann, L. Frommberger (1999) Karlsruhe Brainstormers - Design Principles. In RoboCup-1999: Robot Soccer World Cup III, LNCS. Springer.
R. Schoknecht, M. Riedmiller (1999) Using reinforcement learning for engine control. In Proceedings of International Conference on Artificial Neural Networks, ICANN'99. Edinburgh.
T. Gaul, M. Spott, M. Riedmiller, R. Schoknecht (1999) Fuzzy-Neuro-Controlled Verified Instruction Scheduler. In Conference-Proceedings. New York.
S. Riedmiller, M. Riedmiller (1999) A neural reinforcement learning approach to learn local dispatching policies in production scheduling. In Proceedings of International Joint Conference on Artificial Intelligence, IJCAI'99. Stockholm.
M. Riedmiller, M. Spott, J. Weisbrod (1999) Fynesse: A hybrid architecture for self-learning control. In Knowledge Based Neuro Computing. MIT Press.
1998
R. Schoknecht, M. Spott, M. Riedmiller (1998) Design of self-learning controllers using Fynesse (in German). In Fuzzy Karlsruhe '98. Karlsruhe.
K. Santa, M. Mews, M. Riedmiller (1998) A Neural Approach for the Control of Piezoelectric Micromanipulation Robots. Journal of Intelligent and Robotic Systems 22 pp. 351-374.
M. Riedmiller (1998) High quality thermostat control by reinforcement learning - a case study. In Proceedings of the CONALD Workshop 1998. Carnegie Mellon University.
M. Riedmiller, R. Schoknecht (1998) Self-learning controllers in automotive applications (in German). In Proceedings of VDI/GMA -Aussprachetag. Berlin.
M. Spott, M. Riedmiller (1998) Improving a priori control knowledge by reinforcement learning. In Proceedings in Artificial Intelligence: Fuzzy-Neuro-Systems '98. pp. 146–153. infix Verlag. Munich, Germany.
M. Riedmiller (1998) Reinforcement learning without an explicit terminal state. In Proc. of the International Joint Conference on Neural Networks, IJCNN '98. Anchorage, Alaska.
1997
M. Riedmiller, M. Wigbers (1997) A New Method for the Analysis of Neural Reference Model Control. In IEEE International Conference on Neural Networks ICNN '97. Houston, Texas.
G. Goos, M. Spott, J. Weisbrod, M. Riedmiller (1997) Interpretation und Adaption unscharfer Relationen innerhalb einer hybriden selbstlernenden Steuerungsarchitektur (in German). In Proceedings in Artificial Intelligence: Fuzzy-Neuro-Systeme '97. pp. 332–339. infix Verlag. Soest, Germany.
M. Riedmiller (1997) Generating continuous control signals for reinforcement controllers using dynamic output elements. In European Symposium on Artificial Neural Networks, ESANN'97. Bruges.
M. Riedmiller, M. Spott, J. Weisbrod (1997) First Results on the Application of the Fynesse Control Architecture. In IEEE 1997 International Aerospace Conference. pp. 421–434. Aspen, USA.
M. Riedmiller, S. Gutjahr, J. Klingemann (1997) Predicting exchange rates using neural networks. In Joint International Conference on Expert Systems (PACES/SPICES '97). Singapore.
A. Stahlberger, M. Riedmiller (1997) Fast Network Pruning and Feature Extraction using the Unit-OBS Algorithm. In Advances in Neural Information Processing Systems 9. MIT Press.
1996
M. Riedmiller (1996) Application of sequential reinforcement learning to control dynamic systems. In IEEE International Conference on Neural Networks (ICNN '96). Washington.
M. Riedmiller (1996) Learning to Control Dynamic Systems. In Proceedings of the 13th European Meeting on Cybernetics and Systems Research 1996 (EMCSR '96). Vienna.
M. Riedmiller (1996) Selbständig lernende neuronale Steuerungen. VDI-Verlag.
1995
M. Riedmiller, B. Janusz (1995) Using Neural Reinforcement Controllers in Robotics. In Proceedings of the 8th Australian Conference on Artificial Intelligence (AI '95). World Scientific Publishing. Singapore.
M. Riedmiller, B. Janusz (1995) Self learning control of a mobile robot. In Proceedings of the IEEE ICNN '95. Perth, Australia.
D. Koll, M. Riedmiller, H. Braun (1995) Massively Parallel Training of Multi Layer Perceptrons with Irregular Topologies. In Proceedings of the International Conference on Artificial Neural Networks and Genetic Algorithms ICANNGA95. Springer (Vienna). Alès, France.
1994
M. Riedmiller (October 1994) Aspects of Learning Neural Control. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics. San Antonio, Texas.
M. Riedmiller (1994) Advanced Supervised Learning in Multi-layer Perceptrons - From Backpropagation to Adaptive Learning Algorithms. Int. Journal of Computer Standards and Interfaces 16 pp. 265-278.
1993
M. Riedmiller (October 1993) Controlling an Inverted Pendulum by Neural Plant Identification. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics. Le Touquet.
M. Riedmiller (September 1993) Untersuchungen zu Konvergenz und Generalisierungsverhalten überwachter Lernverfahren mit dem SNNS (in German). In SNNS 1993 Workshop-Proceedings. pp. 107-116. Stuttgart.
M. Riedmiller, H. Braun (1993) A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm. In Proceedings of the IEEE International Conference on Neural Networks (ICNN). pp. 586-591. San Francisco.
M. Riedmiller, H. Braun (1993) RPROP: A Fast and Robust Backpropagation Learning Strategy. In Fourth Australian Conference on Neural Networks. pp. 169-172. Melbourne.
1992
M. Riedmiller, H. Braun (1992) RPROP: A Fast Adaptive Learning Algorithm. In International Symposium on Computer and Information Science VII. pp. 279-286. Antalya, Turkey.
1986
M. Riedmiller (1986), Vom Byte zur Action, Computronic Journal, January/February 1986
1985
M. Riedmiller (1985), Totenkopf for ZX Spectrum, CPUJournal, July/August 1985
M. Riedmiller (1985), Spukhaus for ZX Spectrum (Game of the Month), CPUJournal, March/April 1985
M. Riedmiller (1985), Spinnen for ZX81 (Game of the Month), CPUJournal, January/February 1985