Module Details

Module Code: COMP9069
Title: Robotics & Autonomous Systems
Long Title: Robotics & Autonomous Systems
NFQ Level: Expert
Valid From: Semester 1 - 2019/20 ( September 2019 )
Duration: 1 Semester
Credits: 5
Field of Study: 4811 - Computer Science
Module Delivered in: 2 programme(s)
Module Description: Robotics and autonomous systems have the potential to transform many industries, such as manufacturing, construction and logistics. Traditional automated system design requires highly controlled, more-or-less stationary environments for correct operation, so such systems have a limited range of applications. The integration of machine learning into robotic systems allows robots to overcome this constraint and thus operate in unconstrained environments. Recent developments in robotics middleware that facilitate the application of machine learning approaches have allowed the development of robots that can modify their behaviour under changing environmental conditions, continuously improve their operation and adapt to system failures. This module will focus on utilising contemporary robotics middleware and the application of machine learning to both articulated systems (e.g. robotic arms) and autonomous systems (e.g. quadcopters and rovers).
 
Learning Outcomes
On successful completion of this module the learner will be able to:
# Learning Outcome Description
LO1 Develop and simulate models for articulated and autonomous robotic systems.
LO2 Evaluate the applicability of machine learning in robotics.
LO3 Adapt machine learning algorithms to robotic motion control and autonomous applications.
LO4 Appraise the application of deep learning to robotic systems.
Dependencies
Module Recommendations

This is prior learning (or a practical skill) that is strongly recommended before enrolment in this module. You may enrol in this module if you have not acquired the recommended learning but you will have considerable difficulty in passing (i.e. achieving the learning outcomes of) the module. While the prior learning is expressed as named MTU module(s) it also allows for learning (in another module or modules) which is equivalent to the learning specified in the named module(s).

Incompatible Modules
These are modules which have learning outcomes that are too similar to the learning outcomes of this module. You may not earn additional credit for the same learning and therefore you may not enrol in this module if you have successfully completed any modules in the incompatible list.
No incompatible modules listed
Co-requisite Modules
No Co-requisite modules listed
Requirements

This is prior learning (or a practical skill) that is mandatory before enrolment in this module is allowed. You may not enrol on this module if you have not acquired the learning specified in this section.

No requirements listed
 
Indicative Content
Modelling and Simulating Robots and Autonomous Systems
Spatial descriptions and transformations, forward kinematics, inverse kinematics, Jacobian matrices, modelling non-rigid robots, autonomous system kinematics. Uncertainty in robotic models. Simulation and programming tools and environments such as V-REP, ROS and Gazebo.
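The flavour of the forward-kinematics material can be shown with a minimal sketch: planar homogeneous transforms chained over a hypothetical two-link arm. The link lengths and joint angles below are invented for illustration and are not tied to any specific platform used in the module.

```python
import numpy as np

def planar_transform(theta, length):
    """Homogeneous transform for a revolute joint rotated by theta,
    followed by a translation of `length` along the rotated x-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, length * c],
                     [s,  c, length * s],
                     [0,  0, 1.0]])

def forward_kinematics(joint_angles, link_lengths):
    """Chain the per-link transforms and return the end-effector (x, y)."""
    T = np.eye(3)
    for theta, l in zip(joint_angles, link_lengths):
        T = T @ planar_transform(theta, l)
    return T[0, 2], T[1, 2]

# Illustrative two-link arm: 1.0 m and 0.6 m links at 30 and 45 degrees.
print(forward_kinematics([np.pi / 6, np.pi / 4], [1.0, 0.6]))
```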
Reinforcement Learning
Elements of RL, Finite Markov Decision Processes, Policies and Value Functions, Partially Observable MDPs, Inverse Reinforcement Learning, Bellman Equations, Optimal Value Functions, Model-Based vs Model-Free Algorithms, Dynamic Programming, Monte Carlo Methods, Temporal-Difference Prediction and Q-Learning.
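As a concrete illustration of the temporal-difference material, the following is a minimal sketch of tabular Q-learning on a toy chain MDP; the states, dynamics and reward are invented purely for illustration.

```python
import random
import numpy as np

n_states, n_actions = 5, 2            # toy chain MDP, invented for illustration
alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = np.zeros((n_states, n_actions))

def step(state, action):
    """Toy dynamics: action 1 moves right, action 0 moves left;
    reaching the last state yields reward 1 and ends the episode."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    done = next_state == n_states - 1
    return next_state, reward, done

for episode in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy behaviour policy.
        a = random.randrange(n_actions) if random.random() < epsilon else int(np.argmax(Q[s]))
        s_next, r, done = step(s, a)
        # Q-learning TD update: bootstrap off the greedy value of the next state.
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) * (not done) - Q[s, a])
        s = s_next

print(Q)
```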
Reinforcement Learning in Robotic Systems
Searching for parametric motor primitives, adapting parametric motor primitives to changing conditions, control prioritisation for motor primitives. Autonomous systems map building, localisation, path planning, obstacle avoidance and navigation in dynamic environments.
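The map and path-planning vocabulary in this topic can be made concrete with a small sketch. The example below is a classical breadth-first planner rather than an RL method, shown only to illustrate what a collision-free path over an occupancy grid looks like; the grid, start and goal are hand-coded stand-ins for a map produced by a real mapping pipeline.

```python
from collections import deque

# Hand-coded occupancy grid (1 = obstacle), invented for illustration.
grid = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def bfs_path(grid, start, goal):
    """Breadth-first search returning a shortest obstacle-free path."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in came_from):
                came_from[nxt] = cell
                frontier.append(nxt)
    return None  # no collision-free path found

print(bfs_path(grid, start=(0, 0), goal=(4, 4)))
```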
Deep Reinforcement Learning in Robotics
Radial Basis Function Artificial Neural Networks, Policy Gradient methods, TD(λ), and Deep Q-Learning applications in robotic systems. Usage of OpenAI Gym and TensorFlow.
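A minimal sketch of how these pieces typically fit together, assuming the Gym >= 0.26 reset/step signatures and using CartPole-v1 as a stand-in for a robotics environment. The Q-network here is untrained; a full Deep Q-Learning agent would add a replay buffer, a target network and an exploration schedule.

```python
import gym                      # assumes Gym >= 0.26 reset/step signatures
import numpy as np
import tensorflow as tf

env = gym.make("CartPole-v1")   # stand-in task; a robotics project would wrap e.g. a Gazebo simulation
obs_dim = env.observation_space.shape[0]
n_actions = env.action_space.n

# Tiny Q-network: observation in, one action-value per discrete action out.
q_net = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(obs_dim,)),
    tf.keras.layers.Dense(n_actions),
])

obs, _ = env.reset(seed=0)
done, total_reward = False, 0.0
while not done:
    # Greedy action from the (untrained) Q-network; DQN training, replay and
    # epsilon-greedy exploration would plug in around this loop.
    q_values = q_net(np.asarray(obs, dtype=np.float32)[None, :]).numpy()
    action = int(np.argmax(q_values))
    obs, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated

print("episode return:", total_reward)
```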
Module Content & Assessment
Assessment Breakdown: Coursework 100.00%

Assessments

Coursework
Assessment Type: Project; % of Total Mark: 40
Timing: Week 7; Learning Outcomes: 1,2
Assessment Description
Project developing a simulation model of an articulated or autonomous robotic system and evaluating the fidelity of the model developed.
Assessment Type: Project; % of Total Mark: 60
Timing: Sem End; Learning Outcomes: 3,4
Assessment Description
Project applying machine learning to a robotic or autonomous system, iterating on and evaluating the applied methodology under environmental and system changes.
No End of Module Formal Examination
Reassessment Requirement
Coursework Only
This module is reassessed solely on the basis of re-submitted coursework. There is no repeat written examination.

The University reserves the right to alter the nature and timings of assessment

 

Module Workload

Workload: Full Time
Workload Type | Contact Type | Workload Description | Frequency | Average Weekly Learner Workload | Hours
Lecture | Contact | Lecture delivering theory underpinning learning outcomes. | Every Week | 2.00 | 2
Lab | Contact | Practical computer-based lab supporting learning outcomes. | Every Week | 2.00 | 2
Independent & Directed Learning (Non-contact) | Non Contact | Independent & directed learning | Every Week | 3.00 | 3
Total Hours 7.00
Total Weekly Learner Workload 7.00
Total Weekly Contact Hours 4.00
Workload: Part Time
Workload Type | Contact Type | Workload Description | Frequency | Average Weekly Learner Workload | Hours
Lecture | Contact | Lecture delivering theory underpinning learning outcomes. | Every Week | 2.00 | 2
Lab | Contact | Practical computer-based lab supporting learning outcomes. | Every Week | 2.00 | 2
Independent & Directed Learning (Non-contact) | Non Contact | Independent & directed learning | Every Week | 3.00 | 3
Total Hours 7.00
Total Weekly Learner Workload 7.00
Total Weekly Contact Hours 4.00
 
Module Resources
Recommended Book Resources
  • Sutton, Richard S. and Barto, Andrew G. (1998), Reinforcement Learning: An Introduction, MIT Press, Cambridge, MA, [ISBN: 9780262193986].
Supplementary Book Resources
  • Jens Kober and Jan Peters. (2014), Learning Motor Skills: From Algorithms to Robot Experiments, Springer International Publishing, [ISBN: 9783319031941].
  • Todd Hester. (2013), TEXPLORE: Temporal Difference Reinforcement Learning for Robots and Time-Constrained Domains, Springer International Publishing, [ISBN: 9783319011677].
Recommended Article/Paper Resources
  • Kober, Jens and Bagnell, J Andrew and Peters, Jan. (2013), Reinforcement learning in robotics: A survey, The International Journal of Robotics Research, 32, no 11, pp 1238-1274.
  • Cully, Antoine and Clune, Jeff and Tarapore, Danesh and Mouret, Jean-Baptiste. (2015), Robots that can adapt like animals, Nature, 521, pp 503-507.
  • Ijspeert, Auke Jan. (2008), Central pattern generators for locomotion control in animals and robots: a review, Neural Networks (Elsevier), Vol 21, No 4, pp 642-653.
Supplementary Article/Paper Resources
  • Chatzilygeroudis, Konstantinos and Rama, Roberto and Kaushik, Rituraj and Goepp, Dorian and Vassiliades, Vassilis and Mouret, Jean-Baptiste. (2017), Black-Box Data-efficient Policy Search for Robotics, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
  • Cutler, Mark and How, Jonathan P. (2015), Efficient reinforcement learning for robots using informative simulated priors, IEEE International Conference on Robotics and Automation (ICRA), pp 2605-2612.
  • Abbeel, Pieter and Coates, Adam and Quigley, Morgan and Ng, Andrew Y. (2007), An application of reinforcement learning to aerobatic helicopter flight, Advances in neural information processing systems, pp 1-8.
Other Resources
 
Module Delivered in
Programme Code | Programme | Semester | Delivery
CR_KARIN_9 | Master of Science in Artificial Intelligence | 1 | Elective
CR_EINMS_9 | Postgraduate Certificate in Intelligent Manufacturing Systems | 2 | Mandatory