Seigler, Thomas M

Our research group studies human-in-the-loop control behavior. Specifically, we conduct experiments in which human subjects interact with dynamic systems, and we use the data collected in these experiments to model each subject's control behavior. This research addresses two main questions:


Q1: What control strategies do humans learn? The objective of Q1 is to identify the strategies that humans employ to control dynamic systems; these strategies are not currently well understood. The internal model hypothesis (IMH) of neuroscience proposes that the brain constructs models of system dynamics and uses these models for control. However, the evidence to date in support of the IMH is not conclusive, and competing theories have not been ruled out. Our previous experimental work provides evidence that, for some systems, a fundamental component of human learning is updating feedforward control behavior until it approximates the inverse plant dynamics. This observation supports the IMH. Although this indicates that, for some systems, humans construct and use internal models, significant questions remain. Do humans always attempt to use inverse plant dynamics in feedforward? How do plant characteristics affect the control strategies that humans employ? For example, humans may adopt different strategies for nonminimum-phase systems, since exact dynamic inversion is unstable (see the illustration below). Furthermore, nonlinearities, high system order, instability, and high relative degree may make it difficult for humans to construct accurate models, in which case approximating the inverse dynamics in feedforward may not be possible. Thus, we do not yet fully understand what control strategies humans learn; Q1 aims to address these open questions.
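As a simple illustration of the nonminimum-phase issue (the transfer function below is a hypothetical example, not one of the plants used in our experiments): if the plant is G(s) and the feedforward controller is F(s), then the reference-to-output map is y = G(s)F(s)r, so exact tracking requires F(s) = G(s)^{-1}. For a nonminimum-phase plant this inverse is unstable:

\[
G(s) = \frac{1-s}{(s+1)(s+2)}
\quad\Longrightarrow\quad
G^{-1}(s) = \frac{(s+1)(s+2)}{1-s} .
\]

The right-half-plane zero of G at s = 1 becomes a right-half-plane pole of G^{-1}, so exact dynamic inversion cannot be implemented as a stable feedforward controller, and a human (like any controller) must adopt some alternative strategy for such plants.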


Q2: How do humans learn to control unknown dynamic systems? The objective of Q2 is to identify the learning mechanisms that allow humans to adapt to and control unknown dynamic systems. We do not yet understand how humans learn to control dynamic systems; that is, we do not understand the learning process itself. Fundamental questions include: Do humans require persistency of excitation to learn? How do humans avoid transient instability? How does human learning compare with existing control methods (e.g., adaptive control)? The proposed analyses in this project include examining how humans learn at different frequencies, comparing human learning to adaptive control, and exploring how humans use persistently exciting signals to learn.
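For background, the standard notion of persistency of excitation from the adaptive-control literature (stated here only as context for the question above, not as a result of this project): a bounded regressor signal φ(t) is persistently exciting if there exist α > 0 and T > 0 such that

\[
\int_{t}^{t+T} \phi(\tau)\,\phi(\tau)^{\mathsf{T}}\, d\tau \;\succeq\; \alpha I
\qquad \text{for all } t \ge 0 .
\]

In classical adaptive control, this condition guarantees parameter convergence; whether human learning requires an analogous richness in the command signal is one of the questions Q2 addresses.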


A Control-Systems Approach to Understanding Human Learning (NSF Award 1405257)

The objective of this project is to identify the strategies that humans employ to control dynamic systems. Our research approach is to analyze data obtained from human-in-the-loop (HITL) experiments using subsystem identification (SSID) algorithms. The SSID algorithms use closed-loop data from HITL experiments to model a human's feedforward and feedback controls. 
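To make the modeling setup concrete, here is a minimal sketch in Python (for illustration only; it is not the SSID algorithms developed in this project). It assumes the human's control signal can be written as an FIR feedforward filter acting on the reference plus an FIR feedback filter acting on the tracking error, and fits both filters from recorded reference, output, and control data by ordinary least squares. The function name fit_ff_fb and the synthetic data are hypothetical.

# Minimal sketch (not this project's SSID algorithms): fit FIR feedforward and
# feedback filters from recorded closed-loop data, assuming the control has the
# form u(k) = sum_i f_i r(k-i) + sum_i c_i e(k-i), where e = r - y.
import numpy as np

def fit_ff_fb(r, y, u, n=5):
    """Least-squares fit of FIR feedforward (on r) and feedback (on e = r - y) taps."""
    e = r - y
    rows = []
    for k in range(n, len(u)):
        # regressor: the past n samples of the reference and of the error
        rows.append(np.concatenate([r[k-n:k][::-1], e[k-n:k][::-1]]))
    Phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(Phi, u[n:], rcond=None)
    return theta[:n], theta[n:]   # feedforward taps, feedback taps

# Usage with synthetic data (illustration only):
rng = np.random.default_rng(0)
r = rng.standard_normal(500)
y = 0.5 * np.roll(r, 1)                                   # stand-in "plant output"
u = 0.8 * np.roll(r, 1) + 0.3 * np.roll(r - y, 1) + 0.01 * rng.standard_normal(500)
f_taps, c_taps = fit_ff_fb(r, y, u)                       # recovers roughly 0.8 and 0.3

In practice, closed-loop data introduces correlation between the noise and the regressors, which biases ordinary least squares; this is one reason specialized SSID algorithms are needed.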


Computational methods:

We have developed specialized SSID algorithms that use a multi-convex optimization approach.
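As general background on this class of methods (not the specific algorithms developed here): a multi-convex objective is convex in each block of variables when the other blocks are held fixed, and a common strategy is to alternate between the resulting convex subproblems. The sketch below illustrates the idea on a bi-convex low-rank factorization problem; the function name alternating_min and the test matrix are hypothetical.

# Background sketch of alternating minimization for a bi-convex objective
# (low-rank factorization ||M - U @ V.T||_F^2); this illustrates the general idea
# of multi-convex optimization, not the project's SSID formulation.
import numpy as np

def alternating_min(M, rank=2, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    for _ in range(iters):
        # Each step is an ordinary least-squares (convex) subproblem.
        U = np.linalg.lstsq(V, M.T, rcond=None)[0].T   # fix V, solve for U
        V = np.linalg.lstsq(U, M, rcond=None)[0].T     # fix U, solve for V
    return U, V

M = np.outer(np.arange(1, 7), np.arange(1, 5)).astype(float)   # rank-1 test matrix
U, V = alternating_min(M, rank=1)
print(np.linalg.norm(M - U @ V.T))                             # should be near zero

Because each alternation step solves a convex subproblem exactly, the objective is non-increasing at every iteration, although convergence to a global minimizer is not guaranteed for multi-convex problems in general.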


Software:

C++ and MATLAB


Students: 

Sajad Koushkbaghi (ME Ph.D. Student and Research Assistant)


Collaborators:
This research is conducted in collaboration with Jesse B. Hoagg (Associate Professor of Mechanical Engineering at UK).


Grants:

A Control-Systems Approach to Understanding Human Learning (NSF Award 1405257)

Center for Computational Sciences