Proto-value functions

In applied mathematics, proto-value functions (PVFs) are automatically learned basis functions that are useful in approximating task-specific value functions, providing a compact representation of the powers of transition matrices. They provide a novel framework for solving the credit assignment problem: Markov decision processes (MDPs) and reinforcement learning problems are approached using multiscale spectral and manifold learning methods. Proto-value functions are generated by spectral analysis of a graph, using the graph Laplacian.

Proto-value functions were first introduced in the context of reinforcement learning by Sridhar Mahadevan in his paper, Proto-Value Functions: Developmental Reinforcement Learning at ICML 2005.[1]

Motivation

Value function approximation is a critical component of solving MDPs defined over a continuous state space. A good function approximator allows a reinforcement learning (RL) agent to accurately represent the value of any state it has experienced, without explicitly storing its value. Linear function approximation using basis functions, such as radial basis functions, polynomial state encodings, and CMACs, is a common way of constructing a value function approximation. However, the parameters associated with these basis functions often require significant domain-specific hand-engineering.[2] Proto-value functions attempt to avoid this hand-engineering by accounting for the underlying manifold structure of the problem domain.[1]

Overview

Proto-value functions are task-independent global basis functions that collectively span the entire space of possible value functions for a given state space.[1] They incorporate geometric constraints intrinsic to the environment. For example, states that are close in Euclidean distance (such as states on opposite sides of a wall) may be far apart on the manifold. Previous approaches to this nonlinearity problem lacked a broad theoretical framework and were consequently explored only in the context of discrete MDPs.

Proto-value functions arise from reformulating the problem of value function approximation as real-valued function approximation on a graph or manifold. This results in broader applicability of the learned bases and enables a new class of learning algorithms, which learn representations and policies at the same time.[3]

Basis functions from graph Laplacian

In this approach, we will construct the basis functions by spectral analysis of the graph Laplacian, a self-adjoint (or symmetric) operator on the space of functions on the graph, closely related to the random walk operator.

For the sake of simplicity, assume that the underlying state space can be represented as an undirected unweighted graph [math]\displaystyle{ G=\left(V,E\right) }[/math]. The combinatorial Laplacian [math]\displaystyle{ L }[/math] is defined as the operator [math]\displaystyle{ L = D - A }[/math], where [math]\displaystyle{ D }[/math] is a diagonal matrix called the degree matrix and [math]\displaystyle{ A }[/math] is the adjacency matrix.[1]
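As a concrete illustration, the combinatorial Laplacian can be assembled directly from an adjacency matrix. The following is a minimal NumPy sketch; the 4-state chain graph is an arbitrary example, not taken from the original papers.

    import numpy as np

    # Adjacency matrix of a small undirected, unweighted example graph
    # (a 4-state chain s0 - s1 - s2 - s3); any symmetric 0/1 matrix works.
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]])

    D = np.diag(A.sum(axis=1))  # degree matrix
    L = D - A                   # combinatorial Laplacian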

The spectral analysis of the Laplace operator on a graph consists of finding the eigenvalues and eigenfunctions which solve the equation

[math]\displaystyle{ L\phi_\lambda = \lambda\phi_\lambda }[/math],

where [math]\displaystyle{ L }[/math] is the combinatorial Laplacian and [math]\displaystyle{ \phi_\lambda }[/math] is an eigenfunction associated with the eigenvalue [math]\displaystyle{ \lambda }[/math]. Here the term "eigenfunction" is used to denote what is traditionally referred to as an eigenvector in linear algebra, because the Laplacian eigenvectors can naturally be viewed as functions that map each vertex to a real number.[3]

The combinatorial Laplacian is not the only graph operator to select from. Other possible graph operators, illustrated in the sketch after this list, include:

  • Normalized Laplacian [math]\displaystyle{ L_\text{normalized} = I - D^{-1/2}AD^{-1/2} }[/math] [4]
  • Random Walk [math]\displaystyle{ P = D^{-1}A }[/math] [4]
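A sketch of the spectral step, reusing the adjacency matrix from the sketch above: since the combinatorial Laplacian is symmetric, np.linalg.eigh returns its eigenvalues in ascending order, so the leading eigenvectors are the smoothest eigenfunctions. Keeping three of them is an arbitrary illustrative choice.

    import numpy as np

    # Reusing the adjacency matrix A from the previous sketch; assumes the
    # graph has no isolated vertices (every degree is positive).
    deg = A.sum(axis=1)
    L = np.diag(deg) - A

    # Eigendecomposition of the symmetric combinatorial Laplacian; np.linalg.eigh
    # returns eigenvalues in ascending order, so the first columns of eigvecs
    # are the smoothest eigenfunctions over the graph.
    eigvals, eigvecs = np.linalg.eigh(L)
    pvfs = eigvecs[:, :3]  # keep, e.g., the three smoothest proto-value functions

    # Alternative operators listed above.
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    L_norm = np.eye(len(deg)) - D_inv_sqrt @ A @ D_inv_sqrt  # normalized Laplacian
    P = np.diag(1.0 / deg) @ A                               # random walk operator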

Graph construction on discrete state space

For a finite state space the graph [math]\displaystyle{ G }[/math] mentioned above can be simply constructed by examining the connections between states. Let [math]\displaystyle{ S_i }[/math] and [math]\displaystyle{ S_j }[/math] be any two states. Then

[math]\displaystyle{ G_{i,j}=\begin{cases} 1 & \text{if } S_i\leftrightarrow S_j \\ 0 & \text{otherwise} \end{cases} }[/math]

It is important to note that this can only be done when the state space is finite and of reasonable size.
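For example, the adjacency matrix of a finite state space can be filled in by checking which states are directly connected. The sketch below assumes a helper neighbors(s) that lists the states reachable from s in one step; this helper is hypothetical and stands in for whatever connectivity information the MDP provides.

    import numpy as np

    def build_state_graph(states, neighbors):
        """Adjacency matrix G with G[i, j] = 1 iff states S_i and S_j are connected.

        states:    list of (hashable) state identifiers
        neighbors: hypothetical helper returning the states adjacent to a state
        """
        index = {s: i for i, s in enumerate(states)}
        G = np.zeros((len(states), len(states)))
        for s in states:
            for t in neighbors(s):
                G[index[s], index[t]] = 1
                G[index[t], index[s]] = 1  # keep the graph undirected
        return G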

Graph construction on continuous or large state space

For a continuous state space, or simply a very large discrete state space, it is necessary to sample from the manifold in state space and then construct the graph [math]\displaystyle{ G }[/math] from those samples. There are a few issues to consider here, which the sketch after this list addresses in one common way:[4]

  • How to sample the manifold
    • Random walk or guided exploration
  • How to determine whether two samples should be connected
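One common realization, sketched below under stated assumptions, is to collect samples (for example, by following a random walk) and connect each sample to its k nearest neighbors in Euclidean distance. The nearest-neighbor rule and the default k = 5 are illustrative choices, not prescribed by the original papers.

    import numpy as np

    def knn_graph(samples, k=5):
        """Symmetric k-nearest-neighbor adjacency over sampled states.

        samples: (n, d) array of states sampled from the manifold,
                 e.g. along a random walk; k is an illustrative default.
        """
        n = len(samples)
        # Pairwise Euclidean distances between all samples.
        dists = np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=-1)
        G = np.zeros((n, n))
        for i in range(n):
            # Skip the first index of argsort, which is the sample itself.
            for j in np.argsort(dists[i])[1:k + 1]:
                G[i, j] = 1
                G[j, i] = 1  # symmetrize so the graph is undirected
        return G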

Application

Once the PVFs are generated, they can be plugged into a traditional function approximation framework. One such method is least-squares approximation.

Least-squares approximation using proto-value functions

Let [math]\displaystyle{ \Phi_G=\left\{ V_1^G,\dots,V_k^G\right\} }[/math] be the basis set of PVFs, where each [math]\displaystyle{ V_i^G }[/math] is an eigenfunction defined over all states in the graph [math]\displaystyle{ G }[/math]. Let [math]\displaystyle{ \hat{V}^\pi }[/math] be the target value function, which is known only for a subset of states [math]\displaystyle{ S_m^G =\left\{ s_1,\dots,s_m\right\} }[/math].

Define the Gram matrix

[math]\displaystyle{ K_G =\left(\Phi_m^G \right)^T \Phi_m^G. }[/math]

Here [math]\displaystyle{ \Phi_m^G }[/math] is the component-wise projection of the PVFs onto the states in [math]\displaystyle{ S_m^G }[/math], i.e., the matrix whose columns are the PVFs restricted to the sampled states. Hence, each entry of the Gram matrix is

[math]\displaystyle{ K_G \left(i,j\right) = \sum_{s \in S_m^G} V_i^G(s) V_j^G (s). }[/math]

Now we can solve for the coefficients that minimize the least-squares error with the equation

[math]\displaystyle{ \alpha=K_G^{-1}\left(\Phi_m^G \right)^T \hat{V}^\pi. }[/math]

A nonlinear least-squares approach is possible by using the k PVFs with the largest absolute coefficients to compute the approximation.[1]
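A minimal sketch of this least-squares step, assuming pvfs is a matrix whose columns are the PVFs over all states (as computed in the earlier sketch), sample_idx indexes the states in [math]\displaystyle{ S_m^G }[/math], and v_hat holds the known target values on those states; all three names are illustrative.

    import numpy as np

    def fit_pvf_coefficients(pvfs, sample_idx, v_hat):
        """Least-squares coefficients alpha for the PVF basis.

        pvfs:       (|S|, k) matrix whose columns are proto-value functions
        sample_idx: indices of the states where the target value is known
        v_hat:      target values on those states
        """
        Phi_m = pvfs[sample_idx]   # PVFs restricted to the sampled states
        K = Phi_m.T @ Phi_m        # Gram matrix K_G
        # Solve K alpha = Phi_m^T v_hat; a pseudo-inverse could be used
        # instead if K happens to be singular.
        alpha = np.linalg.solve(K, Phi_m.T @ v_hat)
        return alpha

    # The approximate value function over every state is then pvfs @ alpha.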

References

  1. Mahadevan, S., "Proto-Value Functions: Developmental Reinforcement Learning", Proceedings of the International Conference on Machine Learning (ICML), 2005.
  2. Johns, J. and Mahadevan, S., "Constructing Basis Functions from Directed Graphs for Value Function Approximation", Proceedings of the International Conference on Machine Learning (ICML), 2007.
  3. Mahadevan, S. and Maggioni, M., "Proto-Value Functions: A Laplacian Framework for Learning Representation and Control in Markov Decision Processes", University of Massachusetts, Department of Computer Science Technical Report TR-2006-35, 2006.
  4. Mahadevan, S. and Maggioni, M., ICML 2006 tutorial.