public class ValueRepository extends Object
Modifier and Type | Field and Description
---|---
`protected double` | `discountFactor`
`protected ImmediateValueFunction<State,Action,Double>` | `immediateValueFunction`
`protected Map<State,Action>` | `optimalActionHashTable`
`protected Map<State,Double>` | `optimalValueHashTable`
`protected Map<StateAction,Double>` | `valueHashTable`
Modifier | Constructor and Description
---|---
`protected` | `ValueRepository()`
 | `ValueRepository(ImmediateValueFunction<State,Action,Double> immediateValueFunction, double discountFactor, HashType hash)` Creates a new value repository.
 | `ValueRepository(ImmediateValueFunction<State,Action,Double> immediateValueFunction, double discountFactor, int stateSpaceSizeLowerBound, float loadFactor, HashType hash)` Creates a new value repository.
Modifier and Type | Method and Description
---|---
`double` | `getDiscountFactor()` Returns the discount factor for the problem value function.
`double` | `getExpectedValue(State initialState, Action action, TransitionProbability transitionProbability)` Returns the expected value associated with `initialState` and `action` under one-step transition probabilities described in `transitionProbability`.
`double` | `getImmediateValue(State initialState, Action action, State finalState)` Returns the immediate value of a transition from `initialState` to `finalState` under a chosen `action`.
`Action` | `getOptimalAction(State state)` Returns the optimal action associated with `state`.
`Map<State,Action>` | `getOptimalActionHashTable()` Returns the hashtable storing optimal actions.
`double` | `getOptimalExpectedValue(State state)` Returns the optimal expected value associated with `state`.
`Map<State,Double>` | `getOptimalValueHashTable()` Returns the hashtable storing optimal state values.
`Map<StateAction,Double>` | `getValueHashTable()`
`void` | `setImmediateValue(ImmediateValueFunction<State,Action,Double> immediateValueFunction)` Sets the immediate value function of a transition from `initialState` to `finalState` under a chosen `action`.
`void` | `setOptimalAction(State state, Action action)` Associates an optimal action `action` to state `state`.
`void` | `setOptimalExpectedValue(State state, double expectedValue)` Associates an optimal expected value `expectedValue` to `state`.
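The core computation behind `getExpectedValue` is the expectation, over one-step transition probabilities, of the immediate value plus the discounted optimal future value. The following is a hypothetical standalone sketch of that computation; the class, method names, string-keyed maps, and toy cost figure are all illustrative and not the library's actual code.

```java
import java.util.Map;

// Illustrative sketch: expected value of (initialState, action) as
//   sum over finalState of  P(finalState) * [ immediate + discount * V*(finalState) ].
class ExpectedValueSketch {
    static double expectedValue(String initialState,
                                String action,
                                Map<String, Double> transitionProbability, // finalState -> P
                                Map<String, Double> optimalFutureValue,    // finalState -> V*
                                double discountFactor) {
        double total = 0.0;
        for (Map.Entry<String, Double> e : transitionProbability.entrySet()) {
            String finalState = e.getKey();
            double p = e.getValue();
            double immediate = immediateValue(initialState, action, finalState);
            total += p * (immediate + discountFactor * optimalFutureValue.get(finalState));
        }
        return total;
    }

    // Toy immediate value function (purely illustrative): constant unit cost.
    static double immediateValue(String initialState, String action, String finalState) {
        return 1.0;
    }
}
```

With two equiprobable final states whose optimal future values are 10 and 20 and a discount factor of 0.9, this returns 0.5·(1 + 9) + 0.5·(1 + 18) = 14.5.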
protected Map<StateAction,Double> valueHashTable
protected double discountFactor
protected ImmediateValueFunction<State,Action,Double> immediateValueFunction
public ValueRepository(ImmediateValueFunction<State,Action,Double> immediateValueFunction, double discountFactor, HashType hash)

Creates a new value repository backed by a `ConcurrentHashMap`, for use in conjunction with forward recursion.

Parameters:
- `immediateValueFunction` - the immediate value of a transition from `initialState` to `finalState` under a chosen `action`.
- `discountFactor` - the value function discount factor
- `hash` - the type of hash used to store the state space

public ValueRepository(ImmediateValueFunction<State,Action,Double> immediateValueFunction, double discountFactor, int stateSpaceSizeLowerBound, float loadFactor, HashType hash)

Creates a new value repository backed by a `ConcurrentHashMap`, for use in conjunction with forward recursion.

Parameters:
- `immediateValueFunction` - the immediate value of a transition from `initialState` to `finalState` under a chosen `action`.
- `discountFactor` - the value function discount factor
- `stateSpaceSizeLowerBound` - a lower bound for the SDP state space size, used to initialise the internal hash maps
- `loadFactor` - the load factor of the internal hash maps
- `hash` - the type of hash used to store the state space

protected ValueRepository()
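The second constructor's `stateSpaceSizeLowerBound` and `loadFactor` parameters are described as initialising the internal hash maps. A minimal sketch (assumed, not the library's actual code) of how such parameters typically feed `ConcurrentHashMap`'s pre-sizing constructor to avoid rehashing as the state space fills in:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative helper: pre-size a concurrent table from a state-space size
// estimate and a load factor, as the constructor parameters suggest.
class RepositoryTables {
    static <K, V> Map<K, V> newTable(int stateSpaceSizeLowerBound, float loadFactor) {
        return new ConcurrentHashMap<>(stateSpaceSizeLowerBound, loadFactor);
    }
}
```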
public Map<State,Double> getOptimalValueHashTable()
public Map<State,Action> getOptimalActionHashTable()
public Map<StateAction,Double> getValueHashTable()
public double getImmediateValue(State initialState, Action action, State finalState)

Returns the immediate value of a transition from `initialState` to `finalState` under a chosen `action`.

Parameters:
- `initialState` - the initial state of the stochastic process.
- `action` - the chosen action.
- `finalState` - the final state of the stochastic process.

Returns:
the immediate value of a transition from `initialState` to `finalState` under a chosen `action`.

public void setImmediateValue(ImmediateValueFunction<State,Action,Double> immediateValueFunction)

Sets the immediate value function of a transition from `initialState` to `finalState` under a chosen `action`.

Parameters:
- `immediateValueFunction` - the immediate value of a transition from `initialState` to `finalState` under a chosen `action`.

public double getDiscountFactor()
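`setImmediateValue` accepts an `ImmediateValueFunction<State,Action,Double>`, i.e. a function of `(initialState, action, finalState)`. The sketch below shows that three-argument functional shape; the interface declaration, method names, and cost figures are illustrative assumptions, not the library's actual declarations.

```java
// Illustrative three-argument functional interface mirroring the shape of
// ImmediateValueFunction<State,Action,Double>: (initialState, action, finalState) -> value.
interface TriFunction<S, A, R> {
    R apply(S initialState, A action, S finalState);
}

class ImmediateValueDemo {
    // Hypothetical immediate value: a per-unit ordering cost on the action
    // plus a holding cost on the (non-negative part of the) final state.
    static double linearCost(double initialState, double action, double finalState) {
        double orderingCost = 5.0 * action;
        double holdingCost = 1.0 * Math.max(finalState, 0.0);
        return orderingCost + holdingCost;
    }
}
```

Such a function can then be passed wherever the repository expects an immediate value function, e.g. via a method reference.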
public double getExpectedValue(State initialState, Action action, TransitionProbability transitionProbability)

Returns the expected value associated with `initialState` and `action` under one-step transition probabilities described in `transitionProbability`.

Parameters:
- `initialState` - the initial state of the stochastic process.
- `action` - the chosen action.
- `transitionProbability` - the transition probabilities of the stochastic process.

Returns:
the expected value associated with `initialState` and `action` under one-step transition probabilities described in `transitionProbability`.

public void setOptimalExpectedValue(State state, double expectedValue)

Associates an optimal expected value `expectedValue` to `state`.

Parameters:
- `state` - the target state.
- `expectedValue` - the optimal expected total cost.

public double getOptimalExpectedValue(State state)

Returns the optimal expected value associated with `state`.

Parameters:
- `state` - the target state.

public void setOptimalAction(State state, Action action)

Associates an optimal action `action` to state `state`.

Parameters:
- `state` - the target state.
- `action` - the optimal action.

Copyright © 2017–2018. All rights reserved.