I have a hard problem, and I would like to give the agent an initial policy as a hint, i.e. a good starting point. How can I supply initial weights so the agent starts learning from them? How do I do that when I am using the RL Agent block in Simulink?
To use the RL Agent block, you first need to create an agent, which in turn requires a policy (neural network) architecture. When you set up that network, you can specify initial values for the weights using, e.g., the 'Weights' and 'Bias' options of the fully connected layer (or of any other layer that has learnable parameters).
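As a minimal sketch of this idea (assuming Reinforcement Learning Toolbox; the observation/action specs, layer sizes, and the weight matrices `W1`, `b1`, `W2`, `b2` below are placeholders you would replace with your own pretrained or hand-crafted values):

```matlab
% Placeholder environment specs -- match these to your own environment.
obsInfo = rlNumericSpec([4 1]);          % 4-element observation vector
actInfo = rlFiniteSetSpec([-1 0 1]);     % 3 discrete actions

% Placeholder initial weights encoding your "hint" policy.
W1 = 0.1*randn(16, 4);   b1 = zeros(16, 1);
W2 = 0.1*randn(3, 16);   b2 = zeros(3, 1);

% Pass the initial values via the 'Weights'/'Bias' options of the layers.
net = [
    featureInputLayer(4)
    fullyConnectedLayer(16, 'Weights', W1, 'Bias', b1)
    reluLayer
    fullyConnectedLayer(3,  'Weights', W2, 'Bias', b2)
    ];

% Wrap the network in a critic and create the agent; training then
% continues from these initial weights rather than a random start.
critic = rlVectorQValueFunction(net, obsInfo, actInfo);
agent  = rlDQNAgent(critic);
```

You would then reference this agent from the RL Agent block in your Simulink model. As an alternative, if the agent already exists, `getLearnableParameters` and `setLearnableParameters` let you overwrite its weights after creation.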