The functionality to customize the action space is not yet available. A couple of workarounds:
1) Penalize the reward signal every time a repeated action is selected. Make sure you include the previously selected action in the observation, so the agent can learn the association. This can work, but if the number of possible actions is small, the penalty may interfere with exploration.
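A minimal sketch of workaround 1, assuming a gym-style `reset`/`step` interface. `BaseEnv`, `RepeatPenaltyWrapper`, and the `penalty` parameter are illustrative names, not part of any specific library:

```python
class BaseEnv:
    """Toy stand-in environment: 3 discrete actions, constant reward."""
    def reset(self):
        return 0.0  # dummy observation
    def step(self, action):
        return 0.0, 1.0, False  # obs, reward, done

class RepeatPenaltyWrapper:
    """Hypothetical wrapper: subtracts `penalty` when an action repeats,
    and appends the previously selected action to the observation."""
    def __init__(self, env, penalty=0.5):
        self.env = env
        self.penalty = penalty
        self.prev_action = -1  # sentinel: no action taken yet

    def reset(self):
        self.prev_action = -1
        obs = self.env.reset()
        return (obs, self.prev_action)

    def step(self, action):
        obs, reward, done = self.env.step(action)
        if action == self.prev_action:
            reward -= self.penalty  # discourage repetition
        self.prev_action = action
        # previous action becomes part of the observation
        return (obs, self.prev_action), reward, done

env = RepeatPenaltyWrapper(BaseEnv(), penalty=0.5)
env.reset()
_, r1, _ = env.step(1)  # first time action 1 is taken: full reward
_, r2, _ = env.step(1)  # action 1 repeated: penalized
print(r1, r2)  # 1.0 0.5
```

Tune `penalty` relative to your reward scale; if it dominates the true reward, the agent may avoid useful repeats entirely.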
2) Use a custom agent following the template guidelines here and here. You can subclass the provided DQN agent and override exploration and action selection as needed for your application.
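A sketch of workaround 2 under stated assumptions: `SimpleDQNAgent` is an illustrative base class, not the library's actual API, so adapt the override to whatever action-selection hook your framework exposes:

```python
class SimpleDQNAgent:
    """Stand-in for a library-provided DQN agent."""
    def __init__(self, num_actions):
        self.num_actions = num_actions

    def q_values(self, obs):
        # Placeholder for the Q-network forward pass.
        return [float(a) for a in range(self.num_actions)]

    def select_action(self, obs):
        q = self.q_values(obs)
        return q.index(max(q))

class NoRepeatDQNAgent(SimpleDQNAgent):
    """Subclass that masks out the previously chosen action,
    so the same action is never selected twice in a row."""
    def __init__(self, num_actions):
        super().__init__(num_actions)
        self.prev_action = None

    def select_action(self, obs):
        q = self.q_values(obs)
        if self.prev_action is not None:
            q[self.prev_action] = float("-inf")  # mask the repeat
        action = q.index(max(q))
        self.prev_action = action
        return action

agent = NoRepeatDQNAgent(num_actions=3)
a1 = agent.select_action(obs=None)  # argmax over [0, 1, 2] -> 2
a2 = agent.select_action(obs=None)  # action 2 masked -> 1
print(a1, a2)  # 2 1
```

Compared with workaround 1, this enforces the constraint at selection time rather than hoping the agent learns it from penalties, but note the Q-targets are still computed over the unmasked action space unless you apply the same mask during training.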