Empowerment: A New Reward-Free Paradigm for Human-AI Collaboration?
Human-AI collaboration is gradually becoming a mainstream topic, as AI assistants have become prevalent and have started permeating traditional software like text and spreadsheet editors. In this setting, humans and AIs coexist in a shared environment, and both parties' actions affect that environment. From the designer's perspective, the goal of the AI is to take actions that help humans achieve their goals, typically without knowing a priori exactly what those goals are.
Currently, the dominant paradigm for human-AI collaboration is reward inference. For example, in RLHF, the AI agent infers a reward function from human ratings of model responses and then adapts its responses to optimize that reward. However, the reward inference approach is prone to inaccurate inference and reward hacking, both of which lead the model astray from the desired behavior. This raises the question: what is the ultimate objective for human-AI collaboration?

Recently, a line of work has explored an alternative paradigm for human-AI collaboration based on a metric called "empowerment". Empowerment is defined as the mutual information between human actions and subsequent environment states, quantifying how easy it is to predict environment changes from human actions under the AI's assistance. Intuitively, it measures how much the AI's assistance makes the environment more controllable for the human, and hopefully their goals more achievable.
Empowerment has been studied as an objective for RL agents in the past. It is worth discussing now because Myers et al., 2024 recently demonstrated an interesting result: the empowerment objective lower bounds the reward objective. This suggests that AI agents optimizing empowerment can effectively assist humans without inferring rewards and without suffering from the associated issues. Too good to be true?
Developing helpful AI via empowerment is a significant departure from the reward inference paradigm, which makes it hard to understand conceptually. A few questions naturally arise:
How does the empowerment agent model the human?
Does the empowerment agent make any inference about the human?
What are the learning or adaptation dynamics? Will the empowerment agent ask clarifying questions to resolve uncertainty over human goals?
Is the empowerment agent guaranteed to optimize human reward?
We'll try to get some clarity on these questions in this post.
The foundation: Cooperative inverse reinforcement learning
Cooperative IRL is usually considered the foundation for modeling assistive agents. It adopts the inverse RL (IRL) framework by assuming humans act in a (perhaps noisily) reward-rational way, but extends IRL to an interactive setting where the human is no longer demonstrating reward-rational behavior in isolation but rather with awareness of the AI agent. That is, the human knows the AI will likely try to infer their reward, the human knows the AI will act to help achieve higher rewards, and the AI knows the human knows. This setup can induce much more intricate behavior than IRL: the human may exaggerate demonstrations to make their reward easier to infer, the AI may actively query the human for their reward or for demonstrations to reduce uncertainty, and the AI may act conservatively when it lacks confidence about the human's reward (e.g., see this paper on the benefit of cooperative IRL over IRL).
Formally, cooperative IRL models human-AI interaction as an MDP (𝒮, 𝒜ᴴ, 𝒜ᴿ, R, P, γ), where the H and R superscripts stand for human and robot (i.e., the AI; we will use these two terms interchangeably). The transition dynamics depend on both human and robot actions, P(s'|s, aᴴ, aᴿ). The reward function R(s, aᴴ, aᴿ) also depends on both human and robot actions and is known only to the human. The human and the robot act simultaneously and independently using policies πᴴ(aᴴₜ|hₜ) and πᴿ(aᴿₜ|hₜ), where hₜ := (s_{0:t}, aᴴ_{0:t-1}, aᴿ_{0:t-1}) is the entire interaction history (conditioning on the full history keeps the policy class general). The goal of both agents is to cooperatively maximize the expected cumulative reward, or return:
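Concretely, and in this post's notation, the objective looks something like:

$$
\max_{\pi^H, \pi^R} \; \mathbb{E}_{R \sim P(R)}\, \mathbb{E}_{\pi^H, \pi^R}\left[ \sum_{t=0}^{\infty} \gamma^{t}\, R\big(s_t, a^H_t, a^R_t\big) \right]
$$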
where the prior P(R) represents the distribution over possible human rewards.
The key result in the cooperative IRL paper is that the optimal human and robot policies for this decentralized coordination problem have the following forms:
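Roughly, and suppressing details of the exact statement in the paper, the optimal policies depend on the history only through the current state and the robot's belief over the human reward:

$$
\pi^{H*}\big(a^H_t \mid s_t, b_t, R\big), \qquad \pi^{R*}\big(a^R_t \mid s_t, b_t\big)
$$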
This means that both the human and the robot take actions on the basis of the robot's belief over human rewards, bₜ(R) := P(R|s_{0:t}, aᴴ_{0:t}, aᴿ_{0:t}) ∈ Δ(R); in other words, the robot needs to form beliefs about and make inferences over the human's reward, as we would expect. The belief is updated mainly from observed human actions:
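Something like the following Bayesian update, where the dynamics term cancels because the transitions do not depend on R:

$$
b_t(R) \;\propto\; b_{t-1}(R)\, \pi^H\big(a^H_t \mid s_t, b_{t-1}, R\big)
$$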
Let's now consider two examples of how the robot will behave under this framework.
Consider the chat agent setting where the human asks scientific questions and the robot responds with an answer (taken from variational preference learning). The human may prefer either a concise response or a detailed response depending on their internal reward, but this is a priori unknown to the robot. The optimal strategy for the robot in this case, if allowed by other aspects of its reward specification, is to actively query the human for their style preference and generate a response based on that. This uncertainty-resolution-then-goal-seeking strategy is typical of many problems of this nature. Although nice and simple, this example lacks a strong notion of a shared environment, which is helpful for building intuition about other properties of human-AI collaboration.
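To make this decision concrete, here is a minimal, purely illustrative sketch of when querying beats answering directly; the two style hypotheses, the payoffs, and the query cost are made up for illustration and are not taken from any of the papers above:

```python
# Illustrative belief over the human's hidden style preference.
belief = {"concise": 0.5, "detailed": 0.5}

def robot_action(belief, query_cost=0.1):
    """Decide between answering now and asking a clarifying question.

    Toy payoffs: reward 1 if the response style matches the human's hidden
    preference, 0 otherwise; a query reveals the preference but costs query_cost.
    """
    answer_now_value = max(belief.values())   # act on the most likely style
    query_value = 1.0 - query_cost            # resolve uncertainty first, then act
    if query_value > answer_now_value:
        return "ask whether the human prefers a concise or a detailed answer"
    return "answer in the most likely style: " + max(belief, key=belief.get)

print(robot_action(belief))  # with a 50/50 belief, querying wins
```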

The next example we consider is the gridworld from Myers et al., 2024. Here the human starts in the lower left of the grid with a circle of blocks around it. The blocks may or may not constitute a trap depending on where the human wants to go: if the human wants to reach the upper right, then the blocks definitely trap them; but if the human is happy staying at the lower left, then the blocks are fine where they are. The robot is a mechanical crane that can move the blocks and open the trap. This defines a shared environment with a more explicit notion of how robot actions affect the human's outcomes. If the human wants the robot's help, for example to reach the upper right of the grid, it can move upward within the trap to signal its intent. The robot then helps remove the blocks, and the rest is straightforward human navigation.

In summary, it is relatively easy to reason about the coordination behavior that results from reward inference. As I am writing this post, Laidlaw et al., 2025 have just scaled the reward-inference assistance paradigm to Minecraft with an algorithm called AssistanceZero.
Assistance via empowerment
Assistance via empowerment has been studied in a series of papers on a variety of tasks, such as training copilot agents for shared control of systems such as lunar flight, tuning control interfaces such as eye-gaze trackers and keyboards, and training robot teammates in games such as Overcooked.
The proposal is to replace the robot's reward function with empowerment, which is defined as:
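In this post's notation, something like:

$$
\mathrm{Empowerment}(h_t) \;=\; I\big(a^H_t;\; s^{+} \,\big|\, h_t\big)
$$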
where hₜ := (s_{0:t}, aᴴ_{0:t-1}, aᴿ_{0:t-1}) is again the interaction history, s⁺ = s_{t+K} is the state K time steps into the future, and the distributions are conditioned on the human and robot policies πᴴ, πᴿ, i.e., they are expectations under these policies. For example:
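The marginal over future states averages out the intervening human and robot actions under the two policies (my rendering):

$$
p\big(s^{+} \mid h_t\big) \;=\; \mathbb{E}_{\pi^H,\, \pi^R}\Big[\, p\big(s_{t+K} \mid h_t,\, a^H_{t:t+K-1},\, a^R_{t:t+K-1}\big) \,\Big]
$$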
In plain words, empowerment is the mutual information between the random variables of human actions and future states, which can be read as the difference in predictability of future states with and without knowing the human's current action aᴴₜ. This can be seen from the relative entropy representation of mutual information:
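Using standard identities, in this post's notation:

$$
I\big(a^H_t;\, s^{+} \mid h_t\big) \;=\; \mathbb{E}_{a^H_t}\Big[ D_{\mathrm{KL}}\big( p(s^{+} \mid h_t, a^H_t) \,\big\|\, p(s^{+} \mid h_t) \big) \Big] \;=\; \mathcal{H}\big(s^{+} \mid h_t\big) \;-\; \mathcal{H}\big(s^{+} \mid h_t, a^H_t\big)
$$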
which quantifies the reduction in entropy over s⁺ when aᴴₜ is known.
We can now observe a few differences if we set empowerment as the robot objective, which is defined as:
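One natural form treats empowerment as the robot's per-step reward (the exact horizon and discounting details are my guess):

$$
\max_{\pi^R} \; \mathbb{E}_{\pi^H, \pi^R}\left[ \sum_{t=0}^{\infty} \gamma^{t}\, I\big(a^H_t;\, s^{+} \mid h_t\big) \right]
$$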
First, we no longer make any explicit assumption about whether the human has a reward. Second, we do not need to reason explicitly about the coordinated, game-theoretic interaction: the objective is a standard single-agent, model-free RL objective.
Needless to say, the authors found merit in this approach. Understandably, the reduction to model-free RL bypasses model-misspecification issues, or at least kicks the can down the road.
Empowerment lower bounds reward
The exciting result from Myers et al., 2024 is that, even without direct reward inference and optimization, if the human is actually reward rational then the empowerment objective lower bounds the human's expected return. This is nice because it provides some assurance about the expected behavior of the empowerment agent and the joint outcomes, potentially without falling prey to model misspecification and reward hacking.
The proof technique is relatively straightforward. First, we need to assume that human actions are rational with respect to the expected return of the human reward. The authors choose the following Boltzmann rational model:
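In this post's notation (the paper's version may carry a rationality or temperature coefficient, which I drop here):

$$
\pi^H\big(a^H_{t:t+K} \mid h_t, R\big) \;\propto\; \exp\Big( Q\big(h_t, a^H_{t:t+K}\big) \Big)
$$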
where the state-only reward R is assumed to be restricted to the unit simplex, i.e., R(s) ∈ [0, 1] and ∑_{s}R(s) = 1. We use Q(hₜ, aᴴ_{t:t+K}) to denote the open-loop return.
Then the main trick is to establish a series of inequalities to show that the mutual information between future states and human actions lower bounds the mutual information between human reward and human actions, which then lower bounds the expected cumulative rewards:
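Schematically, suppressing the constants and additive offsets that appear in the actual bounds:

$$
I\big(s^{+};\, a^H_{t:t+K} \mid h_t\big) \;\le\; I\big(R;\, a^H_{t:t+K} \mid h_t\big) \;\lesssim\; \mathbb{E}_{\pi^H, \pi^R}\Big[ \textstyle\sum_{k} R(s_{t+k}) \Big]
$$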
The first inequality holds because future states are generated downstream of actions and rewards, so it follows from the data processing inequality. In plain words, predicting the human's reward from their actions should be no harder than predicting future states from those actions. This makes sense if reaching particular states is harder than inferring human intentions.
The second inequality is obtained by focusing on the entropies of human actions given human reward using the following bound:
Using the second line, plus a lower bound on the entropy of the Boltzmann action distribution, we get the above result. In plain words, if the environment is constructed such that the human can actually achieve the highest (unit) reward in the long run, then the entropy of the human's future actions given their reward has the above correspondence with the expected return. There are some tricky details here, which we discuss in the appendix along with the other proofs.
This second inequality establishes an important link between mutual information and expected return. Given that expected closed-loop returns are at least as high as open-loop returns, we obtain the final result:
Regardless of whether the bound is tight or vacuous, it does say that maximizing empowerment pushes up a lower bound on the human's return. This is nice, and allegedly the first result of its kind.
What is empowerment optimizing?
The empowerment objective appears very different from the reward inference approach because it does not assume a priori a reward-based model of human behavior, which in the reward inference paradigm is what enables training-free online planning and adaptation. However, if we think about how the robot could compute the mutual information, other than with the contrastive estimators used in the paper, it must have a model of the human. In other words, interacting with the human amortizes a human model into the robot policy.
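For concreteness, here is a minimal sketch of a contrastive (InfoNCE-style) lower bound on the action-state mutual information; the critic, the batch construction, and all names here are illustrative rather than the paper's actual estimator:

```python
import numpy as np

def infonce_lower_bound(critic, actions, future_states):
    """Contrastive (InfoNCE-style) lower bound on I(a^H; s^+).

    critic: callable (action, future_state) -> scalar score, e.g. a small
        learned network; here it can be any function the caller provides.
    actions, future_states: length-N arrays of jointly sampled pairs, i.e.
        (actions[i], future_states[i]) comes from rolling out the human and
        robot policies together.
    """
    n = len(actions)
    # Score every action against every future state in the batch.
    scores = np.array([[critic(actions[i], future_states[j]) for j in range(n)]
                       for i in range(n)])                  # shape [N, N]
    # Positive pairs sit on the diagonal; the other entries act as negatives.
    log_norm = np.log(np.exp(scores).sum(axis=1))           # log sum_j exp f(a_i, s_j)
    return float(np.mean(np.diag(scores) - log_norm) + np.log(n))

# Toy usage with independent data and an arbitrary critic: the estimate should
# sit near or below zero, since actions and states share no information here.
rng = np.random.default_rng(0)
acts, states = rng.normal(size=(64, 2)), rng.normal(size=(64, 3))
print(infonce_lower_bound(lambda a, s: float(a[0] * s[0]), acts, states))
```

Maximizing such a bound with respect to the robot policy (and the critic) is one way to turn empowerment into a trainable objective without ever writing down an explicit human model.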
That the robot implicitly models the human became clear as soon as we chose to model human behavior as reward rational in the previous proof: the original state-action mutual information objective was essentially swapped for a reward-action mutual information objective via a change of variables. In this sense, the robot policy must learn to make inferences about the human's reward, which puts it on equal footing with the reward inference paradigm on that front.
However, what dictates robot actions is less clear, and there are a few possible interpretations. If we interpret mutual information as a measure of information gain, which is highlighted by its KL divergence representation:
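For the reward-action version, this reads roughly as the expected change in the robot's (implicit) belief over the human reward after observing a human action:

$$
I\big(a^H_t;\, R \mid h_t\big) \;=\; \mathbb{E}_{a^H_t}\Big[ D_{\mathrm{KL}}\big( P(R \mid h_t, a^H_t) \,\big\|\, P(R \mid h_t) \big) \Big]
$$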
Then we can interpret the objective as incentivizing the robot to quickly infer the human's reward. Yet this by itself does not help with actually achieving the human's reward.
Alternatively, we saw that the main contributor in the proof is the entropy of human actions given the human reward; arguably, this term has the strongest effect on robot behavior. If we unpack it, say for a single time step, we see that it can be expressed as the expected log-likelihood of human actions, which under the Boltzmann model is the expected advantage and is proportional to the expected return:
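Roughly, for a Boltzmann policy with log-partition function V(hₜ) (a soft value; my notation):

$$
-\mathcal{H}\big(a^H_t \mid h_t, R\big) \;=\; \mathbb{E}_{\pi^H}\big[\log \pi^H(a^H_t \mid h_t, R)\big] \;=\; \mathbb{E}_{\pi^H}\big[Q(h_t, a^H_t)\big] \;-\; V(h_t), \qquad V(h_t) := \log \sum_{a^H} \exp Q(h_t, a^H)
$$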
In this sense, the robot is actually nudging the human to high value states.
Final words
Combining these interpretations, we see that the robot is incentivized to take actions that both resolve its uncertainty over the human's reward and achieve higher human returns, an effect similar to the reward inference paradigm but without explicitly coding it as a constraint on the robot. Will this method make it into practical applications any time soon? Does removing that constraint, or inductive bias, also take away data efficiency relative to reward inference approaches? There are still many unanswered questions around this new paradigm.
References
Appendix
In this part, we discuss the lower bound derivation in a bit more depth. Overall, I think some additional assumptions are needed for the results to hold. For details, and for a better math reading experience, please visit the GitHub version of this post.