
We present algorithms for a class of resource allocation problems, both in the online setting with stochastic input and in the offline setting. This class of problems contains many interesting special cases, such as the Adwords problem. In the online setting we introduce a new distributional model called the adversarial stochastic input model, a generalization of the i.i.d. model with unknown distributions in which the distributions can change over time. In this model we give a (1 − O(ε))-approximation algorithm for the resource allocation problem, with almost the weakest possible assumption: the ratio of the maximum amount of resource consumed by any single request to the total capacity of the resource, and the ratio of the profit contributed by any single request to the optimal profit, are at most O(ε^2/log(n/ε)), where n is the number of resources available. There are instances where this ratio is ε^2/log(n/ε) such that no randomized algorithm can have a competitive ratio of 1 − o(ε), even in the i.i.d. model. The upper bound on the ratio that we require improves on the previous upper bound for the i.i.d. case by a factor of n.
Our proof technique also gives a very simple proof that the greedy algorithm has a competitive ratio of 1 − 1/e for the Adwords problem in the i.i.d. model with unknown distributions, and more generally in the adversarial stochastic input model, when there is no bound on the bid-to-budget ratio. All previous proofs assumed either that bids are very small compared to budgets, or something very similar.
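For intuition, the greedy algorithm discussed above can be sketched as follows: each arriving query is assigned to the advertiser with the largest effective bid, i.e. the bid truncated to the advertiser's remaining budget. This is an illustrative sketch only; the function name and interface are our own, not taken from the paper.

```python
def greedy_adwords(budgets, bid_sequence):
    """Greedy for the Adwords problem: assign each arriving query to the
    advertiser with the largest effective bid (bid truncated to remaining
    budget) and charge that amount.  Illustrative sketch; interface is ours."""
    remaining = dict(budgets)
    revenue = 0.0
    for bids in bid_sequence:  # bids: {advertiser: bid for this query}
        effective = {a: min(b, remaining.get(a, 0.0)) for a, b in bids.items()}
        winner = max(effective, key=effective.get)
        revenue += effective[winner]
        remaining[winner] -= effective[winner]
    return revenue
```

For example, with budgets of 10 for advertisers "a" and "b" and four identical queries bidding 5 and 4 respectively, greedy spends a's budget on the first two queries and then falls back to b.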
In the offline setting we give a fast algorithm to solve very large LPs with both packing and covering constraints. We give algorithms that approximately solve (within a factor of 1 + ε) the mixed packing-covering problem with O(γm log(m/δ)/ε^2) oracle calls, where the constraint matrix of this LP has dimension m × n, the success probability of the algorithm is 1 − δ, and γ is a parameter very similar to the ratio described for the online setting. We discuss several applications, and how our algorithms improve existing results in some of these applications.
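To give a feel for the multiplicative-weights machinery behind such oracle-based LP solvers, here is a toy sketch for pure packing constraints over the probability simplex: keep one weight per constraint, repeatedly play the simplex vertex with the least weighted violation, and penalize the constraints that vertex loads. This is only the flavor of the technique; the paper's algorithm handles mixed packing and covering with a general optimization oracle, and all names and parameters below are our own.

```python
import math

def mwu_packing_feasibility(A, b, rounds=2000, eta=0.05):
    """Toy multiplicative-weights sketch: find x in the probability simplex
    with A x <= b (approximately).  Illustration only, not the paper's
    algorithm; names, parameters, and guarantees here are assumptions."""
    m, n = len(A), len(A[0])
    w = [1.0] * m          # one weight per packing constraint
    avg = [0.0] * n        # running average of the vertices played
    for _ in range(rounds):
        total = sum(w)
        # weighted violation of vertex e_j: sum_i (w_i/total) * (A[i][j] - b[i])
        score = [sum(w[i] / total * (A[i][j] - b[i]) for i in range(m))
                 for j in range(n)]
        j = min(range(n), key=lambda k: score[k])
        avg[j] += 1.0 / rounds
        for i in range(m):
            # penalize constraints that the chosen vertex loads heavily
            w[i] *= math.exp(eta * max(-1.0, min(1.0, A[i][j] - b[i])))
    return avg
```

On a symmetric instance such as A = [[1,0],[0,1]], b = [0.6, 0.6], the averaged play balances the two constraints near x = (0.5, 0.5).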
Truthfulness is fragile and demanding. It is oftentimes harder to guarantee truthfulness when solving a problem than it is to solve the problem itself. Even worse, truthfulness can be utterly destroyed by small uncertainties in a mechanism's outcome. One obstacle is that truthful payments depend on outcomes other than the one realized, such as the lengths of non-shortest paths in a shortest-path auction. Single-call mechanisms are a powerful tool that circumvents this obstacle: they implicitly charge truthful payments, guaranteeing truthfulness in expectation using only the outcome realized by the mechanism. The cost of such truthfulness is a tradeoff between the expected quality of the outcome and the risk of large payments.
We study two of the most general domains for truthful mechanisms and largely settle when and to what extent single-call mechanisms are possible. The first single-call construction was discovered by Babaioff et al. [2010] in single-parameter domains. They give a transformation that turns any monotone, single-parameter allocation rule into a truthful-in-expectation single-call mechanism. Our first result is a natural complement to Babaioff et al. [2010]: we give a new transformation that produces a single-call VCG mechanism from any allocation rule for which VCG payments are truthful. Second, in both the single-parameter and VCG settings, we precisely characterize the possible transformations, showing that a wide variety of transformations are possible but that all take a very simple form. Finally, we study the inherent tradeoff between the expected quality of the outcome and the risk of large payments. We show that our construction and that of Babaioff et al. [2010] simultaneously optimize a variety of metrics in their respective domains.
Our study is motivated by settings where uncertainty in a mechanism renders other known techniques untruthful, and we offer a variety of examples where such uncertainty can arise. In particular, we analyze pay-per-click advertising auctions, where the truthfulness of the standard VCG-based auction is easily broken when the auctioneer's estimated click-through rates are imprecise.
Nearly fifteen years ago, Google unveiled the generalized second price (GSP) auction. By all theoretical accounts, including their own [Varian14], this was the wrong auction (the Vickrey-Clarke-Groves (VCG) auction would have been the proper choice), yet GSP has succeeded spectacularly.
We give a deep justification for GSP's success: advertisers' preferences map to a model we call value maximization; they do not maximize profit as the standard theory assumes. For value maximizers, GSP is the truthful auction [Aggarwal09]. Moreover, this implies an axiomatization of GSP (it is an auction whose prices are truthful for value maximizers) that can be applied much more broadly than the simple model for which GSP was originally designed. In particular, applying it to arbitrary single-parameter domains recovers the folklore definition of GSP. Through the lens of value maximization, GSP metamorphoses into a powerful auction, sound in its principles and elegant in its simplicity.
Auctions like sponsored search often implicitly or explicitly require that bidders are treated fairly. This may be because large bidders have market power to negotiate equal treatment, because small bidders are difficult to identify, or for many other reasons. We study so-called anonymous auctions to understand the revenue tradeoffs and to develop simple anonymous auctions that are approximately optimal.
We begin with the canonical digital goods setting and show that the optimal anonymous, ex-post incentive compatible auction has an intuitive structure — imagine that bidders are randomly permuted before the auction, then infer a posterior belief about bidder i's valuation from the values of other bidders and set a posted price that maximizes revenue given this posterior.
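The last step of that intuitive structure (pricing against a posterior) can be sketched concretely: given a sample representing the posterior over a bidder's value, choose the posted price p maximizing p · Pr[v ≥ p]. In the auction described above the posterior is inferred from the other bidders' values after a random permutation; the sketch below simply takes the sample as given, and its name and interface are our own.

```python
def best_posted_price(posterior_samples):
    """Revenue-maximizing posted price against an empirical posterior:
    the price p maximizing p * Pr[v >= p].  Illustrative sketch; in the
    abstract's auction the posterior for bidder i comes from the other
    bidders' reports, which we abstract away here."""
    n = len(posterior_samples)
    def revenue(p):
        return p * sum(1 for v in posterior_samples if v >= p) / n
    # an optimal posted price can always be found among the sampled values
    return max(sorted(set(posterior_samples)), key=revenue)
```

For instance, against the sample [1, 2, 3, 4], prices 2 and 3 both yield expected revenue 1.5, and the sketch returns the lower of the tied prices.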
We prove that no anonymous mechanism can guarantee an approximation better than Θ(n) to the optimal revenue in the worst case (or Θ(log n) for regular distributions) and that even posted price mechanisms match those guarantees. Understanding that the real power of anonymous mechanisms comes when the auctioneer can infer the bidder identities accurately, we show a tight Θ(k) approximation guarantee when each bidder can be confused with at most k "higher types". Moreover, we introduce a simple mechanism based on n target prices that is asymptotically optimal. Finally, we return to our original motivation and build on this mechanism to extend our results to m-unit auctions and sponsored search.
Quasilinearity is a ubiquitous and questionable assumption in the standard study of Walrasian equilibria. Quasilinearity implies that a buyer's value for goods purchased in a Walrasian equilibrium is always additive with goods purchased with unspent money. It is a particularly suspect assumption in combinatorial auctions, where buyers' complex preferences over goods would naturally extend beyond the items obtained in the Walrasian equilibrium.
We study Walrasian equilibria in combinatorial auctions when quasilinearity is not assumed. We show that existence can be reduced to an Arrow-Debreu style market with one divisible good and many indivisible goods, and that a "fractional" Walrasian equilibrium always exists. We also show that standard integral Walrasian equilibria are related to integral solutions of an induced configuration LP associated with a fractional Walrasian equilibrium, generalizing known results for both quasilinear and non-quasilinear settings.
The popular generalized second price (GSP) auction for sponsored search is built upon a separable model of click-through rates that decomposes the likelihood of a click into the product of a "slot effect" and an "advertiser effect": if the first slot is twice as good as the second for some bidder, then it is twice as good for everyone. Though appealing in its simplicity, this model is quite suspect in practice. A wide variety of factors, including externalities and budgets, have been studied that can and do cause it to be violated. In this paper we adopt a view of GSP as an iterated second price auction (see, e.g., Milgrom 2010) and study how the most basic violation of separability (position-dependent, arbitrary public click-through rates that do not decompose) affects results from the foundational analysis of GSP (Varian 2007, Edelman et al. 2007). For the two-slot setting we prove that for arbitrary click-through rates and arbitrary bidder values, an efficient pure-strategy equilibrium always exists; however, without separability there always exist values such that the VCG outcome and payments cannot be realized by any bids, in equilibrium or otherwise. The separability assumption is therefore necessary in the two-slot case to match the payments of VCG, but not for efficiency. We moreover show that without separability, generic existence of efficient equilibria is sensitive to the choice of tie-breaking rule, and when there are more than two slots, no (bid-independent) tie-breaking rule yields the positive result. In light of this we suggest alternative mechanisms that trade the simplicity of GSP for better equilibrium properties when there are three or more slots.
The first-price auction is popular in practice for its simplicity and transparency. Moreover, its potential virtues grow in complex settings where incentive compatible auctions may generate little or no revenue. Unfortunately, the first-price auction is poorly understood in theory because equilibrium is not a priori a credible predictor of bidder behavior.
We take a dynamic approach to studying first-price auctions: rather than basing performance guarantees solely on static equilibria, we study the repeated setting and show that robust performance guarantees may be derived from simple axioms of bidder behavior. For example, as long as a loser raises her bid quickly, a standard first-price auction will generate at least as much revenue as a second-price auction.
We generalize this dynamic technique to complex pay-your-bid auction settings: as long as losers do not wait too long to raise bids, a first-price auction will reach an envy-free equilibrium that implies a strong lower bound on revenue; as long as winners occasionally experiment by lowering their bids, the outcome will near the boundary of this envy-free set so bidders do not overpay; and when players with the largest payoffs are the least patient, bids converge to the egalitarian equilibrium. Significantly, bidders need only know whether they are winning or losing in order to implement such behavior.
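The "losers raise quickly" axiom can be illustrated with a tiny simulation of a repeated single-item first-price auction: each current loser raises her bid to just above the winning bid whenever that stays within her value. The winning price then settles near the second-highest value, i.e. the revenue of a second-price auction. This is our own toy instance of the behavioral axiom, not the paper's model; bids are kept in integer units so the arithmetic is exact, and the tie-breaking rule is an arbitrary choice.

```python
def simulate_first_price(values, step=1, rounds=1000):
    """Repeated single-item first-price auction where each current loser
    raises her bid to (winning bid + step) whenever that is at most her
    value.  Illustrative sketch of the 'losers raise quickly' behavior;
    parameters and tie-breaking are our own choices.  Returns the final
    winning price (bids in integer units, e.g. cents)."""
    n = len(values)
    bids = [0] * n
    for _ in range(rounds):
        winner = max(range(n), key=lambda i: (bids[i], -i))  # ties to lowest index
        for i in range(n):
            if i != winner:
                target = bids[winner] + step
                if target <= values[i]:
                    bids[i] = target
    return max(bids)
```

With values 100 and 60, the escalation stops once the low-value bidder can no longer profitably raise, and the winner pays 60, exactly the second-price revenue.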
Along the way, we find that the auctioneer's choice of bidding language is critical when generalizing beyond the single-item setting, and we propose a specific construction called the surplus auction that performs well. This auction is closely related to profit-target bidding in first-price and ascending proxy package auctions and gives strong revenue guarantees for a variety of complex auction environments. Of particular interest, the guaranteed existence of a pure-strategy equilibrium in the surplus auction shows how Overture might have eliminated the cyclic behavior in their generalized first-price sponsored search auction if bidders could have placed more sophisticated bids.
Truthfulness is fragile and demanding. It is oftentimes computationally harder to guarantee truthfulness than to solve the original problem. Even worse, truthfulness can be utterly destroyed by small uncertainties in a mechanism's outcome. One obstacle is that truthful payments depend on outcomes other than the one realized, such as the lengths of non-shortest paths in a shortest-path auction. Single-call mechanisms are a powerful tool that circumvents this obstacle: they implicitly charge truthful payments, guaranteeing truthfulness in expectation using only the outcome realized by the mechanism. The cost of such truthfulness is a tradeoff between the expected quality of the outcome and the risk of large payments.
We largely settle when and to what extent single-call mechanisms are possible. The first single-call construction was discovered by Babaioff, Kleinberg, and Slivkins [BKS10] in single-parameter domains. They give a transformation that turns any monotone, single-parameter allocation rule into a truthful-in-expectation single-call mechanism. Our first result is a natural complement to [BKS10]: we give a new transformation that produces a single-call VCG mechanism from any allocation rule for which VCG payments are truthful. Second, in both the single-parameter and VCG settings, we precisely characterize the possible transformations, showing that a wide variety of transformations are possible but that all take a very simple form. Finally, we study the inherent tradeoff between the expected quality of the outcome and the risk of large payments. We show that our construction and that of [BKS10] simultaneously optimize a variety of metrics in their respective domains.
Our study is motivated by settings where uncertainty in a mechanism renders other known techniques untruthful. As an example, we analyze pay-per-click advertising auctions, where the truthfulness of the standard VCG-based auction is easily broken when the auctioneer's estimated click-through rates are imprecise.
The convexity assumptions required for the Arrow-Debreu theorem are reasonable and realistic for preferences; however, they are highly problematic for production because they rule out economies of scale. We take a complexity-theoretic look at economies with non-convex production. It had been known that in such markets equilibrium prices may not exist; we show that it is an intractable problem to achieve Pareto efficiency, the fundamental objective achieved by equilibrium prices. The same is true for core efficiency or any one of an array of concepts of stability, with the degree of intractability ranging from FΔ_{2}^{P}-completeness to PSPACE-completeness. We also identify a novel phenomenon that we call a complexity equilibrium, in which agents quiesce not because there is no way for any one (or group) of them to improve their situation, but because discovering the changes necessary for (individual or group) improvement is intractable. In fact, we exhibit a somewhat natural distribution of economies that gives an average-case hard complexity equilibrium.
We study the information content of equilibrium prices using the market communication model of Deng, Papadimitriou, and Safra. We show that, in the worst case, communicating an equilibrium in a production economy requires a number of bits that is a quadratic polynomial in the number of goods, the number of agents, the number of firms, and the number of bits used to represent an endowment.
We show a range of complexity results for the Ricardo and Heckscher-Ohlin models of international trade (viewed as Arrow-Debreu production markets). For both models, we show three types of results.
Many auction settings implicitly or explicitly require that bidders are treated equally ex-ante. This may be because discrimination is philosophically or legally impermissible, or because it is practically difficult to implement or impossible to enforce. We study so-called anonymous auctions to understand the revenue tradeoffs and to develop simple anonymous auctions that are approximately optimal.
We consider digital goods settings and show that the optimal anonymous, dominant strategy incentive compatible auction has an intuitive structure: imagine that bidders are randomly permuted before the auction, then infer a posterior belief about bidder i's valuation from the values of other bidders and set a posted price that maximizes revenue given this posterior.
We prove that no anonymous mechanism can guarantee an approximation better than O(n) to the optimal revenue in the worst case (or O(log n) for regular distributions) and that even posted price mechanisms match those guarantees. Understanding that the real power of anonymous mechanisms comes when the auctioneer can infer the bidder identities accurately, we show a tight O(k) approximation guarantee when each bidder can be confused with at most k "higher types". Moreover, we introduce a simple mechanism based on n target prices that is asymptotically optimal and build on this mechanism to extend our results to m-unit auctions and sponsored search.
A single advertisement often benefits many parties, for example, an ad for a Samsung laptop benefits Microsoft. We study this phenomenon in search advertising auctions and show that standard solutions, including the status quo ignorance of mutual benefit and a benefit-aware Vickrey-Clarke-Groves mechanism, perform poorly. In contrast, we show that an appropriate first-price auction has well-behaved equilibria in a single-slot ad auction: all equilibria that satisfy a natural cooperative envy-freeness condition select the welfare-maximizing ad and satisfy an intuitive lower bound on revenue.
Bidders often want to get as much as they can without violating constraints on what they spend. For example, advertisers seek to maximize the impressions, clicks, sales, or market share generated by their advertising, subject to budget or return-on-investment (ROI) constraints. The quasilinear utility model dramatically fails to capture these preferences, and so we initiate the study of mechanism design in this different context. In single-parameter settings, we show that any monotone allocation can be implemented truthfully. Interestingly, even in unrestricted domains any choice function that optimizes monotone functions of bidders' values can be implemented. For general valuations we show that maximizing the value of the highest-value bidder is the natural analog of welfare maximization.
We apply our results to online advertising as a case study. First, the natural analog of welfare maximization directly generalizes the generalized second price (GSP) auction commonly used to sell search ads. Second, we show that value-maximizing preferences are robust in a practical sense: even though real advertisers' preferences are complex and varied, as long as outcomes are "sufficiently different," any advertiser with a moderate ROI constraint and "super-quasilinear" preferences will be behaviorally equivalent to a value maximizer. We empirically establish that for at least 80% of a sample of auctions from the Yahoo Gemini Native ads platform, bidders requiring at least a 100% ROI should behave like value maximizers.
We consider sequential decision-making settings where strategic agents hold private information that can be expressed in a single dimension at any given time, though it may evolve from period to period. In other words, we consider the dynamic single-parameter mechanism design problem. Our main goal is to obtain results analogous to Myerson's foundational characterization of truthful mechanisms for the static single-parameter setting. We identify a pseudo-allocation function that, when monotone in an appropriate technical sense, can be used to express a set of truthful payments. When this pseudo-allocation is a "true" allocation, we call the setting strongly single-parameter and show that our payment formula gives a complete characterization of truthful mechanisms. The technical monotonicity condition is somewhat opaque, but we also provide an easily checkable sufficient condition on allocation rules; when satisfied, it implies existence of payments that yield truthfulness. One practical motivation is provided by the world of online advertising, where values are often private but unchanging, while click-probability estimates evolve over time in response to click events. We show that all "myopically monotonic" and permutation-invariant allocation rules that also satisfy an IIA condition can be implemented. This class is broad, containing everything from the myopic allocation rule, to epsilon-greedy heuristic strategies, to the Bayesian-optimal rule that perfectly balances the exploration/exploitation tradeoff. We also show that Thompson sampling is incentive compatible on this domain. Finally, we demonstrate how sampling tricks can sometimes give a mechanism that is truthful in expectation when a common prior over click probabilities is unavailable.
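As a reference point for the static characterization that this work generalizes, recall Myerson's result for the static single-parameter setting: a monotone (non-decreasing) allocation rule $x_i$ admits truthful payments, and with the normalization $p_i(0) = 0$ they are pinned down by

```latex
p_i(v_i) \;=\; v_i \, x_i(v_i) \;-\; \int_0^{v_i} x_i(z)\, dz .
```

The pseudo-allocation function of the abstract plays a role analogous to $x_i$ in the dynamic setting; the formula above is only the standard static benchmark, not the paper's dynamic payment rule.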
We study the optimal mechanism design problem faced by a market intermediary who makes revenue by connecting buyers and sellers. We first show that the optimal intermediation protocol has substantial structure: it is the solution to an algorithmic pricing problem in which sellers' costs are replaced with virtual costs, and the sellers' payments need only depend on the buyer's behavior, not the buyer's actual valuation function.
Since the underlying algorithmic pricing problem may be difficult to solve optimally, we study specific models of buyer behavior and give mechanisms with provable approximation guarantees. We show that offering only the single most profitable item for sale guarantees an Ω(1/log n) fraction of the optimal revenue when item value distributions are independent and have monotone hazard rates. We also give constant factor approximations when the buyer considers all items at once, k items at once, or items in sequence.
There is a growing research tradition in the interface between Economics and Computer Science: Economic insights and questions about incentives inform the design of systems, while concepts from the theory of computation help illuminate classical Economics problems. This dissertation presents results in both directions of the intellectual exchange.
Originally designed by industry engineers, the sponsored search auction has raised many interesting questions and spurred much research in auction design. For example, early auctions were based on a first-price payment model and proved to be highly unstable; this dissertation explores how improvements in the bidding language could restore stability. We also show that a first-price auction offers substantially better performance guarantees when a single advertiser may benefit from multiple ads. Another interesting problem arises because sponsored search auctions must operate with limited information about a user's behavior: we show how sampling can maintain incentive compatibility even when the auctioneer incorrectly predicts the user's behavior.
Computational tools also offer novel ways to understand the limits of complex economic systems. For example, a fundamental observation in this intellectual exchange is that people cannot be expected to solve computationally intractable problems. We show that this insight engenders a new form of stability we call complexity equilibria: when production has economies of scale, markets may be stable because finding a good deviation is computationally intractable. We also use techniques from communication complexity to show that equilibrium prices, even when they exist, may need to encode an impractical amount of information to guarantee that a market clears.