# Publications

All | **Project Area A** | Project Area B | Project Area C

**2017** (12)

Dominik Gutt, Darius Schlangenotto, Dennis Kundisch:

**You can’t buy my rating! On the pivotal effect of an unconditional gift on rating behavior** (2017)

In Wirtschaftsinformatik Proceedings, St. Gallen, Switzerland.

The importance of online ratings on sales is widely acknowledged. Firms need to find ways of increasing the number of ratings and rating scores, but how they can achieve this effectively is less well established. In this paper we analyze the impact of an unconditional gift on customers’ rating behavior in an online field experiment. Contrary to prevalent advice, our results suggest that providing a gift is not necessarily beneficial. Younger customers are significantly less likely to rate when exposed to an unconditional gift. Regression analysis reveals that age serves as a moderator and that older customers even respond slightly positively to a gift. The detected negative effect of gifts on rating behavior provides first indicative evidence of a possible crowding out of intrinsic motivation in the context of online ratings. This has direct implications for practitioners considering the use of gifts to elicit online ratings.

    @inproceedings{gift_on_rating,
      author    = {Dominik Gutt and Darius Schlangenotto and Dennis Kundisch},
      title     = {You can’t buy my rating! On the pivotal effect of an unconditional gift on rating behavior},
      booktitle = {Wirtschaftsinformatik Proceedings, St. Gallen, Switzerland},
      year      = {2017},
      abstract  = {The importance of online ratings on sales is widely acknowledged. Firms need to find ways of increasing the number of ratings and rating scores, but how they can achieve this effectively is less well established. In this paper we analyze the impact of an unconditional gift on customers' rating behavior in an online field experiment. Contrary to prevalent advice, our results suggest that providing a gift is not necessarily beneficial. Younger customers are significantly less likely to rate when exposed to an unconditional gift. Regression analysis reveals that age serves as a moderator and that older customers even respond slightly positively to a gift. The detected negative effect of gifts on rating behavior provides first indicative evidence of a possible crowding out of intrinsic motivation in the context of online ratings. This has direct implications for practitioners considering the use of gifts to elicit online ratings.}
    }


Björn Feldkord, Friedhelm Meyer auf der Heide:

**The Mobile Server Problem** (2017, to appear)

In Proceedings of the 29th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA). ACM.

We introduce the mobile server problem, inspired by current trends to move computational tasks from cloud structures to multiple devices close to the end user. An example of this is embedded systems in autonomous cars that communicate in order to coordinate their actions.

Our model is a variant of the classical Page Migration Problem. More formally, we consider a mobile server holding a data page. The server can move in the Euclidean space (of arbitrary dimension). In every round, requests for data items from the page pop up at arbitrary points in the space. Each request is served at a cost equal to the distance between the requesting point and the server, and the mobile server may move, at a cost of D times the distance traveled, for some constant D. We assume a maximum distance m the server is allowed to move per round.

We show that no online algorithm can achieve a competitive ratio independent of the length of the input sequence in this setting. Hence we augment the maximum movement distance of the online algorithms to (1 + δ) times the maximum distance of the offline solution. We provide a deterministic algorithm which is simple to describe and works for multiple variants of our problem. The algorithm achieves almost tight competitive ratios independent of the length of the input sequence.
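As a toy illustration of the cost model described in this abstract (our own sketch, not code from the paper; the function name and the choice to serve requests at the server's position before it moves are our assumptions):

```python
import math

def round_cost(server, new_server, requests, D, m):
    """One round in a toy version of the mobile server model:
    each request pays the Euclidean distance to the server, and
    moving the server costs D times the distance traveled, with
    at most m units of movement allowed per round."""
    move = math.dist(server, new_server)
    assert move <= m + 1e-9, "server may move at most m per round"
    serve = sum(math.dist(q, server) for q in requests)
    return serve + D * move

# Server at the origin, one request at (3, 4), server moves 1 unit:
cost = round_cost((0, 0), (1, 0), [(3, 4)], D=2, m=1)  # 5 + 2*1 = 7.0
```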


    @inproceedings{FM2017,
      author    = {Bj{\"o}rn Feldkord and Friedhelm Meyer auf der Heide},
      title     = {The Mobile Server Problem},
      booktitle = {Proceedings of the 29th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA)},
      year      = {2017},
      publisher = {ACM},
      note      = {to appear},
      abstract  = {We introduce the mobile server problem, inspired by current trends to move computational tasks from cloud structures to multiple devices close to the end user. An example of this is embedded systems in autonomous cars that communicate in order to coordinate their actions. Our model is a variant of the classical Page Migration Problem. More formally, we consider a mobile server holding a data page. The server can move in the Euclidean space (of arbitrary dimension). In every round, requests for data items from the page pop up at arbitrary points in the space. Each request is served at a cost equal to the distance between the requesting point and the server, and the mobile server may move, at a cost of D times the distance traveled, for some constant D. We assume a maximum distance m the server is allowed to move per round. We show that no online algorithm can achieve a competitive ratio independent of the length of the input sequence in this setting. Hence we augment the maximum movement distance of the online algorithms to (1 + δ) times the maximum distance of the offline solution. We provide a deterministic algorithm which is simple to describe and works for multiple variants of our problem. The algorithm achieves almost tight competitive ratios independent of the length of the input sequence.}
    }


Peter Kling, Alexander Mäcker, Sören Riechers, Alexander Skopalik:

**Sharing is Caring: Multiprocessor Scheduling with a Sharable Resource** (2017, to appear)

In Proceedings of the 29th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA). ACM.

We consider a scheduling problem on $m$ identical processors sharing an arbitrarily divisible resource. In addition to assigning jobs to processors, the scheduler must distribute the resource among the processors (e.g., for three processors in shares of 20%, 15%, and 65%) and adjust this distribution over time. Each job $j$ comes with a size $p_j \in \mathbb{R}$ and a resource requirement $r_j > 0$. Jobs do not benefit from receiving a share larger than $r_j$ of the resource, but providing them with only a fraction of their resource requirement causes a linear decrease in processing efficiency. We seek a (non-preemptive) job and resource assignment minimizing the makespan.

Our main result is an efficient approximation algorithm which achieves an approximation ratio of $2 + 1/(m-2)$. It can be improved to an (asymptotic) ratio of $1 + 1/(m-1)$ if all jobs have unit size. Our algorithms also imply new results for a well-known bin packing problem with splittable items and a restricted number of allowed item parts per bin.

Based upon the above solution, we also derive an approximation algorithm with similar guarantees for a setting in which we introduce so-called tasks each containing several jobs and where we are interested in the average completion time of tasks (a task is completed when all its jobs are completed).
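The linear efficiency loss described above can be sketched as follows (an illustrative reading of the abstract, not the paper's code; the formula `speed = min(1, share / r)` is our interpretation of "linear decrease in the processing efficiency"):

```python
def processing_time(p, r, share):
    """Toy model: a job of size p with resource requirement r runs
    at full speed when its resource share is at least r; a smaller
    share slows it down linearly, i.e. speed = min(1, share / r)."""
    speed = min(1.0, share / r)
    return p / speed

processing_time(10, 0.4, 0.4)  # full requirement met -> 10.0
processing_time(10, 0.4, 0.2)  # half the requirement -> 20.0
```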


    @inproceedings{KMRS17,
      author    = {Peter Kling and Alexander M{\"a}cker and S{\"o}ren Riechers and Alexander Skopalik},
      title     = {Sharing is Caring: Multiprocessor Scheduling with a Sharable Resource},
      booktitle = {Proceedings of the 29th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA)},
      year      = {2017},
      month     = {July},
      publisher = {ACM},
      note      = {to appear},
      abstract  = {We consider a scheduling problem on $m$ identical processors sharing an arbitrarily divisible resource. In addition to assigning jobs to processors, the scheduler must distribute the resource among the processors (e.g., for three processors in shares of 20\%, 15\%, and 65\%) and adjust this distribution over time. Each job $j$ comes with a size $p_j \in \mathbb{R}$ and a resource requirement $r_j > 0$. Jobs do not benefit from receiving a share larger than $r_j$ of the resource, but providing them with only a fraction of their resource requirement causes a linear decrease in processing efficiency. We seek a (non-preemptive) job and resource assignment minimizing the makespan. Our main result is an efficient approximation algorithm which achieves an approximation ratio of $2 + 1/(m-2)$. It can be improved to an (asymptotic) ratio of $1 + 1/(m-1)$ if all jobs have unit size. Our algorithms also imply new results for a well-known bin packing problem with splittable items and a restricted number of allowed item parts per bin. Based upon the above solution, we also derive an approximation algorithm with similar guarantees for a setting in which we introduce so-called tasks each containing several jobs and where we are interested in the average completion time of tasks (a task is completed when all its jobs are completed).}
    }


Matthias Feldotto, Maximilian Drees, Sören Riechers, Alexander Skopalik:

**Pure Nash Equilibria in Restricted Budget Games** (2017, to appear)

In Proceedings of the 23rd International Computing and Combinatorics Conference (COCOON). Springer, LNCS.

In budget games, players compete over resources with finite budgets. For every resource, a player has a specific demand and as a strategy, he chooses a subset of resources. If the total demand on a resource does not exceed its budget, the utility of each player who chose that resource equals his demand. Otherwise, the budget is shared proportionally. In the general case, pure Nash equilibria (NE) do not exist for such games. In this paper, we consider the natural classes of singleton and matroid budget games with additional constraints and show that for each, pure NE can be guaranteed. In addition, we introduce a lexicographical potential function to prove that every matroid budget game has an approximate pure NE which depends on the largest ratio between the different demands of each individual player.
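The proportional budget sharing described above can be sketched as follows (a toy illustration of the abstract's utility rule; function and variable names are ours):

```python
def utilities(budget, demands):
    """Utilities of the players on a single resource in a toy budget
    game: if the total demand fits within the budget, every player
    receives exactly their demand; otherwise the budget is divided
    proportionally to the demands."""
    total = sum(demands)
    if total <= budget:
        return list(demands)
    return [budget * d / total for d in demands]

utilities(10, [3, 4])    # fits the budget -> [3, 4]
utilities(10, [10, 10])  # overloaded, shared -> [5.0, 5.0]
```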

    @inproceedings{DFRS17,
      author    = {Matthias Feldotto and Maximilian Drees and S{\"o}ren Riechers and Alexander Skopalik},
      title     = {Pure Nash Equilibria in Restricted Budget Games},
      booktitle = {Proceedings of the 23rd International Computing and Combinatorics Conference (COCOON)},
      year      = {2017},
      series    = {LNCS},
      publisher = {Springer},
      note      = {to appear},
      abstract  = {In budget games, players compete over resources with finite budgets. For every resource, a player has a specific demand and as a strategy, he chooses a subset of resources. If the total demand on a resource does not exceed its budget, the utility of each player who chose that resource equals his demand. Otherwise, the budget is shared proportionally. In the general case, pure Nash equilibria (NE) do not exist for such games. In this paper, we consider the natural classes of singleton and matroid budget games with additional constraints and show that for each, pure NE can be guaranteed. In addition, we introduce a lexicographical potential function to prove that every matroid budget game has an approximate pure NE which depends on the largest ratio between the different demands of each individual player.}
    }


Angelika Endres, Sonja Brangewitz, Behnud Djawadi, Britta Hoyer:

**Network Formation and Disruption - An Experiment: Are efficient networks too complex?** (2017)

Technical report, University of Paderborn.

We experimentally study the emergence of networks under a known external threat. More specifically, we deal with the question of whether subjects in the role of a strategic Designer are able to form safe and efficient networks while facing a strategic Adversary who is going to attack their networks. This investigation relates theoretical predictions by Dziubinski and Goyal (2013) to actual observed behaviour. Varying the costs for protecting nodes, we designed and tested two treatments with different predictions for the equilibrium network. Furthermore, the influence of the subjects' farsightedness on their decision-making process was elicited and analysed. We find that while subjects are able to build safe networks in both treatments, equilibrium networks are only built in one of the two treatments. In the other treatment, predominantly safe networks are built, but they are not efficient. Additionally, we find that farsightedness, as measured in our experiment, has no influence on whether subjects are able to build safe or efficient networks.

    @techreport{BDEH2017,
      author      = {Angelika Endres and Sonja Brangewitz and Behnud Djawadi and Britta Hoyer},
      title       = {Network Formation and Disruption - An Experiment: Are efficient networks too complex?},
      year        = {2017},
      type        = {Technical Report},
      institution = {University of Paderborn},
      abstract    = {We experimentally study the emergence of networks under a known external threat. More specifically, we deal with the question of whether subjects in the role of a strategic Designer are able to form safe and efficient networks while facing a strategic Adversary who is going to attack their networks. This investigation relates theoretical predictions by Dziubinski and Goyal (2013) to actual observed behaviour. Varying the costs for protecting nodes, we designed and tested two treatments with different predictions for the equilibrium network. Furthermore, the influence of the subjects' farsightedness on their decision-making process was elicited and analysed. We find that while subjects are able to build safe networks in both treatments, equilibrium networks are only built in one of the two treatments. In the other treatment, predominantly safe networks are built, but they are not efficient. Additionally, we find that farsightedness, as measured in our experiment, has no influence on whether subjects are able to build safe or efficient networks.}
    }


Linghui Luo:

**MultiSkipList: A Self-stabilizing Overlay Network with Monotonic Searchability maintained** (2017)

Master's thesis, University of Paderborn.

    @mastersthesis{Luo2017,
      author = {Linghui Luo},
      title  = {MultiSkipList: A Self-stabilizing Overlay Network with Monotonic Searchability maintained},
      school = {University of Paderborn},
      year   = {2017}
    }


Laura Niggmeyer:

**Kartellabsprachen und vertikale Preisbindungen - Eine wettbewerbspolitische Analyse am Beispiel der Lebensmittelindustrie in Deutschland** (Cartel Agreements and Vertical Price Fixing - A Competition Policy Analysis Using the Example of the German Food Industry) (2017)

Bachelor thesis, University of Paderborn.

    @misc{LN2017,
      author = {Laura Niggmeyer},
      title  = {Kartellabsprachen und vertikale Preisbindungen - Eine wettbewerbspolitische Analyse am Beispiel der Lebensmittelindustrie in Deutschland},
      year   = {2017},
      note   = {Bachelor thesis, University of Paderborn}
    }


Steffen Zimmermann, Philipp Herrmann, Dennis Kundisch, Barry Nault:

**Decomposing the Variance of Online Consumer Ratings and the Impact on Price and Demand** (2017)

Contribution at: Workshop Theory in Economics of Information Systems (TEIS), Sonoma, USA.

Consumer ratings play a decisive role in purchases by online shoppers. Although the effects of the average and the number of consumer ratings on future product pricing and demand have been studied with some conclusive results, the effects of the variance of these ratings are less well understood. We develop a model which considers durable goods that are characterized by three types of attributes: search attributes, experience attributes, and transformed attributes; the latter are conventional experience attributes that are transformed by consumer ratings into attributes that can be searched. Using informed search attributes to refer to the combination of search attributes and transformed attributes, we consider two sources of variance of consumer ratings: taste differences about informed search attributes and quality differences in the form of product failure representing experience attributes. We find that (i) optimal price increases and demand decreases in variance caused by informed search attributes, (ii) optimal price and demand decrease in variance caused by experience attributes, and (iii) holding the average rating as well as the total variance constant, for products with low total variance, price and demand increase in the relative share of variance caused by informed search attributes. Counter to intuition, we demonstrate that risk-averse consumers may prefer a higher-priced product with a higher variance in ratings when deciding between two similar products with the same average rating. Finally, our model provides a theoretical explanation for the empirically observed J-shaped distribution of consumer ratings in e-commerce that differs from established explanations.

    @misc{TEIS_2017,
      author   = {Steffen Zimmermann and Philipp Herrmann and Dennis Kundisch and Barry Nault},
      title    = {Decomposing the Variance of Online Consumer Ratings and the Impact on Price and Demand},
      year     = {2017},
      note     = {Contribution at: Workshop Theory in Economics of Information Systems (TEIS), Sonoma, USA},
      abstract = {Consumer ratings play a decisive role in purchases by online shoppers. Although the effects of the average and the number of consumer ratings on future product pricing and demand have been studied with some conclusive results, the effects of the variance of these ratings are less well understood. We develop a model which considers durable goods that are characterized by three types of attributes: search attributes, experience attributes, and transformed attributes; the latter are conventional experience attributes that are transformed by consumer ratings into attributes that can be searched. Using informed search attributes to refer to the combination of search attributes and transformed attributes, we consider two sources of variance of consumer ratings: taste differences about informed search attributes and quality differences in the form of product failure representing experience attributes. We find that (i) optimal price increases and demand decreases in variance caused by informed search attributes, (ii) optimal price and demand decrease in variance caused by experience attributes, and (iii) holding the average rating as well as the total variance constant, for products with low total variance, price and demand increase in the relative share of variance caused by informed search attributes. Counter to intuition, we demonstrate that risk-averse consumers may prefer a higher-priced product with a higher variance in ratings when deciding between two similar products with the same average rating. Finally, our model provides a theoretical explanation for the empirically observed J-shaped distribution of consumer ratings in e-commerce that differs from established explanations.}
    }


Matthias Feldotto, Lennart Leder, Alexander Skopalik:

**Congestion Games with Complementarities** (2017)

In Proceedings of the 10th International Conference on Algorithms and Complexity (CIAC). Springer, LNCS, vol. 10236, pp. 222-233.

We study a model of selfish resource allocation that seeks to incorporate dependencies among resources as they exist in modern networked environments. Our model is inspired by utility functions with constant elasticity of substitution (CES), a well-studied model in economics. We consider congestion games with different aggregation functions. In particular, we study $L_p$ norms and analyze the existence and complexity of (approximate) pure Nash equilibria. Additionally, we give an almost tight characterization based on monotonicity properties to describe the set of aggregation functions that guarantee the existence of pure Nash equilibria.
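The $L_p$-norm aggregation mentioned in the abstract can be illustrated with a minimal sketch (our own example; the paper's precise cost definitions may differ):

```python
def lp_cost(resource_costs, p):
    """Aggregate a player's per-resource costs with an L_p norm.
    p = 1 recovers the plain sum used in classical congestion games;
    larger p puts more weight on the most congested resource."""
    return sum(c ** p for c in resource_costs) ** (1.0 / p)

lp_cost([3, 4], 1)  # 7.0, the classical additive cost
lp_cost([3, 4], 2)  # 5.0, Euclidean aggregation
```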

[Show BibTeX] @inproceedings{FLS17,

author = {Matthias Feldotto AND Lennart Leder AND Alexander Skopalik},

title = {Congestion Games with Complementarities},

booktitle = {Proceedings of the 10th International Conference on Algorithms and Complexity (CIAC)},

year = {2017},

pages = {222--233},

publisher = {Springer},

abstract = {We study a model of selfish resource allocation that seeks to incorporate dependencies among resources as they exist in modern networked environments. Our model is inspired by utility functions with constant elasticity of substitution (CES), which is a well-studied model in economics. We consider congestion games with different aggregation functions. In particular, we study $L_p$ norms and analyze the existence and complexity of (approximate) pure Nash equilibria. Additionally, we give an almost tight characterization based on monotonicity properties to describe the set of aggregation functions that guarantee the existence of pure Nash equilibria.},

series = {LNCS},

volume = {10236}

}


Marie-Christine Jakobs, Julia Krämer, Dirk Van Straaten, Theodor Lettmann:


**Certification Matters for Service Markets**In Marcelo De Barros, Janusz Klink, Tadeus Uhl, Thomas Prinz (eds.): The Ninth International Conferences on Advanced Service Computing SERVICE COMPUTATION 2017. IARIA XPS Press, pp. 7-12

**(2017)**[Show Abstract]

Whenever customers have to decide between different instances of the same product, they are interested in buying the best product. In contrast, companies are interested in reducing the construction effort (and usually, as a consequence thereof, the quality) to gain profit. The described setting is widely known as opposed preferences in quality of the product and also applies to the context of service-oriented computing. In general, service-oriented computing emphasizes the construction of large software systems out of existing services, where services are small and self-contained pieces of software that adhere to a specified interface. Several implementations of the same interface are considered as several instances of the same service. Thereby, customers are interested in buying the best service implementation for their service composition with respect to metrics such as costs, energy, memory consumption, or execution time. One way to ensure the service quality is to employ certificates, which can come in different kinds: Technical certificates proving correctness can be automatically constructed by the service provider and again be automatically checked by the user. Digital certificates allow proof of the integrity of a product. Other certificates might be rolled out if service providers follow a good software construction principle, which is checked in annual audits. Whereas all of these certificates are handled differently in service markets, what they have in common is that they influence the buying decisions of customers. In this paper, we review state-of-the-art developments in certification with respect to service-oriented computing. We not only discuss how certificates are constructed and handled in service-oriented computing but also review the effects of certificates on the market from an economic perspective.

[Show BibTeX] @inproceedings{JKvSL2017,

author = {Marie-Christine Jakobs AND Julia Kr{\"a}mer AND Dirk Van Straaten AND Theodor Lettmann},

title = {Certification Matters for Service Markets},

booktitle = {The Ninth International Conferences on Advanced Service Computing SERVICE COMPUTATION 2017},

year = {2017},

editor = {Marcelo De Barros AND Janusz Klink AND Tadeus Uhl AND Thomas Prinz},

pages = {7--12},

publisher = {IARIA XPS Press},

abstract = {Whenever customers have to decide between different instances of the same product, they are interested in buying the best product. In contrast, companies are interested in reducing the construction effort (and usually, as a consequence thereof, the quality) to gain profit. The described setting is widely known as opposed preferences in quality of the product and also applies to the context of service-oriented computing. In general, service-oriented computing emphasizes the construction of large software systems out of existing services, where services are small and self-contained pieces of software that adhere to a specified interface. Several implementations of the same interface are considered as several instances of the same service. Thereby, customers are interested in buying the best service implementation for their service composition with respect to metrics such as costs, energy, memory consumption, or execution time. One way to ensure the service quality is to employ certificates, which can come in different kinds: Technical certificates proving correctness can be automatically constructed by the service provider and again be automatically checked by the user. Digital certificates allow proof of the integrity of a product. Other certificates might be rolled out if service providers follow a good software construction principle, which is checked in annual audits. Whereas all of these certificates are handled differently in service markets, what they have in common is that they influence the buying decisions of customers. In this paper, we review state-of-the-art developments in certification with respect to service-oriented computing. We not only discuss how certificates are constructed and handled in service-oriented computing but also review the effects of certificates on the market from an economic perspective.}

}


Darius Schlangenotto, Dennis Kundisch:


**Achieving more by saying less? On the Moderating Effect of Information Cues in Paid Search**In Proceedings of the 50th Annual Hawaii International Conference on System Sciences (HICSS), Waikoloa Village, HI, USA. AIS Electronic Library (AISeL)

**(2017)**[Show Abstract]

Ad copy design is well studied in the context of offline marketing. However, researchers have only recently started to investigate ad copies in the context of paid search, and have not yet explored the potential of information cues to enhance customers’ search process. In this paper we analyze the impact of an information cue in ad copies on user behavior. Contrary to prevalent advice, results suggest that reducing the number of words in an ad is not always beneficial. Users act quite differently (and unexpectedly) in response to an information cue depending on their search phrases. In turn, practitioners could leverage the observed moderating effect of an information cue to enhance paid search success. Furthermore, having detected deviating user behavior in terms of clicks and conversions, we provide first indicative evidence of a self-selection mechanism at play when paid search users respond to differently phrased ad copies.

[Show BibTeX] @inproceedings{information_cues,

author = {Darius Schlangenotto AND Dennis Kundisch},

title = {Achieving more by saying less? On the Moderating Effect of Information Cues in Paid Search},

booktitle = {Proceedings of the 50th Annual Hawaii International Conference on System Sciences (HICSS), Waikoloa Village, HI, USA},

year = {2017},

publisher = {{AIS} Electronic Library (AISeL)},

abstract = {Ad copy design is well studied in the context of offline marketing. However, researchers have only recently started to investigate ad copies in the context of paid search, and have not yet explored the potential of information cues to enhance customers’ search process. In this paper we analyze the impact of an information cue in ad copies on user behavior. Contrary to prevalent advice, results suggest that reducing the number of words in an ad is not always beneficial. Users act quite differently (and unexpectedly) in response to an information cue depending on their search phrases. In turn, practitioners could leverage the observed moderating effect of an information cue to enhance paid search success. Furthermore, having detected deviating user behavior in terms of clicks and conversions, we provide first indicative evidence of a self-selection mechanism at play when paid search users respond to differently phrased ad copies.}

}


Juergen Neumann, Dominik Gutt:


**A Homeowner’s Guide to Airbnb: Theory and Empirical Evidence for Optimal Pricing Conditional on Online Ratings**In Proceedings of the Twenty-Fifth European Conference on Information Systems (ECIS), Guimaraes.

**(2017)**[Show Abstract]

Optimal price setting in peer-to-peer markets featuring online ratings requires incorporating interactions between prices and ratings. Additionally, recent literature reports that online ratings in peer-to-peer markets tend to be inflated overall, undermining the reliability of online ratings as a quality signal. This study proposes a two-period model for optimal price setting that takes (potentially inflated) ratings into account. Our theoretical findings suggest that sellers in the medium-quality segment have an incentive to lower first-period prices to monetize on increased second-period ratings and that the possibility of monetizing on second-period ratings depends on the reliability of the rating system. Additionally, we find that total profits and prices increase with online ratings and additional quality signals. Empirically, conducting Difference-in-Differences regressions on a comprehensive panel data set from Airbnb, we can validate that price increases lead to lower ratings, and we find empirical support for the prediction that additional quality signals increase prices. Our work comes with substantial implications for sellers in peer-to-peer markets looking for an optimal price setting strategy. Moreover, we argue that our theoretical finding on the weights between online ratings and additional quality signals translates to conventional online markets.

[Show BibTeX] @inproceedings{HGAirbnb_JN-DG_2017,

author = {Juergen Neumann AND Dominik Gutt},

title = {A Homeowner’s Guide to Airbnb: Theory and Empirical Evidence for Optimal Pricing Conditional on Online Ratings},

booktitle = {Proceedings of the Twenty-Fifth European Conference on Information Systems (ECIS), Guimaraes},

year = {2017},

abstract = {Optimal price setting in peer-to-peer markets featuring online ratings requires incorporating interactions between prices and ratings. Additionally, recent literature reports that online ratings in peer-to-peer markets tend to be inflated overall, undermining the reliability of online ratings as a quality signal. This study proposes a two-period model for optimal price setting that takes (potentially inflated) ratings into account. Our theoretical findings suggest that sellers in the medium-quality segment have an incentive to lower first-period prices to monetize on increased second-period ratings and that the possibility of monetizing on second-period ratings depends on the reliability of the rating system. Additionally, we find that total profits and prices increase with online ratings and additional quality signals. Empirically, conducting Difference-in-Differences regressions on a comprehensive panel data set from Airbnb, we can validate that price increases lead to lower ratings, and we find empirical support for the prediction that additional quality signals increase prices. Our work comes with substantial implications for sellers in peer-to-peer markets looking for an optimal price setting strategy. Moreover, we argue that our theoretical finding on the weights between online ratings and additional quality signals translates to conventional online markets.}

}


**2016** (30)

Katharina Bernhardt:


**Zertifikate als Qualitätssignal – Wie die Zertifizierung von Produkten und Verkäufern das Vertrauen von Kunden im Onlinehandel beeinflussen**Bachelor thesis, Paderborn University

**(2016)**[Show BibTeX]

@misc{Bernhardt2016,

author = {Katharina Bernhardt},

title = {Zertifikate als Qualit{\"a}tssignal – Wie die Zertifizierung von Produkten und Verk{\"a}ufern das Vertrauen von Kunden im Onlinehandel beeinflussen},

year = {2016},

note = {Bachelor thesis, Paderborn University}

}


Annabel Holzmann:


**Wenn 1+1 nicht 2 ergibt – Gestaltungsmöglichkeiten Einzelbewertungen in Reputationssystemen zu Gesamtbewertungen zu aggregieren**Bachelor thesis, Paderborn University

**(2016)**[Show BibTeX]

@misc{Holzmann2016,

author = {Annabel Holzmann},

title = {Wenn 1+1 nicht 2 ergibt – Gestaltungsm{\"o}glichkeiten Einzelbewertungen in Reputationssystemen zu Gesamtbewertungen zu aggregieren},

year = {2016},

note = {Bachelor thesis, Paderborn University}

}


Sebastian Abshoff, Peter Kling, Christine Markarian, Friedhelm Meyer auf der Heide, Peter Pietrzyk:


**Towards the price of leasing online**In *Journal of Combinatorial Optimization*, vol. 32, no. 4, pp. 1197-1216. **(2016)**[Show Abstract]

We consider online optimization problems in which certain goods have to be acquired in order to provide a service or infrastructure. Classically, decisions for such problems are considered as final: one buys the goods. However, in many real-world applications, there is a shift away from the idea of buying goods. Instead, leasing is often a more flexible and lucrative business model. Research has realized this shift and recently initiated the theoretical study of leasing models (Anthony and Gupta in Proceedings of the integer programming and combinatorial optimization: 12th International IPCO Conference, Ithaca, NY, USA, June 25–27, 2007; Meyerson in Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2005), 23–25 Oct 2005, Pittsburgh, PA, USA, 2005; Nagarajan and Williamson in Discret Optim 10(4):361–370, 2013). We extend this line of work and suggest a more systematic study of leasing aspects for a class of online optimization problems. We provide two major technical results. We introduce the leasing variant of online set multicover and give an $O(\log(mK)\log n)$-competitive algorithm (with $n$, $m$, and $K$ being the number of elements, sets, and leases, respectively). Our results also imply improvements for the non-leasing variant of online set cover. Moreover, we extend results for the leasing variant of online facility location. Nagarajan and Williamson (Discret Optim 10(4):361–370, 2013) gave an $O(K\log n)$-competitive algorithm for this problem (with $n$ and $K$ being the number of clients and leases, respectively). We remove the dependency on $n$ (and, thereby, on time). In general, this leads to a bound of $O(l_{\max}\log l_{\max})$ (with the maximal lease length $l_{\max}$). For many natural problem instances, the bound improves to $O(K^2)$.

[Show BibTeX] @article{AKMMP17,

author = {Sebastian Abshoff AND Peter Kling AND Christine Markarian AND Friedhelm Meyer auf der Heide AND Peter Pietrzyk},

title = {Towards the price of leasing online},

journal = {Journal of Combinatorial Optimization},

year = {2016},

volume = {32},

number = {4},

pages = {1197--1216},

abstract = {We consider online optimization problems in which certain goods have to be acquired in order to provide a service or infrastructure. Classically, decisions for such problems are considered as final: one buys the goods. However, in many real-world applications, there is a shift away from the idea of buying goods. Instead, leasing is often a more flexible and lucrative business model. Research has realized this shift and recently initiated the theoretical study of leasing models (Anthony and Gupta in Proceedings of the integer programming and combinatorial optimization: 12th International IPCO Conference, Ithaca, NY, USA, June 25–27, 2007; Meyerson in Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2005), 23–25 Oct 2005, Pittsburgh, PA, USA, 2005; Nagarajan and Williamson in Discret Optim 10(4):361–370, 2013). We extend this line of work and suggest a more systematic study of leasing aspects for a class of online optimization problems. We provide two major technical results. We introduce the leasing variant of online set multicover and give an $O(\log(mK)\log n)$-competitive algorithm (with $n$, $m$, and $K$ being the number of elements, sets, and leases, respectively). Our results also imply improvements for the non-leasing variant of online set cover. Moreover, we extend results for the leasing variant of online facility location. Nagarajan and Williamson (Discret Optim 10(4):361–370, 2013) gave an $O(K\log n)$-competitive algorithm for this problem (with $n$ and $K$ being the number of clients and leases, respectively). We remove the dependency on $n$ (and, thereby, on time). In general, this leads to a bound of $O(l_{\max}\log l_{\max})$ (with the maximal lease length $l_{\max}$). For many natural problem instances, the bound improves to $O(K^2)$.}

}


Christian Scheideler, Alexander Setzer, Thim Strothmann:


**Towards a Universal Approach for Monotonic Searchability in Self-stabilizing Overlay Networks**In Proceedings of the 30th International Symposium on Distributed Computing (DISC). Springer, LNCS, vol. 9888, pp. 71-84

**(2016)**[Show Abstract]

For overlay networks, the ability to recover from a variety of problems like membership changes or faults is a key element to preserve their functionality. In recent years, various self-stabilizing overlay networks have been proposed that have the advantage of being able to recover from any illegal state. However, the vast majority of these networks cannot give any guarantees on their functionality while the recovery process is going on. We are especially interested in searchability, i.e., the functionality that search messages for a specific identifier are answered successfully if a node with that identifier exists in the network. We investigate overlay networks that are not only self-stabilizing but that also ensure that monotonic searchability is maintained while the recovery process is going on, as long as there are no corrupted messages in the system. More precisely, once a search message from node u to another node v is successfully delivered, all future search messages from u to v succeed as well. Monotonic searchability was recently introduced in OPODIS 2015, in which the authors provide a solution for a simple line topology.

We present the first universal approach to maintain monotonic searchability that is applicable to a wide range of topologies. As the base for our approach, we introduce a set of primitives for manipulating overlay networks that allows us to maintain searchability and show how existing protocols can be transformed to use these primitives.

We complement this result with a generic search protocol that together with the use of our primitives guarantees monotonic searchability.

As an additional feature, searching existing nodes with the generic search protocol is as fast as searching a node with any other fixed routing protocol once the topology has stabilized.

[Show BibTeX] @inproceedings{disc2016sss,

author = {Christian Scheideler AND Alexander Setzer AND Thim Strothmann},

title = {Towards a Universal Approach for Monotonic Searchability in Self-stabilizing Overlay Networks},

booktitle = {Proceedings of the 30th International Symposium on Distributed Computing (DISC)},

year = {2016},

pages = {71--84},

publisher = {Springer},

abstract = {For overlay networks, the ability to recover from a variety of problems like membership changes or faults is a key element to preserve their functionality. In recent years, various self-stabilizing overlay networks have been proposed that have the advantage of being able to recover from any illegal state. However, the vast majority of these networks cannot give any guarantees on their functionality while the recovery process is going on. We are especially interested in searchability, i.e., the functionality that search messages for a specific identifier are answered successfully if a node with that identifier exists in the network. We investigate overlay networks that are not only self-stabilizing but that also ensure that monotonic searchability is maintained while the recovery process is going on, as long as there are no corrupted messages in the system. More precisely, once a search message from node u to another node v is successfully delivered, all future search messages from u to v succeed as well. Monotonic searchability was recently introduced in OPODIS 2015, in which the authors provide a solution for a simple line topology. We present the first universal approach to maintain monotonic searchability that is applicable to a wide range of topologies. As the base for our approach, we introduce a set of primitives for manipulating overlay networks that allows us to maintain searchability and show how existing protocols can be transformed to use these primitives. We complement this result with a generic search protocol that together with the use of our primitives guarantees monotonic searchability. As an additional feature, searching existing nodes with the generic search protocol is as fast as searching a node with any other fixed routing protocol once the topology has stabilized.},

series = {LNCS},

volume = {9888}

}


Burkhard Monien, Marios Mavronicolas:

**The complexity of equilibria for risk-modeling valuations** In *Theoretical Computer Science*, vol. 634, pp. 67-96. Elsevier **(2016)** [Show Abstract]

Following the direction pioneered by Fiat and Papadimitriou in their 2010 paper [12], we study the complexity of deciding the existence of mixed equilibria for minimization games where players use valuations other than expectation to evaluate their costs. We consider risk-averse players seeking to minimize the sum V=E+R of expectation E and a risk valuation R of their costs; R is non-negative and vanishes exactly when the cost incurred to a player is constant over all choices of strategies by the other players. In a V-equilibrium, no player could unilaterally reduce her cost.

Say that V has the Weak-Equilibrium-for-Expectation property if all strategies supported in a player's best-response mixed strategy incur the same conditional expectation of her cost. We introduce E-strict concavity and observe that every E-strictly concave valuation has the Weak-Equilibrium-for-Expectation property. We focus on a broad class of valuations shown to have the Weak-Equilibrium-for-Expectation property, which we exploit to prove two main complexity results, the first of their kind, for the two simplest cases of the problem:

• Two strategies: Deciding the existence of a V-equilibrium is strongly NP-hard for the restricted class of player-specific scheduling games on two ordered links [22], when choosing R as (1) Var (variance), or (2) SD (standard deviation), or (3) a concave linear sum of even moments of small order.

• Two players: Deciding the existence of a V-equilibrium is strongly NP-hard when choosing R as (1) γ⋅Var, or (2) γ⋅SD, where γ>0 is the risk coefficient, or choosing V as (3) a convex combination of E+γ⋅Var and the concave ν-valuation ν⁻¹(E(ν(⋅))), where ν(x)=x^r, with r≥2. This is a concrete consequence of a general strong NP-hardness result that only needs the Weak-Equilibrium-for-Expectation property and a few additional properties for V; its proof involves a reduction with a single parameter, which can be chosen efficiently so that each valuation satisfies the additional properties.

[Show BibTeX]

@article{MM2016,

author = {Burkhard Monien AND Marios Mavronicolas},

title = {The complexity of equilibria for risk-modeling valuations},

journal = {Theoretical Computer Science},

year = {2016},

volume = {634},

pages = {67--96},

abstract = {Following the direction pioneered by Fiat and Papadimitriou in their 2010 paper [12], we study the complexity of deciding the existence of mixed equilibria for minimization games where players use valuations other than expectation to evaluate their costs. We consider risk-averse players seeking to minimize the sum V=E+R of expectation E and a risk valuation R of their costs; R is non-negative and vanishes exactly when the cost incurred to a player is constant over all choices of strategies by the other players. In a V-equilibrium, no player could unilaterally reduce her cost. Say that V has the Weak-Equilibrium-for-Expectation property if all strategies supported in a player's best-response mixed strategy incur the same conditional expectation of her cost. We introduce E-strict concavity and observe that every E-strictly concave valuation has the Weak-Equilibrium-for-Expectation property. We focus on a broad class of valuations shown to have the Weak-Equilibrium-for-Expectation property, which we exploit to prove two main complexity results, the first of their kind, for the two simplest cases of the problem: (i) Two strategies: Deciding the existence of a V-equilibrium is strongly NP-hard for the restricted class of player-specific scheduling games on two ordered links [22], when choosing R as (1) Var (variance), or (2) SD (standard deviation), or (3) a concave linear sum of even moments of small order. (ii) Two players: Deciding the existence of a V-equilibrium is strongly NP-hard when choosing R as (1) γ⋅Var, or (2) γ⋅SD, where γ>0 is the risk coefficient, or choosing V as (3) a convex combination of E+γ⋅Var and the concave ν-valuation ν⁻¹(E(ν(⋅))), where ν(x)=x^r, with r≥2. This is a concrete consequence of a general strong NP-hardness result that only needs the Weak-Equilibrium-for-Expectation property and a few additional properties for V; its proof involves a reduction with a single parameter, which can be chosen efficiently so that each valuation satisfies the additional properties.}

}

[DOI]

Matthias Feldotto, Kalman Graffi:

**Systematic evaluation of peer-to-peer systems using PeerfactSim.KOM** In *Concurrency and Computation: Practice and Experience*, vol. 28, no. 5, pp. 1655-1677. **(2016)** [Show Abstract]

Comparative evaluations of peer-to-peer protocols through simulations are a viable approach to judge the performance and costs of the individual protocols in large-scale networks. In order to support this work, we present the peer-to-peer system simulator PeerfactSim.KOM, which we extended over the last years. PeerfactSim.KOM comes with an extensive layer model to support various facets and protocols of peer-to-peer networking. In this article, we describe PeerfactSim.KOM and show how it can be used for detailed measurements of large-scale peer-to-peer networks. We enhanced PeerfactSim.KOM with a fine-grained analyzer concept, with exhaustive automated measurements and gnuplot generators as well as a coordination control to evaluate sets of experiment setups in parallel. Thus, by configuring all experiments and protocols only once and starting the simulator, all desired measurements are performed, analyzed, evaluated, and combined, resulting in a holistic environment for the comparative evaluation of peer-to-peer systems. An immediate comparison of different configurations and overlays under different aspects is possible directly after the execution without any manual post-processing.

[Show BibTeX] @article{FG16,

author = {Matthias Feldotto AND Kalman Graffi},

title = {Systematic evaluation of peer-to-peer systems using PeerfactSim.KOM},

journal = {Concurrency and Computation: Practice and Experience},

year = {2016},

volume = {28},

number = {5},

pages = {1655--1677},

month = {April},

abstract = {Comparative evaluations of peer-to-peer protocols through simulations are a viable approach to judge the performance and costs of the individual protocols in large-scale networks. In order to support this work, we present the peer-to-peer system simulator PeerfactSim.KOM, which we extended over the last years. PeerfactSim.KOM comes with an extensive layer model to support various facets and protocols of peer-to-peer networking. In this article, we describe PeerfactSim.KOM and show how it can be used for detailed measurements of large-scale peer-to-peer networks. We enhanced PeerfactSim.KOM with a fine-grained analyzer concept, with exhaustive automated measurements and gnuplot generators as well as a coordination control to evaluate sets of experiment setups in parallel. Thus, by configuring all experiments and protocols only once and starting the simulator, all desired measurements are performed, analyzed, evaluated, and combined, resulting in a holistic environment for the comparative evaluation of peer-to-peer systems. An immediate comparison of different configurations and overlays under different aspects is possible directly after the execution without any manual post-processing. }

}

[DOI]

Sonja Brangewitz, Sarah Brockhoff:

**Sustainability of Coalitional Equilibria within Repeated Tax Competition** In *European Journal of Political Economy*. Elsevier **(2016)** (in press) [Show Abstract]

This paper analyzes the sustainability of capital tax harmonization agreements in a stylized model where countries have formed coalitions to agree on a common tax rate in order to avoid the inefficient, fully non-cooperative Nash equilibrium. In particular, for a given coalition structure we study to what extent the sustainability of tax agreements is affected by the coalitions that have formed. In our setup, countries are symmetric, but coalitions can be of arbitrary size. We analyze sustainability by means of a repeated game setting employing simple trigger strategies and we allow a sub-coalition to deviate from the coalitional equilibrium. For a given form of punishment we rank the sustainability of different coalition structures. We show that sub-coalitions consisting of singleton regions have the largest incentives to deviate and that the sustainability of cooperation depends on the degree of cooperative behavior ex-ante. Bilateral agreements between pairs of regions turn out to be the form of cooperation that is the easiest to sustain.

[Show BibTeX] @article{SBSB2016,

author = {Sonja Brangewitz AND Sarah Brockhoff},

title = {Sustainability of Coalitional Equilibria within Repeated Tax Competition},

journal = {European Journal of Political Economy},

year = {2016},

note = {in press},

abstract = {This paper analyzes the sustainability of capital tax harmonization agreements in a stylized model where countries have formed coalitions to agree on a common tax rate in order to avoid the inefficient, fully non-cooperative Nash equilibrium. In particular, for a given coalition structure we study to what extent the sustainability of tax agreements is affected by the coalitions that have formed. In our setup, countries are symmetric, but coalitions can be of arbitrary size. We analyze sustainability by means of a repeated game setting employing simple trigger strategies and we allow a sub-coalition to deviate from the coalitional equilibrium. For a given form of punishment we rank the sustainability of different coalition structures. We show that sub-coalitions consisting of singleton regions have the largest incentives to deviate and that the sustainability of cooperation depends on the degree of cooperative behavior ex-ante. Bilateral agreements between pairs of regions turn out to be the form of cooperation that is the easiest to sustain.}

}

[DOI]

Maximilian Drees, Björn Feldkord, Alexander Skopalik:

**Strategic Online Facility Location** In Proceedings of the 10th Annual International Conference on Combinatorial Optimization and Applications (COCOA). Springer, LNCS, vol. 10043, pp. 593-607. **(2016)** [Show Abstract]

In this paper we consider a strategic variant of the online facility location problem. Given is a graph in which each node serves two roles: it is a strategic client stating requests as well as a potential location for a facility. In each time step one client states a request which induces private costs equal to the distance to the closest facility. Before serving, the clients may collectively decide to open new facilities, sharing the corresponding price. Instead of optimizing the global costs, each client acts selfishly. The prices of new facilities vary between nodes and also change over time, but are always bounded by some fixed value α. Both the requests as well as the facility prices are given by an online sequence and are not known in advance.

We characterize the optimal strategies of the clients and analyze their overall performance in comparison to a centralized offline solution. If all players optimize their own competitiveness, the global performance of the system is O(√α⋅α) times worse than the offline optimum. A restriction to a natural subclass of strategies improves this result to O(α). We also show that for fixed facility costs, we can find strategies such that this bound further improves to O(√α).

[Show BibTeX]

@inproceedings{SOFL16,

author = {Maximilian Drees AND Bj{\"o}rn Feldkord AND Alexander Skopalik},

title = {Strategic Online Facility Location},

booktitle = {Proceedings of the 10th Annual International Conference on Combinatorial Optimization and Applications (COCOA)},

year = {2016},

pages = {593--607},

publisher = {Springer},

abstract = {In this paper we consider a strategic variant of the online facility location problem. Given is a graph in which each node serves two roles: it is a strategic client stating requests as well as a potential location for a facility. In each time step one client states a request which induces private costs equal to the distance to the closest facility. Before serving, the clients may collectively decide to open new facilities, sharing the corresponding price. Instead of optimizing the global costs, each client acts selfishly. The prices of new facilities vary between nodes and also change over time, but are always bounded by some fixed value α. Both the requests as well as the facility prices are given by an online sequence and are not known in advance. We characterize the optimal strategies of the clients and analyze their overall performance in comparison to a centralized offline solution. If all players optimize their own competitiveness, the global performance of the system is O(√α⋅α) times worse than the offline optimum. A restriction to a natural subclass of strategies improves this result to O(α). We also show that for fixed facility costs, we can find strategies such that this bound further improves to O(√α).},

series = {LNCS}

}

[DOI]

Andreas Cord-Landwehr:

**Selfish Network Creation - On Variants of Network Creation Games** PhD thesis, University of Paderborn **(2016)** [Show BibTeX]

@phdthesis{PhDCord-Landwehr,

author = {Andreas Cord-Landwehr},

title = {Selfish Network Creation - On Variants of Network Creation Games},

school = {University of Paderborn},

year = {2016}

}

[DOI]

Tobias Harks, Martin Höfer, Kevin Schewior, Alexander Skopalik:

**Routing Games With Progressive Filling** In *IEEE/ACM Transactions on Networking*, vol. 24, no. 4, pp. 2553-2562. **(2016)** [Show Abstract]

Max-min fairness (MMF) is a widely known approach to a fair allocation of bandwidth to each of the users in a network. This allocation can be computed by uniformly raising the bandwidths of all users without violating capacity constraints. We consider an extension of these allocations by raising the bandwidth with arbitrary and not necessarily uniform time-depending velocities (allocation rates). These allocations are used in a game-theoretic context for routing choices, which we formalize in progressive filling games (PFGs). We present a variety of results for equilibria in PFGs. We show that these games possess pure Nash and strong equilibria. While computation in general is NP-hard, there are polynomial-time algorithms for prominent classes of Max-Min-Fair Games (MMFG), including the case when all users have the same source-destination pair. We characterize prices of anarchy and stability for pure Nash and strong equilibria in PFGs and MMFGs when players have different or the same source-destination pairs. In addition, we show that when a designer can adjust allocation rates, it is possible to design games with optimal strong equilibria. Some initial results on polynomial-time algorithms in this direction are also derived.

[Show BibTeX]

@article{HHSS16,

author = {Tobias Harks AND Martin H{\"o}fer AND Kevin Schewior AND Alexander Skopalik},

title = {Routing Games With Progressive Filling},

journal = {IEEE/ACM Transactions on Networking},

year = {2016},

volume = {24},

number = {4},

pages = {2553--2562},

abstract = {Max-min fairness (MMF) is a widely known approach to a fair allocation of bandwidth to each of the users in a network. This allocation can be computed by uniformly raising the bandwidths of all users without violating capacity constraints. We consider an extension of these allocations by raising the bandwidth with arbitrary and not necessarily uniform time-depending velocities (allocation rates). These allocations are used in a game-theoretic context for routing choices, which we formalize in progressive filling games (PFGs). We present a variety of results for equilibria in PFGs. We show that these games possess pure Nash and strong equilibria. While computation in general is NP-hard, there are polynomial-time algorithms for prominent classes of Max-Min-Fair Games (MMFG), including the case when all users have the same source-destination pair. We characterize prices of anarchy and stability for pure Nash and strong equilibria in PFGs and MMFGs when players have different or the same source-destination pairs. In addition, we show that when a designer can adjust allocation rates, it is possible to design games with optimal strong equilibria. Some initial results on polynomial-time algorithms in this direction are also derived.}

}

[DOI]

Angelika Endres:

**On the Design and Defense of Networks - An Experimental Investigation** Master's thesis, Paderborn University **(2016)** [Show BibTeX]

@mastersthesis{AE2016,

author = {Angelika Endres},

title = {On the Design and Defense of Networks - An Experimental Investigation},

school = {Paderborn University},

year = {2016}

}


Dominik Gutt, Dennis Kundisch:

**Money Talks (Even) in the Sharing Economy: Empirical Evidence for Price Effects in Online Ratings as Quality Signals** In Proceedings of the Thirty Seventh International Conference on Information Systems (ICIS), Dublin, Ireland. Association for Information Systems **(2016)** [Show Abstract]

Recent literature reports concerns about implausibly high Overall ratings in the sharing economy, which undermines the credibility of this rating as a quality signal. This study empirically investigates the relationship between quality and price, commonly captured by the Value dimension in multidimensional rating systems, to reveal whether reviewers form a perception of quality that they then express in the Value dimension, rather than in the Overall rating. We test our hypotheses on a comprehensive panel dataset for 14,859 Airbnb listings in New York. Our preliminary empirical findings show that an increase in price leads to a significant and substantial decrease in the Value rating, suggesting that Value ratings can offer a valuable source of information for potential buyers in addition to the supposedly inflated Overall rating. Moreover, this mechanism has substantial implications for potential buyers who seek to evaluate a listing’s quality and for a seller’s price setting.

[Show BibTeX] @inproceedings{money_talks,

author = {Dominik Gutt AND Dennis Kundisch},

title = {Money Talks (Even) in the Sharing Economy: Empirical Evidence for Price Effects in Online Ratings as Quality Signals},

booktitle = {Proceedings of the Thirty Seventh International Conference on Information Systems (ICIS), Dublin, Ireland},

year = {2016},

publisher = {Association for Information Systems},

abstract = {Recent literature reports concerns about implausibly high Overall ratings in the sharing economy, which undermines the credibility of this rating as a quality signal. This study empirically investigates the relationship between quality and price, commonly captured by the Value dimension in multidimensional rating systems, to reveal whether reviewers form a perception of quality that they then express in the Value dimension, rather than in the Overall rating. We test our hypotheses on a comprehensive panel dataset for 14,859 Airbnb listings in New York. Our preliminary empirical findings show that an increase in price leads to a significant and substantial decrease in the Value rating, suggesting that Value ratings can offer a valuable source of information for potential buyers in addition to the supposedly inflated Overall rating. Moreover, this mechanism has substantial implications for potential buyers who seek to evaluate a listing’s quality and for a seller’s price setting. }

}

[DOI]

Ari Jubrail:


**Literaturüberblick zur Varianz in Kundenbewertungen auf Online Plattformen** Bachelor thesis, Paderborn University

**(2016)**

@misc{variancereview,

author = {Ari Jubrail},

title = {Literatur{\"u}berblick zur Varianz in Kundenbewertungen auf Online Plattformen},

year = {2016},

note = {Bachelor thesis, Paderborn University}

}


Christopher Schmidt:


**Kundenbewertungen im Online-Handel – Alles Betrug?** Bachelor thesis, Paderborn University

**(2016)**

@misc{Schmidt2016,

author = {Christopher Schmidt},

title = {Kundenbewertungen im Online-Handel – Alles Betrug?},

year = {2016},

note = {Bachelor thesis, Paderborn University}

}


Friedhelm Meyer auf der Heide, Peter Sanders, Nodari Sitchinava:


**Introduction to the Special Issue on SPAA 2014** In *Transactions on Parallel Computing (TOPC)*, vol. 3, no. 1, p. 1. ACM **(2016)**

@article{MPS2016,

author = {Friedhelm Meyer auf der Heide and Peter Sanders and Nodari Sitchinava},

title = {Introduction to the Special Issue on SPAA 2014},

journal = {Transactions on Parallel Computing (TOPC)},

year = {2016},

volume = {3},

number = {1},

pages = {1},

publisher = {ACM}

}

}

Tobias von Rechenberg, Dominik Gutt, Dennis Kundisch:


**Goals as Reference Points: Empirical Evidence from a Virtual Reward System** In *Decision Analysis*, vol. 13, no. 2, pp. 153-171. **(2016)**

Heath et al. (1999) propose a prospect theory model for goal behavior. Their analytical model is based on the assumption that goals inherit the main properties of the prospect theory value function, i.e., reference point dependence, loss aversion, and diminishing sensitivity. We investigate whether these main properties transfer to goal behavior in the field. We take user activity data from a gamified Question & Answer community and analyze how users adjust their contribution behavior in the days surrounding goal achievement, where goals are represented by badges. We find that users gradually increase their performance in the days prior to earning a badge, with performance peaking on the day of the promotion. In subsequent days, user performance gradually diminishes again, with the decline being strongest on the day immediately following the badge achievement. These findings reflect the characteristic S-shape of the prospect theory value function which is convex below the reference point and concave above it. Employing the target-based approach, we can interpret the value function as a cumulative density function of a unimodal probability distribution. Our results suggest that it is more likely that active members of the community focus on the next badge relative to the status already achieved, as their next goal and are less likely to focus on more remote (higher-ranked) badges. Our results thus support the transferability of the main properties of the prospect theory value function to goal behavior in the field and suggest a distinct shape of the value function around goals.

@article{ReferencePoints,

author = {Tobias von Rechenberg and Dominik Gutt and Dennis Kundisch},

title = {Goals as Reference Points: Empirical Evidence from a Virtual Reward System},

journal = {Decision Analysis},

year = {2016},

volume = {13},

number = {2},

pages = {153--171},

abstract = {Heath et al. (1999) propose a prospect theory model for goal behavior. Their analytical model is based on the assumption that goals inherit the main properties of the prospect theory value function, i.e., reference point dependence, loss aversion, and diminishing sensitivity. We investigate whether these main properties transfer to goal behavior in the field. We take user activity data from a gamified Question & Answer community and analyze how users adjust their contribution behavior in the days surrounding goal achievement, where goals are represented by badges. We find that users gradually increase their performance in the days prior to earning a badge, with performance peaking on the day of the promotion. In subsequent days, user performance gradually diminishes again, with the decline being strongest on the day immediately following the badge achievement. These findings reflect the characteristic S-shape of the prospect theory value function which is convex below the reference point and concave above it. Employing the target-based approach, we can interpret the value function as a cumulative density function of a unimodal probability distribution. Our results suggest that it is more likely that active members of the community focus on the next badge relative to the status already achieved, as their next goal and are less likely to focus on more remote (higher-ranked) badges. Our results thus support the transferability of the main properties of the prospect theory value function to goal behavior in the field and suggest a distinct shape of the value function around goals.}

}


Tristan Sassenberg:


**Gefälschte Online Bewertungen - Literaturüberblick** Bachelor thesis, Paderborn University

**(2016)**

@misc{SassenbergT,

author = {Tristan Sassenberg},

title = {Gef{\"a}lschte Online Bewertungen - Literatur{\"u}berblick},

year = {2016},

note = {Bachelor thesis, Paderborn University}

}


Maximilian Drees:


**Existence and Properties of Pure Nash Equilibria in Budget Games** PhD thesis, University of Paderborn

**(2016)**

@phdthesis{PhDDrees,

author = {Maximilian Drees},

title = {Existence and Properties of Pure Nash Equilibria in Budget Games},

school = {University of Paderborn},

year = {2016}

}


Eugen Dimant:


**Economics of Corruption and Crime: An Interdisciplinary Approach to Behavioral Ethics** PhD thesis, Paderborn University

**(2016)**

@phdthesis{Dimant-phdthesis-2016,

author = {Eugen Dimant},

title = {Economics of Corruption and Crime: An Interdisciplinary Approach to Behavioral Ethics},

school = {Paderborn University},

year = {2016}

}


Sonja Brangewitz, Simon Hoof:


**Economic Aspects of Service Composition: Price Negotiations and Quality Investments** In Marco Aiello, Einar Broch Johnsen, Schahram Dustdar, and Ilche Georgievski (eds.): Service-Oriented and Cloud Computing: 5th IFIP WG 2.14 European Conference, ESOCC 2016, Vienna, Austria, September 5-7, 2016, Proceedings. Springer International Publishing (Cham), pp. 201-215

**(2016)**

We analyse the economic interaction on the market for composed services. Typically, as providers of composed services, intermediaries interact on the sales side with users and on the procurement side with providers of single services. Thus, in how far a user request can be met often crucially depends on the prices and qualities of the different single services used in the composition. We study an intermediary who purchases two complementary single services and combines them. The prices paid to the service providers are determined by simultaneous multilateral Nash bargaining between the intermediary and the respective service provider. By using a function with constant elasticity of substitution (CES) to determine the quality of the composed service, we allow for complementary as well as substitutable degrees of the providers' service qualities. We investigate quality investments of service providers and the corresponding evolution of the single service quality within a differential game framework.

@inproceedings{SBSH16,

author = {Sonja Brangewitz and Simon Hoof},

title = {Economic Aspects of Service Composition: Price Negotiations and Quality Investments},

booktitle = {Service-Oriented and Cloud Computing: 5th IFIP WG 2.14 European Conference, ESOCC 2016, Vienna, Austria, September 5-7, 2016, Proceedings},

year = {2016},

editor = {Marco Aiello and Einar Broch Johnsen and Schahram Dustdar and Ilche Georgievski},

pages = {201--215},

publisher = {Springer International Publishing},

address = {Cham},

abstract = {We analyse the economic interaction on the market for composed services. Typically, as providers of composed services, intermediaries interact on the sales side with users and on the procurement side with providers of single services. Thus, in how far a user request can be met often crucially depends on the prices and qualities of the different single services used in the composition. We study an intermediary who purchases two complementary single services and combines them. The prices paid to the service providers are determined by simultaneous multilateral Nash bargaining between the intermediary and the respective service provider. By using a function with constant elasticity of substitution (CES) to determine the quality of the composed service, we allow for complementary as well as substitutable degrees of the providers' service qualities. We investigate quality investments of service providers and the corresponding evolution of the single service quality within a differential game framework. }

}

}

Julia Funke:


**Die Wirkung von monetären Incentives auf das Bewertungsverhalten von Kunden am Beispiel von meineLinse.de** Bachelor thesis, Paderborn University

**(2016)**

@misc{funke_meinelinse,

author = {Julia Funke},

title = {Die Wirkung von monet{\"a}ren Incentives auf das Bewertungsverhalten von Kunden am Beispiel von meineLinse.de},

year = {2016},

note = {Bachelor thesis, Paderborn University}

}


Philipp Herrmann, Dominik Gutt, Mohammad Rahman:


**Crowd-Driven Competitive Intelligence: Understanding the Relationship between Local Market Structure and Online Rating Distributions** **(2016)** (contribution at: INFORMS Annual Meeting, Nashville, USA)

Crowdsourced information, such as, online ratings, are increasingly viewed as a critical source for understanding local market dynamics. A key aspect of utilizing online ratings to derive competitive market intelligence is to delineate the systematic relationship between local market structure and distributional properties of online ratings. As one of the earliest papers in this stream, combining demographic, population, and restaurant review data from Yelp.com for 372 isolated markets in the U.S., our empirical findings suggest that an increase in competition leads to a broader range of ratings and to a decrease in the average rating in a market. These effects are particularly pronounced when the analysis is limited to specific restaurant types where there are fewer opportunities for horizontal differentiation. To gain richer insights into the empirical results, we adopt the classical theoretical lenses of an oligopoly where firms vertically differentiate their quality offerings in the presence of heterogeneous consumers and marginal costs that increase quadratically in quality. Moreover, we present evidence in support of both the internal and external validity of Yelp’s crowdsourced online ratings, validating the role online rating distributions can play in helping scholars and managers understand competitive dynamics in local markets.

@misc{yelp_informs,

author = {Philipp Herrmann and Dominik Gutt and Mohammad Rahman},

title = {Crowd-Driven Competitive Intelligence: Understanding the Relationship between Local Market Structure and Online Rating Distributions},

year = {2016},

note = {contribution at: INFORMS Annual Meeting, Nashville, USA},

abstract = {Crowdsourced information, such as, online ratings, are increasingly viewed as a critical source for understanding local market dynamics. A key aspect of utilizing online ratings to derive competitive market intelligence is to delineate the systematic relationship between local market structure and distributional properties of online ratings. As one of the earliest papers in this stream, combining demographic, population, and restaurant review data from Yelp.com for 372 isolated markets in the U.S., our empirical findings suggest that an increase in competition leads to a broader range of ratings and to a decrease in the average rating in a market. These effects are particularly pronounced when the analysis is limited to specific restaurant types where there are fewer opportunities for horizontal differentiation. To gain richer insights into the empirical results, we adopt the classical theoretical lenses of an oligopoly where firms vertically differentiate their quality offerings in the presence of heterogeneous consumers and marginal costs that increase quadratically in quality. Moreover, we present evidence in support of both the internal and external validity of Yelp’s crowdsourced online ratings, validating the role online rating distributions can play in helping scholars and managers understand competitive dynamics in local markets.}

}


Philipp Herrmann, Dominik Gutt, Mohammad Rahman:


**Crowd-Driven Competitive Intelligence: Understanding the Relationship between Local Market Structure and Online Rating Distributions** **(2016)** (contribution at: NBER Summer Institute on the Economics of Information Technology and Digitization, Cambridge, MA)

Crowdsourced information, such as, online ratings, are increasingly viewed as a critical source for understanding local market dynamics. A key aspect of utilizing online ratings to derive competitive market intelligence is to delineate the systematic relationship between local market structure and distributional properties of online ratings. As one of the earliest papers in this stream, combining demographic, population, and restaurant review data from Yelp.com for 372 isolated markets in the U.S., our empirical findings suggest that an increase in competition leads to a broader range of ratings and to a decrease in the average rating in a market. These effects are particularly pronounced when the analysis is limited to specific restaurant types where there are fewer opportunities for horizontal differentiation. To gain richer insights into the empirical results, we adopt the classical theoretical lenses of an oligopoly where firms vertically differentiate their quality offerings in the presence of heterogeneous consumers and marginal costs that increase quadratically in quality. Moreover, we present evidence in support of both the internal and external validity of Yelp’s crowdsourced online ratings, validating the role online rating distributions can play in helping scholars and managers understand competitive dynamics in local markets.

@misc{yelp_nber,

author = {Philipp Herrmann and Dominik Gutt and Mohammad Rahman},

title = {Crowd-Driven Competitive Intelligence: Understanding the Relationship between Local Market Structure and Online Rating Distributions},

year = {2016},

note = {contribution at: NBER Summer Institute on the Economics of Information Technology and Digitization, Cambridge, MA},

abstract = {Crowdsourced information, such as, online ratings, are increasingly viewed as a critical source for understanding local market dynamics. A key aspect of utilizing online ratings to derive competitive market intelligence is to delineate the systematic relationship between local market structure and distributional properties of online ratings. As one of the earliest papers in this stream, combining demographic, population, and restaurant review data from Yelp.com for 372 isolated markets in the U.S., our empirical findings suggest that an increase in competition leads to a broader range of ratings and to a decrease in the average rating in a market. These effects are particularly pronounced when the analysis is limited to specific restaurant types where there are fewer opportunities for horizontal differentiation. To gain richer insights into the empirical results, we adopt the classical theoretical lenses of an oligopoly where firms vertically differentiate their quality offerings in the presence of heterogeneous consumers and marginal costs that increase quadratically in quality. Moreover, we present evidence in support of both the internal and external validity of Yelp’s crowdsourced online ratings, validating the role online rating distributions can play in helping scholars and managers understand competitive dynamics in local markets.}

}


Matthias Feldotto, Lennart Leder, Alexander Skopalik:


**Congestion Games with Mixed Objectives** In Proceedings of the 10th Annual International Conference on Combinatorial Optimization and Applications (COCOA). Springer, LNCS, vol. 10043, pp. 655-669

**(2016)**

We study a new class of games which generalizes congestion games and its bottleneck variant. We introduce congestion games with mixed objectives to model network scenarios in which players seek to optimize for latency and bandwidths alike. We characterize the existence of pure Nash equilibria (PNE) and the convergence of improvement dynamics. For games that do not possess PNE we give bounds on the approximation ratio of approximate pure Nash equilibria.

@inproceedings{FLS16,

author = {Matthias Feldotto and Lennart Leder and Alexander Skopalik},

title = {Congestion Games with Mixed Objectives},

booktitle = {Proceedings of the 10th Annual International Conference on Combinatorial Optimization and Applications (COCOA)},

year = {2016},

pages = {655--669},

publisher = {Springer},

abstract = {We study a new class of games which generalizes congestion games and its bottleneck variant. We introduce congestion games with mixed objectives to model network scenarios in which players seek to optimize for latency and bandwidths alike. We characterize the existence of pure Nash equilibria (PNE) and the convergence of improvement dynamics. For games that do not possess PNE we give bounds on the approximation ratio of approximate pure Nash equilibria.},

series = {LNCS},

volume = {10043}

}


Lennart Leder:


**Congestion Games with Mixed Objectives** Master's thesis, Paderborn University

**(2016)**

@mastersthesis{Leder16,

author = {Lennart Leder},

title = {Congestion Games with Mixed Objectives},

school = {Paderborn University},

year = {2016}

}


Sonja Brangewitz, Jochen Manegold:


**Competition of Intermediaries in a Differentiated Duopoly** In *Theoretical Economics Letters*, vol. 6, pp. 1341-1362. **(2016)**

On an intermediate goods market with asymmetric production technologies as well as vertical and horizontal product differentiation we analyze the influence of simultaneous competition for resources and customers. The intermediaries face either price or quantity competition on the output market and a monopolistic, strategically acting supplier on the input market. We find that there exist quality and productivity differences such that for quantity competition only one intermediary is willing to procure inputs from the input supplier, while for price competition both intermediaries are willing to purchase inputs. Moreover, the well-known welfare advantage of price competition can in general be no longer confirmed in our model with an endogenous input market and asymmetric intermediaries.

@article{SBJM2016,

author = {Sonja Brangewitz and Jochen Manegold},

title = {Competition of Intermediaries in a Differentiated Duopoly},

journal = {Theoretical Economics Letters},

year = {2016},

volume = {6},

pages = {1341--1362},

abstract = {On an intermediate goods market with asymmetric production technologies as well as vertical and horizontal product differentiation we analyze the influence of simultaneous competition for resources and customers. The intermediaries face either price or quantity competition on the output market and a monopolistic, strategically acting supplier on the input market. We find that there exist quality and productivity differences such that for quantity competition only one intermediary is willing to procure inputs from the input supplier, while for price competition both intermediaries are willing to purchase inputs. Moreover, the well-known welfare advantage of price competition can in general be no longer confirmed in our model with an endogenous input market and asymmetric intermediaries.}

}


Jochen Manegold:


**Competition in Markets with Intermediaries** PhD thesis, University of Paderborn

**(2016)**

@phdthesis{PhDManegold,

author = {Jochen Manegold},

title = {Competition in Markets with Intermediaries},

school = {University of Paderborn},

year = {2016}

}


Tobias von Rechenberg, Dominik Gutt:

In Proceedings of the Twenty-Fourth European Conference on Information Systems (ECIS), Istanbul.

[Show Abstract]

**Challenge Accepted! - The Impact of Goal Achievement on Subsequent User Effort and Implications of a Goal's Difficulty**In Proceedings of the Twenty-Fourth European Conference on Information Systems (ECIS), Istanbul.

**(2016)**[Show Abstract]

We empirically investigate the impact of successful goal achievement on future effort to attain the next goal in a recurring goal framework. We use data from a popular German Question & Answer community where goals are represented in the form of badges. In particular, our analysis of this data hinges on the fact that in this Question & Answer community, badges in a hierarchical badge system are increasingly challenging to attain up to a certain badge. After this badge, the difficulty level suddenly drops and remains constant up to the last badge in the hierarchy. Our findings indicate that after successful badge achievement users increase their subsequent effort to attain the next badge, but only as long as badges represent a challenge to the user. According to our analysis, we identify self-learning to be the key driver of this behavior.

[Show BibTeX] @inproceedings{ChallengeAccepted,

author = {Tobias von Rechenberg AND Dominik Gutt},

title = {Challenge Accepted! - The Impact of Goal Achievement on Subsequent User Effort and Implications of a Goal's Difficulty},

booktitle = {Proceedings of the Twenty-Fourth European Conference on Information Systems (ECIS), Istanbul},

year = {2016},

abstract = {We empirically investigate the impact of successful goal achievement on future effort to attain the next goal in a recurring goal framework. We use data from a popular German Question & Answer community where goals are represented in the form of badges. In particular, our analysis of this data hinges on the fact that in this Question & Answer community, badges in a hierarchical badge system are increasingly challenging to attain up to a certain badge. After this badge, the difficulty level suddenly drops and remains constant up to the last badge in the hierarchy. Our findings indicate that after successful badge achievement users increase their subsequent effort to attain the next badge, but only as long as badges represent a challenge to the user. According to our analysis, we identify self-learning to be the key driver of this behavior.}

}


Andreas Cord-Landwehr, Matthias Fischer, Daniel Jung, Friedhelm Meyer auf der Heide:

In Proceedings of the 28th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA). ACM, pp. 301-312

[Show Abstract]

**Asymptotically Optimal Gathering on a Grid**In Proceedings of the 28th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA). ACM, pp. 301-312

**(2016)**[Show Abstract]

In this paper, we solve the local gathering problem of a swarm of n indistinguishable, point-shaped robots on a two-dimensional grid in asymptotically optimal time O(n) in the fully synchronous FSYNC time model. Given an arbitrarily distributed (yet connected) swarm of robots, the gathering problem on the grid is to locate all robots within a 2 x 2-sized area that is not known beforehand. Two robots are connected if they are vertical or horizontal neighbors on the grid. The locality constraint means that no global control, no compass, no global communication and only local vision is available; hence, a robot can see its grid neighbors only up to a constant L1-distance, which also limits its movements. A robot can move to one of its eight neighboring grid cells and if two or more robots move to the same location they are merged to be only one robot. The locality constraint is the significant challenging issue here, since robot movements must not harm the (only globally checkable) swarm connectivity. For solving the gathering problem, we provide a synchronous algorithm - executed by every robot - which ensures that robots merge without breaking the swarm connectivity. In our model, robots can obtain a special state, which marks such a robot to be performing specific connectivity preserving movements in order to allow later merge operations of the swarm. Compared to the grid, for gathering in the Euclidean plane for the same robot and time model the best known upper bound is O(n²).

[Show BibTeX] @inproceedings{CLFJMadH16,

author = {Andreas Cord-Landwehr AND Matthias Fischer AND Daniel Jung AND Friedhelm Meyer auf der Heide},

title = {Asymptotically Optimal Gathering on a Grid},

booktitle = {Proceedings of the 28th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA)},

year = {2016},

pages = {301-312},

publisher = {ACM},

abstract = {In this paper, we solve the local gathering problem of a swarm of n indistinguishable, point-shaped robots on a two-dimensional grid in asymptotically optimal time O(n) in the fully synchronous FSYNC time model. Given an arbitrarily distributed (yet connected) swarm of robots, the gathering problem on the grid is to locate all robots within a 2 x 2-sized area that is not known beforehand. Two robots are connected if they are vertical or horizontal neighbors on the grid. The locality constraint means that no global control, no compass, no global communication and only local vision is available; hence, a robot can see its grid neighbors only up to a constant L1-distance, which also limits its movements. A robot can move to one of its eight neighboring grid cells and if two or more robots move to the same location they are merged to be only one robot. The locality constraint is the significant challenging issue here, since robot movements must not harm the (only globally checkable) swarm connectivity. For solving the gathering problem, we provide a synchronous algorithm -- executed by every robot -- which ensures that robots merge without breaking the swarm connectivity. In our model, robots can obtain a special state, which marks such a robot to be performing specific connectivity preserving movements in order to allow later merge operations of the swarm. Compared to the grid, for gathering in the Euclidean plane for the same robot and time model the best known upper bound is O(n²).}

}
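The merging mechanism the abstract describes can be illustrated with a toy sketch. Note this is not the paper's algorithm: the paper's point is achieving O(n) gathering with purely local vision, whereas the sketch below assumes global knowledge of the swarm's minimal corner. Every robot moves one grid step toward that corner each synchronous round, and robots landing on the same cell merge.

```python
# Illustrative only: global-knowledge gathering on a grid. All robots move
# simultaneously one unit toward the swarm's minimal corner; robots that
# land on the same cell merge (set semantics). The paper achieves gathering
# with local vision only, which this sketch does NOT model.

def gather(positions):
    """positions: set of (x, y) grid cells occupied by robots.
    Returns the number of synchronous rounds until one robot remains."""
    rounds = 0
    while len(positions) > 1:
        tx = min(x for x, _ in positions)  # minimal corner of the swarm
        ty = min(y for _, y in positions)
        # each robot steps one unit toward (tx, ty); duplicates collapse
        positions = {(x - (x > tx), y - (y > ty)) for x, y in positions}
        rounds += 1
    return rounds

print(gather({(0, 0), (0, 1), (0, 2), (3, 2)}))  # -> 3
```

Since the target corner never moves away, every coordinate decreases monotonically toward it, so the sketch terminates after at most the swarm's larger side length in rounds.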


Matthias Keller:

PhD thesis, Paderborn University

[Show BibTeX]

**Application Deployment at Distributed Clouds**PhD thesis, Paderborn University

**(2016)**[Show BibTeX]

@phdthesis{Keller-phdthesis-2016,

author = {Matthias Keller},

title = {Application Deployment at Distributed Clouds},

school = {Paderborn University},

year = {2016}

}


**2015** (39)

Rene Fahr, Behnud Djawadi:

In *Journal of Economic Psychology*, vol. 48, pp. 48-59.

[Show Abstract]

**“…and they are really lying”: Clean Evidence on the Pervasiveness of Cheating in Professional Contexts from a Field Experiment**In *Journal of Economic Psychology*, vol. 48, pp. 48-59.

**(2015)**[Show Abstract]

We investigate the pervasiveness of lying in professional contexts such as insurance fraud, tax evasion and untrue job applications. We argue that lying in professional contexts shares three characterizing features: (1) the gain from the dishonest behavior is uncertain, (2) the harm that lying may cause to the other party is only indirect and (3) the lies are indirect ones, made by action or by written statement. Conducted as a field experiment with a heterogeneous group of participants during a University “Open House Day”, our “gumball-machine experiment” provides field evidence on how preferences for lying are shaped in situations typically found in professional contexts, which we consider to be particularly prone to lying behavior compared to other contexts. As a key innovation, our experimental design allows measuring exact levels of cheating behavior under anonymous conditions. We find clean evidence that cheating is prevalent across all subgroups and that more than 32% of the population cheats for their own gain. However, an analysis of the cheating rates with respect to highest educational degree and professional status reveals that students cheat more than non-students. This finding warrants caution when generalizing laboratory findings with student subjects about the prevalence of cheating in the population.

[Show BibTeX] @article{Fahr/Djawadi2015,

author = {Rene Fahr AND Behnud Djawadi},

title = {“…and they are really lying”: Clean Evidence on the Pervasiveness of Cheating in Professional Contexts from a Field Experiment.},

journal = {Journal of Economic Psychology},

year = {2015},

volume = {48},

pages = {48-59},

abstract = {We investigate the pervasiveness of lying in professional contexts such as insurance fraud, tax evasion and untrue job applications. We argue that lying in professional contexts shares three characterizing features: (1) the gain from the dishonest behavior is uncertain, (2) the harm that lying may cause to the other party is only indirect and (3) the lies are indirect ones, made by action or by written statement. Conducted as a field experiment with a heterogeneous group of participants during a University “Open House Day”, our “gumball-machine experiment” provides field evidence on how preferences for lying are shaped in situations typically found in professional contexts, which we consider to be particularly prone to lying behavior compared to other contexts. As a key innovation, our experimental design allows measuring exact levels of cheating behavior under anonymous conditions. We find clean evidence that cheating is prevalent across all subgroups and that more than 32% of the population cheats for their own gain. However, an analysis of the cheating rates with respect to highest educational degree and professional status reveals that students cheat more than non-students. This finding warrants caution when generalizing laboratory findings with student subjects about the prevalence of cheating in the population.}

}


Burkhard Monien, Marios Mavronicolas, Klaus Wagner:

In the Festschrift "Algorithms, Probability, Networks, and Games: Scientific Papers and Essays Dedicated to Paul G. Spirakis on the Occasion of His 60th Birthday". Springer, LNCS, vol. 9295, pp. 49-86

[Show Abstract]

**Weighted Boolean Formula Games**In the Festschrift "Algorithms, Probability, Networks, and Games: Scientific Papers and Essays Dedicated to Paul G. Spirakis on the Occasion of His 60th Birthday". Springer, LNCS, vol. 9295, pp. 49-86

**(2015)**[Show Abstract]

We introduce weighted boolean formula games (WBFG) as a new class of succinct games. Each player has a set of boolean formulas she wants to get satisfied; the formulas involve a ground set of boolean variables each of which is controlled by some player. The payoff of a player is a weighted sum of the values of her formulas. We consider both pure equilibria and their refinement of payoff-dominant equilibria [34], where every player is no worse-off than in any other pure equilibrium. We present both structural and complexity results:

We consider mutual weighted boolean formula games (MWBFG), a subclass of WBFG making a natural mutuality assumption on the formulas of players. We present a very simple exact potential for MWBFG. We establish a polynomial monomorphism from certain classes of weighted congestion games to subclasses of WBFG and MWBFG, respectively, indicating their rich structure.

We present a collection of complexity results about decision (and search) problems for both pure and payoff-dominant equilibria in WBFG. The precise complexities depend crucially on five parameters: (i) the number of players; (ii) the number of variables per player; (iii) the number of formulas per player; (iv) the weights in the payoff functions (whether identical or not), and (v) the syntax of the formulas. These results imply that, unless the polynomial hierarchy collapses, decision (and search) problems for payoff-dominant equilibria are harder than for pure equilibria.

[Show BibTeX] @inproceedings{MM2014,

author = {Burkhard Monien AND Marios Mavronicolas AND Klaus Wagner},

title = {Weighted Boolean Formula Games},

booktitle = {Algorithms, Probability, Networks, and Games: Scientific Papers and Essays Dedicated to Paul G. Spirakis on the Occasion of His 60th Birthday},

year = {2015},

pages = {49-86},

publisher = {Springer},

volume = {9295},

abstract = {We introduce weighted boolean formula games (WBFG) as a new class of succinct games. Each player has a set of boolean formulas she wants to get satisfied; the formulas involve a ground set of boolean variables each of which is controlled by some player. The payoff of a player is a weighted sum of the values of her formulas. We consider both pure equilibria and their refinement of payoff-dominant equilibria [34], where every player is no worse-off than in any other pure equilibrium. We present both structural and complexity results: We consider mutual weighted boolean formula games (MWBFG), a subclass of WBFG making a natural mutuality assumption on the formulas of players. We present a very simple exact potential for MWBFG. We establish a polynomial monomorphism from certain classes of weighted congestion games to subclasses of WBFG and MWBFG, respectively, indicating their rich structure. We present a collection of complexity results about decision (and search) problems for both pure and payoff-dominant equilibria in WBFG. The precise complexities depend crucially on five parameters: (i) the number of players; (ii) the number of variables per player; (iii) the number of formulas per player; (iv) the weights in the payoff functions (whether identical or not), and (v) the syntax of the formulas. These results imply that, unless the polynomial hierarchy collapses, decision (and search) problems for payoff-dominant equilibria are harder than for pure equilibria.},

series = {LNCS}

}
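To make the payoff definition concrete: a player's payoff is a weighted sum of the truth values of her formulas under the global assignment. A minimal sketch follows; the encoding of formulas as Python predicates is our illustration, not taken from the paper.

```python
# Minimal sketch of the WBFG payoff computation: a player's payoff is the
# weighted sum of the truth values of her formulas under one global truth
# assignment. Formulas are modeled here as predicates (illustrative choice).

def payoff(formulas, weights, assignment):
    """Weighted sum of the truth values of one player's formulas."""
    return sum(w * f(assignment) for f, w in zip(formulas, weights))

# Two ground variables x and y; an assignment maps variable names to booleans.
formulas = [
    lambda a: a["x"] and not a["y"],  # this player's first formula
    lambda a: a["x"] or a["y"],       # this player's second formula
]
weights = [2.0, 1.0]

print(payoff(formulas, weights, {"x": True, "y": False}))  # -> 3.0
```

In a pure equilibrium, no player could raise this value by flipping only the variables she controls.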


Shouwei Li, Alexander Mäcker, Christine Markarian, Friedhelm Meyer auf der Heide, Sören Riechers:

In Proceedings of the 21st Annual International Computing and Combinatorics Conference (COCOON). Springer, Lecture Notes in Computer Science, vol. 9198, pp. 277-288

[Show Abstract]

**Towards Flexible Demands in Online Leasing Problems**In Proceedings of the 21st Annual International Computing and Combinatorics Conference (COCOON). Springer, Lecture Notes in Computer Science, vol. 9198, pp. 277-288

**(2015)**[Show Abstract]

We consider online leasing problems in which demands arrive over time and need to be served by leasing resources. We introduce a new model for these problems such that a resource can be leased for K different durations, each incurring a different cost (longer leases cost less per time unit). Each demand i can be served anytime between its arrival a_i and its deadline a_i + d_i by a leased resource. The objective is to meet all deadlines while minimizing the total leasing costs. This model is a natural generalization of Meyerson's Parking Permit Problem (FOCS 2005), in which d_i = 0 for all i. We propose an online algorithm that is Θ(K + d_max/l_min)-competitive, where d_max and l_min denote the largest d_i and the shortest available lease length, respectively. We also extend the Set Cover Leasing problem by deadlines and give a competitive online algorithm which also improves on existing solutions for the original Set Cover Leasing problem.

[Show BibTeX] @inproceedings{geilerscheiss,

author = {Shouwei Li AND Alexander M{\"a}cker AND Christine Markarian AND Friedhelm Meyer auf der Heide AND S{\"o}ren Riechers},

title = {Towards Flexible Demands in Online Leasing Problems},

booktitle = {Proceedings of the 21st Annual International Computing and Combinatorics Conference (COCOON)},

year = {2015},

pages = {277--288},

publisher = {Springer},

volume = {9198},

abstract = {We consider online leasing problems in which demands arrive over time and need to be served by leasing resources. We introduce a new model for these problems such that a resource can be leased for K different durations, each incurring a different cost (longer leases cost less per time unit). Each demand i can be served anytime between its arrival a_i and its deadline a_i + d_i by a leased resource. The objective is to meet all deadlines while minimizing the total leasing costs. This model is a natural generalization of Meyerson's Parking Permit Problem (FOCS 2005), in which d_i = 0 for all i. We propose an online algorithm that is Θ(K + d_max/l_min)-competitive, where d_max and l_min denote the largest d_i and the shortest available lease length, respectively. We also extend the Set Cover Leasing problem by deadlines and give a competitive online algorithm which also improves on existing solutions for the original Set Cover Leasing problem.},

series = {Lecture Notes in Computer Science}

}
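To illustrate the lease model, here is a small offline sketch in the spirit of the Parking Permit Problem that the paper generalizes. The lease table and the restriction to a single lease type per solution are illustrative assumptions of ours; an optimal solution may mix lease types, and the paper's setting is online.

```python
# Illustrative offline sketch (not the paper's online algorithm): K lease
# types, each a (length, cost) pair, with longer leases cheaper per time
# unit. Given the days on which the resource is needed, we cover them
# greedily with leases of one type and take the best type overall.

LEASES = [(1, 4), (7, 14), (30, 40)]  # hypothetical (length in days, cost)

def single_type_cost(days, length, cost):
    """Cost of covering all `days` with leases of one type, starting a
    new lease at each first uncovered day."""
    total, covered_until = 0, None
    for d in sorted(days):
        if covered_until is None or d >= covered_until:
            total += cost
            covered_until = d + length
    return total

def best_single_type(days):
    """Cheapest cover restricted to a single lease type."""
    return min(single_type_cost(days, length, cost) for length, cost in LEASES)

print(best_single_type([0, 1, 2, 3, 10]))  # -> 20 (five day-leases beat one week-lease plus more)
```

The paper's model adds per-demand slack d_i, so a demand's service slot may be shifted within [a_i, a_i + d_i] to share a lease with other demands, which is where the d_max/l_min term in the competitive ratio comes from.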


Christian Scheideler, Alexander Setzer, Thim Strothmann:

In Proceedings of the 19th International Conference on Principles of Distributed Systems (OPODIS). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, Leibniz International Proceedings in Informatics (LIPIcs), vol. 46

[Show Abstract]

**Towards Establishing Monotonic Searchability in Self-Stabilizing Data Structures**In Proceedings of the 19th International Conference on Principles of Distributed Systems (OPODIS). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, Leibniz International Proceedings in Informatics (LIPIcs), vol. 46

**(2015)**[Show Abstract]

Distributed applications are commonly based on overlay networks interconnecting their sites so that they can exchange information. For these overlay networks to preserve their functionality, they should be able to recover from various problems like membership changes or faults. Various self-stabilizing overlay networks have already been proposed in recent years, which have the advantage of being able to recover from any illegal state, but none of these networks can give any guarantees on its functionality while the recovery process is going on. We initiate research on overlay networks that are not only self-stabilizing but that also ensure that searchability is maintained while the recovery process is going on, as long as there are no corrupted messages in the system. More precisely, once a search message from node u to another node v is successfully delivered, all future search messages from u to v succeed as well. We call this property monotonic searchability. We show that in general it is impossible to provide monotonic searchability if corrupted messages are present in the system, which justifies the restriction to system states without corrupted messages. Furthermore, we provide a self-stabilizing protocol for the line for which we can also show monotonic searchability. It turns out that even for the line it is non-trivial to achieve this property. Additionally, we extend our protocol to deal with node departures in terms of the Finite Departure Problem of Foreback et al. (SSS 2014). This makes our protocol even capable of handling node dynamics.

[Show BibTeX] @inproceedings{opodis15sss,

author = {Christian Scheideler AND Alexander Setzer AND Thim Strothmann},

title = {Towards Establishing Monotonic Searchability in Self-Stabilizing Data Structures},

booktitle = {Proceedings of the 19th International Conference on Principles of Distributed Systems (OPODIS)},

year = {2015},

publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},

volume = {46},

abstract = {Distributed applications are commonly based on overlay networks interconnecting their sites so that they can exchange information. For these overlay networks to preserve their functionality, they should be able to recover from various problems like membership changes or faults. Various self-stabilizing overlay networks have already been proposed in recent years, which have the advantage of being able to recover from any illegal state, but none of these networks can give any guarantees on its functionality while the recovery process is going on. We initiate research on overlay networks that are not only self-stabilizing but that also ensure that searchability is maintained while the recovery process is going on, as long as there are no corrupted messages in the system. More precisely, once a search message from node u to another node v is successfully delivered, all future search messages from u to v succeed as well. We call this property monotonic searchability. We show that in general it is impossible to provide monotonic searchability if corrupted messages are present in the system, which justifies the restriction to system states without corrupted messages. Furthermore, we provide a self-stabilizing protocol for the line for which we can also show monotonic searchability. It turns out that even for the line it is non-trivial to achieve this property. Additionally, we extend our protocol to deal with node departures in terms of the Finite Departure Problem of Foreback et al. (SSS 2014). This makes our protocol even capable of handling node dynamics.},

series = {Leibniz International Proceedings in Informatics (LIPIcs)}

}
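The monotonic-searchability property has a simple operational reading: once a search from u to v succeeds, no later search from u to v may fail. A small sketch that checks this on a trace of search outcomes (purely illustrative; the paper proves the property for its protocol rather than testing traces):

```python
# Illustrative checker for monotonic searchability on a recorded trace of
# search outcomes. The trace encoding is ours, not from the paper.

def is_monotonic(trace):
    """trace: chronological list of (source, target, succeeded) tuples.
    Returns True iff, once a (source, target) search succeeds, every
    later search for the same pair succeeds as well."""
    succeeded = set()
    for u, v, ok in trace:
        if ok:
            succeeded.add((u, v))
        elif (u, v) in succeeded:
            return False  # a search failed after an earlier success
    return True

print(is_monotonic([("a", "b", False), ("a", "b", True), ("a", "b", True)]))  # -> True
print(is_monotonic([("a", "b", True), ("a", "b", False)]))                    # -> False
```

Note that failures before the first success are allowed, matching the idea that guarantees only kick in once the recovery has progressed far enough for a pair.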


Andreas Koutsopoulos, Christian Scheideler, Thim Strothmann:

**Towards a Universal Approach for the Finite Departure Problem in Overlay Networks**

In Proceedings of the 17th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS). Springer, Lecture Notes in Computer Science, vol. 9212, pp. 201-216 **(2015)**

A fundamental problem for overlay networks is to safely exclude leaving nodes, i.e., the nodes requesting to leave the overlay network are excluded from it without affecting its connectivity. There are a number of studies for safe node exclusion if the overlay is in a well-defined state, but almost no formal results are known for the case in which the overlay network is in an arbitrary initial state, i.e., when looking for a self-stabilizing solution for excluding leaving nodes. We study this problem in two variants: the Finite Departure Problem (FDP) and the Finite Sleep Problem (FSP). In the FDP the leaving nodes have to irrevocably decide when it is safe to leave the network, whereas in the FSP, this leaving decision does not have to be final: the nodes may resume computation when woken up by an incoming message. We are the first to present a self-stabilizing protocol for the FDP and the FSP that can be combined with a large class of overlay maintenance protocols so that these are then guaranteed to safely exclude leaving nodes from the system from any initial state while operating as specified for the staying nodes. In order to formally define the properties these overlay maintenance protocols have to satisfy, we identify four basic primitives for manipulating edges in an overlay network that might be of independent interest.

@inproceedings{universal-departure-sss,

author = {Andreas Koutsopoulos AND Christian Scheideler AND Thim Strothmann},

title = {Towards a Universal Approach for the Finite Departure Problem in Overlay Networks},

booktitle = {Proceedings of the 17th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS)},

year = {2015},

pages = {201-216},

publisher = {Springer},

abstract = {A fundamental problem for overlay networks is to safely exclude leaving nodes, i.e., the nodes requesting to leave the overlay network are excluded from it without affecting its connectivity. There are a number of studies for safe node exclusion if the overlay is in a well-defined state, but almost no formal results are known for the case in which the overlay network is in an arbitrary initial state, i.e., when looking for a self-stabilizing solution for excluding leaving nodes. We study this problem in two variants: the Finite Departure Problem (FDP) and the Finite Sleep Problem (FSP). In the FDP the leaving nodes have to irrevocably decide when it is safe to leave the network, whereas in the FSP, this leaving decision does not have to be final: the nodes may resume computation when woken up by an incoming message. We are the first to present a self-stabilizing protocol for the FDP and the FSP that can be combined with a large class of overlay maintenance protocols so that these are then guaranteed to safely exclude leaving nodes from the system from any initial state while operating as specified for the staying nodes. In order to formally define the properties these overlay maintenance protocols have to satisfy, we identify four basic primitives for manipulating edges in an overlay network that might be of independent interest.},

series = {Lecture Notes in Computer Science},

volume = {9212}

}


Thim Strothmann:

**The impact of communication patterns on distributed locally self-adjusting binary search trees**

In Proceedings of the 9th International Workshop on Algorithms and Computation (WALCOM). Springer, LNCS, vol. 8973, pp. 175-186 **(2015)**

This paper introduces the problem of communication pattern adaption for a distributed self-adjusting binary search tree. We propose a simple local algorithm, which is closely related to the nearly thirty-year-old idea of splay trees, and evaluate its adaption performance in the distributed scenario when different communication patterns are provided.

To do so, the process of self-adjustment is modeled similarly to a basic network creation game, in which the nodes want to communicate with only a certain subset of all nodes. We show that, in general, the game (i.e., the process of local adjustments) does not converge, and convergence is related to certain structures of the communication interests, which we call conflicts.

We classify conflicts and show that for two communication scenarios in which convergence is guaranteed, the self-adjusting tree performs well.

Furthermore, we investigate the different classes of conflicts separately and show that, for a certain class of conflicts, the performance of the tree network is asymptotically as good as the performance for converging instances. However, for the other conflict classes, a distributed self-adjusting binary search tree adapts poorly.
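The local restructuring step behind splay-style self-adjustment can be made concrete with a small sketch. The following Python toy (illustrative only; `Node`, `rotate_up`, and `depth` are our own names, not the paper's algorithm) implements the single rotation that moves a node one level toward the root while preserving search-tree order:

```python
class Node:
    """A binary search tree node with a parent pointer."""
    def __init__(self, key):
        self.key = key
        self.left = self.right = self.parent = None

def rotate_up(x):
    """One local rotation: move x one level up, preserving BST order."""
    p = x.parent
    if p is None:
        return                       # x is already the root
    g = p.parent
    if p.left is x:                  # right rotation around p
        p.left, x.right = x.right, p
        if p.left:
            p.left.parent = p
    else:                            # left rotation around p
        p.right, x.left = x.left, p
        if p.right:
            p.right.parent = p
    x.parent, p.parent = g, x
    if g:                            # reattach x where p used to hang
        if g.left is p:
            g.left = x
        else:
            g.right = x

def depth(x):
    """Number of edges from x up to the root."""
    d = 0
    while x.parent:
        x, d = x.parent, d + 1
    return d
```

Repeating such rotations until the accessed node reaches the root gives the simple rotate-to-root heuristic; splay trees refine it with double rotations, but both rely only on local pointer changes of this kind.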


@inproceedings{StrothmannWalcom2015,

author = {Thim Strothmann},

title = {The impact of communication patterns on distributed locally self-adjusting binary search trees},

booktitle = {Proceedings of the 9th International Workshop on Algorithms and Computation (WALCOM)},

year = {2015},

pages = {175--186},

publisher = {Springer},

abstract = {This paper introduces the problem of communication pattern adaption for a distributed self-adjusting binary search tree. We propose a simple local algorithm, which is closely related to the nearly thirty-year-old idea of splay trees and evaluate its adaption performance in the distributed scenario if different communication patterns are provided.To do so, the process of self-adjustment is modeled similarly to a basic network creation game, in which the nodes want to communicate with only a certain subset of all nodes. We show that, in general, the game (i.e., the process of local adjustments) does not converge, and convergence is related to certain structures of the communication interests, which we call conflicts.We classify conflicts and show that for two communication scenarios in which convergence is guaranteed, the self-adjusting tree performs well.Furthermore, we investigate the different classes of conflicts separately and show that, for a certain class of conflicts, the performance of the tree network is asymptotically as good as the performance for converging instances. However, for the other conflict classes, a distributed self-adjusting binary search tree adapts poorly.},

series = {LNCS},

volume = {8973}

}


Burkhard Monien, Marios Mavronicolas:

**The complexity of pure equilibria in mix-weighted congestion games on parallel links**

In *Information Processing Letters*, pp. 927-931. Elsevier **(2015)**

We revisit the simple class of weighted congestion games on parallel links [10], where each player has a non-negative weight and her cost on the link she chooses is the sum of the weights of all players choosing the link. We extend this class to mix-weighted congestion games on parallel links, where weights may as well be negative. For the resulting simple class, we study the complexity of deciding the existence of a pure equilibrium, where no player could unilaterally improve her cost by switching to another link.

We show that even for a single negative weight, this decision problem is strongly NP-complete when the number of links is part of the input; the problem is NP-complete already for two links. When the number of links is a fixed constant, we show, through a pseudopolynomial dynamic-programming algorithm, that the problem is not strongly NP-complete unless P = NP; the algorithm works for any number of negative weights.
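The model itself is compact enough to state in code. Below is a hedged sketch (our illustration of the game definition, not the paper's hardness reduction or dynamic program) that checks whether an assignment of players to parallel links is a pure equilibrium; with weights 2 and -1 on two links, no assignment passes, illustrating how even a single negative weight can destroy existence:

```python
def is_pure_equilibrium(weights, assignment, num_links):
    """Check a profile of a (mix-)weighted congestion game on parallel links.

    Player i has weight weights[i] and sits on link assignment[i]; her cost
    is the total weight on her link. The profile is a pure equilibrium iff
    no player can strictly lower her cost by switching links unilaterally.
    """
    load = [0.0] * num_links
    for w, link in zip(weights, assignment):
        load[link] += w
    for w, link in zip(weights, assignment):
        for other in range(num_links):
            # after the switch, player w pays load[other] + w (she joins it)
            if other != link and load[other] + w < load[link]:
                return False
    return True
```

For example, two players of weight 1 on two links are in equilibrium exactly when they use different links, whereas for weights 2 and -1 every one of the four assignments admits an improving switch, so no pure equilibrium exists.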


@article{MM2015,

author = {Burkhard Monien AND Marios Mavronicolas},

title = {The complexity of pure equilibria in mix-weighted congestion games on parallel links},

journal = {Information Processing Letters},

year = {2015},

pages = {927-931},

abstract = {We revisit the simple class of weighted congestion games on parallel links [10], where each player has a non-negative weight and her cost on the link she chooses is the sum of the weights of all players choosing the link. We extend this class to mix-weighted congestion games on parallel links, where weights may as well be negative. For the resulting simple class, we study the complexity of deciding the existence of a pure equilibrium, where no player could unilaterally improve her cost by switching to another link. We show that even for a single negative weight, this decision problem is strongly NP-complete when the number of links is part of the input; the problem is NP-complete already for two links. When the number of links is a fixed constant, we show, through a pseudopolynomial dynamic-programming algorithm, that the problem is not strongly NP-complete unless P = NP; the algorithm works for any number of negative weights.}

}


Arne Schwabe, Holger Karl:

**SynRace: Decentralized Load-Adaptive Multi-path Routing without Collecting Statistics**

In Proceedings of the 4th European Workshop on Software Defined Networks (EWSDN 2015). IEEE, pp. 37-42 **(2015)**

Multi-rooted trees are becoming the norm for modern data-center networks. In these networks, scalable flow routing is challenging owing to the vast number of flows. Current approaches either employ a central controller, which can have scalability issues, or use a scalable decentralized algorithm that considers only local information. In this paper we present a new decentralized approach to least-congested path routing in software-defined data center networks that has neither of these issues: By duplicating the initial (or SYN) packet of a flow and estimating the data rate of multiple flows in parallel, we exploit TCP's habit of filling buffers to find the least congested path. We show that our algorithm significantly improves flow completion time without the need for a central controller or specialized hardware.
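As a rough illustration of the probe-race idea (a toy delay model with made-up parameters, not the paper's implementation): duplicate the flow's initial packet onto every candidate path and let the copy that clears the queued backlog first select the path.

```python
def probe_race(paths, link_rate=1.0):
    """paths maps a path name to the queue backlog waiting at each hop.

    In this toy model, the duplicated SYN copy on a path arrives after
    draining the backlog ahead of it (backlog / link_rate) plus one time
    unit of forwarding per hop, so the least congested path wins the race.
    """
    def arrival_time(backlogs):
        return len(backlogs) + sum(b / link_rate for b in backlogs)
    return min(paths, key=lambda name: arrival_time(paths[name]))
```

In this model the winner is determined purely by end-to-end congestion observed in-band, with no statistics collected at a controller, which is the property the abstract emphasizes.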

@inproceedings{SchwabeKarl15,

author = {Arne Schwabe AND Holger Karl},

title = {SynRace: Decentralized Load-Adaptive Multi-path Routing without Collecting Statistics},

booktitle = {Proceedings of the 4th European Workshop on Software Defined Networks (EWSDN 2015)},

year = {2015},

pages = {37-42},

publisher = {IEEE},

abstract = {Multi-rooted trees are becoming the norm for modern data-center networks. In these networks, scalable flow routing is challenging owing to vast number of flows. Current approaches either employ a central controller that can have scalability issues or a scalable decentralized algorithm only considering local information. In this paper we present a new decentralized approach to least-congested path routing in software-defined data center networks that has neither of these issues: By duplicating the initial (or SYN) packet of a flow and estimating the data rate of multiple flows in parallel, we exploit TCP’s habit to fill buffers to find the least congested path. We show that our algorithm significantly improves flow completion time without the need for a central controller or specialized hardware.}

}


Sonja Brangewitz, Claus-Jochen Haake, Philipp Möhlmeier:

**Strategic Formation of Customer Relationship Networks**

Techreport UPB **(2015)**

We analyze the stability of networks when two intermediaries strategically form costly links to customers. We interpret these links as customer relationships that enable trade to sell a product. Equilibrium prices and equilibrium quantities on the output as well as on the input market are determined endogenously for a given network of customer relationships. We investigate to what extent the substitutability of the intermediaries' products and the costs of link formation influence the intermediaries' equilibrium profits and thus have an impact on the incentives to strategically form relationships to customers. For networks with three customers we characterize locally stable networks; in particular, existence is guaranteed for any degree of substitutability. Moreover, for the special cases of perfect complements, independent products and perfect substitutes, local stability coincides with the stronger concept of Nash stability. Additionally, for networks with n customers we analyze stability regions for selected networks and determine their limits when n goes to infinity. It turns out that the shape of the stability regions for those networks does not significantly change compared to a setting with a small number of customers.

@techreport{SBPHCJH2015a,

author = {Sonja Brangewitz AND Claus-Jochen Haake AND Philipp M{\"o}hlmeier},

title = {Strategic Formation of Customer Relationship Networks},

year = {2015},

type = {Techreport UPB},

abstract = {We analyze the stability of networks when two intermediaries strategically form costly links to customers. We interpret these links as customer relationships that enable trade to sell a product. Equilibrium prices and equilibrium quantities on the output as well as on the input market are determined endogenously for a given network of customer relationships. We investigate in how far the substitutability of the intermediaries' products and the costs of link formation influence the intermediaries' equilibrium profits and thus have an impact on the incentives to strategically form relationships to customers. For networks with three customers we characterize locally stable networks, in particular existence is guaranteed for any degree of substitutability. Moreover for the special cases of perfect complements, independent products and perfect substitutes, local stability coincides with the stronger concept of Nash stability. Additionally, for networks with n customers we analyze stability regions for selected networks and determine their limits when n goes to infinity. It turns out that the shape of the stability regions for those networks does not significantly change compared to a setting with a small number of customers. }

}


Karlson Pfannschmidt:

**Solving the aggregated bandits problem**

Master's thesis, Paderborn University **(2015)**

@mastersthesis{Pfannschmidt16,

author = {Karlson Pfannschmidt},

title = {Solving the aggregated bandits problem},

school = {Paderborn University},

year = {2015}

}


Martin Dräxler, Johannes Blobel, Philipp Dreimann, Stefan Valentin, Holger Karl:

**SmarterPhones: Anticipatory Download Scheduling for Wireless Video Streaming**

In Proceedings of the 2nd International Conference on Networked Systems (NetSys). IEEE, pp. 1-8 **(2015)**

Video streaming is in high demand by mobile users. In cellular networks, however, the unreliable wireless channel leads to two major problems. Poor channel states degrade video quality and interrupt the playback when a user cannot sufficiently fill its local playout buffer: buffer underruns occur. In contrast, good channel conditions cause common greedy buffering schemes to buffer too much data. Such over-buffering wastes expensive wireless channel capacity. Assuming that we can anticipate future data rates, we plan the quality and download time of video segments ahead. This anticipatory download scheduling avoids buffer underruns by downloading a large number of segments before a drop in available data rate occurs, without wasting wireless capacity by excessive buffering.

We developed a practical anticipatory scheduling algorithm for segmented video streaming protocols (e.g., HLS or MPEG DASH). Simulation results and testbed measurements show that our solution essentially eliminates playback interruptions without significantly decreasing video quality.
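The planning idea can be illustrated with a minimal buffer model (our toy, not the paper's scheduler): given predicted per-slot download rates and a constant playout rate, a deep planned buffer absorbs an anticipated rate drop, while a shallow greedy buffer underruns.

```python
def simulate(pred_rates, playout=1.0, buffer_cap=float("inf")):
    """Return True iff playback survives without a buffer underrun.

    pred_rates: predicted downloadable video seconds per time slot.
    playout:    video seconds consumed per time slot.
    buffer_cap: maximum buffered video; a small cap models a greedy
                scheme that cannot prefetch ahead of a rate drop.
    """
    buf = 0.0
    for rate in pred_rates:
        buf = min(buffer_cap, buf + rate)  # download during this slot
        buf -= playout                     # play back one slot of video
        if buf < 0:
            return False                   # buffer underrun: playback stalls
    return True
```

With predicted rates `[3, 3, 0, 0]`, an unconstrained plan prefetches enough to survive the two-slot outage, whereas capping the buffer at one slot of video causes a stall.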


@inproceedings{DBDVK2015,

author = {Martin Dr{\"a}xler AND Johannes Blobel AND Philipp Dreimann AND Stefan Valentin AND Holger Karl},

title = {SmarterPhones: Anticipatory Download Scheduling for Wireless Video Streaming},

booktitle = {Proceedings of the 2nd International Conference on Networked Systems (NetSys)},

year = {2015},

pages = {1--8},

publisher = {IEEE},

abstract = {Video streaming is in high demand by mobile users. In cellular networks, however, the unreliable wireless channel leads to two major problems. Poor channel states degrade video quality and interrupt the playback when a user cannot sufficiently fill its local playout buffer: buffer underruns occur. In contrast, good channel conditions cause common greedy buffering schemes to buffer too much data. Such over-buffering wastes expensive wireless channel capacity. Assuming that we can anticipate future data rates, we plan the quality and download time of video segments ahead. This anticipatory download scheduling avoids buffer underruns by downloading a large number of segments before a drop in available data rate occurs, without wasting wireless capacity by excessive buffering.We developed a practical anticipatory scheduling algorithm for segmented video streaming protocols (e.g., HLS or MPEG DASH). Simulation results and testbed measurements show that our solution essentially eliminates playback interruptions without significantly decreasing video quality.}

}


Dominik Gutt, Philipp Herrmann:

**Sharing Means Caring? Hosts' Price Reactions to Rating Visibility**

In Proceedings of the Twenty Third European Conference on Information Systems (ECIS), Münster **(2015)**

We empirically investigate how hosts on Airbnb, a popular peer-to-peer website for fee-based sharing of under-utilized space, adjust their prices once their offering gets a visible star rating for the first time. We use data for over 14,000 offerings from Airbnb which we collected for New York City. Our findings indicate that hosts whose offerings achieve star rating visibility significantly increase their prices by an average of 2.69 € more than hosts with comparable offerings who do not experience this rating visibility during the time of observation. Out of all offerings that achieve rating visibility, we identify the upper quartile of hosts to be the main driver of this price increase, whereas the first 75% show only a marginal price reaction. These results can serve as a first step towards understanding the motivation of people to provide assets to the sharing economy.

@inproceedings{sharingmeanscaring,

author = {Dominik Gutt AND Philipp Herrmann},

title = {Sharing Means Caring? Hosts' Price Reactions to Rating Visibility},

booktitle = {Proceedings of the Twenty Third European Conference on Information Systems (ECIS), M{\"u}nster},

year = {2015},

abstract = {We empirically investigate how hosts on Airbnb, a popular peer-to-peer website for fee-based sharing of under-utilized space, adjust their prices once their offering gets a visible star rating for the first time. We use data for over 14,000 offerings from Airbnb which we collected for New York City. Our findings indicate that hosts whose offerings achieve star rating visibility significantly increase their prices by an average of 2.69 € more than hosts with comparable offerings who do not experience this rating visibility during the time of observation. Out of all offerings that achieve rating visibility, we identify the upper quartile of hosts to be the main driver of this price increase, whereas the first 75% show only a marginal price reaction. These results can serve as a first step towards understanding the motivation of people to provide assets to the sharing economy.}

}


Matthias Trykacz:

**Share Economy - Identifikation von konstituierenden Merkmalen anhand einer vergleichenden Betrachtung von Geschäftsmodellen** (Share economy: identifying constitutive characteristics through a comparative analysis of business models)

Bachelor thesis, Paderborn University **(2015)**

@misc{sharing_economy_geschaeftsmodelle,

author = {Matthias Trykacz},

title = {Share Economy - Identifikation von konstituierenden Merkmalen anhand einer vergleichenden Betrachtung von Gesch{\"a}ftsmodellen},

year = {2015},

note = {Bachelor thesis, Paderborn University}

}

Berno Buechel, Nils Roehl:

In

[Show Abstract]

**Robust equilibria in location games**In

*European Journal of Operational Research*, vol. 240, no. 2, pp. 505-517. Elsevier**(2015)**[Show Abstract]

In the framework of spatial competition, two or more players strategically choose a location in order to attract consumers. It is assumed standardly that consumers with the same favorite location fully agree on the ranking of all possible locations. To investigate the necessity of this questionable and restrictive assumption, we model heterogeneity in consumers’ distance perceptions by individual edge lengths of a given graph. A profile of location choices is called a “robust equilibrium” if it is a Nash equilibrium in several games which differ only by the consumers’ perceptions of distances. For a finite number of players and any distribution of consumers, we provide a complete characterization of robust equilibria and derive structural conditions for their existence. Furthermore, we discuss whether the classical observations of minimal differentiation and inefficiency are robust phenomena. Thereby, we find strong support for an old conjecture that in equilibrium firms form local clusters.

[Show BibTeX] @article{BBRN2015,

author = {Berno Buechel AND Nils Roehl},

title = {Robust equilibria in location games},

journal = {European Journal of Operational Research},

year = {2015},

volume = {240},

number = {2},

pages = {505-517},

abstract = {In the framework of spatial competition, two or more players strategically choose a location in order to attract consumers. It is assumed standardly that consumers with the same favorite location fully agree on the ranking of all possible locations. To investigate the necessity of this questionable and restrictive assumption, we model heterogeneity in consumers’ distance perceptions by individual edge lengths of a given graph. A profile of location choices is called a “robust equilibrium” if it is a Nash equilibrium in several games which differ only by the consumers’ perceptions of distances. For a finite number of players and any distribution of consumers, we provide a complete characterization of robust equilibria and derive structural conditions for their existence. Furthermore, we discuss whether the classical observations of minimal differentiation and inefficiency are robust phenomena. Thereby, we find strong support for an old conjecture that in equilibrium firms form local clusters.}

}

[DOI]

Melissa Sonntag:

Bachelor thesis, Paderborn University

[Show BibTeX]

**Reputation und Vertrauen auf Online-Märkten**Bachelor thesis, Paderborn University

**(2015)**[Show BibTeX]

@misc{Sonntag2015,

author = {Melissa Sonntag},

title = {Reputation und Vertrauen auf Online-M{\"a}rkten},

year = {2015},

note = {Bachelor thesis, Paderborn University}

}

Dominik Gutt, Dennis Kundisch:

[Show Abstract]

**Rating Aggregation in Multi-Dimensional Rating Systems: How Do Reviewers Form Overall Ratings?****(2015)**(contribution at: INFORMS Annual Meeting, Philadelphia, USA)[Show Abstract]

A recent strain of literature on online product reviews has focused in particular on multi-dimensional product reviews. Multi-dimensional product reviews usually allow the reviewer to rate a product first, based on one overall rating, and second, based on a set of several sub-dimensions. Mostly, overall ratings do not equal e.g. the calculated mean of the sub-dimensions. Our research will shed light on the question, which heuristics reviewers use to form an overall rating.

[Show BibTeX] @misc{aggregation_informs,

author = {Dominik Gutt AND Dennis Kundisch},

title = {Rating Aggregation in Multi-Dimensional Rating Systems: How Do Reviewers Form Overall Ratings?},

year = {2015},

note = {contribution at: INFORMS Annual Meeting, Philadelphia, USA},

abstract = {A recent strain of literature on online product reviews has focused in particular on multi-dimensional product reviews. Multi-dimensional product reviews usually allow the reviewer to rate a product first, based on one overall rating, and second, based on a set of several sub-dimensions. Mostly, overall ratings do not equal e.g. the calculated mean of the sub-dimensions. Our research will shed light on the question, which heuristics reviewers use to form an overall rating.}

}

Philip Wette:

PhD thesis, University of Paderborn

[Show BibTeX]

**Optimizing Software-Defined Networks using Application-Layer Knowledge**PhD thesis, University of Paderborn

**(2015)**[Show BibTeX]

@phdthesis{PhDWette,

author = {Philip Wette},

title = {Optimizing Software-Defined Networks using Application-Layer Knowledge},

school = {University of Paderborn},

year = {2015}

}

[DOI]

Christine Markarian, Friedhelm Meyer auf der Heide:

In Proceedings of the 2015 ACM Symposium on Principles of Distributed Computing (PODC). ACM, pp. 343-344

[Show Abstract]

**Online Resource Leasing**In Proceedings of the 2015 ACM Symposium on Principles of Distributed Computing (PODC). ACM, pp. 343-344

**(2015)**[Show Abstract]

Many markets have seen a shift from the idea of buying and moved to leasing instead. Arguably, the latter has been the major catalyst for their success. Ten years ago, research realized this shift and initiated the study of "online leasing problems" by introducing leasing to online optimization problems. Resources required to provide a service in an "online leasing problem" are no more bought but leased for different durations. In this paper, we provide an overview of results that contribute to the understanding of "online resource leasing problems".

[Show BibTeX] @inproceedings{MM-PODC2016,

author = {Christine Markarian AND Friedhelm Meyer auf der Heide},

title = {Online Resource Leasing},

booktitle = {Proceedings of the 2015 ACM Symposium on Principles of Distributed Computing (PODC)},

year = {2015},

pages = {343-344},

publisher = {ACM},

month = {July, 21-23},

abstract = {Many markets have seen a shift from the idea of buying and moved to leasing instead. Arguably, the latter has been the major catalyst for their success. Ten years ago, research realized this shift and initiated the study of "online leasing problems" by introducing leasing to online optimization problems. Resources required to provide a service in an "online leasing problem" are no more bought but leased for different durations. In this paper, we provide an overview of results that contribute to the understanding of "online resource leasing problems". }

}

[DOI]

Christine Markarian:

PhD thesis, University of Paderborn

[Show BibTeX]

**Online Resource Leasing**PhD thesis, University of Paderborn

**(2015)**[Show BibTeX]

@phdthesis{PhDMarkarian,

author = {Christine Markarian},

title = {Online Resource Leasing},

school = {University of Paderborn},

year = {2015}

}

[DOI]

Alexander Lange:

Bachelor thesis, Paderborn University

[Show BibTeX]

**Online Bewertungssysteme – Ein systematischer Überblick**Bachelor thesis, Paderborn University

**(2015)**[Show BibTeX]

@misc{Lange_Alexander,

author = {Alexander Lange},

title = {Online Bewertungssysteme – Ein systematischer {\"U}berblick},

year = {2015},

note = {Bachelor thesis, Paderborn University}

}

Sebastian Abshoff:

PhD thesis, University of Paderborn

[Show BibTeX]

**On the Complexity of Fundamental Problems in Dynamic Ad-hoc Networks**PhD thesis, University of Paderborn

**(2015)**[Show BibTeX]

@phdthesis{PhDAbshoff,

author = {Sebastian Abshoff},

title = {On the Complexity of Fundamental Problems in Dynamic Ad-hoc Networks},

school = {University of Paderborn},

year = {2015}

}

[DOI]

Maximilian Drees, Matthias Feldotto, Sören Riechers, Alexander Skopalik:

In Proceedings of the 8th International Symposium on Algorithmic Game Theory (SAGT). Springer Berlin Heidelberg, Lecture Notes in Computer Science, vol. 9347, pp. 178-189

[Show Abstract]

**On Existence and Properties of Approximate Pure Nash Equilibria in Bandwidth Allocation Games**In Proceedings of the 8th International Symposium on Algorithmic Game Theory (SAGT). Springer Berlin Heidelberg, Lecture Notes in Computer Science, vol. 9347, pp. 178-189

**(2015)**[Show Abstract]

In *bandwidth allocation games* (BAGs), the strategy of a player consists of various demands on different resources. The player's utility is at most the sum of these demands, provided they are fully satisfied. Every resource has a limited capacity and if it is exceeded by the total demand, it has to be split between the players. Since these games generally do not have pure Nash equilibria, we consider approximate pure Nash equilibria, in which no player can improve her utility by more than some fixed factor $\alpha$ through unilateral strategy changes. There is a threshold $\alpha_\delta$ (where $\delta$ is a parameter that limits the demand of each player on a specific resource) such that $\alpha$-approximate pure Nash equilibria always exist for $\alpha \geq \alpha_\delta$, but not for $\alpha < \alpha_\delta$. We give both upper and lower bounds on this threshold $\alpha_\delta$ and show that the corresponding decision problem is NP-hard. We also show that the $\alpha$-approximate price of anarchy for BAGs is $\alpha+1$. For a restricted version of the game, where demands of players only differ slightly from each other (e.g. symmetric games), we show that approximate Nash equilibria can be reached (and thus also be computed) in polynomial time using the best-response dynamic. Finally, we show that a broader class of utility-maximization games (which includes BAGs) converges quickly towards states whose social welfare is close to the optimum.

[Show BibTeX] @inproceedings{DFRS15,

author = {Maximilian Drees AND Matthias Feldotto AND S{\"o}ren Riechers AND Alexander Skopalik},

title = {On Existence and Properties of Approximate Pure Nash Equilibria in Bandwidth Allocation Games},

booktitle = {Proceedings of the 8th International Symposium on Algorithmic Game Theory (SAGT)},

year = {2015},

pages = {178-189},

publisher = {Springer Berlin Heidelberg},

abstract = {In \emph{bandwidth allocation games} (BAGs), the strategy of a player consists of various demands on different resources. The player's utility is at most the sum of these demands, provided they are fully satisfied. Every resource has a limited capacity and if it is exceeded by the total demand, it has to be split between the players. Since these games generally do not have pure Nash equilibria, we consider approximate pure Nash equilibria, in which no player can improve her utility by more than some fixed factor $\alpha$ through unilateral strategy changes. There is a threshold $\alpha_\delta$ (where $\delta$ is a parameter that limits the demand of each player on a specific resource) such that $\alpha$-approximate pure Nash equilibria always exist for $\alpha \geq \alpha_\delta$, but not for $\alpha < \alpha_\delta$. We give both upper and lower bounds on this threshold $\alpha_\delta$ and show that the corresponding decision problem is ${\sf NP}$-hard. We also show that the $\alpha$-approximate price of anarchy for BAGs is $\alpha+1$. For a restricted version of the game, where demands of players only differ slightly from each other (e.g. symmetric games), we show that approximate Nash equilibria can be reached (and thus also be computed) in polynomial time using the best-response dynamic. Finally, we show that a broader class of utility-maximization games (which includes BAGs) converges quickly towards states whose social welfare is close to the optimum.},

series = {Lecture Notes in Computer Science},

volume = {9347}

}

[DOI]

Andreas Cord-Landwehr, Pascal Lenzner:

In Proceedings of the 40th Conference on Mathematical Foundations of Computer Science (MFCS). Springer, LNCS, vol. 9235, pp. 248-260

[Show Abstract]

**Network Creation Games: Think Global - Act Local**In Proceedings of the 40th Conference on Mathematical Foundations of Computer Science (MFCS). Springer, LNCS, vol. 9235, pp. 248-260

**(2015)**[Show Abstract]

We investigate a non-cooperative game-theoretic model for the formation of communication networks by selfish agents. Each agent aims for a central position at minimum cost for creating edges. In particular, the general model (Fabrikant et al., PODC'03) became popular for studying the structure of the Internet or social networks. Despite its significance, locality in this game was first studied only recently (Bilò et al., SPAA'14), where a worst case locality model was presented, which came with a high efficiency loss in terms of quality of equilibria. Our main contribution is a new and more optimistic view on locality: agents are limited in their knowledge and actions to their local view ranges, but can probe different strategies and finally choose the best. We study the influence of our locality notion on the hardness of computing best responses, convergence to equilibria, and quality of equilibria. Moreover, we compare the strength of local versus non-local strategy changes. Our results address the gap between the original model and the worst case locality variant. On the bright side, our efficiency results are in line with observations from the original model, yet we have a non-constant lower bound on the Price of Anarchy.

[Show BibTeX] @inproceedings{mfcs2015ncg,

author = {Andreas Cord-Landwehr AND Pascal Lenzner},

title = {Network Creation Games: Think Global - Act Local},

booktitle = {Proceedings of the 40th Conference on Mathematical Foundations of Computer Science (MFCS)},

year = {2015},

pages = {248--260},

publisher = {Springer},

abstract = {We investigate a non-cooperative game-theoretic model for the formation of communication networks by selfish agents. Each agent aims for a central position at minimum cost for creating edges. In particular, the general model (Fabrikant et al., PODC'03) became popular for studying the structure of the Internet or social networks. Despite its significance, locality in this game was first studied only recently (Bilò et al., SPAA'14), where a worst case locality model was presented, which came with a high efficiency loss in terms of quality of equilibria. Our main contribution is a new and more optimistic view on locality: agents are limited in their knowledge and actions to their local view ranges, but can probe different strategies and finally choose the best. We study the influence of our locality notion on the hardness of computing best responses, convergence to equilibria, and quality of equilibria. Moreover, we compare the strength of local versus non-local strategy changes. Our results address the gap between the original model and the worst case locality variant. On the bright side, our efficiency results are in line with observations from the original model, yet we have a non-constant lower bound on the Price of Anarchy.},

series = {LNCS},

volume = {9235}

}

[DOI]

Till Hohenberger:

Master's thesis, University of Paderborn

[Show BibTeX]

**Network Creation Games with Interest Groups**Master's thesis, University of Paderborn

**(2015)**[Show BibTeX]

@mastersthesis{msc_ncg-with-interest-groups,

author = {Till Hohenberger},

title = {Network Creation Games with Interest Groups},

school = {University of Paderborn},

year = {2015}

}

Nils Kothe:

Bachelor thesis, University of Paderborn

[Show BibTeX]

**Multilevel Netzwerk Spiele mit konstanten Entfernungen im Highspeed-Netzwerk**Bachelor thesis, University of Paderborn

**(2015)**[Show BibTeX]

@misc{bsc_ncg-multilevel-kosten,

author = {Nils Kothe},

title = {Multilevel Netzwerk Spiele mit konstanten Entfernungen im Highspeed-Netzwerk},

year = {2015},

note = {Bachelor thesis, University of Paderborn}

}

Tobias Rojahn:

Master's thesis, University of Paderborn

[Show BibTeX]

**Load Balancing for Range Queries in a Dimension Invariant Peer-to-Peer Network**Master's thesis, University of Paderborn

**(2015)**[Show BibTeX]

@mastersthesis{msc_load-balancing-for-range-queries,

author = {Tobias Rojahn},

title = {Load Balancing for Range Queries in a Dimension Invariant Peer-to-Peer Network},

school = {University of Paderborn},

year = {2015}

}

Philip Wette, Holger Karl:

In Proceedings of the 4th European Workshop on Software Defined Networks (EWSDN 2015). IEEE, pp. 1-7

[Show Abstract]

**HybridTE: Traffic Engineering for Very Low-Cost Software-Defined Data-Center Networks**In Proceedings of the 4th European Workshop on Software Defined Networks (EWSDN 2015). IEEE, pp. 1-7

**(2015)**(won best paper award)[Show Abstract]

The size of modern data centers is constantly increasing. As it is not economic to interconnect all machines in the data center using a full-bisection-bandwidth network, techniques have to be developed to increase the efficiency of data-center networks. The Software-Defined Network paradigm opened the door for centralized traffic engineering (TE) in such environments. Up to now, there were already a number of TE proposals for SDN-controlled data centers that all work very well. However, these techniques either use a high amount of flow table entries or a high flow installation rate that overwhelms available switching hardware, or they require custom or very expensive end-of-line equipment to be usable in practice. We present HybridTE, a TE technique that uses (uncertain) information about large flows. Using this extra information, our technique has very low hardware requirements while maintaining better performance than existing TE techniques. This enables us to build very low-cost, high performance data-center networks.

[Show BibTeX] @inproceedings{WetteKarl15,

author = {Philip Wette AND Holger Karl},

title = {HybridTE: Traffic Engineering for Very Low-Cost Software-Defined Data-Center Networks},

booktitle = {Proceedings of the 4th European Workshop on Software Defined Networks (EWSDN 2015)},

year = {2015},

pages = {1--7},

publisher = {IEEE},

note = {won best paper award},

abstract = {The size of modern data centers is constantly increasing. As it is not economic to interconnect all machines in the data center using a full-bisection-bandwidth network, techniques have to be developed to increase the efficiency of data-center networks. The Software-Defined Network paradigm opened the door for centralized traffic engineering (TE) in such environments. Up to now, there were already a number of TE proposals for SDN-controlled data centers that all work very well. However, these techniques either use a high amount of flow table entries or a high flow installation rate that overwhelms available switching hardware, or they require custom or very expensive end-of-line equipment to be usable in practice. We present HybridTE, a TE technique that uses (uncertain) information about large flows. Using this extra information, our technique has very low hardware requirements while maintaining better performance than existing TE techniques. This enables us to build very low-cost, high performance data-center networks.}

}

[DOI]

Joe Cox, Daniel Kaimann:

In

[Show Abstract]

**How do reviews from professional critics interact with other signals of product quality? Evidence from the video game industry**In

*Journal of Consumer Behaviour*, vol. 14, no. 6, pp. 366-377.**(2015)**[Show Abstract]

Experience goods are characterised by information asymmetry and a lack of ex ante knowledge of product quality, such that reliable external signals of quality are likely to be highly valued. Two potentially credible sources of such information are reviews from professional critics and ‘word of mouth’ from other consumers. This paper makes a direct comparison between the relative influences and interactions of reviews from both of these sources on the sales performance of video game software. In order to empirically estimate and separate the effects of the two signals, we analyze a sample of 1480 video games and their sales figures between 2004 and 2010. We find evidence to suggest that even after taking steps to control for endogeneity, reviews from professional critics have a significantly positive influence on sales which outweighs that from consumer reviews. We also find evidence to suggest that reviews from professional critics also interact significantly with other signals of product quality. Consequently, we contend that professional critics adopt the role of an influencer, whereas word-of-mouth opinion acts more as a predictor of sales in the market for video games.

[Show BibTeX] @article{CK15,

author = {Joe Cox AND Daniel Kaimann},

title = {How do reviews from professional critics interact with other signals of product quality? Evidence from the video game industry},

journal = {Journal of Consumer Behaviour},

year = {2015},

volume = {14},

number = {6},

pages = {366-377},

abstract = {Experience goods are characterised by information asymmetry and a lack of ex ante knowledge of product quality, such that reliable external signals of quality are likely to be highly valued. Two potentially credible sources of such information are reviews from professional critics and ‘word of mouth’ from other consumers. This paper makes a direct comparison between the relative influences and interactions of reviews from both of these sources on the sales performance of video game software. In order to empirically estimate and separate the effects of the two signals, we analyze a sample of 1480 video games and their sales figures between 2004 and 2010. We find evidence to suggest that even after taking steps to control for endogeneity, reviews from professional critics have a significantly positive influence on sales which outweighs that from consumer reviews. We also find evidence to suggest that reviews from professional critics also interact significantly with other signals of product quality. Consequently, we contend that professional critics adopt the role of an influencer, whereas word-of-mouth opinion acts more as a predictor of sales in the market for video games.}

}

[DOI]

Philipp Herrmann, Dennis Kundisch, Steffen Zimmermann, Barry Nault:

In Proceedings of the Thirty Sixth International Conference on Information Systems (ICIS), Fort Worth.

[Show Abstract]

**How do Different Sources of the Variance of Consumer Ratings Matter?**In Proceedings of the Thirty Sixth International Conference on Information Systems (ICIS), Fort Worth.

**(2015)**[Show Abstract]

We examine the effect of the variance of consumer ratings on product pricing and sales using an analytical model, which considers goods that are characterized by experience attributes and informed search attributes (i.e., experience attributes that were transformed into search attributes by consumer ratings). For pure informed search goods, equilibrium price increases and demand decreases in variance. For pure experience goods, equilibrium price and demand decrease in variance. For hybrid goods with low total variance, equilibrium price and demand increase with an increasing relative share of variance caused by informed search attributes when the average rating and total variance of ratings are held constant. Hence, risk-averse consumers may prefer a more expensive good with a higher variance of ratings out of two similar goods with the same average rating. Moreover, our analytical model provides a theoretical foundation for the empirically observed j-shaped distribution of consumer ratings in electronic commerce.

[Show BibTeX] @inproceedings{Herrmann_et_al_2015,

author = {Philipp Herrmann AND Dennis Kundisch AND Steffen Zimmermann AND Barry Nault},

title = {How do Different Sources of the Variance of Consumer Ratings Matter?},

booktitle = {Proceedings of the Thirty Sixth International Conference on Information Systems (ICIS), Fort Worth},

year = {2015},

abstract = {We examine the effect of the variance of consumer ratings on product pricing and sales using an analytical model, which considers goods that are characterized by experience attributes and informed search attributes (i.e., experience attributes that were transformed into search attributes by consumer ratings). For pure informed search goods, equilibrium price increases and demand decreases in variance. For pure experience goods, equilibrium price and demand decrease in variance. For hybrid goods with low total variance, equilibrium price and demand increase with an increasing relative share of variance caused by informed search attributes when the average rating and total variance of ratings are held constant. Hence, risk-averse consumers may prefer a more expensive good with a higher variance of ratings out of two similar goods with the same average rating. Moreover, our analytical model provides a theoretical foundation for the empirically observed j-shaped distribution of consumer ratings in electronic commerce.}

}


Andreas Koutsopoulos:

PhD thesis, University of Paderborn

[Show BibTeX]

**Dynamics and Efficiency in Topological Self-Stabilization**PhD thesis, University of Paderborn

**(2015)**[Show BibTeX]

@phdthesis{PhDKoutsopoulos,

author = {Andreas Koutsopoulos},

title = {Dynamics and Efficiency in Topological Self-Stabilization},

school = {University of Paderborn},

year = {2015}

}


Sebastian Kniesburges:

PhD thesis, University of Paderborn

[Show BibTeX]

**Distributed Data Structures and the Power of topological Self-Stabilization**PhD thesis, University of Paderborn

**(2015)**[Show BibTeX]

@phdthesis{PhDKniesburges,

author = {Sebastian Kniesburges},

title = {Distributed Data Structures and the Power of topological Self-Stabilization},

school = {University of Paderborn},

year = {2015}

}


Philipp Herrmann, Dennis Kundisch, Steffen Zimmermann, Barry Nault:

[Show Abstract]

**Different Sources of the Variance of Online Consumer Ratings and their Impact on Price and Demand****(2015)**(contribution at: INFORMS Conference on Information Systems and Technology (CIST), Philadelphia, USA)[Show Abstract]

Consumer ratings can play a decisive role in purchases by online shoppers. To examine the effect of the variance of these ratings on future product pricing and sales we develop a model which considers goods that are characterized by two types of attributes: experience attributes and experience attributes that were transformed into search attributes by consumer ratings, which we call informed search attributes. For pure informed search goods where the variance in ratings is caused by an informed search attribute, we find that with increasing variance optimal price increases and demand decreases. For pure experience goods where the variance in ratings is caused by an experience attribute, we find that with increasing variance optimal price and demand decrease. For hybrid goods – where the variance in ratings is caused by both attributes – when there is low total variance, and the average rating and total variance are held constant, optimal price and demand increase as the relative share of variance caused by informed search attributes increases. Via this mechanism, between two similar goods with the same average rating risk-averse consumers may prefer the higher priced good with a higher variance. In addition, our model provides a theoretical explanation for the empirically observed j-shaped distribution of consumer ratings in electronic commerce.

[Show BibTeX] @misc{variance_cist,

author = {Philipp Herrmann AND Dennis Kundisch AND Steffen Zimmermann AND Barry Nault},

title = {Different Sources of the Variance of Online Consumer Ratings and their Impact on Price and Demand},

year = {2015},

note = {contribution at: INFORMS Conference on Information Systems and Technology (CIST), Philadelphia, USA},

abstract = {Consumer ratings can play a decisive role in purchases by online shoppers. To examine the effect of the variance of these ratings on future product pricing and sales we develop a model which considers goods that are characterized by two types of attributes: experience attributes and experience attributes that were transformed into search attributes by consumer ratings, which we call informed search attributes. For pure informed search goods where the variance in ratings is caused by an informed search attribute, we find that with increasing variance optimal price increases and demand decreases. For pure experience goods where the variance in ratings is caused by an experience attribute, we find that with increasing variance optimal price and demand decrease. For hybrid goods – where the variance in ratings is caused by both attributes – when there is low total variance, and the average rating and total variance are held constant, optimal price and demand increase as the relative share of variance caused by informed search attributes increases. Via this mechanism, between two similar goods with the same average rating risk-averse consumers may prefer the higher priced good with a higher variance. In addition, our model provides a theoretical explanation for the empirically observed j-shaped distribution of consumer ratings in electronic commerce.}

}


Alina Reimann:

Bachelor thesis, Paderborn University

[Show BibTeX]

**Die Wirksamkeit von Zertifikaten als Qualitätssignal**Bachelor thesis, Paderborn University

**(2015)**[Show BibTeX]

@misc{Reimann2015,

author = {Alina Reimann},

title = {Die Wirksamkeit von Zertifikaten als Qualit{\"a}tssignal},

note = {Bachelor thesis, Paderborn University},

year = {2015}

}


Sonja Brangewitz, Claus-Jochen Haake, Jochen Manegold:

In Ortiz, Guadalupe and Tran, Cuong (eds.): Advances in Service-Oriented and Cloud Computing. Springer International Publishing, Communications in Computer and Information Science, vol. 508, pp. 160-174

[Show Abstract]

**Contract Design for Composed Services in a Cloud Computing Environment**In Ortiz, Guadalupe and Tran, Cuong (eds.): Advances in Service-Oriented and Cloud Computing. Springer International Publishing, Communications in Computer and Information Science, vol. 508, pp. 160-174

**(2015)**(Proceedings of the 2nd International Workshop on Cloud Service Brokerage, CSB 2014)[Show Abstract]

In this paper, we study markets in which sellers and buyers interact with each other via an intermediary. Our motivating example is a market with a cloud infrastructure where single services are flexibly combined into composed services. We address the contract design problem of an intermediary to purchase complementary single services. By using a non-cooperative game-theoretic model, we analyze the incentives for high- and low-quality composed services to be an equilibrium outcome of the market. It turns out that equilibria with low quality can be obtained in the short run and in the long run, whereas those with high quality can only be achieved in the long run. In our analysis we explicitly determine the corresponding discount factors needed in an infinitely repeated game. Furthermore, we derive optimal contracts for the supply of high- and low-quality composed services.

[Show BibTeX]

@inproceedings{BHM14,

author = {Sonja Brangewitz AND Claus-Jochen Haake AND Jochen Manegold},

title = {Contract Design for Composed Services in a Cloud Computing Environment},

booktitle = {Advances in Service-Oriented and Cloud Computing},

year = {2015},

editor = {Ortiz, Guadalupe and Tran, Cuong},

pages = {160-174},

publisher = {Springer International Publishing},

note = {Proceedings of the 2nd International Workshop on Cloud Service Brokerage, CSB 2014},

abstract = {In this paper, we study markets in which sellers and buyers interact with each other via an intermediary. Our motivating example is a market with a cloud infrastructure where single services are flexibly combined into composed services. We address the contract design problem of an intermediary to purchase complementary single services. By using a non-cooperative game-theoretic model, we analyze the incentives for high- and low-quality composed services to be an equilibrium outcome of the market. It turns out that equilibria with low quality can be obtained in the short run and in the long run, whereas those with high quality can only be achieved in the long run. In our analysis we explicitly determine the corresponding discount factors needed in an infinitely repeated game. Furthermore, we derive optimal contracts for the supply of high- and low-quality composed services.},

series = {Communications in Computer and Information Science}

}


Sonja Brangewitz, Jochen Manegold:

Techreport UPB.

[Show Abstract]

**Competition and Product Innovation of Intermediaries in a Differentiated Duopoly**Techreport UPB.

**(2015)**[Show Abstract]

On an intermediate goods market we allow for vertical and horizontal product differentiation and analyze the influence of simultaneous competition for resources and customers on the market outcome. Asymmetries between intermediaries cannot arise just from distinct product qualities, but also from different production technologies. The intermediaries face either price or quantity competition on the output market and a monopolistic input supplier on the input market. We find that there exist quality and productivity differences such that for quantity competition only one intermediary is willing to procure inputs from the input supplier, while for price competition both intermediaries are willing to purchase inputs. Considering product innovation for symmetric productivities we derive equilibrium conditions on the investment costs and compare price and quantity competition. It turns out that on the one hand there exist product qualities and degrees of horizontal product differentiation for complements such that asymmetric investment equilibria fail to exist. On the other hand we find that there also exist product qualities and degrees of horizontal product differentiation for substitutes such that existence can be guaranteed if the investment costs are chosen accordingly.

[Show BibTeX] @techreport{SBJM2015a,

author = {Sonja Brangewitz AND Jochen Manegold},

title = {Competition and Product Innovation of Intermediaries in a Differentiated Duopoly},

year = {2015},

type = {Techreport},

institution = {Paderborn University},

abstract = {On an intermediate goods market we allow for vertical and horizontal product differentiation and analyze the influence of simultaneous competition for resources and customers on the market outcome. Asymmetries between intermediaries cannot arise just from distinct product qualities, but also from different production technologies. The intermediaries face either price or quantity competition on the output market and a monopolistic input supplier on the input market. We find that there exist quality and productivity differences such that for quantity competition only one intermediary is willing to procure inputs from the input supplier, while for price competition both intermediaries are willing to purchase inputs. Considering product innovation for symmetric productivities we derive equilibrium conditions on the investment costs and compare price and quantity competition. It turns out that on the one hand there exist product qualities and degrees of horizontal product differentiation for complements such that asymmetric investment equilibria fail to exist. On the other hand we find that there also exist product qualities and degrees of horizontal product differentiation for substitutes such that existence can be guaranteed if the investment costs are chosen accordingly.}

}


Jannis Pautz:

Bachelor thesis, Paderborn University

[Show BibTeX]

**Budget Games with priced strategies**Bachelor thesis, Paderborn University

**(2015)**[Show BibTeX]

@misc{Pautz15,

author = {Jannis Pautz},

title = {Budget Games with priced strategies},

note = {Bachelor thesis, Paderborn University},

year = {2015}

}


Claudius Jähn:

PhD thesis, University of Paderborn

[Show BibTeX]

**Bewertung von Renderingalgorithmen für komplexe 3-D-Szenen**PhD thesis, University of Paderborn

**(2015)**[Show BibTeX]

@phdthesis{PhDJaehn,

author = {Claudius J{\"a}hn},

title = {Bewertung von Renderingalgorithmen f{\"u}r komplexe 3-D-Szenen},

school = {University of Paderborn},

year = {2015}

}


Ioannis Caragiannis, Angelo Fanelli, Nick Gravin, Alexander Skopalik:

In

[Show Abstract]

**Approximate Pure Nash Equilibria in Weighted Congestion Games: Existence, Efficient Computation, and Structure**In

*Transactions on Economics and Computation*, vol. 3, no. 1, pp. 2. ACM**(2015)**[Show Abstract]

We consider structural and algorithmic questions related to the Nash dynamics of weighted congestion games. In weighted congestion games with linear latency functions, the existence of pure Nash equilibria is guaranteed by a potential function argument. Unfortunately, this proof of existence is inefficient and computing pure Nash equilibria in such games is a PLS-hard problem even when all players have unit weights. The situation gets worse when superlinear (e.g., quadratic) latency functions come into play; in this case, the Nash dynamics of the game may contain cycles and pure Nash equilibria may not even exist. Given these obstacles, we consider approximate pure Nash equilibria as alternative solution concepts. A ρ-approximate pure Nash equilibrium is a state of a (weighted congestion) game from which no player has any incentive to deviate in order to improve her cost by a multiplicative factor higher than ρ. Do such equilibria exist for small values of ρ? And if so, can we compute them efficiently?

We provide positive answers to both questions for weighted congestion games with polynomial latency functions by exploiting an “approximation” of such games by a new class of potential games that we call Ψ-games. This allows us to show that these games have d!-approximate pure Nash equilibria, where d is the maximum degree of the latency functions. Our main technical contribution is an efficient algorithm for computing O(1)-approximate pure Nash equilibria when d is a constant. For games with linear latency functions, the approximation guarantee is (3+√5)/2 + O(γ) for arbitrarily small γ > 0; for latency functions with maximum degree d ≥ 2, it is d^(2d+o(d)). The running time is polynomial in the number of bits in the representation of the game and 1/γ. As a byproduct of our techniques, we also show the following interesting structural statement for weighted congestion games with polynomial latency functions of maximum degree d ≥ 2: polynomially long sequences of best-response moves from any initial state to a d^(O(d^2))-approximate pure Nash equilibrium exist and can be efficiently identified in such games as long as d is a constant.

To the best of our knowledge, these are the first positive algorithmic results for approximate pure Nash equilibria in weighted congestion games. Our techniques significantly extend our recent work on unweighted congestion games through the use of Ψ-games. The concept of approximating nonpotential games by potential ones is interesting in itself and might have further applications.

[Show BibTeX]

@article{DBLP:journals/teco/CaragiannisFGS15,

author = {Ioannis Caragiannis AND Angelo Fanelli AND Nick Gravin AND Alexander Skopalik},

title = {Approximate Pure Nash Equilibria in Weighted Congestion Games: Existence, Efficient Computation, and Structure},

journal = {Transactions on Economics and Computation},

year = {2015},

volume = {3},

number = {1},

pages = {2},

abstract = {We consider structural and algorithmic questions related to the Nash dynamics of weighted congestion games. In weighted congestion games with linear latency functions, the existence of pure Nash equilibria is guaranteed by a potential function argument. Unfortunately, this proof of existence is inefficient and computing pure Nash equilibria in such games is a PLS-hard problem even when all players have unit weights. The situation gets worse when superlinear (e.g., quadratic) latency functions come into play; in this case, the Nash dynamics of the game may contain cycles and pure Nash equilibria may not even exist. Given these obstacles, we consider approximate pure Nash equilibria as alternative solution concepts. A ρ-approximate pure Nash equilibrium is a state of a (weighted congestion) game from which no player has any incentive to deviate in order to improve her cost by a multiplicative factor higher than ρ. Do such equilibria exist for small values of ρ? And if so, can we compute them efficiently? We provide positive answers to both questions for weighted congestion games with polynomial latency functions by exploiting an “approximation” of such games by a new class of potential games that we call Ψ-games. This allows us to show that these games have d!-approximate pure Nash equilibria, where d is the maximum degree of the latency functions. Our main technical contribution is an efficient algorithm for computing O(1)-approximate pure Nash equilibria when d is a constant. For games with linear latency functions, the approximation guarantee is (3+√5)/2 + O(γ) for arbitrarily small γ > 0; for latency functions with maximum degree d ≥ 2, it is d^(2d+o(d)). The running time is polynomial in the number of bits in the representation of the game and 1/γ. As a byproduct of our techniques, we also show the following interesting structural statement for weighted congestion games with polynomial latency functions of maximum degree d ≥ 2: polynomially long sequences of best-response moves from any initial state to a d^(O(d^2))-approximate pure Nash equilibrium exist and can be efficiently identified in such games as long as d is a constant. To the best of our knowledge, these are the first positive algorithmic results for approximate pure Nash equilibria in weighted congestion games. Our techniques significantly extend our recent work on unweighted congestion games through the use of Ψ-games. The concept of approximating nonpotential games by potential ones is interesting in itself and might have further applications.}

}


Sebastian Kniesburges, Andreas Koutsopoulos, Christian Scheideler:

In

[Show Abstract]

**A deterministic worst-case message complexity optimal solution for resource discovery**In

*Theoretical Computer Science*, vol. 584, pp. 67-79. Elsevier**(2015)**[Show Abstract]

We consider the problem of resource discovery in distributed systems. In particular, we give an algorithm such that each node in a network discovers the address of every other node in the network. We model the knowledge of the nodes as a virtual overlay network given by a directed graph, such that complete knowledge of all nodes corresponds to a complete graph in the overlay network. Although there are several solutions for resource discovery, our solution is the first that achieves worst-case optimal work for each node, i.e., the number of addresses (O(n)) or bits (O(n log n)) a node receives or sends coincides with the lower bound, while ensuring a runtime of only a linear number of rounds (O(n)).

@article{KKS15-TOCS,

author = {Sebastian Kniesburges AND Andreas Koutsopoulos AND Christian Scheideler},

title = {A deterministic worst-case message complexity optimal solution for resource discovery},

journal = {Theoretical Computer Science},

year = {2015},

volume = {584},

pages = {67-79},

abstract = {We consider the problem of resource discovery in distributed systems. In particular, we give an algorithm such that each node in a network discovers the address of every other node in the network. We model the knowledge of the nodes as a virtual overlay network given by a directed graph, such that complete knowledge of all nodes corresponds to a complete graph in the overlay network. Although there are several solutions for resource discovery, our solution is the first that achieves worst-case optimal work for each node, i.e., the number of addresses (O(n)) or bits (O(n log n)) a node receives or sends coincides with the lower bound, while ensuring a runtime of only a linear number of rounds (O(n)).}

}


**2014** (57)

Philip Wette, Martin Dräxler, Arne Schwabe, Felix Wallaschek, Mohammad Hassan Zahraee, Holger Karl:

**MaxiNet: Distributed Emulation of Software-Defined Networks**

In Proceedings of the 2014 IFIP Networking Conference (Networking 2014). IEEE, pp. 1-9 **(2014)**

Network emulations are widely used for testing novel network protocols and routing algorithms in realistic scenarios. Up to now, there is no emulation tool that is able to emulate large software-defined data center networks that consist of several thousand nodes. Mininet is the most common tool to emulate Software-Defined Networks of several hundred nodes. We extend Mininet to span an emulated network over several physical machines, making it possible to emulate networks of several thousand nodes on just a handful of physical machines. This enables us to emulate, e.g., large data center networks. To test this approach, we additionally introduce a traffic generator for data center traffic. Since there are no data center traffic traces publicly available we use the results of two recent traffic studies to create synthetic traffic. We show the design and discuss some challenges we had in building our traffic generator. As a showcase for our work we emulated a data center consisting of 3200 hosts on a cluster of only 12 physical machines. We show the resulting workloads and the trade-offs involved.

@inproceedings{wette14b,

author = {Philip Wette AND Martin Dr{\"a}xler AND Arne Schwabe AND Felix Wallaschek AND Mohammad Hassan Zahraee AND Holger Karl},

title = {{MaxiNet:} Distributed Emulation of {Software-Defined} Networks},

booktitle = {Proceedings of the 2014 IFIP Networking Conference (Networking 2014)},

year = {2014},

pages = {1-9},

publisher = {IEEE},

abstract = {Network emulations are widely used for testing novel network protocols and routing algorithms in realistic scenarios. Up to now, there is no emulation tool that is able to emulate large software-defined data center networks that consist of several thousand nodes. Mininet is the most common tool to emulate Software-Defined Networks of several hundred nodes. We extend Mininet to span an emulated network over several physical machines, making it possible to emulate networks of several thousand nodes on just a handful of physical machines. This enables us to emulate, e.g., large data center networks. To test this approach, we additionally introduce a traffic generator for data center traffic. Since there are no data center traffic traces publicly available we use the results of two recent traffic studies to create synthetic traffic. We show the design and discuss some challenges we had in building our traffic generator. As a showcase for our work we emulated a data center consisting of 3200 hosts on a cluster of only 12 physical machines. We show the resulting workloads and the trade-offs involved.}

}


Maximilian Lange:

**Was tun um Kunden von der Qualität seiner Produkte zu überzeugen - Möglichkeiten der Zertifizierung und von Reputationssystemen auf Onlinemärkten**

Bachelor thesis, Paderborn University **(2014)**

@misc{Lange2014,

author = {Maximilian Lange},

title = {Was tun um Kunden von der Qualit{\"a}t seiner Produkte zu {\"u}berzeugen - M{\"o}glichkeiten der Zertifizierung und von Reputationssystemen auf Onlinem{\"a}rkten},

note = {Bachelor thesis, Paderborn University},

year = {2014}

}


Christopher Berkemeier:

**Verhandlungen vs Auktionen im Beschäftigungsmanagement**

Bachelor thesis, University of Paderborn **(2014)**

@misc{Berkemeier14,

author = {Christopher Berkemeier},

title = {Verhandlungen vs Auktionen im Besch{\"a}ftigungsmanagement},

note = {Bachelor thesis, University of Paderborn},

year = {2014}

}


Henri Beck:

**Verhandlungen bei variablem Status quo: Eine Modifikation des Adjusted Winner Verfahrens**

Bachelor thesis, University of Paderborn **(2014)**

@misc{Beck14,

author = {Henri Beck},

title = {Verhandlungen bei variablem Status quo: Eine Modifikation des Adjusted Winner Verfahrens},

note = {Bachelor thesis, University of Paderborn},

year = {2014}

}


Philip Wette, Holger Karl:

**Using Application Layer Knowledge in Routing and Wavelength Assignment Algorithms**

In Proceedings of the IEEE International Conference on Communications 2014. IEEE Computer Society, pp. 3270-3276 **(2014)**

Preemptive Routing and Wavelength Assignment (RWA) algorithms preempt established lightpaths in case not enough resources are available to set up a new lightpath in a Wavelength Division Multiplexing (WDM) network. The selection of lightpaths to be preempted relies on internal decisions of the RWA algorithm. Thus, if dedicated properties of the network topology are required by the applications running on the network, these requirements have to be known to the RWA algorithm. We present a family of preemptive RWA algorithms for WDM networks. These algorithms have two distinguishing features: a) they can handle dynamic traffic by on-the-fly reconfiguration, and b) users can give feedback for reconfiguration decisions and thus influence the preemption decision of the RWA algorithm, leading to networks which adapt directly to application needs. This is different from traffic engineering, where the network is (slowly) adapted to observed traffic patterns. Our algorithms handle various WDM network configurations, including networks consisting of heterogeneous WDM hardware. To this end, we are using the layered graph approach together with a newly developed graph model that is used to determine conflicting lightpaths.


@inproceedings{wette14a,

author = {Philip Wette AND Holger Karl},

title = {Using Application Layer Knowledge in Routing and Wavelength Assignment Algorithms},

booktitle = {Proceedings of the IEEE International Conference on Communications 2014},

year = {2014},

pages = {3270-3276},

publisher = {IEEE Computer Society},

abstract = {Preemptive Routing and Wavelength Assignment (RWA) algorithms preempt established lightpaths in case not enough resources are available to set up a new lightpath in a Wavelength Division Multiplexing (WDM) network. The selection of lightpaths to be preempted relies on internal decisions of the RWA algorithm. Thus, if dedicated properties of the network topology are required by the applications running on the network, these requirements have to be known to the RWA algorithm. We present a family of preemptive RWA algorithms for WDM networks. These algorithms have two distinguishing features: a) they can handle dynamic traffic by on-the-fly reconfiguration, and b) users can give feedback for reconfiguration decisions and thus influence the preemption decision of the RWA algorithm, leading to networks which adapt directly to application needs. This is different from traffic engineering, where the network is (slowly) adapted to observed traffic patterns. Our algorithms handle various WDM network configurations, including networks consisting of heterogeneous WDM hardware. To this end, we are using the layered graph approach together with a newly developed graph model that is used to determine conflicting lightpaths.}

}


Terry Fang Cheng:

**Two-Sided Market and Game Console Vendors**

Bachelor thesis, University of Paderborn **(2014)**

@misc{Cheng14,

author = {Terry Fang Cheng},

title = {Two-Sided Market and Game Console Vendors},

note = {Bachelor thesis, University of Paderborn},

year = {2014}

}


Jörn Künsemöller:

**Tragedy of the Common Cloud - Game Theory on the Infrastructure-as-a-Service Market**

PhD thesis, University of Paderborn **(2014)**

@phdthesis{PhDKuensemoeller,

author = {J{\"o}rn K{\"u}nsem{\"o}ller},

title = {Tragedy of the Common Cloud - Game Theory on the Infrastructure-as-a-Service Market},

school = {University of Paderborn},

year = {2014}

}


Sonja Brangewitz, Alexander Jungmann, Ronald Petrlic, Marie Christin Platenius:

**Towards a Flexible and Privacy-Preserving Reputation System for Markets of Composed Services**

In Proceedings of the 6th International Conferences on Advanced Service Computing (SERVICE COMPUTATION). IARIA XPS Press, pp. 49-57 **(2014)**

One future goal of service-oriented computing is to realize global markets of composed services. On such markets, service providers offer services that can be flexibly combined with each other. However, most often, market participants are not able to individually estimate the quality of traded services in advance. As a consequence, even potentially profitable transactions between customers and providers might not take place. In the worst case, this can induce a market failure. To overcome this problem, we propose the incorporation of reputation information as an indicator for expected service quality. We address On-The-Fly Computing as a representative environment of markets of composed services. In this environment, customers provide feedback on transactions. We present a conceptual design of a reputation system which collects and processes user feedback, and provides it to participants in the market. Our contribution includes the identification of requirements for such a reputation system from a technical and an economic perspective. Based on these requirements, we propose a flexible solution that facilitates the incorporation of reputation information into markets of composed services while simultaneously preserving privacy of customers who provide feedback. The requirements we formulate in this paper have just been partially met in literature. An integrated approach, however, has not been addressed yet.

@inproceedings{BJPP2014,

author = {Sonja Brangewitz AND Alexander Jungmann AND Ronald Petrlic AND Marie Christin Platenius},

title = {Towards a Flexible and Privacy-Preserving Reputation System for Markets of Composed Services},

booktitle = {Proceedings of the 6th International Conferences on Advanced Service Computing (SERVICE COMPUTATION)},

year = {2014},

pages = {49-57},

publisher = {IARIA XPS Press},

abstract = {One future goal of service-oriented computing is to realize global markets of composed services. On such markets, service providers offer services that can be flexibly combined with each other. However, most often, market participants are not able to individually estimate the quality of traded services in advance. As a consequence, even potentially profitable transactions between customers and providers might not take place. In the worst case, this can induce a market failure. To overcome this problem, we propose the incorporation of reputation information as an indicator for expected service quality. We address On-The-Fly Computing as a representative environment of markets of composed services. In this environment, customers provide feedback on transactions. We present a conceptual design of a reputation system which collects and processes user feedback, and provides it to participants in the market. Our contribution includes the identification of requirements for such a reputation system from a technical and an economic perspective. Based on these requirements, we propose a flexible solution that facilitates the incorporation of reputation information into markets of composed services while simultaneously preserving privacy of customers who provide feedback. The requirements we formulate in this paper have just been partially met in literature. An integrated approach, however, has not been addressed yet.}

}


Daniel Kaimann, Joe Cox:

**The Interaction of Signals: A Fuzzy set Analysis of the Video Game Industry**

Techreport UPB **(2014)**

Customers continuously evaluate the credibility and reliability of a range of signals both separately and jointly. However, existing econometric studies pay insufficient attention to the interactions and complex combinations of these signals, and are typically limited as a result of difficulties controlling for multicollinearity and endogeneity in their data. We develop a novel theoretical approach to address these issues and study different signaling effects (i.e., word-of-mouth, brand reputation, and distribution strategy) on customer perceptions. Using data on the US video games market, we apply a fuzzy set qualitative comparative analysis (fsQCA) to account for cause-effect relationships. The results of our study address a number of key issues in the economics and management literature. First, our results support the contention that reviews from professional critics act as a signal of product quality and therefore positively influence unit sales, as do the discriminatory effects of prices and restricted age ratings. Second, we find evidence to support the use of brand extension strategies as marketing tools that create spillover effects and support the launch of new products.

@techreport{ck14,

author = {Daniel Kaimann AND Joe Cox},

title = {The Interaction of Signals: A Fuzzy set Analysis of the Video Game Industry},

year = {2014},

type = {Techreport UPB},

abstract = {Customers continuously evaluate the credibility and reliability of a range of signals both separately and jointly. However, existing econometric studies pay insufficient attention to the interactions and complex combinations of these signals, and are typically limited as a result of difficulties controlling for multicollinearity and endogeneity in their data. We develop a novel theoretical approach to address these issues and study different signaling effects (i.e., word-of-mouth, brand reputation, and distribution strategy) on customer perceptions. Using data on the US video games market, we apply a fuzzy set qualitative comparative analysis (fsQCA) to account for cause-effect relationships. The results of our study address a number of key issues in the economics and management literature. First, our results support the contention that reviews from professional critics act as a signal of product quality and therefore positively influence unit sales, as do the discriminatory effects of prices and restricted age ratings. Second, we find evidence to support the use of brand extension strategies as marketing tools that create spillover effects and support the launch of new products.}

}


Lena Holzweißig:

**The Impact of Customer Reviews and Reputation on Hotel Prices**

Master's thesis, University of Paderborn **(2014)**

@mastersthesis{Holzweissig14,

author = {Lena Holzweißig},

title = {The Impact of Customer Reviews and Reputation on Hotel Prices},

school = {University of Paderborn},

year = {2014}

}


Friedrich Scheel:

**The Economics of Individual Behavior in Competitive Environments: Empirical Evidence from Real-Life Tournaments**

PhD thesis, University of Paderborn **(2014)**

@phdthesis{FSPhD2014,

author = {Friedrich Scheel},

title = {The Economics of Individual Behavior in Competitive Environments: Empirical Evidence from Real-Life Tournaments},

school = {University of Paderborn},

year = {2014}

}


Matthias Keller, Christoph Robbert, Holger Karl:

**Template Embedding: Using Application Architecture to Allocate Resources in Distributed Clouds**

In Proceedings of the 7th International Conference on Utility and Cloud Computing (UCC). IEEE/ACM, pp. 387-395 **(2014)**

In distributed cloud computing, application deployment across multiple sites can improve quality of service. Recent research developed algorithms to find optimal locations for virtual machines. However, those algorithms assume to have either single-tier applications or a fixed number of virtual machines – a strong simplification of reality. This paper investigates the placement and scaling of complex application architectures. An application is dynamically scaled to fit both the current demand situation and the currently available infrastructure resources. We compare two approaches: The first one is based on virtual network embedding. The second approach is a novel method called Template Embedding. It is based on a hierarchical 1-allocation hub flow problem and combines application scaling and embedding in one step. Extensive experiments on 43200 network configurations showed that Template Embedding outperforms virtual network embedding in all cases in three metrics: success rate, solution quality, and runtime. This positive result shows that template embedding is a promising approach for distributed cloud resource allocation.

@inproceedings{Keller2014b,

author = {Matthias Keller AND Christoph Robbert AND Holger Karl},

title = {Template Embedding: Using Application Architecture to Allocate Resources in Distributed Clouds},

booktitle = {Proceedings of 7th International Conference on Utility and Cloud Computing (UCC)},

year = {2014},

pages = {387--395},

publisher = {IEEE/ACM},

abstract = {In distributed cloud computing, application deployment across multiple sites can improve quality of service. Recent research developed algorithms to find optimal locations for virtual machines. However, those algorithms assume to have either single-tier applications or a fixed number of virtual machines – a strong simplification of reality. This paper investigates the placement and scaling of complex application architectures. An application is dynamically scaled to fit both the current demand situation and the currently available infrastructure resources. We compare two approaches: The first one is based on virtual network embedding. The second approach is a novel method called Template Embedding. It is based on a hierarchical 1-allocation hub flow problem and combines application scaling and embedding in one step. Extensive experiments on 43200 network configurations showed that Template Embedding outperforms virtual network embedding in all cases in three metrics: success rate, solution quality, and runtime. This positive result shows that template embedding is a promising approach for distributed cloud resource allocation.}

}


Laszlo Blazovics, Tamas Lukovszki, Bertalan Forstner:

**Surrounding robots - A discrete localized solution for the intruder problem**

In *Journal of Advanced Computational Intelligence and Intelligent Informatics*, vol. 18, no. 3, pp. 315-319. Fuji Technology Press **(2014)**

Decentralized algorithms are often used in the cooperative robotics field, especially by large swarm systems. We present a distributed algorithm for a problem in which a group of autonomous mobile robots must surround a given target. These robots are oblivious, i.e., they have no memory of the past. They use only local sensing and need no dedicated communication among themselves. We introduce, then solve the problem in which the group of autonomous mobile robots must surround a given target – we call it the “discrete multiorbit target surrounding problem” (DMTSP). We evaluate our solution using simulation and prove that our solution invariably ensures that robots enclose the target in finite time.

[Show BibTeX] @article{BLF2014,

author = {Laszlo Blazovics AND Tamas Lukovszki AND Bertalan Forstner},

title = {Surrounding robots -- A discrete localized solution for the intruder problem},

journal = {Journal of Advanced Computational Intelligence and Intelligent Informatics},

year = {2014},

volume = {18},

number = {3},

pages = {315--319},

abstract = {Decentralized algorithms are often used in the cooperative robotics field, especially by large swarm systems. We present a distributed algorithm for a problem in which a group of autonomous mobile robots must surround a given target. These robots are oblivious, i.e., they have no memory of the past. They use only local sensing and need no dedicated communication among themselves. We introduce, then solve the problem in which the group of autonomous mobile robots must surround a given target – we call it the “discrete multiorbit target surrounding problem” (DMTSP). We evaluate our solution using simulation and prove that our solution invariably ensures that robots enclose the target in finite time. }

}


Olga Ebel:

Bachelor thesis, University of Paderborn

[Show BibTeX]

**Struktur und algorithmische Bestimmung stabiler Matchings in one-to-one Matching Märkten**Bachelor thesis, University of Paderborn

**(2014)**[Show BibTeX]

@misc{Ebel14,

author = {Olga Ebel},

title = {Struktur und algorithmische Bestimmung stabiler Matchings in one-to-one Matching M{\"a}rkten},

year = {2014}

}


Nils Roehl:

PhD thesis, University of Paderborn

[Show BibTeX]

**Strategic and Cooperative Games in Network Economics**PhD thesis, University of Paderborn

**(2014)**[Show BibTeX]

@phdthesis{PhDRoehl,

author = {Nils Roehl},

title = {Strategic and Cooperative Games in Network Economics},

school = {University of Paderborn},

year = {2014}

}


Sevil Mehraghdam (married name: Dräxler), Matthias Keller, Holger Karl:

In Proceedings of the 3rd International Conference on Cloud Networking (CloudNet). IEEE, pp. 7-13

[Show Abstract]

**Specifying and Placing Chains of Virtual Network Functions**In Proceedings of the 3rd International Conference on Cloud Networking (CloudNet). IEEE, pp. 7-13

**(2014)**[Show Abstract]

Network appliances perform different functions on network flows and constitute an important part of an operator’s network. Normally, a set of chained network functions process network flows. Following the trend of virtualization of networks, virtualization of the network functions has also become a topic of interest. We define a model for formalizing the chaining of network functions using a context-free language. We process deployment requests and construct virtual network function graphs that can be mapped to the network. We describe the mapping as a Mixed Integer Quadratically Constrained Program (MIQCP) for finding the placement of the network functions and chaining them together considering the limited network resources and requirements of the functions. We have performed a Pareto set analysis to investigate the possible trade-offs between different optimization objectives.
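The paper solves the placement with an exact MIQCP; as a rough intuition for the underlying problem, the following is a toy greedy sketch (our illustration, not the paper's method) that maps an ordered chain of virtual network functions onto network nodes while respecting per-node capacity. Function names, demands, and capacities are illustrative assumptions.

```python
# Toy sketch of chain placement (NOT the paper's MIQCP): assign each
# function of an ordered chain to the node with the most free capacity.
# All names and numbers below are illustrative assumptions.

def place_chain(chain, capacities):
    """chain: [(function, demand)]; capacities: {node: free_capacity}.
    Returns {function: node}, or None if the chain does not fit."""
    placement = {}
    free = dict(capacities)
    for function, demand in chain:
        # Greedy heuristic: pick the node with the most remaining capacity.
        node = max(free, key=free.get)
        if free[node] < demand:
            return None  # even the emptiest node cannot host this function
        free[node] -= demand
        placement[function] = node
    return placement
```

Unlike this heuristic, the MIQCP formulation in the paper also optimizes the links chaining the functions together and can trade off several objectives.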

[Show BibTeX] @inproceedings{Mehr1410:Specifying,

author = {Sevil Mehraghdam (married name: Dr{\"a}xler) AND Matthias Keller AND Holger Karl},

title = {Specifying and Placing Chains of Virtual Network Functions},

booktitle = {Proceedings of the 3rd International Conference on Cloud Networking (CloudNet)},

year = {2014},

pages = {7-13},

publisher = {IEEE},

abstract = {Network appliances perform different functions on network flows and constitute an important part of an operator’s network. Normally, a set of chained network functions process network flows. Following the trend of virtualization of networks, virtualization of the network functions has also become a topic of interest. We define a model for formalizing the chaining of network functions using a context-free language. We process deployment requests and construct virtual network function graphs that can be mapped to the network. We describe the mapping as a Mixed Integer Quadratically Constrained Program (MIQCP) for finding the placement of the network functions and chaining them together considering the limited network resources and requirements of the functions. We have performed a Pareto set analysis to investigate the possible trade-offs between different optimization objectives. }

}


Daniel Roeske:

Bachelor thesis, University of Paderborn

[Show BibTeX]

**Simulating load-dependent operation of picocells**Bachelor thesis, University of Paderborn

**(2014)**[Show BibTeX]

@misc{Roeske2014,

author = {Daniel Roeske},

title = {Simulating load-dependent operation of picocells},

year = {2014}

}


Jens Janiuk, Alexander Mäcker, Kalman Graffi:

In Proceedings of the International Conference on Collaboration Technologies and Systems (CTS). IEEE Computer Society, pp. 396-405

[Show Abstract]

**Secure Distributed Data Structures for Peer-to-Peer-based Social Networks**In Proceedings of the International Conference on Collaboration Technologies and Systems (CTS). IEEE Computer Society, pp. 396-405

**(2014)**[Show Abstract]

Online social networks are attracting billions of users nowadays, both on a global scale as well as in social enterprise networks. Using distributed hash tables and peer-to-peer technology allows online social networks to be operated securely and efficiently only by using the resources of the user devices, thus alleviating censorship or data misuse by a single network operator. In this paper, we address the challenges that arise in implementing reliable and convenient-to-use distributed data structures, such as lists or sets, in such a distributed hash-table-based online social network. We present a secure, distributed list data structure that manages the list entries in several buckets in the distributed hash table. The list entries are authenticated, integrity is maintained and access control for single users and also groups is integrated. The approach for secure distributed lists is also applied for prefix trees and sets, and implemented and evaluated in a peer-to-peer framework for social networks. Evaluation shows that the distributed data structure is convenient and efficient to use and that the requirements on security hold.
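The core idea of managing list entries in several DHT buckets can be sketched in a few lines. This is our toy illustration, not the paper's implementation: a plain dict stands in for the DHT, keys are hashed bucket names, and the authentication and access-control layers the paper adds are omitted; bucket size and all names are assumptions.

```python
# Toy sketch of a list split across DHT buckets (illustrative only; the
# paper additionally authenticates entries and enforces access control).
# A plain dict stands in for the distributed hash table.

import hashlib

BUCKET_SIZE = 4  # entries per bucket before a new bucket is opened

def bucket_key(list_id, index):
    """DHT key under which the index-th bucket of a list is stored."""
    return hashlib.sha1(f"{list_id}/{index}".encode()).hexdigest()

def append(dht, list_id, entry):
    """Append an entry, opening a new bucket when the last one is full."""
    count = dht.get(list_id, 0)  # head record: number of buckets so far
    if count == 0:
        dht[list_id] = 1
        dht[bucket_key(list_id, 0)] = [entry]
        return
    last = dht[bucket_key(list_id, count - 1)]
    if len(last) < BUCKET_SIZE:
        last.append(entry)
    else:
        dht[list_id] = count + 1
        dht[bucket_key(list_id, count)] = [entry]

def read_all(dht, list_id):
    """Reassemble the list by fetching all buckets in order."""
    count = dht.get(list_id, 0)
    entries = []
    for i in range(count):
        entries.extend(dht[bucket_key(list_id, i)])
    return entries
```

Splitting the list into fixed-size buckets keeps each DHT value small and lets readers fetch buckets in parallel, which is the property the paper builds on.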

[Show BibTeX] @inproceedings{JaniukMaeckerGraffi14,

author = {Jens Janiuk AND Alexander M{\"a}cker AND Kalman Graffi},

title = {Secure Distributed Data Structures for Peer-to-Peer-based Social Networks},

booktitle = {Proceedings of the International Conference on Collaboration Technologies and Systems (CTS)},

year = {2014},

pages = {396-405},

publisher = {IEEE Computer Society},

abstract = {Online social networks are attracting billions of users nowadays, both on a global scale as well as in social enterprise networks. Using distributed hash tables and peer-to-peer technology allows online social networks to be operated securely and efficiently only by using the resources of the user devices, thus alleviating censorship or data misuse by a single network operator. In this paper, we address the challenges that arise in implementing reliable and convenient-to-use distributed data structures, such as lists or sets, in such a distributed hash-table-based online social network. We present a secure, distributed list data structure that manages the list entries in several buckets in the distributed hash table. The list entries are authenticated, integrity is maintained and access control for single users and also groups is integrated. The approach for secure distributed lists is also applied for prefix trees and sets, and implemented and evaluated in a peer-to-peer framework for social networks. Evaluation shows that the distributed data structure is convenient and efficient to use and that the requirements on security hold.}

}


Tobias Harks, Martin Höfer, Kevin Schewior, Alexander Skopalik:

In Proceedings of the 33rd Annual IEEE International Conference on Computer Communications (INFOCOM'14). IEEE, pp. 352-360

[Show Abstract]

**Routing Games with Progressive Filling**In Proceedings of the 33rd Annual IEEE International Conference on Computer Communications (INFOCOM'14). IEEE, pp. 352-360

**(2014)**[Show Abstract]

Max-min fairness (MMF) is a widely known approach to a fair allocation of bandwidth to each of the users in a network. This allocation can be computed by uniformly raising the bandwidths of all users without violating capacity constraints. We consider an extension of these allocations by raising the bandwidth with arbitrary and not necessarily uniform time-depending velocities (allocation rates). These allocations are used in a game-theoretic context for routing choices, which we formalize in progressive filling games (PFGs).

We present a variety of results for equilibria in PFGs. We show that these games possess pure Nash and strong equilibria. While computation in general is NP-hard, there are polynomial-time algorithms for prominent classes of Max-Min-Fair Games (MMFG), including the case when all users have the same source-destination pair. We characterize prices of anarchy and stability for pure Nash and strong equilibria in PFGs and MMFGs when players have different or the same source-destination pairs. In addition, we show that when a designer can adjust allocation rates, it is possible to design games with optimal strong equilibria. Some initial results on polynomial-time algorithms in this direction are also derived.
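The uniform baseline the paper generalizes, computing the max-min fair allocation by raising all bandwidths together, is the classic water-filling procedure. The following is a minimal sketch of that baseline (not of the paper's generalized allocation rates); link capacities, paths, and user names are illustrative assumptions.

```python
# Minimal sketch of uniform progressive filling (water-filling) for a
# max-min fair allocation: raise all active users' bandwidths together,
# freezing every user whose path crosses a link that saturates.
# Capacities, paths, and names are illustrative assumptions.

def max_min_fair(capacities, paths):
    """capacities: {link: capacity}; paths: {user: [links]}.
    Returns {user: bandwidth} under max-min fairness."""
    alloc = {u: 0.0 for u in paths}
    remaining = dict(capacities)
    active = set(paths)
    while active:
        # A link with k active users crossing it allows remaining/k more each;
        # the smallest such increment identifies the next bottleneck link.
        increments = []
        for link, cap in remaining.items():
            k = sum(1 for u in active if link in paths[u])
            if k > 0:
                increments.append((cap / k, link))
        if not increments:
            break
        step, bottleneck = min(increments)
        for u in active:
            alloc[u] += step
        # Deduct the consumed capacity on every link.
        for link in remaining:
            k = sum(1 for u in active if link in paths[u])
            remaining[link] -= step * k
        # Freeze users crossing the saturated bottleneck link.
        active = {u for u in active if bottleneck not in paths[u]}
    return alloc
```

Progressive filling games replace the uniform `step` here with per-user, time-dependent allocation rates, which is what makes the routing choice strategic.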

[Show BibTeX] @inproceedings{HHSS14,

author = {Tobias Harks AND Martin H{\"o}fer AND Kevin Schewior AND Alexander Skopalik},

title = {Routing Games with Progressive Filling},

booktitle = {Proceedings of the 33rd Annual IEEE International Conference on Computer Communications (INFOCOM'14)},

year = {2014},

pages = {352-360},

publisher = {IEEE},

abstract = {Max-min fairness (MMF) is a widely known approach to a fair allocation of bandwidth to each of the users in a network. This allocation can be computed by uniformly raising the bandwidths of all users without violating capacity constraints. We consider an extension of these allocations by raising the bandwidth with arbitrary and not necessarily uniform time-depending velocities (allocation rates). These allocations are used in a game-theoretic context for routing choices, which we formalize in progressive filling games (PFGs).We present a variety of results for equilibria in PFGs. We show that these games possess pure Nash and strong equilibria. While computation in general is NP-hard, there are polynomial-time algorithms for prominent classes of Max-Min-Fair Games (MMFG), including the case when all users have the same source-destination pair. We characterize prices of anarchy and stability for pure Nash and strong equilibria in PFGs and MMFGs when players have different or the same source-destination pairs. In addition, we show that when a designer can adjust allocation rates, it is possible to design games with optimal strong equilibria. Some initial results on polynomial-time algorithms in this direction are also derived. }

}


Matthias Keller, Holger Karl:

In Proceedings of the SIGCOMM workshop on Distributed cloud computing. ACM, pp. 47-52

[Show Abstract]

**Response Time-Optimized Distributed Cloud Resource Allocation**In Proceedings of the SIGCOMM workshop on Distributed cloud computing. ACM, pp. 47-52

**(2014)**[Show Abstract]

In the near future many more compute resources will be available at different geographical locations. To minimize the response time of requests, application servers closer to the user can hence be used to shorten network round trip times. However, this advantage is neutralized if the used data centre is highly loaded as the processing time of requests is important as well. We model the request response time as the network round trip time plus the processing time at a data centre.

We present a capacitated facility location problem formalization where the processing time is modelled as the sojourn time of a queueing model. We discuss the Pareto trade-off between the number of used data centres and the resulting response time. For example, using fewer data centres could cut expenses but results in high utilization, high response time, and smaller revenues.

Previous work presented a non-linear cost function. We prove its convexity and exploit this property in two ways: First, we transform the convex model into a linear model while controlling the maximum approximation error. Second, we used a convex solver instead of a slower non-linear solver. Numerical results on network topologies exemplify our work.
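The trade-off the abstract describes can be made concrete with a small worked example. Assuming (our illustration, not necessarily the paper's exact queueing model) an M/M/1 queue at each data centre, the mean sojourn time is W = 1/(μ − λ), so the response time is the round trip time plus W; all names and numbers below are made up.

```python
# Illustrative sketch (our assumption of an M/M/1 queue, not necessarily
# the paper's model): response time = network RTT + sojourn time, with
# mean M/M/1 sojourn time W = 1 / (mu - lambda). Names/numbers are made up.

def sojourn_time(service_rate, arrival_rate):
    """Mean M/M/1 sojourn time; requires arrival_rate < service_rate."""
    if arrival_rate >= service_rate:
        return float("inf")  # overloaded data centre
    return 1.0 / (service_rate - arrival_rate)

def best_data_centre(rtts, service_rates, arrival_rate):
    """Pick the data centre minimizing RTT + sojourn time for a given load."""
    return min(
        rtts,
        key=lambda dc: rtts[dc] + sojourn_time(service_rates[dc], arrival_rate),
    )
```

With a nearby but nearly saturated centre and a distant but lightly loaded one, the distant centre can win, which is exactly why the paper optimizes network and processing delay jointly.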

[Show BibTeX] @inproceedings{Keller2014a,

author = {Matthias Keller AND Holger Karl},

title = {Response Time-Optimized Distributed Cloud Resource Allocation},

booktitle = {Proceedings of the SIGCOMM workshop on Distributed cloud computing},

year = {2014},

pages = {47--52},

publisher = {ACM},

abstract = {In the near future many more compute resources will be available at different geographical locations. To minimize the response time of requests, application servers closer to the user can hence be used to shorten network round trip times. However, this advantage is neutralized if the used data centre is highly loaded as the processing time of requests is important as well. We model the request response time as the network round trip time plus the processing time at a data centre. We present a capacitated facility location problem formalization where the processing time is modelled as the sojourn time of a queueing model. We discuss the Pareto trade-off between the number of used data centres and the resulting response time. For example, using fewer data centres could cut expenses but results in high utilization, high response time, and smaller revenues. Previous work presented a non-linear cost function. We prove its convexity and exploit this property in two ways: First, we transform the convex model into a linear model while controlling the maximum approximation error. Second, we used a convex solver instead of a slower non-linear solver. Numerical results on network topologies exemplify our work.}

}


David Pahl:

Bachelor thesis, University of Paderborn

[Show BibTeX]

**Reputationssysteme für zusammengesetzte Dienstleistungen**Bachelor thesis, University of Paderborn

**(2014)**[Show BibTeX]

@misc{Pahl14,

author = {David Pahl},

title = {Reputationssysteme f{\"u}r zusammengesetzte Dienstleistungen},

year = {2014}

}


Matthias Herlich:

PhD thesis, University of Paderborn

[Show Abstract]

**Reducing Energy Consumption of Radio Access Networks**PhD thesis, University of Paderborn

**(2014)**[Show Abstract]

Radio access networks (RANs) have become one of the largest energy consumers of communication technology [LLH+13] and their energy consumption is predicted to increase [FFMB11]. To reduce the energy consumption of RANs different techniques have been proposed. One of the most promising techniques is the use of a low-power sleep mode. However, a sleep mode can also reduce the performance. In this dissertation, I quantify how much energy can be conserved with a sleep mode and which negative effects it has on the performance of RANs. Additionally, I analyze how a sleep mode can be enabled more often and how the performance can be kept high. First, I quantify the effect of power-cycle durations on energy consumption and latency in an abstract queuing system. This results in a trade-off between energy consumption and latency for a single base station (BS). Second, I show that considering a network as a whole (instead of each BS individually) allows the energy consumption to be reduced even further. After these analyses, which are not specific for RANs, I study RANs for the rest of the dissertation. RANs need to both detect and execute the requests of users. Because detection and execution of requests have different requirements, I analyze them independently. I quantify how the number of active BSs can be reduced if the detection ranges of BSs are increased by cooperative transmissions. Next, I analyze how more BSs can be deactivated if the remaining active BSs cooperate to transmit data to the users. However, in addition to increasing the range, cooperative transmissions also radiate more power. This results in higher interference for other users which slows their transmissions down and, thus, increases energy consumption. Therefore, I describe how the radiated power of cooperative transmissions can be reduced if instantaneous channel knowledge is available. 
Because the implementation in real hardware is impractical for demonstration purposes, I show the results of a simulation that incorporates all effects I studied analytically earlier. In conclusion, I show that a sleep mode can reduce the energy consumption of RANs if applied correctly. To apply a sleep mode correctly, it is necessary to consider power-cycle durations, power profiles, and the interaction of BSs. When this knowledge is combined the energy consumption of RANs can be reduced with only a slight loss of performance. Because this results in a trade-off between energy consumption and performance, each RAN operator has to decide which trade-off is preferred.

[Show BibTeX] @phdthesis{HerlichPhD,

author = {Matthias Herlich},

title = {Reducing Energy Consumption of Radio Access Networks},

school = {University of Paderborn},

year = {2014},

abstract = {Radio access networks (RANs) have become one of the largest energy consumers of communication technology [LLH+13] and their energy consumption is predicted to increase [FFMB11]. To reduce the energy consumption of RANs different techniques have been proposed. One of the most promising techniques is the use of a low-power sleep mode. However, a sleep mode can also reduce the performance. In this dissertation, I quantify how much energy can be conserved with a sleep mode and which negative effects it has on the performance of RANs. Additionally, I analyze how a sleep mode can be enabled more often and how the performance can be kept high. First, I quantify the effect of power-cycle durations on energy consumption and latency in an abstract queuing system. This results in a trade-off between energy consumption and latency for a single base station (BS). Second, I show that considering a network as a whole (instead of each BS individually) allows the energy consumption to be reduced even further. After these analyses, which are not specific for RANs, I study RANs for the rest of the dissertation. RANs need to both detect and execute the requests of users. Because detection and execution of requests have different requirements, I analyze them independently. I quantify how the number of active BSs can be reduced if the detection ranges of BSs are increased by cooperative transmissions. Next, I analyze how more BSs can be deactivated if the remaining active BSs cooperate to transmit data to the users. However, in addition to increasing the range, cooperative transmissions also radiate more power. This results in higher interference for other users which slows their transmissions down and, thus, increases energy consumption. Therefore, I describe how the radiated power of cooperative transmissions can be reduced if instantaneous channel knowledge is available. 
Because the implementation in real hardware is impractical for demonstration purposes, I show the results of a simulation that incorporates all effects I studied analytically earlier. In conclusion, I show that a sleep mode can reduce the energy consumption of RANs if applied correctly. To apply a sleep mode correctly, it is necessary to consider power-cycle durations, power profiles, and the interaction of BSs. When this knowledge is combined the energy consumption of RANs can be reduced with only a slight loss of performance. Because this results in a trade-off between energy consumption and performance, each RAN operator has to decide which trade-off is preferred.}

}

author = {Matthias Herlich},

title = {Reducing Energy Consumption of Radio Access Networks},

school = {University of Paderborn},

year = {2014},

abstract = {Radio access networks (RANs) have become one of the largest energy consumers of communication technology [LLH+13] and their energy consumption is predicted to increase [FFMB11]. To reduce the energy consumption of RANs different techniques have been proposed. One of the most promising techniques is the use of a low-power sleep mode. However, a sleep mode can also reduce the performance. In this dissertation, I quantify how much energy can be conserved with a sleep mode and which negative effects it has on the performance of RANs. Additionally, I analyze how a sleep mode can be enabled more often and how the performance can be kept high. First, I quantify the effect of power-cycle durations on energy consumption and latency in an abstract queuing system. This results in a trade-off between energy consumption and latency for a single base station (BS). Second, I show that considering a network as a whole (instead of each BS individually) allows the energy consumption to be reduced even further. After these analyses, which are not specific for RANs, I study RANs for the rest of the dissertation. RANs need to both detect and execute the requests of users. Because detection and execution of requests have different requirements, I analyze them independently. I quantify how the number of active BSs can be reduced if the detection ranges of BSs are increased by cooperative transmissions. Next, I analyze how more BSs can be deactivated if the remaining active BSs cooperate to transmit data to the users. However, in addition to increasing the range, cooperative transmissions also radiate more power. This results in higher interference for other users which slows their transmissions down and, thus, increases energy consumption. Therefore, I describe how the radiated power of cooperative transmissions can be reduced if instantaneous channel knowledge is available. 
Because the implementation in real hardware is impractical for demonstration purposes, I show the results of a simulation that incorporates all effects I studied analytically earlier. In conclusion, I show that a sleep mode can reduce the energy consumption of RANs if applied correctly. To apply a sleep mode correctly, it is necessary to consider power-cycle durations, power profiles, and the interaction of BSs. When this knowledge is combined the energy consumption of RANs can be reduced with only a slight loss of performance. Because this results in a trade-off between energy consumption and performance, each RAN operator has to decide which trade-off is preferred.}

}
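The dissertation's first step, trading energy against latency via power-cycle durations, can be illustrated with a toy simulation. This is our own construction, not the model from the thesis; all power levels, delays, and the timeout policy are made-up assumptions.

```python
# Toy base-station sleep-mode model (illustrative only): the BS goes to
# sleep after `idle_timeout` of idleness and needs `wake_delay` to resume.
# Arrivals must be sorted; jobs are served in order, one at a time.

def simulate(arrivals, service=1.0, idle_timeout=2.0,
             p_active=10.0, p_sleep=1.0, wake_delay=1.5, horizon=20.0):
    """Return (total energy over the horizon, mean request latency)."""
    t = 0.0          # time at which the BS becomes free
    energy = 0.0
    last = 0.0       # last time point already accounted for in `energy`
    latencies = []
    for a in arrivals:
        if a > t + idle_timeout:
            # active until the timeout fires, asleep until the arrival
            energy += (t + idle_timeout - last) * p_active
            energy += (a - (t + idle_timeout)) * p_sleep
            last = a
            start = a + wake_delay       # waking up delays the request
        else:
            start = max(t, a)
        t = start + service
        latencies.append(t - a)
    energy += (horizon - last) * p_active  # active for the rest of the horizon
    return energy, sum(latencies) / len(latencies)
```

Comparing a short timeout with an effectively disabled one (a very large `idle_timeout`) exhibits exactly the trade-off the abstract describes: less energy, higher latency.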

Sebastian Kniesburges, Andreas Koutsopoulos, Christian Scheideler:

In *Theory of Computing Systems*, vol. 55, no. 3, pp. 591-612. Springer

[Show Abstract]


**Re-Chord: A Self-stabilizing Chord Overlay Network**In

*Theory of Computing Systems*, vol. 55, no. 3, pp. 591-612. Springer**(2014)**[Show Abstract]

The Chord peer-to-peer system is considered, together with CAN, Tapestry and Pastry, as one of the pioneering works on peer-to-peer distributed hash tables (DHT) that inspired a large volume of papers and projects on DHTs as well as peer-to-peer systems in general. Chord, in particular, has been studied thoroughly, and many variants of Chord have been presented that optimize various criteria. Also, several implementations of Chord are available on various platforms. Though Chord is known to be very efficient and scalable and it can handle churn quite well, no protocol is known yet that guarantees that Chord is self-stabilizing, i.e., the Chord network can be recovered from any initial state in which the network is still weakly connected. This is not too surprising since it is known that the Chord network is not locally checkable for its current topology. We present a slight extension of the Chord network, called Re-Chord (reactive Chord), that turns out to be locally checkable, and we present a self-stabilizing distributed protocol for it that can recover the Re-Chord network from any initial state, in which the n peers are weakly connected, in O(n log n) communication rounds. We also show that our protocol allows a new peer to join or an old peer to leave an already stable Re-Chord network so that within O((log n)^2) communication rounds the Re-Chord network is stable again.

[Show BibTeX] @article{RECHORDjournal,

author = {Sebastian Kniesburges AND Andreas Koutsopoulos AND Christian Scheideler},

title = {Re-Chord: A Self-stabilizing Chord Overlay Network},

journal = {Theory of Computing Systems},

year = {2014},

volume = {55},

number = {3},

pages = {591-612},

abstract = {The Chord peer-to-peer system is considered, together with CAN, Tapestry and Pastry, as one of the pioneering works on peer-to-peer distributed hash tables (DHT) that inspired a large volume of papers and projects on DHTs as well as peer-to-peer systems in general. Chord, in particular, has been studied thoroughly, and many variants of Chord have been presented that optimize various criteria. Also, several implementations of Chord are available on various platforms. Though Chord is known to be very efficient and scalable and it can handle churn quite well, no protocol is known yet that guarantees that Chord is self-stabilizing, i.e., the Chord network can be recovered from any initial state in which the network is still weakly connected. This is not too surprising since it is known that the Chord network is not locally checkable for its current topology. We present a slight extension of the Chord network, called Re-Chord (reactive Chord), that turns out to be locally checkable, and we present a self-stabilizing distributed protocol for it that can recover the Re-Chord network from any initial state, in which the n peers are weakly connected, in O(n log n) communication rounds. We also show that our protocol allows a new peer to join or an old peer to leave an already stable Re-Chord network so that within O((log n)^2) communication rounds the Re-Chord network is stable again.}

}
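As background for Re-Chord, here is a minimal sketch of plain Chord-style finger routing on an identifier ring. This is ordinary Chord lookup, not the Re-Chord protocol or its self-stabilization mechanism; the ring size and node IDs are arbitrary choices of ours.

```python
# Plain Chord-style finger routing on a 2**M identifier ring (sketch).

M = 8                       # identifier bits; ring size 2**M (toy choice)

def in_interval(x, a, b):
    """x in the half-open ring interval (a, b] modulo 2**M."""
    a, b, x = a % 2**M, b % 2**M, x % 2**M
    if a < b:
        return a < x <= b
    return x > a or x <= b

def build_fingers(nodes):
    """Finger k of node n is the successor of n + 2**k on the ring."""
    nodes = sorted(nodes)
    def successor(i):
        i %= 2**M
        for n in nodes:
            if n >= i:
                return n
        return nodes[0]     # wrap around
    return {n: [successor(n + 2**k) for k in range(M)] for n in nodes}

def lookup(fingers, start, key):
    """Greedy finger routing; returns (responsible node, hop count)."""
    cur, hops = start, 0
    while not in_interval(key, cur, fingers[cur][0]):
        nxt = cur
        for f in reversed(fingers[cur]):     # closest preceding finger
            if in_interval(f, cur, key - 1):
                nxt = f
                break
        if nxt == cur:                       # fall back to the successor
            nxt = fingers[cur][0]
        cur, hops = nxt, hops + 1
    return fingers[cur][0], hops
```

On a well-formed ring this needs O(log n) hops per lookup, which is the baseline efficiency the paper's self-stabilizing extension preserves.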


Sebastian Abshoff, Christine Markarian, Friedhelm Meyer auf der Heide:

In Proceedings of the 8th Annual International Conference on Combinatorial Optimization and Applications (COCOA). Springer, LNCS, vol. 8881, pp. 25-34

[Show Abstract]

**Randomized Online Algorithms for Set Cover Leasing Problems**In Proceedings of the 8th Annual International Conference on Combinatorial Optimization and Applications (COCOA). Springer, LNCS, vol. 8881, pp. 25-34

**(2014)**[Show Abstract]

In the leasing variant of Set Cover presented by Anthony et al. [1], elements U arrive over time and must be covered by sets from a family F of subsets of U. Each set can be leased for K different periods of time. Let |U| = n and |F| = m. Leasing a set S for a period k incurs a cost c_S^k and allows S to cover its elements for the next l_k time steps. The objective is to minimize the total cost of the sets leased, such that elements arriving at any time t are covered by sets which contain them and are leased during time t. Anthony et al. [1] gave an optimal O(log n)-approximation for the problem in the offline setting, unless P = NP [22]. In this paper, we give randomized algorithms for variants of Set Cover Leasing in the online setting, including a generalization of Online Set Cover with Repetitions presented by Alon et al. [2], where elements appear multiple times and must be covered by a different set at each arrival. Our results improve the O(log^2(mn)) competitive factor of Online Set Cover with Repetitions [2] to O(log d log(dn)) = O(log m log(mn)), where d is the maximum number of sets an element belongs to.

[Show BibTeX] @inproceedings{AMM2014,

author = {Sebastian Abshoff AND Christine Markarian AND Friedhelm Meyer auf der Heide},

title = {Randomized Online Algorithms for Set Cover Leasing Problems},

booktitle = {Proceedings of the 8th Annual International Conference on Combinatorial Optimization and Applications (COCOA)},

year = {2014},

pages = {25-34},

publisher = {Springer},

abstract = {In the leasing variant of Set Cover presented by Anthony et al. [1], elements U arrive over time and must be covered by sets from a family F of subsets of U. Each set can be leased for K different periods of time. Let |U| = n and |F| = m. Leasing a set S for a period k incurs a cost c_S^k and allows S to cover its elements for the next l_k time steps. The objective is to minimize the total cost of the sets leased, such that elements arriving at any time t are covered by sets which contain them and are leased during time t. Anthony et al. [1] gave an optimal O(log n)-approximation for the problem in the offline setting, unless P = NP [22]. In this paper, we give randomized algorithms for variants of Set Cover Leasing in the online setting, including a generalization of Online Set Cover with Repetitions presented by Alon et al. [2], where elements appear multiple times and must be covered by a different set at each arrival. Our results improve the O(log^2(mn)) competitive factor of Online Set Cover with Repetitions [2] to O(log d log(dn)) = O(log m log(mn)), where d is the maximum number of sets an element belongs to.},

series = {LNCS},
volume = {8881}

}
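The offline O(log n)-approximation referenced in the abstract is the classic greedy set cover heuristic; below is a minimal sketch of it without the leasing/time dimension. The dictionary layout and names are our own, purely for illustration.

```python
# Greedy set cover: repeatedly pick the set with the lowest cost per
# newly covered element. Assumes the family can cover the universe.

def greedy_set_cover(universe, family):
    """family: dict name -> (set of elements, cost). Returns (names, cost)."""
    uncovered = set(universe)
    chosen, total = [], 0.0
    while uncovered:
        name, (s, c) = min(
            ((n, sc) for n, sc in family.items() if sc[0] & uncovered),
            key=lambda item: item[1][1] / len(item[1][0] & uncovered),
        )
        chosen.append(name)
        total += c
        uncovered -= s
    return chosen, total
```

The leasing variant additionally has to decide *when* and *for how long* to lease each chosen set, which is where the online algorithms of the paper come in.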


Andreas Cord-Landwehr, Alexander Mäcker, Friedhelm Meyer auf der Heide:

In Proceedings of the 10th International Conference on Web and Internet Economics (WINE). Springer International Publishing Switzerland, LNCS, vol. 8877, pp. 423-428

[Show Abstract]

**Quality of Service in Network Creation Games**In Proceedings of the 10th International Conference on Web and Internet Economics (WINE). Springer International Publishing Switzerland, LNCS, vol. 8877, pp. 423-428

**(2014)**[Show Abstract]

Network creation games model the creation and usage costs of networks formed by n selfish nodes. Each node v can buy a set of edges, each for a fixed price α > 0. Its goal is to minimize its private costs, i.e., the sum (SUM-game, Fabrikant et al., PODC 2003) or maximum (MAX-game, Demaine et al., PODC 2007) of distances from v to all other nodes plus the prices of the bought edges. The above papers show the existence of Nash equilibria as well as upper and lower bounds for the prices of anarchy and stability. In several subsequent papers, these bounds were improved for a wide range of prices α. In this paper, we extend these models by incorporating quality-of-service aspects: an edge is no longer only available at a fixed quality (edge length one) for a fixed price α. Instead, we assume that quality levels (i.e., edge lengths) vary within a fixed interval [β̌, β̂], 0 < β̌ ≤ β̂. A node can now not only choose which edge to buy, but can also choose its quality x, for the price p(x), for a given price function p. For both games and all price functions, we show that Nash equilibria exist and that the price of stability is either constant or depends only on the interval size of available edge lengths. Our main results are bounds for the price of anarchy. In case of the SUM-game, we show that they are tight if price functions decrease sufficiently fast.

[Show BibTeX] @inproceedings{wine2014qosncg,

author = {Andreas Cord-Landwehr AND Alexander M{\"a}cker AND Friedhelm Meyer auf der Heide},

title = {Quality of Service in Network Creation Games},

booktitle = {Proceedings of the 10th International Conference on Web and Internet Economics (WINE)},

year = {2014},

pages = {423-428},

publisher = {Springer International Publishing Switzerland},

abstract = {Network creation games model the creation and usage costs of networks formed by n selfish nodes. Each node v can buy a set of edges, each for a fixed price α > 0. Its goal is to minimize its private costs, i.e., the sum (SUM-game, Fabrikant et al., PODC 2003) or maximum (MAX-game, Demaine et al., PODC 2007) of distances from v to all other nodes plus the prices of the bought edges. The above papers show the existence of Nash equilibria as well as upper and lower bounds for the prices of anarchy and stability. In several subsequent papers, these bounds were improved for a wide range of prices α. In this paper, we extend these models by incorporating quality-of-service aspects: an edge is no longer only available at a fixed quality (edge length one) for a fixed price α. Instead, we assume that quality levels (i.e., edge lengths) vary within a fixed interval [β̌, β̂], 0 < β̌ ≤ β̂. A node can now not only choose which edge to buy, but can also choose its quality x, for the price p(x), for a given price function p. For both games and all price functions, we show that Nash equilibria exist and that the price of stability is either constant or depends only on the interval size of available edge lengths. Our main results are bounds for the price of anarchy. In case of the SUM-game, we show that they are tight if price functions decrease sufficiently fast.},

series = {LNCS},
volume = {8877}

}
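A node's private cost in the SUM-game can be evaluated directly: the sum of shortest-path distances over the chosen edge lengths, plus the price p(x) of every edge the node bought. The sketch below uses our own graph encoding and an illustrative price function; neither is from the paper, and a connected graph is assumed.

```python
# Evaluate one node's private cost in the SUM-game with edge qualities.

import heapq

def dijkstra(adj, src):
    """Standard Dijkstra over an adjacency dict node -> [(nbr, length)]."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def private_cost(nodes, edges, v, price):
    """edges: dict (buyer, target) -> quality x (usable in both directions);
    price: callable x -> p(x). Assumes the resulting graph is connected."""
    adj = {}
    for (a, b), x in edges.items():
        adj.setdefault(a, []).append((b, x))
        adj.setdefault(b, []).append((a, x))
    dist = dijkstra(adj, v)
    travel = sum(dist[u] for u in nodes if u != v)
    bought = sum(price(x) for (buyer, _), x in edges.items() if buyer == v)
    return travel + bought
```

With a decreasing price function such as p(x) = 2/x, buying a shorter (higher-quality) edge costs more but lowers the distance term, which is exactly the tension the paper analyzes.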


Sonja Brangewitz, Behnud Djawadi, Rene Fahr, Claus-Jochen Haake:

Techreport UPB.

[Show Abstract]

**Quality Choices and Reputation Systems in Online Markets - An Experimental Study**Techreport UPB.

**(2014)**[Show Abstract]

In internet transactions where customers and service providers often interact once and anonymously, a reputation system is particularly important to reduce information asymmetries about product quality. In this study we experimentally examine the impact of the customers' evaluation abilities on strategic quality choices of a service provider. Our study is motivated by a simple theoretical model where short-lived customers are asked to evaluate the observed quality of the service provider's product by providing ratings to a reputation system. A reputation profile informs about the ratings of the last three sales. This profile gives new customers an indicator for the quality they have to expect and determines the sales price of the product. From the theoretical model we derive that the service provider's dichotomous quality decisions are independent of the reputation profile and depend only on the probabilities of receiving positive and negative ratings when providing low or high quality. However, when mapping our theoretical model to an experimental design we find that subjects in the role of the service provider deviate from optimal behavior and choose actions which are conditional on the current reputation profile. In addition, increasing the probability of a negative rating and decreasing the probability of a positive rating both do not affect strategic quality choices.

[Show BibTeX] @techreport{BSDBFRHCJ2014,

author = {Sonja Brangewitz AND Behnud Djawadi AND Rene Fahr AND Claus-Jochen Haake},

title = {Quality Choices and Reputation Systems in Online Markets - An Experimental Study},

year = {2014},

type = {Techreport UPB},

abstract = {In internet transactions where customers and service providers often interact once and anonymously, a reputation system is particularly important to reduce information asymmetries about product quality. In this study we experimentally examine the impact of the customers' evaluation abilities on strategic quality choices of a service provider. Our study is motivated by a simple theoretical model where short-lived customers are asked to evaluate the observed quality of the service provider's product by providing ratings to a reputation system. A reputation profile informs about the ratings of the last three sales. This profile gives new customers an indicator for the quality they have to expect and determines the sales price of the product. From the theoretical model we derive that the service provider's dichotomous quality decisions are independent of the reputation profile and depend only on the probabilities of receiving positive and negative ratings when providing low or high quality. However, when mapping our theoretical model to an experimental design we find that subjects in the role of the service provider deviate from optimal behavior and choose actions which are conditional on the current reputation profile. In addition, increasing the probability of a negative rating and decreasing the probability of a positive rating both do not affect strategic quality choices.}

}
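A toy rendition of the setting described above (our own simplification, not the experimental design): the reputation profile holds the last three ratings, the sale price grows with the number of positive ratings in the profile, and for simplicity the provider here always chooses high quality.

```python
# Toy reputation-system market: the profile is the last three ratings
# (1 = positive), and the price is base + bonus per positive rating.
# All parameter values are made-up.

from collections import deque
import random

def run_market(rounds, p_pos_high=0.9,
               base=10.0, bonus=2.0, high_cost=4.0, seed=0):
    """Provider always supplies high quality; returns mean profit per sale."""
    rng = random.Random(seed)
    profile = deque([1, 1, 1], maxlen=3)   # last three ratings
    profit = 0.0
    for _ in range(rounds):
        price = base + bonus * sum(profile)
        profit += price - high_cost
        # a high-quality sale receives a positive rating with prob. p_pos_high
        profile.append(1 if rng.random() < p_pos_high else 0)
    return profit / rounds
```

Varying the rating probability shows how the customers' evaluation ability feeds back into the provider's payoff, which is the lever the experiment manipulates.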


Jörn Künsemöller, Sonja Brangewitz, Holger Karl, Claus-Jochen Haake:

In Proceedings of the 2014 IEEE International Conference on Services Computing (SCC). IEEE Computer Society, pp. 203-210

[Show Abstract]

**Provider Competition in Infrastructure-as-a-Service**In Proceedings of the 2014 IEEE International Conference on Services Computing (SCC). IEEE Computer Society, pp. 203-210

**(2014)**[Show Abstract]

This paper explores how cloud provider competition influences instance pricing in an IaaS (Infrastructure-as-a-Service) market. When reserved instance pricing includes an on-demand price component in addition to a reservation fee (two-part tariffs), different providers might offer different price combinations, where the client’s choice depends on its load profile. We investigate a duopoly of providers and analyze stable market prices in two-part tariffs. Further, we study offers that allow a specified amount of included usage (three-part tariffs). Neither two-part nor three-part tariffs produce an equilibrium market outcome other than a service pricing that equals production cost, i.e., complex price structures do not significantly affect the results from ordinary Bertrand competition.

[Show BibTeX] @inproceedings{KKBH-2014,

author = {J{\"o}rn K{\"u}nsem{\"o}ller AND Sonja Brangewitz AND Holger Karl AND Claus-Jochen Haake},

title = {Provider Competition in Infrastructure-as-a-Service},

booktitle = {Proceedings of the 2014 IEEE International Conference on Services Computing (SCC)},

year = {2014},

pages = {203-210},

publisher = {IEEE Computer Society},

month = {June},

abstract = {This paper explores how cloud provider competition influences instance pricing in an IaaS (Infrastructure-as-a-Service) market. When reserved instance pricing includes an on-demand price component in addition to a reservation fee (two-part tariffs), different providers might offer different price combinations, where the client’s choice depends on its load profile. We investigate a duopoly of providers and analyze stable market prices in two-part tariffs. Further, we study offers that allow a specified amount of included usage (three-part tariffs). Neither two-part nor three-part tariffs produce an equilibrium market outcome other than a service pricing that equals production cost, i.e., complex price structures do not significantly affect the results from ordinary Bertrand competition.}

}
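The two tariff shapes compared in the paper are easy to state as cost functions of a client's usage; here is a sketch in our own notation (all fees, unit prices, and allowances below are made-up examples, not values from the paper).

```python
# Client cost under the two tariff shapes, and the resulting provider choice.

def two_part(fee, unit, usage):
    """Reservation fee plus a per-unit on-demand price."""
    return fee + unit * usage

def three_part(fee, included, unit, usage):
    """Like two-part, but `included` units of usage are free."""
    return fee + unit * max(0.0, usage - included)

def best_offer(offers, usage):
    """offers: dict name -> cost function of usage; pick the cheapest."""
    return min(offers, key=lambda name: offers[name](usage))
```

As in the paper's setting, which provider wins depends on the client's load profile: light users prefer a low fee, heavy users a large allowance.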


Björn Feldkord:

Master's thesis, University of Paderborn

[Show BibTeX]

**On Variants of the Page Migration Problem**Master's thesis, University of Paderborn

**(2014)**[Show BibTeX]

@mastersthesis{msc2014Feldkord,

author = {Bj{\"o}rn Feldkord},

title = {On Variants of the Page Migration Problem},

school = {University of Paderborn},

year = {2014}

}


Dianne Foreback, Andreas Koutsopoulos, Mikhail Nesterenko, Christian Scheideler, Thim Strothmann:

In Proceedings of the 16th International Symposium on Stabilization, Safety, and Security of Distributed Systems. Springer, LNCS, vol. 8756, pp. 48-62

[Show Abstract]

**On Stabilizing Departures in Overlay Networks**In Proceedings of the 16th International Symposium on Stabilization, Safety, and Security of Distributed Systems. Springer, LNCS, vol. 8756, pp. 48-62

**(2014)**[Show Abstract]

A fundamental problem for peer-to-peer systems is to maintain connectivity while nodes are leaving, i.e., the nodes requesting to leave the peer-to-peer system are excluded from the overlay network without affecting its connectivity. There are a number of studies for safe node exclusion if the overlay is in a well-defined state initially. Surprisingly, the problem has not yet been formally studied for the case in which the overlay network is in an arbitrary initial state, i.e., when looking for a self-stabilizing solution for excluding leaving nodes. We study this problem in two variants: the Finite Departure Problem (FDP) and the Finite Sleep Problem (FSP). In the FDP the leaving nodes have to irrevocably decide when it is safe to leave the network, whereas in the FSP, this leaving decision does not have to be final: the nodes may resume computation if necessary. We show that there is no self-stabilizing distributed algorithm for the FDP, even in a synchronous message passing model. To allow a solution, we introduce an oracle called NIDEC and show that it is sufficient even for the asynchronous message passing model by proposing an algorithm that can solve the FDP using NIDEC. We also show that a solution to the FSP does not require an oracle.

[Show BibTeX] @inproceedings{ForebackKNSS14,

author = {Dianne Foreback AND Andreas Koutsopoulos AND Mikhail Nesterenko AND Christian Scheideler AND Thim Strothmann},

title = {On Stabilizing Departures in Overlay Networks},

booktitle = {Proceedings of the 16th International Symposium on Stabilization, Safety, and Security of Distributed Systems},

year = {2014},

pages = {48--62},

publisher = {Springer},

abstract = {A fundamental problem for peer-to-peer systems is to maintain connectivity while nodes are leaving, i.e., the nodes requesting to leave the peer-to-peer system are excluded from the overlay network without affecting its connectivity. There are a number of studies for safe node exclusion if the overlay is in a well-defined state initially. Surprisingly, the problem has not yet been formally studied for the case in which the overlay network is in an arbitrary initial state, i.e., when looking for a self-stabilizing solution for excluding leaving nodes. We study this problem in two variants: the Finite Departure Problem (FDP) and the Finite Sleep Problem (FSP). In the FDP the leaving nodes have to irrevocably decide when it is safe to leave the network, whereas in the FSP, this leaving decision does not have to be final: the nodes may resume computation if necessary. We show that there is no self-stabilizing distributed algorithm for the FDP, even in a synchronous message passing model. To allow a solution, we introduce an oracle called NIDEC and show that it is sufficient even for the asynchronous message passing model by proposing an algorithm that can solve the FDP using NIDEC. We also show that a solution to the FSP does not require an oracle.},

series = {LNCS},
volume = {8756}

}
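The safety condition behind the departure problem, that excluded nodes must not disconnect the remaining overlay, can be checked as weak connectivity (reachability ignoring edge direction). A sketch with our own graph encoding; this is only the condition, not the paper's self-stabilizing protocol or oracle.

```python
# Is the overlay still weakly connected after a set of nodes departs?

def weakly_connected(nodes, edges, leaving=()):
    """nodes: iterable; edges: iterable of directed (u, v) pairs.
    Nodes in `leaving` (and their edges) are removed before the check."""
    alive = set(nodes) - set(leaving)
    if not alive:
        return True
    adj = {u: set() for u in alive}
    for u, v in edges:
        if u in alive and v in alive:
            adj[u].add(v)
            adj[v].add(u)          # ignore edge direction
    seen, stack = set(), [next(iter(alive))]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(adj[u] - seen)
    return seen == alive
```

The paper's impossibility result says, roughly, that no self-stabilizing protocol can let nodes *locally* and irrevocably certify this global condition without oracle help.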


Sebastian Abshoff, Andreas Cord-Landwehr, Daniel Jung, Alexander Skopalik:

In Proceedings of the 10th International Conference on Web and Internet Economics (WINE). Springer International Publishing Switzerland, LNCS, vol. 8877, pp. 435-440

[Show Abstract]

**Multilevel Network Games**In Proceedings of the 10th International Conference on Web and Internet Economics (WINE). Springer International Publishing Switzerland, LNCS, vol. 8877, pp. 435-440

**(2014)**[Show Abstract]

We consider a multilevel network game, where nodes can improve their communication costs by connecting to a high-speed network. The n nodes are connected by a static network and each node can decide individually to become a gateway to the high-speed network. The goal of a node v is to minimize its private costs, i.e., the sum (SUM-game) or maximum (MAX-game) of communication distances from v to all other nodes plus a fixed price α > 0 if it decides to be a gateway. Between gateways the communication distance is 0, and gateways also improve other nodes’ distances by behaving as shortcuts. For the SUM-game, we show that for α ≤ n − 1, the price of anarchy is Θ(n/√α) and in this range equilibria always exist. In the range α ∈ (n−1, n(n−1)) the price of anarchy is Θ(√α), and for α ≥ n(n − 1) it is constant. For the MAX-game, we show that the price of anarchy is either Θ(1 + n/√α), for α ≥ 1, or else 1. Given a graph with girth of at least 4α, equilibria always exist. Concerning the dynamics, both games are not potential games. For the SUM-game, we even show that it is not weakly acyclic.
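Written out as a formula (a reconstruction from the abstract above, using illustrative notation rather than the paper's own), the private cost of a node $v$ in the SUM-game is:

```latex
c(v) \;=\; \alpha \cdot x_v \;+\; \sum_{u \neq v} d(v, u)
```

where $x_v \in \{0,1\}$ indicates whether $v$ becomes a gateway and $d$ denotes the communication distance in the static network augmented with zero-length links between every pair of gateways; the MAX-game replaces the sum by $\max_{u \neq v} d(v, u)$.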

[Show BibTeX]

@inproceedings{ACJS14,

author = {Sebastian Abshoff AND Andreas Cord Landwehr AND Daniel Jung AND Alexander Skopalik},

title = {Multilevel Network Games},

booktitle = {Proceedings of the 10th International Conference on Web and Internet Economics (WINE)},

year = {2014},

pages = {435-440},

publisher = {Springer International Publishing Switzerland},

abstract = {We consider a multilevel network game, where nodes can improve their communication costs by connecting to a high-speed network. The n nodes are connected by a static network and each node can decide individually to become a gateway to the high-speed network. The goal of a node v is to minimize its private costs, i.e., the sum (SUM-game) or maximum (MAX-game) of communication distances from v to all other nodes plus a fixed price α > 0 if it decides to be a gateway. Between gateways the communication distance is 0, and gateways also improve other nodes’ distances by behaving as shortcuts. For the SUM-game, we show that for α ≤ n − 1, the price of anarchy is Θ(n/√α) and in this range equilibria always exist. In the range α ∈ (n−1, n(n−1)) the price of anarchy is Θ(√α), and for α ≥ n(n − 1) it is constant. For the MAX-game, we show that the price of anarchy is either Θ(1 + n/√α), for α ≥ 1, or else 1. Given a graph with girth of at least 4α, equilibria always exist. Concerning the dynamics, both games are not potential games. For the SUM-game, we even show that it is not weakly acyclic.},

series = {LNCS}

}

[DOI]

Christian Scheideler, Martina Eikel, Alexander Setzer:

In Proceedings of the 12th Workshop on Approximation and Online Algorithms (WAOA). Springer, LNCS, vol. 8952, pp. 168-180

[Show Abstract]

**Minimum Linear Arrangement of Series-Parallel Graphs**In Proceedings of the 12th Workshop on Approximation and Online Algorithms (WAOA). Springer, LNCS, vol. 8952, pp. 168-180

**(2014)**[Show Abstract]

We present a factor $14D^2$ approximation algorithm for the minimum linear arrangement problem on series-parallel graphs, where $D$ is the maximum degree in the graph. Given a suitable decomposition of the graph, our algorithm runs in time $O(|E|)$ and is very easy to implement. Its divide-and-conquer approach allows for an effective parallelization. Note that a suitable decomposition can also be computed in time $O(|E|\log|E|)$ (or even $O(\log|E|\log^*|E|)$ on an EREW PRAM using $O(|E|)$ processors).

For the proof of the approximation ratio, we use a sophisticated charging method that uses techniques similar to amortized analysis in advanced data structures.

On general graphs, the minimum linear arrangement problem is known to be NP-hard. To the best of our knowledge, the minimum linear arrangement problem on series-parallel graphs has not been studied before.
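As a concrete illustration of the objective (not of the paper's algorithm): a linear arrangement is a permutation of the vertices, and its cost is the sum over all edges of the distance between the endpoints' positions. The sketch below, with illustrative names, computes this cost and includes an exact brute-force baseline for tiny graphs:

```python
from itertools import permutations

def arrangement_cost(edges, order):
    """Cost of a linear arrangement: sum over edges of the
    distance between the endpoints' positions in `order`."""
    pos = {v: i for i, v in enumerate(order)}
    return sum(abs(pos[u] - pos[v]) for u, v in edges)

def min_linear_arrangement_bruteforce(nodes, edges):
    """Exact but exponential-time baseline; the paper's algorithm instead
    gives a 14*D^2 approximation in O(|E|) time on series-parallel graphs."""
    return min(permutations(nodes),
               key=lambda order: arrangement_cost(edges, order))

# Example: the path graph 0-1-2-3; the identity order is optimal,
# paying distance 1 per edge for a total cost of 3.
edges = [(0, 1), (1, 2), (2, 3)]
best = min_linear_arrangement_bruteforce(range(4), edges)
print(arrangement_cost(edges, best))  # 3
```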

[Show BibTeX]

@inproceedings{waoa2014mla,

author = {Christian Scheideler AND Martina Eikel AND Alexander Setzer},

title = {Minimum Linear Arrangement of Series-Parallel Graphs},

booktitle = {Proceedings of the 12th Workshop on Approximation and Online Algorithms (WAOA)},

year = {2014},

pages = {168--180},

publisher = {Springer},

abstract = {We present a factor $14D^2$ approximation algorithm for the minimum linear arrangement problem on series-parallel graphs, where $D$ is the maximum degree in the graph. Given a suitable decomposition of the graph, our algorithm runs in time $O(|E|)$ and is very easy to implement. Its divide-and-conquer approach allows for an effective parallelization. Note that a suitable decomposition can also be computed in time $O(|E|\log{|E|})$ (or even $O(\log{|E|}\log^*{|E|})$ on an EREW PRAM using $O(|E|)$ processors). For the proof of the approximation ratio, we use a sophisticated charging method that uses techniques similar to amortized analysis in advanced data structures. On general graphs, the minimum linear arrangement problem is known to be NP-hard. To the best of our knowledge, the minimum linear arrangement problem on series-parallel graphs has not been studied before.},

series = {LNCS}

}

[DOI]

Burkhard Monien, Marios Mavronicolas:

In *Theory of Computing Systems*. Springer

[Show Abstract]

**Minimizing Expectation Plus Variance**In

*Theory of Computing Systems*. Springer**(2014)**[Show Abstract]

We consider strategic games in which each player seeks a mixed strategy to minimize her cost evaluated by a concave valuation V (mapping probability distributions to reals); such valuations are used to model risk. In contrast to games with expectation-optimizer players where mixed equilibria always exist (Nash 1950; Nash Ann. Math. 54, 286–295, 1951), a mixed equilibrium for such games, called a V-equilibrium, may fail to exist, even though pure equilibria (if any) transfer over. What is the exact impact of such valuations on the existence, structure and complexity of mixed equilibria? We address this fundamental question in the context of expectation plus variance, a particular concave valuation denoted as RA, which stands for risk-averse; so, variance enters as a measure of risk and it is used as an additive adjustment to expectation. We obtain the following results about RA-equilibria:

A collection of general structural properties of RA-equilibria connecting to (i) E-equilibria and Var-equilibria, which correspond to the expectation and variance valuations E and Var, respectively, and to (ii) other weaker or incomparable properties such as Weak Equilibrium and Strong Equilibrium. Some of these structural properties imply quantitative constraints on the existence of mixed RA-equilibria.

A second collection of (i) existence, (ii) equivalence and separation (with respect to E-equilibria), and (iii) characterization results for RA-equilibria in the new class of player-specific scheduling games. We provide suitable examples with a mixed RA-equilibrium that is not an E-equilibrium and vice versa.

A purification technique to transform a player-specific scheduling game on two identical links into a player-specific scheduling game on two links so that all non-pure RA-equilibria are eliminated while no new pure equilibria are created; so, a particular player-specific scheduling game on two identical links with no pure equilibrium yields a player-specific scheduling game with no RA-equilibrium (whether mixed or pure). As a by-product, the first PLS-completeness result for the computation of RA-equilibria follows.
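A minimal numeric illustration of the RA valuation, assuming the straightforward reading RA(c) = E[c] + Var[c] suggested by "expectation plus variance" (the function name and example numbers are ours, not the paper's):

```python
def ra_valuation(costs, probs):
    """Expectation-plus-variance valuation of a mixed outcome:
    RA = E[c] + Var[c], where variance is the risk adjustment."""
    mean = sum(p * c for p, c in zip(probs, costs))
    var = sum(p * (c - mean) ** 2 for p, c in zip(probs, costs))
    return mean + var

# A risk-averse player prefers a certain cost of 2 over a lottery
# with the same expectation but positive variance:
print(ra_valuation([2.0], [1.0]))            # 2.0
print(ra_valuation([0.0, 4.0], [0.5, 0.5]))  # 2.0 + 4.0 = 6.0
```

Both lotteries have expectation 2, but the second incurs an extra variance term of 4, so an RA-minimizing player strictly prefers the certain outcome.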

[Show BibTeX]

@article{MM2014,

author = {Burkhard Monien AND Marios Mavronicolas},

title = {Minimizing Expectation Plus Variance},

journal = {Theory of Computing Systems},

year = {2014},

abstract = {We consider strategic games in which each player seeks a mixed strategy to minimize her cost evaluated by a concave valuation V (mapping probability distributions to reals); such valuations are used to model risk. In contrast to games with expectation-optimizer players where mixed equilibria always exist (Nash 1950; Nash Ann. Math. 54, 286–295, 1951), a mixed equilibrium for such games, called a V-equilibrium, may fail to exist, even though pure equilibria (if any) transfer over. What is the exact impact of such valuations on the existence, structure and complexity of mixed equilibria? We address this fundamental question in the context of expectation plus variance, a particular concave valuation denoted as RA, which stands for risk-averse; so, variance enters as a measure of risk and it is used as an additive adjustment to expectation. We obtain the following results about RA-equilibria: A collection of general structural properties of RA-equilibria connecting to (i) E-equilibria and Var-equilibria, which correspond to the expectation and variance valuations E and Var, respectively, and to (ii) other weaker or incomparable properties such as Weak Equilibrium and Strong Equilibrium. Some of these structural properties imply quantitative constraints on the existence of mixed RA-equilibria. A second collection of (i) existence, (ii) equivalence and separation (with respect to E-equilibria), and (iii) characterization results for RA-equilibria in the new class of player-specific scheduling games. We provide suitable examples with a mixed RA-equilibrium that is not an E-equilibrium and vice versa. A purification technique to transform a player-specific scheduling game on two identical links into a player-specific scheduling game on two links so that all non-pure RA-equilibria are eliminated while no new pure equilibria are created; so, a particular player-specific scheduling game on two identical links with no pure equilibrium yields a player-specific scheduling game with no RA-equilibrium (whether mixed or pure). As a by-product, the first PLS-completeness result for the computation of RA-equilibria follows.}

}

[DOI]

Tobias Martin Lohre:

Bachelor thesis, University of Paderborn

[Show BibTeX]

**Machtverteilungen von Koalitionen im Fokus der politischen Realität**Bachelor thesis, University of Paderborn

**(2014)**[Show BibTeX]

@misc{Lohre14,

author = {Tobias Martin Lohre},

title = {Machtverteilungen von Koalitionen im Fokus der politischen Realit{\"a}t},

note = {Bachelor thesis, University of Paderborn},

year = {2014}

}


Dirk Van Straaten:

Master's thesis, University of Paderborn

[Show BibTeX]

**Kooperative Verhandlungen im duopolistischen Wettbewerb - eine spieltheoretische Analyse**Master's thesis, University of Paderborn

**(2014)**[Show BibTeX]

@mastersthesis{VanStraaten14,

author = {Dirk Van Straaten},

title = {Kooperative Verhandlungen im duopolistischen Wettbewerb - eine spieltheoretische Analyse},

school = {University of Paderborn},

year = {2014}

}


Olga Degraf:

Bachelor thesis, University of Paderborn

[Show BibTeX]

**Koalitionsbildung bei mehrdimensionalen Verhandlungsproblemen**Bachelor thesis, University of Paderborn

**(2014)**[Show BibTeX]

@misc{Degraf14,

author = {Olga Degraf},

title = {Koalitionsbildung bei mehrdimensionalen Verhandlungsproblemen},

note = {Bachelor thesis, University of Paderborn},

year = {2014}

}


Alexander Jungmann, Sonja Brangewitz, Ronald Petrlic, Marie Christin Platenius:

In *International Journal On Advances in Intelligent Systems (IntSys)*, vol. 7, no. 3&4, pp. 572-594. ThinkMind

[Show Abstract]

**Incorporating Reputation Information into Decision-Making Processes in Markets of Composed Services**In

*International Journal On Advances in Intelligent Systems (IntSys)*, vol. 7, no. 3&4, pp. 572-594. ThinkMind**(2014)**[Show Abstract]

One goal of service-oriented computing is to realize future markets of composed services. In such markets, service providers offer services that can be flexibly combined with each other. However, although crucial for decision-making, market participants are usually not able to individually estimate the quality of traded services in advance. To overcome this problem, we present a conceptual design for a reputation system that collects and processes user feedback on transactions, and provides this information as a signal for quality to participants in the market. Based on our proposed concept, we describe the incorporation of reputation information into distinct decision-making processes that are crucial in such service markets. In this context, we present a fuzzy service matching approach that takes reputation information into account. Furthermore, we introduce an adaptive service composition approach, and investigate the impact of exchanging immediate user feedback by reputation information. Last but not least, we describe the importance of reputation information for economic decisions of different market participants. The overall output of this paper is a comprehensive view on managing and exploiting reputation information in markets of composed services using the example of On-The-Fly Computing.

[Show BibTeX] @article{JBPP2014,

author = {Alexander Jungmann AND Sonja Brangewitz AND Ronald Petrlic AND Marie Christin Platenius},

title = {Incorporating Reputation Information into Decision-Making Processes in Markets of Composed Services},

journal = {International Journal On Advances in Intelligent Systems (IntSys)},

year = {2014},

volume = {7},

number = {3&4},

pages = {572--594},

abstract = {One goal of service-oriented computing is to realize future markets of composed services. In such markets, service providers offer services that can be flexibly combined with each other. However, although crucial for decision-making, market participants are usually not able to individually estimate the quality of traded services in advance. To overcome this problem, we present a conceptual design for a reputation system that collects and processes user feedback on transactions, and provides this information as a signal for quality to participants in the market. Based on our proposed concept, we describe the incorporation of reputation information into distinct decision-making processes that are crucial in such service markets. In this context, we present a fuzzy service matching approach that takes reputation information into account. Furthermore, we introduce an adaptive service composition approach, and investigate the impact of exchanging immediate user feedback by reputation information. Last but not least, we describe the importance of reputation information for economic decisions of different market participants. The overall output of this paper is a comprehensive view on managing and exploiting reputation information in markets of composed services using the example of On-The-Fly Computing.}

}

[DOI]

Matthias Feldotto, Christian Scheideler, Kalman Graffi:

In Proceedings of the 14th IEEE International Conference on Peer-to-Peer Computing (P2P). IEEE, pp. 1-10

[Show Abstract]

**HSkip+: A Self-Stabilizing Overlay Network for Nodes with Heterogeneous Bandwidths**In Proceedings of the 14th IEEE International Conference on Peer-to-Peer Computing (P2P). IEEE, pp. 1-10

**(2014)**[Show Abstract]

In this paper we present and analyze HSkip+, a self-stabilizing overlay network for nodes with arbitrary heterogeneous bandwidths. HSkip+ has the same topology as the Skip+ graph proposed by Jacob et al. [PODC 2009] but its self-stabilization mechanism significantly outperforms the self-stabilization mechanism proposed for Skip+. Also, the nodes are now ordered according to their bandwidths and not according to their identifiers. Various other solutions have already been proposed for overlay networks with heterogeneous bandwidths, but they are not self-stabilizing. In addition to HSkip+ being self-stabilizing, its performance is on par with the best previous bounds on the time and work for joining or leaving a network of peers of logarithmic diameter and degree and arbitrary bandwidths. Also, the dilation and congestion for routing messages is on par with the best previous bounds for such networks, so that HSkip+ combines the advantages of both worlds. Our theoretical investigations are backed by simulations demonstrating that HSkip+ is indeed performing much better than Skip+ and working correctly under high churn rates.

[Show BibTeX] @inproceedings{FSG2014P2P,

author = {Matthias Feldotto AND Christian Scheideler AND Kalman Graffi},

title = {HSkip+: A Self-Stabilizing Overlay Network for Nodes with Heterogeneous Bandwidths},

booktitle = {Proceedings of the 14th IEEE International Conference on Peer-to-Peer Computing (P2P)},

year = {2014},

pages = {1-10},

publisher = {IEEE},

abstract = {In this paper we present and analyze HSkip+, a self-stabilizing overlay network for nodes with arbitrary heterogeneous bandwidths. HSkip+ has the same topology as the Skip+ graph proposed by Jacob et al. [PODC 2009] but its self-stabilization mechanism significantly outperforms the self-stabilization mechanism proposed for Skip+. Also, the nodes are now ordered according to their bandwidths and not according to their identifiers. Various other solutions have already been proposed for overlay networks with heterogeneous bandwidths, but they are not self-stabilizing. In addition to HSkip+ being self-stabilizing, its performance is on par with the best previous bounds on the time and work for joining or leaving a network of peers of logarithmic diameter and degree and arbitrary bandwidths. Also, the dilation and congestion for routing messages is on par with the best previous bounds for such networks, so that HSkip+ combines the advantages of both worlds. Our theoretical investigations are backed by simulations demonstrating that HSkip+ is indeed performing much better than Skip+ and working correctly under high churn rates.}

}

[DOI]

Andre Kolle:

PhD thesis, University of Paderborn

[Show Abstract]

**Gender and ethnic discrimination in hiring: evidence from field experiments in the German labor market**PhD thesis, University of Paderborn

**(2014)**[Show Abstract]

The present thesis investigates the prevalence of and the reasons for hiring discrimination against women and ethnic Turks in the German labor market. Subsequent to a discussion of how to reveal discrimination, the literature on wage and employment differences inside and outside the German labor market is reviewed. Afterwards, different (economic) theories explaining inequalities in labor markets are presented. In the empirical analyses a field experiment - the so called correspondence testing - is conducted where matched pairs of (fictitious) male and female as well as German-named and Turkish-named applicants respond to, respectively, 656 and 608 (real) apprenticeship offers in predominantly male-dominated jobs. Descriptive results and econometric analyses using probit regressions on various model specifications indicate that the female applicant has a 19 percent lower callback probability compared to her male counterpart. However, differential treatment is both job- and firm-type driven. While callback rates are not statistically different from zero in female-dominated and “gender-neutral” occupations, they prevail in jobs where men are overrepresented. Furthermore, discrimination is restricted to late recruiters, i.e., companies that advertise their vacancies right before the apprenticeship is supposed to start. Similar conclusions can be drawn from the study investigating ethnic discrimination. The 32 percent lower callback probability of the Turkish-named applicant decreases if early rather than late recruiters are addressed. Apart from that, comparing response and callback rates to the candidates using different experimental designs, i.e., sending out single versus pairs of applications, yields no statistically significant differences demonstrating the unbiasedness of the correspondence approach.

[Show BibTeX] @phdthesis{KolleDiss2014,

author = {Andre Kolle},

title = {Gender and ethnic discrimination in hiring: evidence from field experiments in the German labor market},

school = {University of Paderborn},

year = {2014},

abstract = {The present thesis investigates the prevalence of and the reasons for hiring discrimination against women and ethnic Turks in the German labor market. Subsequent to a discussion of how to reveal discrimination, the literature on wage and employment differences inside and outside the German labor market is reviewed. Afterwards, different (economic) theories explaining inequalities in labor markets are presented. In the empirical analyses a field experiment - the so called correspondence testing - is conducted where matched pairs of (fictitious) male and female as well as German-named and Turkish-named applicants respond to, respectively, 656 and 608 (real) apprenticeship offers in predominantly male-dominated jobs. Descriptive results and econometric analyses using probit regressions on various model specifications indicate that the female applicant has a 19 percent lower callback probability compared to her male counterpart. However, differential treatment is both job- and firm-type driven. While callback rates are not statistically different from zero in female-dominated and “gender-neutral” occupations, they prevail in jobs where men are overrepresented. Furthermore, discrimination is restricted to late recruiters, i.e., companies that advertise their vacancies right before the apprenticeship is supposed to start. Similar conclusions can be drawn from the study investigating ethnic discrimination. The 32 percent lower callback probability of the Turkish-named applicant decreases if early rather than late recruiters are addressed. Apart from that, comparing response and callback rates to the candidates using different experimental designs, i.e., sending out single versus pairs of applications, yields no statistically significant differences demonstrating the unbiasedness of the correspondence approach.}

}


Veit Dornseifer:


**Evaluation of a Hybrid Packet-/Circuit-Switched Data Center Network**Master's thesis, University of Paderborn

**(2014)**

@mastersthesis{Dornseifer2014,

author = {Veit Dornseifer},

title = {Evaluation of a Hybrid Packet-/Circuit-Switched Data Center Network},

school = {University of Paderborn},

year = {2014}

}


Nico Bredenbals:


**Energy-Efficient Queuing with Delayed Deactivation**Master's thesis, University of Paderborn

**(2014)**

@mastersthesis{Bredenbals2014,

author = {Nico Bredenbals},

title = {Energy-Efficient Queuing with Delayed Deactivation},

school = {University of Paderborn},

year = {2014}

}


Linghui Luo:


**Ein selbst-stabilisierender Algorithmus für das Finite Sleep Problem in Skip+ Graphen**Bachelor thesis, University of Paderborn

**(2014)**

@mastersthesis{LL2014,

author = {Linghui Luo},

title = {Ein selbst-stabilisierender Algorithmus f{\"u}r das Finite Sleep Problem in Skip+ Graphen},

school = {University of Paderborn},

type = {Bachelor thesis},

year = {2014}

}


Daniel Kaimann:


**Decision Making under Asymmetric Information in Markets for Experience Goods: Empirical Evidence of Signaling Effects on Consumer Perceptions**PhD thesis, University of Paderborn

**(2014)**

@phdthesis{Kaiman-PhD,

author = {Daniel Kaimann},

title = {Decision Making under Asymmetric Information in Markets for Experience Goods: Empirical Evidence of Signaling Effects on Consumer Perceptions},

school = {University of Paderborn},

year = {2014}

}


Ana Mauleon, Nils Roehl, Vincent Vannetelbosch:


**Constitutions and Social Networks**Techreport UPB.

**(2014)**

The objective of this paper is to analyze the formation of group structures where individuals are allowed to engage in several groups at the same time. These structures are interpreted here as social networks. Each of the groups is supposed to have specific rules or constitutions governing which members may join or leave it. A social network is then considered to be stable if none of the groups is altered any more. Given this framework, we not only analyze which influence the constitutions have on network formation but we also provide requirements under which stable networks are induced for sure. Furthermore, by embedding many-to-many matchings into our setting, we apply our model to job markets with labor unions. To some extent the unions may provide job guarantees and, therefore, have influence on the stability of the job market.

@techreport{MRV2014,

author = {Ana Mauleon AND Nils Roehl AND Vincent Vannetelbosch},

title = {Constitutions and Social Networks},

year = {2014},

type = {Techreport UPB},

abstract = {The objective of this paper is to analyze the formation of group structures where individuals are allowed to engage in several groups at the same time. These structures are interpreted here as social networks. Each of the groups is supposed to have specific rules or constitutions governing which members may join or leave it. A social network is then considered to be stable if none of the groups is altered any more. Given this framework, we not only analyze which influence the constitutions have on network formation but we also provide requirements under which stable networks are induced for sure. Furthermore, by embedding many-to-many matchings into our setting, we apply our model to job markets with labor unions. To some extent the unions may provide job guarantees and, therefore, have influence on the stability of the job market.}

}


Behnud Djawadi, Rene Fahr, Florian Turk:


**Conceptual Model and Economic Experiments to Explain Nonpersistence and Enable Mechanism Designs Fostering Behavioral Change** In *Value in Health*, vol. 17, no. 8, pp. 814-822. **(2014)**

Background

Medical nonpersistence is a worldwide problem of striking magnitude. Although many fields of studies including epidemiology, sociology, and psychology try to identify determinants for medical nonpersistence, comprehensive research to explain medical nonpersistence from an economics perspective is rather scarce.

Objectives

The aim of the study was to develop a conceptual framework that augments standard economic choice theory with psychological concepts of behavioral economics to understand how patients’ preferences for discontinuing with therapy arise over the course of the medical treatment. The availability of such a framework allows the targeted design of mechanisms for intervention strategies.

Methods

Our conceptual framework models the patient as an active economic agent who evaluates the benefits and costs for continuing with therapy. We argue that a combination of loss aversion and mental accounting operations explains why patients discontinue with therapy at a specific point in time. We designed a randomized laboratory economic experiment with a student subject pool to investigate the behavioral predictions.

Results

Subjects continue with therapy as long as experienced utility losses have to be compensated. As soon as previous losses are evened out, subjects perceive the marginal benefit of persistence lower than in the beginning of the treatment. Consequently, subjects start to discontinue with therapy.

Conclusions

Our results highlight that concepts of behavioral economics capture the dynamic structure of medical nonpersistence better than does standard economic choice theory. We recommend that behavioral economics should be a mandatory part of the development of possible intervention strategies aimed at improving patients’ compliance and persistence behavior.


@article{Djawadi2014,

author = {Behnud Djawadi AND Rene Fahr AND Florian Turk},

title = {Conceptual Model and Economic Experiments to Explain Nonpersistence and Enable Mechanism Designs Fostering Behavioral Change},

journal = {Value in Health},

year = {2014},

volume = {17},

number = {8},

pages = {814-822},

abstract = {Background: Medical nonpersistence is a worldwide problem of striking magnitude. Although many fields of studies including epidemiology, sociology, and psychology try to identify determinants for medical nonpersistence, comprehensive research to explain medical nonpersistence from an economics perspective is rather scarce. Objectives: The aim of the study was to develop a conceptual framework that augments standard economic choice theory with psychological concepts of behavioral economics to understand how patients’ preferences for discontinuing with therapy arise over the course of the medical treatment. The availability of such a framework allows the targeted design of mechanisms for intervention strategies. Methods: Our conceptual framework models the patient as an active economic agent who evaluates the benefits and costs for continuing with therapy. We argue that a combination of loss aversion and mental accounting operations explains why patients discontinue with therapy at a specific point in time. We designed a randomized laboratory economic experiment with a student subject pool to investigate the behavioral predictions. Results: Subjects continue with therapy as long as experienced utility losses have to be compensated. As soon as previous losses are evened out, subjects perceive the marginal benefit of persistence lower than in the beginning of the treatment. Consequently, subjects start to discontinue with therapy. Conclusions: Our results highlight that concepts of behavioral economics capture the dynamic structure of medical nonpersistence better than does standard economic choice theory. We recommend that behavioral economics should be a mandatory part of the development of possible intervention strategies aimed at improving patients’ compliance and persistence behavior.}

}


Sonja Brangewitz, Jan-Philip Gamp:


**Competitive outcomes and the inner core of NTU market games** In *Economic Theory*, vol. 57, no. 3, pp. 529-554. Springer Berlin Heidelberg **(2014)**

We consider the inner core as a solution concept for cooperative games with non-transferable utility (NTU) and its relationship to payoffs of competitive equilibria of markets that are induced by NTU games. An NTU game is an NTU market game if there exists a market such that the set of utility allocations a coalition can achieve in the market coincides with the set of utility allocations the coalition can achieve in the game. In this paper, we introduce a new construction of a market based on a closed subset of the inner core which satisfies a strict positive separability. We show that the constructed market represents the NTU game and, further, has the given closed set as the set of payoff vectors of competitive equilibria. It turns out that this market is not uniquely determined, and thus, we obtain a class of markets. Our results generalize those relating to competitive outcomes of NTU market games in the literature.

@article{SBJG14,

author = {Sonja Brangewitz AND Jan-Philip Gamp},

title = {Competitive outcomes and the inner core of NTU market games},

journal = {Economic Theory},

year = {2014},

volume = {57},

number = {3},

pages = {529-554},

abstract = {We consider the inner core as a solution concept for cooperative games with non-transferable utility (NTU) and its relationship to payoffs of competitive equilibria of markets that are induced by NTU games. An NTU game is an NTU market game if there exists a market such that the set of utility allocations a coalition can achieve in the market coincides with the set of utility allocations the coalition can achieve in the game. In this paper, we introduce a new construction of a market based on a closed subset of the inner core which satisfies a strict positive separability. We show that the constructed market represents the NTU game and, further, has the given closed set as the set of payoff vectors of competitive equilibria. It turns out that this market is not uniquely determined, and thus, we obtain a class of markets. Our results generalize those relating to competitive outcomes of NTU market games in the literature.}

}


Maximilian Drees, Sören Riechers, Alexander Skopalik:


**Budget-restricted utility games with ordered strategic decisions**In Ron Lavi (eds.): Proceedings of the 7th International Symposium on Algorithmic Game Theory (SAGT). Springer Berlin Heidelberg, Lecture Notes in Computer Science, vol. 8768, pp. 110-121

**(2014)**

We introduce the concept of budget games. Players choose a set of tasks and each task has a certain demand on every resource in the game. Each resource has a budget. If the budget is not enough to satisfy the sum of all demands, it has to be shared between the tasks. We study strategic budget games, where the budget is shared proportionally. We also consider a variant in which the order of the strategic decisions influences the distribution of the budgets. The complexity of the optimal solution as well as existence, complexity and quality of equilibria are analysed. Finally, we show that the time an ordered budget game needs to converge to an equilibrium may be exponential.

@inproceedings{DRS14,

author = {Maximilian Drees AND S{\"o}ren Riechers AND Alexander Skopalik},

title = {Budget-restricted utility games with ordered strategic decisions},

booktitle = {Proceedings of the 7th International Symposium on Algorithmic Game Theory (SAGT)},

year = {2014},

editor = {Ron Lavi},

pages = {110-121},

publisher = {Springer Berlin Heidelberg},

abstract = {We introduce the concept of budget games. Players choose a set of tasks and each task has a certain demand on every resource in the game. Each resource has a budget. If the budget is not enough to satisfy the sum of all demands, it has to be shared between the tasks. We study strategic budget games, where the budget is shared proportionally. We also consider a variant in which the order of the strategic decisions influences the distribution of the budgets. The complexity of the optimal solution as well as existence, complexity and quality of equilibria are analysed. Finally, we show that the time an ordered budget game needs to converge to an equilibrium may be exponential.},

series = {Lecture Notes in Computer Science}

}


Sebastian Abshoff, Andreas Cord Landwehr, Daniel Jung, Alexander Skopalik:


**Brief Announcement: A Model for Multilevel Network Games**In Ron Lavi (eds.): Proceedings of the 7th International Symposium on Algorithmic Game Theory (SAGT). Springer, LNCS, vol. 8768, pp. 294

**(2014)**

Today's networks, like the Internet, do not consist of one but a mixture of several interconnected networks. Each has individual qualities and hence the performance of a network node results from the networks' interplay.

We introduce a new game theoretic model capturing the interplay between a high-speed backbone network and a low-speed general purpose network. In our model, n nodes are connected by a static network and each node can decide individually to become a gateway node. A gateway node pays a fixed price for its connection to the high-speed network, but can utilize the high-speed network to gain communication distance 0 to all other gateways. Communication distances in the low-speed network are given by the hop distances. The effective communication distance between any two nodes then is given by the shortest path, which is possibly improved by using gateways as shortcuts.

Every node v has the objective to minimize its communication costs, given by the sum (SUM-game) or maximum (MAX-game) of the effective communication distances from v to all other nodes plus a fixed price \alpha > 0, if it decides to be a gateway. For both games and different ranges of \alpha, we study the existence of equilibria, the price of anarchy, and convergence properties of best-response dynamics.


@inproceedings{2014sagtmultilevel,

author = {Sebastian Abshoff AND Andreas Cord Landwehr AND Daniel Jung AND Alexander Skopalik},

title = {Brief Announcement: A Model for Multilevel Network Games},

booktitle = {Proceedings of the 7th International Symposium on Algorithmic Game Theory (SAGT)},

year = {2014},

editor = {Ron Lavi},

pages = {294},

publisher = {Springer},

abstract = {Today's networks, like the Internet, do not consist of one but a mixture of several interconnected networks. Each has individual qualities and hence the performance of a network node results from the networks' interplay.We introduce a new game theoretic model capturing the interplay between a high-speed backbone network and a low-speed general purpose network. In our model, n nodes are connected by a static network and each node can decide individually to become a gateway node. A gateway node pays a fixed price for its connection to the high-speed network, but can utilize the high-speed network to gain communication distance 0 to all other gateways. Communication distances in the low-speed network are given by the hop distances. The effective communication distance between any two nodes then is given by the shortest path, which is possibly improved by using gateways as shortcuts.Every node v has the objective to minimize its communication costs, given by the sum (SUM-game) or maximum (MAX-game) of the effective communication distances from v to all other nodes plus a fixed price \alpha > 0, if it decides to be a gateway. For both games and different ranges of \alpha, we study the existence of equilibria, the price of anarchy, and convergence properties of best-response dynamics.},

series = {LNCS}

}


Matthias Feldotto, Martin Gairing, Alexander Skopalik:


**Bounding the Potential Function in Congestion Games and Approximate Pure Nash Equilibria**In Proceedings of the 10th International Conference on Web and Internet Economics (WINE). Springer International Publishing Switzerland, LNCS, vol. 8877, pp. 30-43

**(2014)**

In this paper we study the potential function in congestion games. We consider both games with non-decreasing cost functions as well as games with non-increasing utility functions. We show that the value of the potential function $\Phi(\sf s)$ of any outcome $\sf s$ of a congestion game approximates the optimum potential value $\Phi(\sf s^*)$ by a factor $\Psi_{\mathcal{F}}$ which only depends on the set of cost/utility functions $\mathcal{F}$, and an additive term which is bounded by the sum of the total possible improvements of the players in the outcome $\sf s$. The significance of this result is twofold. On the one hand it provides \emph{Price-of-Anarchy}-like results with respect to the potential function. On the other hand, we show that these approximations can be used to compute $(1+\varepsilon)\cdot\Psi_{\mathcal{F}}$-approximate pure Nash equilibria for congestion games with non-decreasing cost functions. For the special case of polynomial cost functions, this significantly improves the guarantees from Caragiannis et al. [FOCS 2011]. Moreover, our machinery provides the first guarantees for general latency functions.

@inproceedings{FGS14,

author = {Matthias Feldotto AND Martin Gairing AND Alexander Skopalik},

title = {Bounding the Potential Function in Congestion Games and Approximate Pure Nash Equilibria},

booktitle = {Proceedings of the 10th International Conference on Web and Internet Economics (WINE)},

year = {2014},

pages = {30-43},

publisher = {Springer International Publishing Switzerland},

abstract = {In this paper we study the potential function in congestion games. We consider both games with non-decreasing cost functions as well as games with non-increasing utility functions. We show that the value of the potential function $\Phi(\sf s)$ of any outcome $\sf s$ of a congestion game approximates the optimum potential value $\Phi(\sf s^*)$ by a factor $\Psi_{\mathcal{F}}$ which only depends on the set of cost/utility functions $\mathcal{F}$, and an additive term which is bounded by the sum of the total possible improvements of the players in the outcome $\sf s$. The significance of this result is twofold. On the one hand it provides \emph{Price-of-Anarchy}-like results with respect to the potential function. On the other hand, we show that these approximations can be used to compute $(1+\varepsilon)\cdot\Psi_{\mathcal{F}}$-approximate pure Nash equilibria for congestion games with non-decreasing cost functions. For the special case of polynomial cost functions, this significantly improves the guarantees from Caragiannis et al. [FOCS 2011]. Moreover, our machinery provides the first guarantees for general latency functions.},

series = {LNCS}

}

[DOI]

Christoph Hansknecht, Max Klimm, Alexander Skopalik:

In Proceedings of the 17th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX). Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, LIPIcs, vol. 28, pp. 242-257

[Show Abstract]

**Approximate pure Nash equilibria in weighted congestion games**In Proceedings of the 17th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX). Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, LIPIcs, vol. 28, pp. 242-257

**(2014)**[Show Abstract]

We study the existence of approximate pure Nash equilibria in weighted congestion games and develop techniques to obtain approximate potential functions that prove the existence of alpha-approximate pure Nash equilibria and the convergence of alpha-improvement steps. Specifically, we show how to obtain upper bounds for the approximation factor alpha for a given class of cost functions. For example, for concave cost functions the factor is at most 3/2, for quadratic cost functions it is at most 4/3, and for polynomial cost functions of maximal degree d it is at most d + 1. For games with two players we obtain tight bounds which are as small as, for example, 1.054 in the case of quadratic cost functions.

[Show BibTeX] @inproceedings{HKS14,

author = {Christoph Hansknecht AND Max Klimm AND Alexander Skopalik},

title = {Approximate pure Nash equilibria in weighted congestion games},

booktitle = {Proceedings of the 17th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX)},

year = {2014},

pages = {242 - 257},

publisher = {Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik},

abstract = {We study the existence of approximate pure Nash equilibria in weighted congestion games and develop techniques to obtain approximate potential functions that prove the existence of alpha-approximate pure Nash equilibria and the convergence of alpha-improvement steps. Specifically, we show how to obtain upper bounds for the approximation factor alpha for a given class of cost functions. For example, for concave cost functions the factor is at most 3/2, for quadratic cost functions it is at most 4/3, and for polynomial cost functions of maximal degree d it is at most d + 1. For games with two players we obtain tight bounds which are as small as, for example, 1.054 in the case of quadratic cost functions.},

series = {LIPIcs}

}

[DOI]

Martin Gairing, Grammateia Kotsialou, Alexander Skopalik:

In Proceedings of the 10th International Conference on Web and Internet Economics (WINE). Springer International Publishing Switzerland, LNCS, vol. 8877, pp. 480 - 485

[Show Abstract]

**Approximate pure Nash equilibria in Social Context Congestion Games**In Proceedings of the 10th International Conference on Web and Internet Economics (WINE). Springer International Publishing Switzerland, LNCS, vol. 8877, pp. 480 - 485

**(2014)**[Show Abstract]

We study the existence of approximate pure Nash equilibria in social context congestion games. For any given set of allowed cost functions F, we provide a threshold value μ(F), and show that for the class of social context congestion games with cost functions from F, α-Nash dynamics are guaranteed to converge to an α-approximate pure Nash equilibrium if and only if α > μ(F). Interestingly, μ(F) is related to and always upper bounded by Roughgarden's anarchy value [19].

[Show BibTeX] @inproceedings{gks14,

author = {Martin Gairing AND Grammateia Kotsialou AND Alexander Skopalik},

title = {Approximate pure Nash equilibria in Social Context Congestion Games},

booktitle = {Proceedings of the 10th International Conference on Web and Internet Economics (WINE)},

year = {2014},

pages = {480 - 485},

publisher = {Springer International Publishing Switzerland},

abstract = {We study the existence of approximate pure Nash equilibria in social context congestion games. For any given set of allowed cost functions F, we provide a threshold value μ(F), and show that for the class of social context congestion games with cost functions from F, α-Nash dynamics are guaranteed to converge to an α-approximate pure Nash equilibrium if and only if α > μ(F). Interestingly, μ(F) is related to and always upper bounded by Roughgarden's anarchy value [19].},

series = {LNCS}

}

[DOI]

Philipp Dreimann:

Master's thesis, University of Paderborn

[Show BibTeX]

**Anticipatory Power Cycling of Mobile Network Equipment for High-Demand Multimedia Traffic**Master's thesis, University of Paderborn

**(2014)**[Show BibTeX]

@mastersthesis{Dreimann2014,

author = {Philipp Dreimann},

title = {Anticipatory Power Cycling of Mobile Network Equipment for High-Demand Multimedia Traffic},

school = {University of Paderborn},

year = {2014}

}


Sebastian Kniesburges, Christine Markarian, Friedhelm Meyer auf der Heide, Christian Scheideler:

In Proceedings of the 21st International Colloquium on Structural Information and Communication Complexity (SIROCCO). Springer, LNCS, vol. 8576, pp. 1-13

[Show Abstract]

**Algorithmic Aspects of Resource Management in the Cloud**In Proceedings of the 21st International Colloquium on Structural Information and Communication Complexity (SIROCCO). Springer, LNCS, vol. 8576, pp. 1-13

**(2014)**[Show Abstract]

In this survey article, we discuss two algorithmic research areas that emerge from problems that arise when resources are offered in the cloud. The first area, online leasing, captures problems arising from the fact that resources in the cloud are not bought, but leased by cloud vendors. The second area, Distributed Storage Systems, deals with problems arising from so-called cloud federations, i.e., when several cloud providers are needed to fulfill a given task.

[Show BibTeX] @inproceedings{KMMS2014,

author = {Sebastian Kniesburges AND Christine Markarian AND Friedhelm Meyer auf der Heide AND Christian Scheideler},

title = {Algorithmic Aspects of Resource Management in the Cloud},

booktitle = {Proceedings of the 21st International Colloquium on Structural Information and Communication Complexity (SIROCCO)},

year = {2014},

pages = {1-13},

publisher = {Springer},

abstract = {In this survey article, we discuss two algorithmic research areas that emerge from problems that arise when resources are offered in the cloud. The first area, online leasing, captures problems arising from the fact that resources in the cloud are not bought, but leased by cloud vendors. The second area, Distributed Storage Systems, deals with problems arising from so-called cloud federations, i.e., when several cloud providers are needed to fulfill a given task.},

series = {LNCS}

}

[DOI]

Sevil Mehraghdam (married name: Dräxler):

Master's thesis, University of Paderborn

[Show BibTeX]

**Adaptive Placement of Programmable Virtual Network Function Chains**Master's thesis, University of Paderborn

**(2014)**[Show BibTeX]

@mastersthesis{SevMehr2014,

author = {Sevil Mehraghdam (married name: Dr{\"a}xler)},

title = {Adaptive Placement of Programmable Virtual Network Function Chains},

school = {University of Paderborn},

year = {2014}

}


Matthias Feldotto, Alexander Skopalik:

In Proceedings of the 4th International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2014). SciTePress, pp. 625-630

[Show Abstract]

**A Simulation Framework for Analyzing Complex Infinitely Repeated Games**In Proceedings of the 4th International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2014). SciTePress, pp. 625-630

**(2014)**[Show Abstract]

We discuss a technique to analyze complex infinitely repeated games using techniques from the fields of game theory and simulations. Our research is motivated by the analysis of electronic markets with thousands of participants and possibly complex strategic behavior. We consider an example of a global market of composed IT services to demonstrate the use of our simulation technique. We present our current work in this area and we want to discuss further approaches for the future.

[Show BibTeX] @inproceedings{FS2014SIMULTECH,

author = {Matthias Feldotto AND Alexander Skopalik},

title = {A Simulation Framework for Analyzing Complex Infinitely Repeated Games},

booktitle = {Proceedings of the 4th International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2014)},

year = {2014},

pages = {625-630},

publisher = {SciTePress},

month = {August},

abstract = {We discuss a technique to analyze complex infinitely repeated games using techniques from the fields of game theory and simulations. Our research is motivated by the analysis of electronic markets with thousands of participants and possibly complex strategic behavior. We consider an example of a global market of composed IT services to demonstrate the use of our simulation technique. We present our current work in this area and we want to discuss further approaches for the future.}

}

[DOI]

Dominik Gall, Riko Jacob, Andrea W. Richa, Christian Scheideler, Stefan Schmid, Hanjo Täubig:

In

[Show Abstract]

**A Note on the Parallel Runtime of Self-Stabilizing Graph Linearization**In

*Theory of Computing Systems*, vol. 55, no. 1, pp. 110-135. Springer**(2014)**[Show Abstract]

Topological self-stabilization is an important concept to build robust open distributed systems (such as peer-to-peer systems) where nodes can organize themselves into meaningful network topologies. The goal is to devise distributed algorithms where nodes forward, insert, and delete links to neighboring nodes, and that converge quickly to such a desirable topology, independently of the initial network configuration. This article proposes a new model to study the parallel convergence time. Our model sheds light on the achievable parallelism by avoiding bottlenecks of existing models that can yield a distorted picture. As a case study, we consider local graph linearization—i.e., how to build a sorted list of the nodes of a connected graph in a distributed and self-stabilizing manner. In order to study the main structure and properties of our model, we propose two variants of a most simple local linearization algorithm. For each of these variants, we present analyses of the worst-case and best-case parallel time complexities, as well as the performance under a greedy selection of the actions to be executed. It turns out that the analysis is non-trivial despite the simple setting, and to complement our formal insights we report on our experiments which indicate that the runtimes may be better in the average case.

[Show BibTeX] @article{GJRSST2014,

author = {Dominik Gall AND Riko Jacob AND Andrea W. Richa AND Christian Scheideler AND Stefan Schmid AND Hanjo T{\"a}ubig},

title = {A Note on the Parallel Runtime of Self-Stabilizing Graph Linearization},

journal = {Theory of Computing Systems},

year = {2014},

volume = {55},

number = {1},

pages = {110-135},

abstract = {Topological self-stabilization is an important concept to build robust open distributed systems (such as peer-to-peer systems) where nodes can organize themselves into meaningful network topologies. The goal is to devise distributed algorithms where nodes forward, insert, and delete links to neighboring nodes, and that converge quickly to such a desirable topology, independently of the initial network configuration. This article proposes a new model to study the parallel convergence time. Our model sheds light on the achievable parallelism by avoiding bottlenecks of existing models that can yield a distorted picture. As a case study, we consider local graph linearization—i.e., how to build a sorted list of the nodes of a connected graph in a distributed and self-stabilizing manner. In order to study the main structure and properties of our model, we propose two variants of a most simple local linearization algorithm. For each of these variants, we present analyses of the worst-case and best-case parallel time complexities, as well as the performance under a greedy selection of the actions to be executed. It turns out that the analysis is non-trivial despite the simple setting, and to complement our formal insights we report on our experiments which indicate that the runtimes may be better in the average case.}

}

[DOI]

Jörn Künsemöller, Holger Karl:

In

[Show Abstract]

**A Game-Theoretic Approach to the Financial Benefits of Infrastructure-as-a-Service**In

*Future Generation Computer Systems*, vol. 41, pp. 44-52. Elsevier**(2014)**[Show Abstract]

Financial benefits are an important factor when cloud infrastructure is considered to meet processing demand. The dynamics of on-demand pricing and service usage are investigated in a two-stage game model for a monopoly Infrastructure-as-a-Service (IaaS) market. The possibility of hybrid clouds (public clouds plus own infrastructure) turns out to be essential in order that not only the provider but also the clients have significant benefits from on-demand services. Even if the client meets all demand in the public cloud, the threat of building a hybrid cloud keeps the instance price low. This is not the case when reserved instances are offered as well. Parameters like load profiles and economies of scale have a huge effect on likely future pricing and on a cost-optimal split-up of client demand between either a client’s own data center and a public cloud service or between reserved and on-demand cloud instances.

[Show BibTeX] @article{KK-2014,

author = {J{\"o}rn K{\"u}nsem{\"o}ller AND Holger Karl},

title = {A Game-Theoretic Approach to the Financial Benefits of Infrastructure-as-a-Service},

journal = {Future Generation Computer Systems},

year = {2014},

volume = {41},

pages = {44--52},

abstract = {Financial benefits are an important factor when cloud infrastructure is considered to meet processing demand. The dynamics of on-demand pricing and service usage are investigated in a two-stage game model for a monopoly Infrastructure-as-a-Service (IaaS) market. The possibility of hybrid clouds (public clouds plus own infrastructure) turns out to be essential in order that not only the provider but also the clients have significant benefits from on-demand services. Even if the client meets all demand in the public cloud, the threat of building a hybrid cloud keeps the instance price low. This is not the case when reserved instances are offered as well. Parameters like load profiles and economies of scale have a huge effect on likely future pricing and on a cost-optimal split-up of client demand between either a client’s own data center and a public cloud service or between reserved and on-demand cloud instances.}

}

[DOI]

Sebastian Kniesburges, Andreas Koutsopoulos, Christian Scheideler:

In

[Show Abstract]

**A Deterministic Worst-Case Message Complexity Optimal Solution for Resource Discovery**In

*Theoretical Computer Science*. Elsevier**(2014)**[Show Abstract]

We consider the problem of resource discovery in distributed systems. In particular, we give an algorithm such that each node in a network discovers the address of any other node in the network. We model the knowledge of the nodes as a virtual overlay network given by a directed graph such that complete knowledge of all nodes corresponds to a complete graph in the overlay network. Although there are several solutions for resource discovery, our solution is the first that achieves worst-case optimal work for each node, i.e. the number of addresses (O(n)) or bits (O(n log n)) a node receives or sends coincides with the lower bound, while ensuring only a linear runtime (O(n)) in the number of rounds.

[Show BibTeX] @article{ResDiscJournal,

author = {Sebastian Kniesburges AND Andreas Koutsopoulos AND Christian Scheideler},

title = {A Deterministic Worst-Case Message Complexity Optimal Solution for Resource Discovery},

journal = {Theoretical Computer Science},

year = {2014},

abstract = {We consider the problem of resource discovery in distributed systems. In particular, we give an algorithm such that each node in a network discovers the address of any other node in the network. We model the knowledge of the nodes as a virtual overlay network given by a directed graph such that complete knowledge of all nodes corresponds to a complete graph in the overlay network. Although there are several solutions for resource discovery, our solution is the first that achieves worst-case optimal work for each node, i.e. the number of addresses (O(n)) or bits (O(n log n)) a node receives or sends coincides with the lower bound, while ensuring only a linear runtime (O(n)) in the number of rounds.}

}

[DOI]

**2013** (46)

Philip Wette, Holger Karl:

In Proceedings of the ACM SIGCOMM '13. ACM, Digital Library, pp. 541-542

[Show Abstract]

**Which Flows Are Hiding Behind My Wildcard Rule? Adding Packet Sampling to OpenFlow**In Proceedings of the ACM SIGCOMM '13. ACM, Digital Library, pp. 541-542

**(2013)**[Show Abstract]

In OpenFlow [1], multiple switches share the same control plane, which is centralized at what is called the OpenFlow controller. A switch only consists of a forwarding plane. Rules for forwarding individual packets (called flow entries in OpenFlow) are pushed from the controller to the switches. In a network with a high arrival rate of new flows, such as in a data center, the control traffic between the switch and controller can become very high. As a consequence, routing of new flows will be slow. One way to reduce control traffic is to use wildcarded flow entries. Wildcard flow entries can be used to create default routes in the network. However, since switches do not keep track of flows covered by a wildcard flow entry, the controller no longer has knowledge about individual flows. To find out about these individual flows we propose an extension to the current OpenFlow standard to enable packet sampling of wildcard flow entries.

[Show BibTeX] @inproceedings{PWHK-2013b,

author = {Philip Wette AND Holger Karl},

title = {Which Flows Are Hiding Behind My Wildcard Rule? Adding Packet Sampling to OpenFlow},

booktitle = {Proceedings of the ACM SIGCOMM '13},

year = {2013},

pages = {541-542},

publisher = {ACM},

abstract = {In OpenFlow [1], multiple switches share the same control plane, which is centralized at what is called the OpenFlow controller. A switch only consists of a forwarding plane. Rules for forwarding individual packets (called flow entries in OpenFlow) are pushed from the controller to the switches. In a network with a high arrival rate of new flows, such as in a data center, the control traffic between the switch and controller can become very high. As a consequence, routing of new flows will be slow. One way to reduce control traffic is to use wildcarded flow entries. Wildcard flow entries can be used to create default routes in the network. However, since switches do not keep track of flows covered by a wildcard flow entry, the controller no longer has knowledge about individual flows. To find out about these individual flows we propose an extension to the current OpenFlow standard to enable packet sampling of wildcard flow entries.},

series = {Digital Library}

}

[DOI]

Nils Roehl:

Techreport UPB.

[Show Abstract]

**Two-Stage Allocation Procedures**Techreport UPB.

**(2013)**[Show Abstract]

Suppose some individuals are allowed to engage in different groups at the same time and they generate a certain welfare by cooperation. Finding appropriate ways for distributing this welfare is a non-trivial issue. The purpose of this work is to analyze two-stage allocation procedures where first each group receives a share of the welfare which is then, subsequently, distributed among the corresponding members. To study these procedures in a structured way, cooperative games and network games are combined in a general framework by using mathematical hypergraphs. Moreover, several convincing requirements on allocation procedures are discussed and formalized. Thereby it will be shown, for example, that the Position Value and iteratively applying the Myerson Value can be characterized by similar axiomatizations.

[Show BibTeX] @techreport{2StageRules13R,

author = {Nils Roehl},

title = {Two-Stage Allocation Procedures},

year = {2013},

type = {Techreport},

institution = {University of Paderborn},

abstract = {Suppose some individuals are allowed to engage in different groups at the same time and they generate a certain welfare by cooperation. Finding appropriate ways for distributing this welfare is a non-trivial issue. The purpose of this work is to analyze two-stage allocation procedures where first each group receives a share of the welfare which is then, subsequently, distributed among the corresponding members. To study these procedures in a structured way, cooperative games and network games are combined in a general framework by using mathematical hypergraphs. Moreover, several convincing requirements on allocation procedures are discussed and formalized. Thereby it will be shown, for example, that the Position Value and iteratively applying the Myerson Value can be characterized by similar axiomatizations.}

}


Petr Kolman, Christian Scheideler:

**Towards Duality of Multicommodity Multiroute Cuts and Flows: Multilevel Ball-Growing**
In *Theory of Computing Systems*, vol. 53, no. 2, pp. 341-363. Springer **(2013)**

[Show Abstract]

An elementary h-route flow, for an integer h ≥ 1, is a set of h edge-disjoint paths between a source and a sink, each path carrying a unit of flow, and an h-route flow is a non-negative linear combination of elementary h-route flows. An h-route cut is a set of edges whose removal decreases the maximum h-route flow between a given source-sink pair (or between every source-sink pair in the multicommodity setting) to zero. The main result of this paper is an approximate duality theorem for multicommodity h-route cuts and flows, for h ≤ 3: The size of a minimum h-route cut is at least f/h and at most O(log^4 k · f) where f is the size of the maximum h-route flow and k is the number of commodities. The main step towards the proof of this duality is the design and analysis of a polynomial-time approximation algorithm for the minimum h-route cut problem for h = 3 that has an approximation ratio of O(log^4 k). Previously, polylogarithmic approximation was known only for h-route cuts for h ≤ 2. A key ingredient of our algorithm is a novel rounding technique that we call multilevel ball-growing. Though the proof of the duality relies on this algorithm, it is not a straightforward corollary of it as in the case of classical multicommodity flows and cuts. Similar results are shown also for the sparsest multiroute cut problem.

[Show BibTeX] @article{KS2013,

author = {Petr Kolman AND Christian Scheideler},

title = {Towards Duality of Multicommodity Multiroute Cuts and Flows: Multilevel Ball-Growing},

journal = {Theory of Computing Systems},

year = {2013},

volume = {53},

number = {2},

pages = {341-363},

abstract = {An elementary h-route flow, for an integer h ≥ 1, is a set of h edge-disjoint paths between a source and a sink, each path carrying a unit of flow, and an h-route flow is a non-negative linear combination of elementary h-route flows. An h-route cut is a set of edges whose removal decreases the maximum h-route flow between a given source-sink pair (or between every source-sink pair in the multicommodity setting) to zero. The main result of this paper is an approximate duality theorem for multicommodity h-route cuts and flows, for h ≤ 3: The size of a minimum h-route cut is at least f/h and at most O(log^4 k · f) where f is the size of the maximum h-route flow and k is the number of commodities. The main step towards the proof of this duality is the design and analysis of a polynomial-time approximation algorithm for the minimum h-route cut problem for h = 3 that has an approximation ratio of O(log^4 k). Previously, polylogarithmic approximation was known only for h-route cuts for h ≤ 2. A key ingredient of our algorithm is a novel rounding technique that we call multilevel ball-growing. Though the proof of the duality relies on this algorithm, it is not a straightforward corollary of it as in the case of classical multicommodity flows and cuts. Similar results are shown also for the sparsest multiroute cut problem.}

}

[DOI]

Sebastian Abshoff, Markus Benter, Andreas Cord Landwehr, Manuel Malatyali, Friedhelm Meyer auf der Heide:

**Token Dissemination in Geometric Dynamic Networks**
In Algorithms for Sensor Systems - 9th International Symposium on Algorithms and Experiments for Sensor Systems, Wireless Networks and Distributed Robotics, ALGOSENSORS 2013, Sophia Antipolis, France, September 5-6, 2013, Revised Selected Papers. Springer, Lecture Notes in Computer Science, vol. 8243, pp. 22-34 **(2013)**

[Show Abstract]

We consider the k-token dissemination problem, where k initially arbitrarily distributed tokens have to be disseminated to all nodes in a dynamic network (as introduced by Kuhn et al., STOC 2010). In contrast to general dynamic networks, our dynamic networks are unit disk graphs, i.e., nodes are embedded into the Euclidean plane and two nodes are connected if and only if their distance is at most R. Our worst-case adversary is allowed to move the nodes on the plane, but the maximum velocity v_max of each node is limited and the graph must be connected in each round. For this model, we provide almost tight lower and upper bounds for k-token dissemination if nodes are restricted to send only one token per round. It turns out that the maximum velocity v_max is a meaningful parameter to characterize dynamics in our model.
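The model can be illustrated with a toy round-based simulation (my own sketch, not the paper's algorithm or bounds): nodes sit in the plane, two nodes are neighbors iff their distance is at most R, and in each synchronous round every node broadcasts exactly one token it already knows, here chosen round-robin.

```python
import math

R = 1.0  # unit disk radius: two nodes are neighbors iff their distance is <= R

def neighbors(positions, u):
    return [v for v in positions
            if v != u and math.dist(positions[u], positions[v]) <= R]

def disseminate(positions, tokens, rounds):
    """One-token-per-round broadcast; each node cycles through its known tokens."""
    known = {u: set(ts) for u, ts in tokens.items()}
    for r in range(rounds):
        # Snapshot what everyone sends this round (synchronous model).
        sent = {u: sorted(known[u])[r % len(known[u])] if known[u] else None
                for u in known}
        for u in known:
            for v in neighbors(positions, u):
                if sent[v] is not None:
                    known[u].add(sent[v])
    return known

# Three nodes on a line: tokens from both ends must cross the middle node.
line = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (2.0, 0.0)}
result = disseminate(line, {0: {"a", "b"}, 1: set(), 2: {"c"}}, rounds=5)
```

An adversary in the paper's model would additionally move the positions between rounds, subject to the maximum velocity v_max and connectivity in every round; here the positions are static for simplicity.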

[Show BibTeX] @inproceedings{DBLP:conf/algosensors/AbshoffBCMH13,

author = {Sebastian Abshoff AND Markus Benter AND Andreas Cord Landwehr AND Manuel Malatyali AND Friedhelm Meyer auf der Heide},

title = {Token Dissemination in Geometric Dynamic Networks},

booktitle = {Algorithms for Sensor Systems - 9th International Symposium on Algorithms and Experiments for Sensor Systems, Wireless Networks and Distributed Robotics, {ALGOSENSORS} 2013, Sophia Antipolis, France, September 5-6, 2013, Revised Selected Papers},

year = {2013},

pages = {22-34},

publisher = {Springer},

month = {September},

abstract = {We consider the k-token dissemination problem, where k initially arbitrarily distributed tokens have to be disseminated to all nodes in a dynamic network (as introduced by Kuhn et al., STOC 2010). In contrast to general dynamic networks, our dynamic networks are unit disk graphs, i.e., nodes are embedded into the Euclidean plane and two nodes are connected if and only if their distance is at most R. Our worst-case adversary is allowed to move the nodes on the plane, but the maximum velocity v_max of each node is limited and the graph must be connected in each round. For this model, we provide almost tight lower and upper bounds for k-token dissemination if nodes are restricted to send only one token per round. It turns out that the maximum velocity v_max is a meaningful parameter to characterize dynamics in our model.},

series = {Lecture Notes in Computer Science}

}

[DOI]

Bernd Frick, Robert Simmons:

**The Impact of Individual and Collective Reputation on Wine Prices: Empirical Evidence from the Mosel Valley**
In *Journal of Business Economics*, vol. 83, no. 2, pp. 101-119. Springer **(2013)**

[Show Abstract]

Although of considerable practical importance, the separate impact of individual and collective reputation on firm performance (e.g. product prices) has not yet been convincingly demonstrated. We use a sample of some 70 different wineries offering more than 1,300 different Riesling wines from the Mosel valley to isolate the returns to individual reputation (measured by expert ratings in a highly respected wine guide) from the returns to collective reputation (measured by membership in two different professional associations where members are assumed to monitor each other very closely). We find that both effects are statistically significant and economically relevant with the latter being more important in quantitative terms than the former.

[Show BibTeX] @article{FS2012JoBE,

author = {Bernd Frick AND Robert Simmons},

title = {The Impact of Individual and Collective Reputation on Wine Prices: Empirical Evidence from the Mosel Valley},

journal = {Journal of Business Economics},

year = {2013},

volume = {83},

number = {2},

pages = {101-119},

abstract = {Although of considerable practical importance, the separate impact of individual and collective reputation on firm performance (e.g. product prices) has not yet been convincingly demonstrated. We use a sample of some 70 different wineries offering more than 1,300 different Riesling wines from the Mosel valley to isolate the returns to individual reputation (measured by expert ratings in a highly respected wine guide) from the returns to collective reputation (measured by membership in two different professional associations where members are assumed to monitor each other very closely). We find that both effects are statistically significant and economically relevant with the latter being more important in quantitative terms than the former.}

}

[DOI]

Kalman Graffi, Lars Bremer:

**Symbiotic Coupling of P2P and Cloud Systems: The Wikipedia Case**
In Proceedings of the International Conference on Communications (ICC'13). IEEE Computer Society, pp. 3444-3449 **(2013)**

[Show Abstract]

Cloud computing offers high availability, dynamic scalability, and elasticity, requiring only very little administration. However, this service comes with financial costs. Peer-to-peer systems, in contrast, operate at very low costs but cannot match the quality of service of the cloud. This paper focuses on the case study of Wikipedia and presents an approach to reduce the operational costs of hosting similar websites in the cloud by using a practical peer-to-peer approach. The visitors of the site join a Chord overlay, which acts as a first cache for article lookups. Simulation results show that up to 72% of the article lookups in Wikipedia could be answered by other visitors instead of using the cloud.
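A minimal sketch of the caching idea, assuming a simplified hash ring in place of full Chord routing (all names here are illustrative): article keys and visitor IDs are hashed onto the same identifier space, each article is cached at its clockwise successor on the ring, and only a miss falls back to the cloud.

```python
import hashlib

def ring_id(key, space=2**16):
    """Hash a key onto a small identifier ring (stand-in for Chord's ID space)."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % space

class VisitorCache:
    """Visitors form a ring; each article belongs to its clockwise successor."""
    def __init__(self, peers):
        self.ring = sorted((ring_id("peer:" + p), p) for p in peers)

    def responsible_peer(self, article):
        key = ring_id("article:" + article)
        for peer_id, peer in self.ring:
            if peer_id >= key:
                return peer
        return self.ring[0][1]  # wrap around the ring

cache = VisitorCache(["p1", "p2", "p3"])
peer = cache.responsible_peer("Paderborn")
```

Because the mapping is deterministic, any visitor can locate the responsible peer without a directory; in Chord the linear scan over the ring is replaced by O(log n) finger-table hops.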

[Show BibTeX] @inproceedings{BremerGraffi13,

author = {Kalman Graffi AND Lars Bremer},

title = {Symbiotic Coupling of P2P and Cloud Systems: The Wikipedia Case},

booktitle = {Proceedings of the International Conference on Communications (ICC'13)},

year = {2013},

pages = {3444-3449},

publisher = {IEEE Computer Society},

abstract = {Cloud computing offers high availability, dynamic scalability, and elasticity, requiring only very little administration. However, this service comes with financial costs. Peer-to-peer systems, in contrast, operate at very low costs but cannot match the quality of service of the cloud. This paper focuses on the case study of Wikipedia and presents an approach to reduce the operational costs of hosting similar websites in the cloud by using a practical peer-to-peer approach. The visitors of the site join a Chord overlay, which acts as a first cache for article lookups. Simulation results show that up to 72% of the article lookups in Wikipedia could be answered by other visitors instead of using the cloud.}

}

[DOI]

Felix Wallaschek:

**Routing in heterogenen OpenFlow Netzwerken**
Bachelor thesis, University of Paderborn **(2013)**

[Show BibTeX]

@misc{Wallaschek2013,

author = {Felix Wallaschek},

title = {Routing in heterogenen OpenFlow Netzwerken},

year = {2013}

}


Berno Buechel, Nils Roehl:

**Robust Equilibria in Location Games**
Techreport UPB **(2013)**

[Show Abstract]

In the framework of spatial competition, two or more players strategically choose a location in order to attract consumers. It is standardly assumed that consumers with the same favorite location fully agree on the ranking of all possible locations. To investigate the necessity of this questionable and restrictive assumption, we model heterogeneity in consumers’ distance perceptions by individual edge lengths of a given graph. A profile of location choices is called a “robust equilibrium” if it is a Nash equilibrium in several games which differ only by the consumers’ perceptions of distances. For a finite number of players and any distribution of consumers, we provide a full characterization of all robust equilibria and derive structural conditions for their existence. Furthermore, we discuss whether the classical observations of minimal differentiation and inefficiency are robust phenomena. Thereby, we find strong support for an old conjecture that in equilibrium firms form local clusters.
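The robustness notion can be checked by brute force on a toy instance (my own sketch; the paper treats general graphs, while this uses a path graph with one consumer per node): a profile of locations is a robust equilibrium if no player can gain by deviating under any of the given edge-length perceptions.

```python
from itertools import accumulate

def payoffs(locations, edge_lengths):
    """Path graph, one consumer per node; each consumer shops at the
    nearest player's location, with ties split equally."""
    pos = [0.0] + list(accumulate(edge_lengths))  # node coordinates
    pay = [0.0] * len(locations)
    for c in pos:
        dist = [abs(c - pos[loc]) for loc in locations]
        winners = [i for i, d in enumerate(dist) if d == min(dist)]
        for i in winners:
            pay[i] += 1 / len(winners)
    return pay

def is_robust_equilibrium(locations, perceptions, n_nodes):
    """Nash equilibrium under every perceived edge-length vector."""
    for lengths in perceptions:
        base = payoffs(locations, lengths)
        for i in range(len(locations)):
            for dev in range(n_nodes):
                trial = list(locations)
                trial[i] = dev
                if payoffs(trial, lengths)[i] > base[i] + 1e-9:
                    return False
    return True
```

On a three-node path, both players locating at the center node is a Nash equilibrium regardless of how the two edge lengths are perceived, echoing the minimal-differentiation observation discussed above.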

[Show BibTeX] @techreport{Robust13BR,

author = {Berno Buechel AND Nils Roehl},

title = {Robust Equilibria in Location Games},

year = {2013},

type = {Techreport UPB},

abstract = {In the framework of spatial competition, two or more players strategically choose a location in order to attract consumers. It is standardly assumed that consumers with the same favorite location fully agree on the ranking of all possible locations. To investigate the necessity of this questionable and restrictive assumption, we model heterogeneity in consumers’ distance perceptions by individual edge lengths of a given graph. A profile of location choices is called a “robust equilibrium” if it is a Nash equilibrium in several games which differ only by the consumers’ perceptions of distances. For a finite number of players and any distribution of consumers, we provide a full characterization of all robust equilibria and derive structural conditions for their existence. Furthermore, we discuss whether the classical observations of minimal differentiation and inefficiency are robust phenomena. Thereby, we find strong support for an old conjecture that in equilibrium firms form local clusters.}

}


Christoph Robbert:

**Ressource-Optimized Deployment of Multi-Tier Applications - The Data Rate-Constrained Case**
Bachelor thesis, University of Paderborn **(2013)**

[Show BibTeX]

@misc{Robbert2013,

author = {Christoph Robbert},

title = {Ressource-Optimized Deployment of Multi-Tier Applications - The Data Rate-Constrained Case},

year = {2013}

}


Artjom Terentjew:

**Reputationssysteme und Gerichtsverfahren als Wekzeuge zur Sicherstellung von Qualitätsstandards in Transaktionen**
Bachelor thesis, University of Paderborn **(2013)**

[Show BibTeX]

@misc{Terentjew13,

author = {Artjom Terentjew},

title = {Reputationssysteme und Gerichtsverfahren als Wekzeuge zur Sicherstellung von Qualit{\"a}tsstandards in Transaktionen},

year = {2013}

}


Markus Benter, Florentin Neumann, Hannes Frey:

**Reactive Planar Spanner Construction in Wireless Ad Hoc and Sensor Networks**
In Proceedings of the 32nd IEEE International Conference on Computer Communications (INFOCOM). IEEE Computer Society, pp. 2193-2201 **(2013)**

[Show Abstract]

Within reactive topology control, a node determines its adjacent edges of a network subgraph without prior knowledge of its neighborhood. The goal is to construct a local view on a topology which provides certain desired properties such as planarity. During algorithm execution, a node, in general, is not allowed to determine all its neighbors of the network graph. There are well-known reactive algorithms for computing planar subgraphs. However, the subgraphs obtained do not have constant Euclidean spanning ratio. This means that routing along these subgraphs may result in potentially long detours. So far, it has been unknown if planar spanners can be constructed reactively. In this work, we show that at least under the unit disk network model, this is indeed possible, by proposing an algorithm for reactive construction of the partial Delaunay triangulation, which recently turned out to be a spanner. Furthermore, we show that our algorithm is message-optimal as a node will only exchange messages with nodes that are also neighbors in the spanner. The algorithm’s presentation is complemented by a rigorous proof of correctness.
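The flavor of such local, planarity-preserving edge tests can be seen in the classic Gabriel graph rule (shown here as a simpler stand-in for the paper's partial Delaunay triangulation, which additionally guarantees a constant spanning ratio): node u keeps the edge to v only if no witness node lies inside the disk whose diameter is the segment uv.

```python
import math

def gabriel_neighbors(u, nodes):
    """Keep edge (u, v) iff no other node lies strictly inside the circle
    whose diameter is the segment uv (radio range ignored for brevity)."""
    kept = []
    for v in nodes:
        if v == u:
            continue
        mid = ((u[0] + v[0]) / 2, (u[1] + v[1]) / 2)
        radius = math.dist(u, v) / 2
        if all(w in (u, v) or math.dist(w, mid) > radius for w in nodes):
            kept.append(v)
    return kept

# The middle node witnesses the long edge, so only the short edge survives.
nodes = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
adj = gabriel_neighbors((0.0, 0.0), nodes)
```

Note the test is fully local: u only needs the positions of nearby nodes, which is what makes a reactive, beaconless realization possible; the Gabriel graph, however, is not a spanner, which is why the paper targets the partial Delaunay triangulation instead.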

[Show BibTeX] @inproceedings{Benter13,

author = {Markus Benter AND Florentin Neumann AND Hannes Frey},

title = {Reactive Planar Spanner Construction in Wireless Ad Hoc and Sensor Networks},

booktitle = {Proceedings of the 32nd IEEE International Conference on Computer Communications (INFOCOM)},

year = {2013},

pages = {2193-2201},

publisher = {IEEE Computer Society},

abstract = {Within reactive topology control, a node determines its adjacent edges of a network subgraph without prior knowledge of its neighborhood. The goal is to construct a local view on a topology which provides certain desired properties such as planarity. During algorithm execution, a node, in general, is not allowed to determine all its neighbors of the network graph. There are well-known reactive algorithms for computing planar subgraphs. However, the subgraphs obtained do not have constant Euclidean spanning ratio. This means that routing along these subgraphs may result in potentially long detours. So far, it has been unknown if planar spanners can be constructed reactively. In this work, we show that at least under the unit disk network model, this is indeed possible, by proposing an algorithm for reactive construction of the partial Delaunay triangulation, which recently turned out to be a spanner. Furthermore, we show that our algorithm is message-optimal as a node will only exchange messages with nodes that are also neighbors in the spanner. The algorithm’s presentation is complemented by a rigorous proof of correctness.}

}

[DOI]

Margarita Staschewski:

**Price Formation in the Restaurant Industry - An Empirical Analysis**
Bachelor thesis, University of Paderborn **(2013)**

[Show BibTeX]

@misc{Staschewski13,

author = {Margarita Staschewski},

title = {Price Formation in the Restaurant Industry - An Empirical Analysis},

year = {2013}

}


Andreas Blix:

**Optimale und adaptive binäre Bäume in Netzwerken**
Bachelor thesis, University of Paderborn **(2013)**

[Show BibTeX]

@misc{bsc2013Blix,

author = {Andreas Blix},

title = {Optimale und adaptive bin{\"a}re B{\"a}ume in Netzwerken},

year = {2013}

}


Sebastian Abshoff, Markus Benter, Manuel Malatyali, Friedhelm Meyer auf der Heide:

**On Two-Party Communication Through Dynamic Networks**
In Proceedings of the 17th International Conference on Principles of Distributed Systems (OPODIS). Springer, LNCS, vol. 8304, pp. 11-22 **(2013)**

[Show Abstract]

We study two-party communication in the context of directed dynamic networks that are controlled by an adaptive adversary. This adversary is able to change all edges as long as the networks stay strongly-connected in each round. In this work, we establish a relation between counting the total number of nodes in the network and the problem of exchanging tokens between two communication partners which communicate through a dynamic network. We show that the communication problem for a constant fraction of n tokens in a dynamic network with n nodes is at most as hard as counting the number of nodes in a dynamic network with at most 4n+3 nodes. For the proof, we construct a family of directed dynamic networks and apply a lower bound from two-party communication complexity.

[Show BibTeX] @inproceedings{DBLP:conf/opodis/AbshoffBMH13,

author = {Sebastian Abshoff AND Markus Benter AND Manuel Malatyali AND Friedhelm Meyer auf der Heide},

title = {On Two-Party Communication Through Dynamic Networks},

booktitle = {Proceedings of the 17th International Conference on Principles of Distributed Systems (OPODIS)},

year = {2013},

pages = {11-22},

publisher = {Springer},

month = {December},

abstract = {We study two-party communication in the context of directed dynamic networks that are controlled by an adaptive adversary. This adversary is able to change all edges as long as the networks stay strongly-connected in each round. In this work, we establish a relation between counting the total number of nodes in the network and the problem of exchanging tokens between two communication partners which communicate through a dynamic network. We show that the communication problem for a constant fraction of n tokens in a dynamic network with n nodes is at most as hard as counting the number of nodes in a dynamic network with at most 4n+3 nodes. For the proof, we construct a family of directed dynamic networks and apply a lower bound from two-party communication complexity.},

series = {LNCS}

}

[DOI]

Philip Wette, Holger Karl:

**On the Quality of Selfish Virtual Topology Reconfiguration in IP-over-WDM Networks**
In Proceedings of the 19th IEEE International Workshop on Local and Metropolitan Area Networks (IEEE LANMAN). IEEE Computer Society, pp. 1-6 **(2013)**

[Show Abstract]

The process of planning a virtual topology for a Wavelength Division Multiplexing (WDM) network is called Virtual Topology Design (VTD). The goal of VTD is to find a virtual topology that supports forwarding the expected traffic without congestion. In networks with fluctuating, high traffic demands, it can happen that no single topology fits all changing traffic demands occurring over a longer time. Thus, during operation, the virtual topology has to be reconfigured. Since modern networks tend to be large, VTD algorithms have to scale well with increasing network size, requiring distributed algorithms. Existing distributed VTD algorithms, however, react too slowly on congestion for the real-time reconfiguration of large networks. We propose Selfish Virtual Topology Reconfiguration (SVTR) as a new algorithm for distributed VTD. It combines reconfiguring the virtual topology and routing through a Software Defined Network (SDN). SVTR is used for online, on-the-fly network reconfiguration. Its integrated routing and WDM reconfiguration keeps connection disruption due to network reconfiguration to a minimum and is able to react very quickly to traffic pattern changes. SVTR works by iteratively adapting the virtual topology to the observed traffic patterns without global traffic information and without future traffic estimations. We evaluated SVTR by simulation and found that it significantly lowers congestion in realistic networks and high load scenarios.

[Show BibTeX] @inproceedings{wette2013b,

author = {Philip Wette AND Holger Karl},

title = {On the Quality of Selfish Virtual Topology Reconfiguration in IP-over-WDM Networks},

booktitle = {Proceedings of the 19th IEEE International Workshop on Local and Metropolitan Area Networks (IEEE LANMAN)},

year = {2013},

pages = {1-6},

publisher = {IEEE Computer Society},

abstract = {The process of planning a virtual topology for a Wavelength Division Multiplexing (WDM) network is called Virtual Topology Design (VTD). The goal of VTD is to find a virtual topology that supports forwarding the expected traffic without congestion. In networks with fluctuating, high traffic demands, it can happen that no single topology fits all changing traffic demands occurring over a longer time. Thus, during operation, the virtual topology has to be reconfigured. Since modern networks tend to be large, VTD algorithms have to scale well with increasing network size, requiring distributed algorithms. Existing distributed VTD algorithms, however, react too slowly on congestion for the real-time reconfiguration of large networks. We propose Selfish Virtual Topology Reconfiguration (SVTR) as a new algorithm for distributed VTD. It combines reconfiguring the virtual topology and routing through a Software Defined Network (SDN). SVTR is used for online, on-the-fly network reconfiguration. Its integrated routing and WDM reconfiguration keeps connection disruption due to network reconfiguration to a minimum and is able to react very quickly to traffic pattern changes. SVTR works by iteratively adapting the virtual topology to the observed traffic patterns without global traffic information and without future traffic estimations. We evaluated SVTR by simulation and found that it significantly lowers congestion in realistic networks and high load scenarios.}

}

[DOI]

Marcus Autenrieth, Hannes Frey:

In Proceedings of the Conference on Networked Systems (NetSys). IEEE Computer Society, pp. 126-131

[Show Abstract]

**On Greedy Routing in Degree-bounded Graphs over d-Dimensional Internet Coordinate Embeddings**In Proceedings of the Conference on Networked Systems (NetSys). IEEE Computer Society, pp. 126-131

**(2013)**[Show Abstract]

In this paper we will introduce a new d-dimensional graph for constructing geometric application layer overlay networks. Our approach will use internet coordinates, embedded using the L∞-metric. After describing the graph structure, we will show how it limits maintenance overhead by bounding each node’s out-degree and how it supports greedy routing using one-hop neighbourhood information in each routing step. We will further show that greedy routing can always compute a path in our graph and we will also prove that in each forwarding step the next hop is closer to the destination than the current node.

[Show BibTeX] @inproceedings{autenrieth13a,

author = {Marcus Autenrieth AND Hannes Frey},

title = {On Greedy Routing in Degree-bounded Graphs over d-Dimensional Internet Coordinate Embeddings},

booktitle = {Proceedings of the Conference on Networked Systems (NetSys)},

year = {2013},

pages = {126-131},

publisher = {IEEE Computer Society},

abstract = {In this paper we will introduce a new d-dimensional graph for constructing geometric application layer overlay networks. Our approach will use internet coordinates, embedded using the L∞-metric. After describing the graph structure, we will show how it limits maintenance overhead by bounding each node’s out-degree and how it supports greedy routing using one-hop neighbourhood information in each routing step. We will further show that greedy routing can always compute a path in our graph and we will also prove that in each forwarding step the next hop is closer to the destination than the current node.}

}

[DOI]

Chintan Jayesh Parekh:

Master's thesis, University of Paderborn

[Show BibTeX]

**Meta-data based Search in Structured Peer-to-Peer Networks**Master's thesis, University of Paderborn

**(2013)**[Show BibTeX]

@mastersthesis{msc2013Parekh,

author = {Chintan Jayesh Parekh},

title = {Meta-data based Search in Structured Peer-to-Peer Networks},

school = {University of Paderborn},

year = {2013}

}


Malte Splietker:

Bachelor thesis, University of Paderborn

[Show BibTeX]

**MapReduce in Software Defined Networks**Bachelor thesis, University of Paderborn

**(2013)**[Show BibTeX]

@misc{Splietker2013,

author = {Malte Splietker},

title = {MapReduce in Software Defined Networks},

year = {2013}

}


Elvira Herzog:

Bachelor thesis, University of Paderborn

[Show BibTeX]

**Lösungsverfahren für das many-to-one Matching Problem**Bachelor thesis, University of Paderborn

**(2013)**[Show BibTeX]

@misc{Herzog13,

author = {Elvira Herzog},

title = {L{\"o}sungsverfahren f{\"u}r das many-to-one Matching Problem},

year = {2013}

}


Chen Avin, Bernhard Haeupler, Zvi Lotker, Christian Scheideler, Stefan Schmid:

In Proceedings of the 27th IEEE International Parallel and Distributed Processing Symposium (IPDPS). IEEE Computer Society, pp. 395-406

[Show Abstract]

**Locally Self-Adjusting Tree Networks**In Proceedings of the 27th IEEE International Parallel and Distributed Processing Symposium (IPDPS). IEEE Computer Society, pp. 395-406

**(2013)**[Show Abstract]

This paper initiates the study of self-adjusting networks (or distributed data structures) whose topologies dynamically adapt to a communication pattern $\sigma$. We present a fully decentralized self-adjusting solution called SplayNet. A SplayNet is a distributed generalization of the classic splay tree concept. It ensures short paths (which can be found using local-greedy routing) between communication partners while minimizing topological rearrangements. We derive an upper bound for the amortized communication cost of a SplayNet based on empirical entropies of $\sigma$, and show that SplayNets have several interesting convergence properties. For instance, SplayNets feature a provable online optimality under special request scenarios. We also investigate the optimal static network and prove different lower bounds for the average communication cost based on graph cuts and on the empirical entropy of the communication pattern $\sigma$. From these lower bounds it follows, e.g., that SplayNets are optimal in scenarios where the requests follow a product distribution as well. Finally, this paper shows that in contrast to the Minimum Linear Arrangement problem which is generally NP-hard, the optimal static tree network can be computed in polynomial time for any guest graph, despite the exponentially large graph family. We complement our formal analysis with a small simulation study on a Facebook graph.

[Show BibTeX] @inproceedings{AHLSS2013IPDPS,

author = {Chen Avin AND Bernhard Haeupler AND Zvi Lotker AND Christian Scheideler AND Stefan Schmid},

title = {Locally Self-Adjusting Tree Networks},

booktitle = {Proceedings of the 27th IEEE International Parallel and Distributed Processing Symposium (IPDPS)},

year = {2013},

pages = {395-406},

publisher = {IEEE Computer Society},

abstract = {This paper initiates the study of self-adjusting networks (or distributed data structures) whose topologies dynamically adapt to a communication pattern $\sigma$. We present a fully decentralized self-adjusting solution called SplayNet. A SplayNet is a distributed generalization of the classic splay tree concept. It ensures short paths (which can be found using local-greedy routing) between communication partners while minimizing topological rearrangements. We derive an upper bound for the amortized communication cost of a SplayNet based on empirical entropies of $\sigma$, and show that SplayNets have several interesting convergence properties. For instance, SplayNets feature a provable online optimality under special request scenarios. We also investigate the optimal static network and prove different lower bounds for the average communication cost based on graph cuts and on the empirical entropy of the communication pattern $\sigma$. From these lower bounds it follows, e.g., that SplayNets are optimal in scenarios where the requests follow a product distribution as well. Finally, this paper shows that in contrast to the Minimum Linear Arrangement problem which is generally NP-hard, the optimal static tree network can be computed in polynomial time for any guest graph, despite the exponentially large graph family. We complement our formal analysis with a small simulation study on a Facebook graph.}

}

[DOI]

Peter Pietrzyk:

PhD thesis, University of Paderborn

[Show Abstract]

**Local and Online Algorithms for Facility Location**PhD thesis, University of Paderborn

**(2013)**[Show Abstract]

This thesis addresses the Facility Location problem, an optimization problem in which one must decide at which positions resources are made available so that users can reach them easily. The goal is to minimize the costs that arise, on the one hand, from providing resources and, on the other hand, from the connection costs between users and resources. The difficulty of the problem lies in wanting to provide as few resources as possible while at the same time ensuring that users are not located too far away from resources, since that would incur high connection costs. The Facility Location problem has already been studied intensively in many different variants. In this thesis, three variants of the problem are modeled, and new algorithms are developed for them and analyzed with respect to their approximation factor and running time. Each of the three variants has a particular focus. The first variant is an online problem: the input is not known from the start but is revealed step by step. The difficulty here consists in having to make irrevocable decisions without knowing the future, while still being able to provide a good solution at all times. The second variant focuses on locality, which is of great importance in, e.g., sensor networks; here, a solution is to be computed in a distributed fashion using only local information. Finally, the third variant deals with a distributed computation in which only a strictly limited amount of data may be sent, while a very good approximation factor must nevertheless be achieved. The techniques used to analyze the approximation factors and the competitiveness are largely based on bounding the primal solution with the help of a solution to the corresponding dual problem. Locality is modeled using the widely used LOCAL model, in which subpolynomial upper bounds on the running time of the algorithms are shown.

[Show BibTeX] @phdthesis{SD2012,

author = {Peter Pietrzyk},

title = {Local and Online Algorithms for Facility Location},

school = {University of Paderborn},

year = {2013},

abstract = {Diese Arbeit besch{\"a}ftigt sich mit dem Facility Location Problem. Dies ist ein Optimierungsproblem, bei dem festgelegt werden muss an welchen Positionen Ressourcen zur Verf{\"u}gung gestellt werden, so dass diese von Nutzern gut erreicht werden k{\"o}nnen. Es sollen dabei Kosten minimiert werden, die zum einen durch Bereitstellung von Ressourcen und zum anderen durch Verbindungskosten zwischen Nutzern und Ressourcen entstehen. Die Schwierigkeit des Problems liegt darin, dass man einerseits m{\"o}glichst wenige Ressourcen zur Verf{\"u}gung stellen m{\"o}chte, andererseits daf{\"u}r sorgen muss, dass sich Nutzer nicht all zu weit weg von Ressourcen befinden. Dies w{\"u}rde n{\"a}mlich hohe Verbindungskosten nach sich ziehen. Das Facility Location Problem wurde bereits sehr intensiv in vielen unterschiedlichen Varianten untersucht. In dieser Arbeit werden drei Varianten des Problems modelliert und neue Algorithmen f{\"u}r sie entwickelt und bez{\"u}glich ihres Approximationsfaktors und ihrer Laufzeit analysiert. Jede dieser drei untersuchten Varianten hat einen besonderen Schwerpunkt. Bei der ersten Varianten handelt es sich um ein Online Problem, da hier die Eingabe nicht von Anfang an bekannt ist, sondern Schritt f{\"u}r Schritt enth{\"u}llt wird. Die Schwierigkeit hierbei besteht darin unwiderrufliche Entscheidungen treffen zu m{\"u}ssen ohne dabei die Zukunft zu kennen und trotzdem eine zu jeder Zeit gute L{\"o}sung angeben zu k{\"o}nnen. Der Schwerpunkt der zweiten Variante liegt auf Lokalit{\"a}t, die z.B. in Sensornetzwerken von großer Bedeutung ist. Hier soll eine L{\"o}sung verteilt und nur mit Hilfe von lokalen Information berechnet werden. Schließlich besch{\"a}ftigt sich die dritte Variante mit einer verteilten Berechnung, bei welcher nur eine stark beschr{\"a}nkte Datenmenge verschickt werden darf und dabei trotzdem ein sehr guter Approximationsfaktor erreicht werden muss. Die bei der Analyse der Approximationsfaktoren bzw. 
der Kompetitivit{\"a}t verwendeten Techniken basieren zum großen Teil auf Absch{\"a}tzung der primalen L{\"o}sung mit Hilfe einer L{\"o}sung des zugeh{\"o}rigen dualen Problems. F{\"u}r die Modellierung von Lokalit{\"a}t wird das weitverbreitete LOCAL Modell verwendet. In diesem Modell werden f{\"u}r die Algorithmen subpolynomielle obere Laufzeitschranken gezeigt.}

}

[DOI]

Philip Wette, Holger Karl:

In Proceedings of the 32nd IEEE International Conference on Computer Communications (INFOCOM). IEEE Computer Society, pp. 51-52

[Show Abstract]

**Incorporating feedback from application layer into routing and wavelength assignment algorithms**In Proceedings of the 32nd IEEE International Conference on Computer Communications (INFOCOM). IEEE Computer Society, pp. 51-52

**(2013)**[Show Abstract]

Preemptive Routing and Wavelength Assignment (RWA) algorithms preempt established lightpaths in case not enough resources are available to set up a new lightpath in a Wavelength Division Multiplexing (WDM) network. The selection of lightpaths to be preempted relies on internal decisions of the RWA algorithm. Thus, if dedicated properties of the network topology are required by the applications running on the network, these requirements have to be known to the RWA algorithm. Otherwise it might happen that by preempting a particular lightpath these requirements are violated. If, however, these requirements include parameters known only at the nodes running the application, the RWA algorithm cannot evaluate the requirements. For this reason an RWA algorithm is needed which incorporates feedback from the application layer in the preemption decisions.

This work proposes a simple interface along with an algorithm for computing and selecting preemption candidates in case a lightpath cannot be established. We reason about the necessity of using information from the application layer in the RWA and present two example applications which benefit from this idea.

[Show BibTeX] @inproceedings{PWHK-2013a,

author = {Philip Wette AND Holger Karl},

title = {Incorporating feedback from application layer into routing and wavelength assignment algorithms},

booktitle = {Proceedings of the 32nd IEEE International Conference on Computer Communications (INFOCOM)},

year = {2013},

pages = {51-52},

publisher = {IEEE Computer Society},

abstract = {Preemptive Routing and Wavelength Assignment (RWA) algorithms preempt established lightpaths in case not enough resources are available to set up a new lightpath in a Wavelength Division Multiplexing (WDM) network. The selection of lightpaths to be preempted relies on internal decisions of the RWA algorithm. Thus, if dedicated properties of the network topology are required by the applications running on the network, these requirements have to be known to the RWA algorithm. Otherwise it might happen that by preempting a particular lightpath these requirements are violated. If, however, these requirements include parameters known only at the nodes running the application, the RWA algorithm cannot evaluate the requirements. For this reason an RWA algorithm is needed which incorporates feedback from the application layer in the preemption decisions. This work proposes a simple interface along with an algorithm for computing and selecting preemption candidates in case a lightpath cannot be established. We reason about the necessity of using information from the application layer in the RWA and present two example applications which benefit from this idea.}

}


Matthias Feldotto:

Master's thesis, University of Paderborn

[Show BibTeX]

**HSkip+: A Self-Stabilizing Overlay Network for Nodes with Heterogeneous Bandwidths**Master's thesis, University of Paderborn

**(2013)**[Show BibTeX]

@mastersthesis{msc2013Feldotto,

author = {Matthias Feldotto},

title = {HSkip+: A Self-Stabilizing Overlay Network for Nodes with Heterogeneous Bandwidths},

school = {University of Paderborn},

year = {2013}

}


Marios Mavronicolas, Burkhard Monien, Vicky Papadopoulou Lesta:

In

[Show Abstract]

**How many attackers can selfish defenders catch?**In

*Discrete Applied Mathematics*, vol. 161, pp. 2563-2586. Elsevier**(2013)**[Show Abstract]

In a distributed system with attacks and defenses, both attackers and defenders are self-interested entities. We assume a reward-sharing scheme among interdependent defenders; each defender wishes to (locally) maximize her own total fair share to the attackers extinguished due to her involvement (and possibly due to those of others). What is the maximum amount of protection achievable by a number of such defenders against a number of attackers while the system is in a Nash equilibrium? As a measure of system protection, we adopt the Defense-Ratio (Mavronicolas et al., 2008)[20], which provides the expected (inverse) proportion of attackers caught by the defenders. In a Defense-Optimal Nash equilibrium, the Defense-Ratio matches a simple lower bound.

We discover that the existence of Defense-Optimal Nash equilibria depends in a subtle way on how the number of defenders compares to two natural graph-theoretic thresholds we identify. In this vein, we obtain, through a combinatorial analysis of Nash equilibria, a collection of trade-off results:

• When the number of defenders is either sufficiently small or sufficiently large, Defense-Optimal Nash equilibria may exist. The corresponding decision problem is computationally tractable for a large number of defenders; the problem becomes NP-complete for a small number of defenders and the intractability is inherited from a previously unconsidered combinatorial problem in Fractional Graph Theory.

• Perhaps paradoxically, there is a middle range of values for the number of defenders where Defense-Optimal Nash equilibria do not exist.

[Show BibTeX] @article{MMP2013,

author = {Marios Mavronicolas AND Burkhard Monien AND Vicky Papadopoulou Lesta},

title = {How many attackers can selfish defenders catch?},

journal = {Discrete Applied Mathematics},

year = {2013},

volume = {161},

pages = {2563-2586},

abstract = {In a distributed system with attacks and defenses, both attackers and defenders are self-interested entities. We assume a reward-sharing scheme among interdependent defenders; each defender wishes to (locally) maximize her own total fair share to the attackers extinguished due to her involvement (and possibly due to those of others). What is the maximum amount of protection achievable by a number of such defenders against a number of attackers while the system is in a Nash equilibrium? As a measure of system protection, we adopt the Defense-Ratio (Mavronicolas et al., 2008)[20], which provides the expected (inverse) proportion of attackers caught by the defenders. In a Defense-Optimal Nash equilibrium, the Defense-Ratio matches a simple lower bound. We discover that the existence of Defense-Optimal Nash equilibria depends in a subtle way on how the number of defenders compares to two natural graph-theoretic thresholds we identify. In this vein, we obtain, through a combinatorial analysis of Nash equilibria, a collection of trade-off results: • When the number of defenders is either sufficiently small or sufficiently large, Defense-Optimal Nash equilibria may exist. The corresponding decision problem is computationally tractable for a large number of defenders; the problem becomes NP-complete for a small number of defenders and the intractability is inherited from a previously unconsidered combinatorial problem in Fractional Graph Theory. • Perhaps paradoxically, there is a middle range of values for the number of defenders where Defense-Optimal Nash equilibria do not exist.}

}


Friedhelm Meyer auf der Heide, Kamil Swierkot:

**Hierarchies in Local Distributed Decision**

In *ArXiv e-prints* **(2013)** (eprint arXiv:1311.7229)

We study the complexity theory for the local distributed setting introduced by Korman, Peleg and Fraigniaud. They have defined three complexity classes LD (Local Decision), NLD (Nondeterministic Local Decision) and NLD^#n. The class LD consists of all languages which can be decided with a constant number of communication rounds. The class NLD consists of all languages which can be verified by a nondeterministic algorithm with a constant number of communication rounds. In order to define the nondeterministic classes, they have transferred the notion of nondeterminism into the distributed setting by the use of certificates and verifiers. The class NLD^#n consists of all languages which can be verified by a nondeterministic algorithm where each node has access to an oracle for the number of nodes. They have shown the hierarchy LD subset NLD subset NLD^#n. Our main contributions are strict hierarchies within the classes defined by Korman, Peleg and Fraigniaud. We define additional complexity classes: the class LD(t) consists of all languages which can be decided with at most t communication rounds. The class NLD-O(f) consists of all languages which can be verified by a local verifier such that the size of the certificates that are needed to verify the language is bounded by a function from O(f). Our main results are refined strict hierarchies within these nondeterministic classes.

@article{hniid7931,

author = {Friedhelm Meyer auf der Heide AND Kamil Swierkot},

title = {Hierarchies in Local Distributed Decision},

journal = {ArXiv e-prints},

year = {2013},

month = {nov},

note = {eprint arXiv:1311.7229},

abstract = {We study the complexity theory for the local distributed setting introduced by Korman, Peleg and Fraigniaud. They have defined three complexity classes LD (Local Decision), NLD (Nondeterministic Local Decision) and NLD^#n. The class LD consists of all languages which can be decided with a constant number of communication rounds. The class NLD consists of all languages which can be verified by a nondeterministic algorithm with a constant number of communication rounds. In order to define the nondeterministic classes, they have transferred the notion of nondeterminism into the distributed setting by the use of certificates and verifiers. The class NLD^#n consists of all languages which can be verified by a nondeterministic algorithm where each node has access to an oracle for the number of nodes. They have shown the hierarchy LD subset NLD subset NLD^#n. Our main contributions are strict hierarchies within the classes defined by Korman, Peleg and Fraigniaud. We define additional complexity classes: the class LD(t) consists of all languages which can be decided with at most t communication rounds. The class NLD-O(f) consists of all languages which can be verified by a local verifier such that the size of the certificates that are needed to verify the language is bounded by a function from O(f). Our main results are refined strict hierarchies within these nondeterministic classes.}

}


Tim Niklas Vinkemeier:

**Haptics - Hadoop performance testing in concurrent job scenarios**

Bachelor thesis, University of Paderborn **(2013)**

@misc{Vinkemeier2013,

author = {Tim Niklas Vinkemeier},

title = {Haptics - Hadoop performance testing in concurrent job scenarios},

howpublished = {Bachelor thesis, University of Paderborn},

year = {2013}

}


Alexander Mäcker:

**Greedy Network Creation With Heavy And Light Edges**

Master's thesis, University of Paderborn **(2013)**

@mastersthesis{msc2013maecker,

author = {Alexander M{\"a}cker},

title = {Greedy Network Creation With Heavy And Light Edges},

school = {University of Paderborn},

year = {2013}

}


Suhas Satya:

**Emulating Wavelength Division Multiplexing using Openflow**

Master's thesis, University of Paderborn **(2013)**

@mastersthesis{Satya2013,

author = {Suhas Satya},

title = {Emulating Wavelength Division Multiplexing using Openflow},

school = {University of Paderborn},

year = {2013}

}


Max Reineke:

**Effizienzsteigerung durch gewichtete Produktbewertungen**

Master's thesis, University of Paderborn **(2013)**

@mastersthesis{Reineke13,

author = {Max Reineke},

title = {Effizienzsteigerung durch gewichtete Produktbewertungen},

school = {University of Paderborn},

year = {2013}

}


Nadja Maraun:

**Dynamic One-to-One Matching: Theory and a Job Market Application**

Master's thesis, University of Paderborn **(2013)**

@mastersthesis{Maraun13,

author = {Nadja Maraun},

title = {Dynamic One-to-One Matching: Theory and a Job Market Application},

school = {University of Paderborn},

year = {2013}

}


Stefan Heindorf:

**Dispersion of Multi-Robot Teams**

Master's thesis, University of Paderborn **(2013)**

@mastersthesis{msc2013Heindorf,

author = {Stefan Heindorf},

title = {Dispersion of Multi-Robot Teams},

school = {University of Paderborn},

year = {2013}

}


Tobias Kornhoff:

**Der Einfluss adaptierter Erwartungen in dynamischen Cournot Oligopolen**

Bachelor thesis, University of Paderborn **(2013)**

@misc{Kornhoff13,

author = {Tobias Kornhoff},

title = {Der Einfluss adaptierter Erwartungen in dynamischen Cournot Oligopolen},

howpublished = {Bachelor thesis, University of Paderborn},

year = {2013}

}


Sonja Brangewitz, Claus-Jochen Haake:

**Cooperative Transfer Price Negotiations under Incomplete Information**

Techreport UPB **(2013)**

In this paper, we analyze a model in which two divisions negotiate over an intrafirm transfer price for an intermediate product. Formally, we consider bargaining problems under incomplete information, since the upstream division's (seller's) costs and downstream division's (buyer's) revenues are supposed to be private information. Assuming two possible types for buyer and seller each, we first establish that the bargaining problem is regular, regardless of whether incentive and/or efficiency constraints are imposed. This allows us to apply the generalized Nash bargaining solution to determine transfer payments and transfer probabilities. Furthermore, we derive general properties of this solution for the transfer pricing problem and compare the model developed here with the existing literature for negotiated transfer pricing under incomplete information. In particular, we focus on the models presented in Wagenhofer (1994).

@techreport{BG12BG,

author = {Sonja Brangewitz AND Claus-Jochen Haake},

title = {Cooperative Transfer Price Negotiations under Incomplete Information},

year = {2013},

type = {Techreport UPB},

institution = {University of Paderborn},

abstract = {In this paper, we analyze a model in which two divisions negotiate over an intrafirm transfer price for an intermediate product. Formally, we consider bargaining problems under incomplete information, since the upstream division's (seller's) costs and downstream division's (buyer's) revenues are supposed to be private information. Assuming two possible types for buyer and seller each, we first establish that the bargaining problem is regular, regardless of whether incentive and/or efficiency constraints are imposed. This allows us to apply the generalized Nash bargaining solution to determine transfer payments and transfer probabilities. Furthermore, we derive general properties of this solution for the transfer pricing problem and compare the model developed here with the existing literature for negotiated transfer pricing under incomplete information. In particular, we focus on the models presented in Wagenhofer (1994).}

}


Kalman Graffi, Vitaliy Rapp:

**Continuous Gossip-based Aggregation through Dynamic Information Aging**

In Proceedings of the International Conference on Computer Communications and Networks (ICCCN'13). IEEE Computer Society, pp. 1-7 **(2013)**

Existing solutions for gossip-based aggregation in peer-to-peer networks use epochs to calculate a global estimation from an initial static set of local values. Once the estimation converges system-wide, a new epoch is started with fresh initial values. Long epochs result in precise estimations based on old measurements and short epochs result in imprecise aggregated estimations. In contrast to this approach, we present in this paper a continuous, epoch-less approach which considers fresh local values in every round of the gossip-based aggregation. By using an approach for dynamic information aging, inaccurate values and values from peers that have left fade from the aggregation memory. Evaluation shows that the presented approach for continuous information aggregation in peer-to-peer systems monitors the system performance precisely, adapts to changes and is lightweight to operate.

@inproceedings{RappGraffi13,

author = {Kalman Graffi AND Vitaliy Rapp},

title = {Continuous Gossip-based Aggregation through Dynamic Information Aging},

booktitle = {Proceedings of the International Conference on Computer Communications and Networks (ICCCN'13)},

year = {2013},

pages = {1-7},

publisher = {IEEE Computer Society},

abstract = {Existing solutions for gossip-based aggregation in peer-to-peer networks use epochs to calculate a global estimation from an initial static set of local values. Once the estimation converges system-wide, a new epoch is started with fresh initial values. Long epochs result in precise estimations based on old measurements and short epochs result in imprecise aggregated estimations. In contrast to this approach, we present in this paper a continuous, epoch-less approach which considers fresh local values in every round of the gossip-based aggregation. By using an approach for dynamic information aging, inaccurate values and values from peers that have left fade from the aggregation memory. Evaluation shows that the presented approach for continuous information aggregation in peer-to-peer systems monitors the system performance precisely, adapts to changes and is lightweight to operate.}

}


Matthias Feldotto, Kalman Graffi:

**Comparative Evaluation of Peer-to-Peer Systems Using PeerfactSim.KOM**

In Proceedings of the International Conference on High Performance Computing and Simulation (HPCS'13). IEEE Computer Society, pp. 99-106 **(2013)**

Comparative evaluations of peer-to-peer protocols through simulations are a viable approach to judge the performance and costs of the individual protocols in large-scale networks. In order to support this work, we enhanced the peer-to-peer systems simulator PeerfactSim.KOM with a fine-grained analyzer concept, with exhaustive automated measurements and gnuplot generators as well as a coordination control to evaluate a set of experiment setups in parallel. Thus, by configuring all experiments and protocols only once and starting the simulator, all desired measurements are performed, analyzed, evaluated and combined, resulting in a holistic environment for the comparative evaluation of peer-to-peer systems.

@inproceedings{FeldGraffi13,

author = {Matthias Feldotto AND Kalman Graffi},

title = {Comparative Evaluation of Peer-to-Peer Systems Using PeerfactSim.KOM},

booktitle = {Proceedings of the International Conference on High Performance Computing and Simulation (HPCS'13)},

year = {2013},

pages = {99-106},

publisher = {IEEE Computer Society},

abstract = {Comparative evaluations of peer-to-peer protocols through simulations are a viable approach to judge the performance and costs of the individual protocols in large-scale networks. In order to support this work, we enhanced the peer-to-peer systems simulator PeerfactSim.KOM with a fine-grained analyzer concept, with exhaustive automated measurements and gnuplot generators as well as a coordination control to evaluate a set of experiment setups in parallel. Thus, by configuring all experiments and protocols only once and starting the simulator, all desired measurements are performed, analyzed, evaluated and combined, resulting in a holistic environment for the comparative evaluation of peer-to-peer systems.}

}


Fritz Blumentritt:

**Cliquenbildung in verteilten Systemen**

Bachelor thesis, University of Paderborn **(2013)**

@misc{bsc2013Blumentritt,

author = {Fritz Blumentritt},

title = {Cliquenbildung in verteilten Systemen},

howpublished = {Bachelor thesis, University of Paderborn},

year = {2013}

}


Kalman Graffi, Markus Benter, Mohammad Divband, Sebastian Kniesburges, Andreas Koutsopoulos:

**Ca-Re-Chord: A Churn Resistant Self-stabilizing Chord Overlay Network**

In Proceedings of the Conference on Networked Systems (NetSys). IEEE Computer Society, pp. 27-34 **(2013)**

Self-stabilization is the property of a system to transfer itself, regardless of the initial state, into a legitimate state. Chord, as a simple, decentralized and scalable distributed hash table, is an ideal showcase to introduce self-stabilization for p2p overlays. In this paper, we present Re-Chord, a self-stabilizing version of Chord. We show that the stabilization process is functional, but prone to strong churn. To address this, we present Ca-Re-Chord, a churn-resistant version of Re-Chord that allows the creation of a useful DHT in any kind of graph regardless of the initial state. Simulation results attest the churn resistance and good performance of Ca-Re-Chord.

@inproceedings{benter13a,

author = {Kalman Graffi AND Markus Benter AND Mohammad Divband AND Sebastian Kniesburges AND Andreas Koutsopoulos},

title = {Ca-Re-Chord: A Churn Resistant Self-stabilizing Chord Overlay Network},

booktitle = {Proceedings of the Conference on Networked Systems (NetSys)},

year = {2013},

pages = {27-34},

publisher = {IEEE Computer Society},

abstract = {Self-stabilization is the property of a system to transfer itself, regardless of the initial state, into a legitimate state. Chord, as a simple, decentralized and scalable distributed hash table, is an ideal showcase to introduce self-stabilization for p2p overlays. In this paper, we present Re-Chord, a self-stabilizing version of Chord. We show that the stabilization process is functional, but prone to strong churn. To address this, we present Ca-Re-Chord, a churn-resistant version of Re-Chord that allows the creation of a useful DHT in any kind of graph regardless of the initial state. Simulation results attest the churn resistance and good performance of Ca-Re-Chord.}

}


Kalman Graffi, Timo Klerx:

**Bootstrapping Skynet: Calibration and Autonomic Self-Control of Structured Peer-to-Peer Networks**

In Proceedings of the International Conference on Peer-to-Peer Computing (P2P'13). IEEE Computer Society, pp. 1-5 **(2013)**

Peer-to-peer systems scale to millions of nodes and provide routing and storage functions with best effort quality. In order to provide a guaranteed quality of the overlay functions, even under strong dynamics in the network with regard to peer capacities, online participation and usage patterns, we propose to calibrate the peer-to-peer overlay and to autonomously learn which qualities can be reached. For that, we simulate the peer-to-peer overlay systematically under a wide range of parameter configurations and use neural networks to learn the effects of the configurations on the quality metrics. Thus, by choosing a specific quality setting by the overlay operator, the network can tune itself to the learned parameter configurations that lead to the desired quality. Evaluation shows that the presented self-calibration succeeds in learning the configuration-quality interdependencies and that peer-to-peer systems can learn and adapt their behavior according to desired quality goals.

@inproceedings{KlerxGraffi13,

author = {Kalman Graffi AND Timo Klerx},

title = {Bootstrapping Skynet: Calibration and Autonomic Self-Control of Structured Peer-to-Peer Networks},

booktitle = {Proceedings of the International Conference on Peer-to-Peer Computing (P2P'13)},

year = {2013},

pages = {1-5},

publisher = {IEEE Computer Society},

abstract = {Peer-to-peer systems scale to millions of nodes and provide routing and storage functions with best effort quality. In order to provide a guaranteed quality of the overlay functions, even under strong dynamics in the network with regard to peer capacities, online participation and usage patterns, we propose to calibrate the peer-to-peer overlay and to autonomously learn which qualities can be reached. For that, we simulate the peer-to-peer overlay systematically under a wide range of parameter configurations and use neural networks to learn the effects of the configurations on the quality metrics. Thus, by choosing a specific quality setting by the overlay operator, the network can tune itself to the learned parameter configurations that lead to the desired quality. Evaluation shows that the presented self-calibration succeeds in learning the configuration-quality interdependencies and that peer-to-peer systems can learn and adapt their behavior according to desired quality goals.}

}


Kevin Meckenstock:

**Auktionen im Beschaffungsmanagement - Eine spieltheoretische Analyse**

Bachelor thesis, University of Paderborn **(2013)**

@misc{Meckenstock13,

author = {Kevin Meckenstock},

title = {Auktionen im Beschaffungsmanagement - Eine spieltheoretische Analyse},

howpublished = {Bachelor thesis, University of Paderborn},

year = {2013}

}


Sonja Brangewitz, Jan-Philip Gamp:

**Asymmetric Nash bargaining solutions and competitive payoffs**

In *Economics Letters*, vol. 121, no. 2, pp. 224-227. Elsevier **(2013)**

We establish a link between cooperative and competitive behavior. For every possible vector of weights of an asymmetric Nash bargaining solution there exists a market that has this asymmetric Nash bargaining solution as its unique competitive payoff vector.

@article{SBJG13,

author = {Sonja Brangewitz AND Jan-Philip Gamp},

title = {Asymmetric Nash bargaining solutions and competitive payoffs},

journal = {Economics Letters},

year = {2013},

volume = {121},

number = {2},

pages = {224-227},

abstract = {We establish a link between cooperative and competitive behavior. For every possible vector of weights of an asymmetric Nash bargaining solution there exists a market that has this asymmetric Nash bargaining solution as its unique competitive payoff vector.}

}


Alexander Setzer:

**Approximation Algorithms for the Linear Arrangement of Special Classes of Graphs**

Master's thesis, University of Paderborn **(2013)**

@mastersthesis{msc2013Setzer,

author = {Alexander Setzer},

title = {Approximation Algorithms for the Linear Arrangement of Special Classes of Graphs},

school = {University of Paderborn},

year = {2013}

}


Paola Flocchini, Jie Gao, Evangelos Kranakis, Friedhelm Meyer auf der Heide (eds.):

**Algorithms for Sensor Systems - 9th International Symposium on Algorithms and Experiments for Sensor Systems, Wireless Networks and Distributed Robotics**

Springer, LNCS, vol. 8243 **(2013)**

@proceedings{FGKM2013,

title = {Algorithms for Sensor Systems - 9th International Symposium on Algorithms and Experiments for Sensor Systems, Wireless Networks and Distributed Robotics},

year = {2013},

editor = {Paola Flocchini AND Jie Gao AND Evangelos Kranakis AND Friedhelm Meyer auf der Heide},

publisher = {Springer},

series = {LNCS},

volume = {8243},

month = {September}

}


Philip Wette, Kalman Graffi:

In Proceedings of the Conference on Networked Systems (NetSys). IEEE Computer Society, pp. 35-42

[Show Abstract]

**Adding Capacity-Aware Storage Indirection to Homogeneous Distributed Hash Tables**In Proceedings of the Conference on Networked Systems (NetSys). IEEE Computer Society, pp. 35-42

**(2013)**[Show Abstract]

Distributed hash tables are very versatile to use, as distributed storage is a desirable feature for various applications. Typical structured overlays like Chord, Pastry or Kademlia consider only homogeneous nodes with equal capacities, which does not resemble reality. In a practical use case, nodes might get overloaded by storing popular data. In this paper, we present a general approach to enable capacity awareness and load-balancing capability of homogeneous structured overlays. We introduce a hierarchical second structured overlay aside, which allows efficient capacity-based access on the nodes in the system as hosting mirrors. Simulation results show that the structured overlay is able to store various contents, such as that of a social network, with only a negligible number of overloaded peers. Content, even if very popular, is hosted by easily findable capable peers. Thus, long-existing and well-evaluated overlays like Chord or Pastry can be used to create attractive DHT-based applications.
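The indirection scheme summarized above can be caricatured in a few lines. This is a rough sketch under our own assumptions (a hash-based primary mapping, integer capacities, and a greedy choice of the mirror with the most headroom); the paper's actual hierarchical secondary overlay is not modeled here:

```python
import hashlib

def responsible_node(key, nodes):
    """Primary homogeneous DHT mapping: hash the key onto the node list."""
    h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
    return sorted(nodes)[h % len(nodes)]

def place(key, nodes, capacity, load):
    """Capacity-aware placement: store at the responsible node if it has
    spare capacity, otherwise indirect to the node with the most headroom."""
    node = responsible_node(key, nodes)
    if load[node] < capacity[node]:
        load[node] += 1
        return node, None          # stored directly, no indirection needed
    mirror = max(nodes, key=lambda n: capacity[n] - load[n])
    load[mirror] += 1
    return node, mirror            # responsible node keeps a pointer to the mirror
```

The responsible node stays the lookup entry point, so the homogeneous overlay itself is untouched; only the placement decision becomes capacity-aware.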

[Show BibTeX] @inproceedings{wette13a,

author = {Philip Wette AND Kalman Graffi},

title = {Adding Capacity-Aware Storage Indirection to Homogeneous Distributed Hash Tables},

booktitle = {Proceedings of the Conference on Networked Systems (NetSys)},

year = {2013},

pages = {35-42},

publisher = {IEEE Computer Society},

abstract = {Distributed hash tables are very versatile to use, as distributed storage is a desirable feature for various applications. Typical structured overlays like Chord, Pastry or Kademlia consider only homogeneous nodes with equal capacities, which does not resemble reality. In a practical use case, nodes might get overloaded by storing popular data. In this paper, we present a general approach to enable capacity awareness and load-balancing capability of homogeneous structured overlays. We introduce a hierarchical second structured overlay aside, which allows efficient capacity-based access on the nodes in the system as hosting mirrors. Simulation results show that the structured overlay is able to store various contents, such as that of a social network, with only a negligible number of overloaded peers. Content, even if very popular, is hosted by easily findable capable peers. Thus, long-existing and well-evaluated overlays like Chord or Pastry can be used to create attractive DHT-based applications.}

}


Matthias Keller, Stefan Pawlik, Peter Pietrzyk, Holger Karl:

In Proceedings of the 6th International Conference on Utility and Cloud Computing (UCC) workshop on Distributed cloud computing. IEEE/ACM, pp. 429-434

[Show Abstract]

**A Local Heuristic for Latency-Optimized Distributed Cloud Deployment**In Proceedings of the 6th International Conference on Utility and Cloud Computing (UCC) workshop on Distributed cloud computing. IEEE/ACM, pp. 429-434

**(2013)**[Show Abstract]

In Distributed Cloud Computing, applications are deployed across many data centres at topologically diverse locations to improve network-related quality of service (QoS). As we focus on interactive applications, we minimize the latency between users and an application by allocating Cloud resources near the customers. Allocating resources at all locations will result in the best latency but also in the highest expenses. So we need to find an optimal subset of locations which reduces the latency but also the expenses – the facility location problem (FLP). In addition, we consider resource capacity restrictions, as a resource can only serve a limited number of users. An FLP can be globally solved. Additionally, we propose a local, distributed heuristic. This heuristic runs within the network and does not depend on a global component. No distributed, local approximations for the capacitated FLP have been proposed so far due to the complexity of the problem. We compared the heuristic with an optimal solution obtained from a mixed integer program for different network topologies. We investigated the influence of different parameters like overall resource utilization or different latency weights.

[Show BibTeX] @inproceedings{mkeller2013c,

author = {Matthias Keller AND Stefan Pawlik AND Peter Pietrzyk AND Holger Karl},

title = {A Local Heuristic for Latency-Optimized Distributed Cloud Deployment},

booktitle = {Proceedings of the 6th International Conference on Utility and Cloud Computing (UCC) workshop on Distributed cloud computing},

year = {2013},

pages = {429-434},

publisher = {IEEE/ACM},

abstract = {In Distributed Cloud Computing, applications are deployed across many data centres at topologically diverse locations to improve network-related quality of service (QoS). As we focus on interactive applications, we minimize the latency between users and an application by allocating Cloud resources near the customers. Allocating resources at all locations will result in the best latency but also in the highest expenses. So we need to find an optimal subset of locations which reduces the latency but also the expenses – the facility location problem (FLP). In addition, we consider resource capacity restrictions, as a resource can only serve a limited number of users. An FLP can be globally solved. Additionally, we propose a local, distributed heuristic. This heuristic runs within the network and does not depend on a global component. No distributed, local approximations for the capacitated FLP have been proposed so far due to the complexity of the problem. We compared the heuristic with an optimal solution obtained from a mixed integer program for different network topologies. We investigated the influence of different parameters like overall resource utilization or different latency weights.}

}


Christine Markarian, Friedhelm Meyer auf der Heide, Michael Schubert:

In Proceedings of the 9th International Symposium on Algorithms and Experiments for Sensor Systems, Wireless Networks and Distributed Robotics (ALGOSENSORS). Springer, LNCS, vol. 8243, pp. 217-227

[Show Abstract]

**A Distributed Approximation Algorithm for Strongly Connected Dominating-Absorbent Sets in Asymmetric Wireless Ad-Hoc Networks**In Proceedings of the 9th International Symposium on Algorithms and Experiments for Sensor Systems, Wireless Networks and Distributed Robotics (ALGOSENSORS). Springer, LNCS, vol. 8243, pp. 217-227

**(2013)**[Show Abstract]

Dominating set based virtual backbones are used for routing in wireless ad-hoc networks. Such backbones receive and transmit messages from/to every node in the network. Existing distributed algorithms only consider undirected graphs, which model symmetric networks with uniform transmission ranges. We are particularly interested in the well-established disk graphs, which model asymmetric networks with non-uniform transmission ranges. The corresponding graph theoretic problem seeks a strongly connected dominating-absorbent set of minimum cardinality in a digraph. A subset of nodes in a digraph is a strongly connected dominating-absorbent set if the subgraph induced by these nodes is strongly connected and each node in the graph is either in the set or has both an in-neighbor and an out-neighbor in it. We introduce the first distributed algorithm for this problem in disk graphs. The algorithm gives an O(k^4)-approximation ratio and has a runtime bound of O(Diam), where Diam is the diameter of the graph and k denotes the transmission ratio r_max/r_min, with r_max and r_min being the maximum and minimum transmission range, respectively. Moreover, we apply our algorithm on the subgraph of disk graphs consisting of only bidirectional edges. Our algorithm gives an O(ln k)-approximation and a runtime bound of O(k^8 log^∗ n), which, for bounded k, is an optimal approximation for the problem, following Lenzen and Wattenhofer’s Ω(log^∗ n) runtime lower bound for distributed constant approximation in disk graphs.
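The dominating-absorbent condition defined in the abstract translates directly into code. The following minimal checker (an illustrative helper of ours, not from the paper) tests it for a digraph given as an edge list; the separate strong-connectivity requirement on the induced subgraph is not checked here:

```python
def is_dominating_absorbent(nodes, edges, subset):
    """True iff every node is in `subset` or has both an in-neighbor
    and an out-neighbor in `subset` (edges are directed pairs (u, v))."""
    subset = set(subset)
    out_nb = {v: set() for v in nodes}
    in_nb = {v: set() for v in nodes}
    for u, v in edges:
        out_nb[u].add(v)
        in_nb[v].add(u)
    return all(
        v in subset or (in_nb[v] & subset and out_nb[v] & subset)
        for v in nodes
    )
```

On the 3-cycle 1→2→3→1, the set {1, 2} qualifies (node 3 has in-neighbor 2 and out-neighbor 1 in the set), while {1} alone does not.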

[Show BibTeX] @inproceedings{MMS2013,

author = {Christine Markarian AND Friedhelm Meyer auf der Heide AND Michael Schubert},

title = {A Distributed Approximation Algorithm for Strongly Connected Dominating-Absorbent Sets in Asymmetric Wireless Ad-Hoc Networks},

booktitle = {Proceedings of the 9th International Symposium on Algorithms and Experiments for Sensor Systems, Wireless Networks and Distributed Robotics (ALGOSENSORS)},

year = {2013},

pages = {217-227},

publisher = {Springer},

month = {September},

abstract = {Dominating set based virtual backbones are used for routing in wireless ad-hoc networks. Such backbones receive and transmit messages from/to every node in the network. Existing distributed algorithms only consider undirected graphs, which model symmetric networks with uniform transmission ranges. We are particularly interested in the well-established disk graphs, which model asymmetric networks with non-uniform transmission ranges. The corresponding graph theoretic problem seeks a strongly connected dominating-absorbent set of minimum cardinality in a digraph. A subset of nodes in a digraph is a strongly connected dominating-absorbent set if the subgraph induced by these nodes is strongly connected and each node in the graph is either in the set or has both an in-neighbor and an out-neighbor in it. We introduce the first distributed algorithm for this problem in disk graphs. The algorithm gives an O(k^4)-approximation ratio and has a runtime bound of O(Diam), where Diam is the diameter of the graph and k denotes the transmission ratio r_{max}/r_{min}, with r_{max} and r_{min} being the maximum and minimum transmission range, respectively. Moreover, we apply our algorithm on the subgraph of disk graphs consisting of only bidirectional edges. Our algorithm gives an O(ln k)-approximation and a runtime bound of O(k^8 log^∗ n), which, for bounded k, is an optimal approximation for the problem, following Lenzen and Wattenhofer’s Ω(log^∗ n) runtime lower bound for distributed constant approximation in disk graphs.},

series = {LNCS}

}


Sebastian Kniesburges, Andreas Koutsopoulos, Christian Scheideler:

In Proceedings of the 20th International Colloquium on Structural Information and Communication Complexity (SIROCCO). Springer, Lecture Notes in Computer Science, vol. 8179, pp. 165-176

[Show Abstract]

**A Deterministic Worst-Case Message Complexity Optimal Solution for Resource Discovery**In Proceedings of the 20th International Colloquium on Structural Information and Communication Complexity (SIROCCO). Springer, Lecture Notes in Computer Science, vol. 8179, pp. 165-176

**(2013)**(won the SIROCCO Best Student Paper Award)[Show Abstract]

We consider the problem of resource discovery in distributed systems. In particular we give an algorithm such that each node in a network discovers the address of any other node in the network. We model the knowledge of the nodes as a virtual overlay network given by a directed graph such that complete knowledge of all nodes corresponds to a complete graph in the overlay network. Although there are several solutions for resource discovery, our solution is the first that achieves worst-case optimal work for each node, i.e. the number of addresses (O(n)) or bits (O(n log n)) a node receives or sends coincides with the lower bound, while ensuring only a linear runtime (O(n)) on the number of rounds.

[Show BibTeX] @inproceedings{SIROCCO-KKS13,

author = {Sebastian Kniesburges AND Andreas Koutsopoulos AND Christian Scheideler},

title = {A Deterministic Worst-Case Message Complexity Optimal Solution for Resource Discovery},

booktitle = {Proceedings of the 20th International Colloquium on Structural Information and Communication Complexity (SIROCCO)},

year = {2013},

pages = {165-176},

publisher = {Springer},

note = {won the SIROCCO Best Student Paper Award},

abstract = {We consider the problem of resource discovery in distributed systems. In particular we give an algorithm such that each node in a network discovers the address of any other node in the network. We model the knowledge of the nodes as a virtual overlay network given by a directed graph such that complete knowledge of all nodes corresponds to a complete graph in the overlay network. Although there are several solutions for resource discovery, our solution is the first that achieves worst-case optimal work for each node, i.e. the number of addresses (O(n)) or bits (O(n log n)) a node receives or sends coincides with the lower bound, while ensuring only a linear runtime (O(n)) on the number of rounds.},

series = {Lecture Notes in Computer Science}

}


**2012** (30)

Riko Jacob, Stephan Ritscher, Christian Scheideler, Stefan Schmid:

In

[Show Abstract]

**Towards higher-dimensional topological self-stabilization: A distributed algorithm for Delaunay graphs**In

*Theoretical Computer Science*, vol. 457, pp. 137-148. Elsevier**(2012)**[Show Abstract]

This article studies the construction of self-stabilizing topologies for distributed systems. While recent research has focused on chain topologies where nodes need to be linearized with respect to their identifiers, we explore a natural and relevant 2-dimensional generalization. In particular, we present a local self-stabilizing algorithm DStab which is based on the concept of “local Delaunay graphs” and which forwards temporary edges in greedy fashion reminiscent of compass routing. DStab constructs a Delaunay graph from any initial connected topology and in a distributed manner in time O(n^3) in the worst case; if the initial network contains the Delaunay graph, the convergence time is only O(n) rounds. DStab also ensures that individual node joins and leaves affect a small part of the network only. Such self-stabilizing Delaunay networks have interesting applications and our construction gives insights into the necessary geometric reasoning that is required for higher-dimensional linearization problems.

Keywords: Distributed Algorithms, Topology Control, Social Networks

[Show BibTeX]

@article{JRSS2012TCS,

author = {Riko Jacob AND Stephan Ritscher AND Christian Scheideler AND Stefan Schmid},

title = {Towards higher-dimensional topological self-stabilization: A distributed algorithm for Delaunay graphs},

journal = {Theoretical Computer Science},

year = {2012},

volume = {457},

pages = {137-148},

abstract = {This article studies the construction of self-stabilizing topologies for distributed systems. While recent research has focused on chain topologies where nodes need to be linearized with respect to their identifiers, we explore a natural and relevant 2-dimensional generalization. In particular, we present a local self-stabilizing algorithm DStab which is based on the concept of “local Delaunay graphs” and which forwards temporary edges in greedy fashion reminiscent of compass routing. DStab constructs a Delaunay graph from any initial connected topology and in a distributed manner in time O(n^3) in the worst case; if the initial network contains the Delaunay graph, the convergence time is only O(n) rounds. DStab also ensures that individual node joins and leaves affect a small part of the network only. Such self-stabilizing Delaunay networks have interesting applications and our construction gives insights into the necessary geometric reasoning that is required for higher-dimensional linearization problems. Keywords: Distributed Algorithms, Topology Control, Social Networks}

}


Thomas Clouser, Mikhail Nesterenko, Christian Scheideler:

In

[Show Abstract]

**Tiara: A self-stabilizing deterministic skip list and skip graph**In

*Theoretical Computer Science*, vol. 428, pp. 18-35. Elsevier**(2012)**[Show Abstract]

We present Tiara — a self-stabilizing peer-to-peer network maintenance algorithm. Tiara is truly deterministic which allows it to achieve exact performance bounds. Tiara allows logarithmic searches and topology updates. It is based on a novel sparse 0-1 skip list. We then describe its extension to a ringed structure and to a skip-graph.

Key words: Peer-to-peer networks, overlay networks, self-stabilization.

[Show BibTeX]

@article{CNS2012TCS,

author = {Thomas Clouser AND Mikhail Nesterenko AND Christian Scheideler},

title = {Tiara: A self-stabilizing deterministic skip list and skip graph},

journal = {Theoretical Computer Science},

year = {2012},

volume = {428},

pages = {18-35},

abstract = {We present Tiara — a self-stabilizing peer-to-peer network maintenance algorithm. Tiara is truly deterministic which allows it to achieve exact performance bounds. Tiara allows logarithmic searches and topology updates. It is based on a novel sparse 0-1 skip list. We then describe its extension to a ringed structure and to a skip-graph. Key words: Peer-to-peer networks, overlay networks, self-stabilization.}

}


Lars Bremer:

Master's thesis, University of Paderborn

[Show BibTeX]

**Symbiotic Coupling of Peer-to-Peer and Cloud Systems**Master's thesis, University of Paderborn

**(2012)**[Show BibTeX]

@mastersthesis{msc2012Bremer,

author = {Lars Bremer},

title = {Symbiotic Coupling of Peer-to-Peer and Cloud Systems},

school = {University of Paderborn},

year = {2012}

}


Sonja Brangewitz, Sarah Brockhoff:

Techreport UPB.

[Show Abstract]

**Stability of Coalitional Equilibria within Repeated Tax Competition**Techreport UPB.

**(2012)**[Show Abstract]

This paper analyzes the stability of capital tax harmonization agreements in a stylized model where countries have formed coalitions which set a common tax rate in order to avoid the inefficient fully non-cooperative Nash equilibrium. In particular, for a given coalition structure we study to what extent the stability of tax agreements is affected by the coalitions that have formed. In our set-up, countries are symmetric, but coalitions can be of arbitrary size. We analyze stability by means of a repeated game setting employing simple trigger strategies and we allow a sub-coalition to deviate from the coalitional equilibrium. For a given form of punishment we are able to rank the stability of different coalition structures as long as the size of the largest coalition does not change. Our main results are: (1) singleton regions have the largest incentives to deviate, (2) the stability of cooperation depends on the degree of cooperative behavior ex-ante.

[Show BibTeX] @techreport{BR12BB,

author = {Sonja Brangewitz AND Sarah Brockhoff},

title = {Stability of Coalitional Equilibria within Repeated Tax Competition},

year = {2012},

type = {Techreport UPB},

abstract = {This paper analyzes the stability of capital tax harmonization agreements in a stylized model where countries have formed coalitions which set a common tax rate in order to avoid the inefficient fully non-cooperative Nash equilibrium. In particular, for a given coalition structure we study to what extent the stability of tax agreements is affected by the coalitions that have formed. In our set-up, countries are symmetric, but coalitions can be of arbitrary size. We analyze stability by means of a repeated game setting employing simple trigger strategies and we allow a sub-coalition to deviate from the coalitional equilibrium. For a given form of punishment we are able to rank the stability of different coalition structures as long as the size of the largest coalition does not change. Our main results are: (1) singleton regions have the largest incentives to deviate, (2) the stability of cooperation depends on the degree of cooperative behavior ex-ante.}

}


Valentina Damerow, Bodo Manthey, Friedhelm Meyer auf der Heide, Harald Räcke, Christian Scheideler, Christian Sohler, Till Tantau:

In

[Show Abstract]

**Smoothed analysis of left-to-right maxima with applications**In

*Transactions on Algorithms*, vol. 8, no. 3, pp. 30. ACM**(2012)**[Show Abstract]

A left-to-right maximum in a sequence of n numbers s_1, …, s_n is a number that is strictly larger than all preceding numbers. In this article we present a smoothed analysis of the number of left-to-right maxima in the presence of additive random noise. We show that for every sequence of n numbers s_i ∈ [0,1] that are perturbed by uniform noise from the interval [-ε,ε], the expected number of left-to-right maxima is Θ(√(n/ε) + log n) for ε>1/n. For Gaussian noise with standard deviation σ we obtain a bound of O((log^{3/2} n)/σ + log n).

We apply our results to the analysis of the smoothed height of binary search trees and the smoothed number of comparisons in the quicksort algorithm and prove bounds of Θ(√(n/ε) + log n) and Θ((n/(ε+1))√(n/ε) + n log n), respectively, for uniform random noise from the interval [-ε,ε]. Our results can also be applied to bound the smoothed number of points on a convex hull of points in the two-dimensional plane and to smoothed motion complexity, a concept we describe in this article. We bound how often one needs to update a data structure storing the smallest axis-aligned box enclosing a set of points moving in d-dimensional space.
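The definition in the first sentence of the abstract is easy to make concrete; a minimal sketch (function name is ours) that counts the left-to-right maxima of a sequence:

```python
def left_to_right_maxima(seq):
    """Count the elements strictly larger than every preceding element."""
    count = 0
    best = float("-inf")
    for x in seq:
        if x > best:       # x is a left-to-right maximum
            count += 1
            best = x
    return count

# In [3, 1, 4, 1, 5, 9, 2, 6] the left-to-right maxima are 3, 4, 5, 9.
```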

[Show BibTeX]

@article{DMM+2012,

author = {Valentina Damerow AND Bodo Manthey AND Friedhelm Meyer auf der Heide AND Harald R{\"a}cke AND Christian Scheideler AND Christian Sohler AND Till Tantau},

title = {Smoothed analysis of left-to-right maxima with applications},

journal = {Transactions on Algorithms},

year = {2012},

volume = {8},

number = {3},

pages = {30},

abstract = {A left-to-right maximum in a sequence of n numbers s_1, …, s_n is a number that is strictly larger than all preceding numbers. In this article we present a smoothed analysis of the number of left-to-right maxima in the presence of additive random noise. We show that for every sequence of n numbers s_i ∈ [0,1] that are perturbed by uniform noise from the interval [-ε,ε], the expected number of left-to-right maxima is Θ(√(n/ε) + log n) for ε > 1/n. For Gaussian noise with standard deviation σ we obtain a bound of O((log^{3/2} n)/σ + log n). We apply our results to the analysis of the smoothed height of binary search trees and the smoothed number of comparisons in the quicksort algorithm and prove bounds of Θ(√(n/ε) + log n) and Θ((n/(ε+1))·√(n/ε) + n log n), respectively, for uniform random noise from the interval [-ε,ε]. Our results can also be applied to bound the smoothed number of points on the convex hull of points in the two-dimensional plane and to smoothed motion complexity, a concept we describe in this article. We bound how often one needs to update a data structure storing the smallest axis-aligned box enclosing a set of points moving in d-dimensional space.}

}

[DOI]

Thim Strothmann:

Master's thesis, University of Paderborn

[Show BibTeX]

**Self-Optimizing Binary Search Trees - A Game Theoretic Approach**Master's thesis, University of Paderborn

**(2012)**[Show BibTeX]

@mastersthesis{msc2012Strothmann,

author = {Thim Strothmann},

title = {Self-Optimizing Binary Search Trees - A Game Theoretic Approach},

school = {University of Paderborn},

year = {2012}

}


Julian Drücker:

Master's thesis, University of Paderborn

[Show BibTeX]

**Revenue-maximizing Order of Sale in Sequential Auctions**Master's thesis, University of Paderborn

**(2012)**[Show BibTeX]

@mastersthesis{Drücker12,

author = {Julian Dr{\"u}cker},

title = {Revenue-maximizing Order of Sale in Sequential Auctions},

school = {University of Paderborn},

year = {2012}

}


Till Hohenberger:

Bachelor thesis, University of Paderborn

[Show BibTeX]

**Queuing Latency at Cooperative Base Stations**Bachelor thesis, University of Paderborn

**(2012)**[Show BibTeX]

@misc{Hohenberger2012,

author = {Till Hohenberger},

title = {Queuing Latency at Cooperative Base Stations},

howpublished = {Bachelor thesis, University of Paderborn},

year = {2012}

}


Aydin Celik:

Master's thesis, University of Paderborn

[Show BibTeX]

**Penny Auctions: Design und Strategisches Verhalten**Master's thesis, University of Paderborn

**(2012)**[Show BibTeX]

@mastersthesis{Celik12,

author = {Aydin Celik},

title = {Penny Auctions: Design und Strategisches Verhalten},

school = {University of Paderborn},

year = {2012}

}


Tobias Rojahn:

Bachelor thesis, University of Paderborn

[Show BibTeX]

**Optimale Zuteilung von Nutzern zu verteilten Cloud-Standorten**Bachelor thesis, University of Paderborn

**(2012)**[Show BibTeX]

@misc{Rojahn2012,

author = {Tobias Rojahn},

title = {Optimale Zuteilung von Nutzern zu verteilten Cloud-Standorten},

howpublished = {Bachelor thesis, University of Paderborn},

year = {2012}

}


Timo Klerx:

Master's thesis, University of Paderborn

[Show BibTeX]

**Online Parameteroptimierung in P2P-Netzwerken mit Hilfe von Neuronalen Netzen**Master's thesis, University of Paderborn

**(2012)**[Show BibTeX]

@mastersthesis{msc2012Klerx,

author = {Timo Klerx},

title = {Online Parameteroptimierung in P2P-Netzwerken mit Hilfe von Neuronalen Netzen},

school = {University of Paderborn},

year = {2012}

}


Marios Mavronicolas, Burkhard Monien:

In Proceedings of the 5th International Symposium on Algorithmic Game Theory (SAGT). Springer, LNCS, vol. 7615, pp. 239-250

[Show Abstract]

**Minimizing Expectation Plus Variance**In Proceedings of the 5th International Symposium on Algorithmic Game Theory (SAGT). Springer, LNCS, vol. 7615, pp. 239-250

**(2012)**[Show Abstract]

We consider strategic games in which each player seeks a mixed strategy to minimize her cost evaluated by a concave valuation V (mapping probability distributions to reals); such valuations are used to model risk. In contrast to games with expectation-optimizer players, where mixed equilibria always exist [15, 16], a mixed equilibrium for such games, called a V-equilibrium, may fail to exist, even though pure equilibria (if any) transfer over. What is the impact of such valuations on the existence, structure and complexity of mixed equilibria? We address this fundamental question for a particular concave valuation: expectation plus variance, denoted RA, which stands for risk-averse; so, variance enters as a measure of risk and is used as an additive adjustment to expectation. We obtain the following results about RA-equilibria:

- A collection of general structural properties of RA-equilibria connecting to (i) E-equilibria and Var-equilibria, which correspond to the expectation and variance valuations E and Var, respectively, and to (ii) other weaker or incomparable equilibrium properties.

- A second collection of (i) existence, (ii) equivalence and separation (with respect to E-equilibria), and (iii) characterization results for RA-equilibria in the new class of player-specific scheduling games. Using examples, we provide the first demonstration that going from E to RA may as well create new mixed (RA-)equilibria.

- A purification technique to transform a player-specific scheduling game on identical links into a player-specific scheduling game so that all non-pure RA-equilibria are eliminated while new pure equilibria cannot be created; so, a particular game on two identical links yields one with no RA-equilibrium. As a by-product, the first completeness result for the computation of RA-equilibria follows.
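The RA valuation evaluates the (random) cost of a mixed strategy as expectation plus variance. A tiny sketch (names and numbers are ours, not from the paper) shows why mixing is penalized under RA even when expectations agree:

```python
def ra_valuation(outcomes):
    """Expectation-plus-variance (RA) valuation of a cost distribution,
    given as (probability, cost) pairs."""
    mean = sum(p * c for p, c in outcomes)
    var = sum(p * (c - mean) ** 2 for p, c in outcomes)
    return mean + var

# A pure strategy with certain cost 2 ...
pure = [(1.0, 2.0)]
# ... versus a fair mix over costs 1 and 3: same expectation, higher RA value.
mixed = [(0.5, 1.0), (0.5, 3.0)]
print(ra_valuation(pure))   # 2.0
print(ra_valuation(mixed))  # 3.0 = 2.0 (mean) + 1.0 (variance)
```

A risk-averse player thus strictly prefers the pure strategy here, which hints at why mixed RA-equilibria can fail to exist where mixed E-equilibria do not.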

[Show BibTeX]

@inproceedings{MMBM-SAGT12,

author = {Marios Mavronicolas AND Burkhard Monien},

title = {Minimizing Expectation Plus Variance},

booktitle = {Proceedings of the 5th International Symposium on Algorithmic Game Theory (SAGT)},

year = {2012},

pages = {239-250},

publisher = {Springer},

abstract = {We consider strategic games in which each player seeks a mixed strategy to minimize her cost evaluated by a concave valuation V (mapping probability distributions to reals); such valuations are used to model risk. In contrast to games with expectation-optimizer players where mixed equilibria always exist [15, 16], a mixed equilibrium for such games, called a V -equilibrium, may fail to exist, even though pure equilibria (if any) transfer over. What is the impact of such valuations on the existence, structure and complexity of mixed equilibria? We address this fundamental question for a particular concave valuation: expectation plus variance, denoted as RA, which stands for risk-averse; so, variance enters as a measure of risk and it is used as an additive adjustment to expectation. We obtain the following results about RA-equilibria:- A collection of general structural properties of RA-equilibria connecting to (i) E-equilibria and Var-equilibria, which correspond to the expectation and variance valuations E and Var, respectively, and to (ii) other weaker or incomparable equilibrium properties.- A second collection of (i) existence, (ii) equivalence and separation (with respect to E-equilibria), and (iii) characterization results for RA-equilibria in the new class of player-specific scheduling games. Using examples, we provide the first demonstration that going from E to RA may as well create new mixed (RA-)equilibria.- A purification technique to transform a player-specific scheduling game on identical links into a player-specific scheduling game so that all non-pure RA-equilibria are eliminated while new pure equilibria cannot be created; so, a particular game on two identical links yields one with no RA-equilibrium. As a by-product, the first-completeness result for the computation of RA-equilibria follows.},

series = {LNCS},

volume = {7615}

}

[DOI]

Fuad Mammadov:

Bachelor thesis, University of Paderborn

[Show BibTeX]

**Methoden zur Bestimmung von innerbetrieblichen Verrechnungspreisen**Bachelor thesis, University of Paderborn

**(2012)**[Show BibTeX]

@misc{Mammadov12,

author = {Fuad Mammadov},

title = {Methoden zur Bestimmung von innerbetrieblichen Verrechnungspreisen},

howpublished = {Bachelor thesis, University of Paderborn},

year = {2012}

}


Xenia Löwen:

Bachelor thesis, University of Paderborn

[Show BibTeX]

**Managerial Delegation and Capacity Choices: An Analysis of the Cournot-Nash Equilibrium**Bachelor thesis, University of Paderborn

**(2012)**[Show BibTeX]

@misc{Löwen12,

author = {Xenia L{\"o}wen},

title = {Managerial Delegation and Capacity Choices: An Analysis of the Cournot-Nash Equilibrium},

howpublished = {Bachelor thesis, University of Paderborn},

year = {2012}

}


Björn Feldkord:

Bachelor thesis, University of Paderborn

[Show BibTeX]

**Lokale Swaps und überholte Informationen in Basic Network Creation Games**Bachelor thesis, University of Paderborn

**(2012)**[Show BibTeX]

@misc{bsc2012feldkord,

author = {Bj{\"o}rn Feldkord},

title = {Lokale Swaps und {\"u}berholte Informationen in Basic Network Creation Games},

howpublished = {Bachelor thesis, University of Paderborn},

year = {2012}

}


Barbara Kempkes:

PhD thesis, University of Paderborn

[Show Abstract]

**Local strategies for robot formation problems**PhD thesis, University of Paderborn

**(2012)**[Show Abstract]

We consider a group of mobile, autonomous robots in a planar terrain. There is no central control, and the robots have to coordinate themselves. The central challenge is that each robot only sees its immediate neighborhood and can only communicate with robots in that neighborhood. This gives rise to many algorithmic questions. This thesis investigates under which conditions the robots can gather at a single point or form a line between two fixed stations. To this end, several robot strategies in different movement models are presented and analyzed with respect to their efficiency. Upper and lower bounds on the required number of rounds and on the travelled distance are shown. In some cases, the required travelled distance is also compared with the distance that an optimal global strategy would need on the same instance, yielding competitive factors.
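As a loose illustration of the local-visibility setting (a generic toy strategy with invented parameters, not one of the strategies analyzed in the thesis), the following sketch lets each robot move to the centroid of the robots it can see and tracks how the diameter of the configuration shrinks:

```python
import math
import random

def gather_step(positions, radius):
    """One synchronous round of a simple local gathering heuristic:
    every robot moves to the centroid of all robots within its viewing
    radius (each robot always sees itself, so the set is never empty)."""
    new_positions = []
    for x, y in positions:
        visible = [(a, b) for a, b in positions
                   if math.hypot(a - x, b - y) <= radius]
        cx = sum(a for a, _ in visible) / len(visible)
        cy = sum(b for _, b in visible) / len(visible)
        new_positions.append((cx, cy))
    return new_positions

def diameter(positions):
    """Largest pairwise distance in the configuration."""
    return max(math.hypot(ax - bx, ay - by)
               for ax, ay in positions for bx, by in positions)

rng = random.Random(7)
robots = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(20)]
for _ in range(30):
    robots = gather_step(robots, radius=4.0)
print(round(diameter(robots), 3))  # shrinks toward 0 if visibility stays connected
```

Since every robot moves inside the convex hull of the robots it sees, the diameter never grows; whether it contracts to a point is exactly the kind of question such strategies are analyzed for.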

[Show BibTeX] @phdthesis{DissKempkes12,

author = {Barbara Kempkes},

title = {Local strategies for robot formation problems},

school = {University of Paderborn},

year = {2012},

abstract = {Wir betrachten eine Gruppe von mobilen, autonomen Robotern in einem ebenen Gel{\"a}nde. Es gibt keine zentrale Steuerung und die Roboter m{\"u}ssen sich selbst koordinieren. Zentrale Herausforderung dabei ist, dass jeder Roboter nur seine unmittelbare Nachbarschaft sieht und auch nur mit Robotern in seiner unmittelbaren Nachbarschaft kommunizieren kann. Daraus ergeben sich viele algorithmische Fragestellungen. In dieser Arbeit wird untersucht, unter welchen Voraussetzungen die Roboter sich auf einem Punkt versammeln bzw. eine Linie zwischen zwei festen Stationen bilden k{\"o}nnen. Daf{\"u}r werden mehrere Roboter-Strategien in verschiedenen Bewegungsmodellen vorgestellt. Diese Strategien werden auf ihre Effizienz hin untersucht. Es werden obere und untere Schranken f{\"u}r die ben{\"o}tigte Anzahl Runden und die Bewegungsdistanz gezeigt. In einigen F{\"a}llen wird außerdem die ben{\"o}tigte Bewegungsdistanz mit derjenigen Bewegungsdistanz verglichen, die eine optimale globale Strategie auf der gleichen Instanz ben{\"o}tigen w{\"u}rde. So werden kompetitive Faktoren hergeleitet.}

}

[DOI]

Sonja Brangewitz, Gael Giraud:

Techreport UPB.

[Show Abstract]

**Learning by Trading in Infinite Horizon Strategic Market Games with Default**Techreport UPB.

**(2012)**[Show Abstract]

We study the consequences of dropping the perfect competition assumption in a standard infinite horizon model with infinitely-lived traders and real collateralized assets, together with one additional ingredient: information among players is asymmetric and monitoring is incomplete. The key insight is that trading assets is not only a way to hedge oneself against uncertainty and to smooth consumption across time: It also enables learning information. Conversely, defaulting now becomes strategic: Certain players may manipulate prices so as to provoke a default in order to prevent their opponents from learning. We focus on learning equilibria, at the end of which no player has incorrect beliefs — not because those players with heterogeneous beliefs were eliminated from the market (although default is possible at equilibrium) but because they have taken time to update their prior belief. We prove a partial Folk theorem à la Wiseman (2011) of the following form: For any function that maps each state of the world to a sequence of feasible and strongly individually rational allocations, and for any degree of precision, there is a perfect Bayesian equilibrium in which patient players learn the realized state with this degree of precision and achieve a payoff close to the one specified for each state.

[Show BibTeX] @techreport{BG12BG,

author = {Sonja Brangewitz AND Gael Giraud},

title = {Learning by Trading in Infinite Horizon Strategic Market Games with Default},

year = {2012},

type = {Techreport UPB},

abstract = {We study the consequences of dropping the perfect competition assumption in a standard infinite horizon model with infinitely-lived traders and real collateralized assets, together with one additional ingredient: information among players is asymmetric and monitoring is incomplete. The key insight is that trading assets is not only a way to hedge oneself against uncertainty and to smooth consumption across time: It also enables learning information. Conversely, defaulting now becomes strategic: Certain players may manipulate prices so as to provoke a default in order to prevent their opponents from learning. We focus on learning equilibria, at the end of which no player has incorrect beliefs — not because those players with heterogeneous beliefs were eliminated from the market (although default is possible at equilibrium) but because they have taken time to update their prior belief. We prove a partial Folk theorem à la Wiseman (2011) of the following form: For any function that maps each state of the world to a sequence of feasible and strongly individually rational allocations, and for any degree of precision, there is a perfect Bayesian equilibrium in which patient players learn the realized state with this degree of precision and achieve a payoff close to the one specified for each state.}

}


Philip Wette, Holger Karl:

Techreport UPB, no. TR-RI-12-328.

[Show Abstract]

**Introducing feedback to preemptive routing and wavelength assignment algorithms for dynamic traffic scenarios**Techreport UPB, no. TR-RI-12-328.

**(2012)**[Show Abstract]

Preemptive Routing and Wavelength Assignment (RWA) algorithms preempt established lightpaths in case not enough resources are available to set up a new lightpath in a Wavelength Division Multiplexing (WDM) network. The selection of lightpaths to be preempted relies on internal decisions of the RWA algorithm. Thus, if dedicated properties of the network topology are required by the applications running on the network, these requirements have to be known by the RWA algorithm. Otherwise it might happen that by preempting a particular lightpath these requirements are violated. If, however, these requirements include parameters only known at the nodes running the application, the RWA algorithm cannot evaluate the requirements. For this reason an RWA algorithm is needed which involves its users in the preemption decisions. We present a family of preemptive RWA algorithms for WDM networks. These algorithms have two distinguishing features: a) they can handle dynamic traffic by on-the-fly reconfiguration, and b) users can give feedback for reconfiguration decisions and thus influence the preemption decision of the RWA algorithm, leading to networks which adapt directly to application needs. This is different from traffic engineering where the network is (slowly) adapted to observed traffic patterns. Our algorithms handle various WDM network configurations including networks consisting of heterogeneous WDM hardware. To this end, we are using the layered graph approach together with a newly developed graph model that is used to determine conflicting lightpaths.

[Show BibTeX] @techreport{PWHG-TR2012,

author = {Philip Wette AND Holger Karl},

title = {Introducing feedback to preemptive routing and wavelength assignment algorithms for dynamic traffic scenarios},

year = {2012},

type = {Techreport UPB},

number = {TR-RI-12-328},

abstract = {Preemptive Routing and Wavelength Assignment (RWA) algorithms preempt established lightpaths in case not enough resources are available to set up a new lightpath in a Wavelength Division Multiplexing (WDM) network. The selection of lightpaths to be preempted relies on internal decisions of the RWA algorithm. Thus, if dedicated properties of the network topology are required by the applications running on the network, these requirements have to be known by the RWA algorithm. Otherwise it might happen that by preempting a particular lightpath these requirements are violated. If, however, these requirements include parameters only known at the nodes running the application, the RWA algorithm cannot evaluate the requirements. For this reason an RWA algorithm is needed which involves its users in the preemption decisions. We present a family of preemptive RWA algorithms for WDM networks. These algorithms have two distinguishing features: a) they can handle dynamic traffic by on-the-fly reconfiguration, and b) users can give feedback for reconfiguration decisions and thus influence the preemption decision of the RWA algorithm, leading to networks which adapt directly to application needs. This is different from traffic engineering where the network is (slowly) adapted to observed traffic patterns. Our algorithms handle various WDM network configurations including networks consisting of heterogeneous WDM hardware. To this end, we are using the layered graph approach together with a newly developed graph model that is used to determine conflicting lightpaths.}

}


Sven Kluczniok:

Bachelor thesis, University of Paderborn

[Show BibTeX]

**Effiziente Paketbildung in mehrdimensionalen Verhandlungsproblemen**Bachelor thesis, University of Paderborn

**(2012)**[Show BibTeX]

@misc{Kluczniok12,

author = {Sven Kluczniok},

title = {Effiziente Paketbildung in mehrdimensionalen Verhandlungsproblemen},

howpublished = {Bachelor thesis, University of Paderborn},

year = {2012}

}


Sven Kurras:

Master's thesis, University of Paderborn

[Show BibTeX]

**Distributed Sampling of Regular Graphs**Master's thesis, University of Paderborn

**(2012)**[Show BibTeX]

@mastersthesis{msc2012kurras,

author = {Sven Kurras},

title = {Distributed Sampling of Regular Graphs},

school = {University of Paderborn},

year = {2012}

}


Philipp Brandes, Friedhelm Meyer auf der Heide:

In Proceedings of the 4th Workshop on Theoretical Aspects of Dynamic Distributed Systems (TADDS). ACM, ICPS, pp. 9-14

[Show Abstract]

**Distributed Computing in Fault-Prone Dynamic Networks**In Proceedings of the 4th Workshop on Theoretical Aspects of Dynamic Distributed Systems (TADDS). ACM, ICPS, pp. 9-14

**(2012)**[Show Abstract]

Dynamics in networks is caused by a variety of reasons, like nodes moving in 2D (or 3D) in multihop cellphone networks, joins and leaves in peer-to-peer networks, evolution in social networks, and many others. In order to understand such kinds of dynamics, and to design distributed algorithms that behave well under dynamics, many ways to model dynamics have been introduced and analyzed w.r.t. correctness and efficiency of distributed algorithms. In [16], Kuhn, Lynch, and Oshman have introduced a very general, worst-case type model of dynamics: The edge set of the network may change arbitrarily from step to step; the only restriction is that it is connected at all times and the set of nodes does not change. An extended model demands that a fixed connected subnetwork is maintained over each time interval of length T (T-interval dynamics). They have presented, among others, algorithms for counting the number of nodes under such general models of dynamics.

In this paper, we generalize their models and algorithms by adding random edge faults, i.e., we consider fault-prone dynamic networks: We assume that a currently existing edge may fail to transmit data with some probability p. We first observe that strong counting, i.e., each node knows the correct count and stops, is not possible in a model with random edge faults. Our main two positive results are feasibility and runtime bounds for weak counting, i.e., stopping is no longer required (but still a correct count in each node), and for strong counting with an upper bound, i.e., an upper bound N on n is known to all nodes.
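To get a feel for the fault-prone dynamic model (a toy simulation with invented parameters, not the paper's counting algorithm), one can let information spread over a fresh random topology each round while every edge fails independently with probability p:

```python
import random

def simulate_token_dissemination(n, p_fail, rng=None):
    """Toy simulation of information spreading in a fault-prone dynamic
    network: each round the n nodes are connected by a fresh random ring,
    every edge drops its messages independently with probability p_fail,
    and the endpoints of each surviving edge merge their sets of known
    node ids. Returns the number of rounds until every node knows all n
    ids (an illustration of the model, not a counting algorithm)."""
    rng = rng or random.Random(1)
    known = [{i} for i in range(n)]
    rounds = 0
    while any(len(k) < n for k in known):
        rounds += 1
        order = list(range(n))
        rng.shuffle(order)                 # fresh topology every round
        for i in range(n):
            u, v = order[i], order[(i + 1) % n]
            if rng.random() >= p_fail:     # edge survives this round
                merged = known[u] | known[v]
                known[u] = merged
                known[v] = merged
    return rounds

print(simulate_token_dissemination(32, p_fail=0.3))
```

Raising p_fail slows dissemination but, since each round's topology is connected before faults, it never stops it entirely; this is the intuition behind bounds that degrade gracefully in p.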

[Show BibTeX]

@inproceedings{BMadHTADDS,

author = {Philipp Brandes AND Friedhelm Meyer auf der Heide},

title = {Distributed Computing in Fault-Prone Dynamic Networks},

booktitle = {Proceedings of the 4th Workshop on Theoretical Aspects of Dynamic Distributed Systems (TADDS)},

year = {2012},

pages = {9-14},

publisher = {ACM},

abstract = {Dynamics in networks arises for a variety of reasons, like nodes moving in 2D (or 3D) in multihop cellphone networks, joins and leaves in peer-to-peer networks, evolution in social networks, and many others. In order to understand such kinds of dynamics, and to design distributed algorithms that behave well under dynamics, many ways to model dynamics have been introduced and analyzed w.r.t. correctness and efficiency of distributed algorithms. In [16], Kuhn, Lynch, and Oshman have introduced a very general, worst case type model of dynamics: The edge set of the network may change arbitrarily from step to step; the only restriction is that it is connected at all times and the set of nodes does not change. An extended model demands that a fixed connected subnetwork is maintained over each time interval of length T (T-interval dynamics). They have presented, among others, algorithms for counting the number of nodes under such general models of dynamics. In this paper, we generalize their models and algorithms by adding random edge faults, i.e., we consider fault-prone dynamic networks: We assume that an edge currently existing may fail to transmit data with some probability p. We first observe that strong counting, i.e., each node knows the correct count and stops, is not possible in a model with random edge faults. Our two main positive results are feasibility and runtime bounds for weak counting, i.e., stopping is no longer required (but still a correct count in each node), and for strong counting with an upper bound, i.e., an upper bound N on n is known to all nodes.},

series = {ICPS}

}

[DOI]

Stefan Schmid, Chen Avin, Christian Scheideler, Bernhard Haeupler, Zvi Lotker:

In Proceedings of the 26th International Symposium on Distributed Computing (DISC). Springer, LNCS, vol. 7611, pp. 439-440

[Show Abstract]

**Brief Announcement: SplayNets - Towards Self-Adjusting Distributed Data Structures**In Proceedings of the 26th International Symposium on Distributed Computing (DISC). Springer, LNCS, vol. 7611, pp. 439-440

**(2012)**[Show Abstract]

This paper initiates the study of self-adjusting distributed data structures for networks. In particular, we present SplayNets: a binary search tree based network that is self-adjusting to routing requests. We derive entropy bounds on the amortized routing cost and show that our splaying algorithm has some interesting properties.

[Show BibTeX] @inproceedings{SASHL2012DISC,

author = {Stefan Schmid AND Chen Avin AND Christian Scheideler AND Bernhard Haeupler AND Zvi Lotker},

title = {Brief Announcement: SplayNets - Towards Self-Adjusting Distributed Data Structures},

booktitle = {Proceedings of the 26th International Symposium on Distributed Computing (DISC)},

year = {2012},

pages = {439-440},

publisher = {Springer},

abstract = {This paper initiates the study of self-adjusting distributed data structures for networks. In particular, we present SplayNets: a binary search tree based network that is self-adjusting to routing requests. We derive entropy bounds on the amortized routing cost and show that our splaying algorithm has some interesting properties.},

series = {LNCS}

}

[DOI]

Sebastian Kniesburges, Christian Scheideler:

In Proceedings of the 26th International Symposium on Distributed Computing (DISC). Springer, LNCS, vol. 7611, pp. 435-436

[Show Abstract]

**Brief Announcement: Hashed Predecessor Patricia Trie - A Data Structure for Efficient Predecessor Queries in Peer-to-Peer Systems**In Proceedings of the 26th International Symposium on Distributed Computing (DISC). Springer, LNCS, vol. 7611, pp. 435-436

**(2012)**[Show Abstract]

The design of efficient search structures for peer-to-peer systems has attracted a lot of attention in recent years. In this announcement we address the problem of finding the predecessor in a key set and present an efficient data structure called the hashed Predecessor Patricia trie. Our hashed Predecessor Patricia trie supports PredecessorSearch(x), Insert(x), and Delete(x) in O(log log u) hash table accesses, where u is the size of the universe of the keys. That is, the costs depend only on u and not on the size of the data structure. One feature of our approach is that it only uses the lookup interface of the hash table, and therefore hash table accesses may be realized by any distributed hash table (DHT).

[Show BibTeX] @inproceedings{KS2012DISC,

author = {Sebastian Kniesburges AND Christian Scheideler},

title = {Brief Announcement: Hashed Predecessor Patricia Trie - A Data Structure for Efficient Predecessor Queries in Peer-to-Peer Systems},

booktitle = {Proceedings of the 26th International Symposium on Distributed Computing (DISC)},

year = {2012},

pages = {435-436},

publisher = {Springer},

abstract = {The design of efficient search structures for peer-to-peer systems has attracted a lot of attention in recent years. In this announcement we address the problem of finding the predecessor in a key set and present an efficient data structure called the hashed Predecessor Patricia trie. Our hashed Predecessor Patricia trie supports PredecessorSearch(x), Insert(x), and Delete(x) in O(log log u) hash table accesses, where u is the size of the universe of the keys. That is, the costs depend only on u and not on the size of the data structure. One feature of our approach is that it only uses the lookup interface of the hash table, and therefore hash table accesses may be realized by any distributed hash table (DHT).},

series = {LNCS}

}

[DOI]
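The O(log log u) bound above rests on the same idea as x-fast tries: store every prefix of every key in a hash table and binary-search over the log u possible prefix lengths. Below is a minimal, centralized sketch of that search with illustrative names; the paper's structure is distributed (hash accesses become DHT lookups) and also supports updates, which this static sketch omits.

```python
# Static sketch: predecessor search via binary search over prefix lengths.
# Assumptions (not from the paper): fixed key width W, keys built once.

W = 8  # key length in bits, i.e. universe size u = 2**W

def bits(x):
    return format(x, f'0{W}b')

class HashedPrefixTrie:
    def __init__(self, keys):
        keys = sorted(set(keys))
        # leaf links: sorted-order predecessor of every key
        self.prev = {k: p for p, k in zip(keys, keys[1:])}
        # hash table: every prefix of every key -> (min, max) key below it
        self.table = {}
        for k in keys:
            b = bits(k)
            for length in range(W + 1):
                p = b[:length]
                lo, hi = self.table.get(p, (k, k))
                self.table[p] = (min(lo, k), max(hi, k))

    def predecessor(self, x):
        """Largest stored key strictly smaller than x, or None."""
        b = bits(x)
        lo, hi = 0, W
        while lo < hi:                # binary search over prefix lengths:
            mid = (lo + hi + 1) // 2  # O(log W) = O(log log u) lookups
            if b[:mid] in self.table:
                lo = mid
            else:
                hi = mid - 1
        p = b[:lo]
        if lo == W:                   # x itself is a stored key
            return self.prev.get(x)
        if b[lo] == '1':
            # p is the longest stored prefix, so p+'1' is absent and p+'0'
            # present; every key under p+'0' is smaller than x
            return self.table[p + '0'][1]
        # b[lo] == '0': every key under p exceeds x, so the answer is the
        # leaf-link predecessor of the smallest key under p
        return self.prev.get(self.table[p][0])
```

For example, with keys {3, 9, 17, 200} and W = 8, predecessor(10) binary-searches for the longest stored prefix of 00001010 and lands on 9.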

Andreas Cord-Landwehr, Martina Huellmann (married name: Eikel), Peter Kling, Alexander Setzer:

In Proceedings of the 5th International Symposium on Algorithmic Game Theory (SAGT). Springer, LNCS, vol. 7615, pp. 72-83

[Show Abstract]

**Basic Network Creation Games with Communication Interests**In Proceedings of the 5th International Symposium on Algorithmic Game Theory (SAGT). Springer, LNCS, vol. 7615, pp. 72-83

**(2012)**[Show Abstract]

Network creation games model the creation and usage costs of networks formed by a set of selfish peers.

Each peer has the ability to change the network in a limited way, e.g., by creating or deleting incident links.

In doing so, a peer can reduce its individual communication cost.

Typically, these costs are modeled by the maximum or average distance in the network.

We introduce a generalized version of the basic network creation game (BNCG).

In the BNCG (by Alon et al., SPAA 2010), each peer may replace one of its incident links by a link to an arbitrary peer.

This is done in a selfish way in order to minimize either the maximum or average distance to all other peers.

That is, each peer works towards a network structure that allows himself to communicate efficiently with all other peers.

However, participants of large networks are seldom interested in all peers.

Rather, they want to communicate efficiently with a small subset only.

Our model incorporates these (communication) interests explicitly.

Given peers with interests and a communication network forming a tree, we prove several results on the structure and quality of equilibria in our model.

We focus on the MAX-version, i.e., each node tries to minimize the maximum distance to nodes it is interested in, and give an upper bound of O(\sqrt(n)) for the private costs in an equilibrium of n peers.

Moreover, we give an equilibrium for a circular interest graph where a node has private cost Omega(\sqrt(n)), showing that our bound is tight.

This example can be extended such that we get a tight bound of Theta(\sqrt(n)) for the price of anarchy.

For the case of general networks we show the price of anarchy to be Theta(n).

Additionally, we prove an interesting connection between a maximum independent set in the interest graph and the private costs of the peers.

[Show BibTeX] @inproceedings{IBNCGSAGT12,

author = {Andreas Cord-Landwehr AND Martina Huellmann (married name: Eikel) AND Peter Kling AND Alexander Setzer},

title = {Basic Network Creation Games with Communication Interests},

booktitle = {Proceedings of the 5th International Symposium on Algorithmic Game Theory (SAGT)},

year = {2012},

pages = {72--83},

publisher = {Springer},

abstract = {Network creation games model the creation and usage costs of networks formed by a set of selfish peers. Each peer has the ability to change the network in a limited way, e.g., by creating or deleting incident links. In doing so, a peer can reduce its individual communication cost. Typically, these costs are modeled by the maximum or average distance in the network. We introduce a generalized version of the basic network creation game (BNCG). In the BNCG (by Alon et al., SPAA 2010), each peer may replace one of its incident links by a link to an arbitrary peer. This is done in a selfish way in order to minimize either the maximum or average distance to all other peers. That is, each peer works towards a network structure that allows himself to communicate efficiently with all other peers. However, participants of large networks are seldom interested in all peers. Rather, they want to communicate efficiently with a small subset only. Our model incorporates these (communication) interests explicitly. Given peers with interests and a communication network forming a tree, we prove several results on the structure and quality of equilibria in our model. We focus on the MAX-version, i.e., each node tries to minimize the maximum distance to nodes it is interested in, and give an upper bound of O(\sqrt(n)) for the private costs in an equilibrium of n peers. Moreover, we give an equilibrium for a circular interest graph where a node has private cost Omega(\sqrt(n)), showing that our bound is tight. This example can be extended such that we get a tight bound of Theta(\sqrt(n)) for the price of anarchy. For the case of general networks we show the price of anarchy to be Theta(n). Additionally, we prove an interesting connection between a maximum independent set in the interest graph and the private costs of the peers.},

series = {LNCS}

}

[DOI]
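The MAX-version private cost described in the abstract above is simply a peer's maximum hop distance to the nodes in its interest set. A small sketch of that cost computation via BFS (helper name and graph encoding are illustrative, not from the paper):

```python
from collections import deque

def private_cost(adj, v, interests):
    """MAX-version private cost of peer v: the maximum hop distance from v
    to the peers it is interested in. adj maps each node to a list of its
    neighbors; interests is the set of nodes v wants to reach."""
    dist = {v: 0}
    queue = deque([v])
    while queue:  # standard BFS from v
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return max(dist[t] for t in interests)
```

On the path 0-1-2-3, peer 0 with interest set {2, 3} has private cost 3, while peer 1 with interest set {0, 3} has private cost 2.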

Petr Kolman, Christian Scheideler:

In Proceedings of the 23rd ACM SIAM Symposium on Discrete Algorithms (SODA). SIAM, pp. 800-810

[Show Abstract]

**Approximate Duality of Multicommodity Multiroute Flows and Cuts: Single Source Case**In Proceedings of the 23rd ACM SIAM Symposium on Discrete Algorithms (SODA). SIAM, pp. 800-810

**(2012)**[Show Abstract]

Given an integer h, a graph G = (V, E) with arbitrary positive edge capacities, and k pairs of vertices (s1, t1), (s2, t2), ..., (sk, tk), called terminals, an h-route cut is a set F ⊆ E of edges such that after the removal of the edges in F no pair si-ti is connected by h edge-disjoint paths (i.e., the connectivity of every si-ti pair is at most h - 1 in (V, E \ F)). The h-route cut is a natural generalization of the classical cut problem for multicommodity flows (take h = 1). The main result of this paper is an O(h^7 2^(2h) log^2 k)-approximation algorithm for the minimum h-route cut problem in the case that s1 = s2 = ... = sk, called the single source case. As a corollary of it we obtain an approximate duality theorem for multiroute multicommodity flows and cuts with a single source. This partially answers an open question posted in several previous papers dealing with cuts for multicommodity multiroute problems.

[Show BibTeX] @inproceedings{SODA12KS,

author = {Petr Kolman AND Christian Scheideler},

title = {Approximate Duality of Multicommodity Multiroute Flows and Cuts: Single Source Case},

booktitle = {Proceedings of the 23rd ACM SIAM Symposium on Discrete Algorithms (SODA)},

year = {2012},

pages = {800-810},

publisher = {SIAM},

abstract = {Given an integer h, a graph G = (V, E) with arbitrary positive edge capacities, and k pairs of vertices (s1, t1), (s2, t2), ..., (sk, tk), called terminals, an h-route cut is a set F ⊆ E of edges such that after the removal of the edges in F no pair si-ti is connected by h edge-disjoint paths (i.e., the connectivity of every si-ti pair is at most h - 1 in (V, E \ F)). The h-route cut is a natural generalization of the classical cut problem for multicommodity flows (take h = 1). The main result of this paper is an O(h^7 2^(2h) log^2 k)-approximation algorithm for the minimum h-route cut problem in the case that s1 = s2 = ... = sk, called the single source case. As a corollary of it we obtain an approximate duality theorem for multiroute multicommodity flows and cuts with a single source. This partially answers an open question posted in several previous papers dealing with cuts for multicommodity multiroute problems.}

}

[DOI]
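Restated in standard notation, the h-route cut condition from the abstract above reads:

```latex
\text{Given } h \in \mathbb{N} \text{ and terminal pairs } (s_1,t_1),\dots,(s_k,t_k),
\text{ a set } F \subseteq E \text{ is an } h\text{-route cut if, for every } i,
\text{ the graph } (V, E \setminus F) \text{ contains at most } h-1
\text{ edge-disjoint } s_i\text{--}t_i \text{ paths.}
```

For h = 1 this is exactly the classical multicut condition: no si-ti pair may remain connected at all.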

Friedhelm Meyer auf der Heide, Peter Pietrzyk, Peter Kling:

In Proceedings of the 19th International Colloquium on Structural Information & Communication Complexity (SIROCCO). Springer, LNCS, vol. 7355, pp. 61-72

[Show Abstract]

**An Algorithm for Facility Leasing**In Proceedings of the 19th International Colloquium on Structural Information & Communication Complexity (SIROCCO). Springer, LNCS, vol. 7355, pp. 61-72

**(2012)**[Show Abstract]

We consider an online facility location problem where clients arrive over time and their demands have to be served by opening facilities and assigning the clients to opened facilities. When opening a facility we must choose one of K different lease types to use. A lease type k has a certain lease length l_k. Opening a facility i using lease type k causes a cost of f_i^k and ensures that i is open for the next l_k time steps. In addition to costs for opening facilities, we have to take connection costs c_ij into account when assigning a client j to facility i. We develop and analyze the first online algorithm for this problem that has a time-independent competitive factor.

This variant of the online facility location problem was introduced by Nagarajan and Williamson [7] and is strongly related to both the online facility problem by Meyerson [5] and the parking permit problem by Meyerson [6]. Nagarajan and Williamson gave a 3-approximation algorithm for the offline problem and an O(K log n)-competitive algorithm for the online variant. Here, n denotes the total number of clients arriving over time. We extend their result by removing the dependency on n (and thereby on the time). In general, our algorithm is O(l_max log(l_max))-competitive. Here l_max denotes the maximum lease length. Moreover, we prove that it is O(log^2(l_max))-competitive for many "natural" cases. Such cases include, for example, situations where the number of clients arriving in each time step does not vary too much, or is non-increasing, or is polynomially bounded in l_max.

[Show BibTeX] @inproceedings{OFLSIROCCO12,

author = {Friedhelm Meyer auf der Heide AND Peter Pietrzyk AND Peter Kling},

title = {An Algorithm for Facility Leasing},

booktitle = {Proceedings of the 19th International Colloquium on Structural Information & Communication Complexity (SIROCCO)},

year = {2012},

pages = {61-72},

publisher = {Springer},

abstract = {We consider an online facility location problem where clients arrive over time and their demands have to be served by opening facilities and assigning the clients to opened facilities. When opening a facility we must choose one of K different lease types to use. A lease type k has a certain lease length l_k. Opening a facility i using lease type k causes a cost of f_i^k and ensures that i is open for the next l_k time steps. In addition to costs for opening facilities, we have to take connection costs c_ij into account when assigning a client j to facility i. We develop and analyze the first online algorithm for this problem that has a time-independent competitive factor. This variant of the online facility location problem was introduced by Nagarajan and Williamson [7] and is strongly related to both the online facility problem by Meyerson [5] and the parking permit problem by Meyerson [6]. Nagarajan and Williamson gave a 3-approximation algorithm for the offline problem and an O(K log n)-competitive algorithm for the online variant. Here, n denotes the total number of clients arriving over time. We extend their result by removing the dependency on n (and thereby on the time). In general, our algorithm is O(l_max log(l_max))-competitive. Here l_max denotes the maximum lease length. Moreover, we prove that it is O(log^2(l_max))-competitive for many "natural" cases. Such cases include, for example, situations where the number of clients arriving in each time step does not vary too much, or is non-increasing, or is polynomially bounded in l_max.},

series = {LNCS}

}

[DOI]
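The leasing model in the abstract above can be made concrete with a small cost function (all names are illustrative; this is a feasibility and cost check, not the paper's algorithm): opening facility i with lease type k at time t costs f[i][k] and covers the interval [t, t + l[k]), and assigning client j, arriving at time t, to facility i adds connection cost c[i][j].

```python
def total_cost(opens, assignments, f, l, c):
    """Total leasing + connection cost of a candidate solution.
    opens: list of (facility i, lease type k, lease start time t).
    assignments: list of (client j, arrival time t, facility i).
    Raises ValueError if a client's arrival is not covered by any
    lease of its assigned facility."""
    cost = sum(f[i][k] for (i, k, t) in opens)
    for (j, t, i) in assignments:
        covered = any(i2 == i and t0 <= t < t0 + l[k]
                      for (i2, k, t0) in opens)
        if not covered:
            raise ValueError(f"client {j} not covered at time {t}")
        cost += c[i][j]
    return cost
```

For instance, with lease lengths l = [1, 3], one lease of type 1 on facility 0 starting at time 0 covers clients arriving at times 0 through 2, so a single opening can serve several clients at one leasing cost.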

Friederike Dawirs:

Bachelor thesis, University of Paderborn

[Show BibTeX]

**Alternative Berechnung der Machtindizes: Banzhaf und Shapley-Shubik Index**Bachelor thesis, University of Paderborn

**(2012)**[Show BibTeX]

@misc{Dawirs12,

author = {Friederike Dawirs},

title = {Alternative Berechnung der Machtindizes: Banzhaf und Shapley-Shubik Index},

year = {2012}

}


Fabian Eidens:

Bachelor thesis, University of Paderborn

[Show BibTeX]

**Adaptive Verbindungsstrategien in dynamischen Suchnetzwerken**Bachelor thesis, University of Paderborn

**(2012)**[Show BibTeX]

@misc{bsc2012eidens,

author = {Fabian Eidens},

title = {Adaptive Verbindungsstrategien in dynamischen Suchnetzwerken},

year = {2012}

}


Sebastian Kniesburges, Andreas Koutsopoulos, Christian Scheideler:

In Proceedings of the 26th IEEE International Parallel and Distributed Processing Symposium (IPDPS). IEEE Computer Society, pp. 1261-1271

[Show Abstract]

**A Self-Stabilization Process for Small-World Networks**In Proceedings of the 26th IEEE International Parallel and Distributed Processing Symposium (IPDPS). IEEE Computer Society, pp. 1261-1271

**(2012)**[Show Abstract]

Small-world networks have received significant attention because of their potential as models for the interaction networks of complex systems. Specifically, neither random networks nor regular lattices seem to be an adequate framework within which to study real-world complex systems such as chemical-reaction networks, neural networks, food webs, social networks, scientific-collaboration networks, and computer networks. Small-world networks provide some desired properties like an expected polylogarithmic distance between two processes in the network, which allows routing in polylogarithmic hops by simple greedy routing, and robustness against attacks or failures. By these properties, small-world networks are possible solutions for large overlay networks comparable to structured overlay networks like CAN, Pastry, Chord, which also provide polylogarithmic routing, but due to their uniform structure, structured overlay networks are more vulnerable to attacks or failures. In this paper we bring together a randomized process converging to a small-world network and a self-stabilization process so that a small-world network is formed out of any weakly connected initial state. To the best of our knowledge this is the first distributed self-stabilization process for building a small-world network.

[Show BibTeX] @inproceedings{IPDPS12KKS,

author = {Sebastian Kniesburges AND Andreas Koutsopoulos AND Christian Scheideler},

title = {A Self-Stabilization Process for Small-World Networks},

booktitle = {Proceedings of the 26th IEEE International Parallel and Distributed Processing Symposium (IPDPS)},

year = {2012},

pages = {1261--1271},

publisher = {IEEE Computer Society},

abstract = {Small-world networks have received significant attention because of their potential as models for the interaction networks of complex systems. Specifically, neither random networks nor regular lattices seem to be an adequate framework within which to study real-world complex systems such as chemical-reaction networks, neural networks, food webs, social networks, scientific-collaboration networks, and computer networks. Small-world networks provide some desired properties like an expected polylogarithmic distance between two processes in the network, which allows routing in polylogarithmic hops by simple greedy routing, and robustness against attacks or failures. By these properties, small-world networks are possible solutions for large overlay networks comparable to structured overlay networks like CAN, Pastry, Chord, which also provide polylogarithmic routing, but due to their uniform structure, structured overlay networks are more vulnerable to attacks or failures. In this paper we bring together a randomized process converging to a small-world network and a self-stabilization process so that a small-world network is formed out of any weakly connected initial state. To the best of our knowledge this is the first distributed self-stabilization process for building a small-world network.}

}

[DOI]

Jonathan Schluessler:

**A Forensic Framework for Automatic Information Retrieval in Distributed Systems**

Master's thesis, University of Paderborn **(2012)**

[Show BibTeX]

@mastersthesis{msc2012Schluessler,

author = {Jonathan Schluessler},

title = {A Forensic Framework for Automatic Information Retrieval in Distributed Systems},

school = {University of Paderborn},

year = {2012}

}


**2011** (11)

Matthias Diehl:

**Vorteile der Paketbildung in Verhandlungen: Ein prozeduraler Zugang zu Superadditivität**

Bachelor thesis, University of Paderborn **(2011)**

[Show BibTeX]

@misc{Diehl11,

author = {Matthias Diehl},

title = {Vorteile der Paketbildung in Verhandlungen: Ein prozeduraler Zugang zu Superadditivit{\"a}t},

year = {2011}

}


Philipp Brandes:

**Robust Distributed Computation in Dynamic Networks**

Master's thesis, University of Paderborn **(2011)**

[Show BibTeX]

@mastersthesis{msc2011brandes,

author = {Philipp Brandes},

title = {Robust Distributed Computation in Dynamic Networks},

school = {University of Paderborn},

year = {2011}

}


Nadja Maraun:

**Prozedurale Ansätze zur Lösung mehrdimensionaler Verhandlungsprobleme**

Bachelor thesis, University of Paderborn **(2011)**

[Show BibTeX]

@misc{Maraun11,

author = {Nadja Maraun},

title = {Prozedurale Ans{\"a}tze zur L{\"o}sung mehrdimensionaler Verhandlungsprobleme},

year = {2011}

}


Kalman Graffi:

**PeerfactSim.KOM: A P2P System Simulator - Experiences and Lessons Learned**

In Proceedings of the IEEE International Conference on Peer-to-Peer Computing (IEEE P2P). IEEE Computer Society, pp. 154-155 **(2011)**

[Show Abstract]

Research on peer-to-peer (p2p) and distributed systems needs evaluation tools to predict and observe the behavior of protocols and mechanisms in large scale networks. PeerfactSim.KOM is a simulator for large scale distributed/p2p systems aiming at the evaluation of interdependencies in multi-layered p2p systems. The simulator is written in Java, is event-based and mainly used in p2p research projects. The main development of PeerfactSim.KOM started in 2005 and is driven since 2006 by the project “QuaP2P”, which aims at the systematic improvement and benchmarking of p2p systems. Further users of the simulator are working in the project “On-the-fly Computing” aiming at researching p2p-based service oriented architectures. Both projects state severe requirements on the evaluation of multi-layered and large-scale distributed systems. We describe the architecture of PeerfactSim.KOM supporting these requirements in Section II, present the workflow, selected experiences and lessons learned in Section III and conclude the overview in Section IV.

[Show BibTeX] @inproceedings{PsP11G,

author = {Kalman Graffi},

title = {PeerfactSim.KOM: A P2P System Simulator - Experiences and Lessons Learned},

booktitle = {Proceedings of the IEEE International Conference on Peer-to-Peer Computing (IEEE P2P)},

year = {2011},

pages = {154--155},

publisher = {IEEE Computer Society},

abstract = {Research on peer-to-peer (p2p) and distributed systems needs evaluation tools to predict and observe the behavior of protocols and mechanisms in large scale networks. PeerfactSim.KOM is a simulator for large scale distributed/p2p systems aiming at the evaluation of interdependencies in multi-layered p2p systems. The simulator is written in Java, is event-based and mainly used in p2p research projects. The main development of PeerfactSim.KOM started in 2005 and is driven since 2006 by the project “QuaP2P”, which aims at the systematic improvement and benchmarking of p2p systems. Further users of the simulator are working in the project “On-the-fly Computing” aiming at researching p2p-based service oriented architectures. Both projects state severe requirements on the evaluation of multi-layered and large-scale distributed systems. We describe the architecture of PeerfactSim.KOM supporting these requirements in Section II, present the workflow, selected experiences and lessons learned in Section III and conclude the overview in Section IV.}

}

[DOI]

Sebastian Abshoff, Andreas Cord-Landwehr, Bastian Degener, Barbara Kempkes, Peter Pietrzyk:

**Local Approximation Algorithms for the Uncapacitated Metric Facility Location Problem in Power-Aware Sensor Networks**

In Proceedings of the 7th International Symposium on Algorithms for Sensor Systems, Wireless Ad Hoc Networks and Autonomous Mobile Entities (ALGOSENSORS). Springer, LNCS, vol. 7111, pp. 13-27 **(2011)**

[Show Abstract]

We present two distributed, constant factor approximation algorithms for the metric facility location problem. Both algorithms have been designed with a strong emphasis on applicability in the area of wireless sensor networks: in order to execute them, each sensor node only requires limited local knowledge and simple computations. Also, the algorithms can cope with measurement errors and take into account that communication costs between sensor nodes do not necessarily increase linearly with the distance, but can be represented by a polynomial. Since it cannot always be expected that sensor nodes execute algorithms in a synchronized way, our algorithms are executed in an asynchronous model (but they are still able to break symmetry that might occur when two neighboring nodes act at exactly the same time). Furthermore, they can deal with dynamic scenarios: if a node moves, the solution is updated and the update affects only nodes in the local neighborhood. Finally, the algorithms are robust in the sense that incorrect behavior of some nodes during some round will, in the end, still result in a good approximation. The first algorithm runs in expected O(log_{1+\epsilon} n) communication rounds and yields a \mu^4(1+4\mu^2(1+\epsilon)^{1/p})^p approximation, while the second has a running time of expected O(log^2_{1+\epsilon} n) communication rounds and an approximation factor of \mu^4(1 + 2(1 + \epsilon)^{1/p})^p. Here, \epsilon > 0 is an arbitrarily small constant, p the exponent of the polynomial representing the communication costs, and \mu the relative measurement error.

[Show BibTeX] @inproceedings{ALGO12ACDKP,

author = {Sebastian Abshoff AND Andreas Cord-Landwehr AND Bastian Degener AND Barbara Kempkes AND Peter Pietrzyk},

title = {Local Approximation Algorithms for the Uncapacitated Metric Facility Location Problem in Power-Aware Sensor Networks},

booktitle = {Proceedings of the 7th International Symposium on Algorithms for Sensor Systems, Wireless Ad Hoc Networks and Autonomous Mobile Entities (ALGOSENSORS)},

year = {2011},

pages = {13--27},

publisher = {Springer},

abstract = {We present two distributed, constant factor approximation algorithms for the metric facility location problem. Both algorithms have been designed with a strong emphasis on applicability in the area of wireless sensor networks: in order to execute them, each sensor node only requires limited local knowledge and simple computations. Also, the algorithms can cope with measurement errors and take into account that communication costs between sensor nodes do not necessarily increase linearly with the distance, but can be represented by a polynomial. Since it cannot always be expected that sensor nodes execute algorithms in a synchronized way, our algorithms are executed in an asynchronous model (but they are still able to break symmetry that might occur when two neighboring nodes act at exactly the same time). Furthermore, they can deal with dynamic scenarios: if a node moves, the solution is updated and the update affects only nodes in the local neighborhood. Finally, the algorithms are robust in the sense that incorrect behavior of some nodes during some round will, in the end, still result in a good approximation. The first algorithm runs in expected O(log_{1+\epsilon} n) communication rounds and yields a \mu^4(1+4\mu^2(1+\epsilon)^{1/p})^p approximation, while the second has a running time of expected O(log^2_{1+\epsilon} n) communication rounds and an approximation factor of \mu^4(1 + 2(1 + \epsilon)^{1/p})^p. Here, \epsilon > 0 is an arbitrarily small constant, p the exponent of the polynomial representing the communication costs, and \mu the relative measurement error.},

series = {LNCS}

}

[DOI]

Manuel Peuster:

**Defining and Deploying Complex Appliances in Multi-Site Cloud Environments**

Bachelor thesis, University of Paderborn **(2011)**

[Show BibTeX]

@misc{MP2011,

author = {Manuel Peuster},

title = {Defining and Deploying Complex Appliances in Multi-Site Cloud Environments},

year = {2011}

}


Mikhail Nesterenko, Rizal Mohd Nor, Christian Scheideler:

**Corona: A Stabilizing Deterministic Message-Passing Skip List**

In Proceedings of the 13th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS). Springer, LNCS, vol. 6976, pp. 356-370 **(2011)**

[Show Abstract]

We present Corona, a deterministic self-stabilizing algorithm for skip list construction in structured overlay networks. Corona operates in the low-atomicity message-passing asynchronous system model. Corona requires constant process memory space for its operation and, therefore, scales well. We prove the general necessary conditions limiting the initial states from which a self-stabilizing structured overlay network in a message-passing system can be constructed. The conditions require that initial state information has to form a weakly connected graph and it should only contain identifiers that are present in the system. We formally describe Corona and rigorously prove that it stabilizes from an arbitrary initial state subject to the necessary conditions. We extend Corona to construct a skip graph.

[Show BibTeX] @inproceedings{SSS12NNS,

author = {Mikhail Nesterenko AND Rizal Mohd Nor AND Christian Scheideler},

title = {Corona: A Stabilizing Deterministic Message-Passing Skip List},

booktitle = {Proceedings of the 13th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS)},

year = {2011},

pages = {356--370},

publisher = {Springer},

abstract = {We present Corona, a deterministic self-stabilizing algorithm for skip list construction in structured overlay networks. Corona operates in the low-atomicity message-passing asynchronous system model. Corona requires constant process memory space for its operation and, therefore, scales well. We prove the general necessary conditions limiting the initial states from which a self-stabilizing structured overlay network in a message-passing system can be constructed. The conditions require that initial state information has to form a weakly connected graph and it should only contain identifiers that are present in the system. We formally describe Corona and rigorously prove that it stabilizes from an arbitrary initial state subject to the necessary conditions. We extend Corona to construct a skip graph.},

series = {LNCS}

}

[DOI]

Kamil Swierkot:

**Complexity Classes for Local Computation**

Master's thesis, University of Paderborn **(2011)**

[Show BibTeX]

@mastersthesis{msc2011swierkot,

author = {Kamil Swierkot},

title = {Complexity Classes for Local Computation},

school = {University of Paderborn},

year = {2011}

}


Philip Wette:

**Adaptives Loadbalancing für strukturierte Peer-to-Peer-Netzwerke am Beispiel von Chord**

Master's thesis, University of Paderborn **(2011)**

[Show BibTeX]

@mastersthesis{msc2011Wette,

author = {Philip Wette},

title = {Adaptives Loadbalancing f{\"u}r strukturierte Peer-to-Peer-Netzwerke am Beispiel von Chord},

school = {University of Paderborn},

year = {2011}

}


Friedhelm Meyer auf der Heide, Rajmohan Rajaraman (eds.):

**23rd Annual ACM Symposium on Parallelism in Algorithms and Architectures**

ACM **(2011)**

[Show BibTeX]

@proceedings{FMRR2011,

title = {23rd Annual ACM Symposium on Parallelism in Algorithms and Architectures},

year = {2011},

editor = {Friedhelm Meyer auf der Heide AND Rajmohan Rajaraman},

publisher = {ACM},

month = {June}

}

[DOI]

Daniel Kaimann:

**"To infinity and beyond!" - A genre-specific film analysis of movie success mechanisms**

Techreport UPB **(2011)**

[Show Abstract]

The objective of this study is the analysis of movie success mechanisms in a genre-specific context. Instead of the examination of all-time box office champions, we focus on the two film genres of computer animated and comic book based films. By introducing the concept of the motion-picture marketing mix, which represents a set of tactical marketing tools in order to strengthen a company’s strategic customer orientation, we are able to systematically identify key movie success factors. We conduct a cross-sectional empirical analysis across regional distinctions based on a dataset that covers a time horizon of more than 30 years. We find empirical evidence that actors with ex ante popularity, award nominations and the production budget represent key movie success mechanisms and significantly influence a movie’s commercial appeal. Additionally, word-of-mouth creates reputation effects that also significantly affect box office gross.

[Show BibTeX] @techreport{DK11,

author = {Daniel Kaimann},

title = {"To infinity and beyond!" - A genre-specific film analysis of movie success mechanisms},

year = {2011},

type = {Techreport UPB},

abstract = {The objective of this study is the analysis of movie success mechanisms in a genre-specific context. Instead of the examination of all-time box office champions, we focus on the two film genres of computer animated and comic book based films. By introducing the concept of the motion-picture marketing mix, which represents a set of tactical marketing tools in order to strengthen a company’s strategic customer orientation, we are able to systematically identify key movie success factors. We conduct a cross-sectional empirical analysis across regional distinctions based on a dataset that covers a time horizon of more than 30 years. We find empirical evidence that actors with ex ante popularity, award nominations and the production budget represent key movie success mechanisms and significantly influence a movie’s commercial appeal. Additionally, word-of-mouth creates reputation effects that also significantly affect box office gross.}

}
