# Publications

**2016** (6)

Sebastian Abshoff, Peter Kling, Christine Markarian, Friedhelm Meyer auf der Heide, Peter Pietrzyk:

**Towards the price of leasing online.** In *Journal of Combinatorial Optimization*, vol. 32, no. 4, pp. 1197-1216 (2016).

We consider online optimization problems in which certain goods have to be acquired in order to provide a service or infrastructure. Classically, decisions for such problems are considered final: one buys the goods. However, in many real-world applications, there is a shift away from the idea of buying goods. Instead, leasing is often a more flexible and lucrative business model. Research has realized this shift and recently initiated the theoretical study of leasing models (Anthony and Gupta in Proceedings of the 12th International IPCO Conference, Ithaca, NY, USA, 2007; Meyerson in Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2005), Pittsburgh, PA, USA, 2005; Nagarajan and Williamson in Discret Optim 10(4):361-370, 2013). We extend this line of work and suggest a more systematic study of leasing aspects for a class of online optimization problems. We provide two major technical results. We introduce the leasing variant of online set multicover and give an O(log(mK) log n)-competitive algorithm (with n, m, and K being the number of elements, sets, and leases, respectively). Our results also imply improvements for the non-leasing variant of online set cover. Moreover, we extend results for the leasing variant of online facility location. Nagarajan and Williamson (Discret Optim 10(4):361-370, 2013) gave an O(K log n)-competitive algorithm for this problem (with n and K being the number of clients and leases, respectively). We remove the dependency on n (and, thereby, on time). In general, this leads to a bound of O(l_max log l_max) (with the maximal lease length l_max). For many natural problem instances, the bound improves to O(K²).
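The leasing cost model behind this line of work (K lease types, where longer leases cost less per time unit) can be made concrete with a small offline benchmark: a dynamic program that computes the optimal leasing cost for a demand sequence known in advance. This is only an illustration of the cost model against which online algorithms are compared, not any algorithm from the paper; the function name and the `(length, cost)` encoding of lease types are our own.

```python
from bisect import bisect_left
from functools import lru_cache

def optimal_lease_cost(days, leases):
    """Offline optimum for a parking-permit-style leasing instance.

    days   -- days on which a demand must be served
    leases -- list of (length, cost) pairs, one per lease type
    Returns the minimum total cost of leases covering every demand day.
    """
    days = sorted(set(days))
    n = len(days)

    @lru_cache(maxsize=None)
    def dp(i):
        # dp(i): minimum cost to cover the demand days days[i:].
        if i >= n:
            return 0
        best = float("inf")
        for length, cost in leases:
            # Start a lease of this type at the first uncovered day;
            # it covers days[i] .. days[i] + length - 1.
            nxt = bisect_left(days, days[i] + length)
            best = min(best, cost + dp(nxt))
        return best

    return dp(0)
```

For example, with demands on days 1, 2, 3, and 10 and lease types (1 day, cost 1) and (7 days, cost 3), the optimum buys one week lease for the first cluster and one day lease for day 10, total cost 4.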

```bibtex
@article{AKMMP17,
  author  = {Sebastian Abshoff and Peter Kling and Christine Markarian and Friedhelm Meyer auf der Heide and Peter Pietrzyk},
  title   = {Towards the price of leasing online},
  journal = {Journal of Combinatorial Optimization},
  year    = {2016},
  volume  = {32},
  number  = {4},
  pages   = {1197--1216}
}
```


Christian Scheideler, Alexander Setzer, Thim Strothmann:

**Towards a Universal Approach for Monotonic Searchability in Self-stabilizing Overlay Networks.** In Proceedings of the 30th International Symposium on Distributed Computing (DISC). Springer, LNCS, vol. 9888, pp. 71-84 (2016).

For overlay networks, the ability to recover from a variety of problems like membership changes or faults is a key element to preserve their functionality. In recent years, various self-stabilizing overlay networks have been proposed that have the advantage of being able to recover from any illegal state. However, the vast majority of these networks cannot give any guarantees on their functionality while the recovery process is going on. We are especially interested in searchability, i.e., the guarantee that search messages for a specific identifier are answered successfully if a node with that identifier exists in the network. We investigate overlay networks that are not only self-stabilizing but that also ensure that monotonic searchability is maintained while the recovery process is going on, as long as there are no corrupted messages in the system. More precisely, once a search message from node u to another node v is successfully delivered, all future search messages from u to v succeed as well. Monotonic searchability was recently introduced at OPODIS 2015, where the authors provide a solution for a simple line topology.

We present the first universal approach to maintain monotonic searchability that is applicable to a wide range of topologies. As the base for our approach, we introduce a set of primitives for manipulating overlay networks that allows us to maintain searchability, and we show how existing protocols can be transformed to use these primitives.

We complement this result with a generic search protocol that together with the use of our primitives guarantees monotonic searchability.

As an additional feature, searching existing nodes with the generic search protocol is as fast as searching a node with any other fixed routing protocol once the topology has stabilized.
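The monotonicity condition defined in the abstract (once a search from u to v succeeds, every later u-to-v search must succeed too) can be illustrated by a small checker over a chronological log of search outcomes. The `(u, v, success)` log format is an assumption of this sketch, not part of the paper.

```python
def is_monotonically_searchable(log):
    """Check a chronological log of search outcomes against monotonic
    searchability: once a (u, v) search succeeded, no later search
    from u to v may fail.

    log -- list of (u, v, success) tuples in delivery order
    """
    succeeded = set()
    for u, v, success in log:
        if success:
            succeeded.add((u, v))
        elif (u, v) in succeeded:
            return False  # a previously successful pair failed later
    return True
```

Note that an early failure followed by a later success is allowed (the network may still be stabilizing); only a success followed by a failure violates the property.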


```bibtex
@inproceedings{disc2016sss,
  author    = {Christian Scheideler and Alexander Setzer and Thim Strothmann},
  title     = {Towards a Universal Approach for Monotonic Searchability in Self-stabilizing Overlay Networks},
  booktitle = {Proceedings of the 30th International Symposium on Distributed Computing (DISC)},
  year      = {2016},
  pages     = {71--84},
  publisher = {Springer},
  series    = {LNCS},
  volume    = {9888}
}
```


Matthias Feldotto, Kalman Graffi:

**Systematic evaluation of peer-to-peer systems using PeerfactSim.KOM.** In *Concurrency and Computation: Practice and Experience*, vol. 28, no. 5, pp. 1655-1677 (2016).

Comparative evaluations of peer-to-peer protocols through simulations are a viable approach to judge the performance and costs of the individual protocols in large-scale networks. To support this work, we present the peer-to-peer system simulator PeerfactSim.KOM, which we have extended over the past years. PeerfactSim.KOM comes with an extensive layer model to support various facets and protocols of peer-to-peer networking. In this article, we describe PeerfactSim.KOM and show how it can be used for detailed measurements of large-scale peer-to-peer networks. We enhanced PeerfactSim.KOM with a fine-grained analyzer concept, with exhaustive automated measurements and gnuplot generators, as well as a coordination control to evaluate sets of experiment setups in parallel. Thus, by configuring all experiments and protocols only once and starting the simulator, all desired measurements are performed, analyzed, evaluated, and combined, resulting in a holistic environment for the comparative evaluation of peer-to-peer systems. An immediate comparison of different configurations and overlays under different aspects is possible directly after the execution, without any manual post-processing.

```bibtex
@article{FG16,
  author  = {Matthias Feldotto and Kalman Graffi},
  title   = {Systematic evaluation of peer-to-peer systems using PeerfactSim.KOM},
  journal = {Concurrency and Computation: Practice and Experience},
  year    = {2016},
  month   = {April},
  volume  = {28},
  number  = {5},
  pages   = {1655--1677}
}
```


Maximilian Drees, Björn Feldkord, Alexander Skopalik:

**Strategic Online Facility Location.** In Proceedings of the 10th Annual International Conference on Combinatorial Optimization and Applications (COCOA). Springer, LNCS, vol. 10043, pp. 593-607 (2016).

In this paper we consider a strategic variant of the online facility location problem. We are given a graph in which each node serves two roles: it is a strategic client stating requests as well as a potential location for a facility. In each time step one client states a request which induces private costs equal to the distance to the closest facility. Before serving, the clients may collectively decide to open new facilities, sharing the corresponding price. Instead of optimizing the global costs, each client acts selfishly. The prices of new facilities vary between nodes and also change over time, but are always bounded by some fixed value α. Both the requests and the facility prices are given by an online sequence and are not known in advance.

We characterize the optimal strategies of the clients and analyze their overall performance in comparison to a centralized offline solution. If all players optimize their own competitiveness, the global performance of the system is O(√α⋅α) times worse than the offline optimum. A restriction to a natural subclass of strategies improves this result to O(α). We also show that for fixed facility costs, we can find strategies such that this bound further improves to O(√α).
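The clients' private costs in this model (distance to the closest open facility on the graph) can be computed with a multi-source BFS. This sketch only evaluates costs for a given set of open facilities; it says nothing about the strategic behaviour or competitive bounds analysed in the paper, and the function name and adjacency-list encoding are our own.

```python
from collections import deque

def client_costs(adj, facilities):
    """Distance from every node to its closest open facility.

    adj        -- adjacency list {node: [neighbors]} of an undirected graph
    facilities -- iterable of nodes where a facility is currently open
    Returns {node: hop distance to the nearest facility} (None if unreachable).
    """
    dist = {v: None for v in adj}
    queue = deque()
    # Seed the BFS with all open facilities at distance 0.
    for f in facilities:
        dist[f] = 0
        queue.append(f)
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if dist[w] is None:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist
```

On a path 0-1-2-3 with a single facility at node 0, the costs are 0, 1, 2, 3; opening a second facility at node 3 drops node 2's cost to 1, which is the kind of trade-off the strategic clients weigh against the opening price.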


```bibtex
@inproceedings{SOFL16,
  author    = {Maximilian Drees and Bj{\"o}rn Feldkord and Alexander Skopalik},
  title     = {Strategic Online Facility Location},
  booktitle = {Proceedings of the 10th Annual International Conference on Combinatorial Optimization and Applications (COCOA)},
  year      = {2016},
  pages     = {593--607},
  publisher = {Springer},
  series    = {LNCS},
  volume    = {10043}
}
```


Friedhelm Meyer auf der Heide, Peter Sanders, Nodari Sitchinava:

**Introduction to the Special Issue on SPAA 2014.** In *Transactions on Parallel Computing (TOPC)*, vol. 3, no. 1, p. 1. ACM (2016).

```bibtex
@article{MPS2016,
  author    = {Friedhelm Meyer auf der Heide and Peter Sanders and Nodari Sitchinava},
  title     = {Introduction to the Special Issue on SPAA 2014},
  journal   = {Transactions on Parallel Computing (TOPC)},
  year      = {2016},
  volume    = {3},
  number    = {1},
  pages     = {1},
  publisher = {ACM}
}
```


Andreas Cord-Landwehr, Matthias Fischer, Daniel Jung, Friedhelm Meyer auf der Heide:

**Asymptotically Optimal Gathering on a Grid.** In Proceedings of the 28th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA). ACM, pp. 301-312 (2016).

In this paper, we solve the local gathering problem of a swarm of n indistinguishable, point-shaped robots on a two-dimensional grid in asymptotically optimal time O(n) in the fully synchronous FSYNC time model. Given an arbitrarily distributed (yet connected) swarm of robots, the gathering problem on the grid is to locate all robots within a 2 x 2-sized area that is not known beforehand. Two robots are connected if they are vertical or horizontal neighbors on the grid. The locality constraint means that no global control, no compass, no global communication and only local vision is available; hence, a robot can see its grid neighbors only up to a constant L1-distance, which also limits its movements. A robot can move to one of its eight neighboring grid cells, and if two or more robots move to the same location, they merge into a single robot. The locality constraint is the key challenge here, since robot movements must not harm the (only globally checkable) swarm connectivity. For solving the gathering problem, we provide a synchronous algorithm, executed by every robot, which ensures that robots merge without breaking the swarm connectivity. In our model, robots can obtain a special state, which marks such a robot as performing specific connectivity-preserving movements in order to allow later merge operations of the swarm. Compared to the grid, for gathering in the Euclidean plane in the same robot and time model, the best known upper bound is O(n²).
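The merge rule of the model (robots that move onto the same cell become one robot) can be illustrated with a deliberately simplified simulation. In this toy sketch every robot knows a global target cell, which ignores exactly the locality and connectivity constraints that make the paper's O(n)-time algorithm nontrivial; all names are our own.

```python
def step_toward(pos, target):
    """Move one grid cell toward target, using one of the eight
    neighboring cells as in the model."""
    x, y = pos
    tx, ty = target
    dx = (tx > x) - (tx < x)  # -1, 0, or +1 per axis
    dy = (ty > y) - (ty < y)
    return (x + dx, y + dy)

def gather(robots, target=(0, 0)):
    """Toy FSYNC rounds: all robots move synchronously each round;
    robots landing on the same cell merge (the set keeps one copy).
    Returns the number of rounds until all robots occupy the target."""
    robots = set(robots)
    rounds = 0
    while robots != {target}:
        robots = {step_toward(p, target) for p in robots}
        rounds += 1
    return rounds
```

With diagonal moves, this global-knowledge toy finishes in a number of rounds equal to the largest Chebyshev distance to the target; the paper's contribution is achieving O(n) rounds with only constant-range vision and without ever disconnecting the swarm.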

```bibtex
@inproceedings{CLFJMadH16,
  author    = {Andreas Cord-Landwehr and Matthias Fischer and Daniel Jung and Friedhelm Meyer auf der Heide},
  title     = {Asymptotically Optimal Gathering on a Grid},
  booktitle = {Proceedings of the 28th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA)},
  year      = {2016},
  pages     = {301--312},
  publisher = {ACM}
}
```


**2015** (7)

Shouwei Li, Alexander Mäcker, Christine Markarian, Friedhelm Meyer auf der Heide, Sören Riechers:

**Towards Flexible Demands in Online Leasing Problems.** In Proceedings of the 21st Annual International Computing and Combinatorics Conference (COCOON). Springer, Lecture Notes in Computer Science, vol. 9198, pp. 277-288 (2015).

We consider online leasing problems in which demands arrive over time and need to be served by leasing resources. We introduce a new model for these problems in which a resource can be leased for K different durations, each incurring a different cost (longer leases cost less per time unit). Each demand i can be served anytime between its arrival a_i and its deadline a_i + d_i by a leased resource. The objective is to meet all deadlines while minimizing the total leasing costs. This model is a natural generalization of Meyerson's Parking Permit Problem (FOCS 2005), in which d_i = 0 for all i. We propose an online algorithm that is Θ(K + d_max/l_min)-competitive, where d_max and l_min denote the largest d_i and the shortest available lease length, respectively. We also extend the SetCoverLeasing problem by deadlines and give a competitive online algorithm which also improves on existing solutions for the original SetCoverLeasing problem.

```bibtex
@inproceedings{geilerscheiss,
  author    = {Shouwei Li and Alexander M{\"a}cker and Christine Markarian and Friedhelm Meyer auf der Heide and S{\"o}ren Riechers},
  title     = {Towards Flexible Demands in Online Leasing Problems},
  booktitle = {Proceedings of the 21st Annual International Computing and Combinatorics Conference (COCOON)},
  year      = {2015},
  pages     = {277--288},
  publisher = {Springer},
  series    = {Lecture Notes in Computer Science},
  volume    = {9198}
}
```

[DOI]

Christian Scheideler, Alexander Setzer, Thim Strothmann:

In Proceedings of the 19th International Conference on Principles of Distributed Systems (OPODIS). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, Leibniz International Proceedings in Informatics (LIPIcs)

[Show Abstract]

**Towards Establishing Monotonic Searchability in Self-Stabilizing Data Structures**In Proceedings of the 19th International Conference on Principles of Distributed Systems (OPODIS). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, Leibniz International Proceedings in Informatics (LIPIcs)

**(2015)**[Show Abstract]

Distributed applications are commonly based on overlay networks interconnecting their sites so that they can exchange information. For these overlay networks to preserve their functionality, they should be able to recover from various problems like membership changes or faults. Various self-stabilizing overlay networks have already been proposed in recent years, which have the advantage of being able to recover from any illegal state, but none of these networks can give any guarantees on its functionality while the recovery process is going on. We initiate research on overlay networks that are not only self-stabilizing but that also ensure that searchability is maintained while the recovery process is going on, as long as there are no corrupted messages in the system. More precisely, once a search message from node u to another node v is successfully delivered, all future search messages from u to v succeed as well. We call this property monotonic searchability. We show that in general it is impossible to provide monotonic searchability if corrupted messages are present in the system, which justifies the restriction to system states without corrupted messages. Furthermore, we provide a self-stabilizing protocol for the line for which we can also show monotonic searchability. It turns out that even for the line it is non-trivial to achieve this property. Additionally, we extend our protocol to deal with node departures in terms of the Finite Departure Problem of Foreback et al. (SSS 2014). This makes our protocol even capable of handling node dynamics.
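The monotonic-searchability property itself is easy to state as a check over a chronological log of search outcomes; the following sketch is a direct transcription of the definition, not part of the paper's protocol:

```python
# Monotonic searchability as a predicate on a search log (definition only;
# the paper's contribution is a protocol that guarantees this property).

def monotonic(searches):
    """searches: list of (u, v, succeeded) triples in chronological order.
    Once a search from u to v succeeds, all later u->v searches must succeed."""
    succeeded_once = set()
    for u, v, ok in searches:
        if (u, v) in succeeded_once and not ok:
            return False            # a later u->v search failed: violation
        if ok:
            succeeded_once.add((u, v))
    return True

print(monotonic([("a", "b", False), ("a", "b", True), ("a", "b", True)]))  # True
print(monotonic([("a", "b", True), ("a", "b", False)]))                    # False
```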

[Show BibTeX] @inproceedings{opodis15sss,

author = {Christian Scheideler AND Alexander Setzer AND Thim Strothmann},

title = {Towards Establishing Monotonic Searchability in Self-Stabilizing Data Structures},

booktitle = {Proceedings of the 19th International Conference on Principles of Distributed Systems (OPODIS)},

year = {2015},

publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},

abstract = {Distributed applications are commonly based on overlay networks interconnecting their sites so that they can exchange information. For these overlay networks to preserve their functionality, they should be able to recover from various problems like membership changes or faults. Various self-stabilizing overlay networks have already been proposed in recent years, which have the advantage of being able to recover from any illegal state, but none of these networks can give any guarantees on its functionality while the recovery process is going on. We initiate research on overlay networks that are not only self-stabilizing but that also ensure that searchability is maintained while the recovery process is going on, as long as there are no corrupted messages in the system. More precisely, once a search message from node u to another node v is successfully delivered, all future search messages from u to v succeed as well. We call this property monotonic searchability. We show that in general it is impossible to provide monotonic searchability if corrupted messages are present in the system, which justifies the restriction to system states without corrupted messages. Furthermore, we provide a self-stabilizing protocol for the line for which we can also show monotonic searchability. It turns out that even for the line it is non-trivial to achieve this property. Additionally, we extend our protocol to deal with node departures in terms of the Finite Departure Problem of Foreback et al. (SSS 2014). This makes our protocol even capable of handling node dynamics.},

series = {Leibniz International Proceedings in Informatics (LIPIcs)}

}

[DOI]

Andreas Koutsopoulos, Christian Scheideler, Thim Strothmann:

In Proceedings of the 17th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS). Springer, Lecture Notes in Computer Science, vol. 9212, pp. 201-216

[Show Abstract]

**Towards a Universal Approach for the Finite Departure Problem in Overlay Networks**In Proceedings of the 17th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS). Springer, Lecture Notes in Computer Science, vol. 9212, pp. 201-216

**(2015)**[Show Abstract]

A fundamental problem for overlay networks is to safely exclude leaving nodes, i.e., the nodes requesting to leave the overlay network are excluded from it without affecting its connectivity. There are a number of studies for safe node exclusion if the overlay is in a well-defined state, but almost no formal results are known for the case in which the overlay network is in an arbitrary initial state, i.e., when looking for a self-stabilizing solution for excluding leaving nodes. We study this problem in two variants: the Finite Departure Problem (FDP) and the Finite Sleep Problem (FSP). In the FDP the leaving nodes have to irrevocably decide when it is safe to leave the network, whereas in the FSP, this leaving decision does not have to be final: the nodes may resume computation when woken up by an incoming message. We are the first to present a self-stabilizing protocol for the FDP and the FSP that can be combined with a large class of overlay maintenance protocols so that these are then guaranteed to safely exclude leaving nodes from the system from any initial state while operating as specified for the staying nodes. In order to formally define the properties these overlay maintenance protocols have to satisfy, we identify four basic primitives for manipulating edges in an overlay network that might be of independent interest.

[Show BibTeX] @inproceedings{universal-departure-sss,

author = {Andreas Koutsopoulos AND Christian Scheideler AND Thim Strothmann},

title = {Towards a Universal Approach for the Finite Departure Problem in Overlay Networks},

booktitle = {Proceedings of the 17th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS)},

year = {2015},

pages = {201--216},

publisher = {Springer},

abstract = {A fundamental problem for overlay networks is to safely exclude leaving nodes, i.e., the nodes requesting to leave the overlay network are excluded from it without affecting its connectivity. There are a number of studies for safe node exclusion if the overlay is in a well-defined state, but almost no formal results are known for the case in which the overlay network is in an arbitrary initial state, i.e., when looking for a self-stabilizing solution for excluding leaving nodes. We study this problem in two variants: the Finite Departure Problem (FDP) and the Finite Sleep Problem (FSP). In the FDP the leaving nodes have to irrevocably decide when it is safe to leave the network, whereas in the FSP, this leaving decision does not have to be final: the nodes may resume computation when woken up by an incoming message. We are the first to present a self-stabilizing protocol for the FDP and the FSP that can be combined with a large class of overlay maintenance protocols so that these are then guaranteed to safely exclude leaving nodes from the system from any initial state while operating as specified for the staying nodes. In order to formally define the properties these overlay maintenance protocols have to satisfy, we identify four basic primitives for manipulating edges in an overlay network that might be of independent interest.},

series = {Lecture Notes in Computer Science}

}

[DOI]

Thim Strothmann:

In Proceedings of the 9th International Workshop on Algorithms and Computation (WALCOM). Springer, LNCS, vol. 8973, pp. 175-186

[Show Abstract]

**The impact of communication patterns on distributed locally self-adjusting binary search trees**In Proceedings of the 9th International Workshop on Algorithms and Computation (WALCOM). Springer, LNCS, vol. 8973, pp. 175-186

**(2015)**[Show Abstract]

This paper introduces the problem of communication pattern adaptation for a distributed self-adjusting binary search tree. We propose a simple local algorithm, which is closely related to the nearly thirty-year-old idea of splay trees, and evaluate its adaptation performance in the distributed scenario when different communication patterns are provided.

To do so, the process of self-adjustment is modeled similarly to a basic network creation game, in which the nodes want to communicate with only a certain subset of all nodes. We show that, in general, the game (i.e., the process of local adjustments) does not converge, and convergence is related to certain structures of the communication interests, which we call conflicts.

We classify conflicts and show that for two communication scenarios in which convergence is guaranteed, the self-adjusting tree performs well.

Furthermore, we investigate the different classes of conflicts separately and show that, for a certain class of conflicts, the performance of the tree network is asymptotically as good as the performance for converging instances. However, for the other conflict classes, a distributed self-adjusting binary search tree adapts poorly.
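The local adjustment primitive underlying such splay-style trees is a single rotation that lifts a node one level while preserving search-tree order; a minimal sequential sketch (not the paper's distributed protocol):

```python
# A minimal single "zig"/"zag" rotation, the kind of local adjustment that
# splay-style self-adjusting trees build on (sequential illustration only).

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_up(parent, child):
    """Rotate child above parent, preserving BST order; returns new subtree root."""
    if parent.left is child:        # right rotation (zig)
        parent.left = child.right
        child.right = parent
    else:                           # left rotation (zag)
        parent.right = child.left
        child.left = parent
    return child

def inorder(n):
    return inorder(n.left) + [n.key] + inorder(n.right) if n else []

t = Node(2, Node(1), Node(3))
t = rotate_up(t, t.right)           # node 3 moves to the root
print(t.key, inorder(t))            # 3 [1, 2, 3]
```

Repeating such rotations toward frequently requested nodes is what lets the tree adapt to a communication pattern using local information only.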

[Show BibTeX] @inproceedings{StrothmannWalcom2015,

author = {Thim Strothmann},

title = {The impact of communication patterns on distributed locally self-adjusting binary search trees},

booktitle = {Proceedings of the 9th International Workshop on Algorithms and Computation (WALCOM)},

year = {2015},

pages = {175--186},

publisher = {Springer},

abstract = {This paper introduces the problem of communication pattern adaptation for a distributed self-adjusting binary search tree. We propose a simple local algorithm, which is closely related to the nearly thirty-year-old idea of splay trees, and evaluate its adaptation performance in the distributed scenario when different communication patterns are provided. To do so, the process of self-adjustment is modeled similarly to a basic network creation game, in which the nodes want to communicate with only a certain subset of all nodes. We show that, in general, the game (i.e., the process of local adjustments) does not converge, and convergence is related to certain structures of the communication interests, which we call conflicts. We classify conflicts and show that for two communication scenarios in which convergence is guaranteed, the self-adjusting tree performs well. Furthermore, we investigate the different classes of conflicts separately and show that, for a certain class of conflicts, the performance of the tree network is asymptotically as good as the performance for converging instances. However, for the other conflict classes, a distributed self-adjusting binary search tree adapts poorly.},

series = {LNCS}

}

[DOI]

Christine Markarian, Friedhelm Meyer auf der Heide:

In Proceedings of the 2015 ACM Symposium on Principles of Distributed Computing (PODC). ACM, pp. 343-344

[Show Abstract]

**Online Resource Leasing**In Proceedings of the 2015 ACM Symposium on Principles of Distributed Computing (PODC). ACM, pp. 343-344

**(2015)**[Show Abstract]

Many markets have seen a shift away from the idea of buying and toward leasing instead. Arguably, the latter has been the major catalyst for their success. Ten years ago, research realized this shift and initiated the study of "online leasing problems" by introducing leasing to online optimization problems. Resources required to provide a service in an "online leasing problem" are no longer bought but leased for different durations. In this paper, we provide an overview of results that contribute to the understanding of "online resource leasing problems".

[Show BibTeX] @inproceedings{MM-PODC2016,

author = {Christine Markarian AND Friedhelm Meyer auf der Heide},

title = {Online Resource Leasing},

booktitle = {Proceedings of the 2015 ACM Symposium on Principles of Distributed Computing (PODC)},

year = {2015},

pages = {343--344},

publisher = {ACM},

month = {July, 21-23},

abstract = {Many markets have seen a shift away from the idea of buying and toward leasing instead. Arguably, the latter has been the major catalyst for their success. Ten years ago, research realized this shift and initiated the study of "online leasing problems" by introducing leasing to online optimization problems. Resources required to provide a service in an "online leasing problem" are no longer bought but leased for different durations. In this paper, we provide an overview of results that contribute to the understanding of "online resource leasing problems".}

}

[DOI]

Andreas Cord-Landwehr, Pascal Lenzner:

In Proceedings of the 40th Conference on Mathematical Foundations of Computer Science (MFCS). Springer, LNCS, vol. 9235, pp. 248-260

[Show Abstract]

**Network Creation Games: Think Global - Act Local**In Proceedings of the 40th Conference on Mathematical Foundations of Computer Science (MFCS). Springer, LNCS, vol. 9235, pp. 248-260

**(2015)**[Show Abstract]

We investigate a non-cooperative game-theoretic model for the formation of communication networks by selfish agents. Each agent aims for a central position at minimum cost for creating edges. In particular, the general model (Fabrikant et al., PODC'03) became popular for studying the structure of the Internet or social networks. Despite its significance, locality in this game was first studied only recently (Bilò et al., SPAA'14), where a worst case locality model was presented, which came with a high efficiency loss in terms of quality of equilibria. Our main contribution is a new and more optimistic view on locality: agents are limited in their knowledge and actions to their local view ranges, but can probe different strategies and finally choose the best. We study the influence of our locality notion on the hardness of computing best responses, convergence to equilibria, and quality of equilibria. Moreover, we compare the strength of local versus non-local strategy changes. Our results address the gap between the original model and the worst case locality variant. On the bright side, our efficiency results are in line with observations from the original model, yet we have a non-constant lower bound on the Price of Anarchy.
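A rough sense of "probing strategies within the local view" can be given by a toy best-response search in the classic sum-distance model; the edge price alpha and view radius k below are hypothetical simplifications of the paper's setting, not its exact model:

```python
# Toy local best-response probe in a sum-distance network creation game
# (hypothetical simplification: an agent pays alpha per owned edge plus the
# sum of its hop distances, and may only swap an owned edge to a target
# within its view radius k).
from collections import deque

def bfs_dists(adj, s):
    d, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def build_adj(n, edges):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def cost(n, edges, agent, alpha):
    bought = sum(1 for u, _ in edges if u == agent)
    return alpha * bought + sum(bfs_dists(build_adj(n, edges), agent).values())

def local_best_response(n, edges, agent, alpha, k):
    """Probe swapping each agent-owned edge to any target within radius k."""
    view = {w for w, d in bfs_dists(build_adj(n, edges), agent).items() if 0 < d <= k}
    best, best_c = edges, cost(n, edges, agent, alpha)
    for i, (u, v) in enumerate(edges):
        if u != agent:
            continue
        for w in view - {v}:
            cand = edges[:i] + [(agent, w)] + edges[i + 1:]
            c = cost(n, cand, agent, alpha)
            if c < best_c:
                best, best_c = cand, c
    return best, best_c

# On the path 0-1-2-3, agent 0 improves by swinging its edge from 1 to 2.
print(local_best_response(4, [(0, 1), (1, 2), (2, 3)], agent=0, alpha=1, k=2))
```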

[Show BibTeX] @inproceedings{mfcs2015ncg,

author = {Andreas Cord-Landwehr AND Pascal Lenzner},

title = {Network Creation Games: Think Global - Act Local},

booktitle = {Proceedings of the 40th Conference on Mathematical Foundations of Computer Science (MFCS)},

year = {2015},

pages = {248--260},

publisher = {Springer},

abstract = {We investigate a non-cooperative game-theoretic model for the formation of communication networks by selfish agents. Each agent aims for a central position at minimum cost for creating edges. In particular, the general model (Fabrikant et al., PODC'03) became popular for studying the structure of the Internet or social networks. Despite its significance, locality in this game was first studied only recently (Bilò et al., SPAA'14), where a worst case locality model was presented, which came with a high efficiency loss in terms of quality of equilibria. Our main contribution is a new and more optimistic view on locality: agents are limited in their knowledge and actions to their local view ranges, but can probe different strategies and finally choose the best. We study the influence of our locality notion on the hardness of computing best responses, convergence to equilibria, and quality of equilibria. Moreover, we compare the strength of local versus non-local strategy changes. Our results address the gap between the original model and the worst case locality variant. On the bright side, our efficiency results are in line with observations from the original model, yet we have a non-constant lower bound on the Price of Anarchy.},

series = {LNCS}

}

[DOI]

Sebastian Kniesburges, Andreas Koutsopoulos, Christian Scheideler:

In

[Show Abstract]

**A deterministic worst-case message complexity optimal solution for resource discovery**In

*Theoretical Computer Science*, vol. 584, pp. 67-79. Elsevier**(2015)**[Show Abstract]

We consider the problem of resource discovery in distributed systems. In particular, we give an algorithm such that each node in a network discovers the address of any other node in the network. We model the knowledge of the nodes as a virtual overlay network given by a directed graph such that complete knowledge of all nodes corresponds to a complete graph in the overlay network. Although there are several solutions for resource discovery, our solution is the first that achieves worst-case optimal work for each node, i.e. the number of addresses (O(n)) or bits (O(n log n)) a node receives or sends coincides with the lower bound, while ensuring a runtime that is linear (O(n)) in the number of rounds.
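For intuition about the setting only (not the paper's work-optimal algorithm), a naive round-based gossip in which every node floods everything it knows also completes the knowledge graph, but at far higher message cost per node:

```python
# Naive synchronous gossip on a knowledge graph (illustration of the resource
# discovery setting; the paper's algorithm achieves completion with
# worst-case optimal O(n) addresses per node, which this flooding does not).

def gossip_rounds(knows):
    """knows: dict node -> set of known nodes (each node knows itself).
    Returns the number of rounds until every node knows every node."""
    n = len(knows)
    rounds = 0
    while any(len(s) < n for s in knows.values()):
        new = {u: set(s) for u, s in knows.items()}
        for u, s in knows.items():
            for v in s:            # u sends everything it knows to v
                new[v] |= s
        knows = new
        rounds += 1
    return rounds

# A directed knowledge line 0 -> 1 -> 2 -> 3 completes in a few rounds.
line = {0: {0, 1}, 1: {1, 2}, 2: {2, 3}, 3: {3}}
print(gossip_rounds(line))
```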

[Show BibTeX] @article{KKS15-TOCS,

author = {Sebastian Kniesburges AND Andreas Koutsopoulos AND Christian Scheideler},

title = {A deterministic worst-case message complexity optimal solution for resource discovery},

journal = {Theoretical Computer Science},

year = {2015},

volume = {584},

pages = {67--79},

abstract = {We consider the problem of resource discovery in distributed systems. In particular, we give an algorithm such that each node in a network discovers the address of any other node in the network. We model the knowledge of the nodes as a virtual overlay network given by a directed graph such that complete knowledge of all nodes corresponds to a complete graph in the overlay network. Although there are several solutions for resource discovery, our solution is the first that achieves worst-case optimal work for each node, i.e. the number of addresses (O(n)) or bits (O(n log n)) a node receives or sends coincides with the lower bound, while ensuring a runtime that is linear (O(n)) in the number of rounds.}

}

[DOI]

**2014** (13)

Laszlo Blazovics, Tamas Lukovszki, Bertalan Forstner:

In

[Show Abstract]

**Surrounding robots - A discrete localized solution for the intruder problem**In

*Journal of Advanced Computational Intelligence and Intelligent Informatics*, vol. 18, no. 3, pp. 315-319. Fuji Technology Press**(2014)**[Show Abstract]

Decentralized algorithms are often used in the cooperative robotics field, especially by large swarm systems. We present a distributed algorithm for a problem in which a group of autonomous mobile robots must surround a given target. These robots are oblivious, i.e., they have no memory of the past. They use only local sensing and need no dedicated communication among themselves. We introduce, then solve the problem in which the group of autonomous mobile robots must surround a given target – we call it the “discrete multiorbit target surrounding problem” (DMTSP). We evaluate our solution using simulation and prove that our solution invariably ensures that robots enclose the target in finite time.

[Show BibTeX] @article{BLF2014,

author = {Laszlo Blazovics AND Tamas Lukovszki AND Bertalan Forstner},

title = {Surrounding robots -- A discrete localized solution for the intruder problem},

journal = {Journal of Advanced Computational Intelligence and Intelligent Informatics},

year = {2014},

volume = {18},

number = {3},

pages = {315--319},

abstract = {Decentralized algorithms are often used in the cooperative robotics field, especially by large swarm systems. We present a distributed algorithm for a problem in which a group of autonomous mobile robots must surround a given target. These robots are oblivious, i.e., they have no memory of the past. They use only local sensing and need no dedicated communication among themselves. We introduce, then solve the problem in which the group of autonomous mobile robots must surround a given target – we call it the “discrete multiorbit target surrounding problem” (DMTSP). We evaluate our solution using simulation and prove that our solution invariably ensures that robots enclose the target in finite time. }

}

[DOI]

Jens Janiuk, Alexander Mäcker, Kalman Graffi:


**Secure Distributed Data Structures for Peer-to-Peer-based Social Networks**In Proceedings of the International Conference on Collaboration Technologies and Systems (CTS). IEEE Computer Society, pp. 396-405

**(2014)**

Online social networks are attracting billions of users nowadays, both on a global scale as well as in social enterprise networks. Using distributed hash tables and peer-to-peer technology allows online social networks to be operated securely and efficiently only by using the resources of the user devices, thus alleviating censorship or data misuse by a single network operator. In this paper, we address the challenges that arise in implementing reliable and convenient-to-use distributed data structures, such as lists or sets, in such a distributed hash-table-based online social network. We present a secure, distributed list data structure that manages the list entries in several buckets in the distributed hash table. The list entries are authenticated, integrity is maintained, and access control for single users and also groups is integrated. The approach for secure distributed lists is also applied to prefix trees and sets, and implemented and evaluated in a peer-to-peer framework for social networks. Evaluation shows that the distributed data structure is convenient and efficient to use and that the requirements on security hold.

@inproceedings{JaniukMaeckerGraffi14,

author = {Jens Janiuk AND Alexander M{\"a}cker AND Kalman Graffi},

title = {Secure Distributed Data Structures for Peer-to-Peer-based Social Networks},

booktitle = {Proceedings of the International Conference on Collaboration Technologies and Systems (CTS)},

year = {2014},

pages = {396-405},

publisher = {IEEE Computer Society},

abstract = {Online social networks are attracting billions of users nowadays, both on a global scale as well as in social enterprise networks. Using distributed hash tables and peer-to-peer technology allows online social networks to be operated securely and efficiently only by using the resources of the user devices, thus alleviating censorship or data misuse by a single network operator. In this paper, we address the challenges that arise in implementing reliable and convenient-to-use distributed data structures, such as lists or sets, in such a distributed hash-table-based online social network. We present a secure, distributed list data structure that manages the list entries in several buckets in the distributed hash table. The list entries are authenticated, integrity is maintained, and access control for single users and also groups is integrated. The approach for secure distributed lists is also applied to prefix trees and sets, and implemented and evaluated in a peer-to-peer framework for social networks. Evaluation shows that the distributed data structure is convenient and efficient to use and that the requirements on security hold.}

}


Sebastian Kniesburges, Andreas Koutsopoulos, Christian Scheideler:


**Re-Chord: A Self-stabilizing Chord Overlay Network**In

*Theory of Computing Systems*, vol. 55, no. 3, pp. 591-612. Springer**(2014)**

The Chord peer-to-peer system is considered, together with CAN, Tapestry and Pastry, as one of the pioneering works on peer-to-peer distributed hash tables (DHT) that inspired a large volume of papers and projects on DHTs as well as peer-to-peer systems in general. Chord, in particular, has been studied thoroughly, and many variants of Chord have been presented that optimize various criteria. Also, several implementations of Chord are available on various platforms. Though Chord is known to be very efficient and scalable and it can handle churn quite well, no protocol is known yet that guarantees that Chord is self-stabilizing, i.e., the Chord network can be recovered from any initial state in which the network is still weakly connected. This is not too surprising since it is known that the Chord network is not locally checkable for its current topology. We present a slight extension of the Chord network, called Re-Chord (reactive Chord), that turns out to be locally checkable, and we present a self-stabilizing distributed protocol for it that can recover the Re-Chord network from any initial state, in which the n peers are weakly connected, in O(n log n) communication rounds. We also show that our protocol allows a new peer to join or an old peer to leave an already stable Re-Chord network so that within O(log^2 n) communication rounds the Re-Chord network is stable again.

@article{RECHORDjournal,

author = {Sebastian Kniesburges AND Andreas Koutsopoulos AND Christian Scheideler},

title = {Re-Chord: A Self-stabilizing Chord Overlay Network},

journal = {Theory of Computing Systems},

year = {2014},

volume = {55},

number = {3},

pages = {591-612},

abstract = {The Chord peer-to-peer system is considered, together with CAN, Tapestry and Pastry, as one of the pioneering works on peer-to-peer distributed hash tables (DHT) that inspired a large volume of papers and projects on DHTs as well as peer-to-peer systems in general. Chord, in particular, has been studied thoroughly, and many variants of Chord have been presented that optimize various criteria. Also, several implementations of Chord are available on various platforms. Though Chord is known to be very efficient and scalable and it can handle churn quite well, no protocol is known yet that guarantees that Chord is self-stabilizing, i.e., the Chord network can be recovered from any initial state in which the network is still weakly connected. This is not too surprising since it is known that the Chord network is not locally checkable for its current topology. We present a slight extension of the Chord network, called Re-Chord (reactive Chord), that turns out to be locally checkable, and we present a self-stabilizing distributed protocol for it that can recover the Re-Chord network from any initial state, in which the n peers are weakly connected, in O(n log n) communication rounds. We also show that our protocol allows a new peer to join or an old peer to leave an already stable Re-Chord network so that within O(log^2 n) communication rounds the Re-Chord network is stable again.}

}


Sebastian Abshoff, Christine Markarian, Friedhelm Meyer auf der Heide:


**Randomized Online Algorithms for Set Cover Leasing Problems**In Proceedings of the 8th Annual International Conference on Combinatorial Optimization and Applications (COCOA). Springer, LNCS, vol. 8881, pp. 25-34

**(2014)**

In the leasing variant of Set Cover presented by Anthony et al. [1], elements U arrive over time and must be covered by sets from a family F of subsets of U. Each set can be leased for K different periods of time. Let |U| = n and |F| = m. Leasing a set S for a period k incurs a cost c_S^k and allows S to cover its elements for the next l_k time steps. The objective is to minimize the total cost of the sets leased, such that elements arriving at any time t are covered by sets which contain them and are leased during time t. Anthony et al. [1] gave an optimal O(log n)-approximation for the problem in the offline setting, unless P = NP [22]. In this paper, we give randomized algorithms for variants of Set Cover Leasing in the online setting, including a generalization of Online Set Cover with Repetitions presented by Alon et al. [2], where elements appear multiple times and must be covered by a different set at each arrival. Our results improve the O(log^2(mn)) competitive factor of Online Set Cover with Repetitions [2] to O(log d log(dn)) = O(log m log(mn)), where d is the maximum number of sets an element belongs to.


@inproceedings{AMM2014,

author = {Sebastian Abshoff AND Christine Markarian AND Friedhelm Meyer auf der Heide},

title = {Randomized Online Algorithms for Set Cover Leasing Problems},

booktitle = {Proceedings of the 8th Annual International Conference on Combinatorial Optimization and Applications (COCOA)},

year = {2014},

pages = {25-34},

publisher = {Springer},

abstract = {In the leasing variant of Set Cover presented by Anthony et al. [1], elements U arrive over time and must be covered by sets from a family F of subsets of U. Each set can be leased for K different periods of time. Let |U| = n and |F| = m. Leasing a set S for a period k incurs a cost c_S^k and allows S to cover its elements for the next l_k time steps. The objective is to minimize the total cost of the sets leased, such that elements arriving at any time t are covered by sets which contain them and are leased during time t. Anthony et al. [1] gave an optimal O(log n)-approximation for the problem in the offline setting, unless P = NP [22]. In this paper, we give randomized algorithms for variants of Set Cover Leasing in the online setting, including a generalization of Online Set Cover with Repetitions presented by Alon et al. [2], where elements appear multiple times and must be covered by a different set at each arrival. Our results improve the O(log^2(mn)) competitive factor of Online Set Cover with Repetitions [2] to O(log d log(dn)) = O(log m log(mn)), where d is the maximum number of sets an element belongs to.},

series = {LNCS}

}


Andreas Cord-Landwehr, Alexander Mäcker, Friedhelm Meyer auf der Heide:


**Quality of Service in Network Creation Games**In Proceedings of the 10th International Conference on Web and Internet Economics (WINE). Springer International Publishing Switzerland, LNCS, vol. 8877, pp. 423-428

**(2014)**

Network creation games model the creation and usage costs of networks formed by n selfish nodes. Each node v can buy a set of edges, each for a fixed price α > 0. Its goal is to minimize its private costs, i.e., the sum (SUM-game, Fabrikant et al., PODC 2003) or maximum (MAX-game, Demaine et al., PODC 2007) of distances from v to all other nodes plus the prices of the bought edges. The above papers show the existence of Nash equilibria as well as upper and lower bounds for the prices of anarchy and stability. In several subsequent papers, these bounds were improved for a wide range of prices α. In this paper, we extend these models by incorporating quality-of-service aspects: an edge can no longer only be bought at a fixed quality (edge length one) for a fixed price α. Instead, we assume that quality levels (i.e., edge lengths) vary within a fixed interval [β̌, β̂], 0 < β̌ ≤ β̂. A node now cannot only choose which edge to buy, but can also choose its quality x, for the price p(x), for a given price function p. For both games and all price functions, we show that Nash equilibria exist and that the price of stability is either constant or depends only on the interval size of available edge lengths. Our main results are bounds for the price of anarchy. In the case of the SUM-game, we show that they are tight if price functions decrease sufficiently fast.

@inproceedings{wine2014qosncg,

author = {Andreas Cord-Landwehr AND Alexander M{\"a}cker AND Friedhelm Meyer auf der Heide},

title = {Quality of Service in Network Creation Games},

booktitle = {Proceedings of the 10th International Conference on Web and Internet Economics (WINE)},

year = {2014},

pages = {423-428},

publisher = {Springer International Publishing Switzerland},

abstract = {Network creation games model the creation and usage costs of networks formed by n selfish nodes. Each node v can buy a set of edges, each for a fixed price α > 0. Its goal is to minimize its private costs, i.e., the sum (SUM-game, Fabrikant et al., PODC 2003) or maximum (MAX-game, Demaine et al., PODC 2007) of distances from v to all other nodes plus the prices of the bought edges. The above papers show the existence of Nash equilibria as well as upper and lower bounds for the prices of anarchy and stability. In several subsequent papers, these bounds were improved for a wide range of prices α. In this paper, we extend these models by incorporating quality-of-service aspects: an edge can no longer only be bought at a fixed quality (edge length one) for a fixed price α. Instead, we assume that quality levels (i.e., edge lengths) vary within a fixed interval [β̌, β̂], 0 < β̌ ≤ β̂. A node now cannot only choose which edge to buy, but can also choose its quality x, for the price p(x), for a given price function p. For both games and all price functions, we show that Nash equilibria exist and that the price of stability is either constant or depends only on the interval size of available edge lengths. Our main results are bounds for the price of anarchy. In the case of the SUM-game, we show that they are tight if price functions decrease sufficiently fast.},

series = {LNCS}

}


Dianne Foreback, Andreas Koutsopoulos, Mikhail Nesterenko, Christian Scheideler, Thim Strothmann:


**On Stabilizing Departures in Overlay Networks**In Proceedings of the 16th International Symposium on Stabilization, Safety, and Security of Distributed Systems. Springer, LNCS, vol. 8756, pp. 48-62

**(2014)**

A fundamental problem for peer-to-peer systems is to maintain connectivity while nodes are leaving, i.e., the nodes requesting to leave the peer-to-peer system are excluded from the overlay network without affecting its connectivity. There are a number of studies for safe node exclusion if the overlay is in a well-defined state initially. Surprisingly, the problem has not been formally studied yet for the case in which the overlay network is in an arbitrary initial state, i.e., when looking for a self-stabilizing solution for excluding leaving nodes. We study this problem in two variants: the Finite Departure Problem (FDP) and the Finite Sleep Problem (FSP). In the FDP the leaving nodes have to irrevocably decide when it is safe to leave the network, whereas in the FSP, this leaving decision does not have to be final: the nodes may resume computation if necessary. We show that there is no self-stabilizing distributed algorithm for the FDP, even in a synchronous message passing model. To allow a solution, we introduce an oracle called NIDEC and show that it is sufficient even for the asynchronous message passing model by proposing an algorithm that can solve the FDP using NIDEC. We also show that a solution to the FSP does not require an oracle.

@inproceedings{ForebackKNSS14,

author = {Dianne Foreback AND Andreas Koutsopoulos AND Mikhail Nesterenko AND Christian Scheideler AND Thim Strothmann},

title = {On Stabilizing Departures in Overlay Networks},

booktitle = {Proceedings of the 16th International Symposium on Stabilization, Safety, and Security of Distributed Systems},

year = {2014},

pages = {48--62},

publisher = {Springer},

abstract = {A fundamental problem for peer-to-peer systems is to maintain connectivity while nodes are leaving, i.e., the nodes requesting to leave the peer-to-peer system are excluded from the overlay network without affecting its connectivity. There are a number of studies for safe node exclusion if the overlay is in a well-defined state initially. Surprisingly, the problem has not been formally studied yet for the case in which the overlay network is in an arbitrary initial state, i.e., when looking for a self-stabilizing solution for excluding leaving nodes. We study this problem in two variants: the Finite Departure Problem (FDP) and the Finite Sleep Problem (FSP). In the FDP the leaving nodes have to irrevocably decide when it is safe to leave the network, whereas in the FSP, this leaving decision does not have to be final: the nodes may resume computation if necessary. We show that there is no self-stabilizing distributed algorithm for the FDP, even in a synchronous message passing model. To allow a solution, we introduce an oracle called NIDEC and show that it is sufficient even for the asynchronous message passing model by proposing an algorithm that can solve the FDP using NIDEC. We also show that a solution to the FSP does not require an oracle.},

series = {LNCS}

}


Sebastian Abshoff, Andreas Cord-Landwehr, Daniel Jung, Alexander Skopalik:


**Multilevel Network Games**In Proceedings of the 10th International Conference on Web and Internet Economics (WINE). Springer International Publishing Switzerland, LNCS, vol. 8877, pp. 435-440

**(2014)**

We consider a multilevel network game, where nodes can improve their communication costs by connecting to a high-speed network. The n nodes are connected by a static network and each node can decide individually to become a gateway to the high-speed network. The goal of a node v is to minimize its private costs, i.e., the sum (SUM-game) or maximum (MAX-game) of communication distances from v to all other nodes plus a fixed price α > 0 if it decides to be a gateway. Between gateways the communication distance is 0, and gateways also improve other nodes' distances by behaving as shortcuts. For the SUM-game, we show that for α ≤ n − 1, the price of anarchy is Θ(n/√α) and in this range equilibria always exist. In the range α ∈ (n−1, n(n−1)) the price of anarchy is Θ(√α), and for α ≥ n(n − 1) it is constant. For the MAX-game, we show that the price of anarchy is either Θ(1 + n/√α), for α ≥ 1, or else 1. Given a graph with girth of at least 4α, equilibria always exist. Concerning the dynamics, both games are not potential games. For the SUM-game, we even show that it is not weakly acyclic.


@inproceedings{ACJS14,

author = {Sebastian Abshoff AND Andreas Cord-Landwehr AND Daniel Jung AND Alexander Skopalik},

title = {Multilevel Network Games},

booktitle = {Proceedings of the 10th International Conference on Web and Internet Economics (WINE)},

year = {2014},

pages = {435-440},

publisher = {Springer International Publishing Switzerland},

abstract = {We consider a multilevel network game, where nodes can improve their communication costs by connecting to a high-speed network. The n nodes are connected by a static network and each node can decide individually to become a gateway to the high-speed network. The goal of a node v is to minimize its private costs, i.e., the sum (SUM-game) or maximum (MAX-game) of communication distances from v to all other nodes plus a fixed price α > 0 if it decides to be a gateway. Between gateways the communication distance is 0, and gateways also improve other nodes' distances by behaving as shortcuts. For the SUM-game, we show that for α ≤ n − 1, the price of anarchy is Θ(n/√α) and in this range equilibria always exist. In the range α ∈ (n−1, n(n−1)) the price of anarchy is Θ(√α), and for α ≥ n(n − 1) it is constant. For the MAX-game, we show that the price of anarchy is either Θ(1 + n/√α), for α ≥ 1, or else 1. Given a graph with girth of at least 4α, equilibria always exist. Concerning the dynamics, both games are not potential games. For the SUM-game, we even show that it is not weakly acyclic.},

series = {LNCS}

}


Christian Scheideler, Martina Eikel, Alexander Setzer:

**Minimum Linear Arrangement of Series-Parallel Graphs** In Proceedings of the 12th Workshop on Approximation and Online Algorithms (WAOA). Springer, LNCS, vol. 8952, pp. 168-180 **(2014)**

We present a factor $14D^2$ approximation algorithm for the minimum linear arrangement problem on series-parallel graphs, where $D$ is the maximum degree in the graph. Given a suitable decomposition of the graph, our algorithm runs in time $O(|E|)$ and is very easy to implement. Its divide-and-conquer approach allows for an effective parallelization. Note that a suitable decomposition can also be computed in time $O(|E|\log|E|)$ (or even $O(\log|E|\log^*|E|)$ on an EREW PRAM using $O(|E|)$ processors).

For the proof of the approximation ratio, we use a sophisticated charging method that uses techniques similar to amortized analysis in advanced data structures.

On general graphs, the minimum linear arrangement problem is known to be NP-hard. To the best of our knowledge, the minimum linear arrangement problem on series-parallel graphs has not been studied before.
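As a hedged aside (our own sketch, not the paper's algorithm): the objective being approximated assigns each vertex a distinct position on a line and charges every edge its stretch. A few lines of Python make the cost function concrete; the function name and example graph are ours.

```python
# Sketch of the minimum linear arrangement objective (illustration only):
# a layout maps each vertex to a distinct position 1..n, and the cost is
# the total stretch |layout[u] - layout[v]| summed over all edges.

def arrangement_cost(edges, layout):
    """Cost of a linear arrangement given an edge list and a position map."""
    return sum(abs(layout[u] - layout[v]) for u, v in edges)

path = [("a", "b"), ("b", "c"), ("c", "d")]
print(arrangement_cost(path, {"a": 1, "b": 2, "c": 3, "d": 4}))  # 3
print(arrangement_cost(path, {"a": 1, "b": 3, "c": 2, "d": 4}))  # 5
```

For the path graph, the identity layout is optimal with cost 3, while the interleaved layout stretches the edges to total cost 5.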

@inproceedings{waoa2014mla,

author = {Christian Scheideler AND Martina Eikel AND Alexander Setzer},

title = {Minimum Linear Arrangement of Series-Parallel Graphs},

booktitle = {Proceedings of the 12th Workshop on Approximation and Online Algorithms (WAOA)},

year = {2014},

pages = {168--180},

publisher = {Springer},

abstract = {We present a factor $14D^2$ approximation algorithm for the minimum linear arrangement problem on series-parallel graphs, where $D$ is the maximum degree in the graph. Given a suitable decomposition of the graph, our algorithm runs in time $O(|E|)$ and is very easy to implement. Its divide-and-conquer approach allows for an effective parallelization. Note that a suitable decomposition can also be computed in time $O(|E|\log{|E|})$ (or even $O(\log{|E|}\log^*{|E|})$ on an EREW PRAM using $O(|E|)$ processors). For the proof of the approximation ratio, we use a sophisticated charging method that uses techniques similar to amortized analysis in advanced data structures. On general graphs, the minimum linear arrangement problem is known to be NP-hard. To the best of our knowledge, the minimum linear arrangement problem on series-parallel graphs has not been studied before.},

series = {LNCS}

}

Matthias Feldotto, Christian Scheideler, Kalman Graffi:

**HSkip+: A Self-Stabilizing Overlay Network for Nodes with Heterogeneous Bandwidths** In Proceedings of the 14th IEEE International Conference on Peer-to-Peer Computing (P2P). IEEE, pp. 1-10 **(2014)**

In this paper we present and analyze HSkip+, a self-stabilizing overlay network for nodes with arbitrary heterogeneous bandwidths. HSkip+ has the same topology as the Skip+ graph proposed by Jacob et al. [PODC 2009] but its self-stabilization mechanism significantly outperforms the self-stabilization mechanism proposed for Skip+. Also, the nodes are now ordered according to their bandwidths and not according to their identifiers. Various other solutions have already been proposed for overlay networks with heterogeneous bandwidths, but they are not self-stabilizing. In addition to HSkip+ being self-stabilizing, its performance is on par with the best previous bounds on the time and work for joining or leaving a network of peers of logarithmic diameter and degree and arbitrary bandwidths. Also, the dilation and congestion for routing messages is on par with the best previous bounds for such networks, so that HSkip+ combines the advantages of both worlds. Our theoretical investigations are backed by simulations demonstrating that HSkip+ is indeed performing much better than Skip+ and working correctly under high churn rates.

@inproceedings{FSG2014P2P,

author = {Matthias Feldotto AND Christian Scheideler AND Kalman Graffi},

title = {HSkip+: A Self-Stabilizing Overlay Network for Nodes with Heterogeneous Bandwidths},

booktitle = {Proceedings of the 14th IEEE International Conference on Peer-to-Peer Computing (P2P)},

year = {2014},

pages = {1-10},

publisher = {IEEE},

abstract = {In this paper we present and analyze HSkip+, a self-stabilizing overlay network for nodes with arbitrary heterogeneous bandwidths. HSkip+ has the same topology as the Skip+ graph proposed by Jacob et al. [PODC 2009] but its self-stabilization mechanism significantly outperforms the self-stabilization mechanism proposed for Skip+. Also, the nodes are now ordered according to their bandwidths and not according to their identifiers. Various other solutions have already been proposed for overlay networks with heterogeneous bandwidths, but they are not self-stabilizing. In addition to HSkip+ being self-stabilizing, its performance is on par with the best previous bounds on the time and work for joining or leaving a network of peers of logarithmic diameter and degree and arbitrary bandwidths. Also, the dilation and congestion for routing messages is on par with the best previous bounds for such networks, so that HSkip+ combines the advantages of both worlds. Our theoretical investigations are backed by simulations demonstrating that HSkip+ is indeed performing much better than Skip+ and working correctly under high churn rates.}

}

Sebastian Abshoff, Andreas Cord Landwehr, Daniel Jung, Alexander Skopalik:

**Brief Announcement: A Model for Multilevel Network Games** In Ron Lavi (ed.): Proceedings of the 7th International Symposium on Algorithmic Game Theory (SAGT). Springer, LNCS, vol. 8768, pp. 294 **(2014)**

Today's networks, like the Internet, do not consist of one but a mixture of several interconnected networks. Each has individual qualities and hence the performance of a network node results from the networks' interplay.

We introduce a new game theoretic model capturing the interplay between a high-speed backbone network and a low-speed general purpose network. In our model, n nodes are connected by a static network and each node can decide individually to become a gateway node. A gateway node pays a fixed price for its connection to the high-speed network, but can utilize the high-speed network to gain communication distance 0 to all other gateways. Communication distances in the low-speed network are given by the hop distances. The effective communication distance between any two nodes then is given by the shortest path, which is possibly improved by using gateways as shortcuts.

Every node v has the objective to minimize its communication costs, given by the sum (SUM-game) or maximum (MAX-game) of the effective communication distances from v to all other nodes plus a fixed price α > 0, if it decides to be a gateway. For both games and different ranges of α, we study the existence of equilibria, the price of anarchy, and convergence properties of best-response dynamics.
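As an illustration of the model (our own sketch with assumed names, for a connected unweighted graph; not code from the paper), a node's private cost can be computed with breadth-first search, treating all gateways as mutually at distance 0:

```python
from collections import deque

def hop_distances(adj, source):
    """BFS hop distances from source in the static low-speed network."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def private_cost(adj, gateways, v, alpha, objective=sum):
    """Cost of node v in the SUM-game (objective=sum) or MAX-game
    (objective=max): effective distances may shortcut through the
    gateways, plus the fixed price alpha if v is itself a gateway."""
    d_v = hop_distances(adj, v)
    to_gateway = min((d_v[g] for g in gateways), default=float("inf"))
    # distance from every node to its nearest gateway
    from_gateway = {u: float("inf") for u in adj}
    for g in gateways:
        for u, d in hop_distances(adj, g).items():
            from_gateway[u] = min(from_gateway[u], d)
    effective = [min(d_v[u], to_gateway + from_gateway[u])
                 for u in adj if u != v]
    return objective(effective) + (alpha if v in gateways else 0)

# Path a-b-c-d-e with gateways a and e and alpha = 2:
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"],
       "d": ["c", "e"], "e": ["d"]}
print(private_cost(adj, {"a", "e"}, "a", 2))                 # SUM-game: 1+2+1+0 + 2 = 6
print(private_cost(adj, {"a", "e"}, "a", 2, objective=max))  # MAX-game: 2 + 2 = 4
```

On the example path, node a reaches e at effective distance 0 through the high-speed network and d at distance 1 via the gateway e, so its SUM-game cost is 4 plus the gateway price.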

@inproceedings{2014sagtmultilevel,

author = {Sebastian Abshoff AND Andreas Cord Landwehr AND Daniel Jung AND Alexander Skopalik},

title = {Brief Announcement: A Model for Multilevel Network Games},

booktitle = {Proceedings of the 7th International Symposium on Algorithmic Game Theory (SAGT)},

year = {2014},

editor = {Ron Lavi},

pages = {294},

publisher = {Springer},

abstract = {Today's networks, like the Internet, do not consist of one but a mixture of several interconnected networks. Each has individual qualities and hence the performance of a network node results from the networks' interplay.We introduce a new game theoretic model capturing the interplay between a high-speed backbone network and a low-speed general purpose network. In our model, n nodes are connected by a static network and each node can decide individually to become a gateway node. A gateway node pays a fixed price for its connection to the high-speed network, but can utilize the high-speed network to gain communication distance 0 to all other gateways. Communication distances in the low-speed network are given by the hop distances. The effective communication distance between any two nodes then is given by the shortest path, which is possibly improved by using gateways as shortcuts.Every node v has the objective to minimize its communication costs, given by the sum (SUM-game) or maximum (MAX-game) of the effective communication distances from v to all other nodes plus a fixed price \alpha > 0, if it decides to be a gateway. For both games and different ranges of \alpha, we study the existence of equilibria, the price of anarchy, and convergence properties of best-response dynamics.},

series = {LNCS}

}

Sebastian Kniesburges, Christine Markarian, Friedhelm Meyer auf der Heide, Christian Scheideler:

**Algorithmic Aspects of Resource Management in the Cloud** In Proceedings of the 21st International Colloquium on Structural Information and Communication Complexity (SIROCCO). Springer, LNCS, vol. 8576, pp. 1-13 **(2014)**

In this survey article, we discuss two algorithmic research areas that emerge from problems that arise when resources are offered in the cloud. The first area, online leasing, captures problems arising from the fact that resources in the cloud are not bought, but leased by cloud vendors. The second area, Distributed Storage Systems, deals with problems arising from so-called cloud federations, i.e., when several cloud providers are needed to fulfill a given task.

@inproceedings{KMMS2014,

author = {Sebastian Kniesburges AND Christine Markarian AND Friedhelm Meyer auf der Heide AND Christian Scheideler},

title = {Algorithmic Aspects of Resource Management in the Cloud},

booktitle = {Proceedings of the 21st International Colloquium on Structural Information and Communication Complexity (SIROCCO)},

year = {2014},

pages = {1-13},

publisher = {Springer},

abstract = {In this survey article, we discuss two algorithmic research areas that emerge from problems that arise when resources are offered in the cloud. The first area, online leasing, captures problems arising from the fact that resources in the cloud are not bought, but leased by cloud vendors. The second area, Distributed Storage Systems, deals with problems arising from so-called cloud federations, i.e., when several cloud providers are needed to fulfill a given task.},

series = {LNCS}

}

Dominik Gall, Riko Jacob, Andrea W. Richa, Christian Scheideler, Stefan Schmid, Hanjo Täubig:

**A Note on the Parallel Runtime of Self-Stabilizing Graph Linearization** In *Theory of Computing Systems*, vol. 55, no. 1, pp. 110-135. Springer **(2014)**

Topological self-stabilization is an important concept to build robust open distributed systems (such as peer-to-peer systems) where nodes can organize themselves into meaningful network topologies. The goal is to devise distributed algorithms where nodes forward, insert, and delete links to neighboring nodes, and that converge quickly to such a desirable topology, independently of the initial network configuration. This article proposes a new model to study the parallel convergence time. Our model sheds light on the achievable parallelism by avoiding bottlenecks of existing models that can yield a distorted picture. As a case study, we consider local graph linearization—i.e., how to build a sorted list of the nodes of a connected graph in a distributed and self-stabilizing manner. In order to study the main structure and properties of our model, we propose two variants of a most simple local linearization algorithm. For each of these variants, we present analyses of the worst-case and best-case parallel time complexities, as well as the performance under a greedy selection of the actions to be executed. It turns out that the analysis is non-trivial despite the simple setting, and to complement our formal insights we report on our experiments which indicate that the runtimes may be better in the average case.
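As a toy illustration (ours; the article studies parallel executions of such rules, and this sequential sketch is not its exact algorithm or model), a local linearization rule can be simulated by letting each node keep only its closest neighbor on each side and delegate any farther neighbor to that closest one:

```python
# Sequential toy simulation of local graph linearization (illustration
# only): vertices are 0..n-1; each node keeps only its closest neighbor
# on each side and delegates every farther same-side neighbor to it.
# The loop ends exactly when the edge set is the sorted list {(i, i+1)}.

def linearize(n, edges):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for u in range(n):
            for side in (lambda x: x > u, lambda x: x < u):
                nbrs = sorted((v for v in adj[u] if side(v)),
                              key=lambda v: abs(v - u))
                for far in nbrs[1:]:
                    closest = nbrs[0]
                    # hand the long-range edge (u, far) over to the
                    # neighbor lying between u and far
                    adj[u].discard(far)
                    adj[far].discard(u)
                    adj[closest].add(far)
                    adj[far].add(closest)
                    changed = True
    return {(u, v) for u in adj for v in adj[u] if u < v}

print(linearize(5, [(0, 4), (4, 1), (1, 3), (3, 2), (0, 2)]))
# converges to the sorted list {(0, 1), (1, 2), (2, 3), (3, 4)}
```

Each delegation strictly shortens the total edge length while preserving connectivity, so the loop terminates with the sorted list.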

@article{GJRSST2014,

author = {Dominik Gall AND Riko Jacob AND Andrea W. Richa AND Christian Scheideler AND Stefan Schmid AND Hanjo T{\"a}ubig},

title = {A Note on the Parallel Runtime of Self-Stabilizing Graph Linearization},

journal = {Theory of Computing Systems},

year = {2014},

volume = {55},

number = {1},

pages = {110-135},

abstract = {Topological self-stabilization is an important concept to build robust open distributed systems (such as peer-to-peer systems) where nodes can organize themselves into meaningful network topologies. The goal is to devise distributed algorithms where nodes forward, insert, and delete links to neighboring nodes, and that converge quickly to such a desirable topology, independently of the initial network configuration. This article proposes a new model to study the parallel convergence time. Our model sheds light on the achievable parallelism by avoiding bottlenecks of existing models that can yield a distorted picture. As a case study, we consider local graph linearization—i.e., how to build a sorted list of the nodes of a connected graph in a distributed and self-stabilizing manner. In order to study the main structure and properties of our model, we propose two variants of a most simple local linearization algorithm. For each of these variants, we present analyses of the worst-case and best-case parallel time complexities, as well as the performance under a greedy selection of the actions to be executed. It turns out that the analysis is non-trivial despite the simple setting, and to complement our formal insights we report on our experiments which indicate that the runtimes may be better in the average case.}

}

Sebastian Kniesburges, Andreas Koutsopoulos, Christian Scheideler:

**A Deterministic Worst-Case Message Complexity Optimal Solution for Resource Discovery** In *Theoretical Computer Science*. Elsevier **(2014)**

We consider the problem of resource discovery in distributed systems. In particular, we give an algorithm such that each node in a network discovers the address of any other node in the network. We model the knowledge of the nodes as a virtual overlay network given by a directed graph such that complete knowledge of all nodes corresponds to a complete graph in the overlay network. Although there are several solutions for resource discovery, our solution is the first that achieves worst-case optimal work for each node, i.e. the number of addresses (O(n)) or bits (O(n log n)) a node receives or sends coincides with the lower bound, while ensuring only a linear runtime (O(n)) in the number of rounds.

@article{ResDiscJournal,

author = {Sebastian Kniesburges AND Andreas Koutsopoulos AND Christian Scheideler},

title = {A Deterministic Worst-Case Message Complexity Optimal Solution for Resource Discovery},

journal = {Theoretical Computer Science},

year = {2014},

abstract = {We consider the problem of resource discovery in distributed systems. In particular, we give an algorithm such that each node in a network discovers the address of any other node in the network. We model the knowledge of the nodes as a virtual overlay network given by a directed graph such that complete knowledge of all nodes corresponds to a complete graph in the overlay network. Although there are several solutions for resource discovery, our solution is the first that achieves worst-case optimal work for each node, i.e. the number of addresses (O(n)) or bits (O(n log n)) a node receives or sends coincides with the lower bound, while ensuring only a linear runtime (O(n)) in the number of rounds.}

}

**2013** (15)

Petr Kolman, Christian Scheideler:

**Towards Duality of Multicommodity Multiroute Cuts and Flows: Multilevel Ball-Growing** In *Theory of Computing Systems*, vol. 53, no. 2, pp. 341-363. Springer **(2013)**

An elementary h-route flow, for an integer h ≥ 1, is a set of h edge-disjoint paths between a source and a sink, each path carrying a unit of flow, and an h-route flow is a non-negative linear combination of elementary h-route flows. An h-route cut is a set of edges whose removal decreases the maximum h-route flow between a given source-sink pair (or between every source-sink pair in the multicommodity setting) to zero. The main result of this paper is an approximate duality theorem for multicommodity h-route cuts and flows, for h ≤ 3: The size of a minimum h-route cut is at least f/h and at most O(log^4 k · f) where f is the size of the maximum h-route flow and k is the number of commodities. The main step towards the proof of this duality is the design and analysis of a polynomial-time approximation algorithm for the minimum h-route cut problem for h = 3 that has an approximation ratio of O(log^4 k). Previously, polylogarithmic approximation was known only for h-route cuts for h ≤ 2. A key ingredient of our algorithm is a novel rounding technique that we call multilevel ball-growing. Though the proof of the duality relies on this algorithm, it is not a straightforward corollary of it as in the case of classical multicommodity flows and cuts. Similar results are shown also for the sparsest multiroute cut problem.

@article{KS2013,

author = {Petr Kolman AND Christian Scheideler},

title = {Towards Duality of Multicommodity Multiroute Cuts and Flows: Multilevel Ball-Growing},

journal = {Theory of Computing Systems},

year = {2013},

volume = {53},

number = {2},

pages = {341-363},

abstract = {An elementary h-route flow, for an integer h ≥ 1, is a set of h edge-disjoint paths between a source and a sink, each path carrying a unit of flow, and an h-route flow is a non-negative linear combination of elementary h-route flows. An h-route cut is a set of edges whose removal decreases the maximum h-route flow between a given source-sink pair (or between every source-sink pair in the multicommodity setting) to zero. The main result of this paper is an approximate duality theorem for multicommodity h-route cuts and flows, for h ≤ 3: The size of a minimum h-route cut is at least f/h and at most O(log^4 k · f) where f is the size of the maximum h-route flow and k is the number of commodities. The main step towards the proof of this duality is the design and analysis of a polynomial-time approximation algorithm for the minimum h-route cut problem for h = 3 that has an approximation ratio of O(log^4 k). Previously, polylogarithmic approximation was known only for h-route cuts for h ≤ 2. A key ingredient of our algorithm is a novel rounding technique that we call multilevel ball-growing. Though the proof of the duality relies on this algorithm, it is not a straightforward corollary of it as in the case of classical multicommodity flows and cuts. Similar results are shown also for the sparsest multiroute cut problem.}

}

Sebastian Abshoff, Markus Benter, Andreas Cord Landwehr, Manuel Malatyali, Friedhelm Meyer auf der Heide:

**Token Dissemination in Geometric Dynamic Networks** In Algorithms for Sensor Systems - 9th International Symposium on Algorithms and Experiments for Sensor Systems, Wireless Networks and Distributed Robotics, ALGOSENSORS 2013, Sophia Antipolis, France, September 5-6, 2013, Revised Selected Papers. Springer, Lecture Notes in Computer Science, vol. 8243, pp. 22-34 **(2013)**

We consider the k-token dissemination problem, where k initially arbitrarily distributed tokens have to be disseminated to all nodes in a dynamic network (as introduced by Kuhn et al., STOC 2010). In contrast to general dynamic networks, our dynamic networks are unit disk graphs, i.e., nodes are embedded into the Euclidean plane and two nodes are connected if and only if their distance is at most R. Our worst-case adversary is allowed to move the nodes on the plane, but the maximum velocity v_max of each node is limited and the graph must be connected in each round. For this model, we provide almost tight lower and upper bounds for k-token dissemination if nodes are restricted to send only one token per round. It turns out that the maximum velocity v_max is a meaningful parameter to characterize dynamics in our model.
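A static snapshot of the geometric model is easy to state in code. The sketch below uses our own naming; the paper's networks are dynamic, with an adversary moving the points at bounded velocity between rounds, which this snapshot does not capture:

```python
import math

# Static snapshot of the unit disk graph model (illustration only):
# nodes are points in the plane, and two nodes are adjacent iff their
# Euclidean distance is at most the communication radius R.

def unit_disk_edges(points, R):
    """points: dict node -> (x, y); returns edges as pairs (u, v), u < v."""
    return {(u, v)
            for u in points for v in points
            if u < v and math.dist(points[u], points[v]) <= R}

points = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (1.0, 1.2)}
edges = unit_disk_edges(points, R=1.5)
print(sorted(edges))  # [(0, 1), (1, 2)]: nodes 0 and 2 are ~1.56 apart, no edge
```

An adversary in the paper's model may move each point per round, subject to the maximum velocity v_max and to the resulting graph staying connected.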

@inproceedings{DBLP:conf/algosensors/AbshoffBCMH13,

author = {Sebastian Abshoff AND Markus Benter AND Andreas Cord-Landwehr AND Manuel Malatyali AND Friedhelm Meyer auf der Heide},

title = {Token Dissemination in Geometric Dynamic Networks},

booktitle = {Algorithms for Sensor Systems - 9th International Symposium on Algorithms and Experiments for Sensor Systems, Wireless Networks and Distributed Robotics, {ALGOSENSORS} 2013, Sophia Antipolis, France, September 5-6, 2013, Revised Selected Papers},

year = {2013},

pages = {22-34},

publisher = {Springer},

month = {September},

abstract = {We consider the k-token dissemination problem, where k initially arbitrarily distributed tokens have to be disseminated to all nodes in a dynamic network (as introduced by Kuhn et al., STOC 2010). In contrast to general dynamic networks, our dynamic networks are unit disk graphs, i.e., nodes are embedded into the Euclidean plane and two nodes are connected if and only if their distance is at most R. Our worst-case adversary is allowed to move the nodes on the plane, but the maximum velocity v_max of each node is limited and the graph must be connected in each round. For this model, we provide almost tight lower and upper bounds for k-token dissemination if nodes are restricted to send only one token per round. It turns out that the maximum velocity v_max is a meaningful parameter to characterize dynamics in our model.},

series = {Lecture Notes in Computer Science}

}

[DOI]

Kalman Graffi, Lars Bremer:

**Symbiotic Coupling of P2P and Cloud Systems: The Wikipedia Case** In Proceedings of the International Conference on Communications (ICC'13). IEEE Computer Society, pp. 3444-3449

**(2013)**

Cloud computing offers high availability, dynamic scalability, and elasticity, while requiring very little administration. However, this service comes with financial costs. Peer-to-peer systems, in contrast, operate at very low costs but cannot match the quality of service of the cloud. This paper focuses on the case study of Wikipedia and presents an approach to reduce the operational costs of hosting similar websites in the cloud by using a practical peer-to-peer approach. Visitors of the site join a Chord overlay, which acts as a first cache for article lookups. Simulation results show that up to 72% of the article lookups in Wikipedia could be answered by other visitors instead of using the cloud.
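The first-cache idea can be sketched with a minimal consistent-hashing lookup in the spirit of Chord: article keys and visitor peers are hashed onto one identifier ring, and the peer succeeding a key's identifier is checked before falling back to the cloud. The ring size `M` and all names below are illustrative assumptions, not taken from the paper:

```python
import hashlib
from bisect import bisect_left

M = 2 ** 16  # size of the identifier ring (illustrative)

def chord_id(name):
    """Hash a peer or article name onto the identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

def successor(ring, key_id):
    """The peer responsible for key_id: the first peer id clockwise
    from key_id (ring is a sorted list of peer ids, wrapping around)."""
    i = bisect_left(ring, key_id)
    return ring[i % len(ring)]

# Visitors join the ring; an article lookup is first routed to the
# responsible visitor's cache before querying the cloud.
peers = sorted(chord_id(f"visitor-{i}") for i in range(50))
owner = successor(peers, chord_id("Article:Peer-to-peer"))
```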

@inproceedings{BremerGraffi13,

author = {Kalman Graffi AND Lars Bremer},

title = {Symbiotic Coupling of P2P and Cloud Systems: The Wikipedia Case},

booktitle = {Proceedings of the International Conference on Communications (ICC'13)},

year = {2013},

pages = {3444-3449},

publisher = {IEEE Computer Society},

abstract = {Cloud computing offers high availability, dynamic scalability, and elasticity, while requiring very little administration. However, this service comes with financial costs. Peer-to-peer systems, in contrast, operate at very low costs but cannot match the quality of service of the cloud. This paper focuses on the case study of Wikipedia and presents an approach to reduce the operational costs of hosting similar websites in the cloud by using a practical peer-to-peer approach. Visitors of the site join a Chord overlay, which acts as a first cache for article lookups. Simulation results show that up to 72% of the article lookups in Wikipedia could be answered by other visitors instead of using the cloud.}

}

[DOI]

Sebastian Abshoff, Markus Benter, Manuel Malatyali, Friedhelm Meyer auf der Heide:

**On Two-Party Communication Through Dynamic Networks** In Proceedings of the 17th International Conference on Principles of Distributed Systems (OPODIS). Springer, LNCS, vol. 8304, pp. 11-22

**(2013)**

We study two-party communication in the context of directed dynamic networks that are controlled by an adaptive adversary. This adversary is able to change all edges as long as the networks stay strongly-connected in each round. In this work, we establish a relation between counting the total number of nodes in the network and the problem of exchanging tokens between two communication partners which communicate through a dynamic network. We show that the communication problem for a constant fraction of n tokens in a dynamic network with n nodes is at most as hard as counting the number of nodes in a dynamic network with at most 4n+3 nodes. For the proof, we construct a family of directed dynamic networks and apply a lower bound from two-party communication complexity.

@inproceedings{DBLP:conf/opodis/AbshoffBMH13,

author = {Sebastian Abshoff AND Markus Benter AND Manuel Malatyali AND Friedhelm Meyer auf der Heide},

title = {On Two-Party Communication Through Dynamic Networks},

booktitle = {Proceedings of the 17th International Conference on Principles of Distributed Systems (OPODIS)},

year = {2013},

pages = {11-22},

publisher = {Springer},

month = {December},

abstract = {We study two-party communication in the context of directed dynamic networks that are controlled by an adaptive adversary. This adversary is able to change all edges as long as the networks stay strongly-connected in each round. In this work, we establish a relation between counting the total number of nodes in the network and the problem of exchanging tokens between two communication partners which communicate through a dynamic network. We show that the communication problem for a constant fraction of n tokens in a dynamic network with n nodes is at most as hard as counting the number of nodes in a dynamic network with at most 4n+3 nodes. For the proof, we construct a family of directed dynamic networks and apply a lower bound from two-party communication complexity.},

series = {LNCS}

}

[DOI]

Chen Avin, Bernhard Haeupler, Zvi Lotker, Christian Scheideler, Stefan Schmid:

**Locally Self-Adjusting Tree Networks** In Proceedings of the 27th IEEE International Parallel and Distributed Processing Symposium (IPDPS). IEEE Computer Society, pp. 395-406

**(2013)**

This paper initiates the study of self-adjusting networks (or distributed data structures) whose topologies dynamically adapt to a communication pattern $\sigma$. We present a fully decentralized self-adjusting solution called SplayNet. A SplayNet is a distributed generalization of the classic splay tree concept. It ensures short paths (which can be found using local-greedy routing) between communication partners while minimizing topological rearrangements. We derive an upper bound for the amortized communication cost of a SplayNet based on empirical entropies of $\sigma$, and show that SplayNets have several interesting convergence properties. For instance, SplayNets feature provable online optimality under special request scenarios. We also investigate the optimal static network and prove different lower bounds for the average communication cost based on graph cuts and on the empirical entropy of the communication pattern $\sigma$. From these lower bounds it follows, e.g., that SplayNets are optimal in scenarios where the requests follow a product distribution as well. Finally, this paper shows that in contrast to the Minimum Linear Arrangement problem which is generally NP-hard, the optimal static tree network can be computed in polynomial time for any guest graph, despite the exponentially large graph family. We complement our formal analysis with a small simulation study on a Facebook graph.

@inproceedings{AHLSS2013IPDPS,

author = {Chen Avin AND Bernhard Haeupler AND Zvi Lotker AND Christian Scheideler AND Stefan Schmid},

title = {Locally Self-Adjusting Tree Networks},

booktitle = {Proceedings of the 27th IEEE International Parallel and Distributed Processing Symposium (IPDPS)},

year = {2013},

pages = {395-406},

publisher = {IEEE Computer Society},

abstract = {This paper initiates the study of self-adjusting networks (or distributed data structures) whose topologies dynamically adapt to a communication pattern $\sigma$. We present a fully decentralized self-adjusting solution called SplayNet. A SplayNet is a distributed generalization of the classic splay tree concept. It ensures short paths (which can be found using local-greedy routing) between communication partners while minimizing topological rearrangements. We derive an upper bound for the amortized communication cost of a SplayNet based on empirical entropies of $\sigma$, and show that SplayNets have several interesting convergence properties. For instance, SplayNets feature provable online optimality under special request scenarios. We also investigate the optimal static network and prove different lower bounds for the average communication cost based on graph cuts and on the empirical entropy of the communication pattern $\sigma$. From these lower bounds it follows, e.g., that SplayNets are optimal in scenarios where the requests follow a product distribution as well. Finally, this paper shows that in contrast to the Minimum Linear Arrangement problem which is generally NP-hard, the optimal static tree network can be computed in polynomial time for any guest graph, despite the exponentially large graph family. We complement our formal analysis with a small simulation study on a Facebook graph.}

}

[DOI]

Friedhelm Meyer auf der Heide, Kamil Swierkot:

**Hierarchies in Local Distributed Decision** In *ArXiv e-prints*.

**(2013)** (eprint arXiv:1311.7229)

We study the complexity theory for the local distributed setting introduced by Korman, Peleg and Fraigniaud. They have defined three complexity classes LD (Local Decision), NLD (Nondeterministic Local Decision) and NLD^#n. The class LD consists of all languages which can be decided with a constant number of communication rounds. The class NLD consists of all languages which can be verified by a nondeterministic algorithm with a constant number of communication rounds. In order to define the nondeterministic classes, they have transferred the notion of nondeterminism into the distributed setting by the use of certificates and verifiers. The class NLD^#n consists of all languages which can be verified by a nondeterministic algorithm where each node has access to an oracle for the number of nodes. They have shown the hierarchy LD subset NLD subset NLD^#n. Our main contributions are strict hierarchies within the classes defined by Korman, Peleg and Fraigniaud. We define additional complexity classes: the class LD(t) consists of all languages which can be decided with at most t communication rounds. The class NLD-O(f) consists of all languages which can be verified by a local verifier such that the size of the certificates that are needed to verify the language is bounded by a function from O(f). Our main results are refined strict hierarchies within these nondeterministic classes.

@article{hniid7931,

author = {Friedhelm Meyer auf der Heide AND Kamil Swierkot},

title = {Hierarchies in Local Distributed Decision},

journal = {ArXiv e-prints},

year = {2013},

month = {nov},

note = {eprint arXiv:1311.7229},

abstract = {We study the complexity theory for the local distributed setting introduced by Korman, Peleg and Fraigniaud. They have defined three complexity classes LD (Local Decision), NLD (Nondeterministic Local Decision) and NLD^#n. The class LD consists of all languages which can be decided with a constant number of communication rounds. The class NLD consists of all languages which can be verified by a nondeterministic algorithm with a constant number of communication rounds. In order to define the nondeterministic classes, they have transferred the notion of nondeterminism into the distributed setting by the use of certificates and verifiers. The class NLD^#n consists of all languages which can be verified by a nondeterministic algorithm where each node has access to an oracle for the number of nodes. They have shown the hierarchy LD subset NLD subset NLD^#n. Our main contributions are strict hierarchies within the classes defined by Korman, Peleg and Fraigniaud. We define additional complexity classes: the class LD(t) consists of all languages which can be decided with at most t communication rounds. The class NLD-O(f) consists of all languages which can be verified by a local verifier such that the size of the certificates that are needed to verify the language is bounded by a function from O(f). Our main results are refined strict hierarchies within these nondeterministic classes.}

}

[DOI]

Kalman Graffi, Vitaliy Rapp:

**Continuous Gossip-based Aggregation through Dynamic Information Aging** In Proceedings of the International Conference on Computer Communications and Networks (ICCCN'13). IEEE Computer Society, pp. 1-7

**(2013)**

Existing solutions for gossip-based aggregation in peer-to-peer networks use epochs to calculate a global estimation from an initial static set of local values. Once the estimation converges system-wide, a new epoch is started with fresh initial values. Long epochs result in precise estimations based on old measurements, and short epochs result in imprecise aggregated estimations. In contrast to this approach, we present in this paper a continuous, epoch-less approach which considers fresh local values in every round of the gossip-based aggregation. Using dynamic information aging, inaccurate values and values from peers that have left the network fade from the aggregation memory. Evaluation shows that the presented approach for continuous information aggregation in peer-to-peer systems monitors the system performance precisely, adapts to changes, and is lightweight to operate.
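The epoch-less idea can be sketched as gossip averaging where each round blends in fresh local measurements with an aging weight, so old information decays instead of being reset per epoch. The pairing scheme, the weight `alpha`, and all names below are assumptions made for illustration, not the paper's protocol:

```python
import random

def gossip_round(estimates, local_values, alpha=0.2, rng=random):
    """One round of epoch-less gossip averaging: random pairs of nodes
    adopt their pair average, then every node blends in its fresh local
    measurement with weight alpha, so stale information decays
    geometrically over rounds instead of being discarded per epoch."""
    n = len(estimates)
    order = list(range(n))
    rng.shuffle(order)
    for a, b in zip(order[::2], order[1::2]):
        avg = (estimates[a] + estimates[b]) / 2.0
        estimates[a] = estimates[b] = avg
    for v in range(n):
        estimates[v] = (1 - alpha) * estimates[v] + alpha * local_values[v]
    return estimates
```

If the estimates start equal to the local values and the local values stay constant, both steps preserve the global sum, so the network-wide mean of the estimates always equals the true mean while individual estimates mix toward it; when a local value changes, the old contribution ages out over subsequent rounds.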

@inproceedings{RappGraffi13,

author = {Kalman Graffi AND Vitaliy Rapp},

title = {Continuous Gossip-based Aggregation through Dynamic Information Aging},

booktitle = {Proceedings of the International Conference on Computer Communications and Networks (ICCCN'13)},

year = {2013},

pages = {1-7},

publisher = {IEEE Computer Society},

abstract = {Existing solutions for gossip-based aggregation in peer-to-peer networks use epochs to calculate a global estimation from an initial static set of local values. Once the estimation converges system-wide, a new epoch is started with fresh initial values. Long epochs result in precise estimations based on old measurements, and short epochs result in imprecise aggregated estimations. In contrast to this approach, we present in this paper a continuous, epoch-less approach which considers fresh local values in every round of the gossip-based aggregation. Using dynamic information aging, inaccurate values and values from peers that have left the network fade from the aggregation memory. Evaluation shows that the presented approach for continuous information aggregation in peer-to-peer systems monitors the system performance precisely, adapts to changes, and is lightweight to operate.}

}

[DOI]

Matthias Feldotto, Kalman Graffi:

**Comparative Evaluation of Peer-to-Peer Systems Using PeerfactSim.KOM** In Proceedings of the International Conference on High Performance Computing and Simulation (HPCS'13). IEEE Computer Society, pp. 99-106

**(2013)**

Comparative evaluations of peer-to-peer protocols through simulations are a viable approach to judge the performance and costs of the individual protocols in large-scale networks. In order to support this work, we enhanced the peer-to-peer systems simulator PeerfactSim.KOM with a fine-grained analyzer concept, with exhaustive automated measurements and gnuplot generators as well as a coordination control to evaluate a set of experiment setups in parallel. Thus, by configuring all experiments and protocols only once and starting the simulator, all desired measurements are performed, analyzed, evaluated and combined, resulting in a holistic environment for the comparative evaluation of peer-to-peer systems.

@inproceedings{FeldGraffi13,

author = {Matthias Feldotto AND Kalman Graffi},

title = {Comparative Evaluation of Peer-to-Peer Systems Using PeerfactSim.KOM},

booktitle = {Proceedings of the International Conference on High Performance Computing and Simulation (HPCS'13)},

year = {2013},

pages = {99-106},

publisher = {IEEE Computer Society},

abstract = {Comparative evaluations of peer-to-peer protocols through simulations are a viable approach to judge the performance and costs of the individual protocols in large-scale networks. In order to support this work, we enhanced the peer-to-peer systems simulator PeerfactSim.KOM with a fine-grained analyzer concept, with exhaustive automated measurements and gnuplot generators as well as a coordination control to evaluate a set of experiment setups in parallel. Thus, by configuring all experiments and protocols only once and starting the simulator, all desired measurements are performed, analyzed, evaluated and combined, resulting in a holistic environment for the comparative evaluation of peer-to-peer systems.}

}

[DOI]

Kalman Graffi, Markus Benter, Mohammad Divband, Sebastian Kniesburges, Andreas Koutsopoulos:

**Ca-Re-Chord: A Churn Resistant Self-stabilizing Chord Overlay Network** In Proceedings of the Conference on Networked Systems (NetSys). IEEE Computer Society, pp. 27-34

**(2013)**

Self-stabilization is the property of a system to transfer itself, regardless of the initial state, into a legitimate state. Chord, as a simple, decentralized, and scalable distributed hash table, is an ideal showcase to introduce self-stabilization for p2p overlays. In this paper, we present Re-Chord, a self-stabilizing version of Chord. We show that the stabilization process is functional but vulnerable to strong churn. We therefore present Ca-Re-Chord, a churn-resistant version of Re-Chord that allows the creation of a useful DHT in any kind of graph, regardless of the initial state. Simulation results attest to the churn resistance and good performance of Ca-Re-Chord.

@inproceedings{benter13a,

author = {Kalman Graffi AND Markus Benter AND Mohammad Divband AND Sebastian Kniesburges AND Andreas Koutsopoulos},

title = {Ca-Re-Chord: A Churn Resistant Self-stabilizing Chord Overlay Network},

booktitle = {Proceedings of the Conference on Networked Systems (NetSys)},

year = {2013},

pages = {27-34},

publisher = {IEEE Computer Society},

abstract = {Self-stabilization is the property of a system to transfer itself, regardless of the initial state, into a legitimate state. Chord, as a simple, decentralized, and scalable distributed hash table, is an ideal showcase to introduce self-stabilization for p2p overlays. In this paper, we present Re-Chord, a self-stabilizing version of Chord. We show that the stabilization process is functional but vulnerable to strong churn. We therefore present Ca-Re-Chord, a churn-resistant version of Re-Chord that allows the creation of a useful DHT in any kind of graph, regardless of the initial state. Simulation results attest to the churn resistance and good performance of Ca-Re-Chord.}

}

[DOI]

Kalman Graffi, Timo Klerx:

**Bootstrapping Skynet: Calibration and Autonomic Self-Control of Structured Peer-to-Peer Networks** In Proceedings of the International Conference on Peer-to-Peer Computing (P2P'13). IEEE Computer Society, pp. 1-5

**(2013)**

Peer-to-peer systems scale to millions of nodes and provide routing and storage functions with best-effort quality. In order to provide a guaranteed quality of the overlay functions, even under strong dynamics in the network with regard to peer capacities, online participation and usage patterns, we propose to calibrate the peer-to-peer overlay and to autonomously learn which qualities can be reached. For that, we simulate the peer-to-peer overlay systematically under a wide range of parameter configurations and use neural networks to learn the effects of the configurations on the quality metrics. Thus, by choosing a specific quality setting, the overlay operator lets the network tune itself to the learned parameter configurations that lead to the desired quality. Evaluation shows that the presented self-calibration succeeds in learning the configuration-quality interdependencies and that peer-to-peer systems can learn and adapt their behavior according to desired quality goals.

@inproceedings{KlerxGraffi13,

author = {Kalman Graffi AND Timo Klerx},

title = {Bootstrapping Skynet: Calibration and Autonomic Self-Control of Structured Peer-to-Peer Networks},

booktitle = {Proceedings of the International Conference on Peer-to-Peer Computing (P2P'13)},

year = {2013},

pages = {1-5},

publisher = {IEEE Computer Society},

abstract = {Peer-to-peer systems scale to millions of nodes and provide routing and storage functions with best effort quality. In order to provide a guaranteed quality of the overlay functions, even under strong dynamics in the network with regard to peer capacities, online participation and usage patterns, we propose to calibrate the peer-to-peer overlay and to autonomously learn which qualities can be reached. For that, we simulate the peer-to-peer overlay systematically under a wide range of parameter configurations and use neural networks to learn the effects of the configurations on the quality metrics. Thus, by choosing a specific quality setting by the overlay operator, the network can tune itself to the learned parameter configurations that lead to the desired quality. Evaluation shows that the presented self-calibration succeeds in learning the configuration-quality interdependencies and that peer-to-peer systems can learn and adapt their behavior according to desired quality goals.}

}


Paola Flocchini, Jie Gao, Evangelos Kranakis, Friedhelm Meyer auf der Heide (eds.):

**Algorithms for Sensor Systems - 9th International Symposium on Algorithms and Experiments for Sensor Systems, Wireless Networks and Distributed Robotics**Springer, LNCS, vol. 8243

**(2013)**[Show BibTeX]

@proceedings{FGKM2013,

title = {Algorithms for Sensor Systems - 9th International Symposium on Algorithms and Experiments for Sensor Systems, Wireless Networks and Distributed Robotics},

year = {2013},

editor = {Paola Flocchini AND Jie Gao AND Evangelos Kranakis AND Friedhelm Meyer auf der Heide},

volume = {8243},

series = {LNCS},

publisher = {Springer},

month = {September}

}


Philip Wette, Kalman Graffi:

**Adding Capacity-Aware Storage Indirection to Homogeneous Distributed Hash Tables**In Proceedings of the Conference on Networked Systems (NetSys). IEEE Computer Society, pp. 35-42

**(2013)**[Show Abstract]

Distributed hash tables are very versatile, as distributed storage is a desirable feature for various applications. Typical structured overlays like Chord, Pastry or Kademlia consider only homogeneous nodes with equal capacities, which does not resemble reality. In a practical use case, nodes might get overloaded by storing popular data. In this paper, we present a general approach to enable capacity awareness and load-balancing capability in homogeneous structured overlays. We introduce a hierarchical second structured overlay alongside, which allows efficient capacity-based access to the nodes in the system as hosting mirrors. Simulation results show that the structured overlay is able to store various contents, such as that of a social network, with only a negligible number of overloaded peers. Content, even if very popular, is hosted by easily findable capable peers. Thus, long-existing and well-evaluated overlays like Chord or Pastry can be used to create attractive DHT-based applications.
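The indirection idea can be illustrated with a toy hash ring. All names here (`Ring`, `store`, `lookup`) are hypothetical, and the mirror selection is simplified to "peer with the most free capacity"; the paper instead locates capable mirrors efficiently via a hierarchical second structured overlay.

```python
# Toy sketch of capacity-aware storage indirection on a homogeneous ring
# (illustrative only; see lead-in for the simplifying assumptions).
import hashlib

def h(x):
    return int(hashlib.sha1(str(x).encode()).hexdigest(), 16) % 2 ** 16

class Ring:
    def __init__(self, peers, capacity):
        self.ids = sorted(h(p) for p in peers)
        self.peer_of = {h(p): p for p in peers}
        self.capacity = dict(capacity)   # peer -> remaining storage slots
        self.mirror = {}                 # key -> mirror peer (indirection record)

    def responsible(self, key):
        kid = h(key)
        for pid in self.ids:             # first peer id clockwise from the key
            if pid >= kid:
                return self.peer_of[pid]
        return self.peer_of[self.ids[0]] # wrap around the ring

    def store(self, key):
        p = self.responsible(key)
        if self.capacity[p] > 0:         # plain homogeneous DHT path
            self.capacity[p] -= 1
            return p
        m = max(self.capacity, key=self.capacity.get)  # overloaded: redirect
        self.capacity[m] -= 1
        self.mirror[key] = m
        return m

    def lookup(self, key):
        return self.mirror.get(key, self.responsible(key))
```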

[Show BibTeX] @inproceedings{wette13a,

author = {Philip Wette AND Kalman Graffi},

title = {Adding Capacity-Aware Storage Indirection to Homogeneous Distributed Hash Tables},

booktitle = {Proceedings of the Conference on Networked Systems (NetSys)},

year = {2013},

pages = {35-42},

publisher = {IEEE Computer Society},

abstract = {Distributed hash tables are very versatile, as distributed storage is a desirable feature for various applications. Typical structured overlays like Chord, Pastry or Kademlia consider only homogeneous nodes with equal capacities, which does not resemble reality. In a practical use case, nodes might get overloaded by storing popular data. In this paper, we present a general approach to enable capacity awareness and load-balancing capability in homogeneous structured overlays. We introduce a hierarchical second structured overlay alongside, which allows efficient capacity-based access to the nodes in the system as hosting mirrors. Simulation results show that the structured overlay is able to store various contents, such as that of a social network, with only a negligible number of overloaded peers. Content, even if very popular, is hosted by easily findable capable peers. Thus, long-existing and well-evaluated overlays like Chord or Pastry can be used to create attractive DHT-based applications.}

}


Matthias Keller, Stefan Pawlik, Peter Pietrzyk, Holger Karl:

**A Local Heuristic for Latency-Optimized Distributed Cloud Deployment**In Proceedings of the 6th International Conference on Utility and Cloud Computing (UCC) workshop on Distributed cloud computing. IEEE/ACM, pp. 429-434

**(2013)**[Show Abstract]

In Distributed Cloud Computing, applications are deployed across many data centres at topologically diverse locations to improve network-related quality of service (QoS). As we focus on interactive applications, we minimize the latency between users and an application by allocating Cloud resources near the customers. Allocating resources at all locations will result in the best latency but also in the highest expenses. So we need to find an optimal subset of locations that reduces both the latency and the expenses – the facility location problem (FLP). In addition, we consider resource capacity restrictions, as a resource can only serve a limited number of users. An FLP can be solved globally; additionally, we propose a local, distributed heuristic. This heuristic runs within the network and does not depend on a global component. No distributed, local approximations for the capacitated FLP have been proposed so far due to the complexity of the problem. We compared the heuristic with an optimal solution obtained from a mixed integer program for different network topologies. We investigated the influence of different parameters like overall resource utilization or different latency weights.
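For intuition, the underlying facility location trade-off can be sketched with a centralized greedy for the uncapacitated case. This is illustrative only: the paper's contribution is a local, distributed heuristic that also respects capacities, and the function names and the penalty for unserved users below are invented.

```python
# Opening a cloud location costs money; each user then pays the latency to
# its nearest open location. The greedy repeatedly opens the location whose
# opening cost is outweighed by the total latency it saves.

def greedy_flp(open_cost, latency, unserved=1000.0):
    """open_cost: {site: cost}; latency: {(user, site): latency}."""
    users = {u for (u, _) in latency}
    open_sites = set()
    assigned = {u: unserved for u in users}  # current latency per user
    improved = True
    while improved:
        improved = False
        best, best_gain = None, 0.0
        for s in set(open_cost) - open_sites:
            # total latency saved by opening s, minus its opening cost
            gain = sum(max(0.0, assigned[u] - latency[(u, s)]) for u in users)
            gain -= open_cost[s]
            if gain > best_gain:
                best, best_gain = s, gain
        if best is not None:
            open_sites.add(best)
            for u in users:
                assigned[u] = min(assigned[u], latency[(u, best)])
            improved = True
    total = sum(assigned.values()) + sum(open_cost[s] for s in open_sites)
    return open_sites, total
```

Opening every location would minimize latency but maximize expenses; the greedy stops as soon as no further opening pays for itself.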

[Show BibTeX] @inproceedings{mkeller2013c,

author = {Matthias Keller AND Stefan Pawlik AND Peter Pietrzyk AND Holger Karl},

title = {A Local Heuristic for Latency-Optimized Distributed Cloud Deployment},

booktitle = {Proceedings of the 6th International Conference on Utility and Cloud Computing (UCC) workshop on Distributed cloud computing},

year = {2013},

pages = {429-434},

publisher = {IEEE/ACM},

abstract = {In Distributed Cloud Computing, applications are deployed across many data centres at topologically diverse locations to improve network-related quality of service (QoS). As we focus on interactive applications, we minimize the latency between users and an application by allocating Cloud resources near the customers. Allocating resources at all locations will result in the best latency but also in the highest expenses. So we need to find an optimal subset of locations that reduces both the latency and the expenses – the facility location problem (FLP). In addition, we consider resource capacity restrictions, as a resource can only serve a limited number of users. An FLP can be solved globally; additionally, we propose a local, distributed heuristic. This heuristic runs within the network and does not depend on a global component. No distributed, local approximations for the capacitated FLP have been proposed so far due to the complexity of the problem. We compared the heuristic with an optimal solution obtained from a mixed integer program for different network topologies. We investigated the influence of different parameters like overall resource utilization or different latency weights.}

}


Christine Markarian, Friedhelm Meyer auf der Heide, Michael Schubert:

**A Distributed Approximation Algorithm for Strongly Connected Dominating-Absorbent Sets in Asymmetric Wireless Ad-Hoc Networks**In Proceedings of the 9th International Symposium on Algorithms and Experiments for Sensor Systems, Wireless Networks and Distributed Robotics (ALGOSENSORS). Springer, LNCS, vol. 8243, pp. 217-227

**(2013)**[Show Abstract]

Dominating set based virtual backbones are used for routing in wireless ad-hoc networks. Such backbones receive and transmit messages from/to every node in the network. Existing distributed algorithms only consider undirected graphs, which model symmetric networks with uniform transmission ranges. We are particularly interested in the well-established disk graphs, which model asymmetric networks with non-uniform transmission ranges. The corresponding graph theoretic problem seeks a strongly connected dominating-absorbent set of minimum cardinality in a digraph. A subset of nodes in a digraph is a strongly connected dominating-absorbent set if the subgraph induced by these nodes is strongly connected and each node in the graph is either in the set or has both an in-neighbor and an out-neighbor in it. We introduce the first distributed algorithm for this problem in disk graphs. The algorithm gives an O(k^4)-approximation ratio and has a runtime bound of O(Diam), where Diam is the diameter of the graph and k denotes the transmission ratio r_max/r_min, with r_max and r_min being the maximum and minimum transmission range, respectively. Moreover, we apply our algorithm on the subgraph of disk graphs consisting of only bidirectional edges. Our algorithm gives an O(ln k)-approximation and a runtime bound of O(k^8 log^∗ n), which, for bounded k, is an optimal approximation for the problem, following Lenzen and Wattenhofer’s Ω(log^∗ n) runtime lower bound for distributed constant approximation in disk graphs.

[Show BibTeX] @inproceedings{MMS2013,

author = {Christine Markarian AND Friedhelm Meyer auf der Heide AND Michael Schubert},

title = {A Distributed Approximation Algorithm for Strongly Connected Dominating-Absorbent Sets in Asymmetric Wireless Ad-Hoc Networks},

booktitle = {Proceedings of the 9th International Symposium on Algorithms and Experiments for Sensor Systems, Wireless Networks and Distributed Robotics (ALGOSENSORS)},

year = {2013},

pages = {217-227},

publisher = {Springer},

month = {September},

abstract = {Dominating set based virtual backbones are used for routing in wireless ad-hoc networks. Such backbones receive and transmit messages from/to every node in the network. Existing distributed algorithms only consider undirected graphs, which model symmetric networks with uniform transmission ranges. We are particularly interested in the well-established disk graphs, which model asymmetric networks with non-uniform transmission ranges. The corresponding graph theoretic problem seeks a strongly connected dominating-absorbent set of minimum cardinality in a digraph. A subset of nodes in a digraph is a strongly connected dominating-absorbent set if the subgraph induced by these nodes is strongly connected and each node in the graph is either in the set or has both an in-neighbor and an out-neighbor in it. We introduce the first distributed algorithm for this problem in disk graphs. The algorithm gives an O(k^4)-approximation ratio and has a runtime bound of O(Diam), where Diam is the diameter of the graph and k denotes the transmission ratio r_{max}/r_{min}, with r_{max} and r_{min} being the maximum and minimum transmission range, respectively. Moreover, we apply our algorithm on the subgraph of disk graphs consisting of only bidirectional edges. Our algorithm gives an O(ln k)-approximation and a runtime bound of O(k^8 log^∗ n), which, for bounded k, is an optimal approximation for the problem, following Lenzen and Wattenhofer’s Ω(log^∗ n) runtime lower bound for distributed constant approximation in disk graphs.},

series = {LNCS}

}


Sebastian Kniesburges, Andreas Koutsopoulos, Christian Scheideler:

**A Deterministic Worst-Case Message Complexity Optimal Solution for Resource Discovery**In Proceedings of the 20th International Colloquium on Structural Information and Communication Complexity (SIROCCO). Springer, Lecture Notes in Computer Science, vol. 8179, pp. 165-176

**(2013)**(won the SIROCCO Best Student Paper Award)[Show Abstract]

We consider the problem of resource discovery in distributed systems. In particular, we give an algorithm such that each node in a network discovers the address of any other node in the network. We model the knowledge of the nodes as a virtual overlay network given by a directed graph such that complete knowledge of all nodes corresponds to a complete graph in the overlay network. Although there are several solutions for resource discovery, our solution is the first that achieves worst-case optimal work for each node, i.e. the number of addresses (O(n)) or bits (O(n log n)) a node receives or sends coincides with the lower bound, while ensuring only a linear runtime (O(n)) in the number of rounds.
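The problem statement is easy to simulate: starting from a weakly connected knowledge graph, gossip until every node knows every address. The flooding sketch below (illustrative, hypothetical names) demonstrates only the goal, not the paper's algorithm; its per-node work far exceeds the O(n) addresses / O(n log n) bits optimum the paper matches.

```python
# Naive resource discovery by flooding: each round, every node pushes its
# entire known address set to every node it currently knows.

def discover(edges, n):
    """edges: initial knowledge as {node: set of known nodes}; returns #rounds."""
    known = {v: set(nbrs) | {v} for v, nbrs in edges.items()}
    rounds = 0
    while any(len(k) < n for k in known.values()):
        snapshot = {v: set(k) for v, k in known.items()}
        for v in known:
            for u in snapshot[v]:        # push everything v knows to everyone v knows
                known[u] |= snapshot[v]
        rounds += 1
    return rounds
```

On a directed line the known sets grow by roughly one hop per direction each round; on a star centered at a fully informed node a single round suffices.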

[Show BibTeX] @inproceedings{SIROCCO-KKS13,

author = {Sebastian Kniesburges AND Andreas Koutsopoulos AND Christian Scheideler},

title = {A Deterministic Worst-Case Message Complexity Optimal Solution for Resource Discovery},

booktitle = {Proceedings of the 20th International Colloquium on Structural Information and Communication Complexity (SIROCCO)},

year = {2013},

pages = {165-176},

publisher = {Springer},

note = {won the SIROCCO Best Student Paper Award},

abstract = {We consider the problem of resource discovery in distributed systems. In particular, we give an algorithm such that each node in a network discovers the address of any other node in the network. We model the knowledge of the nodes as a virtual overlay network given by a directed graph such that complete knowledge of all nodes corresponds to a complete graph in the overlay network. Although there are several solutions for resource discovery, our solution is the first that achieves worst-case optimal work for each node, i.e. the number of addresses (O(n)) or bits (O(n log n)) a node receives or sends coincides with the lower bound, while ensuring only a linear runtime (O(n)) in the number of rounds.},

series = {Lecture Notes in Computer Science}

}


**2012** (10)

Riko Jacob, Stephan Ritscher, Christian Scheideler, Stefan Schmid:

**Towards higher-dimensional topological self-stabilization: A distributed algorithm for Delaunay graphs**In

*Theoretical Computer Science*, vol. 457, pp. 137-148. Elsevier**(2012)**[Show Abstract]

This article studies the construction of self-stabilizing topologies for distributed systems. While recent research has focused on chain topologies where nodes need to be linearized with respect to their identifiers, we explore a natural and relevant 2-dimensional generalization. In particular, we present a local self-stabilizing algorithm DStab which is based on the concept of "local Delaunay graphs" and which forwards temporary edges in greedy fashion reminiscent of compass routing. DStab constructs a Delaunay graph from any initial connected topology and in a distributed manner in time O(n^3) in the worst case; if the initial network contains the Delaunay graph, the convergence time is only O(n) rounds. DStab also ensures that individual node joins and leaves affect a small part of the network only. Such self-stabilizing Delaunay networks have interesting applications and our construction gives insights into the necessary geometric reasoning that is required for higher-dimensional linearization problems.

Keywords: Distributed Algorithms, Topology Control, Social Networks

[Show BibTeX]

@article{JRSS2012TCS,

author = {Riko Jacob AND Stephan Ritscher AND Christian Scheideler AND Stefan Schmid},

title = {Towards higher-dimensional topological self-stabilization: A distributed algorithm for Delaunay graphs},

journal = {Theoretical Computer Science},

year = {2012},

volume = {457},

pages = {137-148},

abstract = {This article studies the construction of self-stabilizing topologies for distributed systems. While recent research has focused on chain topologies where nodes need to be linearized with respect to their identifiers, we explore a natural and relevant 2-dimensional generalization. In particular, we present a local self-stabilizing algorithm DStab which is based on the concept of "local Delaunay graphs" and which forwards temporary edges in greedy fashion reminiscent of compass routing. DStab constructs a Delaunay graph from any initial connected topology and in a distributed manner in time O(n^3) in the worst case; if the initial network contains the Delaunay graph, the convergence time is only O(n) rounds. DStab also ensures that individual node joins and leaves affect a small part of the network only. Such self-stabilizing Delaunay networks have interesting applications and our construction gives insights into the necessary geometric reasoning that is required for higher-dimensional linearization problems. Keywords: Distributed Algorithms, Topology Control, Social Networks}

}


Thomas Clouser, Mikhail Nesterenko, Christian Scheideler:

**Tiara: A self-stabilizing deterministic skip list and skip graph**In

*Theoretical Computer Science*, vol. 428, pp. 18-35. Elsevier**(2012)**[Show Abstract]

We present Tiara — a self-stabilizing peer-to-peer network maintenance algorithm. Tiara is truly deterministic which allows it to achieve exact performance bounds. Tiara allows logarithmic searches and topology updates. It is based on a novel sparse 0-1 skip list. We then describe its extension to a ringed structure and to a skip-graph.

Key words: Peer-to-peer networks, overlay networks, self-stabilization.

[Show BibTeX]

@article{CNS2012TCS,

author = {Thomas Clouser AND Mikhail Nesterenko AND Christian Scheideler},

title = {Tiara: A self-stabilizing deterministic skip list and skip graph},

journal = {Theoretical Computer Science},

year = {2012},

volume = {428},

pages = {18-35},

abstract = {We present Tiara — a self-stabilizing peer-to-peer network maintenance algorithm. Tiara is truly deterministic which allows it to achieve exact performance bounds. Tiara allows logarithmic searches and topology updates. It is based on a novel sparse 0-1 skip list. We then describe its extension to a ringed structure and to a skip-graph.Key words: Peer-to-peer networks, overlay networks, self-stabilization.}

}


Valentina Damerow, Bodo Manthey, Friedhelm Meyer auf der Heide, Harald Räcke, Christian Scheideler, Christian Sohler, Till Tantau:

**Smoothed analysis of left-to-right maxima with applications**In

*Transactions on Algorithms*, vol. 8, no. 3, pp. 30. ACM**(2012)**[Show Abstract]

A left-to-right maximum in a sequence of n numbers s_1, …, s_n is a number that is strictly larger than all preceding numbers. In this article we present a smoothed analysis of the number of left-to-right maxima in the presence of additive random noise. We show that for every sequence of n numbers s_i ∈ [0,1] that are perturbed by uniform noise from the interval [-ε,ε], the expected number of left-to-right maxima is Θ(√(n/ε) + log n) for ε > 1/n. For Gaussian noise with standard deviation σ we obtain a bound of O((log^{3/2} n)/σ + log n).

We apply our results to the analysis of the smoothed height of binary search trees and the smoothed number of comparisons in the quicksort algorithm and prove bounds of Θ(√(n/ε) + log n) and Θ((n/(ε+1))·√(n/ε) + n log n), respectively, for uniform random noise from the interval [-ε,ε]. Our results can also be applied to bound the smoothed number of points on a convex hull of points in the two-dimensional plane and to smoothed motion complexity, a concept we describe in this article. We bound how often one needs to update a data structure storing the smallest axis-aligned box enclosing a set of points moving in d-dimensional space.
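Left-to-right maxima are simple to count, which makes the smoothed bound easy to probe empirically. A quick sketch (an illustrative experiment, not from the paper): an increasing adversarial sequence has n left-to-right maxima, but uniform [-ε,ε] noise collapses this to roughly √(n/ε).

```python
# Count left-to-right maxima, then perturb an adversarial sequence with
# uniform noise as in the smoothed-analysis setup.
import random

def ltr_maxima(seq):
    count, best = 0, float("-inf")
    for x in seq:
        if x > best:          # strictly larger than all preceding numbers
            count, best = count + 1, x
    return count

n, eps = 10_000, 0.1
adversarial = [i / n for i in range(n)]               # increasing: n maxima
perturbed = [x + random.uniform(-eps, eps) for x in adversarial]
```

For these parameters √(n/ε) ≈ 316, so the perturbed count lands far below the adversarial n.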

[Show BibTeX]

@article{DMM+2012,

author = {Valentina Damerow AND Bodo Manthey AND Friedhelm Meyer auf der Heide AND Harald R{\"a}cke AND Christian Scheideler AND Christian Sohler AND Till Tantau},

title = {Smoothed analysis of left-to-right maxima with applications},

journal = {Transactions on Algorithms},

year = {2012},

volume = {8},

number = {3},

pages = {30},

abstract = {A left-to-right maximum in a sequence of n numbers s_1, …, s_n is a number that is strictly larger than all preceding numbers. In this article we present a smoothed analysis of the number of left-to-right maxima in the presence of additive random noise. We show that for every sequence of n numbers s_i ∈ [0,1] that are perturbed by uniform noise from the interval [-ε,ε], the expected number of left-to-right maxima is Θ(√(n/ε) + log n) for ε > 1/n. For Gaussian noise with standard deviation σ we obtain a bound of O((log^{3/2} n)/σ + log n). We apply our results to the analysis of the smoothed height of binary search trees and the smoothed number of comparisons in the quicksort algorithm and prove bounds of Θ(√(n/ε) + log n) and Θ((n/(ε+1))·√(n/ε) + n log n), respectively, for uniform random noise from the interval [-ε,ε]. Our results can also be applied to bound the smoothed number of points on a convex hull of points in the two-dimensional plane and to smoothed motion complexity, a concept we describe in this article. We bound how often one needs to update a data structure storing the smallest axis-aligned box enclosing a set of points moving in d-dimensional space.}

}

[DOI]
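To make the central notion of the abstract concrete: left-to-right maxima can be counted in one pass, and the smoothing model adds independent uniform noise to each input. The following sketch is our own illustration (variable names and parameters are ours, not the paper's):

```python
import random

def left_to_right_maxima(seq):
    """Count elements strictly larger than every preceding element."""
    count, best = 0, float("-inf")
    for s in seq:
        if s > best:
            count, best = count + 1, s
    return count

# A sorted sequence is the worst case: all n elements are left-to-right
# maxima. Adding independent uniform noise from [-eps, eps] to each element
# brings the expected count down to Theta(sqrt(n/eps) + log n).
n, eps = 1000, 0.5
worst = [i / n for i in range(n)]
perturbed = [s + random.uniform(-eps, eps) for s in worst]
print(left_to_right_maxima(worst))      # -> 1000
print(left_to_right_maxima(perturbed))  # typically far fewer
```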

Philipp Brandes, Friedhelm Meyer auf der Heide:

In Proceedings of the 4th Workshop on Theoretical Aspects of Dynamic Distributed Systems (TADDS). ACM, ICPS, pp. 9-14

[Show Abstract]

**Distributed Computing in Fault-Prone Dynamic Networks**In Proceedings of the 4th Workshop on Theoretical Aspects of Dynamic Distributed Systems (TADDS). ACM, ICPS, pp. 9-14

**(2012)**[Show Abstract]

Dynamics in networks arises for a variety of reasons, like nodes moving in 2D (or 3D) in multihop cellphone networks, joins and leaves in peer-to-peer networks, evolution in social networks, and many others. In order to understand such kinds of dynamics, and to design distributed algorithms that behave well under dynamics, many ways to model dynamics have been introduced and analyzed w.r.t. correctness and efficiency of distributed algorithms. In [16], Kuhn, Lynch, and Oshman have introduced a very general, worst-case type model of dynamics: the edge set of the network may change arbitrarily from step to step; the only restriction is that it is connected at all times and the set of nodes does not change. An extended model demands that a fixed connected subnetwork is maintained over each time interval of length T (T-interval dynamics). They have presented, among others, algorithms for counting the number of nodes under such general models of dynamics.

In this paper, we generalize their models and algorithms by adding random edge faults, i.e., we consider fault-prone dynamic networks: we assume that a currently existing edge may fail to transmit data with some probability p. We first observe that strong counting, i.e., each node knows the correct count and stops, is not possible in a model with random edge faults. Our two main positive results are feasibility and runtime bounds for weak counting, i.e., stopping is no longer required (but still a correct count in each node), and for strong counting with an upper bound, i.e., an upper bound N on n is known to all nodes.

[Show BibTeX]

@inproceedings{BMadHTADDS,

author = {Philipp Brandes AND Friedhelm Meyer auf der Heide},

title = {Distributed Computing in Fault-Prone Dynamic Networks},

booktitle = {Proceedings of the 4th Workshop on Theoretical Aspects of Dynamic Distributed Systems (TADDS)},

year = {2012},

pages = {9-14},

publisher = {ACM},

abstract = {Dynamics in networks arises for a variety of reasons, like nodes moving in 2D (or 3D) in multihop cellphone networks, joins and leaves in peer-to-peer networks, evolution in social networks, and many others. In order to understand such kinds of dynamics, and to design distributed algorithms that behave well under dynamics, many ways to model dynamics have been introduced and analyzed w.r.t. correctness and efficiency of distributed algorithms. In [16], Kuhn, Lynch, and Oshman have introduced a very general, worst-case type model of dynamics: the edge set of the network may change arbitrarily from step to step; the only restriction is that it is connected at all times and the set of nodes does not change. An extended model demands that a fixed connected subnetwork is maintained over each time interval of length T (T-interval dynamics). They have presented, among others, algorithms for counting the number of nodes under such general models of dynamics. In this paper, we generalize their models and algorithms by adding random edge faults, i.e., we consider fault-prone dynamic networks: we assume that a currently existing edge may fail to transmit data with some probability p. We first observe that strong counting, i.e., each node knows the correct count and stops, is not possible in a model with random edge faults. Our two main positive results are feasibility and runtime bounds for weak counting, i.e., stopping is no longer required (but still a correct count in each node), and for strong counting with an upper bound, i.e., an upper bound N on n is known to all nodes.},

series = {ICPS}

}

[DOI]

Stefan Schmid, Chen Avin, Christian Scheideler, Bernhard Haeupler, Zvi Lotker:

In Proceedings of the 26th International Symposium on Distributed Computing (DISC). Springer, LNCS, vol. 7611, pp. 439-440

[Show Abstract]

**Brief Announcement: SplayNets - Towards Self-Adjusting Distributed Data Structures**In Proceedings of the 26th International Symposium on Distributed Computing (DISC). Springer, LNCS, vol. 7611, pp. 439-440

**(2012)**[Show Abstract]

This paper initiates the study of self-adjusting distributed data structures for networks. In particular, we present SplayNets: a binary search tree based network that is self-adjusting to the routing requests. We derive entropy bounds on the amortized routing cost and show that our splaying algorithm has some interesting properties.

[Show BibTeX]

@inproceedings{SASHL2012DISC,

author = {Stefan Schmid AND Chen Avin AND Christian Scheideler AND Bernhard Haeupler AND Zvi Lotker},

title = {Brief Announcement: SplayNets - Towards Self-Adjusting Distributed Data Structures},

booktitle = {Proceedings of the 26th International Symposium on Distributed Computing (DISC)},

year = {2012},

pages = {439-440},

publisher = {Springer},

abstract = {This paper initiates the study of self-adjusting distributed data structures for networks. In particular, we present SplayNets: a binary search tree based network that is self-adjusting to the routing requests. We derive entropy bounds on the amortized routing cost and show that our splaying algorithm has some interesting properties.},

series = {LNCS}

}

[DOI]

Sebastian Kniesburges, Christian Scheideler:

In Proceedings of the 26th International Symposium on Distributed Computing (DISC). Springer, LNCS, vol. 7611, pp. 435-436

[Show Abstract]

**Brief Announcement: Hashed Predecessor Patricia Trie - A Data Structure for Efficient Predecessor Queries in Peer-to-Peer Systems**In Proceedings of the 26th International Symposium on Distributed Computing (DISC). Springer, LNCS, vol. 7611, pp. 435-436

**(2012)**[Show Abstract]

The design of efficient search structures for peer-to-peer systems has attracted a lot of attention in recent years. In this announcement we address the problem of finding the predecessor in a key set and present an efficient data structure called the hashed Predecessor Patricia trie. Our hashed Predecessor Patricia trie supports PredecessorSearch(x), Insert(x), and Delete(x) in O(log log u) hash table accesses, where u is the size of the universe of the keys. That is, the costs depend only on u and not on the size of the data structure. One feature of our approach is that it only uses the lookup interface of the hash table, and therefore hash table accesses may be realized by any distributed hash table (DHT).

[Show BibTeX]

@inproceedings{KS2012DISC,

author = {Sebastian Kniesburges AND Christian Scheideler},

title = {Brief Announcement: Hashed Predecessor Patricia Trie - A Data Structure for Efficient Predecessor Queries in Peer-to-Peer Systems},

booktitle = {Proceedings of the 26th International Symposium on Distributed Computing (DISC)},

year = {2012},

pages = {435-436},

publisher = {Springer},

abstract = {The design of efficient search structures for peer-to-peer systems has attracted a lot of attention in recent years. In this announcement we address the problem of finding the predecessor in a key set and present an efficient data structure called the hashed Predecessor Patricia trie. Our hashed Predecessor Patricia trie supports PredecessorSearch(x), Insert(x), and Delete(x) in O(log log u) hash table accesses, where u is the size of the universe of the keys. That is, the costs depend only on u and not on the size of the data structure. One feature of our approach is that it only uses the lookup interface of the hash table, and therefore hash table accesses may be realized by any distributed hash table (DHT).},

series = {LNCS}

}

[DOI]
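The key idea of the abstract — locating the longest stored bit-prefix of a query key by binary search over prefix lengths, using only hash-table lookups — can be sketched as follows. This is our own simplified illustration over a toy 8-bit universe (in the spirit of x-fast-trie prefix search), not the paper's data structure; in particular the final "walk up to a left sibling" is a simplification that may cost O(log u) lookups, which the paper's trie avoids.

```python
# Toy universe of W-bit keys, so u = 2**W; finding the longest stored prefix
# takes O(log W) = O(log log u) hash-table lookups.
W = 8

class HashedPrefixIndex:
    def __init__(self):
        # prefix length -> {prefix value: (min key, max key) below that prefix}
        self.levels = [dict() for _ in range(W + 1)]

    def insert(self, x):
        for l in range(W + 1):
            p = x >> (W - l)
            lo, hi = self.levels[l].get(p, (x, x))
            self.levels[l][p] = (min(lo, x), max(hi, x))

    def longest_prefix(self, x):
        # Binary search over prefix lengths: the set of stored prefixes of x
        # is downward closed, so membership is monotone in the length.
        lo, hi = 0, W
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if (x >> (W - mid)) in self.levels[mid]:
                lo = mid
            else:
                hi = mid - 1
        return lo

    def predecessor(self, x):
        """Largest stored key <= x, or None."""
        if not self.levels[0]:
            return None  # empty structure
        l = self.longest_prefix(x)
        if l == W:
            return x  # x itself is stored
        p = x >> (W - l)
        if (x >> (W - l - 1)) & 1:
            # x branches right at depth l: all stored keys below p are smaller
            return self.levels[l][p][1]
        # x branches left: keys below p are larger, so walk up and take the
        # deepest left-sibling subtree (simplified, not O(log log u))
        for j in range(l, 0, -1):
            q = x >> (W - j)
            if q & 1 and (q ^ 1) in self.levels[j]:
                return self.levels[j][q ^ 1][1]
        return None
```

For example, after inserting 3, 9, and 12, `predecessor(10)` answers 9 using O(log W) membership lookups plus the walk-up.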

Andreas Cord-Landwehr, Martina Huellmann (married name: Eikel), Peter Kling, Alexander Setzer:

In Proceedings of the 5th International Symposium on Algorithmic Game Theory (SAGT). Springer, LNCS, vol. 7615, pp. 72-83

[Show Abstract]

**Basic Network Creation Games with Communication Interests**In Proceedings of the 5th International Symposium on Algorithmic Game Theory (SAGT). Springer, LNCS, vol. 7615, pp. 72-83

**(2012)**[Show Abstract]

Network creation games model the creation and usage costs of networks formed by a set of selfish peers.

Each peer has the ability to change the network in a limited way, e.g., by creating or deleting incident links.

In doing so, a peer can reduce its individual communication cost.

Typically, these costs are modeled by the maximum or average distance in the network.

We introduce a generalized version of the basic network creation game (BNCG).

In the BNCG (by Alon et al., SPAA 2010), each peer may replace one of its incident links by a link to an arbitrary peer.

This is done in a selfish way in order to minimize either the maximum or average distance to all other peers.

That is, each peer works towards a network structure that allows it to communicate efficiently with all other peers.

However, participants of large networks are seldom interested in all peers.

Rather, they want to communicate efficiently with a small subset only.

Our model incorporates these (communication) interests explicitly.

Given peers with interests and a communication network forming a tree, we prove several results on the structure and quality of equilibria in our model.

We focus on the MAX-version, i.e., each node tries to minimize the maximum distance to nodes it is interested in, and give an upper bound of O(√n) for the private costs in an equilibrium of n peers.

Moreover, we give an equilibrium for a circular interest graph where a node has private cost Ω(√n), showing that our bound is tight.

This example can be extended such that we get a tight bound of Θ(√n) for the price of anarchy.

For the case of general networks we show the price of anarchy to be Θ(n).

Additionally, we prove an interesting connection between a maximum independent set in the interest graph and the private costs of the peers.

[Show BibTeX]

@inproceedings{IBNCGSAGT12,

author = {Andreas Cord-Landwehr AND Martina Huellmann (married name: Eikel) AND Peter Kling AND Alexander Setzer},

title = {Basic Network Creation Games with Communication Interests},

booktitle = {Proceedings of the 5th International Symposium on Algorithmic Game Theory (SAGT)},

year = {2012},

pages = {72--83},

publisher = {Springer},

abstract = {Network creation games model the creation and usage costs of networks formed by a set of selfish peers. Each peer has the ability to change the network in a limited way, e.g., by creating or deleting incident links. In doing so, a peer can reduce its individual communication cost. Typically, these costs are modeled by the maximum or average distance in the network. We introduce a generalized version of the basic network creation game (BNCG). In the BNCG (by Alon et al., SPAA 2010), each peer may replace one of its incident links by a link to an arbitrary peer. This is done in a selfish way in order to minimize either the maximum or average distance to all other peers. That is, each peer works towards a network structure that allows it to communicate efficiently with all other peers. However, participants of large networks are seldom interested in all peers. Rather, they want to communicate efficiently with a small subset only. Our model incorporates these (communication) interests explicitly. Given peers with interests and a communication network forming a tree, we prove several results on the structure and quality of equilibria in our model. We focus on the MAX-version, i.e., each node tries to minimize the maximum distance to nodes it is interested in, and give an upper bound of O(√n) for the private costs in an equilibrium of n peers. Moreover, we give an equilibrium for a circular interest graph where a node has private cost Ω(√n), showing that our bound is tight. This example can be extended such that we get a tight bound of Θ(√n) for the price of anarchy. For the case of general networks we show the price of anarchy to be Θ(n). Additionally, we prove an interesting connection between a maximum independent set in the interest graph and the private costs of the peers.},

series = {LNCS}

}

[DOI]

Petr Kolman, Christian Scheideler:

In Proceedings of the 23rd ACM-SIAM Symposium on Discrete Algorithms (SODA). SIAM, pp. 800-810

[Show Abstract]

**Approximate Duality of Multicommodity Multiroute Flows and Cuts: Single Source Case**In Proceedings of the 23rd ACM-SIAM Symposium on Discrete Algorithms (SODA). SIAM, pp. 800-810

**(2012)**[Show Abstract]

Given an integer h, a graph G = (V, E) with arbitrary positive edge capacities and k pairs of vertices (s1, t1), (s2, t2), …, (sk, tk), called terminals, an h-route cut is a set F ⊆ E of edges such that after the removal of the edges in F no pair si–ti is connected by h edge-disjoint paths (i.e., the connectivity of every si–ti pair is at most h − 1 in (V, E \ F)). The h-route cut is a natural generalization of the classical cut problem for multicommodity flows (take h = 1). The main result of this paper is an O(h^7 2^{2h} log^2 k)-approximation algorithm for the minimum h-route cut problem in the case that s1 = s2 = ··· = sk, called the single source case. As a corollary of it we obtain an approximate duality theorem for multiroute multicommodity flows and cuts with a single source. This partially answers an open question posted in several previous papers dealing with cuts for multicommodity multiroute problems.

[Show BibTeX]

@inproceedings{SODA12KS,

author = {Petr Kolman AND Christian Scheideler},

title = {Approximate Duality of Multicommodity Multiroute Flows and Cuts: Single Source Case},

booktitle = {Proceedings of the 23rd ACM-SIAM Symposium on Discrete Algorithms (SODA)},

year = {2012},

pages = {800-810},

publisher = {SIAM},

abstract = {Given an integer h, a graph G = (V, E) with arbitrary positive edge capacities and k pairs of vertices (s1, t1), (s2, t2), …, (sk, tk), called terminals, an h-route cut is a set F ⊆ E of edges such that after the removal of the edges in F no pair si–ti is connected by h edge-disjoint paths (i.e., the connectivity of every si–ti pair is at most h − 1 in (V, E \ F)). The h-route cut is a natural generalization of the classical cut problem for multicommodity flows (take h = 1). The main result of this paper is an O(h^7 2^{2h} log^2 k)-approximation algorithm for the minimum h-route cut problem in the case that s1 = s2 = ··· = sk, called the single source case. As a corollary of it we obtain an approximate duality theorem for multiroute multicommodity flows and cuts with a single source. This partially answers an open question posted in several previous papers dealing with cuts for multicommodity multiroute problems.}

}

[DOI]
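The h-route cut definition can be checked mechanically: F is an h-route cut iff, after deleting F, every terminal pair has fewer than h edge-disjoint paths, and counting edge-disjoint paths is a unit-capacity max-flow computation. A small verifier along these lines, as our own illustration (function names are ours, and this checks cuts rather than approximating minimum ones):

```python
from collections import defaultdict, deque

def edge_disjoint_paths(edges, s, t):
    """Max number of edge-disjoint s-t paths in an undirected graph:
    unit-capacity max flow via BFS augmenting paths (Edmonds-Karp style)."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:
        cap[(u, v)] += 1
        cap[(v, u)] += 1
        adj[u].add(v)
        adj[v].add(u)
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        v = t  # push one unit of flow back along the augmenting path
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

def is_h_route_cut(edges, cut, pairs, h):
    """True iff removing `cut` leaves every terminal pair with
    connectivity at most h - 1 (fewer than h edge-disjoint paths)."""
    removed = set(cut) | {(v, u) for u, v in cut}
    rest = [e for e in edges if e not in removed]
    return all(edge_disjoint_paths(rest, s, t) <= h - 1 for s, t in pairs)
```

On the triangle {0,1,2}, removing the edge (0,2) is a 2-route cut for the pair (0,2) but not a classical (1-route) cut, since one path survives.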

Friedhelm Meyer auf der Heide, Peter Pietrzyk, Peter Kling:

In Proceedings of the 19th International Colloquium on Structural Information & Communication Complexity (SIROCCO). Springer, LNCS, vol. 7355, pp. 61-72

[Show Abstract]

**An Algorithm for Facility Leasing**In Proceedings of the 19th International Colloquium on Structural Information & Communication Complexity (SIROCCO). Springer, LNCS, vol. 7355, pp. 61-72

**(2012)**[Show Abstract]

We consider an online facility location problem where clients arrive over time and their demands have to be served by opening facilities and assigning the clients to opened facilities. When opening a facility we must choose one of K different lease types to use. A lease type k has a certain lease length l_k. Opening a facility i using lease type k causes a cost of f_i^k and ensures that i is open for the next l_k time steps. In addition to the costs for opening facilities, we have to take connection costs c_ij into account when assigning a client j to facility i. We develop and analyze the first online algorithm for this problem that has a time-independent competitive factor.

This variant of the online facility location problem was introduced by Nagarajan and Williamson [7] and is strongly related to both the online facility location problem by Meyerson [5] and the parking permit problem by Meyerson [6]. Nagarajan and Williamson gave a 3-approximation algorithm for the offline problem and an O(K log n)-competitive algorithm for the online variant. Here, n denotes the total number of clients arriving over time. We extend their result by removing the dependency on n (and thereby on the time). In general, our algorithm is O(l_max log(l_max))-competitive. Here l_max denotes the maximum lease length. Moreover, we prove that it is O(log^2(l_max))-competitive for many “natural” cases. Such cases include, for example, situations where the number of clients arriving in each time step does not vary too much, or is non-increasing, or is polynomially bounded in l_max.

[Show BibTeX]

@inproceedings{OFLSIROCCO12,

author = {Friedhelm Meyer auf der Heide AND Peter Pietrzyk AND Peter Kling},

title = {An Algorithm for Facility Leasing},

booktitle = {Proceedings of the 19th International Colloquium on Structural Information & Communication Complexity (SIROCCO)},

year = {2012},

pages = {61-72},

publisher = {Springer},

abstract = {We consider an online facility location problem where clients arrive over time and their demands have to be served by opening facilities and assigning the clients to opened facilities. When opening a facility we must choose one of K different lease types to use. A lease type k has a certain lease length l_k. Opening a facility i using lease type k causes a cost of f_i^k and ensures that i is open for the next l_k time steps. In addition to the costs for opening facilities, we have to take connection costs c_ij into account when assigning a client j to facility i. We develop and analyze the first online algorithm for this problem that has a time-independent competitive factor. This variant of the online facility location problem was introduced by Nagarajan and Williamson [7] and is strongly related to both the online facility location problem by Meyerson [5] and the parking permit problem by Meyerson [6]. Nagarajan and Williamson gave a 3-approximation algorithm for the offline problem and an O(K log n)-competitive algorithm for the online variant. Here, n denotes the total number of clients arriving over time. We extend their result by removing the dependency on n (and thereby on the time). In general, our algorithm is O(l_max log(l_max))-competitive. Here l_max denotes the maximum lease length. Moreover, we prove that it is O(log^2(l_max))-competitive for many “natural” cases. Such cases include, for example, situations where the number of clients arriving in each time step does not vary too much, or is non-increasing, or is polynomially bounded in l_max.},

series = {LNCS}

}

[DOI]

Sebastian Kniesburges, Andreas Koutsopoulos, Christian Scheideler:

**A Self-Stabilization Process for Small-World Networks**

In Proceedings of the 26th IEEE International Parallel and Distributed Processing Symposium (IPDPS). IEEE Computer Society, pp. 1261-1271 **(2012)**

Small-world networks have received significant attention because of their potential as models for the interaction networks of complex systems. Specifically, neither random networks nor regular lattices seem to be an adequate framework within which to study real-world complex systems such as chemical-reaction networks, neural networks, food webs, social networks, scientific-collaboration networks, and computer networks. Small-world networks provide some desired properties like an expected polylogarithmic distance between two processes in the network, which allows routing in polylogarithmic hops by simple greedy routing, and robustness against attacks or failures. By these properties, small-world networks are possible solutions for large overlay networks comparable to structured overlay networks like CAN, Pastry, Chord, which also provide polylogarithmic routing, but due to their uniform structure, structured overlay networks are more vulnerable to attacks or failures. In this paper we bring together a randomized process converging to a small-world network and a self-stabilization process so that a small-world network is formed out of any weakly connected initial state. To the best of our knowledge this is the first distributed self-stabilization process for building a small-world network.

@inproceedings{IPDPS12KKS,

author = {Sebastian Kniesburges AND Andreas Koutsopoulos AND Christian Scheideler},

title = {A Self-Stabilization Process for Small-World Networks},

booktitle = {Proceedings of the 26th IEEE International Parallel and Distributed Processing Symposium (IPDPS)},

year = {2012},

pages = {1261--1271},

publisher = {IEEE Computer Society},

abstract = {Small-world networks have received significant attention because of their potential as models for the interaction networks of complex systems. Specifically, neither random networks nor regular lattices seem to be an adequate framework within which to study real-world complex systems such as chemical-reaction networks, neural networks, food webs, social networks, scientific-collaboration networks, and computer networks. Small-world networks provide some desired properties like an expected polylogarithmic distance between two processes in the network, which allows routing in polylogarithmic hops by simple greedy routing, and robustness against attacks or failures. By these properties, small-world networks are possible solutions for large overlay networks comparable to structured overlay networks like CAN, Pastry, Chord, which also provide polylogarithmic routing, but due to their uniform structure, structured overlay networks are more vulnerable to attacks or failures. In this paper we bring together a randomized process converging to a small-world network and a self-stabilization process so that a small-world network is formed out of any weakly connected initial state. To the best of our knowledge this is the first distributed self-stabilization process for building a small-world network.}

}


**2011** (4)

Kalman Graffi:

**PeerfactSim.KOM: A P2P System Simulator - Experiences and Lessons Learned**

In Proceedings of the IEEE International Conference on Peer-to-Peer Computing (IEEE P2P). IEEE Computer Society, pp. 154-155 **(2011)**

Research on peer-to-peer (p2p) and distributed systems needs evaluation tools to predict and observe the behavior of protocols and mechanisms in large scale networks. PeerfactSim.KOM is a simulator for large scale distributed/p2p systems aiming at the evaluation of interdependencies in multi-layered p2p systems. The simulator is written in Java, is event-based and mainly used in p2p research projects. The main development of PeerfactSim.KOM started in 2005 and is driven since 2006 by the project “QuaP2P”, which aims at the systematic improvement and benchmarking of p2p systems. Further users of the simulator are working in the project “On-the-fly Computing” aiming at researching p2p-based service oriented architectures. Both projects state severe requirements on the evaluation of multi-layered and large-scale distributed systems. We describe the architecture of PeerfactSim.KOM supporting these requirements in Section II, present the workflow, selected experiences and lessons learned in Section III and conclude the overview in Section IV.

@inproceedings{PsP11G,

author = {Kalman Graffi},

title = {PeerfactSim.KOM: A P2P System Simulator - Experiences and Lessons Learned},

booktitle = {Proceedings of the IEEE International Conference on Peer-to-Peer Computing (IEEE P2P)},

year = {2011},

pages = {154-155},

publisher = {IEEE Computer Society},

abstract = {Research on peer-to-peer (p2p) and distributed systems needs evaluation tools to predict and observe the behavior of protocols and mechanisms in large scale networks. PeerfactSim.KOM is a simulator for large scale distributed/p2p systems aiming at the evaluation of interdependencies in multi-layered p2p systems. The simulator is written in Java, is event-based and mainly used in p2p research projects. The main development of PeerfactSim.KOM started in 2005 and is driven since 2006 by the project “QuaP2P”, which aims at the systematic improvement and benchmarking of p2p systems. Further users of the simulator are working in the project “On-the-fly Computing” aiming at researching p2p-based service oriented architectures. Both projects state severe requirements on the evaluation of multi-layered and large-scale distributed systems. We describe the architecture of PeerfactSim.KOM supporting these requirements in Section II, present the workflow, selected experiences and lessons learned in Section III and conclude the overview in Section IV.}

}


Sebastian Abshoff, Andreas Cord-Landwehr, Bastian Degener, Barbara Kempkes, Peter Pietrzyk:

**Local Approximation Algorithms for the Uncapacitated Metric Facility Location Problem in Power-Aware Sensor Networks**

In Proceedings of the 7th International Symposium on Algorithms for Sensor Systems, Wireless Ad Hoc Networks and Autonomous Mobile Entities (ALGOSENSORS). Springer, LNCS, vol. 7111, pp. 13-27 **(2011)**

We present two distributed, constant factor approximation algorithms for the metric facility location problem. Both algorithms have been designed with a strong emphasis on applicability in the area of wireless sensor networks: in order to execute them, each sensor node only requires limited local knowledge and simple computations. Also, the algorithms can cope with measurement errors and take into account that communication costs between sensor nodes do not necessarily increase linearly with the distance, but can be represented by a polynomial. Since it cannot always be expected that sensor nodes execute algorithms in a synchronized way, our algorithms are executed in an asynchronous model (but they are still able to break symmetry that might occur when two neighboring nodes act at exactly the same time). Furthermore, they can deal with dynamic scenarios: if a node moves, the solution is updated and the update affects only nodes in the local neighborhood. Finally, the algorithms are robust in the sense that incorrect behavior of some nodes during some round will, in the end, still result in a good approximation. The first algorithm runs in expected O(log_{1+ε} n) communication rounds and yields a μ^4(1+4μ^2(1+ε)^{1/p})^p approximation, while the second has a running time of expected O(log^2_{1+ε} n) communication rounds and an approximation factor of μ^4(1 + 2(1 + ε)^{1/p})^p. Here, ε > 0 is an arbitrarily small constant, p the exponent of the polynomial representing the communication costs, and μ the relative measurement error.

@inproceedings{ALGO12ACDKP,

author = {Sebastian Abshoff AND Andreas Cord-Landwehr AND Bastian Degener AND Barbara Kempkes AND Peter Pietrzyk},

title = {Local Approximation Algorithms for the Uncapacitated Metric Facility Location Problem in Power-Aware Sensor Networks},

booktitle = {Proceedings of the 7th International Symposium on Algorithms for Sensor Systems, Wireless Ad Hoc Networks and Autonomous Mobile Entities (ALGOSENSORS)},

year = {2011},

pages = {13-27},

publisher = {Springer},

abstract = {We present two distributed, constant factor approximation algorithms for the metric facility location problem. Both algorithms have been designed with a strong emphasis on applicability in the area of wireless sensor networks: in order to execute them, each sensor node only requires limited local knowledge and simple computations. Also, the algorithms can cope with measurement errors and take into account that communication costs between sensor nodes do not necessarily increase linearly with the distance, but can be represented by a polynomial. Since it cannot always be expected that sensor nodes execute algorithms in a synchronized way, our algorithms are executed in an asynchronous model (but they are still able to break symmetry that might occur when two neighboring nodes act at exactly the same time). Furthermore, they can deal with dynamic scenarios: if a node moves, the solution is updated and the update affects only nodes in the local neighborhood. Finally, the algorithms are robust in the sense that incorrect behavior of some nodes during some round will, in the end, still result in a good approximation. The first algorithm runs in expected O(log_{1+\epsilon} n) communication rounds and yields a \mu^4(1+4\mu^2(1+\epsilon)^{1/p})^p approximation, while the second has a running time of expected O(log^2_{1+\epsilon} n) communication rounds and an approximation factor of \mu^4(1 + 2(1 + \epsilon)^{1/p})^p. Here, \epsilon > 0 is an arbitrarily small constant, p the exponent of the polynomial representing the communication costs, and \mu the relative measurement error.},

series = {LNCS},

volume = {7111}

}


Mikhail Nesterenko, Rizal Mohd Nor, Christian Scheideler:

**Corona: A Stabilizing Deterministic Message-Passing Skip List**

In Proceedings of the 13th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS). Springer, LNCS, vol. 6976, pp. 356-370 **(2011)**

We present Corona, a deterministic self-stabilizing algorithm for skip list construction in structured overlay networks. Corona operates in the low-atomicity message-passing asynchronous system model. Corona requires constant process memory space for its operation and, therefore, scales well. We prove the general necessary conditions limiting the initial states from which a self-stabilizing structured overlay network in a message-passing system can be constructed. The conditions require that the initial state information has to form a weakly connected graph and that it should only contain identifiers that are present in the system. We formally describe Corona and rigorously prove that it stabilizes from an arbitrary initial state subject to the necessary conditions. We extend Corona to construct a skip graph.

@inproceedings{SSS12NNS,

author = {Mikhail Nesterenko AND Rizal Mohd Nor AND Christian Scheideler},

title = {Corona: A Stabilizing Deterministic Message-Passing Skip List},

booktitle = {Proceedings of the 13th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS)},

year = {2011},

pages = {356--370},

publisher = {Springer},

abstract = {We present Corona, a deterministic self-stabilizing algorithm for skip list construction in structured overlay networks. Corona operates in the low-atomicity message-passing asynchronous system model. Corona requires constant process memory space for its operation and, therefore, scales well. We prove the general necessary conditions limiting the initial states from which a self-stabilizing structured overlay network in a message-passing system can be constructed. The conditions require that the initial state information has to form a weakly connected graph and that it should only contain identifiers that are present in the system. We formally describe Corona and rigorously prove that it stabilizes from an arbitrary initial state subject to the necessary conditions. We extend Corona to construct a skip graph.},

series = {LNCS},

volume = {6976}

}


Friedhelm Meyer auf der Heide, Rajmohan Rajaraman (eds.):

**23rd Annual ACM Symposium on Parallelism in Algorithms and Architectures**

ACM **(2011)**

@proceedings{FMRR2011,

title = {23rd Annual ACM Symposium on Parallelism in Algorithms and Architectures},

year = {2011},

editor = {Friedhelm Meyer auf der Heide and Rajmohan Rajaraman},

publisher = {ACM},

month = {June}

}
