# Publications

**2017** (9)

Sevil Dräxler, Holger Karl:

**Specification, Composition, and Placement of Network Services with Flexible Structures**

In *International Journal of Network Management*, vol. 27, no. 2, pp. 1-16. John Wiley & Sons Ltd **(2017)**

Network function virtualization and software-defined networking allow services consisting of virtual network functions to be designed and implemented with great flexibility by facilitating automatic deployments, migrations, and reconfigurations for services and their components. For extended flexibility, we go beyond seeing services as a fixed chain of functions. We define the service structure in a flexible way that enables changing the order of functions in case the functionality of the service is not influenced by this, and propose a YANG data model for expressing this flexibility. Flexible structures allow the network orchestration system to choose the optimal composition of service components that, for example, gives the best results for placement of services in the network. When the number of flexible services and the number of components in each service increase, combinatorial explosion limits the practical use of this flexibility. In this paper, we describe a selection heuristic that gives a Pareto set of the possible compositions of a service, as well as possible combinations of different services, with respect to different optimization objectives. Moreover, we present a heuristic algorithm for placement of a combination of services, which aims at placing service components along shortest paths that have enough capacity for accommodating the services. By applying these solutions, we show that allowing flexibility in the service structure is feasible.

```bibtex
@article{MehrKarl2017,
  author  = {Sevil Dr{\"a}xler and Holger Karl},
  title   = {Specification, Composition, and Placement of Network Services with Flexible Structures},
  journal = {International Journal of Network Management},
  year    = {2017},
  volume  = {27},
  number  = {2},
  pages   = {1--16}
}
```

Peter Kling, Alexander Mäcker, Sören Riechers, Alexander Skopalik:

**Sharing is Caring: Multiprocessor Scheduling with a Sharable Resource**

In Proceedings of the 29th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA). ACM, pp. 123-132 **(2017)**

We consider a scheduling problem on $m$ identical processors sharing an arbitrarily divisible resource. In addition to assigning jobs to processors, the scheduler must distribute the resource among the processors (e.g., for three processors in shares of 20%, 15%, and 65%) and adjust this distribution over time. Each job $j$ comes with a size $p_j \in \mathbb{R}$ and a resource requirement $r_j > 0$. Jobs do not benefit from receiving a share larger than $r_j$ of the resource, but providing them with only a fraction of their resource requirement causes a linear decrease in the processing efficiency. We seek a (non-preemptive) job and resource assignment minimizing the makespan.

Our main result is an efficient approximation algorithm which achieves an approximation ratio of $2 + 1/(m-2)$. It can be improved to an (asymptotic) ratio of $1 + 1/(m-1)$ if all jobs have unit size. Our algorithms also imply new results for a well-known bin packing problem with splittable items and a restricted number of allowed item parts per bin.

Based upon the above solution, we also derive an approximation algorithm with similar guarantees for a setting in which we introduce so-called tasks, each containing several jobs, and where we are interested in the average completion time of tasks (a task is completed when all its jobs are completed).

```bibtex
@inproceedings{KMRS17,
  author    = {Peter Kling and Alexander M{\"a}cker and S{\"o}ren Riechers and Alexander Skopalik},
  title     = {Sharing is Caring: Multiprocessor Scheduling with a Sharable Resource},
  booktitle = {Proceedings of the 29th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA)},
  year      = {2017},
  pages     = {123--132},
  publisher = {ACM}
}
```
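The resource-sharing model in the abstract above can be made concrete with a small sketch. The exact efficiency function below (full speed at shares of at least $r_j$, linearly degraded speed below that) is one natural reading of the "linear decrease in the processing efficiency", chosen here purely for illustration; the names `efficiency` and `finish_time` are ours, not the paper's.

```python
def efficiency(share, requirement):
    """Processing efficiency of a job given its resource share.

    Shares above the requirement give no extra benefit; shares below it
    degrade efficiency linearly (an illustrative reading of the model).
    """
    if share >= requirement:
        return 1.0
    return share / requirement


def finish_time(size, share, requirement):
    """Time to process a job of the given size at a constant share."""
    return size / efficiency(share, requirement)
```

For example, a job with requirement 0.5 that is granted only a 0.25 share runs at half efficiency, so a size-4 job takes 8 time units.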

Ernst Althaus, Andre Brinkmann, Peter Kling, Friedhelm Meyer auf der Heide, Lars Nagel, Sören Riechers, Jiří Sgall, Tim Suess:

**Scheduling Shared Continuous Resources on Many-Cores**

In *Journal of Scheduling*. **(2017)** (to appear)

```bibtex
@article{ABKMNRSS2017,
  author  = {Ernst Althaus and Andre Brinkmann and Peter Kling and Friedhelm Meyer auf der Heide and Lars Nagel and S{\"o}ren Riechers and Ji{\v{r}}{\'i} Sgall and Tim Suess},
  title   = {Scheduling Shared Continuous Resources on Many-Cores},
  journal = {Journal of Scheduling},
  year    = {2017},
  note    = {to appear}
}
```

Matthias Keller, Holger Karl:

**Response-Time-Optimised Service Deployment: MILP Formulations of Piece-wise Linear Functions Approximating Non-linear Bivariate Mixed-integer Functions**

In *Transactions on Network and Service Management*, vol. 14, no. 1, pp. 121-135. IEEE **(2017)**

A current trend in networking and cloud computing is to provide compute resources at widely distributed sites; this is exemplified by developments such as Network Function Virtualisation. This paves the way for wide-area service deployments with improved service quality: e.g. user-perceived response times can be reduced by offering services at nearby sites. But always assigning users to the nearest site can be a bad decision if this site is already highly utilised. This paper formalises two related decisions of allocating compute resources at different sites and assigning users to them with the goal of minimising the response times while the total number of resources to be allocated is limited – a non-linear capacitated Facility Location Problem with integrated queuing systems. To efficiently handle its non-linearity, we introduce five linear problem linearisations and adapt the currently best heuristic for a similar scenario to our scenario. All six approaches are compared in experiments for solution quality and solving time. Surprisingly, our best optimisation formulation outperforms the heuristic in both time and quality. Additionally, we evaluate the influence of distributions of available compute resources in the network on the response time: The time was halved for some configurations. The presented formulation techniques for our problem linearisations are applicable to a broader optimisation domain.

```bibtex
@article{KK2016_IEEE,
  author  = {Matthias Keller and Holger Karl},
  title   = {Response-Time-Optimised Service Deployment: MILP Formulations of Piece-wise Linear Functions Approximating Non-linear Bivariate Mixed-integer Functions},
  journal = {Transactions on Network and Service Management},
  year    = {2017},
  volume  = {14},
  number  = {1},
  pages   = {121--135}
}
```

Matthias Feldotto, Maximilian Drees, Sören Riechers, Alexander Skopalik:

**Pure Nash Equilibria in Restricted Budget Games**

In Proceedings of the 23rd International Computing and Combinatorics Conference (COCOON). Springer, LNCS, vol. 10392, pp. 175-187 **(2017)**

In budget games, players compete over resources with finite budgets. For every resource, a player has a specific demand and as a strategy, he chooses a subset of resources. If the total demand on a resource does not exceed its budget, the utility of each player who chose that resource equals his demand. Otherwise, the budget is shared proportionally. In the general case, pure Nash equilibria (NE) do not exist for such games. In this paper, we consider the natural classes of singleton and matroid budget games with additional constraints and show that for each, pure NE can be guaranteed. In addition, we introduce a lexicographical potential function to prove that every matroid budget game has an approximate pure NE which depends on the largest ratio between the different demands of each individual player.

```bibtex
@inproceedings{DFRS17,
  author    = {Matthias Feldotto and Maximilian Drees and S{\"o}ren Riechers and Alexander Skopalik},
  title     = {Pure Nash Equilibria in Restricted Budget Games},
  booktitle = {Proceedings of the 23rd International Computing and Combinatorics Conference (COCOON)},
  year      = {2017},
  series    = {LNCS},
  volume    = {10392},
  pages     = {175--187},
  publisher = {Springer}
}
```
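The sharing rule described in the abstract above is simple enough to state in a few lines of code: if the total demand on a resource fits its budget, every player receives his full demand; otherwise the budget is split proportionally to the demands. The function name and signature below are illustrative, not taken from the paper.

```python
def utilities(budget, demands):
    """Per-player utility on a single resource in a budget game."""
    total = sum(demands)
    if total <= budget:
        # budget suffices: each player's utility equals his demand
        return list(demands)
    # otherwise the budget is shared proportionally to demands
    return [budget * d / total for d in demands]
```

With a budget of 10, demands `[3, 4]` are fully served, while demands `[10, 10]` each receive a proportional share of 5.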

Alexander Mäcker, Manuel Malatyali, Friedhelm Meyer auf der Heide, Sören Riechers:

**Non-Clairvoyant Scheduling to Minimize Max Flow Time on a Machine with Setup Times**

In Proceedings of the 15th Workshop on Approximation and Online Algorithms (WAOA). Springer **(2017)** (to appear)

Consider a problem in which $n$ jobs that are classified into $k$ types arrive over time at their release times and are to be scheduled on a single machine so as to minimize the maximum flow time. The machine requires a setup taking $s$ time units whenever it switches from processing jobs of one type to jobs of a different type. We consider the problem as an online problem where each job is only known to the scheduler as soon as it arrives and where the processing time of a job only becomes known upon its completion (non-clairvoyance).

We are interested in the potential of simple "greedy-like" algorithms. We analyze a modification of the FIFO strategy and show its competitiveness to be $\Theta(\sqrt{n})$, which is optimal for the considered class of algorithms. For $k=2$ types it achieves a constant competitiveness.

Our main insight is obtained by an analysis of the smoothed competitiveness. If processing times $p_j$ are independently perturbed to $\hat{p}_j = (1+X_j)p_j$, we obtain a competitiveness of $O(\sigma^{-2} \log^2 n)$ when $X_j$ is drawn from a uniform or a (truncated) normal distribution with standard deviation $\sigma$. The result proves that bad instances are fragile and "practically" one might expect a much better performance than given by the $\Omega(\sqrt{n})$-bound.

```bibtex
@inproceedings{mmmrWaoa2017,
  author    = {Alexander M{\"a}cker and Manuel Malatyali and Friedhelm Meyer auf der Heide and S{\"o}ren Riechers},
  title     = {Non-Clairvoyant Scheduling to Minimize Max Flow Time on a Machine with Setup Times},
  booktitle = {Proceedings of the 15th Workshop on Approximation and Online Algorithms (WAOA)},
  year      = {2017},
  publisher = {Springer},
  note      = {to appear}
}
```
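The smoothing model in the abstract above perturbs each processing time $p_j$ to $\hat{p}_j = (1+X_j)p_j$. The sketch below draws $X_j$ from a uniform distribution scaled to a given standard deviation $\sigma$; the specific interval is our own choice for illustration (the paper also considers truncated normal distributions), and the function name is hypothetical.

```python
import random

def perturb(p, sigma, rng=random):
    """Smoothed processing time (1 + X) * p with X uniform, std. dev. sigma.

    A uniform distribution on [-a, a] has standard deviation a / sqrt(3),
    so we set a = sigma * sqrt(3).
    """
    a = sigma * 3 ** 0.5
    x = rng.uniform(-a, a)
    return (1 + x) * p

# perturb a small instance of processing times
smoothed = [perturb(p, sigma=0.1) for p in (5.0, 8.0, 13.0)]
```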

Sevil Dräxler, Holger Karl, Zoltan Adam Mann:

**Joint Optimization of Scaling and Placement of Virtual Network Services**

In Proceedings of the 17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2017). IEEE Press **(2017)**

Management of complex network services requires flexible and efficient service provisioning as well as optimized handling of continuous changes in the workload of the service. To adapt to changes in the demand, service components need to be replicated (scaling) and allocated to physical resources (placement) dynamically. In this paper, we propose a fully automated approach to the joint optimization problem of scaling and placement, enabling quick reaction to changes. We formalize the problem, analyze its complexity, and develop two algorithms to solve it. Extensive empirical results show the applicability and effectiveness of the proposed approach.

```bibtex
@inproceedings{MehrKarlMann2017,
  author    = {Sevil Dr{\"a}xler and Holger Karl and Zoltan Adam Mann},
  title     = {Joint Optimization of Scaling and Placement of Virtual Network Services},
  booktitle = {Proceedings of the 17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2017)},
  year      = {2017},
  publisher = {IEEE Press}
}
```

Alexander Mäcker, Manuel Malatyali, Friedhelm Meyer auf der Heide, Sören Riechers:

**Cost-efficient Scheduling on Machines from the Cloud**

In *Journal of Combinatorial Optimization*. **(2017)**

```bibtex
@article{jocoCloud,
  author  = {Alexander M{\"a}cker and Manuel Malatyali and Friedhelm Meyer auf der Heide and S{\"o}ren Riechers},
  title   = {Cost-efficient Scheduling on Machines from the Cloud},
  journal = {Journal of Combinatorial Optimization},
  year    = {2017}
}
```

Antonios Antoniadis, Peter Kling, Sebastian Ott, Sören Riechers:

In

[Show Abstract]

**Continuous Speed Scaling with Variability: A Simple and Direct Approach**In

*Theoretical Computer Science*, vol. 678, pp. 1-13.**(2017)**[Show Abstract]

We consider an extension of the dynamic speed scaling scheduling model introduced by Yao et al.: A set of jobs, each with a release time, deadline, and workload, has to be scheduled on a single, speed-scalable processor. Both the maximum allowed speed of the processor and the energy costs may vary continuously over time. The objective is to find a feasible schedule that minimizes the total energy costs. Theoretical algorithm design for speed scaling problems often tends to discretize problems, as our tools in the discrete realm are often better developed or understood. Using the above speed scaling variant with variable, continuous maximal processor speeds and energy prices as an example, we demonstrate that a more direct approach via tools from variational calculus can not only lead to a very concise and elegant formulation and analysis, but also avoids the “explosion of variables/constraints” that often comes with discretizing. Using well-known tools from calculus of variations, we derive combinatorial optimality characteristics for our continuous problem and provide a quite concise and simple correctness proof.

[Show BibTeX] @article{AKOR2017,

author = {Antonios Antoniadis AND Peter Kling AND Sebastian Ott AND S{\"o}ren Riechers},

title = {Continuous Speed Scaling with Variability: A Simple and Direct Approach},

journal = {Theoretical Computer Science},

year = {2017},

volume = {678},

pages = {1-13},

month = {May},

abstract = {We consider an extension of the dynamic speed scaling scheduling model introduced by Yao et al.: A set of jobs, each with a release time, deadline, and workload, has to be scheduled on a single, speed-scalable processor. Both the maximum allowed speed of the processor and the energy costs may vary continuously over time. The objective is to find a feasible schedule that minimizes the total energy costs. Theoretical algorithm design for speed scaling problems often tends to discretize problems, as our tools in the discrete realm are often better developed or understood. Using the above speed scaling variant with variable, continuous maximal processor speeds and energy prices as an example, we demonstrate that a more direct approach via tools from variational calculus can not only lead to a very concise and elegant formulation and analysis, but also avoids the “explosion of variables/constraints” that often comes with discretizing. Using well-known tools from calculus of variations, we derive combinatorial optimality characteristics for our continuous problem and provide a quite concise and simple correctness proof.}

}

[DOI]

**2016** (5)

Sebastian Abshoff, Peter Kling, Christine Markarian, Friedhelm Meyer auf der Heide, Peter Pietrzyk:

**Towards the price of leasing online**In

*Journal of Combinatorial Optimization*, vol. 32, no. 4, pp. 1197-1216.**(2016)**[Show Abstract]

We consider online optimization problems in which certain goods have to be acquired in order to provide a service or infrastructure. Classically, decisions for such problems are considered as final: one buys the goods. However, in many real world applications, there is a shift away from the idea of buying goods. Instead, leasing is often a more flexible and lucrative business model. Research has realized this shift and recently initiated the theoretical study of leasing models (Anthony and Gupta in Proceedings of the integer programming and combinatorial optimization: 12th International IPCO Conference, Ithaca, NY, USA, June 25–27, 2007; Meyerson in Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2005), 23–25 Oct 2005, Pittsburgh, PA, USA, 2005; Nagarajan and Williamson in Discret Optim 10(4):361–370, 2013). We extend this line of work and suggest a more systematic study of leasing aspects for a class of online optimization problems. We provide two major technical results. We introduce the leasing variant of online set multicover and give an O(log(mK) log n)-competitive algorithm (with n, m, and K being the number of elements, sets, and leases, respectively). Our results also imply improvements for the non-leasing variant of online set cover. Moreover, we extend results for the leasing variant of online facility location. Nagarajan and Williamson (Discret Optim 10(4):361–370, 2013) gave an O(K log n)-competitive algorithm for this problem (with n and K being the number of clients and leases, respectively). We remove the dependency on n (and, thereby, on time). In general, this leads to a bound of O(l_max log l_max) (with the maximal lease length l_max). For many natural problem instances, the bound improves to O(K^2).

[Show BibTeX] @article{AKMMP17,

author = {Sebastian Abshoff AND Peter Kling AND Christine Markarian AND Friedhelm Meyer auf der Heide AND Peter Pietrzyk},

title = {Towards the price of leasing online},

journal = {Journal of Combinatorial Optimization},

year = {2016},

volume = {32},

number = {4},

pages = {1197--1216},

abstract = {We consider online optimization problems in which certain goods have to be acquired in order to provide a service or infrastructure. Classically, decisions for such problems are considered as final: one buys the goods. However, in many real world applications, there is a shift away from the idea of buying goods. Instead, leasing is often a more flexible and lucrative business model. Research has realized this shift and recently initiated the theoretical study of leasing models (Anthony and Gupta in Proceedings of the integer programming and combinatorial optimization: 12th International IPCO Conference, Ithaca, NY, USA, June 25–27, 2007; Meyerson in Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2005), 23–25 Oct 2005, Pittsburgh, PA, USA, 2005; Nagarajan and Williamson in Discret Optim 10(4):361–370, 2013). We extend this line of work and suggest a more systematic study of leasing aspects for a class of online optimization problems. We provide two major technical results. We introduce the leasing variant of online set multicover and give an O(log(mK) log n)-competitive algorithm (with n, m, and K being the number of elements, sets, and leases, respectively). Our results also imply improvements for the non-leasing variant of online set cover. Moreover, we extend results for the leasing variant of online facility location. Nagarajan and Williamson (Discret Optim 10(4):361–370, 2013) gave an O(K log n)-competitive algorithm for this problem (with n and K being the number of clients and leases, respectively). We remove the dependency on n (and, thereby, on time). In general, this leads to a bound of O(l_max log l_max) (with the maximal lease length l_max). For many natural problem instances, the bound improves to O(K^2).}

}

[DOI]

Jürgen König, Alexander Mäcker, Friedhelm Meyer auf der Heide, Sören Riechers:

**Scheduling with Interjob Communication on Parallel Processors**In Proceedings of the 10th Annual International Conference on Combinatorial Optimization and Applications (COCOA). Springer, LNCS, vol. 10043, pp. 563-577

**(2016)**[Show Abstract]

Consider a scheduling problem in which a set of jobs with interjob communication, canonically represented by a weighted tree, needs to be scheduled on m parallel processors interconnected by a shared communication channel. In each time step, we may allow any processed job to use a certain capacity of the channel in order to satisfy (parts of) its communication demands to adjacent jobs processed in parallel. The goal is to find a schedule that minimizes the makespan and in which communication demands of all jobs are satisfied.

We show that this problem is NP-hard in the strong sense even if the number of processors and the maximum degree of the underlying tree is constant.

Consequently, we design and analyze simple approximation algorithms with asymptotic approximation ratio 2-2/m in case of paths and a ratio of 5/2 in case of arbitrary trees.

[Show BibTeX] @inproceedings{KMMR2016,

author = {J{\"u}rgen K{\"o}nig AND Alexander M{\"a}cker AND Friedhelm Meyer auf der Heide AND S{\"o}ren Riechers},

title = {Scheduling with Interjob Communication on Parallel Processors},

booktitle = {Proceedings of the 10th Annual International Conference on Combinatorial Optimization and Applications (COCOA)},

year = {2016},

pages = {563--577},

publisher = {Springer},

abstract = {Consider a scheduling problem in which a set of jobs with interjob communication, canonically represented by a weighted tree, needs to be scheduled on m parallel processors interconnected by a shared communication channel. In each time step, we may allow any processed job to use a certain capacity of the channel in order to satisfy (parts of) its communication demands to adjacent jobs processed in parallel. The goal is to find a schedule that minimizes the makespan and in which communication demands of all jobs are satisfied. We show that this problem is NP-hard in the strong sense even if the number of processors and the maximum degree of the underlying tree is constant. Consequently, we design and analyze simple approximation algorithms with asymptotic approximation ratio 2-2/m in case of paths and a ratio of 5/2 in case of arbitrary trees.},

series = {LNCS}

}

[DOI]

Falko Dressler, Friedhelm Meyer auf der Heide (eds.):

**Proceedings of the 17th ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc)**ACM

**(2016)**[Show BibTeX]

@proceedings{DM-MobiHoc2016,

title = {Proceedings of the 17th ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc)},

year = {2016},

editor = {Falko Dressler AND Friedhelm Meyer auf der Heide},

publisher = {ACM},

month = {July, 4-8}

}

[DOI]

Sevil Mehraghdam (married name: Dräxler), Holger Karl:

**Placement of Services with Flexible Structures Specified by a YANG Data Model**In Proceedings of the 2nd International IEEE Conference on Network Softwarization (NetSoft). IEEE, pp. 184-192

**(2016)**(won the NetSoft Best Student Paper Award)[Show Abstract]

Network function virtualization and software-defined networking allow services consisting of virtual network functions to be designed and implemented with great flexibility by facilitating automatic deployments, migrations, and reconfigurations for services and their components. For extended flexibility, we go beyond seeing services as a fixed chain of functions. We present a YANG model for describing the service structure in deployment requests in a flexible way that enables changing the order of functions in case the order of traversing them does not affect the functionality of the service. Upon receiving such requests, the network orchestration system can choose the optimal composition of service components that gives the best results for placement of services in the network. This introduces new complexities to the placement problem by greatly increasing the number of possible ways a service can be composed. In this paper, we describe a heuristic solution that selects a Pareto set of the possible compositions of a service as well as possible combinations of different services, with respect to different resource requirements of the services. Our evaluations show that the selected combinations consist of representative samples of possible structures and requirements and therefore, can result in optimal or close-to-optimal placement results.

[Show BibTeX] @inproceedings{MehrKarl2016,

author = {Sevil Mehraghdam AND Holger Karl},

title = {Placement of Services with Flexible Structures Specified by a YANG Data Model},

booktitle = {Proceedings of the 2nd International IEEE Conference on Network Softwarization (NetSoft)},

year = {2016},

pages = {184--192},

publisher = {IEEE},

note = {won the NetSoft Best Student Paper Award},

abstract = {Network function virtualization and software-defined networking allow services consisting of virtual network functions to be designed and implemented with great flexibility by facilitating automatic deployments, migrations, and reconfigurations for services and their components. For extended flexibility, we go beyond seeing services as a fixed chain of functions. We present a YANG model for describing the service structure in deployment requests in a flexible way that enables changing the order of functions in case the order of traversing them does not affect the functionality of the service. Upon receiving such requests, the network orchestration system can choose the optimal composition of service components that gives the best results for placement of services in the network. This introduces new complexities to the placement problem by greatly increasing the number of possible ways a service can be composed. In this paper, we describe a heuristic solution that selects a Pareto set of the possible compositions of a service as well as possible combinations of different services, with respect to different resource requirements of the services. Our evaluations show that the selected combinations consist of representative samples of possible structures and requirements and therefore, can result in optimal or close-to-optimal placement results.}

}

[DOI]

Alexander Mäcker, Manuel Malatyali, Friedhelm Meyer auf der Heide, Sören Riechers:

**Cost-efficient Scheduling on Machines from the Cloud**In Proceedings of the 10th Annual International Conference on Combinatorial Optimization and Applications (COCOA). Springer, LNCS, vol. 10043, pp. 578-592

**(2016)**[Show Abstract]

We consider a scheduling problem where machines need to be rented from the cloud in order to process jobs. There are two types of machines available which can be rented for machine-type dependent prices and for arbitrary durations. However, a machine-type dependent setup time is required before a machine is available for processing. Jobs arrive online over time, have machine-type dependent sizes and have individual deadlines. The objective is to rent machines and schedule jobs so as to meet all deadlines while minimizing the rental cost. Since we observe the slack of jobs to have a fundamental influence on the competitiveness, we study the model when instances are parameterized by their (minimum) slack. An instance is said to have a slack of $\beta$ if, for all jobs, the difference between the job's release time and the latest point in time at which it needs to be started is at least $\beta$. While for $\beta < s$ no finite competitiveness is possible, our main result is an $O(c/\varepsilon + 1/(\varepsilon^3))$-competitive online algorithm for $\beta = (1+\varepsilon)s$ with $1/s \leq \varepsilon \leq 1$, where $s$ and $c$ denote the largest setup time and the cost ratio of the machine-types, respectively. It is complemented by a lower bound of $\Omega(c/\varepsilon)$.

[Show BibTeX] @inproceedings{MMMR2016,

author = {Alexander M{\"a}cker AND Manuel Malatyali AND Friedhelm Meyer auf der Heide AND S{\"o}ren Riechers},

title = {Cost-efficient Scheduling on Machines from the Cloud},

booktitle = {Proceedings of the 10th Annual International Conference on Combinatorial Optimization and Applications (COCOA)},

year = {2016},

pages = {578--592},

publisher = {Springer},

abstract = {We consider a scheduling problem where machines need to be rented from the cloud in order to process jobs. There are two types of machines available which can be rented for machine-type dependent prices and for arbitrary durations. However, a machine-type dependent setup time is required before a machine is available for processing. Jobs arrive online over time, have machine-type dependent sizes and have individual deadlines. The objective is to rent machines and schedule jobs so as to meet all deadlines while minimizing the rental cost. Since we observe the slack of jobs to have a fundamental influence on the competitiveness, we study the model when instances are parameterized by their (minimum) slack. An instance is said to have a slack of $\beta$ if, for all jobs, the difference between the job's release time and the latest point in time at which it needs to be started is at least $\beta$. While for $\beta < s$ no finite competitiveness is possible, our main result is an $O(c/\varepsilon + 1/(\varepsilon^3))$-competitive online algorithm for $\beta = (1+\varepsilon)s$ with $1/s \leq \varepsilon \leq 1$, where $s$ and $c$ denote the largest setup time and the cost ratio of the machine-types, respectively. It is complemented by a lower bound of $\Omega(c/\varepsilon)$.},

series = {LNCS}

}

[DOI]

**2015** (4)

Shouwei Li, Alexander Mäcker, Christine Markarian, Friedhelm Meyer auf der Heide, Sören Riechers:

**Towards Flexible Demands in Online Leasing Problems**In Proceedings of the 21st Annual International Computing and Combinatorics Conference (COCOON). Springer, Lecture Notes in Computer Science, vol. 9198, pp. 277-288

**(2015)**[Show Abstract]

We consider online leasing problems in which demands arrive over time and need to be served by leasing resources. We introduce a new model for these problems such that a resource can be leased for K different durations, each incurring a different cost (longer leases cost less per time unit). Each demand i can be served anytime between its arrival a_i and its deadline a_i + d_i by a leased resource. The objective is to meet all deadlines while minimizing the total leasing costs. This model is a natural generalization of Meyerson’s Parking Permit Problem (FOCS 2005) in which d_i = 0 for all i. We propose an online algorithm that is Θ(K + d_max/l_min)-competitive, where d_max and l_min denote the largest d_i and the shortest available lease length, respectively. We also extend the Set Cover Leasing problem by deadlines and give a competitive online algorithm which also improves on existing solutions for the original Set Cover Leasing problem.

[Show BibTeX] @inproceedings{geilerscheiss,

author = {Shouwei Li AND Alexander M{\"a}cker AND Christine Markarian AND Friedhelm Meyer auf der Heide AND S{\"o}ren Riechers},

title = {Towards Flexible Demands in Online Leasing Problems},

booktitle = {Proceedings of the 21st Annual International Computing and Combinatorics Conference (COCOON)},

year = {2015},

pages = {277--288},

publisher = {Springer},

abstract = {We consider online leasing problems in which demands arrive over time and need to be served by leasing resources. We introduce a new model for these problems such that a resource can be leased for K different durations, each incurring a different cost (longer leases cost less per time unit). Each demand i can be served anytime between its arrival a_i and its deadline a_i + d_i by a leased resource. The objective is to meet all deadlines while minimizing the total leasing costs. This model is a natural generalization of Meyerson’s Parking Permit Problem (FOCS 2005) in which d_i = 0 for all i. We propose an online algorithm that is Θ(K + d_max/l_min)-competitive, where d_max and l_min denote the largest d_i and the shortest available lease length, respectively. We also extend the Set Cover Leasing problem by deadlines and give a competitive online algorithm which also improves on existing solutions for the original Set Cover Leasing problem.},

series = {Lecture Notes in Computer Science}

}

[DOI]

Maximilian Drees, Matthias Feldotto, Sören Riechers, Alexander Skopalik:

**On Existence and Properties of Approximate Pure Nash Equilibria in Bandwidth Allocation Games**In Proceedings of the 8th International Symposium on Algorithmic Game Theory (SAGT). Springer Berlin Heidelberg, Lecture Notes in Computer Science, vol. 9347, pp. 178-189

**(2015)**[Show Abstract]

In \emph{bandwidth allocation games} (BAGs), the strategy of a player consists of various demands on different resources. The player's utility is at most the sum of these demands, provided they are fully satisfied. Every resource has a limited capacity and if it is exceeded by the total demand, it has to be split between the players. Since these games generally do not have pure Nash equilibria, we consider approximate pure Nash equilibria, in which no player can improve her utility by more than some fixed factor $\alpha$ through unilateral strategy changes. There is a threshold $\alpha_\delta$ (where $\delta$ is a parameter that limits the demand of each player on a specific resource) such that $\alpha$-approximate pure Nash equilibria always exist for $\alpha \geq \alpha_\delta$, but not for $\alpha < \alpha_\delta$. We give both upper and lower bounds on this threshold $\alpha_\delta$ and show that the corresponding decision problem is ${\sf NP}$-hard. We also show that the $\alpha$-approximate price of anarchy for BAGs is $\alpha+1$. For a restricted version of the game, where demands of players only differ slightly from each other (e.g. symmetric games), we show that approximate Nash equilibria can be reached (and thus also be computed) in polynomial time using the best-response dynamic. Finally, we show that a broader class of utility-maximization games (which includes BAGs) converges quickly towards states whose social welfare is close to the optimum.

[Show BibTeX] @inproceedings{DFRS15,

author = {Maximilian Drees AND Matthias Feldotto AND S{\"o}ren Riechers AND Alexander Skopalik},

title = {On Existence and Properties of Approximate Pure Nash Equilibria in Bandwidth Allocation Games},

booktitle = {Proceedings of the 8th International Symposium on Algorithmic Game Theory (SAGT)},

year = {2015},

pages = {178--189},

publisher = {Springer Berlin Heidelberg},

abstract = {In \emph{bandwidth allocation games} (BAGs), the strategy of a player consists of various demands on different resources. The player's utility is at most the sum of these demands, provided they are fully satisfied. Every resource has a limited capacity and if it is exceeded by the total demand, it has to be split between the players. Since these games generally do not have pure Nash equilibria, we consider approximate pure Nash equilibria, in which no player can improve her utility by more than some fixed factor $\alpha$ through unilateral strategy changes. There is a threshold $\alpha_\delta$ (where $\delta$ is a parameter that limits the demand of each player on a specific resource) such that $\alpha$-approximate pure Nash equilibria always exist for $\alpha \geq \alpha_\delta$, but not for $\alpha < \alpha_\delta$. We give both upper and lower bounds on this threshold $\alpha_\delta$ and show that the corresponding decision problem is ${\sf NP}$-hard. We also show that the $\alpha$-approximate price of anarchy for BAGs is $\alpha+1$. For a restricted version of the game, where demands of players only differ slightly from each other (e.g. symmetric games), we show that approximate Nash equilibria can be reached (and thus also be computed) in polynomial time using the best-response dynamic. Finally, we show that a broader class of utility-maximization games (which includes BAGs) converges quickly towards states whose social welfare is close to the optimum.},

series = {Lecture Notes in Computer Science}

}


Alexander Mäcker, Manuel Malatyali, Friedhelm Meyer auf der Heide, Sören Riechers:

**Non-preemptive Scheduling on Machines with Setup Times**

In Frank Dehne and Jörg-Rüdiger Sack and Ulrike Stege (eds.): Algorithms and Data Structures: 14th International Symposium, WADS 2015, Victoria, BC, Canada, August 5-7, 2015. Proceedings. Springer, Lecture Notes in Computer Science, vol. 9214, pp. 542-553 **(2015)**

Consider the problem in which $n$ jobs that are classified into $k$ types are to be scheduled on $m$ identical machines without preemption. A machine requires a proper setup taking $s$ time units before processing jobs of a given type. The objective is to minimize the makespan of the resulting schedule. We design and analyze an approximation algorithm that runs in time polynomial in $n$, $m$, and $k$ and computes a solution with an approximation factor that can be made arbitrarily close to $3/2$.
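To make the objective concrete (a toy sketch, not the paper's approximation algorithm; the function and its inputs are hypothetical): for a fixed assignment of type-labelled jobs to machines, with same-type jobs batched so that each machine pays one setup of length s per distinct type it processes, the makespan is the largest machine finish time.

```python
def makespan(assignment, setup):
    """Makespan of a non-preemptive schedule with setup times.

    assignment: one list per machine of (job_type, processing_time) pairs;
    a machine pays one setup of length `setup` per distinct type it runs
    (jobs of the same type are assumed to be batched together).
    """
    finish_times = []
    for machine in assignment:
        work = sum(p for _, p in machine)        # total processing time
        types = {t for t, _ in machine}          # distinct types -> setups
        finish_times.append(work + setup * len(types))
    return max(finish_times) if finish_times else 0.0
```

For example, with two machines, jobs of type "a" (sizes 2 and 3) on the first and one type-"b" job of size 4 on the second, and setup time 1, the makespan is 6.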

@inproceedings{geilershit,

author = {Alexander M{\"a}cker AND Manuel Malatyali AND Friedhelm Meyer auf der Heide AND S{\"o}ren Riechers},

title = {Non-preemptive Scheduling on Machines with Setup Times},

booktitle = {Algorithms and Data Structures: 14th International Symposium, {WADS} 2015, Victoria, BC, Canada, August 5-7, 2015. Proceedings},

year = {2015},

editor = {Frank Dehne and J{\"o}rg{-}R{\"u}diger Sack and Ulrike Stege},

pages = {542--553},

publisher = {Springer},

abstract = {Consider the problem in which n jobs that are classified into k types are to be scheduled on m identical machines without preemption. A machine requires a proper setup taking s time units before processing jobs of a given type. The objective is to minimize the makespan of the resulting schedule. We design and analyze an approximation algorithm that runs in time polynomial in n,m and k and computes a solution with an approximation factor that can be made arbitrarily close to 3/2.},

series = {Lecture Notes in Computer Science}

}


Philip Wette, Holger Karl:

**HybridTE: Traffic Engineering for Very Low-Cost Software-Defined Data-Center Networks**

In Proceedings of the 4th European Workshop on Software Defined Networks (EWSDN 2015). IEEE, pp. 1-7 **(2015)** (won best paper award)

The size of modern data centers is constantly increasing. As it is not economic to interconnect all machines in the data center using a full-bisection-bandwidth network, techniques have to be developed to increase the efficiency of data-center networks. The Software-Defined Network paradigm opened the door for centralized traffic engineering (TE) in such environments. A number of TE proposals for SDN-controlled data centers already exist and perform well. However, these techniques either require a large number of flow table entries or a flow installation rate that overwhelms available switching hardware, or they require custom or very expensive end-of-line equipment to be usable in practice. We present HybridTE, a TE technique that uses (uncertain) information about large flows. Using this extra information, our technique has very low hardware requirements while maintaining better performance than existing TE techniques. This enables us to build very low-cost, high-performance data-center networks.

@inproceedings{WetteKarl15,

author = {Philip Wette AND Holger Karl},

title = {HybridTE: Traffic Engineering for Very Low-Cost Software-Defined Data-Center Networks},

booktitle = {Proceedings of the 4th European Workshop on Software Defined Networks (EWSDN 2015)},

year = {2015},

pages = {1--7},

publisher = {IEEE},

note = {won best paper award},

abstract = {The size of modern data centers is constantly increasing. As it is not economic to interconnect all machines in the data center using a full-bisection-bandwidth network, techniques have to be developed to increase the efficiency of data-center networks. The Software-Defined Network paradigm opened the door for centralized traffic engineering (TE) in such environments. Up to now, there were already a number of TE proposals for SDN-controlled data centers that all work very well. However, these techniques either use a high amount of flow table entries or a high flow installation rate that overwhelms available switching hardware, or they require custom or very expensive end-of-line equipment to be usable in practice. We present HybridTE, a TE technique that uses (uncertain) information about large flows. Using this extra information, our technique has very low hardware requirements while maintaining better performance than existing TE techniques. This enables us to build very low-cost, high performance data-center networks.}

}


**2014** (3)

Andre Brinkmann, Peter Kling, Friedhelm Meyer auf der Heide, Lars Nagel, Sören Riechers, Tim Suess:

**Scheduling Shared Continuous Resources on Many-Cores**

In Proceedings of the 26th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA). ACM, pp. 128-137 **(2014)**

We consider the problem of scheduling a number of jobs on $m$ identical processors sharing a continuously divisible resource. Each job $j$ comes with a resource requirement $r_j \in [0,1]$. The job can be processed at full speed if granted its full resource requirement. If receiving only an $x$-portion of $r_j$, it is processed at an $x$-fraction of the full speed. Our goal is to find a resource assignment that minimizes the makespan (i.e., the latest completion time). Variants of such problems, relating the resource assignment of jobs to their \emph{processing speeds}, have been studied under the term discrete-continuous scheduling. Known results are either very pessimistic or heuristic in nature.

In this paper, we suggest and analyze a slightly simplified model. It focuses on the assignment of shared continuous resources to the processors. The job assignment to processors and the ordering of the jobs have already been fixed. It is shown that, even for unit size jobs, finding an optimal solution is NP-hard if the number of processors is part of the input. Positive results for unit size jobs include an efficient optimal algorithm for 2 processors. Moreover, we prove that balanced schedules yield a $(2-1/m)$-approximation for a fixed number of processors. Such schedules are computed by our GreedyBalance algorithm, for which the bound is tight.
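As a quick illustration of the speed model described above (the helper names and signatures are mine, not the paper's): a job granted an allocation a of the shared resource runs at full speed if a covers its requirement r, and otherwise at the fraction a/r of full speed, which stretches its processing time accordingly.

```python
def speed(alloc, req):
    """Processing speed of a job with resource requirement `req` when
    granted `alloc` units of the shared resource: full speed 1 if fully
    satisfied, otherwise the received fraction alloc / req."""
    return 1.0 if alloc >= req else alloc / req

def finish_time(size, alloc, req):
    """Time to process `size` units of work at a constant allocation."""
    return size / speed(alloc, req)
```

For instance, a unit-size job with requirement 0.5 that receives only 0.25 of the resource runs at half speed and takes 2 time units.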


@inproceedings{BKMNRS14,

author = {Andre Brinkmann AND Peter Kling AND Friedhelm Meyer auf der Heide AND Lars Nagel AND S{\"o}ren Riechers AND Tim Suess},

title = {Scheduling Shared Continuous Resources on Many-Cores},

booktitle = {Proceedings of the 26th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA)},

year = {2014},

pages = {128-137},

publisher = {ACM},

month = {June},

abstract = {We consider the problem of scheduling a number of jobs on $m$ identical processors sharing a continuously divisible resource. Each job j comes with a resource requirement r_j \in [0,1]. The job can be processed at full speed if granted its full resource requirement. If receiving only an x-portion of r_j, it is processed at an x-fraction of the full speed. Our goal is to find a resource assignment that minimizes the makespan (i.e., the latest completion time). Variants of such problems, relating the resource assignment of jobs to their \emph{processing speeds}, have been studied under the term discrete-continuous scheduling. Known results are either very pessimistic or heuristic in nature. In this paper, we suggest and analyze a slightly simplified model. It focuses on the assignment of shared continuous resources to the processors. The job assignment to processors and the ordering of the jobs have already been fixed. It is shown that, even for unit size jobs, finding an optimal solution is NP-hard if the number of processors is part of the input. Positive results for unit size jobs include an efficient optimal algorithm for 2 processors. Moreover, we prove that balanced schedules yield a 2-1/m-approximation for a fixed number of processors. Such schedules are computed by our GreedyBalance algorithm, for which the bound is tight.}

}


Antonios Antoniadis, Neal Barcelo, Mario Consuegra, Peter Kling, Michael Nugent, Kirk Pruhs, Michele Scquizzato:

**Efficient Computation of Optimal Energy and Fractional Weighted Flow Trade-off Schedules**

In Proceedings of the 31st Symposium on Theoretical Aspects of Computer Science (STACS). Schloß Dagstuhl, LIPIcs, vol. 25, pp. 63-74 **(2014)**

We give a polynomial time algorithm to compute an optimal energy and fractional weighted flow trade-off schedule for a speed-scalable processor with discrete speeds. Our algorithm uses a geometric approach that is based on structural properties obtained from a primal-dual formulation of the problem.


@inproceedings{Kling:2014:STACS,

author = {Antonios Antoniadis AND Neal Barcelo AND Mario Consuegra AND Peter Kling AND Michael Nugent AND Kirk Pruhs AND Michele Scquizzato},

title = {Efficient Computation of Optimal Energy and Fractional Weighted Flow Trade-off Schedules},

booktitle = {Proceedings of the 31st Symposium on Theoretical Aspects of Computer Science (STACS)},

year = {2014},

pages = {63--74},

publisher = {Schloß Dagstuhl},

month = {March},

abstract = {We give a polynomial time algorithm to compute an optimal energy and fractional weighted flow trade-off schedule for a speed-scalable processor with discrete speeds. Our algorithm uses a geometric approach that is based on structural properties obtained from a primal-dual formulation of the problem.},

series = {LIPIcs}

}


Maximilian Drees, Sören Riechers, Alexander Skopalik:

**Budget-restricted utility games with ordered strategic decisions**

In Ron Lavi (ed.): Proceedings of the 7th International Symposium on Algorithmic Game Theory (SAGT). Springer Berlin Heidelberg, Lecture Notes in Computer Science, vol. 8768, pp. 110-121 **(2014)**

We introduce the concept of budget games. Players choose a set of tasks and each task has a certain demand on every resource in the game. Each resource has a budget. If the budget is not enough to satisfy the sum of all demands, it has to be shared between the tasks. We study strategic budget games, where the budget is shared proportionally. We also consider a variant in which the order of the strategic decisions influences the distribution of the budgets. The complexity of the optimal solution as well as existence, complexity and quality of equilibria are analysed. Finally, we show that the time an ordered budget game needs to converge to an equilibrium may be exponential.
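A minimal sketch of the proportional sharing rule described above (an illustrative helper of my own, not code from the paper): when the demands on a resource exceed its budget, every task receives the same fraction budget / total demand of what it asked for.

```python
def allocate(budget, demands):
    """Proportionally share a resource's budget among task demands.

    If the budget covers the total demand, every demand is fully
    satisfied; otherwise each task gets the fraction budget / total
    of its own demand."""
    total = sum(demands)
    if total <= budget:
        return list(demands)
    scale = budget / total
    return [d * scale for d in demands]
```

For example, demands of 4 and 8 on a resource with budget 6 are scaled by 1/2, yielding allocations of 2 and 4.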

@inproceedings{DRS14,

author = {Maximilian Drees AND S{\"o}ren Riechers AND Alexander Skopalik},

title = {Budget-restricted utility games with ordered strategic decisions},

booktitle = {Proceedings of the 7th International Symposium on Algorithmic Game Theory (SAGT)},

year = {2014},

editor = {Ron Lavi},

pages = {110-121},

publisher = {Springer Berlin Heidelberg},

abstract = {We introduce the concept of budget games. Players choose a set of tasks and each task has a certain demand on every resource in the game. Each resource has a budget. If the budget is not enough to satisfy the sum of all demands, it has to be shared between the tasks. We study strategic budget games, where the budget is shared proportionally. We also consider a variant in which the order of the strategic decisions influences the distribution of the budgets. The complexity of the optimal solution as well as existence, complexity and quality of equilibria are analysed. Finally, we show that the time an ordered budget game needs to converge to an equilibrium may be exponential.},

series = {Lecture Notes in Computer Science}

}


**2013** (1)

Peter Kling, Peter Pietrzyk:

**Profitable Scheduling on Multiple Speed-Scalable Processors**

In Proceedings of the 25th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA). ACM, pp. 251-260 **(2013)**

We present a new online algorithm for profit-oriented scheduling on multiple speed-scalable processors. Moreover, we provide a tight analysis of the algorithm's competitiveness. Our results generalize and improve upon work by \citet{Chan:2010}, which considers a single speed-scalable processor. Using significantly different techniques, we can not only extend their model to multiprocessors but also prove an enhanced and tight competitive ratio for our algorithm.

In our scheduling problem, jobs arrive over time and are preemptable. They have different workloads, values, and deadlines. The scheduler may decide not to finish a job but instead to suffer a loss equaling the job's value. However, to process a job's workload until its deadline the scheduler must invest a certain amount of energy. The cost of a schedule is the sum of lost values and invested energy. In order to finish a job the scheduler has to determine which processors to use and set their speeds accordingly. A processor's energy consumption is power $P(s)$ integrated over time, where $P(s)=s^{\alpha}$ is the power consumption when running at speed $s$. Since we consider the online variant of the problem, the scheduler has no knowledge about future jobs. This problem was introduced by \citet{Chan:2010} for the case of a single processor. They presented an online algorithm which is $\alpha^{\alpha}+2e\alpha$-competitive. We provide an online algorithm for the case of multiple processors with an improved competitive ratio of $\alpha^{\alpha}$.


@inproceedings{Kling:ProfSchedSPAA,

author = {Peter Kling AND Peter Pietrzyk},

title = {Profitable Scheduling on Multiple Speed-Scalable Processors},

booktitle = {Proceedings of the 25th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA)},

year = {2013},

pages = {251-260},

publisher = {ACM},

month = {July},

abstract = {We present a new online algorithm for profit-oriented scheduling on multiple speed-scalable processors. Moreover, we provide a tight analysis of the algorithm's competitiveness. Our results generalize and improve upon work by \citet{Chan:2010}, which considers a single speed-scalable processor. Using significantly different techniques, we can not only extend their model to multiprocessors but also prove an enhanced and tight competitive ratio for our algorithm. In our scheduling problem, jobs arrive over time and are preemptable. They have different workloads, values, and deadlines. The scheduler may decide not to finish a job but instead to suffer a loss equaling the job's value. However, to process a job's workload until its deadline the scheduler must invest a certain amount of energy. The cost of a schedule is the sum of lost values and invested energy. In order to finish a job the scheduler has to determine which processors to use and set their speeds accordingly. A processor's energy consumption is power $\Power{s}$ integrated over time, where $\Power{s}=s^{\alpha}$ is the power consumption when running at speed $s$. Since we consider the online variant of the problem, the scheduler has no knowledge about future jobs. This problem was introduced by~\citet{Chan:2010} for the case of a single processor. They presented an online algorithm which is $\alpha^{\alpha}+2e\alpha$-competitive. We provide an online algorithm for the case of multiple processors with an improved competitive ratio of $\alpha^{\alpha}$.}

}


**2012** (1)

Andreas Cord-Landwehr, Peter Kling, Frederik Mallmann-Trenn:

**Slow Down & Sleep for Profit in Online Deadline Scheduling**

In Even, Guy and Rawitz, Dror (eds.): Proceedings of the 1st Mediterranean Conference on Algorithms (MedAlg). Springer, LNCS, vol. 7659, pp. 218-231 **(2012)**

We present and study a new model for energy-aware and profit-oriented scheduling on a single processor. The processor features dynamic speed scaling as well as suspension to a sleep mode. Jobs arrive over time, are preemptable, and have different sizes, values, and deadlines. On the arrival of a new job, the scheduler may either accept or reject the job. Accepted jobs need a certain energy investment to be finished in time, while rejected jobs cause costs equal to their values. Here, power consumption at speed $s$ is given by $P(s)=s^{\alpha}+\beta$ and the energy investment is power integrated over time. Additionally, the scheduler may decide to suspend the processor to a sleep mode in which no energy is consumed, though awaking entails fixed transition costs $\gamma$. The objective is to minimize the total value of rejected jobs plus the total energy.

Our model combines aspects from advanced energy conservation techniques (namely speed scaling and sleep states) and profit-oriented scheduling models. We show that \emph{rejection-oblivious} schedulers (whose rejection decisions are not based on former decisions) have – in contrast to the model without sleep states – an unbounded competitive ratio. It turns out that the jobs' value densities (the ratio between a job's value and its work) are crucial for the performance of such schedulers. We give an algorithm whose competitiveness nearly matches the lower bound w.r.t. the maximum value density. If the maximum value density is not too large, the competitiveness becomes $\alpha^{\alpha}+2e\alpha$. Also, we show that it suffices to restrict the value density of low-value jobs only. Using a technique from \cite{Chan:2010} we transfer our results to processors with a fixed maximum speed.
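As a side note on this power function (a standard speed-scaling calculation, not a result quoted from the paper): for a fixed workload $w$ processed at constant speed $s$, the energy is power times processing time, which is minimized at a critical speed determined by $\alpha$ and $\beta$.

```latex
% Energy for workload w at constant speed s, with P(s) = s^{\alpha} + \beta:
E(s) = P(s)\cdot\frac{w}{s} = w\left(s^{\alpha-1} + \frac{\beta}{s}\right),
\qquad
\frac{\mathrm{d}E}{\mathrm{d}s}
  = w\left((\alpha-1)\,s^{\alpha-2} - \frac{\beta}{s^{2}}\right) = 0
\;\Longrightarrow\;
s_{\mathrm{crit}} = \left(\frac{\beta}{\alpha-1}\right)^{1/\alpha}.
```

Below $s_{\mathrm{crit}}$, running slower only accumulates the idle term $\beta$, which is why combining speed scaling with a sleep state is attractive in this model.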


@inproceedings{KlingSlowDown2012,

author = {Andreas Cord-Landwehr AND Peter Kling AND Frederik Mallmann-Trenn},

title = {Slow Down \& Sleep for Profit in Online Deadline Scheduling},

booktitle = {Proceedings of the 1st Mediterranean Conference on Algorithms (MedAlg)},

year = {2012},

editor = {Even, Guy and Rawitz, Dror},

pages = {218--231},

publisher = {Springer},

month = {December},

abstract = {We present and study a new model for energy-aware and profit-oriented scheduling on a single processor. The processor features dynamic speed scaling as well as suspension to a sleep mode. Jobs arrive over time, are preemptable, and have different sizes, values, and deadlines. On the arrival of a new job, the scheduler may either accept or reject the job. Accepted jobs need a certain energy investment to be finished in time, while rejected jobs cause costs equal to their values. Here, power consumption at speed $s$ is given by $P(s)=s^{\alpha}+\beta$ and the energy investment is power integrated over time. Additionally, the scheduler may decide to suspend the processor to a sleep mode in which no energy is consumed, though awaking entails fixed transition costs $\gamma$. The objective is to minimize the total value of rejected jobs plus the total energy. Our model combines aspects from advanced energy conservation techniques (namely speed scaling and sleep states) and profit-oriented scheduling models. We show that \emph{rejection-oblivious} schedulers (whose rejection decisions are not based on former decisions) have -- in contrast to the model without sleep states -- an unbounded competitive ratio. It turns out that the jobs' value densities (the ratio between a job's value and its work) are crucial for the performance of such schedulers. We give an algorithm whose competitiveness nearly matches the lower bound w.r.t.\ the maximum value density. If the maximum value density is not too large, the competitiveness becomes $\alpha^{\alpha}+2e\alpha$. Also, we show that it suffices to restrict the value density of low-value jobs only. Using a technique from \cite{Chan:2010} we transfer our results to processors with a fixed maximum speed.},

series = {LNCS}

}

[DOI]

**2011** (1)

Joachim Gehweiler, Peter Kling, Friedhelm Meyer auf der Heide:

In Proceedings of the 9th International Conference on Parallel Processing and Applied Mathematics (PPAM). Springer, LNCS, vol. 7204, pp. 31-40

[Show Abstract]

**An Experimental Comparison of Load Balancing Strategies in a Web Computing Environment**In Proceedings of the 9th International Conference on Parallel Processing and Applied Mathematics (PPAM). Springer, LNCS, vol. 7204, pp. 31-40

**(2011)**[Show Abstract]

Web Computing is a variant of parallel computing where the idle times of PCs donated by worldwide distributed users are employed to execute parallel programs. The PUB-Web library developed by us supports this kind of usage of computing resources. A major problem for the efficient execution of such parallel programs is load balancing. In the Web Computing context, this problem becomes more difficult because of the dynamic behavior of the underlying "parallel computer": the set of available processors (donated PCs) as well as their availability (idle times) change over time in an unpredictable fashion.

In this paper, we experimentally evaluate and compare load balancing algorithms in this scenario, namely a variant of the well-established Work Stealing algorithm and strategies based on a heterogeneous version of distributed hash-tables (DHHTs) introduced recently. In order to run a meaningful experimental evaluation, we employ, in addition to our Web Computing library PUB-Web, realistic data sets for the job input streams and for the dynamics of the availability of the resources.

Our experimental evaluations suggest that Work Stealing is the better strategy if the number of processes ready to run matches the number of available processors. But a suitable variant of DHHTs outperforms Work Stealing if there are significantly more processes ready to run than available processors.

[Show BibTeX]

@inproceedings{PPAM11GKM,

author = {Joachim Gehweiler AND Peter Kling AND Friedhelm Meyer auf der Heide},

title = {An Experimental Comparison of Load Balancing Strategies in a Web Computing Environment},

booktitle = {Proceedings of the 9th International Conference on Parallel Processing and Applied Mathematics (PPAM)},

year = {2011},

pages = {31--40},

publisher = {Springer},

abstract = {Web Computing is a variant of parallel computing where the idle times of PCs donated by worldwide distributed users are employed to execute parallel programs. The PUB-Web library developed by us supports this kind of usage of computing resources. A major problem for the efficient execution of such parallel programs is load balancing. In the Web Computing context, this problem becomes more difficult because of the dynamic behavior of the underlying "parallel computer": the set of available processors (donated PCs) as well as their availability (idle times) change over time in an unpredictable fashion. In this paper, we experimentally evaluate and compare load balancing algorithms in this scenario, namely a variant of the well-established Work Stealing algorithm and strategies based on a heterogeneous version of distributed hash-tables (DHHTs) introduced recently. In order to run a meaningful experimental evaluation, we employ, in addition to our Web Computing library PUB-Web, realistic data sets for the job input streams and for the dynamics of the availability of the resources. Our experimental evaluations suggest that Work Stealing is the better strategy if the number of processes ready to run matches the number of available processors. But a suitable variant of DHHTs outperforms Work Stealing if there are significantly more processes ready to run than available processors.},

series = {LNCS},

volume = {7204}

}

[DOI]