School of Mathematics and Statistics, Shandong Normal University, Jinan 250014, China
Abstract
A k-submodular function is a generalization of a submodular function: its domain is extended from the collection of single subsets to the collection of k disjoint subsets. The k-submodular maximization problem has a wide range of applications. In this paper, we propose a nested greedy and local search algorithm for the problem of maximizing a monotone k-submodular function subject to a knapsack constraint and p matroid constraints.
References
[1] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher, An analysis of approximations for maximizing submodular set functions-I, Math. Program., vol. 14, no. 1, pp. 265–294, 1978.
A. Ene and H. L. Nguyễn, A nearly-linear time algorithm for submodular maximization with a knapsack constraint, in Proc. 46th Int. Colloquium on Automata, Languages, and Programming, Patras, Greece, 2019, pp. 53:1–53:12.
G. Calinescu, C. Chekuri, M. Pál, and J. Vondrák, Maximizing a monotone submodular function subject to a matroid constraint, SIAM J. Comput., vol. 40, no. 6, pp. 1740–1766, 2011.
Y. Filmus and J. Ward, Monotone submodular maximization over a matroid via non-oblivious local search, SIAM J. Comput., vol. 43, no. 2, pp. 514–542, 2014.
K. K. Sarpatwar, B. Schieber, and H. Shachnai, Constrained submodular maximization via greedy local search, Oper. Res. Lett., vol. 47, no. 1, pp. 1–6, 2019.
A. A. Bian, J. M. Buhmann, A. Krause, and S. Tschiatschek, Guarantees for greedy maximization of non-submodular functions with applications, in Proc. 34th Int. Conf. Machine Learning, Sydney, Australia, 2017, pp. 498–507.
C. C. Huang, N. Kakimura, S. Mauras, and Y. Yoshida, Approximability of monotone submodular function maximization under cardinality and matroid constraints in the streaming model, SIAM J. Discrete Math., vol. 36, no. 1, pp. 355–382, 2022.
Z. C. Liu, L. K. Guo, D. L. Du, D. C. Xu, and X. Y. Zhang, Maximization problems of balancing submodular relevance and supermodular diversity, J. Glob. Optim., vol. 82, no. 1, pp. 179–194, 2022.
Y. Yoshida, Maximizing a monotone submodular function with a bounded curvature under a knapsack constraint, SIAM J. Discrete Math., vol. 33, no. 3, pp. 1452–1471, 2019.
N. Ohsaka and Y. Yoshida, Monotone k-submodular function maximization with size constraints, in Proc. 28th Int. Conf. Neural Information Processing Systems, Montreal, Canada, 2015, pp. 694–702.
A. Rafiey and Y. Yoshida, Fast and private submodular and k-submodular functions maximization with matroid constraints, in Proc. 37th Int. Conf. Machine Learning, Virtual Event, 2020, p. 731.
S. Iwata, S. I. Tanigawa, and Y. Yoshida, Improved approximation algorithms for k-submodular function maximization, in Proc. 27th Ann. ACM-SIAM Symp. Discrete Algorithms, Arlington, VA, USA, 2016, pp. 404–413.
Z. Z. Tang, C. H. Wang, and H. Chan, On maximizing a monotone k-submodular function under a knapsack constraint, Oper. Res. Lett., vol. 50, no. 1, pp. 28–31, 2022.
Q. Liu, K. Yu, M. Li, et al., k-submodular maximization with a knapsack constraint and p matroid constraints, Tsinghua Science and Technology, vol. 28, no. 5, pp. 896–905, 2023. https://doi.org/10.26599/TST.2022.9010039
k-Submodular maximization
Assume f is a nonnegative, monotone, and k-submodular function. The k-submodular maximization problem is to find a solution s in the domain of f that maximizes f(s); this problem is referred to as Formula (1).
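To make the setup concrete, here is a tiny numerical sketch. The instance below is an assumed toy example (not from the paper): a separable coverage-style objective, where a solution is a dict mapping each chosen item to one of k positions. Separable sums of monotone coverage functions of this form are monotone and k-submodular.

```python
# Toy instance (assumed, not from the paper): 4 items, k = 2 positions.
# cov[i][j] = ground elements covered when item i is placed in position j.
cov = {
    0: [{1, 2}, {3}],
    1: [{2, 3}, {4, 5}],
    2: [{1}, {3, 4}],
    3: [{5}, {1, 2, 3}],
}
k = 2

def f(s):
    """Separable coverage objective: a solution s maps items to positions;
    f sums, over positions, the size of the union covered there."""
    return sum(
        len(set().union(*(cov[i][j] for i in s if s[i] == j), set()))
        for j in range(k)
    )

def marginal(s, i, j):
    """Marginal gain of placing an unassigned item i in position j."""
    return f({**s, i: j}) - f(s)

# Monotonicity spot-check: every single-item marginal gain is nonnegative.
assert all(marginal({}, i, j) >= 0 for i in cov for j in range(k))
```

For example, f({0: 0, 1: 1}) covers {1, 2} in position 0 and {4, 5} in position 1, giving a value of 4.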
Algorithm 1 does not care about the order of the items selected; each selected item is placed in the position with the largest marginal gain. By Ward and Živný[14], Algorithm 1 provides a 1/2-approximate solution. We use s to denote an output of Algorithm 1 and OPT to denote the optimal value. The corresponding result holds.
Lemma 3[14] A 1/2-approximate solution of Formula (1) can be obtained, that is, f(s) ≥ OPT/2.
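The greedy placement rule can be sketched as follows, again on an assumed toy coverage objective rather than the paper's instance: items are taken in an arbitrary given order, and each is placed in the position with the largest marginal gain.

```python
# Toy separable coverage objective (assumed example): k = 2 positions;
# cov[i][j] = ground elements covered when item i is put in position j.
cov = {0: [{1, 2}, {3}], 1: [{2, 3}, {4, 5}], 2: [{1}, {3, 4}]}
k = 2

def f(s):
    return sum(
        len(set().union(*(cov[i][j] for i in s if s[i] == j), set()))
        for j in range(k)
    )

def greedy_place(items):
    """Mirror of Algorithm 1's rule: take the items in the given order and
    put each one in the position with the largest marginal gain."""
    s = {}
    for i in items:
        s[i] = max(range(k), key=lambda j: f({**s, i: j}) - f(s))
    return s

s = greedy_place([0, 1, 2])   # -> {0: 0, 1: 1, 2: 1}, with f(s) = 5
```

Note that the order of the items only determines tie-breaking here; the 1/2 guarantee of Lemma 3 holds for any order.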
Maximization with a Knapsack Constraint and a Matroid Constraint
In this section, we assume that the function f is nonnegative, monotone, and k-submodular. Let o be an optimal solution of Formula (2). We propose a deterministic algorithm (see Algorithm 2) for the problem of maximizing a k-submodular function under a knapsack constraint and a matroid constraint. Firstly, we guess, by enumeration, the two items with the largest marginal gains in the optimal solution and use them as the initial solution. For the current solution s, we maintain the collection of all candidate swaps. Algorithm 2 picks the swap with the largest marginal profit density from this collection. A flag variable controls entry to and exit from the "while" loops; it changes as the current solution is updated.
When the picked swap passes the conditions of Line 8, we update the support set of the current solution s and rearrange the positions of its elements by Algorithm 1. Considering the order in which each element was added to s, we find the greedy position for every element in the same order. By Lemma 2, we input the updated support set instead of the original one. We update the current solution with the rearranged positions and its knapsack value. To regenerate the collection of candidate swaps, we reset the flag variable, so Algorithm 2 breaks the loop in Lines 6–15 and regenerates the collection in Line 5. To continue running the algorithm, the flag is then changed back. At this point, the process of updating the current solution is complete.
When the picked swap violates the conditions of Line 8, Algorithm 2 removes that swap and picks the next one; nothing else is changed. This implies that the flag keeps its value and the loop in Lines 6–15 continues running. The above process repeats until Algorithm 2 finds a swap that updates s or all swaps are removed. In the last iteration, all swaps violate the conditions of Line 8 and are therefore removed; the test in Line 6 then makes Algorithm 2 break all loops and return the output solution.
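The control flow just described (pick the densest feasible swap, accept it, regenerate the candidate list, stop once every swap is rejected) can be sketched as follows. To keep it short, this is an assumed simplified variant with a single position (k = 1) and the matroid taken to be a uniform one of rank cap, so it illustrates the loop structure rather than Algorithm 2 itself.

```python
def local_search(items, cost, f, budget, cap):
    """Simplified sketch of the swap loop: repeatedly take the feasible,
    improving swap with the largest density (marginal gain per extra cost);
    when no swap is accepted, all candidates are rejected and we stop."""
    s = set()
    while True:
        best, best_density = None, 0.0
        for e in sorted(items - s):
            for out in sorted(s) + [None]:      # swap e in, push out (or nothing)
                t = (s - {out}) | {e} if out is not None else s | {e}
                if len(t) > cap or sum(cost[i] for i in t) > budget:
                    continue                    # swap rejected by the constraints
                gain = f(t) - f(s)
                # clamp the extra cost to avoid division by a non-positive number
                extra = max(cost[e] - (cost[out] if out is not None else 0), 1e-9)
                if gain > 0 and gain / extra > best_density:
                    best, best_density = t, gain / extra
        if best is None:
            return s                            # every swap was rejected
        s = best                                # accept; regenerate the candidates

# Assumed toy usage: coverage values, item costs, budget 2, cardinality cap 2.
cov = {0: {1, 2, 3}, 1: {3, 4}, 2: {5}}
def cover(t):
    return len(set().union(*(cov[i] for i in t), set()))
result = local_search({0, 1, 2}, {0: 2, 1: 1, 2: 1}, cover, budget=2, cap=2)
```

On this toy instance, the loop first adds item 1 (the densest singleton) and then swaps it for item 0, which covers more.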
A useful property of matroids was proposed by Sarpatwar et al.[6] To make it easier to understand, we give a new and detailed proof as follows.
Lemma 5[6] For any sets A, B ∈ I, we can always construct a mapping b: B → A ∪ {∅}, where each element e ∈ B satisfies (A \ {b(e)}) ∪ {e} ∈ I, and each element of A \ B serves as the image of at most one element of B.
Proof For convenience of explanation, we divide the given set B into several parts. We set B = B1 ∪ B2 ∪ B3, where B1 is the collection of all elements e ∈ B such that e ∈ A, B2 is the collection of all elements e ∈ B \ A such that A ∪ {e} ∈ I, and B3 = B \ (B1 ∪ B2). We will construct a qualified mapping b: B → A ∪ {∅}.
Firstly, when e belongs to B1 = A ∩ B, we make b(e) = e. Next, let us construct the other part of the corresponding relationships on B2 ∪ B3 and (A \ B) ∪ {∅} as follows.
Note that |B3| ≤ |A \ B| holds. In fact, assume that |B3| > |A \ B|; then |B1 ∪ B3| = |A ∩ B| + |B3| > |A ∩ B| + |A \ B| = |A|. Using the definition of matroid (the augmentation property applied to the independent sets A and B1 ∪ B3), there exists an element e ∈ B3 such that A ∪ {e} ∈ I, which contradicts the definition of B3. So we always have |B3| ≤ |A \ B|.
Let e ∈ B2. Since A ∪ {e} ∈ I, we take b(e) = ∅. Then (A \ {b(e)}) ∪ {e} = A ∪ {e} ∈ I, as required.
We consider the case of B3 ≠ ∅. For a fixed e1 ∈ B3, we have {e1} ∈ I and A ∪ {e1} ∉ I. According to the definition of matroid, there exists an element a1 ∈ A such that {e1, a1} ∈ I. Repeatedly using the definition of matroid, we can always find a2, a3, …, such that {e1, a1, …, aj} ∈ I for all j ≤ |A| − 1. Again A ∪ {e1} ∉ I, so exactly one element a ∈ A is left out; we set b(e1) = a and A1 = (A \ {a}) ∪ {e1}. Then (A \ {b(e1)}) ∪ {e1} ∈ I holds.
We update the sets A and B3 accordingly. Similar to the above method, we can find a corresponding point in A \ B for the next element in B3. In this way, each element of B3 corresponds to a distinct element of A \ B. Therefore, we construct a mapping b: b(e) = e for any e ∈ B1, b(e) = ∅ for any e ∈ B2, and b(e) = a_e for any e ∈ B3 with corresponding a_e ∈ A \ B. It follows that (A \ {b(e)}) ∪ {e} ∈ I holds for any element e ∈ B.
■
Theorem 2 According to Algorithm 2, a -approximate solution s of Formula (2) can be obtained.
Proof By Lemma 5, there exists a corresponding swap for each element of the optimal support. In Line 8 of Algorithm 2, we need to check whether the swap is acceptable. Then the output s needs to satisfy that, for each such element, the corresponding swap is rejected. In the following, we will discuss this in two cases.
Case 1: For every element of the optimal support, the corresponding swap satisfies the knapsack constraint. Then each swap is rejected only because of its marginal profit density. Then
Here the comparison solution is obtained by Algorithm 1 to maximize over the relevant items. Firstly, we consider the order in which each element was added to s in Algorithm 2. Then we add the elements of the optimal support in the same order. By Lemmas 2 and 3, the corresponding bounds hold. The third inequality in Formula (5) is obtained from Lemma 1, and the fourth follows from orthant submodularity. By our assumption about the swaps, the last inequality holds. We assign the remaining indices arbitrarily. Then, amplifying Formula (5) with k-submodularity and monotonicity, we have
Therefore, we get a -approximation to the optimum in this case.
Case 2: For the remaining elements, at least one swap violates the knapsack constraint.
In Algorithm 2, the set generated in the -th iteration is recorded as . By Lemma 5, there exists a mapping , such that , for each . Suppose that swap is rejected for violating the knapsack constraint for the first time, where , and . By orthant submodularity and monotonicity, we have
So the following inequality holds:
We define an auxiliary function. It is obvious that this function is also k-submodular and monotone. In the following, we will prove that
Note that a swap occurs in each iteration of Algorithm 2 for . For simplicity, we denote by . We greedily add elements in to one by one, and construct a set over for . Clearly . Let , by Lemma 3 and Lemma 1, we have
Since the function is k-submodular, for each such element there is a swap satisfying the following inequalities:
where . By the choice of and , we have
We define , where . Similar to Formula (6), we have
Thus, Formulas (9)–(12) imply that
for all . Let and , for any . Define and . By the assumption of Case 2, we have . For , we define when . Note that , using the above definition, we obtain that
for , and
Using Formulas (13) and (14), we have
From Formulas (15) and (16) and Lemma 4, we obtain that
Then, combining Eqs. (8) and (17), we have
Hence, Algorithm 2 has an approximation ratio of at least .
■
Maximization with a Knapsack and p Matroid Constraints
In this section, we further give an approximate algorithm for Formula (3) based on Algorithm 2. We assume that p is a fixed positive integer.
In order to deal with p matroid constraints, we need to introduce a p-swap to replace the swap used in Algorithm 2. We consider the family of subsets of the current support whose size does not exceed p; it should be pointed out that the empty set is included in this family. Given such a subset S and an element e outside the current support, (e, S) is said to be a p-swap if the set obtained by removing S and adding e is independent, i.e., it lies in the intersection of the p matroids. As before, we consider the collection of p-swaps involving the relevant elements.
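As a concrete reading of this definition, the sketch below uses an assumed toy intersection of two partition matroids (so p = 2, not the paper's instance) and enumerates all p-swaps (e, S) for a given support: S ranges over subsets of the support of size at most p, including the empty set, and the swap is kept only if the resulting set is independent in every matroid.

```python
from itertools import combinations

# Assumed toy instance: two partition matroids on one ground set (p = 2).
parts1 = [{'a', 'b', 'c'}, {'d', 'e'}]    # matroid 1: at most 2 per part
parts2 = [{'a', 'd'}, {'b', 'c', 'e'}]    # matroid 2: at most 1 per part

def independent(S):
    return (all(len(S & p) <= 2 for p in parts1)
            and all(len(S & p) <= 1 for p in parts2))

def p_swaps(support, e, p=2):
    """All p-swaps (e, S): S is a subset of the support with |S| <= p
    (the empty set included), and (support \\ S) ∪ {e} is independent
    in the intersection of the matroids."""
    return [(e, set(S))
            for r in range(p + 1)
            for S in combinations(sorted(support), r)
            if independent((support - set(S)) | {e})]

swaps = p_swaps({'a', 'e'}, 'b')   # -> [('b', {'e'}), ('b', {'a', 'e'})]
```

Here adding 'b' alone is infeasible (matroid 2 forbids {'b', 'e'}), so every feasible p-swap for 'b' must push 'e' out; this is exactly why the p-swap may remove up to p elements at once.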
We use the key Lemma 6 to analyze Algorithm 3, similar to Lemma 5 for Algorithm 2.
Lemma 6[6] For any sets A and B that are independent in all p matroids, we can always construct a mapping that assigns to each element e ∈ B a subset S_e ⊆ A such that (A \ S_e) ∪ {e} remains independent in all p matroids, and each element of A appears in at most p of the subsets S_e.
Theorem 3 According to Algorithm 3, a -approximate solution s of Formula (3) can be obtained.
Proof Due to the fact that the constraints are generalized from a single matroid to the intersection of p matroids, we must ensure that the feasible solution in every iteration of Algorithm 3 is an independent set of the intersection of the p matroids. Therefore, there are fewer selectable operations to update the solution. We still check whether the p-swap is acceptable in Line 8 of Algorithm 3. Then the output s needs to satisfy that, for each relevant element, the corresponding p-swap is rejected.
In the following analysis, we need to consider the p-swaps in two cases. By Lemma 6, there exists a p-swap for each relevant element.
The case analysis is similar to that of Theorem 2.
Case 1: For every relevant element, the corresponding p-swap satisfies the knapsack constraint, so each p-swap is rejected only because of its marginal profit density. In order to ensure the feasibility of the solution, we use p-swaps instead of swaps to update the solution s. Then we have
We still construct the comparison solution, which is obtained by Algorithm 1 to maximize over the relevant items. The generalization of the constraints does not affect the derivation of the first three inequalities. Therefore, the order of each element is the same as when it was added to s in Algorithm 3. Then we add the elements in the same order and obtain the comparison solution. By Lemmas 2 and 3, the corresponding bounds hold. The third inequality in Formula (19) is obtained from Lemma 1. The generalization of the constraints affects the fourth and later inequalities: since the swap is generalized to a p-swap, we need to remove more points when using orthant submodularity in the fourth inequality of Formula (19). By our assumption about the p-swaps, the last inequality holds.
Let the indices be assigned arbitrarily. Whenever we fix an element, there exists a corresponding mapped subset; if it is nonempty, we enumerate its elements. Using k-submodularity, we have
By Lemma 6, each element will appear at most p times as an image of the mapping; that is, the number of mapped subsets containing it does not exceed p. Combining Formulas (19) and (22), we have
Therefore, we get a -approximation for this case.
Case 2: For the remaining elements, at least one p-swap violates the knapsack constraint.
In Algorithm 3, the set generated in the t-th iteration is recorded as before. Due to the generalization of the constraints, we need to analyze its p-swaps. By Lemma 6, there exists a mapping such that the corresponding p-swap is feasible for each element. Suppose that a p-swap is rejected for violating the knapsack constraint for the first time. Because p-swaps replace swaps, we need to remove more points when applying orthant submodularity. Then we still have
So the following inequality holds:
The generalization of the constraints hardly affects our proof idea, except that more points are removed each time. Using the same definition as in Section 3, we define the auxiliary function. In the following, we will prove that
Note that a -swap occurs in each iteration of Algorithm 3 for . For simplicity, we denote by . We greedily add elements in to one by one, and construct a set over for . Clearly . Let , by Lemmas 1 and 3, we have
Since the function is k-submodular, for each such element there is a p-swap satisfying
where for each . By the choice of and , we have
Recalling the method of Case 1, let the indices be assigned arbitrarily. Whenever we fix an element, if the corresponding mapped subset is nonempty, we enumerate its elements. Using k-submodularity, we have
By Lemma 6, each element appears in at most p of the mapped subsets. Summing Formula (27) over all the relevant elements, we get
From this point on, the generalization of the constraints no longer affects the proof; we just need to argue along the lines of Theorem 2. Thus, Formulas (24)–(28) imply that
for all . Let and , for any . Define and . By the assumption of Case 2, we have . For , we define when . Note that , using the above definition, we obtain that
for ,
Using Formulas (29) and (30), we have
From Eqs. (30) and (31) and Lemma 4, we obtain
Then, combining Formulas (23) and (33), we have
Hence, Algorithm 3 has an approximation ratio of at least .