Dataset Viewer (auto-converted to Parquet)
Columns: title (string, lengths 12–235) · full_text (string, lengths 58–1.17M) · summary (string, lengths 99–1.95k) · page_count (float64, 2–649) · cleaned_text (string, lengths 58–746k)
Online Weighted Paging with Unknown Weights
Online Weighted Paging with Unknown Weights

Orin Levy (Tel-Aviv University, [email protected])* · Noam Touitou (Amazon Science, [email protected]) · Aviv Rosenberg (Google Research, [email protected])†

*Research conducted while the author was an intern at Amazon Science. †Research conducted while the author was at Amazon Science.

38th Conference on Neural Information Processing Systems (NeurIPS 2024). arXiv:2410.21266v1 [cs.LG] 28 Oct 2024.

Abstract

Online paging is a fundamental problem in the field of online algorithms, in which one maintains a cache of k slots as requests for fetching pages arrive online. In the weighted variant of this problem, each page has its own fetching cost; a substantial line of work on this problem culminated in an (optimal) O(log k)-competitive randomized algorithm, due to Bansal, Buchbinder and Naor (FOCS'07). Existing work for weighted paging assumes that page weights are known in advance, which is not always the case in practice. For example, in multi-level caching architectures, the expected cost of fetching a memory block is a function of its probability of being in a mid-level cache rather than the main memory. This complex property cannot be predicted in advance; over time, however, one may glean information about page weights through sampling their fetching cost multiple times. We present the first algorithm for online weighted paging that does not know page weights in advance, but rather learns from weight samples. In terms of techniques, this requires providing (integral) samples to a fractional solver, requiring a delicate interface between this solver and the randomized rounding scheme; we believe that our work can inspire online algorithms for other problems that involve cost sampling.

1 Introduction

Online weighted paging. In the online weighted paging problem, or OWP, one is given a cache of k slots, and requests for pages arrive online. Upon each requested page, the algorithm must ensure that the page is in the cache, possibly evicting existing pages in the process. Each page p also has a weight w_p, which represents the cost of fetching the page into the cache; the goal of the algorithm is to minimize the total cost of fetching pages. Assuming that the page weights are known, this problem admits an O(log k)-competitive randomized online algorithm, due to Bansal, Buchbinder, and Naor [2010, 2012]; this is optimal, as there exists an Ω(log k)-competitiveness lower bound for randomized algorithms due to Fiat et al. [1991] (that holds even for the unweighted case).

However, all previous work on paging assumes that the page weights are known in advance. This assumption is not always justified; for example, the following scenario, reminiscent of real-world architectures, naturally gives rise to unknown page weights. Consider a multi-core architecture, in which data can be stored in one of the following: a local "L1" cache, unique to each core; a global "L2" cache, shared between the cores; and the (large but slow) main memory. As a specific core requests memory blocks, managing its L1 cache can be seen as an OWP instance. Suppose the costs of fetching a block from the main memory and from the L2 cache are 1 and ε ≪ 1, respectively. Then, when a core demands a memory block, the expected cost of fetching this block (i.e., its weight) is a convex combination of 1 and ε, weighted by the probability that the block is in the L2 cache; this probability can be interpreted as the demand for this block by the various cores.
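To make this weight concrete: writing q_p for the probability that block p currently resides in the L2 cache (a symbol introduced here only for illustration; the paper does not name it), the expected fetching cost is

    w_p = q_p · ε + (1 − q_p) · 1,

so, for instance, a block with q_p = 0.9 and ε = 0.01 has weight w_p = 0.9 · 0.01 + 0.1 · 1 = 0.109. The algorithm never observes w_p directly; it only sees a cost of either ε or 1 on each fetch.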
When managing the L1 cache of a core, we would prefer to evict blocks with low expected fetching cost, as they are more likely to be available in the L2 cache. But this expected cost is a complicated property of the computation run by the cores, and estimating it in advance is infeasible; however, when a block is fetched in the above example, we observe a stochastic cost of either 1 or ε. As we sample a given block multiple times, we can gain insight into its weight.

Multi-armed bandit. The above example, in which we learn about various options through sampling, is reminiscent of the multi-armed bandit problem, or MAB. In the cost-minimization version of this problem, one is given n options (or arms), each with its own cost in [0, 1]. At each time step, the algorithm must choose an option and pay the corresponding cost; when choosing an option p, rather than learning its cost w_p, the algorithm is only revealed a sample from some distribution whose expectation is w_p. In this problem, the goal is to minimize the regret, which is the difference between the algorithm's total cost and the optimal cost (which is to always choose the cheapest option). Over T time steps, the best known regret bound for this problem is Õ(√(nT)), achieved through multiple techniques (see, e.g., Slivkins et al. [2019], Lattimore and Szepesvári [2020]).

1.1 Our Results

We make the first consideration of OWP where page weights are not known in advance, and show that the optimal competitive ratio of O(log k) can still be obtained. Specifically, we present the problem of OWP-UW (Online Weighted Paging with Unknown Weights), which combines OWP with bandit-like feedback. In OWP-UW, every page p has an arbitrary distribution, whose expectation is its weight 0 < w_p ≤ 1. Upon fetching a page, the algorithm observes a random, independent sample from the distribution of the page. We present the following theorem for OWP-UW.

Theorem 1.1. There exists a randomized algorithm ON for OWP-UW such that, for every input Q,

    E[ON(Q)] ≤ O(log k) · OPT(Q) + Õ(√(nT)),

where ON(Q) is the cost of ON on Q, OPT(Q) is the cost of the optimal solution to Q, and the expectation is taken over both the randomness in ON and the samples from the distributions of pages.

Note that the bound in Theorem 1.1 combines a competitive ratio of O(log k) with a regret (i.e., additive) term of Õ(√(nT)). To motivate this type of bound, we observe that OWP-UW does not admit sublinear regret without a competitive ratio. Consider the lower bound of Ω(log k) for the competitive ratio of paging; stated simply, one of k + 1 pages of weight 1 is requested at random. Over a sequence of T requests, the expected cost of any online algorithm is Ω(T/k); meanwhile, the expected cost of the optimal solution is at most O(T/(k log k)). (The optimal solution would be to wait for a maximal phase of requests containing at most k pages, whose expected length is Θ(k log k), then change state at constant cost.) Without a competitive ratio term, the difference between the online and offline solutions is Ω(T/k), i.e., linear regret. We note that this kind of bound appears in several previous works, such as Basu et al. [2019], Foussoul et al. [2023].

As OWP-UW generalizes both standard OWP and MAB, both the competitive ratio and regret terms are asymptotically tight: a competitiveness lower bound of Ω(log k) is known for randomized algorithms for online (weighted) paging [Fiat et al., 1991], and a regret lower bound of Ω̃(√(nT)) is known for MAB [Lattimore and Szepesvári, 2020].
1.2 Our Techniques

Interface between fractional solution and rounding scheme. Randomized online algorithms are often built of the following components:

1. A deterministic, α-competitive online algorithm for a fractional relaxation of the problem.
2. An online randomized rounding scheme that encapsulates any online fractional algorithm, and has expected cost β times the fractional cost.

Combining these components yields an αβ-competitive randomized online (integral) algorithm.

For our problem, it is easy to see where this common scheme fails. The fractional algorithm cannot be competitive without sampling pages; but pages are sampled by the rounding scheme! Thus, the competitiveness of the fractional algorithm is not independent of the randomized rounding, which must provide samples. One could think of addressing this by feeding any samples obtained by the rounding procedure into the fractional algorithm. However, as the rounding is randomized, this would result in a non-deterministic fractional algorithm. As described later in the paper, this is problematic: the rounding scheme demands a globally accepted fractional solution against which probabilities of cache states are balanced.

Instead, we outline a sampling interface between the fractional solver and the rounding scheme. Once the total fractional eviction of a page reaches an integer, the fractional algorithm will pop a sample of the page from a designated sampling queue, and process that sample. On the other side of the interface, the rounding scheme fills the sampling queue and ensures that when the fractional algorithm demands a sample, the queue will be non-empty with probability 1.

Optimistic fractional algorithm, pessimistic rounding scheme. When learning from samples, one must balance the exploration of unfamiliar options and the exploitation of familiar options that are known to be good. A well-known paradigm for achieving this balance in multi-armed bandit problems is optimism under uncertainty. Using this paradigm to minimize total cost, one maintains a lower confidence bound (LCB) for the cost of an option, which holds with high probability and tightens upon receiving samples; then, the option with the lowest LCB is chosen. As a result, one of the following two cases holds: either the option was good (high exploitation); or the option was bad, which means that the LCB was not tight, and hence sampling greatly improves it (high exploration).

Our fractional algorithm for weighted paging employs this method. It optimistically assumes that the price of moving a page is cheap, i.e., equal to some lower confidence bound (LCB) for that page. It then uses multiplicative updates to allocate servers according to these LCB costs. The optimism-under-uncertainty paradigm then implies that the fractional algorithm learns the weights over time.

However, the rounding scheme behaves very differently. Unlike the fractional algorithm, the (randomized) rounding scheme is not allowed to use samples to update the confidence bounds; otherwise, our fractional solution would behave non-deterministically. Instead, the rounding scheme takes a pessimistic view: it uses an upper confidence bound (UCB) as the cost of a page, thus assuming that the page is expensive. Such pessimistic approaches are common in scenarios where obtaining additional samples is not possible (e.g., offline reinforcement learning [Levine et al., 2020]), but rarely appear as a component of an online algorithm as we suggest in this paper.
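As a concrete illustration of the optimistic and pessimistic estimates above, the following is a minimal Python sketch of Hoeffding-style confidence bounds maintained from weight samples. The exact bounds used by the paper are deferred to its Appendix C, so the bonus term and its scaling here are assumptions made only for illustration.

```python
import math

def confidence_bounds(samples, T, n, bonus_scale=1.0):
    """Hoeffding-style LCB/UCB for a page's weight from i.i.d. samples in (0, 1].

    The bonus sqrt(log(n*T) / num_samples) is an illustrative choice; the paper
    specifies its own (non-increasing UCB, non-decreasing LCB) bounds in Appendix C.
    """
    m = len(samples)
    if m == 0:
        return 0.0, 1.0               # trivial bounds before any sample arrives
    mean = sum(samples) / m
    bonus = bonus_scale * math.sqrt(math.log(max(n * T, 2)) / m)
    lcb = max(mean - bonus, 0.0)      # optimistic cost, used by the fractional solver
    ucb = min(mean + bonus, 1.0)      # pessimistic cost, used by the rounding scheme
    return lcb, ucb
```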
1.3 Related Work

The online paging problem is a fundamental problem in the field of online algorithms. In the unweighted setting, the optimal competitive ratio for a deterministic algorithm is k, due to Sleator and Tarjan [1985]. Allowing randomization improves the best possible competitive ratio to Θ(log k) [Fiat et al., 1991]. As part of a line of work on weighted paging and its variants (e.g., Young [1994], Manasse et al. [1990], Albers [2003], Irani [2002], Fiat and Mendel [2000], Bansal et al. [2008], Irani [1997]), the best competitive ratios for weighted paging were settled, and were seen to match the unweighted setting: k-competitiveness for deterministic algorithms, due to Chrobak et al. [1991]; and Θ(log k)-competitiveness for randomized algorithms, due to Bansal et al. [2012].

Online (weighted) paging is a special case of the k-server problem, in which k servers exist in a general metric space and must be moved to address requests on various points in this space; the cache slots in (weighted) paging can be seen as servers moving in a (weighted) uniform metric space. The Θ(k) bound on optimal competitiveness in the deterministic setting for paging also extends to general k-server [Manasse et al., 1990, Koutsoupias and Papadimitriou, 1995]. However, allowing randomization, a recent breakthrough result by Bubeck et al. [2023] was a lower bound of Ω(log² k)-competitiveness for k-server, diverging from the O(log k)-competitiveness possible for paging.

Multi-Armed Bandit (MAB) is one of the most fundamental problems in online sequential decision making, often used to describe a trade-off between exploration and exploitation. It was extensively studied in the past few decades, giving rise to several algorithmic approaches that guarantee optimal regret. The most popular methods include Optimism Under Uncertainty (e.g., the UCB algorithm [Lai and Robbins, 1985, Auer et al., 2002a]), Action Elimination [Even-Dar et al., 2006], Thompson Sampling [Thompson, 1933, Agrawal and Goyal, 2012] and Exponential Weights (e.g., the EXP3 algorithm [Auer et al., 2002b]). For a comprehensive review of the MAB literature, see Slivkins et al. [2019], Lattimore and Szepesvári [2020].

2 Preliminaries

In OWP-UW, we are given a memory cache of k slots. A sequence of T page requests then arrives in an online fashion; we denote the set of requested pages by P, define n := |P|, and assume that n > k. Each page p has a corresponding weight 0 < w_p ≤ 1; the weights are not known to the algorithm. Moreover, every page p has a distribution D_p supported in (0, 1], such that E_{x∼D_p}[x] = w_p.

The online scenario proceeds in T rounds (see Footnote 3). In each round t ∈ {1, 2, . . . , T}:

• A page p_t ∈ P is requested.
• If the requested page is already in the cache, then it is immediately served.
• Otherwise, we experience a cache miss, and we must fetch p_t into the cache; if the cache is full, the algorithm must evict some page from the cache to make room for p_t.
• Upon evicting any page p from the cache, the algorithm receives an independent sample from D_p.

The algorithm incurs cost when evicting pages from the cache: when evicting a page p, the algorithm incurs a cost of w_p (see Footnote 4). Our goal is to minimize the algorithm's total cost of evicting pages, denoted by ON, and we measure our performance by comparison to the total cost of the optimal algorithm, denoted by OPT. We say that our algorithm is α-competitive with R regret if E[ON] ≤ α · OPT + R.

Footnote 3: We make the simplifying assumption that T is known in advance. This can easily be removed using a standard doubling trick (see, e.g., Slivkins et al. [2019]).

Footnote 4: Note that charging an OWP solution for evicting rather than fetching pages is standard; indeed, with the exception of at most k pages, every fetched page is subsequently evicted, and thus the difference between eviction and fetching costs is at most k. Moreover, as we analyze additive regret, note that k ≤ √(nT), implying that using fetching costs would not affect the bounds in this paper. Finally, note that we sample upon eviction rather than upon fetching, which is the "harder" model.
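The round-by-round protocol above can be summarized as a small simulator. The following Python sketch is illustrative only (the class and method names are ours, not the paper's); it models the eviction-time costs and the bandit feedback received upon eviction.

```python
class OWPUWEnvironment:
    """Illustrative simulator of the OWP-UW protocol: hidden eviction costs w_p, samples on eviction."""

    def __init__(self, k, weights, sample_fn):
        self.k = k                    # cache capacity
        self.weights = weights        # true expected weights w_p (hidden from the algorithm)
        self.sample_fn = sample_fn    # sample_fn(p) draws from D_p with mean w_p
        self.cache = set()
        self.total_cost = 0.0

    def request(self, p_t, evict_rule):
        """Serve page p_t; on a miss with a full cache, evict_rule picks a page to evict."""
        if p_t in self.cache:
            return None               # cache hit: no cost, no feedback
        feedback = None
        if len(self.cache) == self.k: # cache miss with a full cache
            victim = evict_rule(self.cache)
            self.cache.remove(victim)
            self.total_cost += self.weights[victim]   # true (hidden) eviction cost
            feedback = self.sample_fn(victim)         # observed sample from D_victim
        self.cache.add(p_t)
        return feedback
```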
3 Algorithmic Framework and Analysis Overview

We present an overview of the concepts and algorithmic components we use to address OWP-UW. We would like to follow the paradigm of solving a fractional problem online, and then randomly rounding the resulting solution; however, as discussed in the introduction, employing this paradigm for OWP-UW requires a well-defined interface between the fractional solver and the rounding procedure. Thus, we present a fractional version of OWP-UW that captures this interface.

Fractional OWP-UW. In fractional OWP-UW, one is allowed to move fractions of servers, and a request for a page is satisfied if the total server fraction at that point sums to 1. More formally, for every page p ∈ P we maintain an amount y_p ∈ [0, 1] which is the fraction of p missing from the cache; we call y_p the fractional anti-server at p. (The term anti-server comes from the related k-server problem.) The feasibility constraints are:

1. At any point in the algorithm, it holds that Σ_{p∈P} y_p ≥ n − k. (I.e., the total number of pages in the cache is at most k.)
2. After a page p is requested, it holds that y_p = 0. (I.e., there exists a total server fraction of 1 at p.)

Evicting an ε server fraction from p (i.e., increasing y_p by ε) costs ε · w_p.

Sampling. The fractional algorithm must receive samples of pages over time in order to learn about their weights. An algorithm for fractional OWP-UW receives a sample of a page p whenever the total fraction of p evicted by the algorithm reaches an integer. In particular, the algorithm obtains the first sample of p (corresponding to 0 eviction) when p is first requested in the online input.

Algorithmic components. We present the fractional algorithm and randomized rounding scheme.

Fractional algorithm. In Section 4, we present an algorithm ONF for fractional OWP-UW. Fixing the random samples from the pages' weight distributions, the fractional algorithm ONF is deterministic. For every page p ∈ P, the fractional algorithm maintains an upper confidence bound UCB_p and a lower confidence bound LCB_p. These confidence bounds depend on the samples provided for that page; we define the good event E to be the event that at every time and for every page p ∈ P, it holds that LCB_p ≤ w_p ≤ UCB_p. We later show that E happens with high probability, and analyze the complementary event separately (see Footnote 5). Thus, we henceforth focus on the good event. The following lemma bounds the cost of ONF subject to the good event. In fact, it states a stronger bound, that applies also when the cost of evicting page p is the upper confidence bound UCB_p ≥ w_p.

Lemma 3.1. Fixing any input Q for fractional OWP-UW, and assuming the good event, it holds that

    ONF(Q) ≤ ONF‾(Q) ≤ O(log k) · OPT(Q) + Õ(√(nT)),

where ONF‾ is the cost of the algorithm on the input where the cost of evicting a page p is UCB_p ≥ w_p.

Footnote 5: Specifically, we show that the complementary event Ē happens with probability at most 1/(nT), and that the algorithm's cost is at most nT times the optimal cost when it happens.
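To illustrate the sampling interface between the two components, here is a minimal Python sketch of a per-page sampling queue. The class name and structure are ours, used only to mirror the rule that the fractional solver consumes one sample each time a page's cumulative fractional eviction crosses an integer.

```python
import math
from collections import defaultdict, deque

class SamplingInterface:
    """Illustrative queue between the rounding scheme (producer) and the fractional solver (consumer)."""

    def __init__(self):
        self.queues = defaultdict(deque)   # page -> queue of weight samples
        self.evicted = defaultdict(float)  # page -> cumulative fractional eviction m_p

    def push_sample(self, page, sample):
        """Rounding side: deposit a sample observed when integrally evicting `page`."""
        self.queues[page].append(sample)

    def add_fractional_eviction(self, page, delta):
        """Fractional side: record eviction of a `delta` fraction of `page`.

        Returns the samples to process now, one for each integer threshold that the
        cumulative eviction m_p crosses (the paper guarantees the queue is non-empty
        with probability 1 whenever a sample is demanded)."""
        before = math.floor(self.evicted[page])
        self.evicted[page] += delta
        after = math.floor(self.evicted[page])
        return [self.queues[page].popleft() for _ in range(after - before)]
```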
Randomized rounding. In Section 5 we present the randomized algorithm ON for (integral) OWP-UW. It maintains a probability distribution over integral cache states by holding an instance of ONF, to which it feeds the online input. For the online input to constitute a valid fractional input, the randomized algorithm ensures that samples are provided to ONF when required. In addition, the randomized algorithm makes use of ONF's exploration of page weights; specifically, it uses the UCBs calculated by ONF.

Lemma 3.2. Fixing any input Q for (integral) OWP-UW, assuming the good event E, it holds that

    E[ON(Q)] ≤ O(1) · ONF‾(Q) + n,

where ONF‾(Q) is the cost of the algorithm on Q such that the cost of evicting a page p is UCB_p ≥ w_p.

Figure 1 provides a step-by-step visualization of the interface between the fractional algorithm and the rounding scheme over the handling of a page request.

4 Algorithm for Fractional OWP-UW

We now describe our algorithm for the fractional relaxation of OWP-UW, proving Lemma 3.1. Our fractional algorithm, presented in Algorithm 1 below, uses samples provided by the rounding scheme to learn the weights. A new sample for page p is provided and processed whenever the sum of fractional movements (in absolute value) m_p hits a natural number. (At this point the number of samples n_p is incremented.) The algorithm calculates non-increasing UCBs and non-decreasing LCBs that will be specified later in Appendix C and guarantee, with high probability, for every page p ∈ P and time step t ∈ [1, T], LCB_p ≤ w_p ≤ UCB_p.

At each time step t, upon a new page request p_t, the algorithm updates its feasible fractional cache solution {y_p}_{p∈P}. The fractions are computed using optimistic estimates of the weights, i.e., the LCBs, in order to induce exploration and allow the true weights to be learned over time. After serving page p_t (that is, setting y_{p_t} = 0), the algorithm continuously increases the anti-servers of all the other pages in the cache until feasibility is reached (that is, until Σ_{p∈P} y_p = n − k). The fraction y_p for some page p in the cache is increased proportionally to (y_p + η)/LCB_p, which is our adaptation of the algorithmic approach of Bansal et al. [2010] to the unknown-weights scenario. Finally, to fulfil its end of the interface, the fractional algorithm passes its feasible fractional solution to the rounding scheme together with pessimistic estimates of the weights, i.e., the UCBs.

4.1 Analysis

In this analysis section, our goal is to bound the amount ONF with respect to the UCBs and LCBs calculated by the algorithm, i.e., to prove Lemma 4.1. Lemma C.1 and Lemma C.2 from Appendix C then make the choice of confidence bounds concrete, such that combining them with Lemma 4.1 yields the final bound for the fractional algorithm, i.e., Lemma 3.1.
Figure 1 (Visualization of the interface between the fractional and integral algorithms): This figure visualizes the running of the algorithm over a request in the input. (a) shows the state prior to the arrival of the request. The integral algorithm maintains an instance of the fractional algorithm ONF, as well as a distribution over integral cache states that upholds some properties w.r.t. ONF (specifically, the consistency property and the subset property); note that these properties are a function of both the anti-server state {y_p}_{p∈P} and the upper confidence bounds {UCB_p}_{p∈P} in ONF. The integral algorithm also maintains a set of page samples, to be demanded by ONF at a later time. In (b), a page is requested (in red); thus, the fractional algorithm must fetch it into the cache, i.e., set its anti-server to 0. To maintain feasibility, the fractional algorithm will increase the anti-server at other pages in which some server fraction exists (in green). These changes in anti-server are also fed into the integral algorithm, which modifies its distribution to maintain consistency and the subset property w.r.t. ONF. In (c), ONF reaches integral total eviction of a page p, and demands a sample from the integral algorithm. (We show that such a sample always exists when demanded.) The bound UCB_p is updated in ONF, and is then fed to the integral algorithm to maintain the desired properties. After this sample, the continuous increase of anti-server continues until feasibility is reached in (d). Then, the integral algorithm ensures that a sample exists for the requested page p_t, sampling the page if needed. (As p_t now exists in μ with probability 1, sampling is done through evicting and re-fetching p_t.)

Algorithm 1: Fractional Online Weighted Caching with Unknown Weights

    Set η ← 1/k and y_p ← 1 for every p ∈ P.
    for time t = 1, 2, ..., T do
        Page p_t ∈ P is requested.
        Continually increase y_p, m_p in proportion to (y_p + η)/LCB_p for every p ∈ P \ {p_t} where y_p < 1, until:
            if m_p reaches an integer for some p ∈ P \ {p_t} then
                receive sample w̃_p for p.
                set n_p ← n_p + 1.
                call UPDATECONFBOUNDS(p, w̃_p).        // recalculate confidence bounds for p
            if Σ_{p∈P} y_p = n − k then break from the continuous increase loop.
        if this is the first request for p_t then
            receive sample w̃_{p_t} for p_t.
            define m_{p_t} ← 0, n_{p_t} ← 1.
            call UPDATECONFBOUNDS(p_t, w̃_{p_t}).      // calculate confidence bounds for p_t

Lemma 4.1. Fixing any input Q for fractional OWP-UW, and assuming the good event, it holds that

    ONF(Q) ≤ O(log k) · OPT(Q) + Σ_{p∈P} Σ_{i=1}^{n_p}
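The continuous increase in Algorithm 1 can be approximated by a small discretized step. The following Python sketch is an illustrative discretization (the step size and the function name are our assumptions, not part of the paper), showing how anti-servers grow in proportion to (y_p + η)/LCB_p until feasibility, and which pages hit an integer amount of cumulative eviction and therefore demand a sample.

```python
import math

def fractional_eviction_step(y, m, lcb, requested, n, k, eta, step=1e-3):
    """One discretized pass of Algorithm 1's continuous increase.

    y, m, lcb: dicts over pages holding anti-server y_p, cumulative movement m_p, and LCB_p.
    requested: the currently requested page p_t (its anti-server stays 0).
    Returns the set of pages whose m_p crossed an integer, i.e., pages that now need a sample.
    """
    y[requested] = 0.0
    need_sample = set()
    while sum(y.values()) < n - k:                  # feasibility: sum_p y_p = n - k
        for p in y:
            if p == requested or y[p] >= 1.0:
                continue
            rate = (y[p] + eta) / lcb[p]            # grow in proportion to (y_p + eta) / LCB_p
            delta = min(step * rate, 1.0 - y[p])
            before = math.floor(m[p])
            y[p] += delta
            m[p] += delta
            if math.floor(m[p]) > before:           # m_p crossed an integer: demand a sample
                need_sample.add(p)
            if sum(y.values()) >= n - k:
                break
    return need_sample
```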
Page count: 19
Modular Duality in Deep Learning
Modular Duality in Deep Learning

Jeremy Bernstein ([email protected]) · Laker Newhouse ([email protected]) · MIT CSAIL

arXiv:2410.21265v1 [cs.LG] 28 Oct 2024.

Abstract

An old idea in optimization theory says that since the gradient is a dual vector it may not be subtracted from the weights without first being mapped to the primal space where the weights reside. We take this idea seriously in this paper and construct such a duality map for general neural networks. Our map, which we call modular dualization, forms a unifying theoretical basis for training algorithms that are a) fast and b) scalable. Modular dualization involves first assigning operator norms to layers based on the semantics of each layer, and then using these layerwise norms to recursively induce a duality map on the weight space of the full neural architecture. We conclude by deriving GPU-friendly algorithms for dualizing Embed, Linear and Conv2D layers; the latter two methods are based on a new rectangular Newton-Schulz iteration that we propose. Our iteration was recently used to set new speed records for training NanoGPT. Overall, we hope that our theory of modular duality will yield a next generation of fast and scalable optimizers for general neural architectures.

1 Introduction

In this paper, we pursue a rigorous and first-principles theoretical framework for designing neural network training algorithms. We hope that building such a framework will facilitate the design of a next generation of fast and scalable optimizers that are automatically tailored to different neural architectures.

While gradient descent is the workhorse of modern machine learning, the most vanilla form of the algorithm does not, in our view, pass a basic type check. For a gradient update to type check, we insist that the gradient must be passed through a duality map before being multiplied by a learning rate and applied to the weights:

    weight − LR * weight.grad              type check: failed!    (1)
    weight − LR * dualize(weight.grad)     type check: passed!    (2)

Why? The reason is that the loss function may not be equally smooth in all directions in weight space, and there is no reason for the sizes of different components of the raw gradient vector to respect this heterogeneity. In other words, the geometry of the loss function may be non-isotropic. Insisting on a type check should force the user to become cognizant of this issue and to find a suitable duality map. A good duality map should adjust the size and direction of the gradient to respect the smoothness structure of the loss function.

Duality maps on vector spaces are commonplace in physics and applied math. Examples include the musical isomorphism in differential geometry (Grosse, 2022), raising and lowering indices in general relativity (Carroll, 2019) and the bra-ket notation in quantum mechanics (Sakurai & Napolitano, 2020). Duality maps are also central to several optimization theories including mirror descent (Nemirovsky & Yudin, 1983), natural gradient descent (Amari, 2016) and steepest descent on a normed space (Boyd & Vandenberghe, 2004). Despite the efforts of some prescient papers (Carlson et al., 2015b; Flynn, 2017), the latter kind of duality map involving normed vector spaces is yet to puncture the deep learning mainstream. We believe that duality is a key theoretical concept that will help in building performant large-scale machine learning systems. To support this belief, we show in this paper that two important and seemingly disparate methods in contemporary optimization research may be seen as approximations to a single duality map.
These methods are maximal update parameterization (Yang & Hu, 2021), which is aimed at scalable training, and Shampoo (Shi et al., 2023), which is targeted at fast training. We show in Section 4.1 that both methods emerge as partial approximations to a single duality map induced by the RMS → RMS operator norm.

The main contribution of this paper is to describe a procedure for constructing duality maps for general neural architectures. The procedure, which we call modular dualization, works in three steps:

Step 1: Operator norms are assigned to individual layers based on the input-output semantics of each layer;
Step 2: Based on these operator norms, duality maps are constructed for individual layers;
Step 3: Given the layerwise duality maps and the structure of the neural architecture, a single duality map is recursively induced on the full weight space of the architecture.

To instantiate this procedure for a rich family of neural architectures—including convolutional networks and transformers—we write down duality maps for Linear, Embed and Conv2D layers. We also provide novel, GPU-friendly algorithms for computing these duality maps. Overall, we hope that modular dualization will help in the principled design of the machine learning systems of the future.

2 Related Work

This paper constructs a duality map for general neural architectures. Our approach is based on assigning operator norms to individual network layers and using these layerwise norms to recursively induce a duality map on the full neural architecture. The most closely related prior work is a series of papers on spectral descent (Carlson et al., 2015a;b; 2016) and a paper on duality structure gradient descent (Flynn, 2017).

Spectral descent has been applied to restricted Boltzmann machines (Carlson et al., 2015a) and discrete graphical models (Carlson et al., 2016), but let us focus on the more closely related paper on spectral descent for deep learning (Carlson et al., 2015b). In that paper, the authors propose assigning the Schatten-∞ norm (a.k.a. spectral norm) to individual linear layers. This assignment is based on the observation that neural networks admit natural majorization bounds in the Schatten-∞ norm. The authors call the corresponding duality map for linear layers the "#-operator"—a name presumably inspired by the musical isomorphism (Grosse, 2022). The authors propose a cheap approximation to the #-operator based on sketching (Martinsson & Tropp, 2020), and they also propose a way to mix RMSprop-style pre-conditioning information (Tieleman & Hinton, 2012) into the weight updates. In contrast to our work, the authors only derive duality maps for single linear layers, and these maps are then heuristically extended to all-layer updates. Nonetheless, the authors achieve substantial wallclock speedups using variants of spectral descent to train small networks.

Now, let us turn our attention to duality structure gradient descent (Flynn, 2017), which constructs a duality map on the full weight space of the neural architecture based on identifying a Finsler structure (Deimling, 1985) inherent to neural networks. Similar to modular dualization, Flynn (2017)'s duality map works by assigning duality maps to each layer and then inducing a duality map on the full weight space. The substantial difference to our approach is that Flynn (2017) leverages a weighted sum (L1 combination) of layerwise norms to construct his full duality map.
This leads to optimization methods that only update a single layer at each iteration, and the methods need to be heuristically extended to achieve all-layer updates. In contrast, we leverage the modular norm (Large et al., 2024), which takes a weighted max (L∞ combination) of layerwise norms. In turn, our duality map leads directly to more conventional all-layer optimizers.

Another important difference between our work on modular duality and prior work on duality structure gradient descent is that we fully "modularize" our theory—meaning that our construction is explicitly recursive—and as such it is easy to code up into a software package. In this regard, we are inspired by a line of work that attempts to build optimization algorithms that automatically adapt to the structure of general computation graphs. The earliest work we know of in this category is the PhD thesis of Grant (2004) on disciplined convex programming, which aims to infer the convexity properties of general functions by breaking them up into subexpressions and applying composition theorems from convex analysis. More recent progress in this vein includes work on universal majorization-minimization algorithms (Streeter & Dillon, 2022; Streeter, 2023) and related papers on automatic majorization (Tran et al., 2015; Bernstein et al., 2023).

3 Theoretical Preliminaries

In this section, we introduce duality maps, a means of constructing duality maps based on norms, and finally a norm called the modular norm that is well-suited to describe the geometry of general neural architectures.

3.1 Duality Maps

Given a vector space V, we say that a function f : V → R is a linear functional on V if f is linear. We define the dual space V* to be the set of linear functionals on the vector space V. The dual space is itself a vector space, provided that addition is defined pointwise, (f + g)(x) := f(x) + g(x), and scalar multiplication is defined pointwise, (αf)(x) := α f(x), for any scalar α. By duality map we simply mean any rule for identifying members of the dual vector space V* with members of the primal vector space V, or potentially vice versa.

Let L : W → R denote the loss of a differentiable machine learning model with weight space W = R^n. The Taylor expansion of the loss at weight setting w ∈ W is given by:

    L(w + Δw) = L(w) + ∇_w L(w)ᵀ Δw + higher-order terms.    (3)

Observe that, in the first-order term, the gradient ∇_w L(w) is acting as a linear functional: it is pairing with the weight vector Δw ∈ W in a linear way to produce a real number. As such, we shall say that the gradient belongs to the dual weight space: ∇_w L(w) ∈ W*. We shall forbid ourselves from directly subtracting a member of the dual weight space W* from the weight space W. If we would like to conduct a gradient descent update, then we had better find a duality map to send the gradient back to the primal space W.

This restriction may seem absurd! After all, here the weight space W and its dual W* are both just R^n. However, insisting upon this type check serves to remind us that the curvature of the loss function may be highly heterogeneous. The next section will show one way to construct duality maps to account for this.

3.2 Steepest Descent on a Normed Space

Suppose that we have found a norm ‖·‖ : W → R and a sharpness parameter λ > 0 that serve as a good model of the higher-order terms in the Taylor expansion of the loss function given in Equation (3):

    L(w + Δw) ≈ L(w) + ∇_w L(w)ᵀ Δw + (λ/2) · ‖Δw‖².    (4)
In other words, the norm provides a good characterization of the heterogeneity in curvature of the loss function. Then it makes sense to solve for a weight update Δw by minimizing the right-hand side of Equation (4). We will show that the minimizer can be expressed in terms of a dual norm and a duality map:

Definition 1 (Dual norm). Given a norm ‖·‖ : R^n → R, the dual norm ‖·‖† of a vector g ∈ R^n is given by:

    ‖g‖† := max_{t ∈ R^n : ‖t‖ = 1} gᵀt.    (5)

Definition 2 (Duality map based on a norm). Given a norm ‖·‖ : R^n → R, we consider the duality map:

    dualize_‖·‖ g := arg max_{t ∈ R^n : ‖t‖ = 1} gᵀt,    (6)

where, if the arg max is not unique, dualize_‖·‖ returns any maximizer.

Given these definitions, minimizing the expression in the right-hand side of Equation (4) can be done using the following standard proposition, for which Bernstein & Newhouse (2024) provide a proof:

Proposition 1 (Steepest descent under a norm). For any g ∈ R^n thought of as "the gradient", any λ ≥ 0 thought of as "the sharpness", and any norm ‖·‖ : R^n → R with dual norm ‖·‖† and duality map dualize_‖·‖:

    arg min_{Δw ∈ R^n} [ gᵀΔw + (λ/2) ‖Δw‖² ] = −(‖g‖†/λ) × dualize_‖·‖ g.    (7)

In words: to find the minimizer of a linear term penalized by a squared norm, we need only evaluate the dual norm and a duality map. In this paper, we focus on constructing a duality map for the modular norm, which is defined on general neural architectures. The next section reviews duality maps for more standard norms.

3.3 Basic Norms and Duality Maps

Many basic norms and duality maps are already covered in prior work (Carlson et al., 2016; 2015a;b; Flynn, 2017). For some warmup examples, the following duality maps for vector norms are standard:

Example 1 (Duality map for the Euclidean norm). For a vector g ∈ R^d, we have dualize_‖·‖₂ g = g / ‖g‖₂.

Example 2 (Duality map for the infinity norm). For a vector g ∈ R^d, we have dualize_‖·‖∞ g = sign(g), where the sign function is applied entrywise and we are free to take sign(0) = 0.

In neural networks, the weight spaces of individual layers tend to have matrix structure. And layers with the same shape weight matrix may have semantically different input and output spaces—think embedding versus linear layers in a transformer. As such, we will need duality maps for different induced operator norms:

Definition 3 (Induced operator norm). Given a matrix M ∈ R^{d_out × d_in} and two normed vector spaces (R^{d_in}, ‖·‖_α) and (R^{d_out}, ‖·‖_β), the "α to β" induced operator norm is given by:

    ‖M‖_{α→β} = max_{x ∈ R^{d_in}} ‖Mx‖_β / ‖x‖_α.    (8)

For tensors, we define the duality map via dualize_‖·‖ G := arg max_{‖T‖ = 1} flatten(G)ᵀ flatten(T). For linear layers, we will need the duality map for the RMS → RMS induced operator norm. This ends up as a rescaled version of the spectral norm duality map from prior work (Carlson et al., 2015b; Flynn, 2017).

Example 3 (Duality map for the RMS → RMS operator norm). For a vector v ∈ R^d, we define the RMS norm to be the normalized Euclidean norm: ‖v‖_RMS = ‖v‖₂ / √d. Given a matrix W ∈ R^{d_out × d_in}, the RMS → RMS induced operator norm resolves to a rescaled spectral norm: ‖W‖_{RMS→RMS} = √(d_in/d_out) × ‖W‖_*, where ‖·‖_* denotes the standard spectral norm. For a matrix G ∈ R^{d_out × d_in} with reduced singular value decomposition G = UΣVᵀ, the corresponding duality map is given by dualize_{‖·‖_RMS→RMS} G = √(d_out/d_in) × UVᵀ.
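For concreteness, the following NumPy sketch implements the duality maps of Examples 1–3. It uses an exact SVD for the RMS → RMS map rather than the paper's rectangular Newton-Schulz iteration, so it should be read as an illustrative reference implementation, not the GPU-friendly algorithm the paper derives.

```python
import numpy as np

def dualize_euclidean(g):
    """Example 1: dualize under the Euclidean norm, g -> g / ||g||_2."""
    norm = np.linalg.norm(g)
    return g / norm if norm > 0 else g

def dualize_linf(g):
    """Example 2: dualize under the infinity norm, g -> sign(g)."""
    return np.sign(g)

def dualize_rms_to_rms(G):
    """Example 3: dualize under the RMS -> RMS operator norm.

    For G = U @ diag(S) @ Vt (reduced SVD), the dual update direction is
    sqrt(d_out / d_in) * U @ Vt. A Newton-Schulz iteration can replace the SVD
    on GPU, as discussed in the paper."""
    d_out, d_in = G.shape
    U, _, Vt = np.linalg.svd(G, full_matrices=False)
    return np.sqrt(d_out / d_in) * (U @ Vt)

# Illustrative steepest-descent update under the RMS -> RMS norm (Proposition 1),
# with the dual-norm/sharpness factor folded into a scalar learning rate:
# W -= lr * dualize_rms_to_rms(W_grad)
```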
Given a matrix W P Rdoutˆdin, the ℓ1 Ñ RMS induced operator norm resolves to the max RMS norm of the columns: }W }ℓ1ÑRMS “ maxi }colipW q}RMS. For a matrix G P Rdoutˆdin, the corresponding duality map dualize}¨}ℓ1ÑRMS G simply normalizes each column of G to have unit RMS norm: colipGq ÞÑ colipGq{}colipGq}RMS for each i “ 1, ..., din. 3.4 The Modular Norm The modular norm (Large et al., 2024) is intended to help characterize the heterogeneous curvature of general neural architectures. The construction first defines an abstract module type along with a notion of what is a good, or well-normed, module. Then combination rules are given for constructing new well-normed modules from a library of existing well-normed modules. So modules are a special case of combinator pattern from functional programming (Haskell Wiki Contributors, 2007). Modules are also related to the monoidal category from category theory (Fong & Spivak, 2019). We begin by defining the abstract notion of a module: Definition 4 (Module). Given input vector space X, output vector space Y and weight vector space W, a module M is an object with the following four attributes: (a) a function, M.forward : W ˆ X Ñ Y, which maps an input and a weight vector to an output; (b) a number, M.mass ě 0, which is used to set the proportion of feature learning that this module contributes to any supermodule; (c) a number, M.sensitivity ě 0, which estimates the module’s sensitivity to input perturbations; (d) a norm over the weight space, M.norm : W Ñ Rě0, sometimes abbreviated to just }¨}M. We shall care most about modules that are well-normed, which amounts to requiring that the forward function is Lipschitz-continuous in the weights with constant 1 and in the inputs with constant M.sensitivity: 4 Definition 5 (Well-normed module). Let M be a module on pX, Y, Wq, where the input and output spaces have respective norms }¨}X and }¨}Y. M is well-normed if for all inputs x P X and weights w P W: }∇wM.forwardpw, xq ˛ ∆w}Y ď M.normp∆wq for all ∆w P W; (9) }∇xM.forwardpw, xq ˛ ∆x}Y ď M.sensitivity ˚ }∆x}X for all ∆x P X. (10) The ˛ operator denotes summation over any shared tensor indices. This definition of well-normed-ness can be used as a guiding principle in the design of a library of atomic (i.e. handwritten) modules. First, norms should be assigned to the input and output space of each module based on the semantics of M.forward. Then a norm M.norm should be assigned to the module’s weight space and a number M.sensitivity should be chosen to make the module well-normed. Examples are given in Section 4.1. Given such a library of well-normed atomic modules, a compound module built through any arbitrary sequence of module compositions and module concatenations is automatically well-normed (Large et al., 2024). And if the atomic modules in the library are not only well-normed but are also smooth in an appropriate sense, then Large et al. (2024) give an automatic procedure for computing sharpness coefficients for any compound module built from the library. The relevant definition of module composition is as follows: Definition 6 (Module composition). Consider module M1 with input, output and weight space pX1, Y1, W1q and module M2 with input, output and weight space pX2, Y2, W2q. M1 and M2 are composable if X2 “ Y1. 
Their composite module M “ M2 ˝ M1 has input, output and weight space pX1, Y2, W1 ˆ W2q and attributes: (a) M.forwardppw1, w2q, xqq “ M2.forwardpw2, M1.forwardpw1, xqq; (b) M.mass “ M1.mass ` M2.mass; (c) M.sensitivity “ M1.sensitivity ˚ M2.sensitivity; (d) M.normppw1, w2qq given by: max ˆ M2.sensitivity ˚ M.mass M1.mass ˚ M1.normpw1q, M.mass M2.mass ˚ M2.normpw2q ˙ , where if M1.mass or M2.mass is zero, the corresponding term in the max is set to zero. So the composite norm is taken to be a weighted max over the norms of the two sub-modules, where the weight space of the first module is coupled to the input sensitivity of the second module. The module masses provide freedom to tune the importance of each sub-module in the norm, and Large et al. (2024) prove that module mass provides precise control over the amount of feature learning that can happen in each sub-module. Module concatenation is defined in a similar way to module composition: Definition 7 (Module concatenation). Consider module M1 with input, output and weight space pX1, Y1, W1q and module M2 with input, output and weight space pX2, Y2, W2q. We say that M1 and M2 are concatenatable if their input spaces match: X1 “ X2. The tuple module M “ pM1, M2q has input, output and weight space pX1, Y1 ˆ Y2, W1 ˆ W2q and the following list of attributes: (a) M.forwardppw1, w2q, xqq “ pM1.forwardpw1, xq, M2.forwardpw2, xqq; (b) M.mass “ M1.mass ` M2.mass; (c) M.sensitivity “ M1.sensitivity ` M2.sensitivity; (d) M.normpw1, w2q given by: max ˆ M.mass M1.mass ˚ M1.normpw1q, M.mass M2.mass ˚ M2.normpw2q ˙ , where if M1.mass or M2.mass is zero, the corresponding term in the max is set to zero. A shortcoming of the paper by Large et al. (2024) is that the power of the modular norm is not fully leveraged. In particular, the authors do modular normalization of training, where weight updates to modules are sometimes just naïvely divided by their norm. In this paper we make fuller use of the geometry implied by the modular norm by constructing the corresponding duality map, which we call modular dualization. 5 4 Modular Dualization In this section, we construct a duality map for general neural architectures. Our strategy is to first write down duality maps for atomic modules, i.e. individual layers. We then extend to arbitrary compound modules, i.e. full neural networks, by showing how duality maps should pass through composition and concatenation. 4.1 Duality Maps for Atomic Modules To construct a duality map for an atomic module A, the idea is to first fix norms on the input and output spaces that respect the semantics of A.forward. We should select norms that describe both how large we would like the inputs and outputs to be, and in what geometry we would like the outputs to evolve. Then we place a norm on the weight space such that A is well-normed: this is typically the operator norm (Definition 3) induced by the input and output norms. Finally we are in position to solve for the duality map, which we shall call A.dualize. We now give some examples of this procedure for the basic layer types of Linear, Embed and Conv2D. The results are summarized in Table 1. We start with the canonical example of an atomic module: Example 5 (The Linear module). The Linear module sends inputs from X “ Rdin to outputs in Y “ Rdout. The weight space is given by the matrix space W “ Rdoutˆdin. We endow the Linear module with attributes: 1. Linear.forwardpW , xq “ W x, the matrix-vector product; 2. Linear.sensitivity “ 1; 3. 
Linear.mass “ µ, where µ ě 0 is a hyperparameter; 4. Linear.normpW q “ }W }RMSÑRMS, the RMS Ñ RMS induced operator norm. Since the Linear module is intended to map to and from vectors of roughly unit RMS norm, we place the RMS norm on both the input and output space: }¨}X “ }¨}RMS and }¨}Y “ }¨}RMS. Then Linear is well-normed if the inputs and weights belong to the unit balls ␣ x P Rdin : }x}X ď 1 ( and ␣ W P Rdoutˆdin : Linear.normpW q ď 1 ( . Referring back to Section 3.3, the duality map corresponding to Linear.norm is then given by: 5. Linear.dualizepGq “ b dout din ˆ UV J, where the gradient G P Rdoutˆdin has reduced SVD G “ UΣV J. This single duality map recovers essential features of both maximal update parameterization (Yang & Hu, 2021, µP) and Shampoo (Gupta et al., 2018). In particular, the factor of a dout{din in Linear.dualize recovers spectral update scaling (Yang et al., 2023) that leads to µP. (Initializing such that Linear.normpW q “ 1 also recovers µP initialization scaling.) And the mapping G ÞÑ UV J is equivalent to Shampoo without accumulation (Bernstein & Newhouse, 2024). As such, we believe that duality maps may help reconcile different strands of deep learning research and provide a unifying basis for fast and scalable training algorithms. The Embed module provides a useful counterpoint to the Linear module. The difference between the two modules stems from the fact that the input spaces of Embed and Linear have different semantics. Example 6 (The Embed module). The Embed module sends inputs from X “ Rdin to outputs in Y “ Rdout. The weight space is given by the matrix space W “ Rdoutˆdin. We endow the Embed module with attributes: 1. Embed.forwardpW , xq “ W x, the matrix-vector product; 2. Embed.sensitivity “ 1; 3. Embed.mass “ µ, where µ ě 0 is a hyperparameter; 4. Embed.normpW q “ }W }ℓ1ÑRMS, the ℓ1 Ñ RMS induced operator norm. Embed is intended to map from one-hot vectors to vectors of roughly unit RMS norm, so we place the ℓ1 norm on the input space and the RMS norm on the output space: }¨}X “ }¨}ℓ1 and }¨}Y “ }¨}RMS. Then Embed is well-normed if the inputs and weights belong to the unit balls ␣ x P Rdin : }x}X ď 1 ( and ␣ W P Rdoutˆdin : Embed.normpW q ď 1 ( . Referring back to Section 3.3, the duality map for Embed.norm is: 5. Embed.dualizepGq performs the mapping coljpGq ÞÑ coljpGq }coljpGq}RMS for each column index j “ 1, ..., din. 6 Module Weight Space W Module.norm Module.dualize Linear Rdoutˆdin W ÞÑ }W }RMSÑRMS G ÞÑ b dout din ˆ UV J Embed Rdoutˆdin W ÞÑ }W }ℓ1ÑRMS coljpGq ÞÑ coljpGq }coljpGq}RMS Conv2D Rdoutˆdinˆkˆk W ÞÑ k2 maxk i,j“1 }W¨¨ij}RMSÑRMS G¨¨ij ÞÑ 1 k2 b dout din ˆ UijV J ij Table 1: Duality maps for three atomic modules: Linear, Embed, and Conv2D. These atomic modules are sufficient to build convolutional neural networks and transformers. In Linear.dualize, we let UΣV J denote the reduced SVD of the gradient matrix G. In Conv2D.dualize, we let UijΣijV J ij denote the reduced SVD of the slice of the gradient tensor G¨¨ij at kernel indices i and j. Section 5 provides GPU-friendly algorithms for computing these duality maps based on a new family of Newton-Schulz iterations that we propose. Finally, we consider a Conv2D module with a k ˆ k kernel. Conv2D has a more involved tensor structure than Linear and Embed. The calculations work by slicing up the weight tensor into a collection of k2 matrices. Example 7 (The Conv2D module). The Conv2D module sends inputs from X “ RWinˆHinˆdin to outputs in Y “ RWoutˆHoutˆdout. 
We think of this as mapping an input image of width Win, height Hin and with din color channels to an output image of width Wout, height Hout and with dout color channels. The weight space is given by the tensor space W “ Rdoutˆdinˆkˆk, where k is the kernel size. We endow Conv2D with attributes: 1. Conv2D.forwardpW , xq “ W f x, where f denotes 2D convolution; 2. Conv2D.sensitivity “ 1; 3. Conv2D.mass “ µ, where µ ě 0 is a hyperparameter; 4. Conv2D.normpW q “ k2 maxk i,j“1 }W¨¨ij}RMSÑRMS, the max RMS Ñ RMS norm over kernel indices. We would like pixel intensities in the inputs and outputs to be order one and undergo order one change. We formalize this by taking the input and output norms to be the spatial maximum of the RMS norms of all the color channel vectors: }x}X “ maxWin w“1 maxHin h“1 }xwh¨}RMS and }y}Y “ maxWout w“1 maxHout h“1 }ywh¨}RMS. Then Conv2D is well-normed if the inputs and weights belong to the unit balls ␣ x P RWinˆHinˆdin : }x}X ď 1 ( and ␣ W P Rdoutˆdinˆkˆk : Conv2D.normpW q ď 1 ( . Since the duality map for a max of norms decouples into one duality map per sub-norm, the duality map corresponding to Conv2D.norm is given by: 5. Conv2D.dualizepGq does G¨¨ij ÞÑ 1 k2 b dout din ˆ UijV J ij , where G¨¨ij has reduced SVD UijΣijV J ij . 4.2 Duality Maps for Bond Modules Large et al. (2024) define another class of basic modules: bond modules. Bonds are handwritten modules without weights. An example of a bond is the ReLU nonlinearity. For a bond B, the weight space is the zero vector space W “ t0u and the modular norm B.norm “ 0 ÞÑ 0. As such, the corresponding duality map is also B.dualize “ 0 ÞÑ 0. In a software package, one need not write norms or duality maps for bond modules. 4.3 Duality Maps for Compound Modules First, given two composable modules M1 and M2, the duality map for the composite M “ M2 ˝ M1 is given by: M.dualizepg1, g2q “ ˆ 1 M2.sensitivity ˚ M1.mass M.mass ˚ M1.dualizepg1q, M2.mass M.mass ˚ M2.dualizepg2q ˙ . (11) And second, given two concatenatable modules M1 and M2, the duality map for the tuple M “ pM1, M2q is: M.dualizepg1, g2q “ ˆM1.mass M.mass ˚ M1.dualizepg1q, M2.mass M.mass ˚ M2.dualizepg2q ˙ . (12) The proofs of Equations (11) and (12) follow in a straightforward manner from Definitions 6 and 7. 7 5 Fast Duality Maps For modular dualization to be practically feasible, we need ways of computing duality maps quickly. Inspecting the duality maps listed in Table 1, we see that Embed.dualize is easy to implement since it just involves computing vector norms of matrix columns. But Linear.dualize and Conv2D.dualize involve the projection: G “ UΣV J ÞÑ UV J, (13) where UΣV J is the reduced SVD of the matrix G. Since computing SVDs can be slow (Carlson et al., 2015b; Flynn, 2017), here we discuss three fast approximations to Equation (13) via sketching, iterations for inverse matrix roots, and a new family of rectangular Newton-Schulz iterations that we propose. Which method works best may depend on the condition number of the matrix G or the available computational resources. 5.1 Sketching Sketching is a randomized method (Martinsson & Tropp, 2020) that can be used to build low-rank approxi- mations to the SVD. Carlson et al. (2015b) already used sketching to provide a fast approximation to their #-operator. More recent papers have experimented with sketching in the context of Shampoo-type algorithms (Feinberg et al., 2023). 
A potential downside of approximating Equation (13) via sketching is that randomized SVD methods usually try to accurately approximate the largest singular values of a matrix (Martinsson & Tropp, 2020, Section 11.2) while the value of Equation (13) may lie in its action on the small singular values. 5.2 Iterations for Inverse Matrix Roots Given a full rank matrix G with reduced SVD UΣV J, we have that: UV J “ pGGJq´1{4 G pGJGq´1{4 “ pGGJq´1{2 G “ G pGJGq´1{2. (14) This provides a route to approximating Equation (13) since one can compute inverse matrix roots such as pGGJq´1{2 via Newton iteration (Lakić, 1998). This is discussed in Chapter 7 of Higham (2008)’s book and also see Anil et al. (2020)’s paper. Care must be taken with inverses whenever the matrix G is ill-conditioned. 5.3 Rectangular Newton-Schulz Iteration We developed a novel “rectangular Newton-Schulz iteration” for computing UV J. In short, if we first normalize the matrix G according to X0 “ G{}G}ℓ2Ñℓ2 (or alternatively X0 “ G{}G}F ) and then iterate: Xt`1 “ 3 2 ¨ Xt ´ 1 2 ¨ XtXJ t Xt, (15) then as t Ñ 8, the sequence Xt Ñ UV J. To see this, one can plot the univariate cubic function fpxq :“ 3 2 ¨ x ´ 1 2 ¨ x3 and see that, for 0 ă x ă ? 3, iterating this cubic will push x closer and closer to `1. The final step is to realize that the effect of the iteration in Equation (15) is to apply this cubic fpxq to each singular value of Xt. This shows that the spectral normalization X0 “ G{}G}ℓ2Ñℓ2 is stronger than what is required: we need only ensure that X0 has singular values no greater than ? 3 for the iteration to converge. The iteration in Equation (15) has the advantage over sketching that it always works on all singular values, and since the iteration does not compute inverse matrix roots it is well-behaved even on low-rank matrices. Finally, there are in fact a family of degree 2n ` 1 polynomial iterations of the form Xt`1 “ a ¨ Xt ` b ¨ XtXJ t Xt ` c ¨ pXtXJ t q2Xt ` ... ` z ¨ pXtXJ t qnXt (16) for suitable a, b, c, ..., z that could be used instead of Equation (15). One should choose coefficients a, b, c, ..., z so that the univariate polynomial gpxq “ a ¨ x ` b ¨ x3 ` c ¨ x5 ` ... ` z ¨ x2n`1 is a suitable approximation to signpxq. One may try to further accelerate the iteration by “tuning” the coefficients a, b, c, ..., z empirically. We came up with Equation (15) by inspecting Equation 5.22 in Higham (2008)’s book, which provides a related iteration for computing the “matrix sign function” for square matrices. We developed the graphical understanding ourselves and used this as the basis for proposing the higher-order polynomial iterations. 8 6 Discussion This paper develops the theory of modular duality and the procedure of modular dualization as means to construct duality maps for general neural architectures. Here, we comment on implications and connections. 6.1 A Type System for Deep Learning Part of the inspiration for this work is the idea of building a fully-fledged type system for deep learning. We think that activation spaces should be typed by their intended norm and the intended size of activations in that norm. This information would help in the construction of well-normed modules (see Section 4.1). Modules should be typed according to Definition 4. And, as suggested in the introduction, gradients should be explicitly typed as dual vectors. A duality map should flip the type of a dual vector to a primal vector. 
We plan to use the Modula deep learning package (Large et al., 2024) as a testbed for these ideas. 6.2 Neural Network Speedrunning We believe that the ideas in this paper can help in the design of faster training methods. In fact, a new NanoGPT training speed record was recently set (Jordan, 2024) using our rectangular Newton-Schulz iteration. We communicated the iteration to Keller Jordan through our workshop paper (Bernstein & Newhouse, 2024). 6.3 Modular Duality: A Unifying Theoretical Framework for Fast and Scalable Training An important topic in contemporary optimization research is the design of fast and scalable training methods for neural networks. In fact, the theme of the Optimization for Machine Learning workshop at this year’s NeurIPS conference is “scaling up optimization” (OPT, 2024). Two popular methods in this research space are maximal update parameterization (Yang & Hu, 2021, µP), which allows for increasing network width without changing the optimal learning rate, and Shampoo (Gupta et al., 2018), a variant of which (Shi et al., 2023) won a speed challenge at the inaugural AlgoPerf optimization competition (Dahl et al., 2023). We showed in Section 4.1 that essential features of both µP and Shampoo are recovered from the single duality map Linear.dualize. We think that, on a basic theoretical level, µP and Shampoo should be viewed as partial approximations to this duality map. This observation helps put µP and Shampoo on a consistent theoretical footing, orients the methods with respect to overlooked prior work on spectral descent (Carlson et al., 2015b) and duality structure gradient descent (Flynn, 2017), and suggests new ways to generalize these methods to arbitrary layer types and network architectures via the modular norm and modular dualization. 6.4 On the Alignment of Activations and Updates Recent work (Yang et al., 2023; Everett et al., 2024; Large et al., 2024) has singled out the following question as important to the design of scalable deep learning systems: to what extent do gradient updates to neural network layers align with incoming activation vectors? This question is important since it helps inform how large weight updates need to be to induce a certain amount of change in layer outputs. Duality maps such as Linear.dualize and Conv2D.dualize may help simplify the answer to this question, since they project gradients to scaled semi-orthogonal matrices for which all singular values have the same magnitude. 6.5 A Numerical Paradox: The Weights Don’t Change! Past work (Lee et al., 2019; Jesus et al., 2021) has pointed out an apparent paradox in deep learning: the weights seem to move a vanishing amount from initialization in the limit of large network width. This finding has motivated a substantial amount of work on linearized training dynamics (Jacot et al., 2018). We attempted to resolve this paradox in prior work by showing that the weights move a roughly constant amount at any width when the change is measured in spectral norm (Yang et al., 2023). But duality maps change the story again: Linear.dualize ramps up the stable rank of updates, so the weights should move a non-trivial relative amount at large width even in the Frobenius norm—provided the batch size is not too small. 9 7 Conclusion This paper has proposed a recursive procedure called modular dualization for building duality maps for general neural architectures. The procedure unifies past strands of optimization research on Shampoo (Gupta et al., 2018) and µP (Yang & Hu, 2021). 
Partial implementations have already led to significant wall-clock speedups in transformer training (Jordan, 2024). Our rectangular Newton-Schulz iteration provides a GPU-friendly and numerically stable means of dualizing under the RMS Ñ RMS operator norm, while avoiding some of the downsides of sketching-based approaches (Carlson et al., 2015b). Overall, we hope that our theory of modular duality provides a clarifying toolkit for the design and analysis of deep learning systems. Acknowledgements Many ideas in this paper were developed jointly with Tim Large before he left to work at a tech company. We are grateful to Phillip Isola for invaluable discussions. We also thank Jack Gallagher, Keller Jordan, Simo Ryu, Rogier Brussee, Tongzhou Wang, Victor Butoi, Jeffrey Cider and Volkan Cevher for helpful conversations. References Shun-ichi Amari. Information Geometry and Its Applications. Springer, 2016. Cited on page 1. Rohan Anil, Vineet Gupta, Tomer Koren, Kevin Regan, and Yoram Singer. Scalable second order optimization for deep learning. arXiv:2002.09018, 2020. Cited on page 8. Jeremy Bernstein and Laker Newhouse. Old optimizer, new norm: An anthology. In Workshop on Optimization for Machine Learning, 2024. Cited on pages 3, 6, and 9. Jeremy Bernstein, Chris Mingard, Kevin Huang, Navid Azizan, and Yisong Yue. Automatic Gradient Descent: Deep Learning without Hyperparameters. arXiv:2304.05187, 2023. Cited on page 2. Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004. Cited on page 1. David Carlson, Volkan Cevher, and Lawrence Carin. Stochastic spectral descent for restricted Boltzmann machines. In International Conference on Artificial Intelligence and Statistics, 2015a. Cited on pages 2 and 4. David Carlson, Ya-Ping Hsieh, Edo Collins, Lawrence Carin, and Volkan Cevher. Stochastic spectral descent for discrete graphical models. Selected Topics in Signal Processing, 2016. Cited on pages 2 and 4. David E. Carlson, Edo Collins, Ya-Ping Hsieh, Lawrence Carin, and Volkan Cevher. Preconditioned spectral descent for deep learning. In Neural Information Processing Systems, 2015b. Cited on pages 1, 2, 4, 8, 9, and 10. Sean M. Carroll. Spacetime and Geometry: An Introduction to General Relativity. Cambridge University Press, 2019. Cited on page 1. George E. Dahl, Frank Schneider, Zachary Nado, Naman Agarwal, Chandramouli Shama Sastry, Philipp Hennig, Sourabh Medapati, Runa Eschenhagen, Priya Kasimbeg, Daniel Suo, Juhan Bae, Justin Gilmer, Abel L. Peirson, Bilal Khan, Rohan Anil, Mike Rabbat, Shankar Krishnan, Daniel Snider, Ehsan Amid, Kongtao Chen, Chris J. Maddison, Rakshith Vasudev, Michal Badura, Ankush Garg, and Peter Mattson. Benchmarking neural network training algorithms. arXiv:2306.07179, 2023. Cited on page 9. Klaus Deimling. Nonlinear Functional Analysis. Springer Berlin, Heidelberg, 1985. Cited on page 2. Katie E. Everett, Lechao Xiao, Mitchell Wortsman, Alexander A. Alemi, Roman Novak, Peter J. Liu, Izzeddin Gur, Jascha Sohl-Dickstein, Leslie Pack Kaelbling, Jaehoon Lee, and Jeffrey Pennington. Scaling exponents across parameterizations and optimizers. In International Conference on Machine Learning, 2024. Cited on page 9. 10 Vladimir Feinberg, Xinyi Chen, Y. Jennifer Sun, Rohan Anil, and Elad Hazan. Sketchy: Memory-efficient adaptive regularization with frequent directions. In Neural Information Processing Systems, 2023. Cited on page 8. Thomas Flynn. 
The duality structure gradient descent algorithm: Analysis and applications to neural networks. arXiv:1708.00523, 2017. Cited on pages 1, 2, 4, 8, and 9. Brendan Fong and David I. Spivak. An Invitation to Applied Category Theory: Seven Sketches in Composi- tionality. Cambridge University Press, 2019. Cited on page 4. Michael Charles Grant. Disciplined Convex Programming. PhD dissertation, Stanford University, 2004. Cited on page 2. Roger Grosse. Metrics. Lecture 3 of CSC2541: Neural Net Training Dynamics, 2022. Cited on pages 1 and 2. Vineet Gupta, Tomer Koren, and Yoram Singer. Shampoo: Preconditioned stochastic tensor optimization. In International Conference on Machine Learning, 2018. Cited on pages 6, 9, and 10. Haskell Wiki Contributors. Combinator pattern. Haskell Wiki, 2007. URL https://wiki.haskell.org/ Combinator_pattern. Cited on page 4. Nicholas J. Higham. Functions of Matrices. Society for Industrial and Applied Mathematics, 2008. Cited on page 8. Arthur Jacot, Franck Gabriel, and Clement Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In Neural Information Processing Systems, 2018. Cited on page 9. Ricardo J. Jesus, Mário L. Antunes, Rui A. da Costa, Sergey N. Dorogovtsev, José F. F. Mendes, and Rui L. Aguiar. Effect of initial configuration of weights on training and function of artificial neural networks. Mathematics, 2021. Cited on page 9. Keller Jordan. New training speed record for @karpathy’s 124M-parameter NanoGPT setup: 3.28 Fineweb validation loss in 3.7B training tokens. https://x.com/kellerjordan0/status/1842300916864844014, 2024. Cited on pages 9 and 10. Slobodan Lakić. On the computation of the matrix k-th root. Journal of Applied Mathematics and Mechanics, 1998. Cited on page 8. Tim Large, Yang Liu, Minyoung Huh, Hyojin Bahng, Phillip Isola, and Jeremy Bernstein. Scalable optimization in the modular norm. In Neural Information Processing Systems, 2024. Cited on pages 2, 4, 5, 7, and 9. Jaehoon Lee, Lechao Xiao, Samuel Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl-Dickstein, and Jeffrey Pennington. Wide neural networks of any depth evolve as linear models under gradient descent. In Neural Information Processing Systems, 2019. Cited on page 9. Per-Gunnar Martinsson and Joel A. Tropp. Randomized numerical linear algebra: Foundations and algorithms. Acta Numerica, 2020. Cited on pages 2 and 8. Arkady S. Nemirovsky and David B. Yudin. Problem complexity and method efficiency in optimization. Wiley, 1983. Cited on page 1. OPT. Optimization for Machine Learning, 2024. URL https://opt-ml.org/. Cited on page 9. J. J. Sakurai and Jim Napolitano. Modern Quantum Mechanics. Cambridge University Press, 2020. Cited on page 1. Hao-Jun Michael Shi, Tsung-Hsien Lee, Shintaro Iwasaki, Jose Gallego-Posada, Zhijing Li, Kaushik Rangadurai, Dheevatsa Mudigere, and Michael Rabbat. A distributed data-parallel PyTorch implementation of the distributed Shampoo optimizer for training neural networks at-scale. arXiv:2309.06497, 2023. Cited on pages 2 and 9. 11 Matthew Streeter. Universal majorization-minimization algorithms. arXiv:2308.00190, 2023. Cited on page 2. Matthew J. Streeter and Joshua V. Dillon. Automatically bounding the Taylor remainder series: Tighter bounds and new applications. arXiv:2212.11429, 2022. Cited on page 2. Tijmen Tieleman and Geoffrey Hinton. RMSprop. Coursera: Neural Networks for Machine Learning, Lecture 6.5, 2012. Cited on page 2. Dung T. Tran, Nobutaka Ono, and Emmanuel Vincent. 
Fast DNN training based on auxiliary function technique. International Conference on Acoustics, Speech and Signal Processing, 2015. Cited on page 2. Greg Yang and Edward J. Hu. Tensor programs IV: Feature learning in infinite-width neural networks. In International Conference on Machine Learning, 2021. Cited on pages 1, 6, 9, and 10. Greg Yang, James B. Simon, and Jeremy Bernstein. A spectral condition for feature learning. arXiv:2310.17813, 2023. Cited on pages 6 and 9. 12
An old idea in optimization theory says that since the gradient is a dual vector it may not be subtracted from the weights without first being mapped to the primal space where the weights reside. We take this idea seriously in this paper and construct such a duality map for general neural networks. Our map, which we call modular dualization, forms a unifying theoretical basis for training algorithms that are a) fast and b) scalable. Modular dualization involves first assigning operator norms to layers based on the semantics of each layer, and then using these layerwise norms to recursively induce a duality map on the weight space of the full neural architecture. We conclude by deriving GPU-friendly algorithms for dualizing Embed, Linear and Conv2D layers -- the latter two methods are based on a new rectangular Newton-Schulz iteration that we propose. Our iteration was recently used to set new speed records for training NanoGPT. Overall, we hope that our theory of modular duality will yield a next generation of fast and scalable optimizers for general neural architectures.
12
Modular Duality in Deep Learning Jeremy Bernstein [email protected] Laker Newhouse [email protected] MIT CSAIL In this paper, we pursue a rigorous and first-principles theoretical framework for designing neural network training algorithms. We hope that building such a framework will facilitate the design of a next generation of fast and scalable optimizers that are automatically tailored to different neural architectures. While gradient descent is the workhorse of modern machine learning, the most vanilla form of the algorithm does not, in our view, pass a basic type check. For a gradient update to type check, we insist that the gradient must be passed through a duality map before being multiplied by a learning rate and applied to the weights: weight ´ LR ˚ weight.grad type check: failed! (1) weight ´ LR ˚ dualizepweight.gradq type check: passed! (2) Why? The reason is that the loss function may not be equally smooth in all directions in weight space, and there is no reason for the sizes of different components of the raw gradient vector to respect this heterogeneity. In other words, the geometry of the loss function may be non-isotropic. Insisting on a type check should force the user to become cognizant of this issue and to find a suitable duality map. A good duality map should adjust the size and direction of the gradient to respect the smoothness structure of the loss function. Duality maps on vector spaces are commonplace in physics and applied math. Examples include the musical isomorphism in differential geometry (Grosse, 2022), raising and lowering indices in general relativity (Carroll, 2019) and the bra-ket notation in quantum mechanics (Sakurai & Napolitano, 2020). Duality maps are also central to several optimization theories including mirror descent (Nemirovsky & Yudin, 1983), natural gradient descent (Amari, 2016) and steepest descent on a normed space (Boyd & Vandenberghe, 2004). Despite the efforts of some prescient papers (Carlson et al., 2015b; Flynn, 2017), the latter kind of duality map involving normed vector spaces is yet to puncture the deep learning mainstream. We believe that duality is a key theoretical concept that will help in building performant large-scale machine learning systems. To support this belief, we show in this paper that two important and seemingly disparate methods in contemporary optimization research may be seen as approximations to a single duality map. These methods are maximal update parameterization (Yang & Hu, 2021), which is aimed at scalable training, 1 arXiv:2410.21265v1 [cs.LG] 28 Oct 2024 and Shampoo (Shi et al., 2023), which is targeted at fast training. We show in Section 4.1 that both methods emerge as partial approximations to a single duality map induced by the RMS–RMS operator norm. The main contribution of this paper is to describe a procedure for constructing duality maps for general neural architectures. The procedure, which we call modular dualization, works in three steps: Step 1: Operator norms are assigned to individual layers based on the input-output semantics of each layer; Step 2: Based on these operator norms, duality maps are constructed for individual layers; Step 3: Given the layerwise duality maps and the structure of the neural architecture, a single duality map is recursively induced on the full weight space of the architecture. To instantiate this procedure for a rich family of neural architectures—including convolutional networks and transformers—we write down duality maps for Linear, Embed and Conv2D layers. 
We also provide novel, GPU-friendly algorithms for computing these duality maps. Overall, we hope that modular dualization will help in the principled design of the machine learning systems of the future. 2 Related Work This paper constructs a duality map for general neural architectures. Our approach is based on assigning operator norms to individual network layers and using these layerwise norms to recursively induce a duality map on the full neural architecture. The most closely related prior work is a series of papers on spectral descent (Carlson et al., 2015a;b; 2016) and a paper on duality structure gradient descent (Flynn, 2017). Spectral descent has been applied to restricted Boltzmann machines (Carlson et al., 2015a) and discrete graphical models (Carlson et al., 2016), but let us focus on the more closely related paper on spectral descent for deep learning (Carlson et al., 2015b). In that paper, the authors propose assigning the Schatten-8 norm (a.k.a. spectral norm) to individual linear layers. This assignment is based on the observation that neural networks admit natural majorization bounds in the Schatten-8 norm. The authors call the corresponding duality map for linear layers the “#-operator”—a name presumably inspired by the musical isomorphism (Grosse, 2022). The authors propose a cheap approximation to the #-operator based on sketching (Martinsson & Tropp, 2020), and they also propose a way to mix RMSprop-style pre-conditioning information (Tieleman & Hinton, 2012) into the weight updates. In contrast to our work, the authors only derive duality maps for single linear layers, and these maps are then heuristically extended to all-layer updates. Nonetheless, the authors achieve substantial wallclock speedups using variants of spectral descent to train small networks. Now, let us turn our attention to duality structure gradient descent (Flynn, 2017), which constructs a duality map on the full weight space of the neural architecture based on identifying a Finsler structure (Deimling, 1985) inherent to neural networks. Similar to modular dualization, Flynn (2017)’s duality map works by assigning duality maps to each layer and then inducing a duality map on the full weight space. The substantial difference to our approach is that Flynn (2017) leverages a weighted sum (L1 combination) of layerwise norms to construct his full duality map. This leads to optimization methods that only update a single layer at each iteration, and the methods need to be heuristically extended to achieve all-layer updates. In contrast, we leverage the modular norm (Large et al., 2024), which takes a weighted max (L8 combination) of layerwise norms. In turn, our duality map leads directly to more conventional all-layer optimizers. Another important difference between our work on modular duality and prior work on duality structure gradient descent is that we fully “modularize” our theory—meaning that our construction is explicitly recursive—and as such it is easy to code up into a software package. In this regard, we are inspired by a line of work that attempts to build optimization algorithms that automatically adapt to the structure of general computation graphs. The earliest work we know of in this category is the PhD thesis of Grant (2004) on disciplined convex programming, which aims to infer the convexity properties of general functions by breaking them up into subexpressions and applying composition theorems from convex analysis. 
More recent progress in this vein includes work on universal majorization-minimization algorithms (Streeter & Dillon, 2022; Streeter, 2023) and related papers on automatic majorization (Tran et al., 2015; Bernstein et al., 2023). 2 3 Theoretical Preliminaries In this section, we introduce duality maps, a means of constructing duality maps based on norms, and finally a norm called the modular norm that is well-suited to describe the geometry of general neural architectures. 3.1 Duality Maps Given a vector space V, we say that a function f : V Ñ R is a linear functional on V if f is linear. We define the dual space V˚ to be the set of linear functionals on the vector space V. The dual space is itself a vector space provided that addition is defined pointwise pf ` gqpxq :“ fpxq ` gpxq and scalar multiplication is defined pointwise pαfqpxq :“ αfpxq for any scalar α. By duality map we simply mean any rule for identifying members of the dual vector space V˚ with members of the primal vector space V, or potentially vice versa. Let L : W Ñ R denote the loss of a differentiable machine learning model with weight space W “ Rn. The Taylor expansion of the loss at weight setting w P W is given by: Lpw ` ∆wq “ Lpwq ` ∇wLpwqJ∆w ` higher-order terms. (3) Observe that, in the first-order term, the gradient ∇wLpwq is acting as a linear functional: it is pairing with the weight vector ∆w P W in a linear way to produce a real number. As such, we shall say that the gradient belongs to the dual weight space: ∇wLpwq P W˚. We shall forbid ourselves from directly subtracting a member of the dual weight space W˚ from the weight space W. If we would like to conduct a gradient descent update, then we had better find a duality map to send the gradient back to the primal space W. This restriction may seem absurd! After all, here the weight space W and its dual W˚ are both just Rn. However, insisting upon this type check serves to remind us that the curvature of the loss function may be highly heterogeneous. The next section will show one way to construct duality maps to account for this. 3.2 Steepest Descent on a Normed Space Suppose that we have found a norm }¨} : W Ñ R and a sharpness parameter λ ą 0 that serve as a good model of the higher-order terms in the Taylor expansion of the loss function given in Equation (3): Lpw ` ∆wq « Lpwq ` ∇wLpwqJ∆w ` λ 2 ¨ }∆w}2. (4) In other words, the norm provides a good characterization of the heterogeneity in curvature of the loss function. Then it makes sense to solve for a weight update ∆w by minimizing the right-hand side of Equation (4). We will show that the minimizer can be expressed in terms of a dual norm and a duality map: Definition 1 (Dual norm). Given a norm }¨} : Rn Ñ R, the dual norm }¨}: of a vector g P Rn is given by: }g}: :“ max tPRn:}t}“1 gJt. (5) Definition 2 (Duality map based on a norm). Given a norm }¨} : Rn Ñ R, we consider the duality map: dualize}¨} g :“ arg max tPRn:}t}“1 gJt, (6) where, if the arg max is not unique, dualize}¨} returns any maximizer. Given these definitions, minimizing the expression in the right-hand side of Equation (4) can be done using the following standard proposition, for which Bernstein & Newhouse (2024) provide a proof: Proposition 1 (Steepest descent under a norm). For any g P Rn thought of as “the gradient”, any λ ě 0 thought of as “the sharpness”, and any norm }¨} : Rn Ñ R with dual norm }¨}: and duality map dualize}¨}: arg min ∆wPRn „ gJ∆w ` λ 2 }∆w}2 ȷ “ ´}g}: λ ˆ dualize}¨} g. 
(7) In words: to find the minimizer of a linear term penalized by a squared norm, we need only evaluate the dual norm and a duality map. In this paper, we focus on constructing a duality map for the modular norm, which is defined on general neural architectures. The next section reviews duality maps for more standard norms. 3 3.3 Basic Norms and Duality Maps Many basic norms and duality maps are already covered in prior work (Carlson et al., 2016; 2015a;b; Flynn, 2017). For some warmup examples, the following duality maps for vector norms are standard: Example 1 (Duality map for the Euclidean norm). For a vector g P Rd, we have dualize}¨}2 g “ g{}g}2. Example 2 (Duality map for the infinity norm). For a vector g P Rd, we have dualize}¨}8 g “ signpgq, where the sign function is applied entrywise and we are free to take signp0q “ 0. In neural networks, the weight spaces of individual layers tend to have matrix structure. And layers with the same shape weight matrix may have semantically different input and output spaces—think embedding versus linear layers in a transformer. As such, we will need duality maps for different induced operator norms: Definition 3 (Induced operator norm). Given a matrix M P Rdoutˆdin and two normed vector spaces pRdin, }¨}αq and pRdout, }¨}βq, the “α to β” induced operator norm is given by: }M}αÑβ “ max xPRdin }Mx}β }x}α . (8) For tensors, we define the duality map via dualize}¨} G :“ arg max}T }“1 flattenpGqJ flattenpT q. For linear layers, we will need the duality map for the RMS Ñ RMS induced operator norm. This ends up as a rescaled version of the spectral norm duality map from prior work (Carlson et al., 2015b; Flynn, 2017). Example 3 (Duality map for the RMS Ñ RMS operator norm). For a vector v P Rd, we define the RMS norm to be the normalized Euclidean norm: }v}RMS “ }v}2{ ? d. Given a matrix W P Rdoutˆdin, the RMS Ñ RMS induced operator norm resolves to a rescaled spectral norm: }W }RMSÑRMS “ a din{dout ˆ }W }˚, where }¨}˚ denotes the standard spectral norm. For a matrix G P Rdoutˆdin with reduced singular value decomposition G “ UΣV J, the corresponding duality map is given by dualize}¨}RMSÑRMS G “ a dout{din ˆ UV J. And for embedding layers, we will need the duality map for the ℓ1 Ñ RMS operator norm: Example 4 (Duality map for the ℓ1 Ñ RMS operator norm). Given a matrix W P Rdoutˆdin, the ℓ1 Ñ RMS induced operator norm resolves to the max RMS norm of the columns: }W }ℓ1ÑRMS “ maxi }colipW q}RMS. For a matrix G P Rdoutˆdin, the corresponding duality map dualize}¨}ℓ1ÑRMS G simply normalizes each column of G to have unit RMS norm: colipGq ÞÑ colipGq{}colipGq}RMS for each i “ 1, ..., din. 3.4 The Modular Norm The modular norm (Large et al., 2024) is intended to help characterize the heterogeneous curvature of general neural architectures. The construction first defines an , gradients should be explicitly typed as dual vectors. A duality map should flip the type of a dual vector to a primal vector. We plan to use the Modula deep learning package (Large et al., 2024) as a testbed for these ideas. 6.2 Neural Network Speedrunning We believe that the ideas in this paper can help in the design of faster training methods. In fact, a new NanoGPT training speed record was recently set (Jordan, 2024) using our rectangular Newton-Schulz iteration. We communicated the iteration to Keller Jordan through our workshop paper (Bernstein & Newhouse, 2024). 
6.3 Modular Duality: A Unifying Theoretical Framework for Fast and Scalable Training An important topic in contemporary optimization research is the design of fast and scalable training methods for neural networks. In fact, the theme of the Optimization for Machine Learning workshop at this year’s NeurIPS conference is “scaling up optimization” (OPT, 2024). Two popular methods in this research space are maximal update parameterization (Yang & Hu, 2021, µP), which allows for increasing network width without changing the optimal learning rate, and Shampoo (Gupta et al., 2018), a variant of which (Shi et al., 2023) won a speed challenge at the inaugural AlgoPerf optimization competition (Dahl et al., 2023). We showed in Section 4.1 that essential features of both µP and Shampoo are recovered from the single duality map Linear.dualize. We think that, on a basic theoretical level, µP and Shampoo should be viewed as partial approximations to this duality map. This observation helps put µP and Shampoo on a consistent theoretical footing, orients the methods with respect to overlooked prior work on spectral descent (Carlson et al., 2015b) and duality structure gradient descent (Flynn, 2017), and suggests new ways to generalize these methods to arbitrary layer types and network architectures via the modular norm and modular dualization. 6.4 On the Alignment of Activations and Updates Recent work (Yang et al., 2023; Everett et al., 2024; Large et al., 2024) has singled out the following question as important to the design of scalable deep learning systems: to what extent do gradient updates to neural network layers align with incoming activation vectors? This question is important since it helps inform how large weight updates need to be to induce a certain amount of change in layer outputs. Duality maps such as Linear.dualize and Conv2D.dualize may help simplify the answer to this question, since they project gradients to scaled semi-orthogonal matrices for which all singular values have the same magnitude. 6.5 A Numerical Paradox: The Weights Don’t Change! Past work (Lee et al., 2019; Jesus et al., 2021) has pointed out an apparent paradox in deep learning: the weights seem to move a vanishing amount from initialization in the limit of large network width. This finding has motivated a substantial amount of work on linearized training dynamics (Jacot et al., 2018). We attempted to resolve this paradox in prior work by showing that the weights move a roughly constant amount at any width when the change is measured in spectral norm (Yang et al., 2023). But duality maps change the story again: Linear.dualize ramps up the stable rank of updates, so the weights should move a non-trivial relative amount at large width even in the Frobenius norm—provided the batch size is not too small. 9 7 Conclusion This paper has proposed a recursive procedure called modular dualization for building duality maps for general neural architectures. The procedure unifies past strands of optimization research on Shampoo (Gupta et al., 2018) and µP (Yang & Hu, 2021). Partial implementations have already led to significant wall-clock speedups in transformer training (Jordan, 2024). Our rectangular Newton-Schulz iteration provides a GPU-friendly and numerically stable means of dualizing under the RMS Ñ RMS operator norm, while avoiding some of the downsides of sketching-based approaches (Carlson et al., 2015b). 
Overall, we hope that our theory of modular duality provides a clarifying toolkit for the design and analysis of deep learning systems. Acknowledgements Many ideas in this paper were developed jointly with Tim Large before he left to work at a tech company. We are grateful to Phillip Isola for invaluable discussions. We also thank Jack Gallagher, Keller Jordan, Simo Ryu, Rogier Brussee, Tongzhou Wang, Victor Butoi, Jeffrey Cider and Volkan Cevher for helpful conversations. References Shun-ichi Amari. Information Geometry and Its Applications. Springer, 2016. Cited on page 1. Rohan Anil, Vineet Gupta, Tomer Koren, Kevin Regan, and Yoram Singer. Scalable second order optimization for deep learning. arXiv:2002.09018, 2020. Cited on page 8. Jeremy Bernstein and Laker Newhouse. Old optimizer, new norm: An anthology. In Workshop on Optimization for Machine Learning, 2024. Cited on pages 3, 6, and 9. Jeremy Bernstein, Chris Mingard, Kevin Huang, Navid Azizan, and Yisong Yue. Automatic Gradient Descent: Deep Learning without Hyperparameters. arXiv:2304.05187, 2023. Cited on page 2. Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004. Cited on page 1. David Carlson, Volkan Cevher, and Lawrence Carin. Stochastic spectral descent for restricted Boltzmann machines. In International Conference on Artificial Intelligence and Statistics, 2015a. Cited on pages 2 and 4. David Carlson, Ya-Ping Hsieh, Edo Collins, Lawrence Carin, and Volkan Cevher. Stochastic spectral descent for discrete graphical models. Selected Topics in Signal Processing, 2016. Cited on pages 2 and 4. David E. Carlson, Edo Collins, Ya-Ping Hsieh, Lawrence Carin, and Volkan Cevher. Preconditioned spectral descent for deep learning. In Neural Information Processing Systems, 2015b. Cited on pages 1, 2, 4, 8, 9, and 10. Sean M. Carroll. Spacetime and Geometry: An Introduction to General Relativity. Cambridge University Press, 2019. Cited on page 1. George E. Dahl, Frank Schneider, Zachary Nado, Naman Agarwal, Chandramouli Shama Sastry, Philipp Hennig, Sourabh Medapati, Runa Eschenhagen, Priya Kasimbeg, Daniel Suo, Juhan Bae, Justin Gilmer, Abel L. Peirson, Bilal Khan, Rohan Anil, Mike Rabbat, Shankar Krishnan, Daniel Snider, Ehsan Amid, Kongtao Chen, Chris J. Maddison, Rakshith Vasudev, Michal Badura, Ankush Garg, and Peter Mattson. Benchmarking neural network training algorithms. arXiv:2306.07179, 2023. Cited on page 9. Klaus Deimling. Nonlinear Functional Analysis. Springer Berlin, Heidelberg, 1985. Cited on page 2. Katie E. Everett, Lechao Xiao, Mitchell Wortsman, Alexander A. Alemi, Roman Novak, Peter J. Liu, Izzeddin Gur, Jascha Sohl-Dickstein, Leslie Pack Kaelbling, Jaehoon Lee, and Jeffrey Pennington. Scaling exponents across parameterizations and optimizers. In International Conference on Machine Learning, 2024. Cited on page 9. 10 Vladimir Feinberg, Xinyi Chen, Y. Jennifer Sun, Rohan Anil, and Elad Hazan. Sketchy: Memory-efficient adaptive regularization with frequent directions. In Neural Information Processing Systems, 2023. Cited on page 8. Thomas Flynn. The duality structure gradient descent algorithm: Analysis and applications to neural networks. arXiv:1708.00523, 2017. Cited on pages 1, 2, 4, 8, and 9. Brendan Fong and David I. Spivak. An Invitation to Applied Category Theory: Seven Sketches in Composi- tionality. Cambridge University Press, 2019. Cited on page 4. Michael Charles Grant. Disciplined Convex Programming. PhD dissertation, Stanford University, 2004. Cited on page 2. 
Roger Grosse. Metrics. Lecture 3 of CSC2541: Neural Net Training Dynamics, 2022. Cited on pages 1 and 2. Vineet Gupta, Tomer Koren, and Yoram Singer. Shampoo: Preconditioned stochastic tensor optimization. In International Conference on Machine Learning, 2018. Cited on pages 6, 9, and 10. Haskell Wiki Contributors. Combinator pattern. Haskell Wiki, 2007. URL https://wiki.haskell.org/ Combinator_pattern. Cited on page 4. Nicholas J. Higham. Functions of Matrices. Society for Industrial and Applied Mathematics, 2008. Cited on page 8. Arthur Jacot, Franck Gabriel, and Clement Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In Neural Information Processing Systems, 2018. Cited on page 9. Ricardo J. Jesus, Mário L. Antunes, Rui A. da Costa, Sergey N. Dorogovtsev, José F. F. Mendes, and Rui L. Aguiar. Effect of initial configuration of weights on training and function of artificial neural networks. Mathematics, 2021. Cited on page 9. Keller Jordan. New training speed record for @karpathy’s 124M-parameter NanoGPT setup: 3.28 Fineweb validation loss in 3.7B training tokens. https://x.com/kellerjordan0/status/1842300916864844014, 2024. Cited on pages 9 and 10. Slobodan Lakić. On the computation of the matrix k-th root. Journal of Applied Mathematics and Mechanics, 1998. Cited on page 8. Tim Large, Yang Liu, Minyoung Huh, Hyojin Bahng, Phillip Isola, and Jeremy Bernstein. Scalable optimization in the modular norm. In Neural Information Processing Systems, 2024. Cited on pages 2, 4, 5, 7, and 9. Jaehoon Lee, Lechao Xiao, Samuel Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl-Dickstein, and Jeffrey Pennington. Wide neural networks of any depth evolve as linear models under gradient descent. In Neural Information Processing Systems, 2019. Cited on page 9. Per-Gunnar Martinsson and Joel A. Tropp. Randomized numerical linear algebra: Foundations and algorithms. Acta Numerica, 2020. Cited on pages 2 and 8. Arkady S. Nemirovsky and David B. Yudin. Problem complexity and method efficiency in optimization. Wiley, 1983. Cited on page 1. OPT. Optimization for Machine Learning, 2024. URL https://opt-ml.org/. Cited on page 9. J. J. Sakurai and Jim Napolitano. Modern Quantum Mechanics. Cambridge University Press, 2020. Cited on page 1. Hao-Jun Michael Shi, Tsung-Hsien Lee, Shintaro Iwasaki, Jose Gallego-Posada, Zhijing Li, Kaushik Rangadurai, Dheevatsa Mudigere, and Michael Rabbat. A distributed data-parallel PyTorch implementation of the distributed Shampoo optimizer for training neural networks at-scale. arXiv:2309.06497, 2023. Cited on pages 2 and 9. 11 Matthew Streeter. Universal majorization-minimization algorithms. arXiv:2308.00190, 2023. Cited on page 2. Matthew J. Streeter and Joshua V. Dillon. Automatically bounding the Taylor remainder series: Tighter bounds and new applications. arXiv:2212.11429, 2022. Cited on page 2. Tijmen Tieleman and Geoffrey Hinton. RMSprop. Coursera: Neural Networks for Machine Learning, Lecture 6.5, 2012. Cited on page 2. Dung T. Tran, Nobutaka Ono, and Emmanuel Vincent. Fast DNN training based on auxiliary function technique. International Conference on Acoustics, Speech and Signal Processing, 2015. Cited on page 2. Greg Yang and Edward J. Hu. Tensor programs IV: Feature learning in infinite-width neural networks. In International Conference on Machine Learning, 2021. Cited on pages 1, 6, 9, and 10. Greg Yang, James B. Simon, and Jeremy Bernstein. A spectral condition for feature learning. arXiv:2310.17813, 2023. Cited on pages 6 and 9. 12
Adaptive Transfer Clustering: A Unified Framework
Adaptive Transfer Clustering: A Unified Framework Yuqi Gu∗ Zhongyuan Lyu† Kaizheng Wang‡ Abstract We propose a general transfer learning framework for clustering given a main dataset and an auxiliary one about the same subjects. The two datasets may reflect similar but different latent grouping structures of the subjects. We propose an adaptive transfer clustering (ATC) algorithm that automatically leverages the commonality in the presence of unknown discrepancy, by optimizing an estimated bias-variance decomposition. It applies to a broad class of statistical models including Gaussian mixture models, stochastic block models, and latent class models. A theoretical analysis proves the optimality of ATC under the Gaussian mixture model and explicitly quantifies the benefit of transfer. Extensive simulations and real data experiments confirm our method’s effectiveness in various scenarios. Keywords: Multiview clustering; Transfer learning; Adaptation; Bootstrap. 1 Introduction In recent years, data collection from multiple sources or views, each offering unique insights into the underlying structure, has become increasingly common. As a result, transfer learning has gained prominence as a powerful framework in machine learning [46]. While transfer learning has been widely applied to supervised settings [50] and semi-supervised settings [8, 38, 66], its application to unsupervised settings still remains at the infant stage. In this background, clustering has emerged as a central unsupervised task with applications in medical imaging [30, 47, 60], genetics [17, 43, 51, 57], sentiment analysis [11, 64], to name just a few. Existing transfer learning approaches for clustering primarily focus on the feature space — where the target domain and source domain are different but related through shared or similar parameters. Gaussian Mixture Model (GMM) [48] is one of the most canonical models for transfer clustering, which itself plays a fundamental role in statistics. A line of works [53, 56] on GMMs apply EM-type algorithms that leverage similar mean or covariance structures in source domain to improve the clustering performance of target data. Specifically, [53] provides theoretical guarantees by assuming closeness of discriminant coefficients in Euclidean distance. Nonetheless, these works typically assume similar parameter structures in the source and the target domains, which contain different subjects. Little attention has been given to the cases where the target and the source data consist of different aspects of the same subjects, which have similar but different labels and might even follow different models. Such problems naturally arise in Author names are sorted alphabetically. ∗Department of Statistics and Data Science Institute, Columbia University. Email: [email protected]. †Department of Statistics and Data Science Institute, Columbia University. Email: [email protected]. ‡Department of IEOR and Data Science Institute, Columbia University. Email: [email protected]. 1 arXiv:2410.21263v1 [stat.ME] 28 Oct 2024 many practical settings. For instance, in social science we may be interested in clustering a group of people based on their friendship network and attributes (e.g., age, education, occupation). However, the network and attributes often exhibit different signal strengths and may not reflect the same community structure. 
Another example in neuroimaging, as noted by [47], is that the community structures within brain networks may differ between subjects due to their physiological differences, responses to changing environmental conditions, and variations in imaging instruments. A related work [9] investigates the fundamental limits of community detection in multi-layer networks under label flipping for individual networks. However, their method is not adaptive to an unknown flipping probability. There is a crucial need for procedures that can adaptively leverage useful information from the source domain to assist clustering in the target domain.
Main contributions. Motivated by the above need, we propose a general framework for adaptive transfer clustering. Our target data X0 and source data X1 reflect different features of the same set of n subjects, where each dataset has K latent groups. In this framework, X0 and X1 may come from different mixture models, where a discrepancy parameter ε indicates the proportion of mismatched latent labels Z∗_0, Z∗_1 ∈ [K]^n. Our goal is to estimate Z∗_0 from (X0, X1). The key challenge is how to adaptively leverage the information in X1 to cluster X0 without knowing the discrepancy ε. Intuitively, when ε = 0, i.e., the labels are perfectly matched, we should “pool” the data together; when ε is large enough, i.e., little label information about X0 can be inferred from X1, we may discard X1 and perform clustering on the target data X0 only. Inspired by this, we develop a model-based method with a penalty term encouraging the similarity of Z∗_0 and Z∗_1. Informally, we try to minimize the following objective with λ > 0:
−log posterior of (Z0 | X0) − log posterior of (Z1 | X1) + λ · penalty(Z0, Z1). (1)
To summarize, our contributions are two-fold:
• Methodology: Our framework is general enough to incorporate arbitrary mixture distributions of target and source data, making it applicable to a broad class of practical models including Gaussian mixture models, latent class models, and contextual stochastic block models. We design an adaptive procedure, named Adaptive Transfer Clustering (ATC for short), that can select the crucial parameter λ in (1) agnostic to the level of discrepancy ε. The key ingredient for achieving adaptivity is a combination of the Goldenshluger–Lepski method [23] and the parametric bootstrap.
• Theory: We establish a sharp clustering error rate for ATC in cases where both the target and source data are generated from two-component d-dimensional GMMs. Denote by SNR the signal-to-noise ratio such that the best estimator using only the target data achieves a rate of exp(−SNR(1 + o(1))) as SNR → ∞ [20, 40, 42]. We show that the optimal rate in the transfer learning setting is
exp( −SNR · min{ 1 + log(1/ε) / (4 SNR), 2 } · (1 + o(1)) ),
and it can be achieved by our ATC procedure without knowing ε. This rate is always better than the aforementioned target-only rate.
Related work. Our work is broadly related to the emerging area of unsupervised learning in the transfer learning setting [14, 43, 49? ]. Below, we outline a few related topics with a non-exhaustive review of relevant literature.
• Multi-view clustering: Our work connects to the paradigm of multi-view clustering, which aims to cluster multiple datasets representing different aspects (views) of the same subjects [4].
Existing works provide discriminative approaches that optimize carefully designed objective functions to maximize intra-cluster similarity and minimize inter-cluster similarity [35, 55], or generative approaches based on EM-type algorithms or a Bayesian framework [21, 22, 33, 39]. See [7] for a survey. We will discuss comparisons in detail in Section 3.
• Network community detection with side information: Our work is related to the network community detection literature involving multi-layer networks [9, 10, 24, 28, 31? ] and contextual networks [3, 5, 13, 29, 41, 44, 45, 58, 59, 65? ? ]. In the multi-layer network setting, recent works such as [10, 12] propose multi-view stochastic block models that assume homogeneous label structures across views. [12] uses a variational Bayesian EM algorithm for parameter estimation and clustering. [10] provides algorithmic guarantees for weak recovery and exact recovery in a special case where each network consists of only two clusters. In the contextual network setting, [5] derives an exponential error rate for a Lloyd-type algorithm for the contextual stochastic block model (SBM) by assuming the same clustering structures. [29] incorporates joint and individual structures in both the network and the covariates, and proposes to use a spectral method followed by a refinement step.
• Testing for clustering structures: There is also a growing body of theoretical and applied work investigating testing problems for common structures across different sources or views [16, 18, 21, 22, 43, 52, 61]. A closely related work is [18], which proposes to test whether two datasets from GMMs share a common clustering structure and derives a sharp detection boundary. However, for the task of clustering, it remains unknown whether a naïve dichotomous strategy based on their testing procedure is optimal. See Section 2 for a discussion of the difference between clustering (label estimation) and testing.
Organization. Section 2 introduces the two-component symmetric univariate Gaussian mixture model as a warm-up example and presents the adaptive transfer clustering algorithm together with theoretical guarantees. Section 3 extends the methodology to a general framework applicable to other models. Section 4 provides theoretical analysis within the general framework, with an application to the two-component symmetric d-dimensional Gaussian mixture model fully adaptive to unknown parameters. Section 5 validates the effectiveness of the proposed algorithm through extensive simulations. Section 6 demonstrates the application of our method to three real-world datasets. Section 7 concludes the paper and discusses future directions. All the proofs of theoretical results are included in the Appendix.
Notation. Throughout the paper, the constants c0, C0, c1, C1, · · · may vary from line to line. We use plain letters x, z, X, Z, · · · to denote scalars and use boldface x, z, X, Z, · · · to denote either vectors or matrices. We write [n] := {1, 2, · · · , n} and sgn(x) := x/|x| for x ∈ R, or sgn(x) := (sgn(x1), · · · , sgn(xn))⊤ for x ∈ R^n. We denote the ℓ2 vector norm as ∥·∥ := ∥·∥2. For nonnegative sequences an and bn, we write an ≲ bn or bn ≳ an or an = O(bn) if there exists a universal constant C > 0 such that an ≤ C bn. We write an ≍ bn if both an ≲ bn and bn ≲ an, and we write an = o(bn) or bn = ω(an) if an = O(cn bn) for some cn → 0. In most cases, we omit the subscript n when it is clear from context.
2 Warm-up: transfer learning for the Gaussian mixture model
In this section, we consider the transfer learning problem for the one-dimensional, two-component symmetric Gaussian mixture model as a warm-up example.
2.1 Problem Setup
Let {X_{0,i}}_{i=1}^n ⊆ R be i.i.d. samples from a two-component symmetric Gaussian mixture model X_{0,i} ∼ (1/2) N(µ, σ²) + (1/2) N(−µ, σ²) with known parameters µ ∈ R and σ > 0. Assume µ ≥ 0 for the sake of identifiability. There exist i.i.d. Rademacher latent variables {Z∗_{0,i}}_{i=1}^n ⊆ {±1} such that X_{0,i} | Z∗_{0,i} ∼ N(Z∗_{0,i} µ, σ²). The goal of clustering is to recover Z∗_0 = (Z∗_{0,1}, · · · , Z∗_{0,n}) from X_0 = (X_{0,1}, · · · , X_{0,n}). Define the normalized Hamming distance between any pair of label vectors:
ℓ(Z, Z′) = (1/n) Σ_{i=1}^n I{Z_i ≠ Z′_i}.
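To make this warm-up concrete, the following is a minimal simulation sketch — our own illustration, not code from the paper. It assumes, purely for illustration, that the source data X_1 follow the same symmetric GMM with an ε-fraction of flipped labels, uses the exact posteriors for known (µ, σ), takes the Hamming disagreement as the penalty in objective (1), and fixes λ by hand rather than selecting it adaptively as ATC does via the Goldenshluger–Lepski method and the bootstrap.

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu, sigma, eps = 2000, 1.0, 1.0, 0.1   # eps: assumed label-mismatch proportion

# Latent target labels; source labels agree except on an eps-fraction of subjects.
z0_star = rng.choice([-1, 1], size=n)
z1_star = np.where(rng.random(n) < eps, -z0_star, z0_star)

x0 = z0_star * mu + sigma * rng.standard_normal(n)   # target data X_0
x1 = z1_star * mu + sigma * rng.standard_normal(n)   # source data X_1 (same model, for illustration)

def cluster_error(z_hat, z_star):
    """Hamming error minimized over the global sign flip."""
    return min(np.mean(z_hat != z_star), np.mean(-z_hat != z_star))

def neg_log_post(x, z):
    """-log P(Z = z | X = x) for the symmetric two-component GMM with known (mu, sigma)."""
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * mu * x / sigma**2))
    p = p_plus if z == 1 else 1.0 - p_plus
    return -np.log(np.clip(p, 1e-12, None))

def transfer_cluster(x0, x1, lam):
    """Exact minimization of the (separable) penalized objective (1) with a Hamming penalty:
    for each subject, enumerate the four label pairs (z0, z1) and keep the cheapest z0."""
    best_cost = np.full(len(x0), np.inf)
    best_z0 = np.ones(len(x0))
    for z0 in (-1, 1):
        for z1 in (-1, 1):
            cost = neg_log_post(x0, z0) + neg_log_post(x1, z1) + lam * (z0 != z1)
            better = cost < best_cost
            best_cost = np.where(better, cost, best_cost)
            best_z0 = np.where(better, z0, best_z0)
    return best_z0

print("target-only error:", cluster_error(np.sign(x0), z0_star))                 # lambda = 0
print("transfer error   :", cluster_error(transfer_cluster(x0, x1, 1.0), z0_star))
```

With λ = 0 the procedure reduces to the target-only sign rule sgn(X_{0,i}); letting λ grow forces the two label vectors to agree, which mirrors the pooling-versus-discarding intuition described in the introduction.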
We propose a general transfer learning framework for clustering given a main dataset and an auxiliary one about the same subjects. The two datasets may reflect similar but different latent grouping structures of the subjects. We propose an adaptive transfer clustering (ATC) algorithm that automatically leverages the commonality in the presence of unknown discrepancy, by optimizing an estimated bias-variance decomposition. It applies to a broad class of statistical models including Gaussian mixture models, stochastic block models, and latent class models. A theoretical analysis proves the optimality of ATC under the Gaussian mixture model and explicitly quantifies the benefit of transfer. Extensive simulations and real data experiments confirm our method's effectiveness in various scenarios.
52
BLAST: Block-Level Adaptive Structured Matrices for Efficient Deep Neural Network Inference
BLAST: Block-Level Adaptive Structured Matrices for Efficient Deep Neural Network Inference
Changwoo Lee Soo Min Kwon Qing Qu Hun-Seok Kim
University of Michigan
{cwoolee,kwonsm,qingqu,hunseok}@umich.edu
Abstract
Large-scale foundation models have demonstrated exceptional performance in language and vision tasks. However, the numerous dense matrix-vector operations involved in these large networks pose significant computational challenges during inference. To address these challenges, we introduce the Block-Level Adaptive STructured (BLAST) matrix, designed to learn and leverage efficient structures prevalent in the weight matrices of linear layers within deep learning models. Compared to existing structured matrices, the BLAST matrix offers substantial flexibility, as it can represent various types of structures that are either learned from data or computed from pre-existing weight matrices. We demonstrate the efficiency of using the BLAST matrix for compressing both language and vision tasks, showing that (i) for medium-sized models such as ViT and GPT-2, training with BLAST weights boosts performance while reducing complexity by 70% and 40%, respectively; and (ii) for large foundation models such as Llama-7B and DiT-XL, the BLAST matrix achieves a 2x compression while exhibiting the lowest performance degradation among all tested structured matrices. Our code is available at https://github.com/changwoolee/BLAST.
1 Introduction
Foundation models built on large deep neural networks (DNNs) have demonstrated remarkable performance in vision and language tasks. However, the size of these large networks poses both computational and storage challenges, especially in resource-constrained environments such as edge devices. The size of a single DNN often exceeds the capacity of the supporting hardware devices [1–5]. For example, Llama-70B [1] demands at least 140GB of memory solely for loading its weights in half-precision floating point representation, while the state-of-the-art commercial GPU only accommodates 80GB of memory. Furthermore, inference with these networks involves numerous dense matrix-vector operations, which can be limiting when computing power is constrained.
Fortunately, large (overparameterized) DNNs often exhibit parameter redundancy, where the intrinsic dimension of the weights is much lower than the ambient dimension. As such, the weights are expected to be structured, possessing hidden properties such as low-rankness [6–9] or sparsity [10, 11]. Hence, it is possible to replace (or factorize) these existing dense weight matrices with structured ones without degrading performance [10–12]. However, using structured matrices that do not align with the true underlying structure of the weight matrices can result in significant performance degradation. We demonstrate this point in Figure 1, where we attempt to capture the structure of a diffusion transformer (DiT) [13] using the low-rank structure to generate synthetic images. In Figure 1, we compress the model's linear layers by approximately 50% of the total number of parameters using low-rank weight matrices obtained via singular value decomposition (SVD) and generate images with the compressed model (see Section 4.2 and Appendix C.3 for details). As shown in Figure 1 (middle), simply using the low-rank structure introduces unwanted artifacts in the generated images.
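For reference, the low-rank baseline described above can be sketched in a few lines. This is our own illustration rather than the authors' compression pipeline: it applies a plain truncated SVD to a single dense weight matrix and picks the rank so that the factored parameter count is roughly 50% of the dense count; the layer shape and the 50% target are illustrative assumptions.

```python
import numpy as np

def low_rank_compress(w, keep=0.5):
    """Replace a dense weight W (m x n) by factors A (m x r) and B (r x n), with r chosen
    so that r * (m + n), the factored parameter count, is about `keep` * m * n."""
    m, n = w.shape
    r = max(1, int(keep * m * n / (m + n)))
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    return u[:, :r] * s[:r], vt[:r, :]

rng = np.random.default_rng(0)
w = rng.standard_normal((768, 3072))     # stand-in for one linear-layer weight
a, b = low_rank_compress(w, keep=0.5)
print("fraction of parameters kept:", (a.size + b.size) / w.size)               # ~0.5
print("relative approximation error:", np.linalg.norm(w - a @ b) / np.linalg.norm(w))
```

For an unstructured random matrix like this stand-in, the truncation error remains substantial even at 50% of the parameters, which is the kind of misalignment the paper points to when the imposed structure does not match the weights.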
Figure 1 (panels: Original, Low-Rank, BLAST): Examples of generated images using DiT [13] starting from the same noise vectors and a deterministic solver. The original model is compressed by 50% through BLAST or low-rank matrices and re-trained for 10 epochs on ImageNet. The images from the model compressed via BLAST preserve the quality of the images of the original model, whereas the images generated by the low-rank model contain more undesired artifacts.
To address this issue, many flexible structures for modeling DNN weights have been proposed to minimize the misalignment between imposed and true low-dimensional structures. For example, Dao et al. [14] proposed the Monarch matrix, a specific type of Block Low-Rank (BLR) structure [15], in which all blocks share the same rank, intended for use in the linear layers of transformers [16]. Matrix multiplication with a Monarch matrix can be performed efficiently using batched matrix multiplication routines. Additionally, Chen et al. [17] investigated a block sparse plus low-rank structure. However, all of these methods still suffer from the fact that the underlying structure of each weight matrix is not known a priori. By imposing one of these structures, performance degradation may still occur due to misalignment.
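To see why such block structure maps well onto batched matrix multiplication, here is a small sketch of a generic block low-rank product in the spirit of the BLR/Monarch layout in Figure 2 — not the exact Monarch parameterization of Dao et al. [14] — where every p × p block carries its own rank-r factors and y = Ax is computed with two batched contractions.

```python
import numpy as np

b, p, r = 4, 8, 2                          # b x b blocks of size p x p, each of rank r
rng = np.random.default_rng(0)
U = rng.standard_normal((b, b, p, r))      # U[i, j]: left factor of block (i, j)
V = rng.standard_normal((b, b, p, r))      # V[i, j]: right factor of block (i, j)
x = rng.standard_normal(b * p)

xb = x.reshape(b, p)                        # partition the input into b chunks
z = np.einsum('ijpr,jp->ijr', V, xb)        # z[i, j] = V[i, j]^T x_j   (batched)
yb = np.einsum('ijpr,ijr->ip', U, z)        # y_i = sum_j U[i, j] z[i, j]
y = yb.reshape(-1)

# Dense correctness check: assemble A block-by-block and compare.
A = np.block([[U[i, j] @ V[i, j].T for j in range(b)] for i in range(b)])
print(np.allclose(y, A @ x))                # True
```

Each einsum call is a batch of b² small products of identical shape, which is exactly the kind of work batched matrix-multiplication routines execute efficiently; the dense assembly at the end is only a correctness check.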
For the language tasks, we compress Llama-7B [1] by 50% via BLAST and re-train on 0.49B tokens, showing the lowest accuracy degradation with a significant inference speedup on an NVIDIA A100 GPU. Overall, our contributions can be summarized as follows:
[Figure 2 — panels: Low-Rank, A = UV^T; Monarch [14], A_{i,j} = U_{i,j} V_{i,j}^T; Block-Diagonal, A_{i,j} = D_{i,i} if i = j and 0 otherwise; GBLR [12], A = Σ_{k=1}^{r′} u_k v_k^T; BLAST (proposed), A_{i,j} = U_i S_{i,j} V_j^T.] Figure 2: Existing structured matrices and our proposed BLAST matrix. The unique structure of BLAST allows for flexible matrix structures while enabling faster matrix multiplication compared to existing matrices.
• We propose a novel block-structured matrix called BLAST that encompasses a wide range of matrix structures, allowing for faster matrix multiplication. Various existing structured matrices such as Low-Rank, Monarch [14], and Block-Diagonal matrices can be expressed using the BLAST matrix.
• We provide gradient descent-based methods to find the BLAST factors for DNN weights. We empirically show that standard DNN training with the BLAST weight matrices effectively recovers the original accuracy while achieving up to a 70% reduction in computational complexity.
• In cases where pre-trained dense weights are available, we propose a preconditioned gradient descent factorization algorithm to decompose the weights into BLAST factors for compression and further re-training. Our experimental results show that pre-trained foundation models for vision or language tasks can be compressed by 50% using BLAST matrices.
Notation and Organization. We use σ1(X) to denote the largest singular value of the matrix X. The notation ⊙ denotes the Hadamard product. The rest of the paper is organized as follows. In Section 2, we introduce the BLAST matrix and discuss its properties. In Section 3, we propose a methodology to train/compress DNNs with BLAST weight matrices. In Section 4, we demonstrate the effectiveness of the BLAST weights in improving efficiency without noticeable accuracy degradation. We discuss related work in Section 5 and conclude in Section 6.
2 Block-Level Adaptive Structured (BLAST) Matrix
Consider a square matrix A ∈ R^{n×n} for some n ∈ N, which has an unknown intrinsic low-dimensional structure. (For an m × n rectangular matrix, we partition the m rows into b chunks, assuming that b divides m as well.) We first equally partition the matrix A into b × b blocks of size p × p, where b, p ∈ N are constants such that n = bp:
A = [ A_{1,1}  A_{1,2}  · · ·  A_{1,b}
      A_{2,1}  A_{2,2}  · · ·  A_{2,b}
        ⋮        ⋮        ⋱       ⋮
      A_{b,1}  A_{b,2}  · · ·  A_{b,b} ],   A_{i,j} ∈ R^{p×p},  i, j ∈ [b].   (1)
Then, the BLAST matrix parameterizes each block matrix A_{i,j} using three factors:
A_{i,j} = U_i S_{i,j} V_j^T,   (2)
where U_i, V_j ∈ R^{p×r} are the left and the right factors, respectively, and S_{i,j} = diag(s_{i,j}) is an r × r diagonal matrix whose diagonal entries are s_{i,j} ∈ R^r. We provide a visual representation on the rightmost side of Figure 2, and illustrate how this structure differs from other types of matrices. While the BLAST structure may appear similar to the SVD, there are two notable differences: (i) the left and right factors do not need to be orthonormal, and (ii) the diagonal entries do not need to be positive. These distinctions make it more flexible in capturing different types of low-rank structures. As illustrated in Figure 2, the BLAST matrix also comes with two unique properties:
• Factor Sharing: The left factor matrix U_i of size rp is shared across the b blocks in the ith row, i.e., A_{i,1}, . . . , A_{i,b}.
Likewise, the right factor V_j is shared across the blocks in the jth column. On the other hand, the diagonal factor s_{i,j} of size r is specific to each block A_{i,j}. Hence the total number of parameters of an n × n BLAST matrix with b × b blocks of rank r is 2rpb + rb² = 2nr + rb². This reduces the number of parameters by a factor of b by enforcing that blocks in the same row or column share the same bases.
• Individual Diagonal Factors: The individual diagonal factors of each block matrix are the source of the adaptivity and flexibility of the BLAST matrix. By changing the values of the diagonal factors, the BLAST matrix can encompass a wide variety of matrix structures. These factors can be estimated using gradient descent, since s_{i,j} is a real-valued vector and A_{i,j} = U_i diag(s_{i,j}) V_j^T is linear in s_{i,j}.
Low-Rank Matrices as Special Cases of BLAST. To demonstrate how the BLAST matrix can capture different types of structures, we present an example showing how the BLAST matrix can encompass a low-rank matrix. Consider the case where all the diagonal factors are ones, i.e., s_{i,j} = 1_r for all i, j = 1, 2, . . . , b. Then, we can write the block matrix as follows:
UV^T = [ U_1
         U_2
          ⋮
         U_b ] [ V_1^T  V_2^T  · · ·  V_b^T ]
     = [ U_1 V_1^T  U_1 V_2^T  · · ·  U_1 V_b^T
         U_2 V_1^T  U_2 V_2^T  · · ·  U_2 V_b^T
            ⋮          ⋮          ⋱        ⋮
         U_b V_1^T  U_b V_2^T  · · ·  U_b V_b^T ].
Hence, if the true underlying structure is low-rank, we can expect the BLAST matrix to learn this specific structure. Similarly, we show in Section A.1 that the BLAST matrix can construct low-rank, block-diagonal, and block low-rank matrices through different diagonal parameters. A combination of these canonical structured matrices, such as a low-rank plus block-diagonal matrix, can also be achieved by simply concatenating the factors of each matrix.
Algorithm 1 BLAST Matrix-Vector Product
Require: U, V, s, x
1: [x_1^T, x_2^T, · · · , x_b^T]^T ← x
2: for j = 1, 2, . . . , b do   ▷ in parallel
3:   z_j ← V_j^T x_j
4: end for
5: for i = 1, 2, . . . , b do   ▷ in parallel
6:   y_i ← U_i Σ_{j=1}^{b} s_{i,j} ⊙ z_j
7: end for
8: return y ← [y_1^T, . . . , y_b^T]^T
Matrix Multiplication. DNNs involve numerous matrix-vector (matrix-matrix) multiplications of the form y = Ax (Y = AX). Algorithm 1 depicts the BLAST matrix-vector multiplication procedure. Consider the partitioned input vector x = [x_1^T, x_2^T, · · · , x_b^T]^T and the partitioned output vector y = [y_1^T, y_2^T, · · · , y_b^T]^T. The ith partitioned output vector y_i is then computed as the sum of b block-wise matrix-vector multiplications over j = 1, . . . , b:
y_i = Σ_{j=1}^{b} A_{i,j} x_j = Σ_{j=1}^{b} U_i S_{i,j} V_j^T x_j = U_i ( Σ_{j=1}^{b} S_{i,j} V_j^T x_j ).
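The following NumPy sketch mirrors Algorithm 1 and verifies it against the dense matrix assembled from A_{i,j} = U_i diag(s_{i,j}) V_j^T. It is only meant to make the shapes and the three steps concrete — the authors' GPU implementation lives at https://github.com/changwoolee/BLAST — and the block count and rank below are arbitrary choices.

```python
import numpy as np

b, p, r = 4, 8, 3                      # b x b blocks of size p x p, rank r (arbitrary sizes)
n = b * p
rng = np.random.default_rng(0)
U = rng.standard_normal((b, p, r))     # U_i, shared along row i
V = rng.standard_normal((b, p, r))     # V_j, shared along column j
s = rng.standard_normal((b, b, r))     # s_{i,j}, one diagonal factor per block
x = rng.standard_normal(n)

def blast_matvec(U, V, s, x):
    xb = x.reshape(b, p)
    z = np.einsum('jpr,jp->jr', V, xb)    # step 1: z_j = V_j^T x_j        (parallel over j)
    w = np.einsum('ijr,jr->ir', s, z)     # step 2: sum_j s_{i,j} (elementwise) z_j
    yb = np.einsum('ipr,ir->ip', U, w)    # step 3: y_i = U_i (...)        (parallel over i)
    return yb.reshape(-1)

# Parameter count matches 2nr + rb^2 from the text.
print(U.size + V.size + s.size == 2 * n * r + r * b * b)                       # True

# Correctness check against the dense matrix with blocks U_i diag(s_{i,j}) V_j^T.
A = np.block([[U[i] @ np.diag(s[i, j]) @ V[j].T for j in range(b)] for i in range(b)])
print(np.allclose(blast_matvec(U, V, s, x), A @ x))                            # True
```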
Large-scale foundation models have demonstrated exceptional performance in language and vision tasks. However, the numerous dense matrix-vector operations involved in these large networks pose significant computational challenges during inference. To address these challenges, we introduce the Block-Level Adaptive STructured (BLAST) matrix, designed to learn and leverage efficient structures prevalent in the weight matrices of linear layers within deep learning models. Compared to existing structured matrices, the BLAST matrix offers substantial flexibility, as it can represent various types of structures that are either learned from data or computed from pre-existing weight matrices. We demonstrate the efficiency of using the BLAST matrix for compressing both language and vision tasks, showing that (i) for medium-sized models such as ViT and GPT-2, training with BLAST weights boosts performance while reducing complexity by 70% and 40%, respectively; and (ii) for large foundation models such as Llama-7B and DiT-XL, the BLAST matrix achieves a 2x compression while exhibiting the lowest performance degradation among all tested structured matrices. Our code is available at https://github.com/changwoolee/BLAST.
26
Quantum computing and persistence in topological data analysis
"arXiv:2410.21258v1 [quant-ph] 28 Oct 2024\nMIT-CTP/5802, YITP-24-131\nQuantum computing and persi(...TRUNCATED)
"Topological data analysis (TDA) aims to extract noise-robust features from a data set by examining (...TRUNCATED)
21
"arXiv:2410.21258v1 [quant-ph] 28 Oct 2024 MIT-CTP/5802, YITP-24-131 Quantum computing and persisten(...TRUNCATED)
One-Step Diffusion Policy: Fast Visuomotor Policies via Diffusion Distillation
"Preprint\nONE-STEP DIFFUSION POLICY: FAST VISUOMOTOR\nPOLICIES VIA DIFFUSION DISTILLATION\nZhendong(...TRUNCATED)
"Diffusion models, praised for their success in generative tasks, are increasingly being applied to (...TRUNCATED)
18
"Preprint ONE-STEP DIFFUSION POLICY: FAST VISUOMOTOR POLICIES VIA DIFFUSION DISTILLATION Zhendong Wa(...TRUNCATED)
LongReward: Improving Long-context Large Language Models with AI Feedback
"LongReward: Improving Long-context Large Language Models\nwith AI Feedback\nJiajie Zhang1†, Zhong(...TRUNCATED)
"Though significant advancements have been achieved in developing long-context large language models(...TRUNCATED)
21
"LongReward: Improving Long-context Large Language Models with AI Feedback Jiajie Zhang1†, Zhongni(...TRUNCATED)
Capacity-Aware Planning and Scheduling in Budget-Constrained Monotonic MDPs: A Meta-RL Approach
"Capacity-Aware Planning and Scheduling in Budget-Constrained Monotonic\nMDPs: A Meta-RL Approach\nM(...TRUNCATED)
"Many real-world sequential repair problems can be effectively modeled using monotonic Markov Decisi(...TRUNCATED)
10
"Capacity-Aware Planning and Scheduling in Budget-Constrained Monotonic MDPs: A Meta-RL Approach Man(...TRUNCATED)
Zero-Shot Dense Retrieval with Embeddings from Relevance Feedback
"Zero-Shot Dense Retrieval with Embeddings from Relevance Feedback\nNour Jedidi1\nYung-Sung Chuang2\(...TRUNCATED)
"Building effective dense retrieval systems remains difficult when relevance supervision is not avai(...TRUNCATED)
15
"Zero-Shot Dense Retrieval with Embeddings from Relevance Feedback Nour Jedidi1 Yung-Sung Chuang2 Le(...TRUNCATED)
Flaming-hot Initiation with Regular Execution Sampling for Large Language Models
"Flaming-hot Initiation with Regular Execution Sampling for Large\nLanguage Models\nWeizhe Chen\nUni(...TRUNCATED)
"Since the release of ChatGPT, large language models (LLMs) have demonstrated remarkable capabilitie(...TRUNCATED)
9
"Flaming-hot Initiation with Regular Execution Sampling for Large Language Models Weizhe Chen Univer(...TRUNCATED)