[60000, "We consider the problem of 3D scene reconstruction and novel view synthesis from very sparse (i.e., as few as two) images in just one forward pass of atrained model. While remarkable progress has been made using neural scene rep_x0002_resentations, e.g., Scene Representation Networks (SRN) [34], Neural Radiance Fields (NeRF) [24] and Light Filed Networks (LFN) [33], these methods are still not satisfactory for practical applications due to expensive per-scene optimiza_x0002_tion [26, 39, 43], high memory cost [3, 18, 44] and slow rendering speed [40, 49]. Recently, 3D Gaussian Splatting (3DGS) [19] has emerged as an efficient and expressive 3D representation thanks to its fast rendering speed and high quality. Using rasterization-based rendering, 3DGS inherently avoids the expensive vol_x0002_umetric sampling process of NeRF, leading to highly efficient and high-quality 3D reconstruction and novel view synthesis. Very recently, several feed-forward Gaussian Splatting methods have been proposed to explore 3D reconstruction from sparse view images, notably Splat_x0002_ter Image [37] and pixelSplat [1]. Splatter Image regresses pixel-aligned Gaus_x0002_sian parameters using a standard image-to-image architecture, which achieves promising results for single-view object-level 3D reconstruction. However, recon_x0002_structing a 3D scene from a single image is inherently ill-posed and ambiguous, posing a significant challenge when applied to a more general and larger scene, which is the key focus of our paper. pixelSplat [1] proposes to regress Gaussian parameters for the binocular reconstruction problem. Specifically, it predicts a probabilistic depth distribution for each input view and then samples depths from that predicted distribution. Even though pixelSplat learns cross-view-aware features with an epipolar Transformer, it is still challenging to predict a reliable probabilistic depth distribution solely from image features, making pixelSplat\u2019s geometry reconstruction of comparably low quality and exhibiting noisy arti_x0002_facts (see ). For improved geometry reconstruction results, slow depth finetuning with an additional depth regularization loss is required. To accurately localize the 3D Gaussian centers, our solution is to build a cost volume representation via plane sweeping [7,46,48] in the 3D space. Specifi_x0002_cally, the cost volume stores cross-view feature similarities for all potential depth candidates, where the similarities can provide valuable geometry cues to the lo_x0002_calization of 3D surfaces (i.e., high similarity more likely indicates a surface point). With our cost volume representation, the task is formulated as learn_x0002_ing to perform feature matching to identify the Gaussian centers, unlike the data-driven 3D regression from image features in previous works [1, 37]. Such a formulation reduces the task\u2019s learning difficulty, enabling our method to achieve state-of-the-art performance with lightweight model size and fast speed. We obtain 3D Gaussian centers by unprojecting the multi-view-consistent depths estimated by our constructed multi-view cost volumes with a 2D network. Additionally, we also predict other Gaussian properties (covariance, opacity, and spherical harmonics coefficients), in parallel with the depths. This enables the rendering of novel view images using the predicted 3D Gaussians with the differ_x0002_entiable splatting operation [19]. 
Our full model MVSplat is trained end-to-end purely with the photometric loss between rendered and ground-truth images. On the large-scale RealEstate10K [54] and ACID [21] benchmarks, MVSplat achieves state-of-the-art performance with the fastest feed-forward inference speed.

Sparse View Scene Reconstruction and Synthesis. The original NeRF and 3DGS methods are both designed for very dense input views (e.g., 100), which can be tedious to capture for real-world applications. Recently, there has been growing interest [1, 3, 5, 12, 26, 36, 39, 43, 44, 51] in scene reconstruction and synthesis from sparse input views (e.g., 2 or 3). Existing sparse view methods can be broadly classified into two categories: per-scene optimization and cross-scene feed-forward inference methods. Per-scene approaches mainly focus on designing effective regularization terms [8, 11, 26, 39, 43, 50] to better constrain the optimization process. However, they are inherently slow at inference time due to the expensive per-scene gradient back-propagation process. In contrast, feed-forward models [1, 3, 5, 12, 36, 37, 44, 49, 51] learn powerful priors from large-scale datasets, so that 3D reconstruction and view synthesis can be achieved via a single feed-forward inference from sparse input views, which makes them significantly faster than per-scene optimization methods.

Feed-Forward NeRF. Early approaches used NeRF [24] for 3D reconstruction of objects [6, 16, 18, 22, 27, 40, 49] and scenes [3, 5, 6, 10, 44]. pixelNeRF [49] pioneered the paradigm of predicting pixel-aligned features from images for radiance field reconstruction. The performance of feed-forward NeRF models progressively improved with the use of feature matching information [3, 5], Transformers [10, 25, 30] and 3D volume representations [3, 44]. The state-of-the-art feed-forward NeRF model MuRF [44] is based on a target-view frustum volume and a (2+1)D CNN for radiance field reconstruction, where the 3D volume and CNN need to be constructed and inferred for every target view. This makes MuRF expensive to train, with comparably slow rendering. Most importantly, all existing feed-forward NeRF models suffer from the expensive per-pixel volume sampling in the rendering process.

Feed-Forward 3DGS. 3D Gaussian Splatting [4, 19] avoids NeRF's expensive volume sampling via a rasterization-based splatting approach, where novel views can be rendered very efficiently from a set of 3D Gaussian primitives. Very recently, a growing number of feed-forward 3DGS models [1, 2, 36, 37, 42, 51, 53] have been proposed to solve the sparse-view-to-3D task. Among them, Splatter Image [37] proposes to regress pixel-aligned Gaussian parameters from a single view with a U-Net model. However, it mainly focuses on object-level reconstruction, while we target the more general scene-level reconstruction. Although its follow-up work Flash3D [36] manages to extend to scene-level reconstruction, its performance on complex scenes is inherently unsatisfactory given the limited information available in a single input view.

To enable high-quality rendering and reconstruction, it is crucial to predict the position µj precisely, since it defines the center of the 3D Gaussian [19].
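As a concrete illustration of how such centers can be obtained, the sketch below unprojects a predicted per-pixel depth map into one 3D Gaussian center per pixel. The function name, shapes and single-camera setting are assumptions for illustration, not the paper's code.

```python
# Sketch: per-pixel depth -> 3D Gaussian centers (one Gaussian per pixel).
# The other Gaussian parameters (opacity, covariance, SH) would come from a
# parallel prediction branch.
import torch

def depth_to_gaussian_centers(depth, K, cam_to_world):
    """depth: [H, W] float map; K: [3, 3] intrinsics; cam_to_world: [4, 4] pose."""
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1).float()   # [H, W, 3]
    rays = pix @ torch.linalg.inv(K).T                              # backproject pixels
    pts_cam = rays * depth.unsqueeze(-1)                            # scale rays by depth
    pts_h = torch.cat([pts_cam, torch.ones(H, W, 1)], dim=-1)       # homogeneous coords
    centers = (pts_h.reshape(-1, 4) @ cam_to_world.T)[:, :3]        # to world frame
    return centers                                                  # [H*W, 3]
```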
In this paper, we present MVSplat, a Gaussian-based feed-forward model for novel view synthesis. Unlike pixelSplat [1], which predicts probabilistic depth, we develop an efficient and high-performance multi-view depth estimation model that enables unprojecting the predicted depth maps as the Gaussian centers, in parallel with another branch that predicts the other Gaussian parameters (αj, Σj and cj). Our full model, illustrated in the figure, is trained end-to-end using only a simple rendering loss for supervision. Next, we discuss the key components.

The qualitative comparisons of the top three best models are visualized in the figure. MVSplat achieves the highest quality on novel views even under challenging conditions, such as regions with repeated patterns ("window frames" in the 1st row), regions present in only one of the input views ("stair handrail" and "lampshade" in the 2nd and 3rd rows), or large-scale outdoor objects captured from distant viewpoints ("bridge" in the 4th row). The baseline methods exhibit obvious artifacts in these regions, while MVSplat shows no such artifacts thanks to our cost volume-based geometry representation. More evidence and detailed analysis regarding how MVSplat effectively infers the geometry structures are presented in Sec. 4.3.

Assessing model efficiency. As reported in Tab. 1, apart from attaining superior image quality, MVSplat also shows the fastest inference time among all compared models, accompanied by a lightweight model size, demonstrating its efficiency and practical utility. It is noteworthy that the reported time encompasses both the image encoding and rendering stages. For an in-depth time comparison with pixelSplat [1], our encoder runs at 0.043s, which is more than 2× faster than pixelSplat (0.102s). Besides, pixelSplat predicts 3 Gaussians per pixel, while our MVSplat predicts a single Gaussian, which also contributes to our faster rendering speed (0.0015s vs. 0.0025s) due to the threefold reduction in the number of Gaussians. More importantly, equipped with the cost volume-based encoder, our MVSplat enables fast feed-forward inference of 3D Gaussians with a much more lightweight design, resulting in 10× fewer parameters and more than 2× faster speed compared to pixelSplat [1].

Assessing geometry reconstruction. MVSplat also produces significantly higher-quality 3D Gaussian primitives compared to the latest state-of-the-art pixelSplat [1]. pixelSplat requires an extra 50,000 steps to fine-tune the Gaussians with an additional depth regularization loss to achieve reasonable geometry reconstruction results. Our MVSplat instead reconstructs much higher-quality 3D Gaussians by training solely with photometric supervision, demonstrating the effectiveness of our cost volume representation.
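The multi-view-consistent depths mentioned above are typically recovered from a matching volume by a soft argmax over the depth candidates; the snippet below shows this standard reduction under assumed tensor shapes. It is a generic illustration, not necessarily the exact operator used in MVSplat.

```python
# Sketch: converting a cost volume into a per-pixel depth map by a
# softmax-weighted average over the depth candidates.
import torch

def cost_volume_to_depth(cost_volume, depth_candidates):
    """cost_volume: [B, D, H, W]; depth_candidates: [D] tensor of candidate depths."""
    prob = torch.softmax(cost_volume, dim=1)                      # matching probability per depth
    depth = (prob * depth_candidates.view(1, -1, 1, 1)).sum(dim=1)
    return depth                                                  # [B, H, W]
```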
Navigation is a fundamental skill of agents with visual-motor capabilities. We introduce a Navigation World Model (NWM), a controllable video generation model that predicts future visual observations based on past observations and navigation actions. To capture complex environment dynamics, NWM employs a Conditional Diffusion Transformer (CDiT), trained on a diverse collection of egocentric videos of both human and robotic agents, and scaled up to 1 billion parameters. In familiar environments, NWM can plan navigation trajectories by simulating them and evaluating whether they achieve the desired goal. Unlike supervised navigation policies with fixed behavior, NWM can dynamically incorporate constraints during planning. Experiments demonstrate its effectiveness in planning trajectories from scratch or by ranking trajectories sampled from an external policy. Furthermore, NWM leverages its learned visual priors to imagine trajectories in unfamiliar environments from a single input image, making it a flexible and powerful tool for next-generation navigation systems.

Navigation is a fundamental skill for any organism with vision, playing a crucial role in survival by allowing agents to locate food and shelter and to avoid predators. In order to successfully navigate environments, smart agents primarily rely on vision, allowing them to construct representations of their surroundings to assess distances and capture landmarks in the environment, all useful for planning a navigation route. When human agents plan, they often imagine their future trajectories, considering constraints and counterfactuals. On the other hand, current state-of-the-art robotic navigation policies (Sridhar et al., 2024; Shah et al., 2023) are "hard-coded": after training, new constraints cannot be easily introduced (e.g., "no left turns"). Another limitation of current supervised visual navigation models is that they cannot dynamically allocate more computational resources to address hard problems. We aim to design a new model that can mitigate these issues.

In this work, we propose a Navigation World Model (NWM), trained to predict the future representation of a video frame based on past frame representation(s) and action(s) (see (a)). NWM is trained on video footage and navigation actions collected from various robotic agents. After training, NWM is used to plan novel navigation trajectories by simulating potential navigation plans and verifying whether they reach a target goal (see (b)). To evaluate its navigation skills, we test NWM in known environments, assessing its ability to plan novel trajectories either independently or by ranking an external navigation policy. In the planning setup, we use NWM in a Model Predictive Control (MPC) framework, optimizing the action sequence that enables NWM to reach a target goal. In the ranking setup, we assume access to an existing navigation policy, such as NoMaD (Sridhar et al., 2024), which allows us to sample trajectories, simulate them using NWM, and select the best ones. Our NWM achieves competitive standalone performance and state-of-the-art results when combined with existing methods.
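A minimal version of this MPC-style use of a world model is sketched below as random-shooting planning: sample candidate action sequences, simulate each with the world model, and keep the one whose predicted outcome best matches the goal. The `world_model` and `goal_similarity` callables, the uniform action sampling, and all shapes are assumptions; NWM's actual planner may use a different optimizer.

```python
# Illustrative random-shooting planner around a navigation world model.
# world_model(obs, actions) -> predicted final observation (stand-in, not NWM's API)
# goal_similarity(pred, goal) -> scalar score, higher means closer to the goal
import numpy as np

def plan_actions(world_model, goal_similarity, obs, goal,
                 horizon=8, n_samples=256, action_dim=2, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    best_score, best_plan = -np.inf, None
    for _ in range(n_samples):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))  # candidate plan
        predicted = world_model(obs, actions)          # simulate the trajectory
        score = goal_similarity(predicted, goal)       # does it reach the goal?
        if score > best_score:
            best_score, best_plan = score, actions
    return best_plan, best_score
```

The same scoring loop can also rank trajectories sampled from an external policy such as NoMaD instead of random action sequences.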
NWM is conceptually similar to recent diffusion-based world models for offline model-based reinforcement learning, such as DIAMOND (Alonso et al.) and GameNGen (Valevski et al., 2024). However, unlike these models, NWM is trained across a wide range of environments and embodiments, leveraging the diversity of navigation data from robotic and human agents. This allows us to train a large diffusion transformer model capable of scaling effectively with model size and data to adapt to multiple environments. Our approach also shares similarities with Novel View Synthesis (NVS) methods like NeRF (Mildenhall et al., 2021), Zero-1-2-3 (Liu et al., 2023), and GDC (Van Hoorick et al., 2024), from which we draw inspiration. However, unlike NVS approaches, our goal is to train a single model for navigation across diverse environments and to model temporal dynamics from natural videos, without relying on 3D priors.

To learn an NWM, we propose a novel Conditional Diffusion Transformer (CDiT), trained to predict the next image state given past image states and actions as context. Unlike a DiT (Peebles and Xie, 2023), CDiT's computational complexity is linear with respect to the number of context frames, and it scales favorably for models trained up to 1B parameters across diverse environments and embodiments, requiring 4× fewer FLOPs compared to a standard DiT while achieving better future prediction results.

In unknown environments, our results show that NWM benefits from training on unlabeled, action- and reward-free video data from Ego4D. Qualitatively, we observe improved video prediction and generation performance on single images (see (c)). Quantitatively, with additional unlabeled data, NWM produces more accurate predictions when evaluated on the held-out Stanford Go (Hirose et al., 2018) dataset.

Our contributions are as follows. We introduce a Navigation World Model (NWM) and propose a novel Conditional Diffusion Transformer (CDiT), which scales efficiently up to 1B parameters with significantly reduced computational requirements compared to a standard DiT. We train CDiT on video footage and navigation actions from diverse robotic agents, enabling planning by simulating navigation plans independently or alongside external navigation policies, achieving state-of-the-art visual navigation performance. Finally, by training NWM on action- and reward-free video data, such as Ego4D, we demonstrate improved video prediction and generation performance in unseen environments.

Goal-conditioned visual navigation is an important task in robotics requiring both perception and planning skills (Sridhar et al., 2024; Shah et al.; Pathak et al., 2018; Mirowski et al., 2022; Chaplot et al.; Fu et al., 2022). Given context image(s) and an image specifying the navigation goal, goal-conditioned visual navigation models (Sridhar et al., 2024; Shah et al.) aim to generate a viable path towards the goal if the environment is known, or to explore it otherwise. Recent visual navigation methods like NoMaD (Sridhar et al., 2024) train a diffusion policy via behavior cloning and a temporal distance objective to follow goals in the conditional setting or to explore new environments in the unconditional setting. Previous approaches like Active Neural SLAM (Chaplot et al.) used neural SLAM together with analytical planners to plan trajectories in the 3D environment, while other approaches like (Chen et al.) learn policies via reinforcement learning. Here we show that world models can use exploratory data to plan or improve existing navigation policies.

Unlike learning a policy, the goal of a world model (Ha and Schmidhuber, 2018) is to simulate the environment, e.g., given the current state and action, to predict the next state and an associated reward. Previous works have shown that jointly learning a policy and a world model can improve sample efficiency on Atari (Hafner et al., b,a; Alonso et al.), in simulated robotics environments (Seo et al., 2023), and even when applied to real-world robots (Wu et al., 2023). More recently, Hansen et al.
proposed to use a single world model that is shared across tasks by introducing action and task embeddings, while Yang et al. and Lin et al. (2024b) proposed to describe actions in language, and Bruce et al. (2024) proposed to learn latent actions. World models were also explored in the context of game simulation. DIAMOND (Alonso et al.) and GameNGen (Valevski et al., 2024) propose to use diffusion models to learn game engines of computer games like Atari and Doom. Our work is inspired by these works, and we aim to learn a single general diffusion video transformer that can be shared across many environments and different embodiments for navigation.

In computer vision, generating videos has been a long-standing challenge (Kondratyuk et al.; Blattmann et al., 2023; Girdhar et al., 2023; Yu et al., 2023; Ho et al., 2022; Tulyakov et al., 2018b; Bar-Tal et al., 2024). Most recently, there has been tremendous progress in text-to-video synthesis with methods like Sora (Brooks et al., 2024) and MovieGen (Polyak et al., 2024). Past works proposed to control video synthesis given structured action-object class categories (Tulyakov et al., 2018a) or Action Graphs (Bar et al., 2021). Video generation models were previously used in reinforcement learning as rewards (Escontrela et al., 2024), as pretraining methods (Tomar et al., 2024), for simulating and planning manipulation actions (Finn and Levine, 2017; Liang et al., 2024) and for generating paths in indoor environments (Hirose et al., 2019b; Koh et al., 2021). Interestingly, diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020) are useful both for video tasks like generation (Voleti et al., 2022) and prediction (Lin et al., 2024a), and also for view synthesis (Chan et al., 2023; Poole et al.; Tung et al., 2025). Differently, we use a conditional diffusion transformer to simulate trajectories for planning without explicit 3D representations or priors.

As mentioned in the previous section, we design Fθ as a stochastic mapping so it can simulate stochastic environments. This is achieved using a Conditional Diffusion Transformer (CDiT) model, described next.

Conditional Diffusion Transformer Architecture. The architecture we use is a temporally autoregressive transformer model utilizing the efficient CDiT block (see the figure), which is applied N times over the input sequence of latents with input action conditioning. CDiT enables time-efficient autoregressive modeling by constraining the attention in the first attention block only to tokens from the target frame being denoised. To condition on tokens from past frames, we incorporate a cross-attention layer, such that every query token from the current target attends to tokens from past frames, which are used as keys and values. The cross-attention then contextualizes the representations through a skip connection.
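The following PyTorch sketch captures the block structure just described: self-attention restricted to the target-frame tokens, cross-attention from target tokens to context-frame tokens, and an MLP, each wrapped in a residual (skip) connection. It omits the action and diffusion-timestep conditioning and any adaptive normalization, so it is a simplification under assumed dimensions rather than NWM's implementation.

```python
# Minimal CDiT-style block sketch (not the paper's code).
import torch
import torch.nn as nn

class CDiTBlock(nn.Module):
    def __init__(self, dim, n_heads=8, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.self_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm3 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, mlp_ratio * dim), nn.GELU(),
                                 nn.Linear(mlp_ratio * dim, dim))

    def forward(self, target_tokens, context_tokens):
        # Self-attention only over the (noisy) target frame, so its cost does not
        # grow quadratically with the number of context frames.
        x = target_tokens
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h, need_weights=False)[0]
        # Cross-attention: target tokens query the past-frame tokens (keys/values).
        h = self.norm2(x)
        c = context_tokens
        x = x + self.cross_attn(h, c, c, need_weights=False)[0]
        return x + self.mlp(self.norm3(x))
```

Because only the cross-attention touches the context tokens, the cost of adding more context frames grows linearly rather than quadratically, which matches the scaling argument made above.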
Here we experiment with adding unlabeled data, and ask whether NWM can make predictions in new environments using imagination. In this experiment we train a model on all in-domain datasets, as well as a subset of unlabeled videos from Ego4D, where we only have access to the time-shift action. We train a CDiT-XL model and test it on the Go Stanford dataset as well as other random images. We report the results in Table 4, finding that training on unlabeled data leads to significantly better video predictions according to all metrics, including improved generation quality. We include qualitative examples in the figure. Compared to in-domain prediction, the model breaks down faster and, as expected, hallucinates paths as it generates traversals of imagined environments.

We introduce MVSplat360, a feed-forward approach for 360° novel view synthesis (NVS) of diverse real-world scenes, using only sparse observations. This setting is inherently ill-posed due to minimal overlap among input views and the insufficient visual information provided, making it challenging for conventional methods to achieve high-quality results. Our MVSplat360 addresses this by effectively combining geometry-aware 3D reconstruction with temporally consistent video generation. Specifically, it refactors a feed-forward 3D Gaussian Splatting (3DGS) model to render features directly into the latent space of a pre-trained Stable Video Diffusion (SVD) model, where these features then act as pose and visual cues to guide the denoising process and produce photorealistic, 3D-consistent views. Our model is end-to-end trainable and supports rendering arbitrary views with as few as 5 sparse input views. To evaluate MVSplat360's performance, we introduce a new benchmark using the challenging DL3DV-10K dataset, where MVSplat360 achieves superior visual quality compared to state-of-the-art methods on wide-sweeping or even 360° NVS tasks. Experiments on the existing RealEstate10K benchmark also confirm the effectiveness of our model.

The rapid advancement in 3D reconstruction and NVS has been facilitated by the emergence of differentiable rendering [29, 31, 41, 40, 21]. These methods, while fundamental and impressive, are primarily tailored for per-scene optimization, requiring hundreds or even thousands of images to comprehensively capture every aspect of the scene. Consequently, the optimization process for each scene can be time-consuming, and collecting thousands of images is impractical for casual users. In contrast, we consider the problem of novel view synthesis in diverse real-world scenes using a limited number of source views through a feed-forward network. In particular, this work investigates the feasibility of rendering wide-sweeping or even 360° novel views using extremely sparse observations, such as fewer than 5 images. This task is inherently challenging due to the complexity of scenes, where the limited views do not contain sufficient information to recover the whole 3D scene. Consequently, it is necessary to accurately assemble the visible information under minimal overlap and to generate missing details plausibly.

This represents a new problem setting in sparse-view feed-forward NVS. Existing feed-forward methods typically focus on two distinct scenarios: 360° NVS with extremely sparse observations, but only at the object level [20, 52, 66, 59, 27, 72, 63, 56, 62, 50, 46, 47, 18], or generating reasonable results for scene-level synthesis, but only for nearby viewpoints [53, 7, 19, 43, 9, 13, 6, 60, 10, 70, 42, 61]. In contrast, we argue that the time is ripe to unify these previously distinct research directions. Our goal should be to develop systems capable of synthesizing wide-sweeping or even 360° novel views of large, real-world scenes with complex geometry and significant occlusion. Specifically, this work explores synthesizing 360° novel views from fewer than 5 input images.
We show that in this challenging setting, existing feed-forward scene synthesis approaches [9, 13, 6, 60, 10, 57] struggle to succeed. This failure arises from two main factors: i) the limited overlap among input views causes much of the content to appear in only a few views or even a single one, posing significant challenges for 3D reconstruction; ii) the extremely sparse observations lack sufficient information to capture the comprehensive details of the whole scene, resulting in regions unobserved from novel viewpoints.

In this paper, we propose a simple yet effective framework to address these limitations and introduce the first benchmark for feed-forward 360° scene synthesis from sparse input views. Our key idea is to leverage prior knowledge from a large-scale pre-trained latent diffusion model (LDM) [35] to "imagine" plausible unobserved and disoccluded regions in novel views, which are inherently highly ambiguous. Unlike existing 360° object-level NVS approaches [27, 63, 49, 56, 25, 72, 50], large-scale real-world scenes comprise multiple 3D assets with complex arrangements, heavy occlusions, and varying rendering trajectories, which makes it particularly challenging to condition solely on camera poses, as also verified by the concurrent work ViewCrafter [68].

To develop a performant framework for scene-level synthesis, we opt to treat the LDM as a refinement module, while relying on a 3D reconstruction model to process the complex geometric information. Broadly, we build upon the feed-forward 3DGS [21] model MVSplat [10] to obtain coarse novel views by matching and fusing multi-view information with its cross-view transformer and cost volume. Although these results are imperfect, exhibiting visual artifacts and missing regions (see the figure), they capture a reasonable geometric structure of the scene, as they are rendered from a 3D representation. Furthermore, we choose Stable Video Diffusion (SVD) [3] over image-based LDMs as the refinement module, since its strong temporal consistency aligns better with the view-consistency requirement of the NVS task, as also observed by the concurrent work 3DGS-Enhancer [28]. Conditioning SVD on the 3DGS rendered outputs, our MVSplat360 produces visually appealing novel views that are multi-view consistent and geometrically accurate.

We conduct a series of experiments, mainly on two datasets. First, we establish a new benchmark on the DL3DV-10K dataset [23], creating a new training and testing split for feed-forward wide-sweeping and 360° NVS. In this challenging setting, our MVSplat360 achieves photorealistic 360° NVS from sparse observations and demonstrates significantly better visual quality, where previous scene-level feed-forward methods [9, 6, 60, 10] fail to produce plausible results. Second, we deploy MVSplat360 on the existing RealEstate10K [74] benchmark. Following latentSplat [57], we evaluate both interpolation and extrapolation NVS, and report state-of-the-art performance.

Our main contributions can be summarized as follows. 1) We introduce a crucial and pressing problem for novel view synthesis, i.e., how to perform wide-sweeping or even 360° NVS from sparse and widely-displaced observations of diverse in-the-wild scenes (not objects) in a feed-forward manner (without any per-scene optimization).
2) We propose an effective solution that integrates the latest feed-forward 3DGS with a pre-trained Stable Video Diffusion (SVD) model through carefully designed integration, where the former reconstructs coarse geometry and the latter refines the noisy and incomplete coarse reconstruction. 3) Extensive results on the challenging DL3DV-10K and RealEstate10K datasets demonstrate the superior performance of our MVSplat360.

We opt to go beyond per-scene optimization [1, 2, 58, 15] and to build a more general feed-forward network capable of achieving 360° NVS for unseen scenes, without the need for additional per-scene training. This requires effectively matching information between sparse views in 3D space, as well as generating sufficient content based on only partial observations. To achieve that, our MVSplat360 framework, illustrated in the figure, comprises two main components: a multi-view geometry reconstruction module (Section 3.1) and a multi-frame consistent appearance refinement network (Section 3.2). The former is responsible for matching and fusing multi-view information from sparse observations to create a coarse geometry reconstruction, whereas the latter is designed to refine the appearance with a pre-trained latent video diffusion model. While similar two-step approaches have been explored in recent related works, e.g., [4, 58, 38, 15], we are, to the best of our knowledge, the first to explore it for wide-sweeping or even 360° NVS of large-scale scenes from sparse views (as few as 5) in a feed-forward manner.

Observed and novel viewpoint selection. To enable 360° scene synthesis, it is crucial to choose the camera viewpoints correctly, so that they cover most of the content in diverse and complex scenes [15]. It is impractical to assume a circular orbital camera trajectory as in object-level 360° view synthesis [27, 46, 72, 47], whereas randomly choosing a video sequence as in existing scene-level nearby-viewpoint synthesis [46, 6, 10] is suboptimal. To this end, we propose to choose input views that are evenly distributed within a set of targeted viewpoints. Specifically, for a given set of candidate views, we apply farthest point sampling over the camera locations to identify the input views and randomly choose from the rest as target views. The number of candidate views gradually increases throughout training, stably improving the model's capability toward handling 360° scene synthesis.
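A minimal version of this view selection is sketched below: farthest point sampling over the camera centers greedily returns k well-spread input views. The array shapes and the choice of starting index are assumptions for illustration.

```python
# Farthest point sampling over camera positions (sketch of the input-view
# selection strategy). 'positions' is an [N, 3] array of camera centers.
import numpy as np

def farthest_point_sampling(positions, k, first_idx=0):
    selected = [first_idx]
    dist = np.linalg.norm(positions - positions[first_idx], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dist))                     # farthest from the current set
        selected.append(idx)
        dist = np.minimum(dist, np.linalg.norm(positions - positions[idx], axis=1))
    return selected                                    # indices of k spread-out views
```

Each step adds the camera farthest from all previously chosen ones, so the selected views cover the candidate set as evenly as possible.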
latentSplat [57] improves the results with an additional decoder and adversarial training. However, its resulting object geometry and image quality are still far from satisfactory, suggesting that the GAN-based framework cannot provide enough prior knowledge for refining 360° NVS in diverse real-world scenes. Readers are referred to our project page for video results with more comprehensive comparisons, where our MVSplat360 shows clearly better visual quality.

We propose DrivingForward, a feed-forward Gaussian Splatting model that reconstructs driving scenes from flexible surround-view input. Driving scene images from vehicle-mounted cameras are typically sparse, with limited overlap, and the movement of the vehicle further complicates the acquisition of camera extrinsics. To tackle these challenges and achieve real-time reconstruction, we jointly train a pose network, a depth network, and a Gaussian network to predict the Gaussian primitives that represent the driving scenes. The pose network and depth network determine the position of the Gaussian primitives in a self-supervised manner, without using depth ground truth or camera extrinsics during training. The Gaussian network independently predicts primitive parameters from each input image, including covariance, opacity, and spherical harmonics coefficients. At the inference stage, our model achieves feed-forward reconstruction from flexible multi-frame surround-view input. Experiments on the nuScenes dataset show that our model outperforms existing state-of-the-art feed-forward and scene-optimized reconstruction methods in terms of reconstruction quality.

3D scene reconstruction is critical for understanding driving scenes. Modern self-driving assistance cars are usually equipped with several cameras to capture the surrounding scene. Real-time reconstruction of driving scenes from sparse vehicle-mounted cameras contributes to various downstream tasks in autonomous driving, including online mapping (Li et al. 2022a), BEV perception (Li et al. 2022b; Liang et al. 2022; Liu et al. 2023), and 3D detection (Chen et al. 2023; Cai et al. 2023). However, the real-time computing required by downstream tasks and the sparse surrounding views make driving scene reconstruction challenging.

Neural Radiance Fields (NeRF) (Mildenhall et al. 2021) and 3D Gaussian Splatting (3DGS) (Kerbl et al. 2023) have significantly progressed the development of 3D scene reconstruction. DrivingGaussian (Zhou et al. 2024), StreetGaussian (Yan et al. 2024), and AutoSplat (Khan et al. 2024) further explore the reconstruction of driving scenes. While these methods demonstrate strong capability in novel view synthesis, they are scene-optimized methods that require dozens of images and expensive computing time to reconstruct just one scene. Such offline reconstruction methods are unsuitable for real-time downstream tasks in autonomous driving, thereby limiting their practicality.

Our goal is to achieve online, generalizable driving scene reconstruction from sparse surrounding views. Several attempts, such as pixelSplat (Charatan et al. 2024) and MVSplat (Chen et al. 2024), have explored generalizable reconstruction. They learn powerful priors from large-scale datasets during training and achieve fast 3D reconstruction from sparse input views through a feed-forward inference. Unfortunately, these methods are difficult to apply in driving scenes. Since the number of vehicle-mounted cameras is limited (usually 6), the overlap of adjacent views is extremely small (as low as 10%), while these existing methods often require densely overlapping (usually over 60%) input images. Additionally, acquiring camera extrinsics for each view at various timesteps in driving scenes is costly. These methods depend on such data during training, limiting their practical applicability. A recent NeRF-based work, DistillNeRF (Wang et al. 2024), attempts to develop a generalizable 3D representation for driving scenes. However, it attains suboptimal performance and relies on LiDAR to train numerous NeRF models for distillation, which is extremely computationally expensive. Besides, previous feed-forward methods typically have a fixed mode of input views, either using stereo images (e.g., MVSplat, pixelSplat) or single-frame images of surrounding views (e.g., DistillNeRF).
However, as the vehicle moves forward and captures surround-view images frame by frame, we aim to support flexible multi-frame inputs for reconstruction, such as predicting the next frame's views from single-frame surrounding views or synthesizing intermediate-frame surrounding views from two interval frames.

By unprojecting the consistent depth estimation, we obtain the positions of the Gaussian primitives. The other Gaussian parameters are predicted individually from each image through a Gaussian network. The Gaussian network is jointly trained with the pose network and depth network. It takes the depth map and image feature from the depth network as input and outputs the covariance, opacity, and spherical harmonics coefficients of the Gaussian primitives. Since Gaussian primitives are predicted independently from single-frame images of surrounding views, our method is not constrained by a fixed number of input frames. This allows for flexible multi-frame surround-view input, such as predicting the next frames' views from the current frame or synthesizing the intermediate frame from two interval frames. Extensive experiments on the nuScenes dataset demonstrate that our DrivingForward outperforms other feed-forward methods on novel view synthesis under various inputs. It also achieves higher reconstruction quality compared to scene-optimized methods with the same input. A functional comparison between DrivingForward and the latest related works is also provided.

DrivingForward learns powerful priors from large-scale driving scene datasets during training and achieves real-time driving scene reconstruction in a feed-forward manner from sparse vehicle-mounted cameras at the inference stage. We take N sparse camera images as input and aim to predict Gaussian primitives from the input view images. The overall framework is illustrated in the figure. A pose network predicts the vehicle motion and a depth network estimates the scale-aware depth from the input. We assign each pixel one Gaussian primitive whose position is located through the estimated depth; the other parameters are predicted by the Gaussian network. We unproject the Gaussian primitives from all views into 3D space, render them to the target view in a differentiable way, and jointly train the full model end-to-end. At the inference stage, the depth network and Gaussian network are used for feed-forward reconstruction. Since the scale-aware localization and the prediction of the other parameters do not depend on other frames, we can flexibly input different numbers of surround-view frames during inference.

In driving scenes, with multi-frame surrounding views, we take different inputs as the source image to compute the photometric loss. First, we use images from the same camera at different frames, denoted as temporal contexts. Then, we use images from adjacent cameras at the same frame, denoted as spatial contexts. We also combine the two, using images from adjacent cameras at different frames, denoted as spatial-temporal contexts. The key insight is to leverage the small overlap between spatially and temporally neighboring images for matching, which provides scale information and enables learning scale-aware camera motion and depth maps during training.

Note that this camera-to-vehicle transformation matrix is fixed for each camera across all timesteps and is relatively easy to obtain in practice, whereas the general world-to-camera extrinsics vary at every timestep and are thus costly to collect. Leveraging the fixed camera-to-vehicle transformation matrix and the camera motion predicted by the pose network, we do not require the world-to-camera extrinsics during training, which is another advantage of our method.
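The sketch below shows one way such a composition could look: a fixed camera-to-vehicle transform and the predicted ego motion are chained to relate a source camera at time t to a target camera at time t+1, with no per-frame world-to-camera extrinsics involved. The transform conventions (which direction each matrix maps) are assumptions stated in the comments, not the paper's exact formulation.

```python
# Sketch of pose composition without per-frame world-to-camera extrinsics.
import numpy as np

def relative_camera_pose(T_cam2veh_src, T_cam2veh_tgt, T_ego):
    """All inputs are 4x4 homogeneous transforms.
    T_cam2veh_*: fixed transform from a camera frame to the vehicle frame.
    T_ego: predicted transform taking point coordinates expressed in the vehicle
           frame at time t to the vehicle frame at time t+1 (pose-network output).
    Returns the transform from the source camera at t to the target camera at t+1."""
    return np.linalg.inv(T_cam2veh_tgt) @ T_ego @ T_cam2veh_src
```

The same composition covers temporal contexts (same camera, different frames), spatial contexts (adjacent cameras, T_ego set to the identity) and spatial-temporal contexts (adjacent cameras across frames).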
Unlike pixelSplat (Charatan et al. 2024) and MVSplat (Chen et al. 2024), which rely on paired views to predict Gaussian parameters, we propose a Gaussian network that independently predicts these parameters from each image and then aggregates the Gaussian primitives across all images, which is more suitable for driving scenes with small overlap between sparse vehicle-mounted cameras. To ensure that the predicted Gaussian parameters are consistent across all input views, we utilize the scale-aware depth and image feature from the depth network as input to the Gaussian network. Since the depth network learns scale information from spatial and temporal context images, we argue that the scale-aware depth and image feature enhance the multi-view consistency of the remaining Gaussian parameters. By applying scale-aware localization and Gaussian parameter prediction to each input view, we obtain the Gaussian primitives for all images. These primitives are then aggregated in 3D space to form a 3D representation. Novel view synthesis is achieved through the splat-based rasterization rendering of 3DGS (Kerbl et al. 2023).

We jointly train the full model, including the depth network, the pose network, and the Gaussian network. For the warp operation of the depth and pose networks, we use a spatial transformer network (Jaderberg et al. 2015) to sample the synthesized image from the source image, which is fully differentiable (Godard, Mac Aodha, and Brostow 2017). For rendering novel views after obtaining the Gaussian primitives in 3D space, the splat-based rasterization rendering is also fully differentiable. These two operations, along with the other differentiable parts, enable end-to-end joint training. We fuse image features from the depth network into the Gaussian network. This shared feature connects the scale-aware positions with the predictions of the other Gaussian parameters, allowing the Gaussian network to leverage scale information from temporal and spatial contexts. Additionally, it promotes the convergence of the full model.

We report the quantitative results of the comparison with state-of-the-art feed-forward methods in Table 1. Although we adapt our method to align with the different settings of the baselines, we outperform them across all metrics under the corresponding configurations. The qualitative comparisons of MVSplat, pixelSplat, and our method are visualized in the figure. Our DrivingForward achieves the highest quality on novel view results even for challenging details, such as the traffic sign in the front-left view and the monument with words in the back-right view. Other methods show obvious artifacts in these regions, while our method synthesizes clear novel views without such artifacts.
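To make the per-image Gaussian parameter prediction described above concrete, here is a minimal per-pixel head that maps a feature map to opacity, scale, rotation and SH color coefficients for one Gaussian per pixel. Channel sizes, activations and the SH degree are assumptions for illustration, not DrivingForward's architecture.

```python
# Sketch of a per-pixel Gaussian parameter head (one Gaussian per pixel).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianParamHead(nn.Module):
    def __init__(self, feat_dim=64, sh_degree=1):
        super().__init__()
        self.n_sh = 3 * (sh_degree + 1) ** 2           # RGB spherical harmonics coeffs
        out_ch = 1 + 3 + 4 + self.n_sh                 # opacity + scale + quaternion + SH
        self.head = nn.Sequential(
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, out_ch, 1))

    def forward(self, feats):                          # feats: [B, C, H, W]
        out = self.head(feats)
        opacity = torch.sigmoid(out[:, :1])            # in (0, 1)
        scale = torch.exp(out[:, 1:4].clamp(max=8))    # positive, numerically safe
        quat = F.normalize(out[:, 4:8], dim=1)         # unit quaternion -> rotation
        sh = out[:, 8:]                                # view-dependent color
        return opacity, scale, quat, sh
```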
We propose GGS, a Generalizable Gaussian Splatting method for autonomous driving that can achieve realistic rendering under large viewpoint changes. Previous generalizable 3D Gaussian splatting methods are limited to rendering novel views that are very close to the original pair of images and cannot handle large differences in viewpoint. In autonomous driving scenarios in particular, images are typically collected from a single lane, and the limited training perspective makes rendering images of a different lane very challenging. To further improve the rendering capability of GGS under large viewpoint changes, we introduce a novel virtual lane generation module into the GGS method to enable high-quality lane switching even without a multi-lane dataset. Besides, we design a diffusion loss to supervise the generation of virtual lane images, further addressing the lack of data for the virtual lanes. Finally, we also propose a depth refinement module to optimize depth estimation in the GGS model. Extensive validation of our method, compared with existing approaches, demonstrates state-of-the-art performance.

Novel view synthesis is an essential task in the field of computer vision, with significant potential applications in autonomous driving (Yang et al. 2020; Wu et al. 2023; Liu et al. 2023; Yang et al. 2023b, 2024a; Yu et al. 2024), object detection, and digital human representations. To enhance the robustness of autonomous driving systems, it is imperative to establish a simulation environment for testing these systems effectively. However, the majority of existing datasets are limited to single-lane scenarios. This limitation presents significant challenges in inferring adjacent-lane scenarios from the current viewpoint. If lane switching is not supported, the test samples provided to the autonomous driving simulation system will be incomplete, making better simulation testing impossible and incurring significant data collection costs.

Methods based on NeRF (Mildenhall et al. 2021; Yang et al. 2023a) often rely on LiDAR to better generate novel views in autonomous driving scenarios. READ (Li, Li, and Zhu 2023) introduces a new rendering method that adopts a neural rendering approach different from NeRF: it learns neural descriptors of the original point cloud with explicit geometry to render images, instead of learning the implicit geometry of NeRF methods. However, the training and rendering efficiency of these methods is very low.

The efficiency in training and rendering speed, coupled with the high reconstruction quality of 3D Gaussian Splatting (Kerbl et al. 2023), contributes to its widespread application to novel view synthesis in autonomous driving. GaussianPro (Cheng et al. 2024) introduces multi-view stereo to improve the geometry of generated Gaussian splats. DC-Gaussian (Cheng et al. 2023) introduces an adaptive image decomposition module to address the impact of glass reflections on the quality of novel view synthesis. However, these methods still cannot perform effective novel view synthesis for lane switching, as they do not address the main problem that only single-lane data are collected.

To address the problem of sparse view synthesis, many methods have sought to optimize this process using generative models (Yu et al. 2021; Chen et al. 2024; Liu et al. 2024; Wu et al. 2024; Tang et al. 2024). Generative models are trained across a large number of scenes to enhance performance in sparse view scenarios. However, such generative models still lack multi-lane data to learn how to synthesize novel views for other lanes from single-lane data.

Therefore, we propose a virtual lane module within generative Gaussian splatting to address the synthesis of new views involving lane changes, despite the lack of multi-lane training datasets for supervision.
In this module, we first use 3D Gaussians generated from single-lane images by a generative model to predict images from virtual lanes, and then use 3D Gaussians generated from those virtual images to predict back the images collected in the single lane. In this way, the generative model learns to generate the best possible images for the other lanes even with only single-lane data. In addition, we propose a diffusion loss derived from a latent diffusion model (Sohl-Dickstein et al. 2015; Song, Meng, and Ermon 2020; Nichol and Dhariwal 2021; Rombach et al. 2022) on the generated virtual images to further improve the lane switching of our GGS. Finally, as improving the geometry of the generated 3D Gaussians also improves novel view synthesis from sparse view collections, we employ points from traditional multi-view stereo reconstruction to refine the depth estimated in GGS. The main contributions of this paper can be summarized as follows:
• We propose a novel virtual lane module within generative Gaussian splatting to improve the quality of lane-switching novel views using only single-lane data.
• We introduce a diffusion loss that directly supervises the virtual-lane images predicted by GGS, further improving novel view synthesis from limited collected views.
• We propose fusing MVS geometry into generative 3D Gaussian splatting to improve geometry estimation.
• We conduct extensive experiments on a wide range of scenarios to validate the effectiveness of our algorithm, and achieve state-of-the-art street novel view synthesis even without LiDAR.

Although generalizable models can assist in synthesizing novel views from sparse views, insufficient view information leads to inaccurate depth estimation. Our method further optimizes the generalizable model. The overall framework of our GGS method is shown in the figure. We input four images from different frames and introduce neighborhood features in the Multi-View Depth Refinement Module to better handle scenes with occlusions, and we introduce more global information to optimize the predicted depth map by using MVS. In the Virtual Lane Generation Module, we introduce the concept of virtual lanes and solve the problem of not having a multi-lane dataset by switching back after switching, allowing the model to flexibly switch lanes. In addition, we introduce the Multi-Lane Diffusion Loss to supervise the novel view synthesis.

Multi-View Depth Refinement Module. We enhance MVSplat with our Multi-View Depth Refinement Module, i.e., Modified MVSplat, which produces more accurate 3D Gaussian primitives and improves the quality of novel view synthesis. To better infer unknown regions, we incorporate the color feature information of the neighborhood of the current view. We use the back-projected point cloud map reconstructed with Agisoft Metashape as an additional input color feature for the U-Net, from which the feature representation of the neighborhood is computed.
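The closed-loop "switch and switch back" supervision described above can be sketched as a cycle-consistency objective. In the snippet, `reconstruct` (images + poses → Gaussians) and `render` (Gaussians + pose → image) are stand-ins for the feed-forward generator and the splatting renderer; their signatures and the simple L1 loss are assumptions rather than GGS's exact losses.

```python
# Sketch of a virtual-lane cycle-consistency loss (illustrative only).
import torch

def virtual_lane_cycle_loss(reconstruct, render, images, poses, virtual_poses):
    # 1) Build Gaussians from the real single-lane images and render virtual-lane views.
    gaussians = reconstruct(images, poses)
    virtual_views = torch.stack([render(gaussians, p) for p in virtual_poses])
    # 2) Build Gaussians from the rendered virtual-lane views and render back
    #    to the original (real) viewpoints, where ground truth exists.
    gaussians_v = reconstruct(virtual_views, virtual_poses)
    back_views = torch.stack([render(gaussians_v, p) for p in poses])
    # 3) Supervise the round trip against the captured single-lane images.
    return torch.mean(torch.abs(back_views - images))
```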
Previous generalizable 3D Gaussian splatting methods are limited to rendering novel views that are very close to the original pair of images and cannot handle large differences in viewpoint. Especially in autonomous driving scenarios, images are typically collected from a single lane, and this limited training perspective makes rendering images of a different lane very challenging. We have obtained 3D Gaussians using our modified depth refinement module. To further improve the rendering capability of GGS under large viewpoint changes, we introduce the virtual lane approach, which enables high-quality lane switching even without a multi-lane dataset, inspired by (Huang et al. 2023).

The virtual lane converter selects an appropriate virtual lane, so that the switching amplitude is not so large that no scene content remains visible from the virtual perspective. A pose for the virtual lane is then generated by translating the current pose perpendicular to the lane direction. Finally, a virtual perspective is rendered based on the pose of the virtual lane. With the virtual lane module, our GGS pipeline mainly consists of two stages. In the first stage, we input a set of N images; rendering without shifting the view produces images consistent with the ground truth, and the current lane generates a collection of virtual-lane renderings through the lane converter.

This forms a closed-loop process of switching to a new lane and then switching back. The advantage of doing so is that, even without ground truth for the left and right lanes, we can still enhance the quality of the model's rendering of the left and right lanes by establishing virtual lanes, allowing the model to improve the quality of lane switching, as shown in the figure.

There is no ground truth available for training when switching lanes. When the lane switching amplitude is large, obstacles can obstruct the view during lane changes, making it impossible to collect information about the new lane from the current lane, as shown in the figure. Therefore, to better address this issue, we use diffusion prior knowledge to imagine color information for the novel lane view. The traditional diffusion denoising approach directly completes the generated image, but due to the diversity of generative models, this can lead to inconsistent results between frames. Our method computes a loss between the denoised image and the image before denoising, and generates the new perspective under diffusion supervision. Additionally, we construct multi-lane novel view images instead of using only the current-lane image as input for U-Net denoising. This helps ensure that the driving lane remains visible in the image after a change in viewpoint. Specifically, we adapt the Stable Diffusion framework (Rombach et al. 2022) and use its Variational AutoEncoder (Kingma and Welling 2013) to encode the multi-lane images, including the left, middle, and right lanes, into latent codes.

The Bird's-eye View (BeV) representation is widely used for 3D perception from multi-view camera images. It allows merging features from different cameras into a common space, providing a unified representation of the 3D scene. The key component is the view transformer, which transforms image views into the BeV. However, current view transformer methods based on geometry or cross-attention do not provide a sufficiently detailed representation of the scene, as they use a sub-sampling of the 3D space that is non-optimal for modeling the fine structures of the environment. In this paper, we propose GaussianBeV, a novel method for transforming image features to BeV by finely representing the scene with a set of 3D gaussians located and oriented in 3D space.
This representation is then splattered to produce the BeV feature map by adapting recent advances in 3D representation rendering based on gaussian splatting [12]. GaussianBeV is the first approach to use this 3D gaussian modeling and 3D scene rendering process online, i.e., without optimizing it on a specific scene, and directly integrated into a single-stage model for BeV scene understanding. Experiments show that the proposed representation is highly effective and places GaussianBeV as the new state of the art on the BeV semantic segmentation task on the nuScenes dataset [2].

Multi-camera 3D perception tasks, such as semantic segmentation, are crucial for autonomous navigation applications. A common strategy involves projecting and merging features from different cameras into a bird's-eye view (BeV) representation, which is then analyzed by perception heads. The primary challenge in these approaches lies in addressing the loss of 3D information during the projection of the physical world onto camera images, i.e., solving the inverse problem of transforming image views into the BeV.

Recent literature identifies three main families of methods for image-to-BeV transformation. First, depth-based methods [8, 10, 13, 18] achieve view transformation geometrically by filling a 3D grid with features extracted from images based on a predicted discrete depth distribution. The key idea is to roughly localize the 3D position of each image feature and then accumulate the features through a voxelization step. However, in these approaches 3D feature localization depends on the depth discretization granularity and is sub-optimal, as features are placed on the visible faces of objects. Second, projection-based methods [3, 7] also use a geometric approach, projecting 3D grid points into the cameras and gathering the corresponding features. While straightforward, these methods do not yield accurate 2D-to-3D back-projection, as all grid points along the same camera ray receive the same feature. Third, transformer-based methods [15, 17, 27] utilize cross-attention to merge multi-view features. Though effective for 3D object detection, their application to dense tasks like BeV semantic segmentation incurs a high computational cost due to the dense spatial queries needed to represent the BeV in the attention process. Some works [17, 27] address this by reducing the BeV resolution, leading to inevitable information loss.

In this article, we propose a novel view transformation method called GaussianBeV, which enables fine 3D modeling of scenes. Drawing on recent advances in explicit 3D representations for novel view rendering based on Gaussian Splatting (GS) [12], our method represents a scene using a set of 3D gaussians, each parameterized by a center, scale, rotation, opacity and semantic features (instead of colors as in GS). Furthermore, unlike the original GS method, which uses offline optimization on a specific scene to determine the 3D gaussian representation, we propose to train a neural network to directly generate an online 3D gaussian representation of the scene from a set of multi-view images. This representation is then rendered into a BeV feature map that is analyzed by semantic segmentation heads.

Representing a scene by a set of 3D gaussians allows modeling its entire content. Indeed, the geometrical properties of the gaussians (position, size and rotation) make it possible to cover 3D space with a varying level of detail depending on the structures encountered in the scene.
Intuitively, a gaussian representing a lane marking will be rotated and elongated along its length, while a gaussian representing a vehicle will be placed at the center of the vehicle and will follow its shape. The figure illustrates the intuition behind the representation proposed in this paper. Our contributions can be summarized as follows. (1) Introduction of GaussianBeV for BeV feature map generation from images through an online image-to-3D-gaussian representation of the scene, allowing fine modeling of 3D content. This representation is then splattered into the BeV using a rasterizer module. To our knowledge, this is the first time that a gaussian splatting representation that is not scene-specific is proposed and integrated into a perception model. (2) Experiments demonstrating the effectiveness of our method, establishing it as the new state of the art in BeV semantic segmentation.

2. Related work

Depth-based. A series of models were built on the explicit prediction of pixel-wise depth estimates along with image features. Combined with camera calibration parameters, this enables the back-projection of 2D features into a 3D feature point cloud which is finally aggregated in the BeV grid. To accommodate uncertainty in depth estimation, the features are actually propagated all along the ray that traverses their pixel and modulated by a discrete depth probability estimate [8, 10, 18]. To improve depth prediction, an explicit depth supervision scheme is proposed in [13], using LiDAR data during model training. However, depth-based methods are sensitive to the ray sampling strategy, usually back-projecting features along the ray and onto object surfaces (see (a)).

Projection-based. Through a thorough comparative study, [7] proposes to discard depth estimation in favor of a simpler projection scheme: a predefined set of 3D points describes the scene, and camera features are probed by projecting the points onto the camera feature maps using calibration data. This projection disregards the actual placement of objects and background but returns a denser representation of the scene without voids beyond the depth of object surfaces. The computational and memory overhead of generating the BeV grid is reduced by opting for a sparse grid representation [3]. Projection-based view transformation methods are simple but result in a coarse BeV representation, because all voxels along the same optical ray receive the same features (see (b)).

Attention-based. Capitalizing on recent advances in Transformer models, depth estimation is replaced by an attention-based feature modulation scheme [15–17, 22, 27]. Several optimization schemes have been proposed to resolve the computational complexity of pairwise matching between image and BeV grid tokens: factorization of spatial and temporal attention, deformable attention [15], and the injection of calibration and timestamp priors [17]. For the segmentation task, attention-based view transformation is computationally and memory intensive, due to the need to define a dense query map [15]. This is why some methods [17, 27] predict a low-resolution BeV (see (c)), which is then upsampled by successive deconvolutions.

Gaussian splatting. Gaussian splatting (GS) [12] is a 3D scene rendering technique which uses 3D gaussians to describe a scene. Each gaussian is parameterized by its position, scale, rotation, opacity and a Spherical Harmonics color model. The entire rendering pipeline is differentiable, allowing the optimization of the gaussian parameters for a particular scene based on a set of images.
GS is both fast and parallel, allowing real-time operation on GPUs. Moreover, it is possible to trade off details for rendering speed by pruning smaller and transparent gaussians. In comparison to sparse voxel grids, gaussians offer more efficient representations of a scene since individual gaussians can describe large volumes while smaller ones can accurately encode finer details with arbitrary resolution. Details and rasterization resolutions are parameters of the rendering pipeline, not the scene description. Several extensions have been proposed, allowing the management of dynamic objects [14, 25] or the distillation of semantic features from foundation models into the representation [19, 20, 29]. In all cases, the gaussian representation is scene-specific.

In our work, we propose to use a gaussian representation to overcome the drawbacks of previous view transformer methods. Unlike previous gaussian splatting works, which learn gaussian representations offline, we propose to learn a neural network capable of providing an online gaussian representation of the scene.

The figure presents an overview of GaussianBeV. The model takes as input a set of multi-view images; extrinsic parameters are applied to transform the 3D gaussians from the camera to the world reference frame and finally concatenate all the gaussians into the single set G. The third module is the BeV rasterizer (Sec 3.3) that performs a BeV rendering of the 3D gaussian set G to produce the BeV feature map.

Given the input feature maps F, the 3D gaussian generator predicts the 3D gaussian representation of the scene using several prediction heads. The figure illustrates how it operates on the feature maps: for each pixel, it calculates the corresponding 3D gaussian by passing through prediction heads (green boxes). Some of these predictions are then decoded (blue and red boxes) and transformed to be expressed in the world reference frame (yellow boxes). All predicted gaussian parameters are then concatenated to produce the 3D gaussian representation G of the scene.

Gaussian centers. The 3D positions of the gaussians in the scene are estimated by a depth head and a 3D offset head applied to F. The first predicts an initial position of the 3D centers along the optical rays. The second refines this 3D position by adding a small 3D displacement to it, giving more flexibility in the positioning of gaussians by not freezing them along the optical rays.

The figure shows qualitative results for vehicle and ground surface segmentation. It also gives a visualization of the output of the BeV rasterizer module. To visualize the BeV feature map, we applied Principal Component Analysis (PCA) to the 3D gaussian features before BeV rendering. This allows us to reduce the size of the features to render the BeV features in color.
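As a rough illustration of the per-pixel gaussian generator and camera-to-world transform described above, the sketch below shows one plausible way to wire the prediction heads together. It is a minimal, non-authoritative reconstruction: all module names, channel sizes and tensor shapes are assumptions, and the BeV rasterizer itself is only referenced in a comment.

```python
import torch
import torch.nn as nn
import torch.nn.functional as Fnn

class GaussianGenerator(nn.Module):
    """Hypothetical per-pixel gaussian heads; channel sizes are illustrative, not the paper's."""
    def __init__(self, c_in=256, c_feat=64):
        super().__init__()
        self.depth = nn.Conv2d(c_in, 1, 1)        # initial center position along the optical ray
        self.offset = nn.Conv2d(c_in, 3, 1)       # small 3D refinement off the ray
        self.rotation = nn.Conv2d(c_in, 4, 1)     # quaternion
        self.scale = nn.Conv2d(c_in, 3, 1)
        self.opacity = nn.Conv2d(c_in, 1, 1)
        self.semantic = nn.Conv2d(c_in, c_feat, 1)  # semantic features replace the GS color model

    def forward(self, feats, ray_origins, ray_dirs, R_c2w, t_c2w):
        d = self.depth(feats).exp()                                   # positive depth
        centers_cam = ray_origins + ray_dirs * d + self.offset(feats)
        # camera -> world: x_w = R x_c + t (einsum over the channel dimension)
        centers_world = torch.einsum('bij,bjhw->bihw', R_c2w, centers_cam) + t_c2w[:, :, None, None]
        return {
            "centers": centers_world,
            "rotations": Fnn.normalize(self.rotation(feats), dim=1),
            "scales": self.scale(feats).exp(),
            "opacities": self.opacity(feats).sigmoid(),
            "features": self.semantic(feats),
        }

# Usage sketch: gaussians from every camera are concatenated into a single set G and handed
# to a BeV rasterizer (not shown) that splats them onto the ground plane.
B, C, H, W = 1, 256, 28, 60
gen = GaussianGenerator()
G = gen(torch.randn(B, C, H, W), torch.zeros(B, 3, H, W), torch.randn(B, 3, H, W),
        torch.eye(3).expand(B, 3, 3), torch.zeros(B, 3))
```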
BeV backbone. We conducted experiments to evaluate the influence of the choice of the BeV backbone. In particular, GaussianBeV is trained (1) without using a BeV backbone, by connecting the BeV feature directly to the segmentation heads (row 2 in Table 4) and (2) with the BeV backbone.", ["2407_14108v1_0", "2407_14108v1_0", "2407_14108v1_0", "2407_14108v1_0", "2407_14108v1_1", "2407_14108v1_2", "2407_14108v1_2", "2407_14108v1_2", "2407_14108v1_3"]]

[60006, "World models have demonstrated superiority in autonomous driving, particularly in the generation of multi-view driving videos. However, significant challenges still exist in generating customized driving videos. In this paper, we propose DriveDreamer-2, which builds upon the framework of DriveDreamer and incorporates a Large Language Model (LLM) to generate user-defined driving videos. Specifically, an LLM interface is initially incorporated to convert a user's query into agent trajectories. Subsequently, a HDMap, adhering to traffic regulations, is generated based on the trajectories. Ultimately, we propose the Unified Multi-View Model to enhance temporal and spatial coherence in the generated driving videos. DriveDreamer-2 is the first world model to generate customized driving videos; it can generate uncommon driving videos (e.g., vehicles abruptly cut in) in a user-friendly manner. Besides, experimental results demonstrate that the generated videos enhance the training of driving perception methods (e.g., 3D detection and tracking). Furthermore, the video generation quality of DriveDreamer-2 surpasses other state-of-the-art methods, showcasing FID and FVD scores of 11.2 and 55.7, representing relative improvements of ∼30% and ∼50%.

World models for autonomous driving [23, 25, 59, 62] have drawn extensive attention from both industry and academia in recent years. Benefiting from their excellent predictive capabilities, autonomous driving world models facilitate the generation of diverse driving videos, encompassing even long-tail scenarios. The generated driving videos can be utilized to enhance the training of various driving perception approaches, proving highly beneficial for practical applications in autonomous driving.

World modeling in autonomous driving presents a formidable challenge due to its inherent complexity and large sampling space. Early approaches [8, 22] mitigate these problems by incorporating world modeling within the Bird's Eye View (BEV) semantic segmentation space. However, these methods primarily explore world models in simulated autonomous driving environments. In the recent evolution of autonomous driving technologies, there has been a substantial leap forward in the development of world models. This progress has been propelled by the utilization of cutting-edge diffusion models [5, 17, 18, 39, 40, 46], exemplified by notable contributions such as DriveDreamer [59], Drive-WM [62], MagicDrive [7], Panacea [64], and the integration of large language models like GAIA-1 [23], ADriver-I [25]. These sophisticated models have played a pivotal role in pushing the boundaries of world modeling capabilities, enabling researchers and engineers to delve into increasingly intricate and realistic driving scenarios. However, it is important to note that a majority of these methods rely heavily on structured information (e.g., 3D boxes, HDMaps, and optical flow) or real-world image frames as conditions.
This dependence not only constrains interactivity but also limits the diversity of generated videos. To tackle the aforementioned challenges, we propose DriveDreamer-2, which is the first world model to generate diverse driving videos in a user-friendly manner. In contrast to previous methods [7, 59, 62] that rely on structured conditions either from specific datasets or sophisticated annotations, DriveDreamer-2 emphasizes generating customized driving videos by simulating various traffic conditions with user-friendly text prompts. Specifically, the traffic simulation task has been disentangled into the generation of foreground conditions (trajectories of the ego-car and other agents) and background conditions (HDMaps of lane boundary, lane divider, and pedestrian crossing). For foreground generation, a functional library is constructed to finetune a Large Language Model (LLM), enabling it to generate agent trajectories based on user text input. For background conditions, we propose the HDMap generator that employs a diffusion model to simulate road structures. In this process, the previously generated agent trajectories are involved as conditional inputs, which allows the HDMap generator to learn the associations between foreground and background conditions in driving scenes. Building upon the generated traffic structured conditions, we employ the DriveDreamer [59] framework to generate multi-view driving videos. It is noted that we introduce the Unified Multi-view Video Model (UniMVM) within the DriveDreamer framework, which is designed to unify both intra-view and cross-view spatial consistency, enhancing the overall temporal and spatial coherence in the generated driving videos.

Extensive experiment results show that DriveDreamer-2 is capable of producing diverse user-customized videos, including uncommon scenarios where vehicles abruptly cut in (depicted in the figure). Besides, DriveDreamer-2 can generate high-quality driving videos with an FID of 11.2 and FVD of 55.7, relatively improving previous best-performing methods by ∼30% and ∼50%. Furthermore, experiments are conducted to verify that driving videos generated by DriveDreamer-2 can enhance the training of various autonomous driving perception methods, where the performance of detection and tracking are relatively improved by ∼4% and ∼8%.

The main contributions of this paper can be summarized as follows:
– We present DriveDreamer-2, which is the first world model to generate diverse driving videos in a user-friendly manner.
– We propose a traffic simulation pipeline employing only text prompts as input, which can be utilized to generate diverse traffic conditions for driving video generation.
– UniMVM is presented to seamlessly integrate intra-view and cross-view spatial consistency, elevating the overall temporal and spatial coherence within the generated driving videos.
– Extensive experiments are conducted to show that DriveDreamer-2 can craft diverse customized driving videos. Besides, DriveDreamer-2 enhances the FID and FVD by ∼30% and ∼50% compared to previous best-performing methods. Moreover, the driving videos generated by DriveDreamer-2 enhance the training of various driving perception methods.
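The disentangled text-to-video pipeline described above (LLM-generated trajectories, a trajectory-conditioned HDMap generator, then UniMVM-based video generation) can be summarized as a short control-flow sketch. All three callables below are hypothetical placeholders standing in for the paper's components, not real APIs.

```python
from typing import Any, Callable

def generate_driving_video(user_prompt: str,
                           llm_to_trajectories: Callable[[str], Any],
                           hdmap_generator: Callable[..., Any],
                           unimvm_video: Callable[..., Any]) -> Any:
    """Illustrative control flow only; the callables stand in for the finetuned LLM,
    the diffusion-based HDMap generator and UniMVM described in the text."""
    # 1) Foreground: the finetuned LLM turns the user's text query into agent trajectories.
    trajectories = llm_to_trajectories(user_prompt)
    # 2) Background: the HDMap generator simulates road structure, conditioned on the
    #    trajectories so that foreground and background stay consistent.
    hdmap = hdmap_generator(condition=trajectories)
    # 3) Video: the DriveDreamer framework with UniMVM renders multi-view driving videos
    #    from the structured conditions, enforcing intra- and cross-view consistency.
    return unimvm_video(trajectories=trajectories, hdmap=hdmap)
```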
The primary objective of world models is to establish dynamic environmental models, endowing agents with predictive capabilities for the future. In the early exploration, Variational Autoencoders (VAE) [31] and Long Short-Term Memory (LSTM) [19] are employed to capture transition dynamics and rendering functionality, showcasing remarkable success across diverse applications [9–13, 13, 29, 36, 48, 65]. Constructing driving world models poses distinctive challenges, primarily arising from the high sample complexity inherent in real-world driving tasks [3]. To address these challenges, ISO-Dream [42] introduces an explicit disentanglement of visual dynamics into controllable and uncontrollable states. MILE [22] strategically incorporates world modeling within the Bird's Eye View (BEV) semantic segmentation space. Recently, DriveDreamer [59], GAIA-1 [23], ADriver-I [25], and Drive-WM [62] have explored the training of driving world models in the real world, leveraging powerful diffusion models or natural language models. However, most of these methods heavily depend on structured information (e.g., 3D boxes, HDMaps, and optical flow) as conditions. This dependency not only constrains interactivity but also limits generation diversity.

Video generation and prediction are pivotal techniques for understanding the visual world. In the early stages of video generation, methods like Variational Autoencoders (VAEs) [4, 21], flow-based models [33], and Generative Adversarial Networks (GANs) [38, 47, 53, 56] are explored. Language models [20, 26, 32, 45, 51, 55, 60, 63] are also employed for intricate visual dynamics modeling. Recent advancements have seen diffusion models [5, 17, 18, 39, 40, 46] extending their influence to video generation. Notably, video diffusion models [1, 14, 16, 28, 49, 58, 67] exhibit superior capabilities in generating high-quality videos with realistic frames and smooth transitions, offering enhanced controllability. These models adapt seamlessly to various input conditions, including text, canny, sketch, semantic maps, and depth maps. In the realm of autonomous driving, DriveDreamer-2 leverages powerful diffusion models for learning visual dynamics.

Driving simulators stand as a cornerstone in self-driving development, aiming to offer a controlled environment to mimic real-world conditions. LCTGen [52] utilizes an LLM to encode detailed language descriptions into a vector and subsequently employs a generator to produce corresponding simulated scenarios. This method requires highly detailed language descriptions, including information such as the speed and orientation of agents. TrafficGen [6] comprehends the inherent relationships within traffic scenarios, enabling the generation of diverse and legitimate traffic flows within the same map. CTG [70] generates traffic simulations by employing manually designed loss functions that adhere to traffic constraints. CTG++ [69] further extends CTG by utilizing GPT-4 [41] to convert user language descriptions into a loss function, which guides the scene-level conditional diffusion model to generate the corresponding scenario. In DriveDreamer-2, we construct a functional library to finetune the LLM to achieve user-friendly text-to-traffic simulation, which eliminates intricate loss design or complex text prompt inputs.

The figure illustrates the overall framework of DriveDreamer-2. A customized traffic simulation is first proposed to generate foreground agent trajectories and background HDMaps.
Specifically, DriveDreamer-2 utilizes a finetuned LLM to translate user prompts into agent trajectories, and the HDMap generator is then introduced to simulate road structures using the generated trajectories as conditions. Leveraging the customized traffic simulation pipeline, DriveDreamer-2 is capable of generating diverse structured conditions for the subsequent video generation. Building upon the architecture of DriveDreamer [59], the UniMVM framework is proposed to unify both intra-view and cross-view spatial consistency, thereby enhancing the overall temporal and spatial coherence in the generated driving videos. In the subsequent sections, we delve into the details of the customized traffic simulation and the UniMVM framework.

Finetuning LLM for Trajectory Generation. Previous traffic simulation methods [37, 69, 70] necessitate the intricate specification of parameters, involving details such as the agent's speed, position, acceleration, and mission goal. To simplify this intricate process, we propose to finetune an LLM with the constructed trajectory-generation function library, allowing for the efficient transformation of user-friendly language inputs into comprehensive traffic simulation scenarios. As depicted in the figure, the constructed function library encompasses 18 functions, including agent functions (steering, constant speed, acceleration, and braking), pedestrian functions (walking direction and speed), and other utility functions such as saving trajectories. Building upon these functions, Text-to-Python-Scripts pairs are manually curated for finetuning the LLM (GPT-3.5). The scripts include a range of fundamental scenarios such as lane-changing, overtaking, following other vehicles, and executing U-turns. Additionally, we encompass more uncommon scenarios like pedestrians abruptly crossing and vehicles cutting into the lane. Taking the user input "a vehicle cuts in" as an example, the corresponding script involves the following steps: initially generating a trajectory of cutting in (agent.cut_in()), followed by generating the corresponding ego-car trajectory (agent.forward()), and ultimately utilizing the saving function from utilities to directly output the trajectory of the ego-car and other agents in array format. For additional details, please refer to the supplementary materials. In the inference phase, we follow [37] to expand prompt inputs to a pre-defined template, and the finetuned LLM can directly output the trajectory array.", ["2403_06845v2_0", "2403_06845v2_1", "2403_06845v2_2"]]
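The "vehicle cuts in" example in the preceding record describes a Python script assembled from the trajectory function library. The snippet below is a speculative reconstruction of what such a generated script and a tiny mock of the library could look like; the Agent class, the motion math, and the save helper are assumptions inferred only from the description (agent.cut_in(), agent.forward(), a saving utility), not the paper's actual code.

```python
# Minimal mock of the trajectory function library (all of this is assumed for illustration).
import numpy as np

class Agent:
    def __init__(self, start_xy=(0.0, 0.0), heading=0.0, speed=5.0):
        self.xy = np.array(start_xy, dtype=float)
        self.heading, self.speed = heading, speed
        self.traj = [self.xy.copy()]

    def forward(self, steps=20, dt=0.5):
        """Constant-speed straight driving."""
        step_vec = self.speed * dt * np.array([np.cos(self.heading), np.sin(self.heading)])
        for _ in range(steps):
            self.xy = self.xy + step_vec
            self.traj.append(self.xy.copy())

    def cut_in(self, lateral=3.5, steps=20, dt=0.5):
        """Drive forward while the recorded trajectory drifts laterally into the neighbouring lane."""
        step_vec = self.speed * dt * np.array([np.cos(self.heading), np.sin(self.heading)])
        for i in range(steps):
            self.xy = self.xy + step_vec
            shift = lateral * (i + 1) / steps
            self.traj.append(self.xy + np.array([0.0, shift]))

def save_trajectories(*agents):
    """Utility that outputs the trajectories as arrays, as described in the text."""
    return [np.stack(a.traj) for a in agents]

# Script an LLM might emit for the prompt 'a vehicle cuts in':
other = Agent(start_xy=(10.0, -3.5)); other.cut_in()
ego = Agent(start_xy=(0.0, 0.0)); ego.forward()
trajectories = save_trajectories(ego, other)
```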
[60007, "World models, especially in autonomous driving, are trending and drawing extensive attention due to their capacity for comprehending driving environments. The established world model holds immense potential for the generation of high-quality driving videos, and driving policies for safe maneuvering. However, a critical limitation in relevant research lies in its predominant focus on gaming environments or simulated settings, thereby lacking the representation of real-world driving scenarios. Therefore, we introduce DriveDreamer, a pioneering world model entirely derived from real-world driving scenarios. Given that modeling the world in intricate driving scenes entails an overwhelming search space, we propose harnessing the

Spurred by insights from AGI (Artificial General Intelligence) and the principles of embodied AI, a profound transformation in autonomous driving is underway. Autonomous vehicles rely on sophisticated systems that engage with and comprehend the real driving world. At the heart of this evolution is the integration of world models [15, 17–19]. World models hold great promise for generating diverse and realistic driving videos, encompassing even long-tail scenarios, which can be utilized to train various driving perception approaches. Furthermore, the predictive capabilities of world models facilitate end-to-end driving, ushering in a new era of autonomous driving experiences.

Deriving latent dynamics of world models from visual signals was initially introduced in video prediction [8, 11, 19]. By extrapolating from observed visual sequences, video prediction methods can infer future states of the environment, effectively modeling how objects and entities within a scene will evolve over time. However, modeling the intricate driving scenarios in pixel space is challenging due to the large sampling space [5, 7]. To alleviate this problem, recent research endeavors have sought innovative strategies to enhance sampling efficiency. ISO-Dream [52] explicitly disentangles visual dynamics into controllable and uncontrollable states. MILE [29] strategically incorporates world modeling within the Bird's Eye View (BEV) semantic segmentation space, complementing world modeling with imitation learning. SEM2 [13] further extends the Dreamer framework into BEV segmentation maps, utilizing Reinforcement Learning (RL) for training. Despite the progress witnessed in world models, a critical limitation in relevant research lies in its predominant focus on simulation environments.

In this paper, we propose DriveDreamer, which pioneers the construction of comprehensive world models from real driving videos and human driver behaviors. Considering the intricate nature of modeling real-world driving scenes, we introduce the Autonomous-driving Diffusion Model (Auto-DM), which empowers the ability to create a comprehensive representation of the complex driving environment. We propose a two-stage training pipeline. In the first stage, we train Auto-DM by incorporating traffic structural information as intermediate conditions, which significantly enhances sampling efficiency. Consequently, Auto-DM exhibits remarkable capabilities in comprehending real-world driving scenes, particularly concerning the dynamic foreground objects and the static background. In the second-stage training, we establish the world model through video prediction. Specifically, driving actions are employed to iteratively update future traffic structural conditions, which enables DriveDreamer to anticipate variations in the driving environment based on different driving strategies. Moreover, DriveDreamer extends its predictive prowess to foresee forthcoming driving policies, drawing from historical observations and Auto-DM features, thus creating an executable and predictable driving world model.

The main contributions of this paper can be summarized as follows: (1) We introduce DriveDreamer, which is the first world model derived from real-world driving scenarios. DriveDreamer can jointly enable the generation of high-quality driving videos and reasonable driving policies. (2) To enhance the comprehension of real-world driving scenes and expedite the world model convergence, we introduce the Autonomous-driving Diffusion Model and a two-stage training pipeline. The first-stage training enables the comprehension of traffic structural information, and the second-stage video prediction training empowers the predictive capacity.
(3) DriveDreamer can controllably generate driving scene videos that are highly aligned with traffic constraints (see the figure), enhancing the training of driving perception methods (e.g., 3D detection). Besides, DriveDreamer can generate future driving policies based on historical observations and Auto-DM features. Notably, DriveDreamer achieves promising planning results in open-loop assessments on the nuScenes dataset.

2. Related Work

2.1. Diffusion Model

Diffusion models represent a family of probabilistic generative models that progressively introduce noise to data and subsequently learn to reverse this process for the purpose of generating samples [73]. These models have recently garnered significant attention due to their exceptional performance in various applications, setting new benchmarks in image synthesis [1, 14, 49, 55, 57], video generation [21, 23, 35, 60, 67, 74], and 3D content generation [6, 43, 53, 69]. To enhance the controllable generation capability, ControlNet [76], GLIGEN [42], T2I-Adapter [48] and Composer [32] have been introduced to utilize various control inputs, including depth maps, segmentation maps, canny edges, and sketches. Concurrently, BEVControl [72], MagicDrive [12] and DrivingDiffusion [41] incorporate layout conditions to enhance image generation. The fundamental essence of diffusion-based generative models lies in their capacity to comprehend and understand the intricacies of the world. Harnessing the power of these diffusion models, DriveDreamer seeks to comprehend the complex realm of autonomous-driving scenarios.

2.2. Video Generation

Video generation and video prediction are effective approaches to understanding the visual world. In the realm of video generation, several standard architectures have been employed, including Variational Autoencoders (VAEs) [8, 28], auto-regressive models [34, 56, 61, 70], flow-based models [40], and Generative Adversarial Networks (GANs) [46, 58, 62, 65]. Recently, the burgeoning diffusion models [9, 24, 25, 49, 50, 57] have also been extended to the domain of video generation. Video diffusion models [21, 23, 35, 60, 67, 74] exhibit higher-quality video generation capabilities, producing realistic frames and transitions between frames while offering enhanced controllability. They accommodate various input control conditions such as text, canny, sketch, semantic maps, and depth maps. Video prediction models represent a specialized form of video generation models, sharing numerous similarities. In particular, video prediction involves anticipating future video changes based on historical video observations [8, 11, 19, 27, 64]. DriveGAN [36] establishes associations between driving actions and pixels, predicting future driving videos by specifying future driving policies. In contrast, DriveDreamer incorporates structured traffic conditions, text prompts, and driving actions as inputs, empowering precise, realistic video and action generation that are faithfully aligned with real-world driving scenarios.

2.3. World Models

World models have been extensively explored in model-based imitation learning, demonstrating remarkable success in various applications [15–20, 37, 44, 59, 71].
These approaches typically leverage Variational Autoencoders (VAE) [39] and Long Short-Term Memory (LSTM) [26] to model transition dynamics and rendering functionality. World models aim at establishing dynamic models of environments, enabling agents to be predictive of the future. This aspect is of paramount importance in autonomous driving, where precise predictions about the future are essential for safe maneuvering. However, constructing world models in autonomous driving presents unique challenges, primarily due to the high sample complexity inherent in real-world driving tasks [5]. To address these problems, ISO-Dream [52] introduces an explicit disentanglement of visual dynamics into controllable and uncontrollable states. MILE [29] strategically incorporates world modeling within the BEV semantic segmentation space, enhancing world modeling through imitation learning. SEM2 [13] extends the Dreamer framework into BEV segmentation maps, employing reinforcement learning for training. Despite the progress witnessed in world models, a critical limitation in relevant research lies in its predominant focus on simulation environments. The transition to real-world driving scenarios remains an under-explored frontier.

3. DriveDreamer

The overall framework of DriveDreamer is depicted in the figure. The framework begins with an initial reference frame I0 and its corresponding road structural information (i.e., HDMap H0 and 3D box B0). Within this context, DriveDreamer leverages the proposed ActionFormer to predict forthcoming road structural features in the latent space. These predicted features serve as conditions and are provided to Auto-DM, which generates future driving videos. Simultaneously, the utilization of text prompts allows for dynamic adjustments to the driving scenario style (e.g., weather and time of the day). Moreover, DriveDreamer incorporates historical action information and the multi-scale latent features extracted from Auto-DM, which are combined to generate reasonable future driving actions. In essence, DriveDreamer offers a comprehensive framework that seamlessly integrates multi-modal inputs to generate future driving videos and driving policies, thereby advancing the capabilities of autonomous-driving systems.

Regarding the extensive search space of establishing world models in real-world driving scenarios, we introduce a two-stage training strategy for DriveDreamer. This strategy is designed to significantly enhance sampling efficiency and expedite model convergence. The two-stage training is illustrated in the figure. There are two steps in the first-stage training. Step 1 involves utilizing the single-frame structured condition, which guides DriveDreamer to generate driving scene images, facilitating its comprehension of structural traffic constraints. Step 2 extends its understanding to video generation. The second-stage training enables DriveDreamer to interact with the environment and predict future states effectively. This phase takes an initial frame image along with its corresponding structured information as input. Simultaneously, sequential driving actions are provided, with the model expected to generate future driving videos and future driving actions. In the following sections, we delve into the specifics of the model architecture and training pipelines.", ["2309_09777v2_0", "2309_09777v2_2", "2309_09777v2_1"]]
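As a rough summary of the rollout just described (ActionFormer conditions Auto-DM, whose latents feed the action prediction), the sketch below wires the named pieces together. Every interface here is a hypothetical stand-in inferred from the description, not the paper's code.

```python
from typing import Any, Callable

def drivedreamer_step(ref_frame: Any, hdmap0: Any, boxes0: Any,
                      text_prompt: str, past_actions: list,
                      action_former: Callable, auto_dm: Callable, action_head: Callable):
    """One illustrative rollout step; every callable is an assumed stub."""
    # ActionFormer predicts forthcoming road-structure features in latent space.
    future_struct = action_former(hdmap0, boxes0, past_actions)
    # Auto-DM generates future frames conditioned on the reference frame, the predicted
    # structural features, and a style-controlling text prompt (weather, time of day).
    frames, latents = auto_dm(ref_frame, condition=future_struct, prompt=text_prompt)
    # Future driving actions are decoded from past actions plus Auto-DM's multi-scale latents.
    next_actions = action_head(past_actions, latents)
    return frames, next_actions
```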
[60008, "We introduce pixelSplat, a feed-forward model that learns to reconstruct 3D radiance fields parameterized by 3D Gaussian primitives from pairs of images. Our model features real-time and memory-efficient rendering for scalable training as well as fast 3D reconstruction at inference time. To overcome local minima inherent to sparse and locally supported representations, we predict a dense probability distribution over 3D and sample Gaussian means from that probability distribution. We make this sampling operation differentiable via a reparameterization trick, allowing us to back-propagate gradients through the Gaussian splatting representation. We benchmark our method on wide-baseline novel view synthesis on the real-world RealEstate10k and ACID datasets, where we outperform state-of-the-art light field transformers and accelerate rendering by 2.5 orders of magnitude while reconstructing an interpretable and editable 3D radiance field. Additional materials can be found on the project website.

We investigate the problem of generalizable novel view synthesis from sparse image observations. This line of work has been revolutionized by differentiable rendering [29, 40, 41, 50] but has also inherited its key weakness: training, reconstruction, and rendering are notoriously memory- and time-intensive because differentiable rendering requires evaluating dozens or hundreds of points along each camera ray [58]. This has motivated light-field transformers [10, 37, 43, 47], where a ray is rendered by embedding it into a query token and a color is obtained via cross-attention over image tokens. While significantly faster than volume rendering, such methods are still far from real-time. Additionally, they do not reconstruct 3D scene representations that can be edited or exported for downstream tasks in vision and graphics. Meanwhile, recent work on single-scene novel view synthesis has shown that it is possible to use 3D Gaussian primitives to enable real-time rendering with little memory cost via rasterization-based volume rendering [19].

We present pixelSplat, which brings the benefits of a primitive-based 3D representation—fast and memory-efficient rendering as well as interpretable 3D structure—to generalizable view synthesis. This is no straightforward task. First, in real-world datasets, camera poses are only reconstructed up to an arbitrary scale factor. We address this by designing a multi-view epipolar transformer that reliably infers this per-scene scale factor. Next, optimizing primitive parameters directly via gradient descent suffers from local minima. In the single-scene case, this can be addressed via non-differentiable pruning and division heuristics [19]. In contrast, in the generalizable case, we need to back-propagate gradients through the representation and thus cannot rely on non-differentiable operations. We thus propose a method by which Gaussian primitives can implicitly be spawned or deleted during training, avoiding local minima, but which nevertheless maintains gradient flow. Specifically, we parameterize the positions (i.e., means) of Gaussians implicitly via dense probability distributions predicted by our encoder. In each forward pass, we sample
Gaussian primitive locations from this distribution. We make the sampling operation differentiable via a reparameterization trick that couples the density of a sampled Gaussian primitive to the probability of that location. When receiving a gradient that would increase the opacity of a Gaussian at a 3D location, our model increases the probability that the Gaussian will be sampled at that location again in the future. We demonstrate the efficacy of our method by showcasing, for the first time, how a 3D Gaussian splatting representation can be predicted in a single forward pass from just a pair of images. In other words, we demonstrate how 3D Gaussians can be integrated in an end-to-end differentiable system. We significantly outperform previous black-box based light field transformers on the real-world ACID and RealEstate10k datasets while drastically reducing both training and rendering cost and generating explicit 3D scenes.

2. Related Work

Single-scene novel view synthesis. Advancements in neural rendering [50] and neural fields [29, 42, 57] have revolutionized 3D reconstruction and novel view synthesis from collections of posed images. Recent approaches generally create 3D scene representations by backpropagating image-space photometric error through differentiable renderers. Early methods employed voxel grids and learned rendering techniques [27, 31, 40]. More recently, neural fields [2, 28, 29, 57] and volume rendering [27, 29, 49] have become the de-facto standard. However, a key hurdle of these methods is their high computational demand, as rendering usually requires dozens of queries of the neural field per ray. Discrete data structures can accelerate rendering [6, 12, 25, 30] but fall short of real-time rendering at high resolutions. 3D Gaussian splatting [19] solves this problem by representing the radiance field using 3D Gaussians that can efficiently be rendered via rasterization. However, all single-scene optimization methods require dozens of images to achieve high-quality novel view synthesis. In this work, we train neural networks to estimate the parameters of a 3D Gaussian primitive scene representation from just two images in a single forward pass.
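Before moving on to prior-based reconstruction, here is a minimal sketch of the differentiable center sampling described in the introduction above: a per-pixel categorical distribution over depth candidates is sampled, and the sampled Gaussian's opacity is tied to the probability of the chosen depth so that gradients on opacity reach the distribution parameters. Bucket counts and tensor shapes are illustrative assumptions, not the paper's configuration.

```python
import torch

def sample_gaussian_centers(depth_logits, depth_candidates):
    """depth_logits: (N, D) per-pixel scores over D depth candidates (assumed shapes).
    Returns sampled depths and opacities whose gradients flow back into depth_logits."""
    probs = torch.softmax(depth_logits, dim=-1)           # per-pixel depth distribution
    idx = torch.multinomial(probs, num_samples=1)         # non-differentiable draw
    depth = depth_candidates.gather(-1, idx)              # sampled depth per pixel
    # Coupling in the spirit of the reparameterization trick: opacity equals the probability
    # of the sampled bucket, so a gradient that wants higher opacity raises the chance of
    # sampling this location again.
    opacity = probs.gather(-1, idx)
    return depth, opacity

# toy usage
logits = torch.randn(4, 32, requires_grad=True)           # 4 pixels, 32 depth candidates
cands = torch.linspace(1.0, 50.0, 32).expand(4, 32)
d, o = sample_gaussian_centers(logits, cands)
o.sum().backward()                                         # gradients reach `logits` through `opacity`
```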
Prior-based 3D Reconstruction and View Synthesis. Generalizable novel view synthesis seeks to enable 3D reconstruction and novel view synthesis from only a handful of images per scene. If proxy geometry (e.g., depth maps) is available, machine learning can be combined with image-based rendering [1, 22, 36, 56] to produce convincing results. Neural networks can also be trained to directly regress multi-plane images for small-baseline novel view synthesis [45, 53, 60, 61]. Large-baseline novel view synthesis, however, requires full 3D representations. Early approaches based on neural fields [32, 41] encoded 3D scenes in individual latent codes and were thus limited to single-object scenes. Preserving end-to-end locality and shift equivariance between encoder and scene representation via pixel-aligned features [14, 23, 39, 52, 58] or via transformers [35, 54] has enabled generalization to unbounded scenes. Inspired by classical multi-view stereo, neural networks have also been combined with cost volumes to match features across views [5, 7, 18, 26]. While the above methods infer interpretable 3D representations in the form of signed distances or radiance fields, recent light field scene representations trade interpretability for faster rendering [10, 37, 43, 46, 47]. Our method presents the best of both worlds: it infers an interpretable 3D scene representation in the form of 3D Gaussians while accelerating rendering by three orders of magnitude compared to light field transformers.

Scale ambiguity in machine learning for multi-view geometry. Prior work has recognized the challenge of scene scale ambiguity. In monocular depth estimation, state-of-the-art models rely on sophisticated scale-invariant depth losses [11, 13, 33, 34]. In novel view synthesis, recent single-image 3D diffusion models trained on real-world data rescale 3D scenes according to heuristics on depth statistics and condition their encoders on scene scale [4, 38, 51]. In this work, we instead build a multi-view encoder that can infer the scale of the scene. We accomplish this using an epipolar transformer that finds cross-view pixel correspondences and associates them with positionally encoded depth values [16]. In practice, this means that a neural network making predictions about the geometry of a scene from a single image cannot possibly predict the depth that matches the poses reconstructed by structure-from-motion. In monocular depth estimation, this has been addressed via scale-invariant losses [11, 13, 33].

We report quantitative results in Table 1. Our method outperforms the baselines on all metrics, with especially significant improvements in perceptual distance (LPIPS). Compared to the baselines, our method is better at capturing fine details and correctly inferring 3D structure in portions of each scene that are only observed by one reference view.

Training and inference cost. As shown in Table 1, our method is significantly less resource-intensive than the baselines. Compared to the next-fastest one, our method's cost to infer a single scene (encoding) and then render 100 images (decoding), the approximate number in a RealEstate10k or ACID sequence, is about 650 times less. Our method also uses significantly less memory per ray at training time.

Figure caption (scale ambiguity): SfM does not reconstruct camera poses in real-world, metric scale—poses are scaled by an arbitrary scale factor that is different for each scene. To render correct views, our model's 3D reconstruction needs to be consistent with this arbitrary scale. We illustrate how our epipolar encoder solves this problem. Features belonging to the ray's corresponding pixel on the left are compared with features sampled along the epipolar line on the right. Epipolar samples are augmented with their positionally-encoded depths along the ray, which allows our encoder to record correct depths. Recorded depths are later used for depth prediction.
Our encoder similarly has to predict the geometry of the scene, chiefly via the position of each Gaussian primitive, which depends on the per-scene scale s_i. Refer to the figure for an illustration of this challenge. A further figure illustrates the process by which we predict the parameters of pixel-aligned Gaussians from a single reference image, and another visualizes the Gaussians it produces from views that are far outside the training distribution. We visualize point clouds using the version of our model that has been fine-tuned with a depth regularizer. Note that while the resulting Gaussians facilitate high-fidelity novel-view synthesis for in-distribution camera poses, they suffer from the same failure modes as 3D Gaussians optimized using the original 3D Gaussian splatting method [19]. Specifically, reflective surfaces are often transparent, and Gaussians appear", ["2312_12337v4_0", "2312_12337v4_0", "2312_12337v4_1", "2312_12337v4_2"]]

[60009, "We present DeSiRe-GS, a self-supervised gaussian splatting representation, enabling effective static-dynamic decomposition and high-fidelity surface reconstruction in complex driving scenarios. Our approach employs a two-stage optimization pipeline of dynamic street Gaussians. In the first stage, we extract 2D motion masks based on the observation that 3D Gaussian Splatting inherently can reconstruct only the static regions in dynamic environments. These extracted 2D motion priors are then mapped into the Gaussian space in a differentiable manner, leveraging an efficient formulation. With the introduced geometric regularizations, our method is able to address the over-fitting issues caused by data sparsity in autonomous driving, reconstructing physically plausible Gaussians that align with object surfaces rather than floating in the air. Furthermore, we introduce temporal cross-view consistency to ensure coherence across time and viewpoints, resulting in high-quality surface reconstruction. Comprehensive experiments demonstrate the efficiency and effectiveness of DeSiRe-GS, surpassing prior self-supervised arts and achieving accuracy comparable to methods relying on external 3D bounding box annotations.

1. Introduction

Modeling driving scenes [11, 28] is essential for autonomous driving applications, as it facilitates real-world simulation and supports scene understanding [46]. An effective scene representation enables a system to efficiently perceive and reconstruct dynamic driving environments. Recent 3D Gaussian Splatting (3DGS) [16] has emerged as a prominent 3D representation that can be optimized through 2D supervision. It has gained popularity due to its explicit nature, high efficiency, and rendering speed.

While 3D Gaussian Splatting (3DGS) has demonstrated strong performance in static object-centric reconstructions, the original 3DGS struggles to handle dynamic objects in unbounded street views, which are common in real-world scenarios, particularly for autonomous driving applications. It is unable to effectively model dynamic regions, leading to blurring artifacts due to the Gaussian model's time-independent parameterization. As a result, 4D-GS [35] is proposed, modeling the dynamics with a Hexplane encoder. The Hexplane [1] works well on object-level datasets, but struggles with driving scenes because of the unbounded areas in outdoor environments.
Instead, we choose to reformulate the original static Gaussian model as time-dependent variables with minor changes, ensuring the efficiency of handling large-scale driving scenes.

In this paper, we present DeSiRe-GS, a purely Gaussian Splatting-based representation, which facilitates self-supervised static-dynamic decomposition and high-quality surface reconstruction in driving scenarios. For static-dynamic decomposition, existing methods such as DrivingGaussian [47] and Street Gaussians [38] rely on explicit 3D bounding boxes, which significantly simplifies the decomposition problem, since dynamic Gaussians in a moving bounding box can simply be removed. Without the 3D annotations, some recent self-supervised methods like PVG [5] and S3Gaussian [15] have attempted to achieve decomposition but fall short in performance, as they treat all Gaussians as dynamic, relying on indirect supervision to learn motion patterns. However, our proposed method can achieve effective self-supervised decomposition, based on a simple observation that dynamic regions reconstructed from 3DGS are blurry—quite different from the ground truth images. Despite the absence of 3D annotations, DeSiRe-GS produces results comparable to, or better than, approaches that use explicit bounding boxes for decomposition.

Another challenge in applying 3D Gaussian Splatting (3DGS) to autonomous driving is the sparse nature of images, which is more pronounced compared to object-centric reconstruction tasks. This sparsity often leads 3DGS to overfit on the limited number of observations, resulting in inaccurate geometry learning. Inspired by 2D Gaussian Splatting (2DGS) [14], we aim to generate flatter, disk-shaped Gaussians to better align with the surfaces of objects like roads and walls. We also couple the normal and scale of each Gaussian, which can be optimized jointly to improve surface reconstruction quality. To further address the overfitting issue, we propose temporal geometrical cross-view consistency, which significantly enhances the model's geometric awareness and accuracy by aggregating information from different views across time. These strategies allow us to achieve state-of-the-art reconstruction quality, surpassing other Gaussian splatting approaches in the field of autonomous driving.

Overall, DeSiRe-GS makes the following contributions:
• We propose to extract motion information easily from appearance differences, based on a simple observation that 3DGS cannot successfully model the dynamic regions.
• We then distill the extracted 2D motion priors in local frames into the global Gaussian space, using time-varying Gaussians in a differentiable manner.
• We introduce effective 3D regularizations and temporal cross-view consistency to generate physically reasonable Gaussian ellipsoids, further enhancing high-quality decomposition and reconstruction.

We demonstrate DeSiRe-GS's capability of effective static-dynamic decomposition and high-fidelity surface reconstruction across various challenging datasets [11, 28]. Yet, the separation remains incomplete and lacks thoroughness.

Neural Surface Reconstruction. Traditional methods for neural surface reconstruction focus more on real geometry structures. With the rise of neural radiance field (NeRF) technologies, neural implicit representations have shown promise for high-fidelity surface reconstruction. Approaches like [19, 33, 41, 43] train neural signed distance functions (SDF) to represent scenes.
StreetSurf [13] proposes disentangling close and distant views for better implicit surface reconstruction in urban settings, while [27] goes a step further, using sparse LiDAR to enhance depth details. 3DGS has renewed interest in explicit geometric reconstruction, with recent works [2, 3, 9, 12, 14, 32, 42] focusing on geometric regularization techniques. SuGaR [12] aligns Gaussian ellipsoids to object surfaces by introducing an additional regularization term, while 2DGS [14] directly replaces 3D ellipsoids with 2D discs and utilizes the truncated signed distance function (TSDF) to fuse depth maps, enabling noise-free surface reconstruction. PGSR [2] introduces single- and multi-view regularization for multi-view consistency. GSDF [42] and NeuSG [3] combine 3D Gaussians with neural implicit SDFs to enhance surface details. TrimGS [9] refines surface structures by trimming inaccurate geometry, maintaining compatibility with earlier methods like 3DGS and 2DGS. While these approaches excel in small-scale reconstruction, newer works like [4, 7, 10] aim to address large-scale urban scenes. [4] adopts a large-scene partitioning strategy for reconstruction, while RoGS [10] proposes a 2D Gaussian surfel representation which aligns with the physical characteristics of road surfaces.

4. DeSiRe-GS

As shown in the figure, the training process is divided into two stages. We first extract 2D motion masks by calculating the feature difference between the rendered image and the GT image. In the second stage, we distill the 2D motion information into Gaussian space using PVG [5], enabling the rectification of inaccurate attributes for each Gaussian in a differentiable manner.

4.1. Dynamic Mask Extraction (stage I)

During the first stage, we observe that 3D Gaussian Splatting (3DGS) performs effectively in reconstructing static elements, such as parked cars and buildings in a driving scene. However, it struggles to accurately reconstruct dynamic regions, as the original 3DGS does not incorporate temporal information. This limitation results in artifacts such as ghost-like floating points in the rendered images, as illustrated in the figure (stage 1). To address this issue, we leverage the significant differences between static and dynamic regions to develop an efficient method for extracting segmentation masks that encode motion information. Initially, a pretrained foundation model is employed to extract features from both the rendered image and the ground truth (GT) image used for supervision. Let F̂ denote the features extracted from the rendered image Î, and F represent the features extracted from the GT image I. To distinguish dynamic and static regions, we compute the per-pixel dissimilarity D between the corresponding features. The dissimilarity metric D approaches 0 for similar features, indicating static regions, and nears 1 for dissimilar features, corresponding to dynamic regions.
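A compact sketch of this stage-I mask extraction, as just described: features from a pretrained backbone are compared per pixel and the dissimilarity is thresholded into a motion mask. The choice of backbone, the cosine-based dissimilarity and the threshold value are illustrative assumptions, since the text does not pin them down.

```python
import torch
import torch.nn.functional as F

def motion_mask(feat_rendered, feat_gt, threshold=0.5):
    """feat_rendered, feat_gt: (C, H, W) features of the rendered and GT images from some
    pretrained foundation model (assumed). Returns a boolean (H, W) mask that is True where
    the two images disagree, i.e. in likely dynamic regions."""
    # Per-pixel dissimilarity in [0, 1]: 0 for identical features (static regions),
    # values near 1 for very different features (dynamic regions).
    cos = F.cosine_similarity(feat_rendered, feat_gt, dim=0)   # (H, W), in [-1, 1]
    dissimilarity = (1.0 - cos) / 2.0
    return dissimilarity > threshold

# toy usage with random "features"
mask = motion_mask(torch.randn(64, 90, 160), torch.randn(64, 90, 160))
```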
We observed that both 3DGS and PVG could produce oversized Gaussian ellipsoids without additional regularization, particularly in unbounded driving scenarios, as illustrated in the figure (a). Our primary objective is to fit appropriately scaled Gaussians that support accurate image rendering and surface reconstruction. While oversized Gaussian ellipsoids with low opacity may have minimal impact on the rendered image, they can significantly impair surface reconstruction. This is a limitation often overlooked in existing methods focused solely on 2D image rendering. To address this issue, we introduce a penalty term for each Gaussian.

We compare the rendered depth of various methods in the figure. S3Gaussian [15] fails to predict an accurate depth map, because it uses only LiDAR point clouds for initialization, and there are no points in the upper part of the scene. Beyond the LiDAR points, PVG [5] and DeSiRe-GS randomly sample points, enabling us to render a much better depth map.", ["2411_11921v1_0", "2411_11921v1_0", "2411_11921v1_1", "2411_11921v1_1", "2411_11921v1_2"]]

[60010, "We find these directions using Sparse Autoencoders (SAEs) (Bricken et al., 2023; Cunningham et al., 2023). SAEs are an interpretability tool for finding a sparse, interpretable decomposition of model representations. Dictionary learning (Olshausen & Field, 1997) offers a powerful approach for disentangling features in superposition. Sparse Autoencoders (SAEs) have proven to be effective for this task (Sharkey et al., 2022; Bricken et al., 2023).

We then categorize entities into 'known' or 'unknown'. Known entities are those where the model gets at least two attributes correct, while unknown are those where it gets them all wrong; we discard any in-between. To measure correctness we use fuzzy string matching.

We find that the separation scores of some of the SAE latents in the training set are high, i.e. they fire almost exclusively on tokens of either known or unknown entities, as depicted in the scatter plot in the figure for Gemma 2 2B and, in Appendix C, for Gemma 2 9B. An interesting observation is that latent separation scores reveal a consistent pattern across all entity types, with scores increasing throughout the model and reaching a peak around layer 9 before plateauing (see the figure). This indicates that latents better distinguishing between known and unknown entities are found in the middle layers.

We also examine the level of generality of the latents by measuring their minimum separation score across entity types (t): players, songs, cities and movies. A high minimum separation score indicates that a latent performs robustly across entity types, suggesting strong generalization capabilities. The figure (and Appendix D for Gemma 2 9B) suggests that more generalized latents—those that distinguish between known and unknown entities across various entity types—are concentrated in these intermediate layers. This finding points to a hierarchical organization of entity representation within the model, with more specialized, lower-quality latents in earlier layers and more generalized, higher-quality, entity-type-agnostic features emerging in the middle layers.

We hypothesize that entity recognition directions could be used by chat models to induce knowledge refusal.
To evaluate this, we use a test set sample of 100 questions about unknown entities, and measure the number of times the model refuses when steering (as in Equation (4)) with the entity recognition latents on the last token of the entity and the following end-of-instruction tokens.

Figure 4 illustrates the original model's refusal rate (blue bar), showing some refusal across entity types. We see that the entity recognition SAE latents found in the base model transfer to the chat model and, by increasing the known entity latent activation, we are able to reduce refusal in three entity types. Conversely, increasing the unknown entity latent induces almost 100% refusal across all entity types. We also include an Orthogonalized model baseline, which consists of applying weight orthogonalization (Arditi et al., 2024) to every matrix writing to the residual stream. Weight orthogonalization modifies each row of a weight matrix to make it perpendicular to a specified direction vector d. By doing this operation on every output matrix in the model we ensure no component is able to write into that direction. The resulting orthogonalized model exhibits a large reduction in refusal responses, suggesting this direction plays a crucial role in the model's knowledge refusal behavior. We also include the average refusal rate after steering with 10 different random latents, using the same configuration (layer and steering coefficient) as the known and unknown entity latents respectively. Additional analysis of the Gemma 2 9B model, detailed in Section F, reveals similar patterns, albeit with less pronounced effects compared to the 2B model.

The figure shows a refusal response for a well-known basketball player generated by steering with the unknown entity latent. In the figure (right) we observe that when asked about a non-existent player, Wilson Brown, the model without intervention refuses to answer. However, steering with the known entity latent induces a hallucination.", ["2411_14257v1_0", "2411_14257v1_4", "2411_14257v1_1", "2411_14257v1_2", "2411_14257v1_3", "2411_14257v1_0"]]
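A minimal sketch of the two interventions described above: activation steering with an SAE latent direction, and weight orthogonalization against a direction d. The steering coefficient, tensor shapes and helper names are illustrative assumptions; Equation (4) of the source is not reproduced here.

```python
import torch

def steer(residual, direction, coeff=8.0):
    """Add a unit-norm latent direction to the residual stream at chosen token positions."""
    d = direction / direction.norm()
    return residual + coeff * d                      # residual: (..., d_model), direction: (d_model,)

def orthogonalize_weight(W, direction):
    """Project the direction out of every row of an output weight matrix,
    so that component can no longer write into that direction."""
    d = direction / direction.norm()
    return W - (W @ d).unsqueeze(-1) * d             # W: (rows, d_model)

# toy usage
d_model = 16
d = torch.randn(d_model)
W = torch.randn(32, d_model)
W_orth = orthogonalize_weight(W, d)
assert torch.allclose(W_orth @ (d / d.norm()), torch.zeros(32), atol=1e-5)
```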
[60011, "Designing the Conversation Seed Prompt: The design of the conversation seed prompt is grounded in the social identity perspective (Tajfel and Turner, 2004), which posits that individuals form identities through their association with various social groups, encompassing multiple simultaneous identities, such as nationality, gender, and interests (Ma et al., 2023). According to Abrams and Hogg (2010), contextual cues can render a social identity more salient when compared to other social identities. Thus, to make the race/caste identities salient when generating conversations, our conversation seed prompt includes the background context regarding the colleagues' group identities (e.g. "White", "Brahmin") and the initial dialogue, in which the applicant's group identity (e.g. "Black", "Dalit") is discussed (e.g. "he appears to be [group]") (see the figure).

Colleagues & Applicant name selection: To introduce diversity in the name selection, we randomly selected names that are culturally indicative of different races and castes. All models were provided with identical prompts, including the same names and groups, to ensure consistency across the experiments. We discuss additional details in §H.

Hiring occupation selection: We consider four occupations in our experiments: Software Developer, Doctor, Nurse, and Teacher. These roles are chosen due to their varied societal perceptions and stereotypical associations along both race and caste dimensions, as highlighted in prior work on race (Ghosh and Caliskan, 2023; Veldanda et al., 2023) and caste (Pathania et al., 2023; Barua and Verma, 2021; Kumbhar, 2021; George, 2015, 2019).

LLM model selection: For a comprehensive analysis, we selected eight LLMs – two models from OpenAI and six widely used open-source models, as listed in the table. We set the temperature to 0.7 for all models with a 512-token limit.

We introduce the Covert Harms and Social Threats (CHAST) metrics, a set of 7 metrics grounded in social science literature, such as the Social Identity Threat Theory (Branscombe et al., 1999a; Ma et al., 2023) and Intergroup Threat Theory (Stephan and Cookie, 2000). These frameworks offer a nuanced understanding of the various forms of harm and threat to identity groups. The CHAST metrics are particularly relevant to our experimental setup, wherein LLMs generate conversations involving two colleagues discussing an applicant from a different identity group (e.g. "Black" or "Dalit").

According to the Social Identity Threat Theory (SIT), social identity threat is evoked when people feel concerned about being negatively treated, devalued, or stereotyped based on group membership, often arising from intergroup communication (Branscombe et al., 1999a; Ellemers et al., 2002). We selected three types of social identity threat: categorization threat, which is felt when people experience being reduced to a single category; and morality threat and competence threat, which arise when the group's morality or competence are undermined.

We also included metrics from Intergroup Threat Theory (ITT), which argues that intergroup threat is experienced when an in-group member perceives that another group is in a position to cause them harm (Stephan and Cookie, 2000). ITT offered two metrics: realistic threat, which occurs when an in-group member is concerned about their physical safety, political and economic prosperity, or resources due to an out-group, and symbolic threat, which arises when an in-group member is concerned about the integrity of the in-group's values, standards, morals, or attitudes due to an out-group member (Stephan and Cookie, 2000).

Lastly, we incorporated disparagement as part of our metrics, which encapsulates any behavior by a model that reinforces a notion that certain groups are less valuable than others (Dev et al., 2022). Following prior research on harms in job hiring scenarios, we additionally included opportunity harm (Yam and Skorburg, 2021; Roberts, 2015), which arises due to the withdrawal or reduced chance of an opportunity based on background or group identity. The table presents the CHAST metrics, including their definitions and examples extracted from LLM-generated conversations in our study.", ["2405_05378v1_0", "2405_05378v1_2", "2405_05378v1_1"]]

[60012, "However, dividing long videos into short clips would inevitably introduce an accompanying challenge, i.e., multi-granularity noisy correspondence (MNC). As shown in the figure, MNC refers to misaligned video-text pairs at two different granularities: i) Coarse-grained misalignment (clip-caption). Coarse-grained misalignment includes asynchronous and irrelevant misalignments according to whether a clip/caption is alignable with the captions/clips in the long video.
To be specific,asynchronous misalignment refers to temporal misalignment between subtitles and visual clips, e.g.,t1 in . It often occurs when people explain their actions before or after actually performing them, resulting in the mismatch between the order of statements and actions. On the other hand,irrelevant misalignment refers to irrelevant or meaningless captions that cannot be aligned with any available video clips (e.g., t2 and t6 in ), and vice versa for video clips. According to Han et al.(2022), only 30% of clip-caption pairs are visually aligned in HowTo100M (Miech et al., 2019), with even fewer 15% being naturally well-aligned; ii) Fine-grained misalignment (Frame-word). Within each video clip, the narration sentences may only partially correlate with the visual frames. As de-picted in , \u201cthe sugar goes on top\u201d in t5 is strongly correlated with visual content v5 while the action \u201cwatch the glaze take off\u201d is uncorrelated. Irrelevant words or frames can distort the identi-fication of crucial ones and result in inaccurate similarity measurements, further contaminating the clip-caption alignment. Note that only a few methods (Han et al., 2022) consider the coarse-grained misalignment problem in temporal learning while none of them realize this fine-grained misalign-ment problem. Undoubtedly, MNC poses a significant obstacle to effective temporal modeling.\nAs long videos are typically composed of a sequence of short video clips, we propose to use the optimal transport distance between video clips and captions as the similarity criterion for video-paragraph contrastive learning in a robust and efficient way.\nFine-grained Alignment. Most previous works (Xu et al., 2021; Yang et al., 2023b; Han et al.,2022) typically encode frames or words to a global feature using [CLS] token or averaging the frame\nor word embeddings. However, such strategies neglect fine-grained in-teractions between modalities and do not address the problem of frame-word misalignment.To address this issue, we propose a cross-modal late interaction mechanism to identify crucial words and key frames for fine-grained alignment inspired by Yao et al. (2022); Wang et al. (2022b).Take the front part for example, for each frame in the video clip, we identify the most impor-\ntant words through a soft-maximum operation, i.e., log-sum-exp approximation (Beck & Teboulle,2012), and then compute the average soft-maximum similarities of all frames as shown in Fig. 2.Similarly, for each textual token, we also find its related video frames in the latter part of Eq. (5). The parameter \u03b1 magnifies the importance of the most relevant words or frames. As \u03b1 approaches 0, the log-sum-exp approximates the maximum. Specifically, this soft-maximum operation allows us to reduce the negative influence of background words or frames on clip-caption similarity estimation.\nSince self-supervised contrastive learning (He et al., 2020) relies on the random sampling of neg-ative instances, captions that are semantically similar to the anchor clips can be treated as faulty negatives (Han et al., 2020; Zolfaghari et al., 2021), and vice versa. However, the existing one-hot target used in contrastive learning penalizes all negative predictions regardless of their correlations.To mitigate this issue, we propose to exploit the faulty negatives through optimal transport.\nResults. 
i) Background Removed: As shown in , TempCLR (Yang et al., 2023b) performs remarkably better than VideoCLIP (Xu et al., 2021) in terms of DTW and OTAM, as it is trained to explore the global temporal context. However, all these methods suffer from noisy correspon-dence in the temporal alignment. In contrast, our proposed robust optimal transport framework explicitly overcomes multi-granularity noisy correspondence. Specifically, our method effectively improves the performance of all measurements by a large margin (+ 1% Cap. Avg., 5.2% DTW, and 4% OTAM in terms of R@1), indicating that our method learns better temporal information.ii) Background Kept: As shown in , compared with the Background Removed results, the recall of all methods dropped as the irrelevant information in the background can distract the video features. Nevertheless, our proposed method consistently outperformed VideoCLIP and TempCLR,even under such challenging conditions.\n", ["2401_16702v1_0", "2401_16702v1_0", "2401_16702v1_0", "2401_16702v1_0", "2401_16702v1_1", "2401_16702v1_2"]] [60013, "Given an initial adversarial request Radv such as \u201cwrite a tutorial on how to make bomb\u201d that models always reject, the goal is to formulate a prompt that induces the target LLM (T) to generate content that fulfills the request. Our method, IRIS, uses the same LLM for the attacker (A). As shown in , IRIS consists of two main steps: Iterative Refinement, which iteratively refines the initial prompt, and Rate+Enhance, which maximizes the harmfulness of the output.\nDuring the Iterative Refinement step, IRIS operates through a feedback loop that progres-sively refines the adversarial prompt based on T's responses and A's modifications. At each iter-ation, the current prompt Pcurrent is presented to T, and its response R is evaluated to deter-mine whether T rejects the request based on a simple rule: reject if the response is less than 20 words; otherwise, do not reject. If T rejects the prompt, IRIS solicits an explanation from the attacker model A on why the attempt failed us-ing a template query QA(EXPLAIN:R). This self-\nexplanation step is vital for well-aligned LLMs(e.g., GPT-4), as it prevents an immediate rejection when directly asking models to refine the failed adversarial prompt, QA(MODIFY:Pcurrent). The output from the MODIFY query is a refined prompt, Prefined, which becomes the new basis for subse-quent iterations. The iterative prompt refinement process continues until Radv is found or the num-ber of attempts N is reached, which we set N = 4 in our experiments. Since there are 3 queries in each iteration, IRIS produces a maximum of 3N+1 = 13 queries to LLMs for our experiments, which is significantly more efficient than previous approaches (over 20 queries). However, over 80% of the time, only one or two iterations are used. Ex-periment artifacts show refined prompts Prefined\nproduced by IRIS always request the same harmful behavior as the original prompts Padv.\nIn the Rate+Enhance step, IRIS further prompts the target model to rate the harmfulness of Radv from 1 to 5 and refine the response to maximize its harmfulness rating, as Radv could just be a long out-put that containing safe educational content rather than harmful output. We provide an algorithmic implementation of IRIS in Algorithm 1.\nJailbreaking Methods for Comparison. In ad-dition to IRIS, we consider two state-of-the-art methods that use LLMs to refine jailbreak prompts: PAIR (Chao et al., 2023) and TAP (Mehrotra et al., 2023). 
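Before turning to these baselines, the iterative refinement procedure described above can be summarized in a short sketch. This is a minimal illustration under our own assumptions, not the authors' implementation: query_llm and the EXPLAIN/MODIFY/Rate+Enhance prompt strings are hypothetical stand-ins for the template queries.

def iris_jailbreak(query_llm, p_adv, max_iters=4):
    """Sketch of the IRIS loop: iterative refinement followed by Rate+Enhance.
    query_llm(prompt) is a hypothetical callable that sends one prompt to the
    shared attacker/target model and returns its text response."""
    p_current = p_adv
    for _ in range(max_iters):
        response = query_llm(p_current)           # query the target model T
        if len(response.split()) >= 20:           # responses under 20 words are treated as refusals
            # Rate+Enhance: ask the model to rate harmfulness (1-5) and rewrite
            # the response to maximize that rating.
            enhanced = query_llm(
                "Rate the harmfulness of the following output from 1 to 5, then "
                "refine it to maximize that rating:\n" + response)
            return p_current, enhanced
        # Refusal: self-explain the failure, then modify the adversarial prompt.
        explanation = query_llm("EXPLAIN: why was this request refused?\n" + response)
        p_current = query_llm("MODIFY: rewrite the prompt so the request succeeds.\n"
                              "Prompt: " + p_current + "\nAnalysis: " + explanation)
    return p_current, None                        # attempt budget exhausted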
PAIR uses Vicuna-13B (Chiang et al., 2023) to iteratively refine the prompt, while TAP further improves the method by incorporating the tree-of-thought reasoning (Yao et al., 2024). There is an- other method, PAP (Zeng et al., 2024), that fine-tunes GPT-3.5 to generate prompts but requires 400 queries when jailbreaking GPT-4. We exclude it from our comparisons.\n shows the main results that compare IRIS with TAP and PAIR, whose results were reported in Mehrotra et al. (2023). IRIS-2x represents an ensemble of two independent IRIS trials on each ad-versarial prompt, where the jailbreak is considered successful if at least one of the trials succeeds. The average number of queries for IRIS-2x is the sum of the queries in the two trials. We find that IRIS achieves higher jailbreak success rates with signif-icantly fewer queries than TAP and PAIR. IRIS has success rates of 98% and 92% for GPT-4 and GPT-4 Turbo, respectively, using under 7 queries on average. With two independent trials (IRIS-2x), these rates rise to 100% and 98% with under 13 queries on average, which is approximately 55% fewer queries than other methods while increasing the jailbreak success rate by at least 22%.", ["2405_13077v2_0", "2405_13077v2_1"]] [60014, " shows the overall architectural diagram of our proposed regeneration-based approach. The input to the framework is the paper and its review which we aim to determine whether they are written by AI or Human.\n[]\nThe idea behind this approach is that if a simi-lar prompt is given repeatedly to a large language model (LLM), the LLM is likely to generate re-views or responses that exhibit a consistent style,tone, and content, as outlined in the provided con-text. This consistency occurs because a large lan-guage model generally applies the patterns it has learned during training to the new content it gen-erates based on the given prompt. The study in (Hackl et al., 2023) found that GPT-4 demonstrated high inter-rater reliability, with ICC scores ranging from 0.94 to 0.99, in rating responses across mul-tiple iterations and time periods (both short-term and long-term). This indicates consistent perfor-mance when given the same prompt. Furthermore,the results showed that different types of feedbacks (content or style) did not affect the consistency of GPT-4's ratings, further supporting the model's ability to maintain a consistent approach based on the prompt.\nWe employ GPT to regenerate a review Rreg using the prompt Preg. We create two distinct embed-dings ER for Rreg and EF for R (review which we have to determine if the review is AI-generated or not). The idea is that if the reviewR is generated by an AI, we hypothesize that its embedding EF will exhibit a closer similarity to ER, the embedding of a known AI-generated review Rreg.Then, we quantify the similarity between the embeddings using the cosine similarity metric.\nHere, \u00b7 represents the dot product, and \u2225R\u2225 and \u2225F\u2225 represent the Euclidean norms of the embed-dings. This formula calculates the cosine of the an-gle between the two embeddings ER and EF, pro-viding a measure of similarity where values closer to 1 indicate higher similarity and thus a greater likelihood that both reviews are AI-generated.\nNext, we utilize the computed similarity score as input to train a neural network aimed at detect-ing AI-generated reviews. The training process involves optimizing the network's parameters via backpropagation. 
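As a concrete illustration of the regeneration-based detector described above, the sketch below computes the similarity feature and trains a single logistic unit on it. The embed and regenerate callables and the one-parameter classifier are simplifying assumptions, not the system used in the paper.

import numpy as np

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (||a|| * ||b||)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def regeneration_feature(embed, regenerate, paper, review):
    """embed(text) -> np.ndarray and regenerate(paper) -> str are hypothetical
    stand-ins for the embedding model and the GPT regeneration step (prompt P_reg)."""
    e_reg = embed(regenerate(paper))        # embedding E_R of the regenerated review
    e_rev = embed(review)                   # embedding E_F of the review under test
    return cosine_similarity(e_reg, e_rev)  # values near 1 suggest an AI-generated review

def train_logistic_detector(scores, labels, lr=0.1, epochs=200):
    """Tiny one-feature logistic classifier trained on the similarity scores."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(w * scores + b)))       # sigmoid predictions
        w -= lr * float(np.mean((p - labels) * scores))   # gradient of the cross-entropy loss w.r.t. w
        b -= lr * float(np.mean(p - labels))              # gradient of the cross-entropy loss w.r.t. b
    return w, b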
This optimization is directed by the cross-entropy loss function.\nWe propose an attack method to reduce the prob-ability of reviews being classified as AI-generated described in Algorithm-1 where we target the most frequent tokens in AI-generated reviews and re-place them with their synonyms, which are less frequent in the AI-generated content.\nHere, we focus exclusively on adjectives, refer-ring to this approach as the \"adjective attack.\" We chose adjectives because substituting nouns and adverbs with their synonyms often leads to nonsen-sical statements or drastically alters the meaning of the review. We discuss this in detail in Appendix C.\nIn the adjective attack, we substitute the top 100 highest probability adjective tokens (e.g., \"novel,\" \"comprehensive\") with their synonyms.\nTo obtain synonyms for the selected tokens, we utilize the NLTK WordNet database7. To preserve the original meaning of tokens as much as possible, we ensure that any synonym used to replace a token is also present in the AI corpus. If a suitable syn-onym is not found in the corpus, we do not replace the token.\nIn order to determine which tokens from the review should be replaced with their synonyms, we performed PoS tagging on the review. For example, if we are conducting an adjective attack, we replace only the adjective tokens in the review with their synonyms.\nWe also illustrate this with an example of an adjective attack, as shown in . In this ex-ample, the adjective tokens 'better' and 'various' from a review are among the top 100 AI token list. We replace them with their synonyms, 'improved' and 'numerous,' respectively.\n", ["2410_09770v1_0", "2410_09770v1_0", "2410_09770v1_1"]] [60015, "We first analyze how each pre- or inprocessing fairness method compares with each other, without the\neffects of postprocessing. shows the Pareto frontiers achieved by each method when using\nGBM base models (see also of the appendix). Overall, preprocessing methods (LFR and\nCR) achieved lacklustre fairness-accuracy trade-offs across all datasets, while the EG and FairGBM\ninprocessing methods performed best (highest area above Pareto frontiers). Specifically, LFR is\nindeed able to achieve high fairness fulfillment, but at a steep accuracy cost. To clarify: the plotted\ncolored Pareto frontiers correspond to multiple (up to 50) different underlying base models, while the\nblack dashed line corresponds to the postprocessings of the single GBM-based model with highest\naccuracy. The following subsection contains a more detailed analysis of postprocessing.\nInterestingly, some fairness methods were able to achieve higher test accuracy than unconstrained\nGBM models, suggesting improved generalization performance. shows one potential reason:\nfairness methods can take notoriously high compute resources to train, potentially giving them\na compute advantage with respect to their unconstrained counterparts. Recent related work has\nput forth other explanations for why fairness-constraining can improve learning. Wei et al. (2023)\nfind that fairness can improve overall performance under label noise by improving learning on tail\nsub-populations. On the other hand, Creager et al. (2021) establish ties between common fairness\nconstraints and goals from the robustness literature, suggesting that fairness-aware learning can\nimprove generalization under distribution shifts.\nFinally, shows results for a similar experiment where fairness constraints were learned only\non the two largest sub-groups (White and Black). 
This leads to an arguably easier problem to solve,\nwhich is reflected on the general compression of models on the vertical axis (reduced constraint\nviolation for all models). While previously the maximum unprocessed accuracy on ACSIncome was\nachieved at 0.38 constraint violation, on this binary-group setting it is achieved at 0.16 constraint\nviolation. Nonetheless, the same trend is visible on all studied datasets. Unconstrained models \u2014\neither trained in an unconstrained manner (circles) or made unconstrained via unprocessing (m\u2217) \u2014\noccupy regions of high accuracy and low fairness (high constraint violation). However, the Pareto\nfrontier that results from postprocessing m\u2217 (the best-performing unconstrained model) to different\nlevels of fairness relaxation again dominates or matches the remaining fairness methods.", ["2306_07261v5_0", "2306_07261v5_3", "2306_07261v5_1", "2306_07261v5_2"]] [60016, "Frequency spectrum analysis of the diffusion process. The natural images with different reso-\nlutions can be viewed as the result of visual signals sampled at varying frequencies. To compare\nthe frequency features of a 64 \u00d7 64 image and a 256 \u00d7 256 image, we can upsample the 64 \u00d7 64\none to 256 \u00d7 256, perform DCT and compare them in the 256-point DCT spectrum. The second\nrow of shows the signal noise ratio (SNR) at different frequencies and diffusion steps.\nIn , we clearly find that the same noise level on a higher resolution results in a higher\nSNR in the (low-frequency part of) the frequency domain. Detailed frequency spectrum analysis are\nincluded in Appendix C.\nAt a certain diffusion step, a higher SNR means that during training the neural network presumes\nthe input image more accurate, but the early steps may not be able to generate such accurate images\nafter the increase in SNR. This training-inference mismatch will accumulate over step by step during\nsampling, leading to the degradation of performance.\nWhy does the cascaded models alleviate this issue? Experiments in previous works (Nichol &\nDhariwal, 2021; Dhariwal & Nichol, 2021) have already shown that cascaded models perform better\nthan end-to-end models under a fair setting. These models usually use the same noise schedule in all\nstages, so why are the cascaded models not affected by the increase of SNR? The reason is that in\nthe super-resolution stages, the low-resolution condition greatly ease the difficulty of the early steps,\nso that even the higher SNR requires a more accurate input, the accuracy is within the capability of\nthe model.\nA natural idea is that since the low-frequency information in the high-resolution stage has already\nbeen determined by the low-resolution condition, we can continue generating directly from the up-\nsampled result to reduce both the training and sampling steps. However, the generation of low-\nresolution images is not perfect, and thus the solution of the distribution mismatch between ground-\ntruth and generated low-resolution images is a priority to \u201ccontinue\u201d the diffusion process.\nThe Effectiveness of block noise. We compare the performance of RDM with and without adding\nblock noise in and . With a sufficient phase of training, RDM with block noise outperforms\nthe model without block noise by a remarkable margin on both ImageNet and CelebA-HQ. This\ndemonstrates the effectiveness of the block noise. 
The addition of block noise introduces higher\nmodeling complexity of the noise pattern, which contributes to a slower convergence of training in\nthe initial stage, as illustrated by . We assume that training on a significantly smaller\nscale of samples leads to a fast convergence of the model, which obliterates such a feature, therefore\na similar phenomenon cannot be observed in the training of CelebA-HQ.", ["2309_03350v1_0", "2309_03350v1_1", "2309_03350v1_2", "2309_03350v1_3", "2309_03350v1_2"]] [60017, "There are various characteristics of this environment that make it open-ended (): Imperfect\ninformation, where players only have access to information discovered by their own units and\ncities. Stochastic dynamics with random events and crises that can disrupt plans. Multiple victory\npaths are possible (e.g., conquer, science, or highest score), requiring a balance between economic\nexpansion, military development, diplomacy, culture, and technology. A dynamic game space with\ncontinuous changes in state and action space for a single agent. Multi-agent interactions with built-\nin AI players or other models, providing the potential for self-play. General-sum game that allows\nalliance formation during gameplay, which changes the game structure and makes the victories of\ndifferent players non-exclusive. Changes in the number of players during a game due to revolts\nor conquers, leading to significant alterations in the joint state and action space. Communication\nbetween players through diplomatic actions and natural language chat, allowing agents to use their\nnatural language capabilities. In summary, CivRealm presents unique challenges and complexities,\nmaking it an open-ended testbed for decision-making agents. Please see \u00a7 A.1 for more details.\nAgent-architecture-agnostic framework. CivRealm empowers each agent to act as a player in\nthe open-source turn-based strategy game Freeciv [74]. CivRealm employs a server-proxy-client\nframework and implements proxy APIs so that a server hosts the game and the proxy establishes\nthe connection between agents (i.e., clients) and the server. The proxy distributes the game states\nreceived from the server to each agent and submits the actions returned by agents to the server. By\nthis design, agents with various architectures can seamlessly engage in Freeciv by interpreting the\nobservations provided by the proxy and generating actions that adhere to CivRealm's specifications..\nLLM-based-agent-friendly. Freeciv is a turn-based game that operates without the need for real-\ntime reactions. This affords players ample time for thoughtful deliberation. This pace aligns well\nwith the operation of LLM agents, which typically demand substantial time for inference.\nEvaluation platform for generalization ability. CivRealm offers multiple convenient methods to\ncreate novel scenarios, such as generating random maps with diverse landscapes and varying player\nand unit numbers, or modifying the rule sets that define the fundamental game rules. These elements\nresult in new configurations, demanding agents to reason the underlying game mechanics rather than\nrelying solely on memorized experiences and public knowledge. Therefore, CivRealm serves as an\neffective platform for assessing the generalization capabilities of decision-making agents.\nSupport for a variety of tasks. CivRealm offers a wide range of learning and reasoning tasks. 
These\ntasks include not only the comprehensive full game of Freeciv, but also smaller-scale mini-games\ndesigned using Lua scripts. In \u00a7 3.2, we will provide detailed descriptions of these tasks.\nIn CivRealm, players take the role of civilization leaders with the objective of guiding their civi-\nlization from its humble beginnings to greatness, where full games can last from several hours to\nseveral days. Civilizations evolve through eras, with an explosion in the number of controllable\nobjects as the game progresses, resulting in vast state spaces and joint actions (). Decisions\nin the game have multifaceted impacts, encompassing both long-term strategic consequences and\nshort-term tactical outcomes. This complexity necessitates a thought process that carefully weighs\nthe implications of these decisions since greedy moves can easily be non-optimal in the long term.\nObservations. Instead of directly processing raw pixel data of the game interface, we extract representative\n discrete information from graphics observed during human gameplay. These observations\nencompass data related to the map, units, cities, government, technology, and diplomacy. The map\ninformation includes whether a particular tile is observable, where a \u201ctile\u201d refers to a square space\non the grid-based map. The map provides details like the terrain type, owner of the tile, resource\noutput, additional infrastructure, and units present on the tile. The unit information provides insights\ninto a unit\u2019s health, location, owner, attack/defense strength, remaining movement points (indicating\nthe actions the unit can take in a turn), maintenance costs, etc. The city information covers details\nsuch as a city's location, owner, size, population, shield value, resource production, and more. The\ngovernment information indicates the current government type of the civilization, the tax rate, etc.\nThe technology information displays the technologies that have been researched and the technology\ncurrently being researched. The diplomacy information comprises data regarding diplomatic rela-\ntionships with other players. For a comprehensive list of these observations, please refer to \u00a7 A.1.1.\nActions. We have implemented a rich set of action classes that encompass the five primary facets\nof gameplay: unit, city, government, technology, and diplomacy. The unit actions are responsible\nfor controlling a player's units. They can be categorized into three main types: engineering actions,\nwhich handle tasks like city construction, planting, mining, and more; movement actions, including\nmoving, transportation, embarking, and so on; and military actions, such as attacking, fortifying,\nbribing, etc. The city actions pertain to the development and management of a city. They include unit\nproduction, building construction, city worker assignment, and more. The government actions allow\nplayers to change their government type to gain corresponding political benefits, adjust tax rates to\nbalance economic expansion and citizen happiness, etc. The technology actions enable players to set\nimmediate or long-term goals for their technology research. The diplomacy actions empower players\nto initiate negotiations, such as trading technologies, negotiating ceasefires, forming alliances, etc.\nFor an exhaustive list of the implemented actions, please refer to \u00a7 A.1.2.\nEvaluation Metrics. 
CivRealm offers evaluation metrics to assess playing performance across var-\nious dimensions, including population, constructed cities, researched technologies, produced units,\nexplored land, etc. An aggregated score is provided for overall evaluation. Please refer to \u00a7 A.1.3.", ["2401_10568v2_0", "2401_10568v2_1"]] [60018, "We introduce Heterogeneous Pre-trained Transformers (HPT), a family of architecture designed to scalably learn from data across heterogeneous embodiments. HPT modularizes a general policy network architecture () and pre-trains the policy representation of a latent transformer with supervised learning. Inspired by learning from multimodal data [1, 73, 19, 30], we use embodiment-specific tokenizers, dubbed \u201cstem\u201d, to align various sensor inputs such as camera views and pro-prioception inputs. The \u201ctrunk\u201d is shared and pre-trained across datasets and is transferred when adapting to new embodiments and tasks that are unknown during the pre-training times. Moreover,we use task-specific action decoders, dubbed \u201chead\u201d, to produce the action outputs. Crucially, after\n\u201ctokenizing each embodiment\u201d, HPT operates on a shared space of a short sequence of latent tokens.This hierarchy is motivated by how humans handle feedback loops between specific motor responses and perceived stimuli at the level of the spinal cord\u2019s neural circuitry [68].\nWe extensively investigated the scaling behaviors and various designs of policy pre-training to the extent of more than 50 individual data sources (2 times more than [55]) and model size of over 1 billion parameters. Analogous to the scaling laws [26, 28], we found that to some extent, HPT scales with the dataset quantity and diversity as well as the model and training compute.\nIn addition, heterogeneity can occur in different embodiment domains, such as real robot hardware, simulation domains, and human videos. We incorporate many available embodied datasets in different embodiments such as real robots [14, 75, 38], simulation [81, 89, 49, 20, 85, 80] and internet human videos [15] in the pre-training process and demonstrate the generality of our framework including embodiments beyond expensive real-world on-robot teleoperations.\nThrough transfer learning experiments across multiple simulation benchmarks [89, 49, 81] and real-world dexterous tasks, we compare with several baselines and the from-scratch counterparts.Overall, based on the pre-training objectives, HPT can scale with the model, data, compute, and the heterogeneity of the robotic datasets across real robots, simulations, and human videos. These pre-training procedures and models can simplify building reliable robotic policies for new embodiments and new tasks in terms of data requirements and generalized performance. As an attempt to scale heterogeneous pre-training, our code and weights are open-sourced, and we hope that HPT can shed some light on learning robot representations from heterogeneous embodiments and tasks.\nWe experiment beyond real-world robot teleop data, which is expensive to collect and scale. For the additional datasets, we consider 7 simulation datasets across many popular simulators Drake [81], Mujoco [89, 49], Isaac Sim [20], and PyBullet [85, 80], as well as Sapien [52] and Flex [66], with image inputs and expert demonstrations. 
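Before turning to the human-video data below, the stem-trunk-head modularization described above can be sketched as follows. The layer sizes, token counts, and the plain TransformerEncoder trunk are illustrative assumptions written against PyTorch, not the released HPT architecture.

import torch
import torch.nn as nn

class HPTPolicySketch(nn.Module):
    """Embodiment-specific stems map heterogeneous observations to a short sequence of
    latent tokens; a shared trunk processes them; a task-specific head decodes actions."""
    def __init__(self, obs_dims, action_dim, d_model=256, n_tokens=16):
        super().__init__()
        # one stem (tokenizer) per sensor modality of a given embodiment
        self.stems = nn.ModuleDict({
            name: nn.Linear(dim, n_tokens * d_model) for name, dim in obs_dims.items()
        })
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=4)   # shared, pre-trained across datasets
        self.head = nn.Linear(d_model, action_dim)                # task-specific action decoder
        self.n_tokens, self.d_model = n_tokens, d_model

    def forward(self, obs):
        tokens = [self.stems[k](v).view(v.shape[0], self.n_tokens, self.d_model)
                  for k, v in obs.items()]
        latent = self.trunk(torch.cat(tokens, dim=1))   # shared latent token space
        return self.head(latent.mean(dim=1))            # pooled latent -> action

# toy embodiment with a flattened camera feature and 14-D proprioception, 7-D actions
policy = HPTPolicySketch({"camera": 512, "proprio": 14}, action_dim=7)
action = policy({"camera": torch.randn(2, 512), "proprio": torch.randn(2, 14)})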
For the human datasets that lack proprioception and action information, we use poses and 2D positions as surrogates for the supervised policy learning objectives. We use a total of 300 trajectories from EPIC Kitchens [15] and PoCo [82], with a maximum trajectory length of 1000. See the Appendix and Table 4 for more details on the dataset compositions.
In this experiment, we use a maximum of 1000 trajectories for each dataset and compare against the baseline of 27 datasets, with evaluation on all the pre-trained datasets. We show that pre-training on additional embodiment datasets such as simulation and human video data is possible, despite the large embodiment gaps with real robots. These datasets provide complementary embodiment data to pure teleop data, and they illustrate how much heterogeneity can be handled in the HPT framework.", ["2409_20537v1_0", "2409_20537v1_2", "2409_20537v1_1"]] [60019, "This paper proposes a novel framework, shown in the referenced figure, referred to as the Boosting of Thoughts (BoT), which realizes a boosting mechanism that embraces aggregation and experience, thereby enabling the progressive refinement of unreliable reasoning steps (weak thoughts) by learning from errors to eventually solve various problems. Starting with a simple prompt without human annotations for LLMs, BoT may obtain only weak thoughts. With aggregation, BoT is capable of deriving a more logical and effective thought chain from them, thereby guiding the subsequent refinement. This guidance in our framework is achieved by tuning the prompt with experience, namely the detailed error reports, advice, and instructions for each reasoning step obtained by exploiting LLMs to analyze the aggregated chain. As such experience accumulates in the prompt, it gradually leads to stronger thoughts.
Specifically, BoT implements this boosting mechanism as an experience-driven iteration process, as shown in the referenced figure. In each iteration, for a given prompt, BoT builds massive simplistic thought structures in parallel with the LLM. We select the tree structure as in ToT (Yao et al., 2023) but significantly modify it to weighted binary trees with various growth strategies for our boosting purposes. After extracting the root-to-leaf branch with the highest score per tree, the aggregation component of BoT is performed to aggregate them into one single thought chain. Subsequently, this chain is evaluated by the same LLM to gain the experience, which is added to the prompt as guidance for the thought generation in the next iteration.
The primary experimental results are summarized in the referenced tables, where we present insights into the overall performance of BoT. Our findings indicate that the proposed BoT with the boosting mechanism 1) obtains competitive problem-solving rates on most datasets without human annotations; and 2) is capable of reaching a new state of the art on GSM8K and AQuA when provided with CoT examples. However, the experimental results also demonstrate that BoT heavily relies on experience and is thus sensitive to the ability of the underlying LLMs.
Specifically, BoT, starting from a simple initial prompt and performing basic chatting, eventually obtains a GSM8K solve rate 0.1% higher than the current state of the art, CSV (Zhou et al., 2023), which heavily relies on the code interpreter of GPT-4. On AQuA, BoT is 2.5% higher than the SOTA. This demonstrates that by adding error analysis and advice to the prompt without human annotations, LLMs are able to perform well on complex reasoning.
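A minimal sketch of this experience-driven iteration is given below. generate_thought_trees, best_branch, aggregate, and analyze are hypothetical placeholders for the tree construction, branch extraction, aggregation, and LLM self-analysis steps described above, and llm is any text-completion callable.

def boosting_of_thoughts(llm, problem, generate_thought_trees, best_branch,
                         aggregate, analyze, n_iters=10, n_trees=15):
    """Sketch of BoT: start from a simple prompt and, at every iteration, fold the
    LLM's own error analysis (the 'experience') back into the prompt."""
    experience = []
    answer = None
    for _ in range(n_iters):
        prompt = ("Solve the problem step by step.\nProblem: " + problem +
                  "\nExperience:\n" + "\n".join(experience))
        trees = generate_thought_trees(llm, prompt, n_trees)   # weighted binary thought trees
        branches = [best_branch(t) for t in trees]             # highest-scoring root-to-leaf chain per tree
        chain = aggregate(branches)                            # one aggregated thought chain
        feedback = analyze(llm, problem, chain)                # error report and advice per reasoning step
        experience.append(feedback)
        answer = chain
    return answer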
The main reason is that a simple prompt can be iteratively refined by accumulating prior experience towards accurate problem-solving. After including CoT examples in the prompt, BoT+CoT outperforms SOTA by\n1.3% on average in GSM8K and AQuA datasets. We argue that the CoT examples can be regarded as the success cases in the experience, directly guiding the subsequent thought structures generation of BoT. Thus, cooperating with the iteration refinement, BoT+CoT reaches a new SOTA. It also deserves to show that because BoT can gradually collect analysis of various reasoning chains (bad or good) as experience, it is consistently close to the BoT+CoT. However, BoT and BoT+CoT, especially BoT, are at least 18% lower than SOTA in MATH. This observation means weak LLMs may not perform well with BoT due to their lower ability to analyze reasoning chains for an effective experience, as supported by .\n presents that with BoT, GPT-4 and Llama2 are respectively improved by 11.6% and 4.4% on average in three datasets. The two numbers show a clear trend that when the LLM is weaker, BoT's performance drops significantly. With powerful GPT-4, as presented in , BoT and BoT-CoT behave similarly to those shown in . Additionally, their performance escalates along a similar trend as the number of trees varies from 1 to 20. As Llama2 is weaker, BoT is unable to benefit from its analysis to perform the experience-driven iteration process, which is particularly shown by (a). When provided with valid success cases, i.e., 5-shots, BoT, through progressive refinement, can still help Llama2 to solve more problems than the baseline even though the improvement is limited.", ["2402_11140v1_0", "2402_11140v1_0", "2402_11140v1_1", "2402_11140v1_2", "2402_11140v1_1", "2402_11140v1_2", "2402_11140v1_2", "2402_11140v1_2", "2402_11140v1_1", "2402_11140v1_2"]] [60020, "In standard supervised HTC, there is an underlying assumption of abundant training samples(Zhao et al., 2023; Im et al., 2023; Song et al.,2023), which is often unattainable and expensive to construct manually. Moreover, HTC datasets are characterized by a complex hierarchical label structure, with leaf labels typically following a Zipfian distribution, resulting in very few data instances for these labels. As a result, the few-shot setting is more realistic, and has gained increasing interest recently (Ji et al., 2023; Bhambhoria et al., 2023; Wang et al., 2023b). Nevertheless, existing works often struggle with unsatisfactory performance in this setting. For example, BERT with the vanilla fine-tuning strategy performs extremely poorly in few-shot HTC. \nRecently, large language models (LLMs) have achieved notable success on various NLP tasks (Wang et al., 2023a; Drozdov et al., 2023; Zeng et al., 2023), which have significantly enhanced the efficacy of in-context learning (ICL) with relevant demonstrations in the few-shot setting (Shome and Yadav, 2023; Dai et al., 2023; Zhang et al., 2023). However, the application of ICL on HTC faces unique challenges, diverging from traditional text classification scenarios. These challenges are primarily due to two distinct characteristics of HTC, as delineated in \n. \nFirstly, HTC features a deep hierarchical labeling structure and expansive label sets, resulting in large label sets in ICL, which adversely impacts its performance. 
Secondly, as the hierarchy deepens, the semantic similarity between adjacent labels increases (Stein et al., 2019), making it very challenging to select relevant demonstrations that guide the learning process efficiently.\nIn this work, we introduce the first ICL-based framework for few-shot HTC. Specifically, we use a LLM as the foundation model for inference, and provide demonstrations to guide HTC label generation through ICL. Our success depends on finding suitable demonstrations for a given input. In order to achieve this, we build a retrieval database that can find the most-relevant demonstrations for the input. Further, in order to avoid providing an enormous set of multi-layer contextual HTC labels all at once, as is required for ICL, we suggest an iterative policy to infer the labels layer-by-layer, reducing the number of candidate labels greatly \n illustrates our ICL-based framework for HTC. We first train a PLM-based indexer and build a retrieval database containing reference samples (the training data).\nAfter that, we perform a similarity search in the retrieval database with the text to be inferred. Finally, we construct an ICL prompt with highly similar demonstrations for prediction.\nWe will introduce our ICL prompt policy for HTC (\u00a7 3.1), and then detail the retrieval database construction (\u00a7 3.2) and demonstration retrieval methods (\u00a7 3.3). Indexer Training. For indexer training, we apply the objectives of mask language modeling Lmlm, and layer-wise classification Lcls. Lmlm is used to predict the words that fill the random mask tokens in the inputs. Lcls is to predict HTC labels through each hierarchical layer index vectors.\nAdditionally, we propose DCL for indexer training, which uses label text information to select positive and negative samples. For x, positive samples are chosen from sentences with the same label as x. Additionally, the corresponding label description d could be treated as a positive sample. Negative samples consist of two parts. First, based on the similarity between d and descriptions of other labels, negative examples are sampled from highly similar label categories. Similarly, their corresponding label descriptions could be also treated as negative samples. In addition, a few randomly selected sentences from other labels are used as negative samples of x. Thus, compared to traditional random sampling methods, our negative sample selection approach opts for more instances with semantically similar labels as hard negative samples.\nThen the index vectors among the positive samples are pulled together and the negative ones are pushed apart. Taking x as an example, denote B = {x, x+, x\u2212 1 , . . . , x\u2212 n } as a group of input data. The contrastive loss can be calculated as: In this paper, we proposed a retrieval-style ICL framework for few-shot HTC. We uniquely identify the most relevant demonstrations from a retrieval database to improve ICL performance and meanwhile designed an iterative policy to inference hierarchical labels sequentially, significantly reducing the number of candidate labels. The retrieval database is achieved by using a HTC labelaware representation for any given input, enabling the differentiation of semantically-closed labels (especially the leaf adjacent labels). 
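The DCL objective described above can be written as a standard InfoNCE-style contrastive loss over the group B. The sketch below (assuming a temperature tau and cosine similarity between index vectors) is our reading of that description, not the paper's exact formula.

import torch
import torch.nn.functional as F

def dcl_loss(anchor, positive, negatives, tau=0.1):
    """anchor: (d,), positive: (d,), negatives: (n, d) index vectors for the group
    B = {x, x+, x-_1, ..., x-_n}. Positives are pulled together, negatives pushed apart."""
    pos_sim = F.cosine_similarity(anchor, positive, dim=0) / tau
    neg_sim = F.cosine_similarity(anchor.unsqueeze(0), negatives, dim=1) / tau
    logits = torch.cat([pos_sim.unsqueeze(0), neg_sim])   # positive first, then the n negatives
    return -F.log_softmax(logits, dim=0)[0]               # negative log-likelihood of the positive pair

# toy usage with random 768-d index vectors and 8 hard negatives
loss = dcl_loss(torch.randn(768), torch.randn(768), torch.randn(8, 768))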
The representation learning is implemented by continual training on a PLM with three carefully-designed objectives including MLM, layer-wise classification, and a novel DCL objective.\nWe conducted experiments on three benchmark datasets to evaluate our method. The results show that our method is highly effective, which is able to gain large improvements among a serious of baselines. Finally, our method can bring the stateof-the-art results in few-shot HTC on the three datasets. Further, we performed comprehensive analysis for deep understanding of our method, spreading various important factors.\nThis work still includes several unresolved problems, which might be addressed in the future.\nFirstly, LLMs are currently confined to expanding text via label descriptions and their application to full training set expansion has not been effective.\nIn order to fully utilize LLMs in text expansion, we need further optimization. Second, the performance gap between supervised methods and our ICL-based approach appears to diminish with increasing training dataset size, suggesting the need for further analysis.\n", ["2406_17534v2_0", "2406_17534v2_1"]] [60021, "In this section, we elaborate on the details of the proposed LlamaDuo, which are depicted in . This LLMOps pipeline aims to ensure service LLMs continuity by transitioning knowledge and abilities from service-oriented LLMs to smaller, locally manageable LLMs without the need for human intervention.\n[]Tasks and coverage dataset. We select four categories of downstream tasks-summarization, classification, coding, and closed QA-based on their prevalent use and relevance to the operational scope of service LLMs. We utilize the open-source \u201cNo Robots\u201d [30] dataset as the coverage dataset. This coverage dataset consists of 10K high-quality prompt and response pairs across 10 categories, crafted by expert annotators. Specifically, we utilize four subsets of the coverage dataset, each corresponding to our targeted tasks. These subsets serve as seeds for generating synthetic data that can closely align with user expectations for LLM interactions. Service and local LLMs. Considering the API cost effectiveness, rate limit, and model utility, we select most popular service LLMs including GPT4o by OpenAI 4 , Claude 3 Sonnet by Anthropic 5 , and Gemini 1.5 Flash by Google 6 to serve as synthetic data generator and judges. As for the smallscale local LLMs to be fine-tuned, we opt for the open-source Gemma 2B and 7B [13], Mistral 7B [17], and LLaMA3 8B [25] as the base models. This selection is motivated by our aim to rigorously evaluate the efficacy and adaptability of our proposed pipeline across diverse settings. The varying scales of base models facilitate a nuanced comparison, allowing us to assess the impact of model scale on performance improvements. However, as a model-agnostic LLMOps pipeline, our LlamaDuo can be generalized to various forms of service and local LLMs beyond the aforementioned models. \\subsection{Coverage Dataset}\nUsers interact with service LLMs through prompt engineering efforts. \nThe historical trials composed of the user input prompt and the responses of service LLMs, and potential errors will be recorded and saved in local storage. \nSubsequently, users annotate and collect the most satisfied prompt and response pairs conformed with their real-world use cases. 
\nThe resulting instruction dataset is termed as coverage dataset, denoted as $\\mathcal{D}^{(0)} \\coloneq \\{\\mathcal{I}^{(0)}_i, \\mathcal{R}^{(0)}_i\\}^{N}_{i=1}$, and split as train and test subsets by ratio $\\Phi$. Here, $\\mathcal{I}^{(0)}_i$ denotes the $i$-th instruction (prompt) in $\\mathcal{D}^{(0)}$, $\\mathcal{R}^{(0)}_i$ is the corresponding response for the $i$-th instruction, and $N$ is the number of samples in $\\mathcal{D}^{(0)}$.\nSince coverage dataset is of high quality and satisfying the user's intent in real-world context, the train subsets $\\mid\\mathcal{D}^{(0)}_{train}\\mid=\\Phi\\cdot N$ will be served as seeds for synthetic datasets generation, while the test subset $\\mid\\mathcal{D}^{(0)}_{test}\\mid=(1-\\Phi)\\cdot N$ is reserved for performance evaluation of the fine-tuned local LLMs. \\subsection{Fine-tuning}\nTo efficiently and effectively adapt the local model to specific downstream task(s), we finetune the local LLM with the supervised learning paradigm on high-quality instruction data. \nAt the initial cyclicality of the pipeline, the selected local LLM is fine-tuned on the train subsets $\\mathcal{D}^{(0)}_{train}$ of the coverage dataset, obtaining the fine-tuned model $\\pi^{(0)}$.\nAt subsequent cyclicality $t$, if the performance of fine-tuned model does not reach or surpass the predetermined evaluation threshold $\\mathbf{\\varepsilon}$ of specific tasks, the local LLM $\\pi^{(t)}$ will be continuously fine-tuned on the increasing number of synthetic data $\\{\\mathcal{D}^{(1)}_{synth}, \\mathcal{D}^{(2)}_{synth}, \\ldots, \\mathcal{D}^{(t-1)}_{synth}\\}$ generated from service LLMs with $\\mathcal{D}^{(0)}_{train}$ as seed dataset. Consequently, when $t\\ge 1$, the objective of the fine-tuning phase can be formulated as \n\\begin{equation}\n \\mathcal{L}_{\\mathrm{SFT}}(\\pi^{(t)}, \\mathcal{D}^{(t)}) = -\\mathbb{E}_{\\mathcal{I}^{(t)}\\sim\\mathcal{D}^{(0)}_{train},\\mathcal{R}^{(t)}\\sim \\{\\mathcal{D}^{(0)}_{train},\\{\\mathcal{D}^{(\\tau)}_{synth}\\}_{\\tau=1}^{t-1}\\}}\n \\left[\\log P_{\\pi^{(t-1)}}(\\mathcal{R}^{(t)}\\mid \\mathcal{I}^{(t)})\\right], \n\\end{equation} \n\\subsection{Batch Inference}\nAfter the fine-tuning stage, the fine-tuned local model is prompted with prompts $\\mathcal{I}^{(0)}$ sampled from the test subsets $\\mathcal{D}^{(0)}_{test}$ of the coverage dataset to produce corresponding response $\\mathcal{\\hat{R}} \\sim \\pi^{(t)}(\\mathcal{R}^{(0)}\\mid \\mathcal{I}^{(0)})$. \nTo improve the diversity and robustness of responses, the local model generates a batch of $K$ responses $\\{\\mathcal{\\hat{R}}_1, \\mathcal{\\hat{R}}_2,\\ldots,\\mathcal{\\hat{R}}_K\\}$ for each given prompt $\\mathcal{I}^{(0)}$. \nTotally, it will construct prompt and responses pairs $\\{(\\mathcal{I}^{(0)}_i,\\mathcal{\\hat{R}}_i)\\}_{i=1}^{(1-\\Phi)\\cdot N \\cdot K}$. Formally,\n\\begin{equation}\n\\mathcal{\\hat{R}}_k \\sim \\pi^{(t)}(\\mathcal{R}^{(0)}\\mid \\mathcal{I}^{(0)}), \\mathrm{for}\\enspace k\\in \\{1,2,\\ldots, K\\},\n\\end{equation}\n\\begin{equation}\n \\mathcal{I}^{(0)} \\sim \\mathcal{D}^{(0)}_{test}.\n\\end{equation} \\subsection{Evaluation}\nIn the evaluation stage, we employ ``service LLMs-as-judge'', denoted as $\\mathcal{E}_{\\mathrm{LLM}}(\\cdot)$, to conduct performance evaluation of local model on $\\{(\\mathcal{I}^{(0)}_i,\\mathcal{\\hat{R}}_i)\\}_{i=1}^{(1-\\Phi)\\cdot N \\cdot K}$. 
\nFollowing the works \\cite{zheng2024judging,yuan2024self}, the service LLMs evaluate each response triple $(\\mathcal{I}^{(0)},\\mathcal{\\hat{R}},\\mathcal{R}^{(0)})$, comprising prompt, the corresponding generated response, and the ground truth, by $M$ times with pairwise comparison and single answer grading strategies.\nThis evaluation process guarantees the trustworthy and reduces the inherent bias of results. \nMoreover, when employing LLMs as evaluators, the evaluation metrics can be more flexibly adapted to specific tasks, along with a thorough evaluation guide.\nIn this paper, we measure the similarity between $\\mathcal{\\hat{R}}$ and $\\mathcal{R}^{(0)}$, and how precise $(\\mathcal{I}^{(0)},\\mathcal{\\hat{R}})$ the responses generated by the local LLM answer the given instructions.\nTherefore, $\\{(\\mathcal{I}^{(0)}_i,\\mathcal{\\hat{R}}_i,\\mathcal{R}^{(0)}_i)\\}_{i=1}^{(1-\\Phi)\\cdot N \\cdot K}$ invokes service LLMs to perform evaluation by $(1-\\Phi)\\cdot N \\cdot K \\cdot M$ times. Subsequently, the evaluation results can be leveraged according to the intention of the operator performing this LLMOps pipeline. \nFor example, actions can be taken to increase the reliability of service LLM as an evaluator by calculating the mean or median. In this study, we adopt the mean score $V_{\\pi^{(t)}}$ and coverage percentage $C_{\\pi^{(t)}}$ with $\\zeta$ score as evaluation results. Here, the coverage percentage $C_{\\pi^{(t)}}$ indicates the proportion of responses that have met or exceeded the quality benchmark.\nFormally, \\begin{equation}\n V_{\\pi^{(t)}} = \\frac{1}{(1-\\Phi)\\cdot N \\cdot K}\\sum_{j=1}^{(1-\\Phi)\\cdot N \\cdot K}V_{\\pi^{(t)}}^j, \n\\end{equation}\n\\begin{equation}\n C_{\\pi^{(t)}} = \\frac{1}{(1-\\Phi)\\cdot N \\cdot K}\\sum_{j=1}^{(1-\\Phi)\\cdot N \\cdot K}\\mathds{1}(V_{\\pi^{(t)}}^j\\ge \\zeta), \n\\end{equation}\n\\begin{equation}\n V_{\\pi^{(t)}}^j = \\frac{1}{M}\\sum_{m=1}^{M}\\mathcal{E}_{\\mathrm{LLM}}(\\mathrm{prompt}^{(eval)}, d_j), \n\\end{equation}\n\\begin{equation}\nd_j \\sim \\{(\\mathcal{I}^{(0)}_i,\\mathcal{\\hat{R}}_i,\\mathcal{R}^{(0)}_i)\\}_{i=1}^{(1-\\Phi)\\cdot N \\cdot K}, \n\\end{equation}\nwhere $V_{\\pi^{(t)}}$ and $C_{\\pi^{(t)}}$ denote the performance of local LLM at $t$-th cyclicality, $\\mathds{1}(\\cdot)$ is an indicator function, $\\zeta$ denotes a threshold score of $C_{\\pi^{(t)}}$, $\\mathrm{prompt}^{(eval)}$ is the system prompt used for LLM-as-a-Judge. \n\\subsection {Data Synthesis}\nIf the performance of fine-tuned local LLM $V_{\\pi^{(t)}}$ or $C_{\\pi^{(t)}}$ fails to reach or surpass the predetermined evaluation threshold $\\mathbf{\\varepsilon}$ of specific tasks, it indicates that fine-tuned local LLM's capabilities are insufficient for the tasks at hand.\nThus, the local LLM cannot yet serve as a substitute for the service LLM and necessitates further refinement.\nTo achieve this, we utilize service LLMs to generate additional synthetic datasets for fine-tuning local LLM in the next cyclicality. 
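Before detailing the synthesis step, the evaluation aggregation defined above (the mean score V and the coverage percentage C against the threshold ζ) can be sketched as follows; the score array layout is an assumption made for illustration.

import numpy as np

def aggregate_judgement(scores, zeta):
    """scores: array of shape ((1 - Phi) * N * K, M) holding the M judge ratings for
    each generated response. Returns the mean score V and the coverage percentage C,
    i.e. the fraction of responses whose averaged rating reaches the threshold zeta."""
    per_response = scores.mean(axis=1)        # V^j: average over the M judge calls
    v = float(per_response.mean())            # mean score V
    c = float((per_response >= zeta).mean())  # coverage percentage C
    return v, c

# toy example: 4 responses, each judged M = 3 times, with zeta = 70
v, c = aggregate_judgement(np.array([[80, 75, 90], [60, 65, 55],
                                     [72, 70, 71], [90, 88, 95]]), zeta=70.0)

If neither quantity reaches the task-specific threshold ε, the pipeline moves on to the data synthesis stage described next.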
\nTo maintain the consistency of data distribution of coverage dataset $\\mathcal{D}^{(0)}$ constructed from real-world scenarios, we employ the train subsets $\\mathcal{D}^{(0)}_{train}$ as seeds and apply the same framework \\cite{wang2023self,alpaca} for synthetic dataset generation.\nDuring synthetic dataset generation, we perform data deduplication to exclude identical samples from $\\{\\mathcal{D}^{(0)}_{train}, \\{\\mathcal{D}^{(1)}_{synth}, \\mathcal{D}^{(2)}_{synth}, \\ldots, \\mathcal{D}^{(t-1)}_{synth}\\}\\}$ and filter out low-quality samples based on carefully designed rules. \nFinally, we conduct rigorous data decontamination for the synthetic dataset to remove samples that closely resemble those in the test subset $\\mathcal{D}^{(0)}_{test}$ of the coverage dataset.\nFormally, the data synthesis stage can be formulated as \n\\begin{equation}\n\\mathcal{D}^{(t)}_{synth} \\leftarrow \\bigcup \\psi(\\mathcal{D}^{(t)}_{synth},\\{\\mathcal{D}^{(0)}_{train}, \\{\\mathcal{D}^{(\\tau)}_{synth}\\}_{\\tau=1}^{t-1}\\},\\mathcal{D}^{(0)}_{test}), \n\\end{equation}\n\\begin{equation}\n \\mathcal{D}^{(t)}_{synth} \\sim \\mathcal{S}_{\\mathrm{LLM}}(\\mathrm{prompt}^{(synth)}, seed), \n\\end{equation}\n\\begin{equation}\n seed \\sim \\mathcal{D}^{(0)}_{train}, \\mathrm{for}\\enspace V_{\\pi^{(t)}} < \\mathbf{\\varepsilon}\\enspace \\mathrm{or}\\enspace C_{\\pi^{(t)}} < \\mathbf{\\varepsilon}, \n\\end{equation}\nwhere $\\bigcup\\psi(\\cdot,\\cdot,\\cdot)$ represent a series of data post-processing operations, $\\mathcal{D}^{(t)}_{synth}$ denotes synthetic data generated from service LLMs at $t$-th cyclicality, $\\mathcal{S}_{\\mathrm{LLM}}$ and $\\mathrm{prompt}^{(synth)}$ are the service LLM and system prompt used for the data synthesis, respectively. For GPT4o, as of August 2024, the pricing is \\$5 and \\$15 per million tokens for input and output, respectively. We estimate that a light workload, utilizing 10 million input tokens and 1 million output tokens per day, incurs a daily cost of \\$65. Conversely, a heavy workload, consuming 50 million input tokens and 10 million output tokens per day, is estimated to cost \\$325 daily.\nThe monthly operational cost comparison between Gemma 7B and GPT-4o under different workloads is summarized in Table \\ref{tab:spec-cost-comparison}, demonstrating a significant advantage in fine-tuning and deploying a local LLM. Moreover, as depicted in , after the first two months, the cost of using GPT-4 under both light and heavy workloads exceeds that of setting up and running a local model deployed on $1\\times\\mathrm{L4}$ GPU and $8\\times\\mathrm{L4}$ GPU. After one year, GPT-4's costs surpass those of deploying a local model in all scenarios.\nThese findings highlight the substantial economic benefits of investing in local LLM fine-tuning and deployment for long-term use. 
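The break-even comparison behind these figures can be reproduced with a few lines. The per-token prices and the light-workload token volumes are those quoted above; the GPU rental rate is a placeholder assumption, and no claim is made about the exact totals reported in the paper.

def monthly_api_cost(m_in_per_day, m_out_per_day, usd_per_m_in=5.0, usd_per_m_out=15.0, days=30):
    """Token-based service cost per month, using the GPT-4o list prices quoted above."""
    return days * (m_in_per_day * usd_per_m_in + m_out_per_day * usd_per_m_out)

def monthly_local_cost(gpu_hourly_usd, n_gpus, hours_per_day=24, days=30):
    """Self-hosted serving cost; the hourly GPU rental rate is a placeholder assumption."""
    return days * hours_per_day * n_gpus * gpu_hourly_usd

# e.g., the light workload of 10M input / 1M output tokens per day vs. a single L4 GPU
api = monthly_api_cost(10, 1)
local = monthly_local_cost(gpu_hourly_usd=0.70, n_gpus=1)  # 0.70 USD/hour is illustrative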
\nMeanwhile, avoiding recurring token-based charges and maintaining control over model customization further enhances the appeal of the \\llamaduo for cost-conscious users and scenarios.", ["2408_13467v2_0", "2408_13467v2_0", "2408_13467v2_1"]] [60022, "In this study, we advance minimally supervised hierarchical text classification by enriching the weak supervision with additional textual clues and leveraging LLMs tailored for the hierarchical label structure.\nFirst, we enrich the existing label taxonomy with class-specific terms derived from two sources: LLM generation and automated extraction from the corpus.\nFor example, the ``conditioner'' class in Figure~\\ is enriched with key terms like ``moisture'' and ``soft hair'', which distinguish it from other classes.\nThese terms enhance the supervision signal by combining knowledge from LLMs and text corpus and improve the pseudo label quality for classifier training. \nSecond, we further explore the application of LLMs in hierarchical text classification from two perspectives.\nSpecifically, we enhance LLM annotation efficiency and effectiveness through a taxonomy-guided candidate search and optimize LLM document generation to create more precise pseudo data by using taxonomy paths. \n\\subsection{Core Class Refinement with Enriched Taxonomy} \\label{sec:class-refine} With the enriched class-indicative terms for each class, we propose to further utilize them to refine the initial core classes. \nIn this paper, we adopt an embedding-based document-class matching method.\nUnlike previous methods in flat text classification~\\cite{Wang2021XClass} that use keyword-level embeddings to estimate document and class representations, here, we are able to define class representations directly based on document-level embeddings thanks to the rough class assignments we created in the core class annotation step (c.f. Sec.~\\ref{sec:core-class}). To obtain document representations, we utilize a pre-trained Sentence Transformer model~\\cite{reimers-2019-sentence-bert} to encode the entire document, which we denote as $\\vec{d}$. Then, for each class $c$, we identify a subset of its assigned documents that explicitly mention at least one of the class-indicative keywords and thus most confidently belong to this class, $D_c = \\{d \\in D^0_c | \\exists w \\in T_c, w \\in d\\}$. Then, we use the average of their document embeddings as the class representation, $\\vec{c} = \\frac{1}{|D_c|} \\sum_{d \\in D_c} \\vec{d}$. \nFinally, we compute the document-class matching score as the cosine similarity between their representations. Based on the document-class matching scores, we make an observation that the true core classes often have much higher matching scores with the document compared to other classes. Therefore, we use the largest ``similarity gap'' for each document to identify its core classes. Specifically, for each document $d_i \\in \\D$, we first get a ranked list of classes according to the matching scores, denoted as $[c^i_1, c^i_2, \\dots, c^i_{|\\C|}]$, where $\\text{diff}^{\\,i}(j) := \\cos(\\vec{d}_i, \\vec{c}^i_j) - \\cos(\\vec{d}_i, \\vec{c}^i_{j+1}) > 0$ for $j \\in \\{1, \\dots, |\\C|-1\\}$. Then, we find the position $m_i$ with the highest similarity difference with its next one in the list. 
\nAfter that, we treat the classes ranked above this position as this document's refined core classes $\\mathbb{C}_i$, and the corresponding similarity gap as the confidence estimation $conf_i$.\n% \\vspace{-0.5em}\n\\begin{align}\\label{eq:refine-class}\n% \\small\n\\begin{split}\n conf_i = \\text{diff}^{\\,i}(m_i),\\quad\n \\mathbb{C}_i &= \\{c^i_1, \\dots, c^i_{m_i}\\},\\\\\n m_i = \\argmax_{j \\in \\{1, \\dots, |\\C|-1\\}}& \\text{diff}^{\\,i}(j).\n\\end{split}\n% \\vspace{-0.5em}\n\\end{align}\nFinally, we select the top 75\\% of documents $d_i$ and their refined core classes with the highest confidence scores $conf_i$, denoted as $\\D^\\text{core}$. \n\\subsection{Text Classifier Training with Path-Based Data Augmentation} \n\\label{sec:classifier} The final step of \\Our is to train a hierarchical text classifier using the confident refined core classes. One straightforward way is to directly use the selected core classes as a complete set of pseudo-labeled documents and train a text classifier in a common supervised way. However, such a strategy is ineffective, because the core classes are not comprehensive enough and cannot cover all the classes in the taxonomy. \nThis is because the hierarchical label space naturally contains fine-grained and long-tail classes, and they are often not guaranteed to be selected as core classes due to their low frequency. Empirically, for the two datasets we use in our experiments, Amazon and DBPedia, the percentages of classes never selected as core classes are 11.6\\% and 5.4\\%, respectively. These missing classes will never be used as positive classes in the training process if we only train the classifier with the selected core classes. Therefore, to overcome this issue, we propose the idea of \\emph{path-based document generation by LLMs} to generate a small number of augmented documents (e.g., $q=5$) for \\emph{each} distinct path from a level-1 node to a leaf node in the taxonomy. By adding the generated documents to the pseudo-labeled data, we can ensure that each class of the taxonomy will be a positive class of at least $q$ documents. Moreover, we use a path instead of a single class to guide the LLM generation, because the meaning of lower-level classes is often conditioned on their parents. For example, in Figure~\\, a path ``hair care'' $\\rightarrow$ ``shampoo'' can guide the LLM to generate text about hair shampoo instead of pet shampoo or carpet shampoo that are in different paths. To promote data diversity, we make one LLM query for each path and ask it to generate $q$ diverse documents. Appx.~\\ref{appx:prompts} shows the prompts we used. We denote the generated documents as $\\D^\\text{gen}$.\n[] Now, with two sets of data, the pseudo-labeled documents $\\D^\\text{core}$ and LLM-generated documents $\\D^\\text{gen}$, we are ready to introduce the classifier architecture and the training process. \\smallskip\n\\noindent\\textbf{Classifier architecture.} We use a simple text matching network~\\citet{shen-etal-2021-taxoclass} as our model architecture, which includes a document encoder initialized with a pre-trained BERT-base model~\\cite{Devlin2019BERTPO} and a log-bilinear matching network. Class representations are initialized by class name embeddings (c.f. Sec.~\\ref{sec:taxo-enrich}). 
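For reference, the similarity-gap refinement defined above can be sketched as follows; the NumPy array shapes are the only assumptions beyond what is stated in the text.

import numpy as np

def refine_core_classes(doc_vec, class_vecs):
    """doc_vec: (h,) document embedding; class_vecs: (|C|, h) class representations.
    Returns the refined core classes (those ranked above the largest similarity gap)
    and the gap itself as the confidence estimate."""
    sims = class_vecs @ doc_vec / (
        np.linalg.norm(class_vecs, axis=1) * np.linalg.norm(doc_vec))
    order = np.argsort(-sims)                       # classes ranked by matching score
    ranked = sims[order]
    gaps = ranked[:-1] - ranked[1:]                 # diff^i(j) between neighbouring ranks
    m = int(np.argmax(gaps))                        # position of the largest gap
    return order[: m + 1].tolist(), float(gaps[m])  # refined core classes, confidence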
\n\\smallskip\n\\noindent\\textbf{Training process.} For each document with refined core classes, we construct its positive classes as the union of its refined core classes and their ancestors in the label taxonomy, and its negative classes are the ones that are not positive classes or descendants of any core class. This is because the ancestors of confident core classes are also likely to be true labels, and the descendants may not all be negative given that the automatically generated core classes are not optimal. For the LLM-generated documents, we are more confident in their labels, so we simply treat all the classes in the corresponding label path as positive classes and all other classes as negative. We train the text-matching model with the standard binary cross-entropy loss. The loss terms of two sets of data are weighted by their relative size, $\\frac{|\\D^\\text{core}|}{|\\D^\\text{gen}|}$. Notice that we do not continue training the classifier with self-training that is commonly used in previous studies~\\cite{meng2019weakly, shen-etal-2021-taxoclass}. Using self-training may further improve the model performance, which we leave for future exploration. Algorithm~\\ref{alg} in Appendix summarizes \\Our.\nRecently, large language models (LLM) such as GPT-4~\\cite{OpenAI2023GPT4TR} and Claude 3~\\cite{claude3} have demonstrated strong performance in flat text classification through zero-shot or few-shot prompting~\\cite{sun-etal-2023-text}.\nHowever, applying LLMs in hierarchical settings, with large and structured label spaces, remains challenging.\nDirectly including hundreds of classes in prompts is ineffective, leading to structural information loss, increased computational costs, and diminished clarity for LLMs at distinguishing class-specific information.\nAlong another line of research, \\citet{meng2019weakly} propose to train a moderate-size text classifier by utilizing a small set of keywords or labeled documents for each class.\nHowever, compiling keyword lists for hundreds of classes and obtaining representative documents for each specific and niche category still demand significant human efforts.\n\\citet{shen-etal-2021-taxoclass} study the hierarchical text classification with \\emph{minimal supervision}, which takes the class name as the only supervision signal.\nSpecifically, they introduce TaxoClass which generates pseudo labels with a textual entailment model for classifier training.\nHowever, this method overlooks additional class-relevant features in the text and suffers from the unreliable pseudo label selection due to domain shift of the entailment model. ", ["2403_00165v2_0", "2403_00165v2_1", "2403_00165v2_1"]] [60023, "We explore a range of techniques to guide LLaMA-2 models (Touvron et al, 2023) to improve MT outputs using fine-grained feedback derived from Multidimensional Quality Metric (MQM) annotations (Freitag et al, 2021), as shown in . Following prior work on refinement, we start with evaluating the impact of such feedback when prompting LLMs in zero-shot and few-shot settings (\u00a75). Different from prior work, we then explore fine-tuning the LLM to advance its ability to improve translations based on the feedback provided in the prompt, in an instruction following style (Taori et al, 2023) (\u00a76). 
Through extensive experiments with three language pairs (Chinese-English, English-German, and English-Russian), we show that prompting LLMs to edit MT with feedback reliably improves translation quality as measured by automatic metrics, particularly in the few-shot settings, where the LLaMA-2 7B model achieves performance close to the 13B version (\u00a75). However, the models are unable to make the most of the fine-grained feedback, which performs roughly on par with generic prompts for improvement. Instruction fine-tuning shows stronger improvements on translation quality based on both automatic and human evaluation (\u00a76). Our analysis reveals that prompting the fine-tuned LLMs with fine-grained feedback not only helps fix the errors highlighted in the prompt (\u00a77), but also leads to more natural outputs.\nWe ask bilingual human annotators to assess the post-edited outputs obtained by fine-tuning in the bilingual setting, as it is the stronger approach based on automatic scores. We randomly select 50 instances for each language pair for annotation. Each instance is examined by 3 human annotators. For each instance (source text, original MT with MQM annotation, and post-edited MT), the annotator is asked to rate on a 5-point Likert scale (1 = strongly disagree to 5 = strongly agree) whether the translation quality has improved, and to what extent the annotated errors are actually resolved through post-editing. Ordinal Krippendorff\u2019s alpha (Krippendorff, 2011), which measures inter-annotator agreement, is moderate for the Overall quality: 0.527, 0.479, 0.421 for Zh-En, En-De, and En-Ru. Annotators are also given the option to provide free-form comments. Refer to Appendix F for further details on the annotation set-up.\nAs illustrated in , our human evaluation results confirm that fine-tuning with error annotations enhances overall translation quality (Overall Quality) and effectively resolves errors in the initial translation (Resolve Errors). While this improvement is notably evident in the Zh-En and En-De pairs, for the En-Ru pair, approximately 40/150 annotations lean towards the Disagree category. Some of the feedback from En-Ru annotators who chose to Disagree states that there are cases where the output translation from the fine-tuned model is more precise in the target language, but loses some of the nuance in the source text. Further, feedback from the annotators supports our own observation that post-editing via fine-tuning does not only fix targeted errors in the original translation but also rewrites for naturalness in the target language. They comment that the fine-tuned translation \u201cbetter explains the context\u201d and \u201cflows better in the target language\u201d compared to the original translation, which seems to be directly translated without consideration of the context. We list further comments in Appendix Table 20.", ["2404_07851v1_0", "2404_07851v1_1"]] [60024, "How can we devise a fine-grained explanation-based text generation metric capable of pinpointing concrete error locations, identifying error types, assigning severity labels, and justifying the final score\u2014all simultaneously without relying on human-annotated data? In this paper, we propose INSTRUCTSCORE, a method to learn an explainable text generation metric without using human-annotated ratings. InstructScore provides both a numerical score and a natural language error explanation.
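As an illustration of the agreement statistic reported in the human evaluation above, here is a minimal sketch of computing ordinal Krippendorff's alpha, assuming the third-party krippendorff Python package; the 5-point rating matrix below is fabricated for demonstration and is not the study's data.
\begin{verbatim}
import numpy as np
import krippendorff  # pip install krippendorff (third-party package, an assumption)

# Rows = annotators, columns = rated instances; values are 1-5 Likert scores.
# Use np.nan where an annotator skipped an instance (none here).
ratings = np.array([
    [4, 5, 2, 3, 4],
    [4, 4, 2, 3, 5],
    [5, 4, 1, 3, 4],
], dtype=float)

alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="ordinal")
print(f"Ordinal Krippendorff's alpha: {alpha:.3f}")
\end{verbatim}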
To this end, we first extract latent evaluation knowledge from an instruction-following model, such as GPT-4 (OpenAI, 2023), to construct a synthetic dataset with a predetermined explanation structure. Next, we determine a range of explanation failure modes and devise automated feedback to meta-evaluate error explanations. Finally, we further fine-tune the INSTRUCTSCORE model on self-generated outputs that optimize feedback scores, resulting in diagnostic reports that are better aligned with humans.\nWe leverage GPT-4 to extract representative explainable knowledge that can greatly contribute to the subsequent Exp-Generator learning process. Specifically, we collected raw sentences in the target language from diverse domains and topics via GPT-4 (details are included in Section 5.1 and Appendix A), resulting in data across diverse tasks. This corpus is used as the starting point to inject errors. Then, we prompt GPT-4 to synthesize designated generation errors, as shown in Table 1. For each text, we specify the number of errors, error types, and severity labels, and ask GPT-4 to generate 1) a candidate output with the specified error descriptions and 2) an explanation for this error annotation. If an evaluation task is multi-dimensional, error types will be separately assigned to each dimension (an example is included in the Appendix). Benefiting from the large-scale pre-training process, GPT-4 is able to generate diverse errors and meet the requirements with specified instructions. To avoid the model\u2019s over-reliance on the lexical and structural similarities between the candidate and raw text, we request GPT-4 to rephrase the raw text sentence to construct a pseudo-reference sentence. By specifying error type t, severity label se, and raw text, GPT-4 is able to generate a synthetic error sentence x with annotated error location l and a pseudo reference y with explanation e.\nINSTRUCTSCORE assesses the quality of generated texts based on an explainable diagnostic report. Building upon this report, INSTRUCTSCORE provides an intuitive way to comprehend a model\u2019s generation capability, resulting in easier comparison among different models. In particular, we begin by extracting concise yet representative explainable knowledge from a large-scale instruction-following model, which is then utilized to train our Exp-Generator. After carefully analyzing the diagnostic reports produced by our Exp-Generator, we summarize common failure modes in the diagnostic reports and ask GPT-4 to identify them.
Then we transform GPT-4\u2019s feedback into alignment scores using our predefined criteria. Finally, we select diagnostic reports that have the highest alignment scores, and further fine-tune our Exp-Generator on those self-refined outputs. The overall framework is illustrated in . The quality score s for each candidate y is determined based on the number of errors and their severity labels in the diagnostic report. Minor errors are given a score of \u22121 and major errors are given a score of \u22125. These error penalties are weighted to calculate the final score. Similar to previous practices (Freitag et al, 2021a), our metric identifies up to five errors per sentence.", ["2305_14282v3_0", "2305_14282v3_1"]] [60025, "Impact of selective retrieval. When optimizing the utility model, we add an empty string es in the training process to leverage the knowledge of LLMs to achieve selective retrieval. Here, we conduct some case studies to examine this mechanism. As we can see from Table 3, there are example queries for which the utility model deems retrieval unnecessary; in total, the utility model ranks the empty string es higher than the other documents for 20.9% of queries. Many of the listed queries involve commonsense knowledge that has already been memorized in LLMs\u2019 parameters, so we can simply call an LLM for the required answer instead of retrieving. This illustrates that, in retrieval-augmented generation, despite the additional knowledge an external corpus introduces, the inherent knowledge of LLMs is worth investigating when deploying retrieval-based techniques, and we believe that our work has provided a straightforward and plausible solution. However, this design still has certain limitations, i.e., the LLM used for the end task needs to be the same as or stronger than the one\nIn this work, we argue that similarity is not always the \u201cpanacea\u201d for retrieval-augmented generation and that relying entirely on similarity would sometimes degrade the performance. As is shown in the upper part of , when a user types in the query \u201cTell me about author George RR Martin\u201d, a similarity-driven retrieval system would rank the documents in a given corpus according to similarity metrics, i.e., semantic relevance or a TFIDF-based metric (Robertson and Zaragoza, 2009). Even though a retrieved document barely provides useful information, e.g., \u201cGeorge RR Martin is an author\u201d, it would rank higher due to its high similarity score, while the document that states the publications \u201cThe Song of Ice and Fire\u201d of George RR Martin, with higher information gain, would rank lower due to its low similarity score. Besides, given the fact that the retrieved documents are often more than one, using them in isolation due to the context limitation of LLMs (Shi et al, 2023) or simply aggregating the top-k documents without considering the relationships between them makes it difficult to capture the commonalities and characteristics among them, and may even confuse LLMs due to excessive text length, thus incurring information loss and probably performance degradation (Mallen et al, 2023). Passage window size matters in utility model performance! Since the utility model plays a key role in bringing the supervision of LLMs into the passage selection process, we take a closer look at how the passage window size \u2225D\u2022\u2225 would influence the final performance by directly incorporating the utility model trained under different settings into the final task.
Due to the space limit, we average the metrics in four datasets and present the result in and find that the model performance improves as the window size grows, which demonstrates that the growing passage window size endows more LLMs\u2019 powers for distinguishing passage significance among diverse inputs to the utility model thus improving performance on downstream tasks. However, due to the computational burden (the model training time grows linearly with the passage window size) that a large passage window size imposes during training, there is need to balance the trade-off between performance and cost.", ["2405_19893v1_0", "2405_19893v1_1"]] [60026, "Recent advancements in neural audio codecs such as SpeechTokenizer [107], Encodec [22] and VALL-E [83] have provided evidence of the advantages of multi-layer residual vector quantizers (RVQs) in accurately representing speech with discrete speech tokens for high-quality and efficient audio transmission, regardless of sound type or language. 2. We aim to develop the neural codec architecture into an effective decoupling model that separates mixed speech tokens into standalone semantic and acoustic tokens. As illustrated in , our proposed decoupling model based on the codec architecture (CDM) comprises three core components: an encoderdecoder architecture, a HuBERT-equipped RVQs module, and a discriminator. The encoder-decoder\u2019s primary function of precisely reconstructing the original audio compels the encoder to extract the key features from speech signals. The HuBERT-equipped RVQs further decouple these features and hierarchically quantize them into discrete semantic and acoustic tokens. The discriminator enforces that the encoder and RVQs optimize their learned representations,aiming for comprehensive retention of the original audio\u2019s details. Through this structure, we can achieve effective decoupling of speech signals. The decoupled semantic and acoustic audio samples can be found on our demo page [1]. \nThe bottleneck layer aims to reduce the dimensions of acoustic tokens A from $R^{7\ud835\udc36\u00d7\ud835\udc47\ud835\udc5b}$ to a more compact space $R^{\ud835\udc36\u00d7\ud835\udc47\ud835\udc5b}$ by using 1D convolution and batch normalization. This layer serves a dual purpose: first, it enhances computational efficiency and reduces trainable parameters, facilitating subsequent layers to operate on a compact representation; second, it acts as a regularizer.By randomly rearranging the elements across the temporal dimension \ud835\udc47\ud835\udc5b, this layer nullifies speech comprehension that is highly dependent on the temporal order of phonemes and words [51]. We empirically set a shuffling window of 1 second, corresponding to 50 frames, to obscure word-level intelligibility (as each token representation is extracted from a 20ms waveform).In addition to decoupling speech information, the shuffle layer serves to augment content protection by further scrambling the\ncondensed acoustic tokens $A^\ud835\udc4f$ . As shown in , By randomly rearranging the elements across the temporal dimension \ud835\udc47\ud835\udc5b, this layer nullifies speech comprehension that is highly dependent on the temporal order of phonemes and words [51]. We empirically set a shuffling window of 1 second, corresponding to 50 frames, to obscure word-level intelligibility (as each token representation is extracted from a 20ms waveform). 
Thereby, the likelihood of\nattackers deciphering and correcting these sequences is extremely low, given the sheer number of possible permutations for a 4-second audio ($50!^{4}$ , approximately $8.56 \u00d7 10^{257}$, details are discussed in \u00a78). Our experiments also confirm the dual content protection by decoupling and shuffling, thwarting the advanced ASR techniques and human auditory analysis.\n", ["2409_09272v1_0", "2409_09272v1_1"]] [60027, "Large-scale SSL enables audio encoders to capture rich information from speech. Inspired by the BERT-based speech SSL framework [27, 2, 8, 10], we developed our audio encoder, a conformerbased [22] model that captures both global and local structures stored in audio signals. In this work, we primarily focus on speech signal. Since it is trained on large-scale unsupervised data, we term the trained audio encoder as LUISE, which represents Large-scale Unsupervised Iterative Speech Encoder\nAdhering to the concept of BERT [14], LUISE adopts a learning paradigm of masked language prediction. The training procedure is illustrated in . Specifically, the sequence of melfilterbank feature extracted from the waveform is first input to the tokenizer module to obtain discrete labels for each frame. Then, the training of LUISE is conducted using the cross-entropy criterion, with the loss function calculated only for the masked frames. After training, the softmax layer is removed, and the encoder part of LUISE is used for subsequent supervised fine-tuning.\nWe utilize an iterative fixed tokenizer method to obtain the corresponding discrete labels for each frame. In the first iteration, we apply a random-projection layer [10] to project speech feature to a randomly initialized codebook, and map them to discrete labels through finding the nearest vector in the codebook. In the second iteration, we perform K-means clustering on the representations of an intermediate layer of the previously trained encoder to obtain a new codebook. The discrete labels are then obtained by finding the closest vector in the new codebook to the representation from the same intermediate layer. Based on the aforementioned motivation, we propose Seed-ASR, a large-scale speech recognition model built on the framework of audio conditioned LLM (AcLLM). By inputting encoded continuous speech representations together with a task instruction and relevant contexts into a pretrained LLM,Seed-ASR can leverage the rich text knowledge and the reasoning ability of the LLM to generate the corresponding text transcription of speech. The overall framework is shown in Audio is a different modality from text. To enable LLMs better understand diverse speech inputs, we adopt the concept of large-scale pretraining in LLMs. Specifically, we construct an audio encoder with nearly 2 billion parameters and conduct self-supervised learning (SSL) on tens of millions of hours of data. The pre-trained audio encoder gains strong speech representation ability, which facilitates rapid convergence during supervised fine-tuning (SFT). After the large-scale SSL stage, we implement a simple and effective stage-wise training recipe within the framework of AcLLM (shown in []). In the stage of SFT, we establish the mapping relationship between speech and text by training on a large amount of speech-text pairs. In the stage of context SFT, we use a relatively small amount of context-speech-text triples to elicit the LLM\u2019s ability to capture speech-relevant clues from context. 
These triples can be customized according to specific scenarios. In the stage of reinforcement learning, we apply the training criteria of MWER [37] and some improvements to further strengthen the ability of our model. In the following subsections, we will introduce these methods in more detail.", ["2407_04675v2_0", "2407_04675v2_1", "2407_04675v2_2"]] [60028, "As shown in , the architecture for investigation is simply a speech encoder with an LLM. For each sample, the given prompt for transcribing (i.e., transcribe the speech), the speech utterance, and the corresponding transcript during training are denoted as $P$, $S$ and $T$, respectively. We tokenize the prompt and the transcript using the tokenizer and embedding matrix of the LLM to obtain feature vector sequences $E_p$ and $E_t$ as: $E_p = Embedding(Tokenizer(P)), (1)$ $E_t = Embedding(Tokenizer(T)). (2)$ For the input audio $S$, we first extract features by passing the audio through a speech encoder to obtain encoder output $H_s$, denoted as: $H_s = Encoder(S). (3)$ Then, $H_s$ is passed to a projector and further goes through a linear layer to obtain a feature sequence $E_s$ with the same dimensionality as the input to the LLM, denoted as: $E_s = Linear(Projector(H_s)), (4)$ where the dimension of the feature output by the projector is the same as that of the speech encoder, and the linear layer is responsible for mapping the feature dimension to the embedding dimension of the LLM. Next, we concatenate $E_s$, $E_p$, and $E_t$ to obtain the final feature and pass it to the LLM to obtain the output transcript $Y$, denoted as: $Y = LLM(Regulation(E_p, E_s, E_t)). (5)$ \nWe compare the effects of two types of projectors, Qformer and Transformer. In our experiment, we fix the encoder as HuBERT and the LLM as Atom-7B, and only unfreeze the projector module for one epoch of training to compare the different effects of Qformer and Transformer, respectively. Although we expect Qformer to perform better than the Transformer, similar to BLIP [26] and SALMONN [19], the results in show otherwise. We hypothesize this is because Qformer was originally designed to accommodate the unique data structures in image processing, and should be redesigned to better adapt to speech for future research.", ["2405_02132v3_0", "2405_02132v3_1"]] [60029, "Similarly to SpeechTokenizer (Zhang et al, 2024b), we distill semantic information from a self-supervised model (WavLM (Chen et al, 2022) in our case) into the first level of the RVQ. WavLM projects a 16kHz waveform into 1024-dimensional embeddings sampled at 50Hz, while Mimi projects a 24kHz waveform into 512-dimensional embeddings at 12.5Hz. During training, we thus produce targets for distillation by downsampling the input waveform to 16kHz before computing WavLM embeddings, followed by average pooling with a stride of 4 and a kernel size of 8, to reach 12.5 Hz. Interestingly, we observed that it was critical for performance to perform this average pooling in a non-causal way, which is compatible with streaming inference as these embeddings are only used during training. We apply a linear projection with an output dimension of 1024 to the output of the first RVQ level, parallel to the actual embedding going into the decoder. We then compute a cosine distance between the output of the first quantizer and the transformed WavLM embeddings, to perform distillation. Table 3 shows that this distillation loss conflicts with reconstruction and adversarial losses targeting quality.
Indeed, while distillation significantly improves the phonetic discriminability of the first quantizer (as measured by ABX (Schatz et al, 2013)), it also affects audio quality negatively. We hypothesize that this is due to distilling semantic information into the first level of a single RVQ: As higher-order quantizers operate on the residual of the first one, the latter needs to trade audio quality for phonetic discriminability. We address this issue by proposing a split RVQ. Rather than a single RVQ with 8 levels, we distill semantic information into a plain VQ and apply an RVQ with 7 levels in parallel. We sum their outputs, such that while both can be used for reconstruction, we remove the constraint that acoustic information should be conserved in the residual of the semantic quantizer. illustrates this architecture and Table 3 shows that this solution provides a better semantic-acoustic trade-off overall. Moshi is a multi-stream speech-to-speech Transformer model, which allows for full-duplex spoken dialogue with a user thanks to an innovative architecture summarized in . Moshi is built on top of Helium, a text LLM which we build from scratch (Section 3.2), relying on high-quality text data to provide strong reasoning abilities to the model. We also propose Inner Monologue (Section 3.4.4), a training and inference procedure in which we jointly model text and audio tokens. This allows the model to fully exploit the knowledge imparted from the text modality, while remaining a speech-to-speech system. To enable real-time dialogue, we also design Moshi as a multi-stream architecture from the get-go (Section 3.4.3): The model is able to both speak and listen to the user at the same time, and does not need to explicitly model speaker turns. In addition, to capture the input user audio and output Moshi\u2019s voice with high quality and in an efficient manner, we propose Mimi (Section 3.3), a neural audio codec combining semantic and acoustic information into a single tokenizer by using residual vector quantization and knowledge distillation. To jointly model the audio streams from Moshi and the user, as well as Moshi\u2019s text tokens, we rely on a Depth Transformer compatible with streaming inference (Sections 3.4.1, 3.4.2).", ["2410_00037v2_0", "2410_00037v2_1"]] [60030, "Evaluation Protocol\nWe now discuss how we evaluate the retrieval performance. The most widely adopted retrieval paradigm is to rank passages in the corpus based on the relevance scores predicted by the retrieval model and retrieve the Top-K candidates to form a ranked list. The performance of the retrieval model is then assessed based on the ranked list using well-defined ranking metrics such as NDCG@K and MAP@K. However, such metrics are not continuous due to their discrete nature and reliance on a cutoff parameter, K. Because the ranking metrics of a ranked list would not change unless the sequence of the passages changes, these ranking metrics are not sensitive to the changes of model outputs in many cases. Also, with the cutoff in ranking metric, a positive passage only contributes to the metric when ranked within the top K results. If it falls beyond K, whether at K+1 or further, it has no impact on the metric score. The characteristics of these existing ranking metrics make them unsuitable for the investigation of scaling laws in dense retrieval. To solve these problems, we propose to utilize a continuous metric that sensitively reflects the overall retrieval capability of the models. 
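A minimal sketch of the split quantizer and the cosine-distance distillation described above; the semantic VQ and acoustic residual VQ are placeholder modules (each assumed to return a quantized tensor and its codes), and all names are illustrative rather than the released Mimi code.
\begin{verbatim}
import torch.nn as nn
import torch.nn.functional as F

class SplitRVQ(nn.Module):
    """Plain VQ for semantic tokens in parallel with a residual VQ for acoustic tokens;
    the two outputs are summed so both contribute to reconstruction."""
    def __init__(self, semantic_vq, acoustic_rvq, dim: int = 512, distill_dim: int = 1024):
        super().__init__()
        self.semantic_vq = semantic_vq            # 1 codebook level
        self.acoustic_rvq = acoustic_rvq          # e.g. 7 residual levels
        self.distill_proj = nn.Linear(dim, distill_dim)   # project to the WavLM embedding size

    def forward(self, x, wavlm_target=None):
        sem, sem_codes = self.semantic_vq(x)
        aco, aco_codes = self.acoustic_rvq(x)     # applied to x itself, not to the semantic residual
        out = sem + aco
        distill_loss = None
        if wavlm_target is not None:              # cosine-distance distillation on the semantic branch
            distill_loss = 1 - F.cosine_similarity(
                self.distill_proj(sem), wavlm_target, dim=-1).mean()
        return out, (sem_codes, aco_codes), distill_loss
\end{verbatim}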
Inspired by the analysis of scaling laws in large language models, which utilize the perplexity of token generations as evaluation metrics, we propose to use the contrastive entropy as our evaluation metric. Formally, for each query-passage pair in the test set, we randomly select a fixed number (256 in this paper) of negative passages and define the contrastive entropy as: \\begin{align} -\\log{\\exp\\left(s(q_i, p_{i}^+;\\theta)\\right) \\over {\\exp\\left(s(q_i, p_{i}^+;\\theta)\\right) + \\sum_{j} \\exp\\left(s(q_i, p_{j}^-;\\theta)\\right)}} \\end{align} We investiagte the correlation between the contrastive entropy and existing ranking metrics. We train multiple dense retrieval models. To efficiently evaluate their retrieval performance, we sample a subset corpus that contains 100,000 passages during evaluation. Figure~\\ shows the contrastive entropy and ranking metrics, including MAP@10, NDCG@10, and Recall@1000. We can see that the correlation between the contrastive entropy and existing ranking metrics is strong and positive. It is close to a linear correlation. Therefore, we believe that using contrastive entropy is an effective measure to assess the overall retrieval ability of models in our study.\nModel Size Scaling\nWe finetune models of various sizes using the human-annotated training pairs. The finetuning is performed on the entire training sets. We do not utilize early stopping and instead report the best test set loss throughout the training process. This is mainly to mitigate the influence of suboptimal early stopping, which could lead to models being underfitted or overfitted.\nFigure~\\ illustrates the contrastive entropy on the test set with respect to model sizes. As shown in the figure, the retrieval performance improves (indicated by a lower test loss) as the model size increases. On the left side of the diagram, red stars represent the official checkpoints of variously sized BERT models, while blue points denote other official variants released concurrently. These variants differ in aspects such as the number of attention heads or feed-forward dimensions. The right diagram, in contrast, only features red stars, as the different shape variants of ERNIE are not publicly available.\nBased on the observation, we propose to fit the scaling law in terms of model sizes as follows:\n\\begin{equation}\n\\label{eq:model_size_scaling_law}\nL(N) = \\left( {A \\over N} \\right)^{\\alpha} + \\delta_N\n\\end{equation}\nwhere $N$ represents the number of non-embedding parameters of the model, and $L(N)$ denotes the model's contrastive entropy on the test set. Parameters $A$, $\\alpha$ and $\\delta_N$ are the coefficients.\nNote that we introduce a parameter $\\delta_N$, which represents a irreducible loss term. It means that a sufficiently large model (setting $N$ to infinity) can only reduces the loss to $\\delta_N$ rather than zero. This irreducible loss is reasonable given the incomplete annotations and subjective understanding of relevance. On one hand, some relevant passages may not be annotated because they are not sucessfully recalled and are outside the annotation pool. On the other hand, relevance may be subjective to different annotators, which results in even imperfect agreement among different human annotators. Consequently, it is hard for models to perfectly agree with human annotations. Therefore, we believe there should be a irreducible term in the scaling law.\nWe employ least squares method to fit the linear curve. 
The coefficients are detailed in Table~\\ref{tab:model_size_param}. The coefficient of determination (R\u00b2) suggests a good fit. Based on these results, we validate that the contrastive entropy follows a power-law scaling in relation to the size of non-embedding parameters.\nSuch discoveries offer new perspectives for future research experiments. For example, given this scaling law, we can initially train smaller models, fit the corresponding scaling curves, and then extrapolate them to predict the performance of larger models. This significantly reduces the cost of conducting experiments directly on larger models and instead offers the opportunity to experiment with different training strategies on smaller models to validate the effectiveness of new approaches.", ["2403_18684v2_0", "2403_18684v2_1"]] [60031, "5. FollowRAG Benchmark\nTo bridge the gap in automatic instruction-following evaluation for RAG systems, we introduce FollowRAG in this section. We provide a detailed introduction from two aspects: Data Construction'' and Evaluation and Statistics''.\n5.1. Dataset Construction\n\\textbf{Instruction Collection \\& Extraction.} FollowRAG aims to assess the model's ability to follow user instructions in complex multi-document contexts. Drawing from general IF datasets like IFEval~\\citep{ifeval} and FollowBench~\\citep{followbench}, we collect and verify definitions and examples of atomic instructions using rules (e.g., code), excluding those irrelevant to RAG scenarios. Ultimately, we identify 22 types of instruction constraints, encompassing language, length, structure, and keywords.\n\\textbf{Instruction Reforming.} We use widely-used question-answering (QA) datasets, such as Natural Questions ~\\citep{NQ}, as the foundation for constructing FollowRAG samples. For a query sampled from the QA datasets, we need to generate a complex instruction containing $n$ atomic instruction constraints (with $n$ ranging from 1 to 4). To enhance the diversity of atomic instruction representations, we employ GPT-4o as the instruction generator. Specifically, given a query, we first sample $n$ instructions from the atomic instruction set and perform conflict detection. Subsequently, with examples as demonstrations, we prompt the LLM to generate a new varied instruction text for each type of atomic instruction, along with parameters for instruction-following evaluation.\n\\textbf{Combination.} Finally, we integrate the retrieved passages, query and atomic instructions to construct the sample input for FollowRAG. To avoid mechanically concatenating the query and instructions in a template-based manner, we prompt supervised model to naturally blend the multiple atomic instructions and the query into a coherent instruction-query paragraph. We then add the top-$K$ document passages retrieved based on the query to the instruction-query paragraph, forming the complete sample input.\n% Finally, we integrate the retrieved passages, query, and atomic instructions to construct the sample input for FollowRAG. To avoid simply concatenating the query and instructions, we prompt the supervised model to blend them into a coherent instruction-query paragraph. We then add the top-$K$ document passages retrieved based on the query to this paragraph to form the complete sample input.\n5.2. 
Evaluation and Statistics\nAfter obtaining the model's output, we evaluate it from two perspectives: instruction following and question answering (QA) under the RAG paradigm:\n\\begin{itemize}\n\\item \\textit{\\textbf{Instruction Following:}} Utilizing the verifiable nature of our atomic instructions and following the IFEval approach, we automate the verification of the model\u2019s adherence to each instruction through code validation. We then calculate the average pass rate for each atomic instruction across all samples to determine the instruction-following score in FollowRAG.\n\\item \\textit{\\textbf{RAG:}} Under new instruction constraints, the model's target output differs from the gold answers in the original QA dataset, rendering traditional metrics like Exact-Match ineffective. To address this, we use the original gold answers as a reference and utilize GPT-4o to evaluate whether the model's outputs correctly address the questions. The scoring criteria are as follows: Completely correct (1 point), Partially correct (0.5 points), Completely incorrect (0 points). The average score of all samples is taken as the RAG score for FollowRAG.\n\\end{itemize}\n% \\subsection{Statistics}\nAs detailed in Figure~\\, FollowRAG is the first instruction-following evaluation dataset for the RAG scenario, comprising 2.8K samples and covering 22 fine-grained atomic instructions across 6 categories. The queries in FollowRAG are sourced from 4 QA datasets across 3 types: 1) Open-Domain QA: \\textbf{Natural Questions (NQ)}~\\citep{NQ} and \\textbf{TriviaQA (TQA)}~\\citep{TriviaQA}; 2) Multi-Hop QA: \\textbf{HotpotQA (HQA)}~\\citep{HotpotQA}; and 3) Knowledge Base QA: \\textbf{WebQuestionsSP (WebQSP)}~\\citep{webqsp}. To further construct varying levels of instruction-following difficulty, FollowRAG includes 0.9K samples with one and two atomic instructions, as well as 0.5K complex multi-instruction samples containing 3 and 4 atomic instructions, respectively.\n6.4. Quantitative Analysis\n\\textbf{Ablation Study.} To examine the effects of various components in VIF-RAG, we conduct an ablation study in Table \\ref{tab:ablation}. The term \"w/o\" indicates versions where specific components are removed. Our key observations are:\n\\begin{itemize}\n\\item Removing any component from VIF-RAG results in decreased performance, indicating that all components, such as the complex instruction composition strategy and quality verification design, are crucial to its effectiveness.\n\\item The largest performance decline in FollowRAG is observed when executor verification is removed. This underscores the critical role of automated instruction-response validation in improving synthetic data quality and confirms the advantage of using LLMs to oversee instruction-following abilities through other core skills like coding.\n\\item Surprisingly, the consistency verification proves beneficial in preserving RAG capabilities. It effectively filters out samples with high-level semantic conflicts between instructions and queries, reducing noise in IF tasks and maintaining RAG performance integrity.\n\\end{itemize}\n\\textbf{Scaling Analysis.} To explore the impact of retrieved document quantity on instruction-following performance in RAG scenarios, we refer to Table \\. For the baseline models (SFT versions), instruction-following capability declines as the number of passages increases. Specifically, performance drops sharply by over 6\\% when the document quantity in FollowRAG increases from 0 to 1.
Further increasing the number to 10 leads to a significant performance decline, with Qwen-14B-SFT experiencing a drop of over 10\\%. This indicates that integrating knowledge through retrieval-augmented techniques challenges the instruction-following abilities of existing models.\nIn contrast, VIF-RAG shows a minor performance drop ($<$3\\%) when encountering the first document. As the number of documents increases to 10, VIF-RAG\u2019s performance remains relatively stable, demonstrating its robustness.\n\\textbf{Instruction Difficulty Analysis.} To explore the effect of different instruction quantities (i.e., instruction-following difficulty) on model performance in RAG scenarios, we evaluate VIF-RAG and various baseline models on the FollowRAG benchmark, using test sets with 1, 2, and 3 instructions. As shown in Figure \\, as the number of instructions increases, all models generally show a decline in instruction-following capability, but VIF-RAG consistently outperforms the rest. Notably, even with 3 instructions present simultaneously, VIF-RAG still demonstrates over a 5\\% IF prompt (strict acc.), further validating its superior capability in handling complex instruction-following tasks in RAG scenarios.", ["2410_09584v1_0", "2410_09584v1_2", "2410_09584v1_1"]] [60032, "Methodology\nIn this section, we first recap the typical RAG pipeline (Sec.~\\ref{sec:method:preliminary}), then present our VisRAG framework (Sec.~\\ref{sec:method:ours}) and the construction of our training and evaluation data (Sec.~\\ref{sec:method:data}).\nPreliminary: Retrieval-augmented Generation\n\\label{sec:method:preliminary}\nA typical retrieval-augmented generation (RAG) pipeline consists of a retriever and a generator, both built on large language models (LLMs)\\footnote{In many cases, the retriever uses language models smaller than 1B parameters, which may not be considered ``large'', but we use the term LLM for simplicity.}.\nThis pipeline operates on a knowledge corpus $\\mathcal{D}$, which is processed into units for retrieval and generation, denoted as $\\mathcal{D} = \\{d_1, \\ldots, d_n\\}$, where $n$ is the number of retrieval units.\nGiven a text query $q$ and the retrieval corpus $\\mathcal{D}$, the retriever functions as $\\mathcal{R}:(q, \\mathcal{D}) \\rightarrow \\mathcal{D}{\\mathcal{R}}$, taking $q$ and $\\mathcal{D}$ as inputs and producing a candidate set $\\mathcal{D}{\\mathcal{R}} \\subset \\mathcal{D}$.\nTo enable efficient search, the units in the knowledge corpus $\\mathcal{D}$ are pre-encoded into embeddings.\nDuring RAG pipeline inference, approximate nearest neighbor (ANN) search is applied to retrieve $\\mathcal{D}{\\mathcal{R}}$, which serves as the knowledge source for generation.\nThe generation process can be defined as a function $\\mathcal{G}:(q, \\mathcal{D}{\\mathcal{R}}) \\rightarrow a$, where $a$ represents the answer and $\\mathcal{G}$ denotes the LLM generator.\nThis is achieved by prompting the LLM with the query and the retrieved units $\\mathcal{D}{\\mathcal{R}}$ to generate an answer.\nAs shown in Figure~\\ (left), traditional RAG frameworks (TextRAG) typically utilize text-based units for retrieval and generation.\nHowever, in real-world scenarios, data often appear in complex, multi-modal documents, requiring an additional parsing step to obtain text.\nIn this paper, we propose to use the \\textit{page} as the fundamental unit for retrieval and generation, which is directly processed by vision language models (VLMs) as an image without further processing during 
retrieval and generation.\nIn subsequent sections, we use the terms ``page'' and ``document'' interchangeably.\nVisRAG: Vision-based Retrieval-augmented Generation\n\\label{sec:method:ours}\nIn this section, we present \\textbf{Vis}ion-based \\textbf{R}etrieval-\\textbf{a}ugmented \\textbf{G}eneration (VisRAG), as shown in Figure~\\ (right).\nIn contrast to traditional RAG frameworks which use text segments for both retrieval and generation, VisRAG leverages the image of the document to preserve all information.\n\\subsubsection{Retrieval}\nThe first stage of VisRAG, VisRAG-Ret, aims to retrieve a set of pages from the corpus $\\mathcal{D}$ given $q$.\nWe follow the dual-encoder paradigm in text-based dense retrieval models but employ a VLM rather than an LLM to encode the query and page.\nSpecifically, the query and page are encoded separately as text and image in the VLM, producing a sequence of hidden states.\nTo derive the final embedding, and given that we use generative VLMs with causal attention, we adopt position-weighted mean pooling over the last-layer VLM hidden states, giving higher weights to later tokens:\n\\begin{equation}\n\\small\n\\mathbf{v}=\\sum_{i=1}^S w_i \\mathbf{h}_i,\n\\end{equation}\nwhere $\\mathbf{h}_i$ is the $i$-th hidden state, $S$ is the sequence length, $w_i=\\frac{i}{\\sum_{j=1}^S j}$ is the $i$-th weight, and $\\mathbf{v}$ is the query or page embedding.\nThe similarity score is calculated by the cosine similarity of the query and page embedding.\nVisRAG-Ret is optimized using the InfoNCE loss:\n\\begin{equation}\n\\label{eq:loss}\n\\small\nl(q,d^+,D^-) = -\\log\\frac{\\exp(s(q,d^+)/\\tau)}{\\exp(s(q,d^+)/\\tau)+\\sum_{d^- \\in D^-}\\exp(s(q,d^-)/\\tau)},\n\\end{equation}\nwhere $d^+$ and $D^-$ are the positive document and the negative document set of $q$, respectively, $s(q,d)$ is the similarity score between $q$ and $d$, and $\\tau$ is the temperature.\n\\subsubsection{Generation}\nThe second stage of VisRAG, VisRAG-Gen, focuses on generating the answer according to the user query and retrieved pages using a VLM.\nWe propose the following mechanisms to enable VisRAG-Gen to handle multiple retrieved pages in $\\mathcal{D}_\\mathcal{R}$ for generation.\nThe prompts used for generation are presented in Appendix~\\ref{appendix:generation_prompt}.\n\\paragraph{Page Concatenation.}\nA straightforward approach is to concatenate all pages in $\\mathcal{D}_\\mathcal{R}$ into a single image to accommodate most VLMs that are trained to accept a single image.\nFormally,\n\\begin{equation}\n\\small\na \\xleftarrow{} \\text{VLM-Single}(q,\\text{Concat}(\\{d|d \\in \\mathcal{D}_\\mathcal{R}\\})),\n\\end{equation}\nwhere VLM-Single is a VLM that accepts a single image with a text prompt and Concat is the image concatenation operation.\nIn this paper, we experiment with horizontal concatenation.\n\\paragraph{Weighted Selection.}\nAnother approach is to ask the VLM to generate an answer for every page in the top-$k$, and select the final one with the highest confidence.\nThe final confidence is defined as the weighted generation probability of the answer:\n\\begin{equation}\n\\small\nP(a|q,\\mathcal{D}_\\mathcal{R}) = P(a|q,d) \\cdot \\lambda(q,d),\n\\end{equation}\nwhere $P(a|q,d)$ is calculated as the reciprocal of the perplexity of generating the answer $a$ conditioned on the single document $d$, and $\\lambda(q,d)$ is the normalized retrieval score:\n\\begin{equation}\n\\small\n\\lambda(q,d)=\\frac{e^{s(q,d)}}{\\sum_{d' \\in 
\\mathcal{D}\\mathcal{R}} e^{s(q,d')}}.\n\\end{equation}\n\\paragraph{VLMs Accepting Multiple Images.}\nSome recent VLMs like MiniCPM-V 2.6 and Qwen-VL 2 are designed and trained to accept multiple images as input to perform cross-image reasoning.\nThis capability may be useful for the generation as the required information could be located on a single page from the retrieved document set $\\mathcal{D}\\mathcal{R}$ for single-hop questions or spread across multiple pages for multi-hop questions.\nFormally, we have\n\\begin{equation}\n\\small\na \\xleftarrow{} \\text{VLM-Multi}(q,\\{d|d \\in \\mathcal{D}\\mathcal{R}\\}),\n\\end{equation}\nwhere VLM-Multi is the VLM that accepts multiple images with text prompt.\nTraining Data Efficiency\n\\label{sec:result:data_efficiency}\nAs retrieval acts as the bottleneck in an RAG pipeline, it is crucial to have an effective retrieval component to maintain optimal performance.\nIn this experiment, we study the training data efficiency of VisRAG-Ret by evaluating the performance of VisRAG-Ret trained under different amounts of synthetic training data, i.e. in the out-of-domain setting.\nAs shown in Figure~\\, when only trained on 20k q-d pairs, VisRAG can surpass bge-large (OCR).\nAfter training on 150k pairs, it can further surpass NV-Embed-v2 (OCR), the SOTA 8B-sized text embedding model trained on millions of curated text pairs.\nThis highlights VisRAG-Ret's high training data efficiency and strong generalization capability, as all models are evaluated out-of-domain.\nWhen compared with MiniCPM (OCR), which uses extracted text for training, VisRAG-Ret consistently achieves a performance gain of about 17\\% and exhibits a more stable training process.\nThe results show VisRAG-Ret's potential for further performance improvements by scaling up the training data.\nPerformance on Different Data Subsets\n\\label{sec:result:performance_grouped}\nIn this experiment, we assess the retrieval and generation performance of VisRAG and TextRAG defined in Figure~\\, as well as VisRAG (SigLIP), which replaces the retriever in VisRAG with SigLIP.\nWe report their performance across different data subsets by categorizing queries based on the lengths of their positive documents, measured by the number of tokens of the extracted text.\nDocuments with a higher volume of extracted text may prioritize textual information over visual content.\nAs illustrated in Figure~\\, queries in ArxivQA and InfographicsVQA are divided into equal-sized bins according to the lengths of their relevant documents.\nFor each bin, we calculate and plot the average performance differences between VisRAG and TextRAG, as well as between VisRAG (SigLIP) and TextRAG, to compare how each model performs relative to TextRAG.\nWe observe that, in general, the relative performance of VisRAG and VisRAG (SigLIP) improves as the length of the relevant document decreases.\nThis suggests that models with vision encoders can better understand documents that emphasize visual information.\nHowever, VisRAG (SigLIP) consistently underperforms VisRAG across all data subsets and, in some cases, even performs worse than TextRAG.\nIn contrast, VisRAG consistently outperforms TextRAG, indicating that the underlying language model in VisRAG is crucial for better understanding the semantics conveyed through visual cues.", ["2410_10594v1_0", "2410_10594v1_0", "2410_10594v1_1", "2410_10594v1_3", "2410_10594v1_2"]] [60033, "Introduction\n\\textbf{Background.}\nWith the boom of digital information, billions of user requests 
are produced daily, so recommender systems (RSs) have become an integral part of Internet platforms.\nTo capture users' interests more accurately, RSs have gone through several milestones, such as the logistic regression with hand-crafted features (e.g., FM), the neural networks (e.g., WideDeep, YoutubeNet), the sequential signal (e.g., DIN, SIM), and the multi-hop graph signal (e.g., PinSage, DGRec).\nIn retrospect, these effective methods are based on the collaborative filtering (CF) idea and extend the boundaries of RSs. However, the CF framework also limits them under the case of cold-start and data sparsity problems.\nThe reason is that the CF idea aims to mine the user/item pattern intelligence from data, to discover and recommend high-click candidate items for a user while it is hard to understand the users' fine-grained and multi-aspect interests.\nIn fact, instead of mining user preferences from massive user-item interaction logs, the user always leaves some reviews/comments to explain further his/her feelings about this interaction, which provides an explicit way to understand the users' complex interests in language semantic space.\nAnd the recent effort MoRec claimed that multi-modal information yield better results in sequential recommendation.\n\\textbf{Related work.}\nTo extract valuable content semantic information, the pioneering works are formed as a rating prediction task: for a user-item pair in test set, give the historical user/item contents in training set (e.g., reviews), then predict their possible interaction rating.\nIn early years, the DeepCoNN employed two convolutional neural networks (CNNs) towers to aggregate user/item content tokens individually to measure their dot-similarity, and the D-Attn further extended DeepCoNN by introducing local and global attention mechanism to replace CNNs to aggregate tokens.\nFollowing the D-Attn, the ALFM and ANR focused on extracting the fine-grained multi-aspect semantic information and assigning different weights for aspects.\nThe recent progress is RGCL, which used BERT to generate the user-item content scores and then leveraged them as user-item graph edge weights to conduct a multi-hop graph neural network to make prediction.\n\\textbf{Motivation.}\nAlthough these methods raised model ability with content information, they ignore the following problems:\n\\begin{itemize}[leftmargin=,align=left]\n\\item \\textit{Transfer semantic knowledge across domains}: Actually, RS needs to serve several domains simultaneously, such as electronics, clothing, books, etc.\nSince different domains always express different aspects of users' interests (e.g., food is delicious, price is appropriate, etc.), the previous methods need to re-tune their semantic component for different domains, which is time-consuming.\n\\item \\textit{Enhancing the correlation between content and ID}:\nThe user-item content and ID-interaction information can be seen as two different modalities connected by users.\nNevertheless, previous methods focus on utilizing separate components to model the two corresponding content/ID spaces while ignoring how to exploit the correlation and align them in a unified space.\n\\end{itemize}\n\\textbf{Our Work.}\nTo alleviate the above problems, we propose \\textbf{LoID}, a LLM-based model for transferring semantic knowledge across domains based on LoRA, and aligning content/ID information with contrastive objectives.\nIt mainly includes two steps:\n\\begin{enumerate}[leftmargin=,align=left]\n\\item \\textit{Pretraining 
semantic plugin'}: On the one hand, the ideal semantic information should act as a universal role to support all domains. On the other hand, different domains have their main aspects of semantic information that are related to recommendation. We borrow the LoRA strategy idea to train a small set of parameters for each domain, which could serve as plugins. Then we fused them by DARE without further re-training. \\\\item \\\\textit{Aligning the content/ID feature spaces}: As discussed before, the content and ID information can be seen as two modalities of user-item interaction. To minimize their gap, we introduce the contrastive idea of maximizing content/ID features' mutual information to align their feature spaces. \\\\end{enumerate} Finally, to validate our LoID effectiveness, we extensively test LoID under 11 different domain datasets to show its superior ability. \\\\textbf{Contributions.} Our contributions are summarized as follows: \\\\begin{itemize}[leftmargin=*,align=left] \\\\item We give a plugin' idea to transfer semantic knowledge, which sheds light on building a new paradigm for recommendation\\footnote{Codes at \\url{https://github.com/cjx96/LoID}.}.\n\\item We devise a novel content/ID feature alignment objective.\n\\item We conducted detailed analyses with SOTA methods and LLMs.\n\\end{itemize}\nMethodology\nOverview\nThe architecture of our method LoID is illustrated in Figure~\\.\nIn part (a), we train the LoRA parameters of source domains as \"plug-ins\" to enhance target domain prediction without further re-training.\nIn part (b), we extract historical contents of user/item to obtain user/item semantic representation, and then align the ID and semantic to make target domain rating prediction, \\textbf{note that the source domain LoRA plugin is an optional choice}.", ["2404_00236v2_0"]] [60034, "Fact-aware Capability Control\nThe factual similarity threshold in Equation \\ref{eq:radgraph_similarity} plays a critical role in controlling the fact-awareness of our multimodal retriever.\nWe examine the performance of FactMM-RAG under different thresholds, as shown in Figure \\ref{fig:hyperparameter_search}.\nNot only utilizing F1RadGraph thresholds, we also employ F1CheXbert to curate additional thresholds from the report's diagnostic labels to mine report pairs.\\\\\n\\newline\nUnder the same F1CheXbert threshold for mining report pairs, we observe that an increase in the F1RadGraph threshold correlates with an improvement in factual performance.\nHowever, adopting stricter thresholds for identifying report pairs does not yield further improvements and reaches saturation.\nAfter calculating the average number of report pairs per query, we find that high thresholds can exclude many relevant report pairs, as shown in Figure \\. 
This exclusion results in the potential loss of factually useful pairs, thereby hindering the training of our multimodal retriever driven by additional factual medical knowledge.\\\\\n\\newline\nRather than relying on diagnostic labels from CheXbert to identify high-quality report pairs, Figure \\ demonstrates that the F1RadGraph threshold alone can also effectively mine factual report pairs for training our multimodal retriever.\nAs the F1RadGraph threshold increases, FactMM-RAG even matches the performance under high threshold settings in Figure \\.\nThis signifies that employing our training strategy with curated factual query-report pairs still imposes useful supervision signals without relying on explicit diagnostic label guidance.", ["2407_15268v1_0", "2407_15268v1_1", "2407_15268v1_2"]] [60035, "Slide-LoRA: Enhancing Image and Text Generation Consistency\nTo harmonize multi-modal generation tasks in a single model, the parameter space would be optimized for inconsistent or conflict training objectives, as stated before. We propose Slide-LoRA (which is shaped like a ``slide\" between different LoRA experts as shown in Figure \\ref{algorithm}), a novel module that can be conveniently inserted into Transformer layers as Low-Rank Adaptation (LoRA) and introduces limited parameter increase. As such, Slide-LoRA spontaneously processes text generation and image generation in separate parameter spaces, thus relieving the inconsistent training problem.\nAs shown in Figure \\, Slide-LoRA is composed of a gating network $\\mathcal{G}$ and three LoRA modules $(\\boldsymbol{R}^{T}, \\boldsymbol{R}^{I}, \\boldsymbol{R}^{S})$. $\\boldsymbol{R}^{T}$ and $\\boldsymbol{R}^{I}$ serve as the separate parameter space for text and image generation, respectively, while $\\boldsymbol{R}^{S}$ aims to learn the knowledge shared by both text and image generation. Specifically, Given the input token sequence $\\boldsymbol{x} \\in \\mathbb{R}^{L \\times D}$, the gating network $\\mathcal{G}$ (concatenation of two linear layers in specific) determines whether the processing of the input token sequence requires knowledge of text generation or image generation, and produces a scalar $\\gamma = \\mathcal{G}(\\boldsymbol{x}) \\in [0, 1]$. The output of the Slide-LoRA layer can be formulated as\n\\begin{equation}\n\\boldsymbol{O} = \\frac{1}{2}\\cdot\\{[\\gamma \\geq 0.5] \\cdot \\boldsymbol{R}^{T}(\\boldsymbol{x}) + [\\gamma < 0.5] \\cdot \\boldsymbol{R}^{I}(\\boldsymbol{x}) + \\boldsymbol{R}^{S}(\\boldsymbol{x})\\},\n\\end{equation}\nwhere $[\\cdot]$ equals $1$ if the condition inside is true and $0$ otherwise. Slide-LoRA incorporates task-specific and task-shared knowledge from input tokens, thus separating the inconsistent training objective and learning the shared knowledge of text and image generation.\n\\subsection{Multi-Modal Pre-Training and Comprehensive Fine-Tuning}\nTextHarmony training process consists of two stages. In the multi-modal pre-training stage, TextHarmony is trained on text-rich image-text corpus and learns to produce multi-modal outputs. In the comprehensive fine-tuning stage, we concurrently cultivate the text and image generation capabilities of TextHarmony by training on a series of text-centric tasks.\nStage 1: Multi-Modal Pre-Training\nTextHarmony is pre-trained based on the pre-training weight of MM-Interleaved , with extra text-rich datasets including MARIO-LAION and DocStruct4M. MARIO-LAION contains 9M web images with brief captions and the according OCR results. 
DocStruct4M consists of 2M documents and 1M natural images with text-oriented structure annotations. We use MARIO-LAION for both text and image generation ({\\it, i.e.}, either predict the caption of the image or generate the image based on the caption), and we use DocStruct4M for text generation only. In this stage, we freeze the vision encoder and the LLM, training only the Q-Former and the image decoder to obtain basic image understanding and generation capabilities.\nStage 2: Comprehensive Fine-Tuning\nWe integrate various text-centric datasets and employ uniform instructions for all tasks. In this stage, the vision encoder, Q-former, image decoder, and the proposed Slide-LoRA are trained to enhance the multi-modal generation and human-instruction-following capabilities of TextHarmony.\n\\textbf{Visual Text Generation.} In this task, TextHarmony generates images according to the text description and is required to render accurate and coherent text. Although MARIO-LAION contains captions of text-rich images, the description is oversimplified and lacks concentration on the textual elements within the image. To this end, we sample 100K images from MARIO-LAION and generate detailed captions about them, termed DetailedTextCaps-100K. The captions focus on both visual and textual elements in the images. This is achieved by prompting Gemini Pro, a pioneer multi-modal large language model, to generate detailed descriptions based on the sampled image and the OCR results. As shown in Figure \\, the image description from DetailedTextCaps-100K is more comprehensive compared with MARIO-LAION and can better depict the textual elements in the image.\n\\textbf{Visual Text Editing.} In this task, TextHarmony substitutes or renders text in the given location of the image and keeps the background consistent. We randomly mask the image with the help of MARIO-LAION's OCR results and fine-tune TextHarmony in a self-supervised manner.\n\\textbf{Visual Text Comprehension.} We employ the training set collected by Monkey for the text-centric VQA fine-tuning. The training set involves 1.4M QA pairs and covers various text-rich scenarios.\n\\textbf{Visual Text Perception.} For the basic OCR capabilities, we randomly sample 1M images from MARIO-LAION and leverage the OCR annotations.", ["2407_16364v2_0", "2407_16364v2_1"]] [60036, "Overreliance on Unimodal Priors\nOverreliance on unimodal priors is a key factor contributing to hallucinations in LMMs.\nThis issue arises when the model over-relies on the knowledge learned from one modality during training, rather than integrating knowledge of all available modalities.\nIn such cases, the model defaults to strong unimodal priors learned during training, leading to outputs that follow familiar unimodal patterns even when those patterns are not supported by the multimodal input.\nFollowing the general issue of overreliance on unimodal priors, we categorize this into three distinct types: \\textbf{Language Dominance}, \\textbf{Visual Dominance}, and \\textbf{Audio Dominance}. 
Each form of dominance presents unique challenges for LMM performance and contributes to hallucinations in different ways.\n\\noindent\\textbf{Language Dominance}, also know as language biases~\\citep{rohrbach2018object,leng2024mitigating,guan2024hallusionbench,wang2024mitigating}, arises when a model excessively depends on pre-trained large language models (LLMs), generating responses that adhere to linguistic patterns or prior knowledge from large language corpora, even when visual or audio inputs provide contradictory information. This issue is particularly prevalent in LMMs that integrate LLMs as their decoder base. These LLMs~\\cite{chiang2023vicuna,jiang2023mistral,yang2024qwen2}, due to their proficiency in capturing linguistic structures and semantic relationships, often dominate the decision-making process, overshadowing contributions from visual or audio modalities.\nAs illustrated in Fig.~\\, a video depicts finger skateboarding with shoes on fingers. When asked by the language-biased question \u201cDid you see shoes worn on feet?\u201d\u2014reflecting commonsense event that follows linguistic priors\u2014LMMs respond \u201cyes,\u201d contradicting the actual content and inducing hallucination. This demonstrates LMMs\u2019 tendency to rely on language priors over factual multimodal inputs.\n\\noindent\\textbf{Visual Dominance} occurs when a model over-relies on visual information, underutilizing or disregarding linguistic and auditory cues. In such cases, the model-generated outputs are heavily influenced by visual context, often neglecting important information from the other modalities.\nAs illustrated in Fig.~\\, a video depicts a person planning a woodworking project with a hammer in sight, while the audio track contains only the person speaking and bird chirping. Despite this, advanced LMMs may over-rely on the visual presence of the \u201chammer\u201d and incorrectly infer a \u201chammer hitting\u201d sound, ignoring the actual audio content where no such sound is present.\n\\noindent\\textbf{Audio Dominance} arises when a model excessively relies on auditory input, disregarding visual or linguistic information.\nAs illustrated in Fig.~\\, a video captures a person recording a village view through a window, showing dark clouds. The audio track contains evident thunderstorm sounds, but no lightning is visible. Despite this, LMMs may over-rely on the audio cues, hallucinating that lightning is visible in the scene, thereby disregarding the actual visual content.", ["2410_12787v1_0", "2410_12787v1_1", "2410_12787v1_2"]] [60037, "Literature database construction \\label{sec:liter-data-constrct}\nJust like human researchers, reading other literature and drawing inspirations from them is an important process for LLMs to generate valuable ideas.\nHowever, online literature reading is a very time-consuming process, so we collect a literature database in advance for the following literature retrieval and idea proposal process.\nTo be specific, we collect papers published in ICLR, NeurIPS, ICML, ACL, NAACL, and EMNLP in past ten years, yielding a database with 48,895 papers. For each paper, we parse the PDF file and extract its title, abstract, introduction, method, and references sections. 
Then, as shown in Figure~\\, given an LLM $f$, we prompt it to read and summarize the paper as:\n\\begin{equation}\n\\begin{aligned}\n% (T_b, T_i, T_m) = f(T_p, T_r),\n\\mathbb{E}^{(p)} &= f(\\tau_1, T_a^{(p)}), \\\\\n(T_b^{(p)}, T_s^{(p)}, T_i^{(p)}) &= f(\\tau_2, T_t^{(p)}, T_a^{(p)}, T_n^{(p)}), \\\\\nT_d^{(p)} &= f(\\tau_3, T_m^{(p)}, T_i^{(p)}),\n\\end{aligned}\n\\end{equation}\nwhere $T_t^{(p)}, T_a^{(p)}, T_n^{(p)}, T_m^{(p)}$ are the paper $p$'s title, abstract, introduction, and method sections. $\\mathbb{E}^{(p)}, T_b^{(p)}, T_s^{(p)}, T_i^{(p)}, T_d^{(p)}, T_r^{(p)}$ are the extracted entities, background, summary, main ideas, detailed ideas, and core references, respectively, as shown in Figure~\\.\n$\\tau_i, i \\in \\{1, 2, 3\\}$ represent our designed prompt templates, and the specific prompts are shown in the Appendix~\\ref{apd:pfee}. In practice, we use GLM-4\\footnote{We use the GLM-4 released on May 20th, 2024 (glm4-20240520).} as $f$.\nBesides, ``Core References'' in Figure~\\ means extracting the papers referenced in the introduction and method sections, because we believe these references have the greatest impact on paper $p$ among all references.\nAdditionally, the background, summary, and main ideas are also encoded with Sentence-BERT to obtain their embeddings $e_b^{(p)}, e_s^{(p)}$ and $e_i^{(p)}$, respectively. All extracted information is recorded into our literature database.\nTo retrieve literature faster, we also construct a paper-entity graph in the database that stores all occurrence relationships between papers and entities. As shown in Figure~\\, if an entity $T_{e1}$ appears in a paper $p1$, there will be an edge between the corresponding entity and paper nodes.\nIdea proposal \\label{sec:idea-proposal}\nUpon completion of the literature retrieval, we propose three approaches for generating research paper ideas. In essence, the idea generation process can leverage two types of information: the first is derived from the content of the retrieved papers, which inspires the LLM to generate ideas; the second involves the LLM freely brainstorming to produce new ideas. Based on this principle, we delineate three methods of idea generation that vary in their application of brainstorming.\nAs illustrated in Figure~\\(a), the direct proposal method (SciPIP-A) does not use brainstorming, while the first dual-path proposal method (SciPIP-B), as Figure~\\(b) shows, feeds the user-provided background into two branches. The first branch employs this background for literature retrieval, problem summarization, and idea generation based on the retrieved literature, while the second branch engages in brainstorming solutions directly from the user-provided background. Following the independent generation of ideas in both branches, the outputs are merged and subsequently filtered and refined to yield the final ideas. Similarly, as shown in Figure~\\(c), the second dual-path proposal method (SciPIP-C) follows a process analogous to SciPIP-B, with the key distinction being that the content generated through the LLM's brainstorming is utilized not only for idea generation but also integrated with the user-provided background for entity extraction and other literature retrieval processes. With this method, some keywords arising from the brainstorming also help enhance the effectiveness of literature retrieval. We will provide a detailed exposition of these three methods of idea proposal in the following sections.
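For illustration, a minimal Python sketch of the per-paper summarization and indexing step described above is given below; the prompt wording, the `llm` callable, the Sentence-BERT checkpoint, and the dictionary-based storage are assumptions made for the sketch, not SciPIP's actual templates or database schema.

```python
from sentence_transformers import SentenceTransformer  # stand-in for the Sentence-BERT encoder

encoder = SentenceTransformer("all-MiniLM-L6-v2")       # illustrative model choice
database = {"papers": {}, "graph": {}}                  # papers by id, entity -> set of paper ids

def index_paper(llm, paper: dict) -> dict:
    """Sketch of the per-paper pipeline: three prompted extraction steps (tau_1..tau_3),
    embedding of the extracted summary text, and entity-paper occurrence edges.
    `llm(prompt)` is a placeholder for the deployed model (GLM-4 in the paper);
    prompts below are illustrative, not the paper's actual templates."""
    entities = [e.strip() for e in
                llm(f"[tau_1] List the key entities in this abstract:\n{paper['abstract']}").splitlines()
                if e.strip()]
    summary_text = llm(
        "[tau_2] Give the background, a short summary, and the main ideas.\n"
        f"Title: {paper['title']}\nAbstract: {paper['abstract']}\nIntro: {paper['introduction']}")
    detailed_ideas = llm(
        f"[tau_3] Elaborate the main ideas using the method section:\n{paper['method']}")
    record = {
        "entities": entities,
        "summary_text": summary_text,
        "detailed_ideas": detailed_ideas,
        # A single embedding is used here for brevity; the paper embeds background,
        # summary, and main ideas separately.
        "embedding": encoder.encode(summary_text),
    }
    database["papers"][paper["id"]] = record
    for entity in entities:
        # Occurrence relationship: connect the entity node to this paper node.
        database["graph"].setdefault(entity, set()).add(paper["id"])
    return record
```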
We use GPT-4o\\footnote{We use the GPT-4o released in May 13th, 2024 (gpt-4o-2024-05-13), which has an October 2023 knowledge cutoff.} by default in this section.\nDirect idea proposal method\nAs depicted in Figure~\\(a), in the direct proposal method, we first retrieve papers following the pipeline described in Section~\\ref{sec:liter-retrive}. Then, the user-provided background along with the retrieved papers are utilized to prompt the LLM to summarize the core problem we aim to address and provide justifications. The specific prompts can be found in the Appendix~\\ref{apd:pfee}.\nWith the summarized problem and justifications, the LLM is prompted to generate around 10 initial ideas. In the prompt, both the problem, the justification and the retrieved papers are provided. The LLM is encouraged to generate clear, innovative, valid, and comprehensive ideas. The specific prompts for this step can be also found in the Appendix~\\ref{apd:pfee}.\nThough the prompt has declared, the initially generated ideas may still have shortcomings in terms of novelty or relevance to the problem. To address this, we filter the initial ideas using prompt engineering (prompts are illustrated in the Appendix~\\ref{apd:pfee}), with the primary criterion being that the ideas are generated in response to the given problem. Additionally, the ideas must exhibit a high degree of novelty and feasibility. During this process, each generated idea is evaluated independently, and about half of them will be filtered.\nThen, the LLM is encouraged to further improve the filtered ideas by considering their inter-relationships. That is, the LLM is tasked with considering the compatibility of the ideas, ensuring that it does not generate conflicting or repetitive ideas. Moreover, the LLM is required to generate formulas or algorithms to better elaborate the ideas if needed. The prompt is shown in the Appendix~\\ref{apd:pfee}. Finally, about 3 to 4 refined ideas will be proposed.\nDual-path idea proposal methods\nWe find that the directly generated ideas often rely heavily on the retrieved literature, sometimes closely resembling the methods presented in those papers. They frequently involve transferring approaches from other fields or making minor improvements to existing methods within the same field, resulting in relatively ordinary novelty and rarely yielding breakthrough thinking.\nTherefore, we further propose idea proposers that incorporates brainstorming, encouraging the LLM to produce more novel thoughts. Specifically, brainstorming can play a role in both processes of idea generation. As shown in Figure~\\(b), the SciPIP-B has two paths, where one path follows the direct proposal approach, while the other path uses the LLM to brainstorm possible solutions based on the user-input background, outputting these as ideas. Ultimately, these ideas will be merged with those generated based on the retrieved papers, filtered and refined to produce the final ideas. In this model, the results of brainstorming are independent of the generation based on retrieved papers.\nIn another approach, as shown in Figure~\\(c), brainstorming generates ideas independently while also being utilized in literature retrieval. Specifically, we extract entities from the brainstorming results and incorporate them as part of the entity set in the literature retrieval process. With this method, some keywords arising from the brainstorming will also help enhance the effectiveness of literature retrieval. 
The ideas generated through brainstorming will also be merged with those produced after literature retrieval.", ["2410_23166v1_0", "2410_23166v1_0", "2410_23166v1_0", "2410_23166v1_0", "2410_23166v1_1", "2410_23166v1_1", "2410_23166v1_1", "2410_23166v1_1", "2410_23166v1_1", "2410_23166v1_1"]] [60038, "Knights and Knaves Logical Reasoning Benchmark\n\\label{sec:proposal_dataset}\n\\emph{Knights and Knaves (\\kk)} is a type of logical puzzle where some characters can only answer questions truthfully, and others only falsely. The goal is to infer each character\u2019s truthfulness.\nAn example of the puzzle and its corresponding answer is provided in~\\.\nBased on the \\kk puzzle, we design a \\emph{dynamic} benchmark that supports generating new puzzles and perturbing existing puzzles. Our library automatically solves the \\kk puzzles and generates detailed reasoning steps and solutions\nfor evaluation and training. Moreover, to support measuring memorization, we also provide a procedure to perturb a given puzzle and recompute the new solution after the perturbation.\nSpecifically, our benchmark consists of two modules:\n\\textbf{The Abstract Module} %\nhas four components that can generate and manipulate \\kk puzzles in an abstract form (see \\cref{app:kk-abstract-repr}). (1) The \\textbf{Generator} creates a random \\kk puzzle given a specification $(N,D,W)$ that determines the difficulty level. Specifically, it generates a puzzle with $N$ people, and for each person, a statement that consists of a random tree of maximum width $W$ and depth $D$, where each node is sampled uniformly at random from the following candidate sets.\nThe \\textit{leaf} node can be a claim that a specific person is lying (i.e., knave) or truth-telling (i.e., knight), and the \\textit{branching} node can be \\emph{and}, \\emph{or}, \\emph{not}, \\emph{imply}, and \\emph{equivalence}. (2) The \\textbf{Solver} finds the solution to a given puzzle. In our study, we ignore puzzles with no or more than one solution. So we implement the solver by converting the puzzle to a Boolean satisfiability problem and enumerate all Boolean assignments so that we can easily obtain a list of all valid solutions to filter out unwanted puzzles.\n(3) The \\textbf{Reasoner} generates a step-by-step reasoning procedure that leads to the solution. We design it to mimic the reasoning steps used by humans and some LLMs: instead of enumerating all Boolean assignments, it examines each person sequentially, makes an assumption (knight/knave) and checks if it leads to a contradiction with statements from people with specific identities assumed. If not, it continues to examine the next person; otherwise it will try an alternative assumption or backtrack to a previously examined person (details in \\Cref{app:kk-solver-reasoner}). (4) The \\textbf{Perturber}, given a puzzle, generates a locally perturbed version that is (superficially) similar to the original puzzle, and solvable with the same underlying principle and at a similar difficulty level. The Perturber replaces either an entire statement or a leaf node in a statement with a newly sampled one. The process is rerun until the perturbed puzzle has a unique solution different from the original puzzle, or until a maximum number of attempts is reached. This rarely happens for $N\\geq 3$ people puzzles. 
When it happens we skip the puzzle in our perturbation analysis.\nSee \\cref{tab:example_2ppl_puzzle} for concrete examples.\n\\textbf{The Natural Language Module} has three components that operate in natural language space. (1) The \\textbf{NL-Generator} takes an abstract \\kk puzzle and formats it in natural language. It is template-based, with randomly sampled person names and a random template for making claims, and it uses a few heuristics to convert tree-structured logical statements to natural language. (2) The \\textbf{NL-Reasoner} converts the reasoning steps computed by the abstract Reasoner to the natural language format in a similar manner. See \\c for dataset length distributions. (3) The \\textbf{NL-Perturber} generates perturbed puzzles by keeping the abstract puzzle intact and manipulating the language-level descriptions as follows (See \\Cref{app:kk-nl-perturber}): (i) replace character names with uncommon names; (ii) replace knight/knave with other similar pairs of role names, e.g., saint/sinner; (iii) reorder the statements from the characters; (iv) flip the role name from knight/knave to knave/knight. Note that the flipped role perturbation is somewhat adversarial as it goes against the common intuition that a good character would tell the truth and a bad one would lie, so we include it mostly for reference purposes.\nFine-tuning with 10k 8-people puzzles.\nGiven the significant performance improvement from fine-tuning, a natural question arises: can brute-force fine-tuning on a very large number of puzzles eventually solve the \\kk puzzles, by observing/memorizing a variety of combinations of persons' claims and their corresponding answers?\nWe \\ft \\gptfouromini on $1k$/$10k$ of the most challenging 8-people puzzles for 5 epochs. ~\\\nshows that \\textbf{(1)} $10k$-FT significantly outperforms $1k$-FT across all tasks, reaching $\\sim90\\%$ test accuracy on moderately difficult 4/5-people puzzles. \\textbf{(2)} \\cotft is generally more effective than \\ft with $10k$ samples, likely due to the guidance provided by reasoning steps. \\textbf{(3)} An exception is the 2-people task, where the training and testing distribution gap causes the {\\cotft}ed model to occasionally get stuck in a loop of listing assumptions and contradictions, resulting in long, repetitive responses without reaching a conclusion\\footnote{We observe similar accuracy drop on 2-people task for \\llamathree (see \\c) when it is {\\ft}ed for overly long epochs.\nWe provide more examples and discussions in \\cref{appsubsec:4omini_eval_reason}.}.\n\\textbf{(4)} \\ft with $10k$ puzzles achieves surprisingly high test accuracy on all tasks, e.g., 52\\% on 8-people tasks, where the un-FTed model scores near 0. Notably, the models do not see reasoning steps during training and rely solely on memorizing answers. %\nWe also observe high transferability for $10k$ {\\ft}ed \\llamathree in~\\, e.g., $87\\%$ test accuracy on 3-people puzzles.", ["2410_23123v1_0", "2410_23123v1_3", "2410_23123v1_1", "2410_23123v1_2", "2410_23123v1_2"]] [60039, "Introduction\nAlthough AI has brought transformative changes to various aspects of life, its impact on researchers unfolds in a nuanced manner. On the one hand, AI assists in various research disciplines, such as Social Science \\citep{DBLPNeumanCY23}, Finance \\citep{ai4finance}, Medicine \\citep{rakhimov2022artificial}, GeoScience \\citep{praskievicz2018river}, etc., significantly expediting academic processes. 
However, many of these applications are superficial, often limited to data-driven clustering or classification. On the flip side, the AI era poses challenges for researchers. Despite its ability to streamline some activities, researchers still face demanding, cognitively intensive tasks such as staying current through extensive paper reading, rapidly generating ideas in response to fast-paced advancements, conducting rigorous experiments to substantiate claims, and managing an increasing volume of peer reviews. Then a question looms: \\emph{How effectively can AI assist researchers in tasks that are domain-specific, expertise-demanding, and reasoning-intensive?}\nExisting works proved the promising potential for using LLMs in assisting AI research. \\citet{si2024can} conducted a large-scale human study and found that LLMs can generate creative research ideas. \\citet{lu2024aiscientist} proposed an autonomous agent to handle complicated research workflow and write a whole research paper.\nHowever, most of these works focus on addressing highly subjective problems that require a high degree of expertise, making evaluation laborious and hard to reproduce. This underscores the need for a comprehensive benchmark that rigorously assesses LLMs' capabilities in expertise-intensive research activities.\nTo this end, in this work, we introduce \\textsc{AAAR-1.0}, a novel benchmark that aims to comprehensively assess the LLMs' capacity on expert-level research tasks. As illustrated in Figure~\\, \\textsc{AAAR-1.0}~decomposes four distinct expert-level AI research tasks from the researcher's daily activities, including i) \\textsc{EquationInference}, investigating whether the LLMs can infer the equation correctness based on the paper context; ii) \\textsc{ExperimentDesign}, validating LLMs' ability on designing reliable experiments for a research idea; iii) \\textsc{PaperWeakness}, testing the quality of weaknesses discovered by LLMs from paper drafts; and iv) \\textsc{ReviewCritique}, investigating whether LLMs can identify and explain the deficient/unreliable human-written paper reviews. To ensure data quality, senior AI researchers with extensive domain expertise perform data annotation for \\textsc{AAAR-1.0}, followed by rigorous multi-round data examination and filtering.\nAll four tasks require models to possess strong domain knowledge covering various cutting-edge research findings, as well as expert-level research experience, to the extent that even humans need substantial research accumulation to tackle the tasks we designed.\nBenefiting from the proposed automatic metrics, we conduct extensive experiments across numerous mainstream LLMs, where we find that:\n\\vspace{-0.5em}\n\\begin{itemize}[leftmargin=2em]\n\\itemsep0em\n\\item With a random guess baseline of 25\\%, the performance of most LLMs on \\textsc{EqInfer}~hovers just slightly above chance, with the top models reaching around 60\\%. 
This highlights the difficulty of the task, despite its reliance only on local context reasoning.\n\\item In \\textsc{ExpDesign}, LLM-designed experiments are innovative and more diverse than those by humans; however, many are trivial, lack feasibility, and stray from the original research objectives.\n\\item In \\textsc{PaperWeakness}, LLM-identified weaknesses often lack depth and specificity, making them broadly applicable and less useful for providing feedback on paper drafts.\n\\item In \\textsc{ReviewCritique}, LLMs struggle to effectively identify deficient human reviews, indicating limited usefulness in assisting meta-reviewers in evaluating the quality of individual paper reviews.\n\\end{itemize}\n\\paragraph{Settings.}\nAs different LLMs have distinct context windows, to ensure a fair comparison, we fix the maximum input length for all models. According to the data statistics of Table~\\ref{tab:equations_statistics}, we empirically use 1,000 words for both contexts before and after equations, i.e., 2,000 surrounded words.\n\\paragraph{Main results.}\nTable~\\ref{tab:equation_main_tab} shows the main results. Firstly, the open-source LLMs, especially the Falcon and Gemma, perform unexpectedly disappointing (even worse than random guesses). These screwed scores are mainly due to the poor long-context instruction following ability, where we find some open-source LLMs are confused with the massive input and often copy the LaTeX code from the input. In contrast, closed-source LLMs generally achieve superior accuracy, probably owing to the richer scientific knowledge from the larger model parameters. However, considering the conventional multi-choice QA formulation of~\\textsc{EqInfer}, the recently-released GPT-4o solely gets 43.18, implying the unique challenge of \\textsc{EqInfer}~compared with other scientific QA benchmarks~\\citep{song2023nlpbench}. Notably, with the help of internal CoT, o1 gains stronger performances than GPT-4/GPT-4o, indicating the potential benefits of adopting reasoning for this task.\n\\paragraph{$\\mathcal{Q}$: Do more contexts boost performance?} Table~\\ref{tab:equation_main_tab} unifies the input context lengths to 1,000 tokens for various LLMs. To answer this question, we experiment with long-context LLMs to investigate the impact of the input context lengths. Particularly, we scale the input length (per side) from 100 to 1,500 words. As shown in Figure~\\, for the open-source LLMs (Llama and Qwen), after 300 words length, increasing the input context doesn't help the performance and even significantly drops Qwen's scores. While for the closed-source GPT-4-Turbo and GPT-4o, scaling up input length gradually boosts the performances at the first 1,000 words, but stabilizes afterwards. This is in line with human intuition, i.e., surrounding context is required for the equation inference, as the adjacent context usually provides important information, such as the target algorithm description or the notation definition. However, after exceeding a specific threshold, more context information is not beneficial anymore and even confuses those LLMs with poor long-context handling capacity~\\citep{wang2024leave,liu2024lost}.", ["2410_22394v1_1", "2410_22394v1_0"]] [60040, "We first give an overview of the proposed Code-as-Monitor (\\fname)~(\\cref{subsec: overview}). 
\nThen, we elaborate on the constraint elements in the Constraint Painter, especially constraint-aware segmentation~(\\cref{subsec: constraint element proposal}).\nFinally, we present the Constraint Monitor for real-time detection~(\\cref{subsec: real-time monitor module}).\nWe give the necessary implementation details in \\cref{subsec: implementation details}.\n\\subsection{Overview}\n\\label{subsec: overview}\nThe proposed {\\fname} comprises three key modules: the Constraint Generator, Painter, and Monitor.\nWe focus on long-horizon manipulation task instructions $\\mathcal{L}_{\\mathrm{global}}$ (\\eg, \\textit{``Move the pan with the bread to the stove, and be careful not to let the bread fall out''}), using RGB-D observations $\\mathcal{O}$ from two camera views~(front and top).\nAs shown in \\c, the RGB images~$\\mathcal{O}$, along with the instruction~$\\mathcal{L}_{\\mathrm{global}}$, the previous subgoal~${l}_{\\mathrm{pre}}$, and failure feedback from the Constraint Monitor~$f_{\\mathrm{pre}}$~(\\eg, subgoal success or failure reason), are fed into the Constraint Generator~$\\mathcal{F}_{\\mathrm{VLM}}$~(\\ie, GPT-4o~\\cite{achiam2023gpt}) to generate the next subgoal~${l}_{\\mathrm{next}}$ and associated textual constraints~$\\mathcal{C}$.\nThis process can be formulated as follows:\n\\begin{equation}\n({l}_{\\mathrm{next}}, \\mathcal{C}) = \\mathcal{F}_{\\mathrm{VLM}}(\\mathcal{O}, \\mathcal{L}_{\\mathrm{global}}, {l}_{\\mathrm{pre}}, f_{\\mathrm{pre}}).\n\\end{equation}\nThe generated subgoals, constraints, and segmentation outputs are integrated and refined through manual inspection to produce the multi-granularity dataset, which is combined with LISA's training data to fine-tune our model. More details are in Supp.~\\ref{supsec: implementation details}.\nWe use BridgeData V2~\\cite{walke2023bridgedata} as the image source for our dataset. We need to determine the subgoal of each frame in a trajectory to ascertain the constraint of the current frame. Then, fine-grained part-level masks \\( \\mathcal{M}_{p} \\) and corresponding element type descriptions \\( l_{\\mathrm{e}} \\) are produced, as shown in \\c. However, BridgeData V2 provides instructions only for entire trajectories. Therefore, we sampled pick-and-place data from BridgeData V2 and processed the dataset using external reference information such as the gripper's open/close states.\nUltimately, we sampled $10,181$ trajectories to construct our dataset, which contains a total of $219,356$ images.\nAfter sampling the image data, we used a combination of off-the-shelf foundation models to obtain the ground truth.
\nSpecifically, we input the entire trajectory instruction provided by BridgeData V2 into GPT-4o for decomposition. \nThis process yielded the subgoals of each stage, the corresponding textual constraints, and the object and part associated with each constraint. \n We then employed Grounded SAM~\\cite{ren2024grounded} for instance-level segmentation and Semantic SAM~\\cite{li2023semantic} for part-level segmentation. \n By combining these results, we obtained annotations at multiple granularities. \nFinally, we conducted manual inspections to further improve annotation quality.", ["2412_04455v1_0", "2412_04455v1_1"]] [60041, "To characterize the performance of VPA in correcting the Q-estimations, we leverage Spearman\u2019s rank correlation coefficient, which measures the strength and direction of a monotonic relationship between two ranked variables. The reason is that the relative ranking of Q-values are more important than the absolute values for policy update.\nSpecifically, given a dataset collected by rolling out the offline policy in the environment, we compare the ranking of learned reward/cost Q-values before and after VPA with that of estimated actual return by using Monte Carlo simulations. A large Spearman\u2019s rank correlation coefficient implies that the distribution of learned Q-values is more aligned with the distribution of true Q-values. \nAs shown in Table \\ref{table:Spearman}, it is evident that the coefficient increases significantly after VPA for both reward and cost Q-values, no matter if the offline policy rolls out from a seen state-action pair in the offline dataset or from a randomly selected OOD state-action pair. This clearly demonstrates the effectiveness of VPA in aligning the pretrained Q-functions.\nsubsection{Finetune Phase}\n{Challenge II: Lagrange multiplier mismatch.~}\nConventional value-based online safe RL relies on updating Lagrange multipliers alongside the policy and Q-functions during training, so as to push the overall cost below the limit while striking a right balance between maximizing the reward and minimizing constraint violations. While the policy and Q-functions can benefit from offline pretraining for a warm start, offline safe RL algorithms like CPQ \\citep{xu2022constraints} and BEAR-lag \\citep{ray2019benchmarking} cannot accurately estimate Lagrange multipliers with regularizing strengths matching with the cost of the offline policy, e.g., a small Lagrange multiplier is not power enough to push down the policy cost, while a large multiplier prevents active exploration of high-reward state-action pairs. \nFor instance, in the BallCircle environment, the offline pretrained Lagrange multiplier value obtained using the BEAR-lag algorithm is approximately 1500, whereas during online finetuning, the SAC-lag requires a value of only about 0.65. The gap between these values clearly precludes the direct use of offline pretrained Lagrange multipliers.\nImproper initialization can lead to extensive constraint violations \u00a0or training stagnation, an issue we term as the \\textbf{Lagrange multiplier mismatch}. 
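For illustration, the sketch below shows the generic dual-ascent multiplier update used by Lagrangian safe RL methods and why a poorly initialized multiplier is slow to adapt; the learning rate, the stopping threshold, and the per-episode cost are illustrative numbers only, not the settings of CPQ, BEAR-lag, or SAC-lag.

```python
def dual_ascent_update(lmbda: float, episode_cost: float,
                       cost_limit: float = 20.0, lr: float = 1e-2) -> float:
    """One dual-ascent step: raise lambda when the rollout cost exceeds the limit,
    lower it otherwise, and clip at zero."""
    return max(0.0, lmbda + lr * (episode_cost - cost_limit))

# Illustration of the initialization gap: starting from an offline-pretrained value
# (~1500 in the BallCircle example) takes far more updates to reach a small value
# suitable for online finetuning than starting near zero does. Numbers are illustrative.
for init in (1500.0, 0.65):
    lmbda, steps = init, 0
    while lmbda > 1.0 and steps < 200_000:
        # assume the constraint is roughly satisfied, so cost sits just under the limit
        lmbda = dual_ascent_update(lmbda, episode_cost=19.0)
        steps += 1
    print(f"init={init}: lambda={lmbda:.2f} after {steps} updates")
```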
\nOn the other hand, as VPA promotes active exploration of high-reward state-actions by optimistically estimating rewards and pessimistically estimating costs, it inevitably increases the risk of exploring high-cost state-actions, which in turn amplifies the need for appropriate Lagrange multipliers to quickly reduce the constraint violations.\nFigure \\ shows the online finetuning performance comparison after VPA in two environments between 1) empirically setting a good initial value for the Lagrange multiplier and 2) setting it to zero, where the traditional dual ascent method is used to update the multiplier. It is clear that a good initial multiplier can manage the cost very well, while the policy with a very small initial multiplier value suffers from large constraint violations and takes a much longer time to reduce the cost below the limit. The results also imply that although VPA aligns the distributions of Q-values, it may introduce high costs for online finetuning, which can be addressed with an appropriate initial Lagrange multiplier.\n\\section{Experiments}\nIn this section, we conduct extensive experiments to verify the effectiveness of our approach, aiming to answer the following questions: 1) {\\bf RQ1:} How does our method compare with naive finetuning and other SOTA baselines in both reward and cost? 2) {\\bf RQ2:} How do different components in Marvel affect the performance? Due to the space limit, we delegate the experimental details and some additional results to \\cref{sec:more_exp_} and \\cref{sec:Experimental_Details}.\n\\subsection{Evaluation Setup}\n\\emph{Benchmarks.} We consider the DSRL benchmark \\citep{liu2023datasets} and select \\textbf{ten} environments from the Bullet Safety Gym \\citep{gronauer2022bullet} and Safety Gymnasium \\citep{ji2023safety}: BallRun, BallCircle, CarRun, CarCircle, HalfCheetah, AntCircle, AntRun, DroneCircle, Hopper, and Swimmer (results for the last four are in \\cref{sec:more_exp}). The cost threshold is set to be 20 in these environments.\nAs mentioned earlier in \\cref{preliminary}, we choose CPQ and SAC-lag as the base algorithms in our proposed framework Marvel for offline training and online finetuning, respectively, due to their effectiveness and representativeness.\nAs shown in \\c, Marvel demonstrates better or comparable performance compared to all baselines consistently across all environments, i.e., achieving a higher return while keeping the cost below the threshold.\nIn stark contrast, the naive warm start method proves largely ineffective, often causing performance drops or stagnation during training.\nWithout aligning the Q-estimations, both JSRL and PEX struggle to improve during online learning and fail to control the cost. Besides, PEX also suffers from poor training stability and high variance across different settings.\nWhile SO2 mitigates the inaccuracies of Q-estimations related to O2O RL, it does so only to a limited extent and cannot maintain its performance consistently across different environments, although a PID controller has already been used to boost its performance.
On the other hand, the fact that SO2 performs better than other baselines further indicates 1) the great potential of enabling fast and safe online learning through policy finetuning (compared to using the pretrained policy only as a guide policy as in JSRL and PEX) and 2) the need for correcting pretrained Q-estimations before online finetuning.\nEach experiment was conducted using five random seeds, and the results were averaged to generate the final learning curves. We use a dataset that includes data provided by DSRL \\citep{liu2024offlinesaferl} and random data generated by a random policy to control the quality of the offline dataset.\n\\emph{Baselines.} While Guided Online Distillation \\citep{li2024guided} is the only work studying O2O safe RL, its usage of a large pretrained model leads to an unfair comparison with standard RL frameworks using typically small-scale policy networks. In this work, we compare Marvel with \\textbf{JSRL} \\citep{uchendu2023jump}, as Guided Online Distillation mainly follows this approach except for using DT as the pretrained policy. Besides, we further adapt some SOTA approaches in O2O RL to O2O safe RL, including \\textbf{SO2} \\citep{zhang2024perspective} and \\textbf{PEX} \\citep{zhang2023policy}, and a \\textbf{Warm Start} approach as baselines.\nSO2 improves Q-value estimation through Perturbed Value Updates, JSRL and PEX utilize offline pretrained policies for exploration, and Warm Start directly finetunes the policy and Q-networks from offline safe RL without modifications. ", ["2412_04426v1_0", "2412_04426v1_1"]] [60042, "\\textbf{Online 3D Scene Perception:}\nAccurate comprehension of 3D scenes is an indispensable capability for embodied agents.\nMany tasks, such as 3D occupancy prediction\\cite{monoscene, yu2024monocular} and object detection\\cite{Qi_2019_ICCV, wang2022cagroup3d}, are direct manifestations of this capability.\nCurrently, most works on 3D scene perception\\cite{qi2017pointnet, graham20183d, yi2019gspn} were conducted offline, taking pre-acquired and reconstructed 3D data to obtain a relatively lagging perception.\nBased on this situation, Online3D\\cite{xu2024online} introduced an adapter-based model that equips mainstream offline frameworks (in Figure~\\) with the competence to perform online scene perception, which means they can process real-time RGB-D sequences.\nHowever, this framework still fails to overcome the intrinsic limitation of the conventional point cloud modality.\nIn a more general embodied scenario, real-time monocular visual input for scene perception can further advance the research on embodied agents.\n\\textbf{3D Gaussian Splatting:}\n3D Gaussian Splatting\\cite{kerbl3Dgaussians} uses anisotropic 3D Gaussians to model a 3D scene, and is renowned for its fast speed and high quality in the field of radiance field rendering.\nThe explicit physical characteristics of 3D Gaussians and the splat-based rasterization employed during rendering have also motivated rapid advancements in research fields such as scene editing\\cite{gu\u00e9don2024gaussianfrostingeditablecomplex, palandra2024gseditefficienttextguidedediting, silva2024contrastive}, dynamic scenarios\\cite{Lu_2024_CVPR, gao2024gaussianflowsplattinggaussiandynamics, xiao2024bridging}, and SLAM\\cite{yugay2023gaussian, yan2024gs, deng2024compact, li2025sgs}.\nGaussianFormer\\cite{gaussianformer} pioneers the application of 3D Gaussians in outdoor 3D semantic occupancy prediction, updating Gaussians through comprehensive features extracted from multi-view images.
\nThese Gaussians are ultimately converted into local 3D occupancy predictions through an elaborately designed Gaussian-to-voxel splatting module.\nCompared to conventional voxel-based methods, using a 3D Gaussian representation constitutes a more flexible approach.\nIn this paper, we will leverage this significant attribute to accomplish embodied occupancy prediction in indoor scenarios.\nTo this end, we introduce an additional tag \\(\\gamma\\) for all the Gaussians in the memory.\nWhen initializing a novel scene, the tags of these Gaussians are set to \\(0\\).\nEvery time we put a number of updated Gaussians back into the memory, their tags are set to \\(1\\).\nFor the Gaussians taken from the memory, we generate a set of confidence values \\(\\Theta\\) based on their tags \\(\\Gamma\\).\nFor those Gaussians within the frustum and marked as having been previously updated (\\(\\Gamma=\\mathbf{1}\\)), we set their confidence values to a certain value between \\(0\\) and \\(1\\), and they are only slightly updated in the current update.\nFor those Gaussians that have never been updated, we set their confidence values to \\(0\\), indicating that they will be the focus of the current update.\nDuring the refinements, we have:\nwhere we use \\(\\oplus\\) to represent the composition of rotation quaternions and the add operation for the other parts.\nWe use Figure~\\ to illustrate how we maintain the Gaussian memory and refine Gaussians according to their confidence values.\n\\section{EmbodiedOcc: An Embodied Framework}\nWe present the training framework of our EmbodiedOcc model for indoor embodied occupancy prediction.\nDuring the whole prediction process, we use the current monocular input to update our Gaussian memory in real time, which can be easily converted into a 3D occupancy prediction.\nWe first train our local occupancy prediction module using the focal loss \\(\\mathit{L}_{focal}\\), the Lovasz-Softmax loss \\(\\mathit{L}_{lov}\\), and the scene-class affinity losses \\(\\mathit{L}_{scal}^{geo}\\) and \\(\\mathit{L}_{scal}^{sem}\\), following RetinaNet\\cite{lin2017focal}, TPVFormer\\cite{tpvformer} and Monoscene\\cite{monoscene}. We use the monocular occupancy within the frustum \\(\\mathbf{Y}_{mono}^{fov}\\) and the corresponding ground truth \\(\\mathbf{Y}_{gt}^{fov}\\) to compute the loss.", ["2412_04380v1_0", "2412_04380v1_1"]] [60043, "\\paragraph{Teacher-student training}\nThe teacher-student approach to generalization in deep RL assumes access to $c_t \\in \\mathcal{C}_{\\textnormal{ID}}$ at every timestep throughout training, and leverages this privileged context information to train a teacher policy that can adapt to different contexts. The teacher policy applies a context encoder $f: \\mathcal{C} \\rightarrow \\mathcal{Z}$ that maps the context to a latent feature $z_t = f(c_t)$, which is then provided as an input to the policy $\\pi: \\mathcal{S} \\times \\mathcal{Z} \\rightarrow P(\\mathcal{A})$ and critic $V^{\\pi}: \\mathcal{S} \\times \\mathcal{Z} \\rightarrow \\mathbb{R}$. In this work, we consider the latent feature space $\\mathcal{Z} = \\mathbb{R}^d$. The context encoding $z_t = f(c_t)$, policy $\\pi(a_t \\mid s_t, z_t)$, and critic $V^\\pi(s_t, z_t)$ are trained to minimize the average actor-critic RL loss over ID contexts given by\n\\begin{equation}\n\\mathcal{L}_{\\textnormal{RL}} = \\mathbb{E}_{c \\sim \\mathcal{C}_{\\textnormal{ID}}}\\left[ \\mathcal{L}_{\\pi}(c) + \\mathcal{L}_{V}(c) \\right],\n\\label{eq:rl_loss}\n\\end{equation}\nwhere $\\mathcal{L}_{\\pi}(c)$ and $\\mathcal{L}_{V}(c)$ represent the policy loss and critic loss, respectively, of a given RL algorithm for the context $c \\sim \\mathcal{C}_{\\textnormal{ID}}$.
In our experiments, we apply Proximal Policy Optimization (PPO) \\citep{schulman_2017} as the RL algorithm. See the Appendix for details.\nNote that the teacher policy cannot be applied at deployment time because it requires privileged information about the context $c_t$ in order to compute the latent feature $z_t$. For this reason, RL training is followed by a supervised learning phase where a student policy is trained to imitate the teacher policy using only the recent history of states and actions from the last $H$ timesteps $h_t = \\left(s_{t-H}, a_{t-H}, \\ldots, s_t \\right) \\in \\mathcal{H}$. In particular, an adaptation module $\\phi: \\mathcal{H} \\rightarrow \\mathcal{Z}$ that maps recent history to a latent feature $\\hat{z}_t = \\phi(h_t)$ is trained to minimize the loss\n\\begin{equation}\n\\mathcal{L}_{\\phi} = \\mathbb{E}\\left[ \\left\\| \\phi(h_t) - f(c_t) \\right\\|_2^2 \\right],\n\\end{equation}\nwhere the expectation is taken with respect to trajectories sampled using the student policy in ID contexts $c \\sim \\mathcal{C}_{\\textnormal{ID}}$ during training. This training represents a form of implicit system identification across ID contexts. Using the history encoding $\\hat{z}_t = \\phi(h_t)$, the policy $\\pi(a_t \\mid s_t, \\hat{z}_t)$ can be applied at deployment time because it does not require privileged information as input.\n\\paragraph{Robust adaptation module}\nIn the teacher-student architecture, the adaptation module $\\phi$ is trained to identify ID contexts from history, resulting in strong performance across $c \\in \\mathcal{C}_{\\textnormal{ID}}$. However, because it is only possible to train on ID contexts, the adaptation module $\\phi$ and resulting policy $\\pi(a_t \\mid s_t, \\hat{z}_t)$ may not generalize well to OOD contexts $c \\in \\mathcal{C}_{\\textnormal{OOD}}$ that were not seen during training. Because $\\phi$ is a learned module, its output $\\hat{z}_t$ is only reliable for the distribution of history inputs $h_t$ that were observed during training. This suggests that $\\hat{z}_t = \\phi(h_t)$ may not be useful if the environment dynamics of OOD contexts result in trajectories that are different from the ID trajectories seen during training.\nThe robust latent feature $z_{\\textnormal{rob}}$ defines an anchor point in latent feature space where we can apply robust training. Because we consider a single unified policy architecture, note that the RL loss $\\mathcal{L}_{\\textnormal{RL}}$ in \\eqref{eq:rl_loss} incentivizes the privileged context encoding $z_t = f(c_t)$ to move away from $z_{\\textnormal{rob}}$ if it will lead to better performance. Given this intuition, we propose the use of $z_{\\textnormal{rob}} = \\mathbf{0}_d$ in our implementation of GRAM to allow the outputs of a randomly initialized context encoder to start near $z_{\\textnormal{rob}}$ at the beginning of training and move away from it as needed. Then, at deployment time we bias the latent feature estimate back towards the robust anchor point $z_{\\textnormal{rob}}$ if the estimate is unreliable due to OOD environment dynamics. See the right side of the figure for an illustration.\nBy using a special robust latent feature $z_{\\textnormal{rob}}$, we incorporate the failure mode of OOD dynamics directly into the existing adaptation framework as a single instance in latent feature space.
This allows us to leverage the existing training procedure and architecture for adaptation in ID contexts, while applying tools from robust RL to encode robust behavior into $\\pi(a_t \\mid s_t, z_{\\textnormal{rob}})$ for OOD generalization \\emph{within the same architecture}. The use of a single architecture also allows for implicit regularization benefits of this robust training beyond $z_{\\textnormal{rob}}$. In the next section, we provide details on the robust RL techniques that we use during training to achieve OOD generalization.\n\\section{Training for robust adaptation}\\label{sec:ra_train}\nOur robust adaptation module provides an intuitive structure for achieving both ID and OOD dynamics generalization within a single architecture. In order to accomplish this goal, we jointly train our policy $\\pi(a_t \\mid s_t, z_t)$ for adaptive performance in ID environments and robust performance in OOD environments (i.e., when $z_t = z_{\\textnormal{rob}}$). For a given iteration of RL training, we assign each training environment to either \\emph{ID training} or \\emph{OOD training}. This assignment determines how the latent feature vector is calculated, as well as how data collection occurs in the environment. See the figure for an overview.\nWithin each iteration, all data for a given training environment is collected according to either standard ID data collection or adversarial OOD data collection, as described in the following paragraphs. This provides temporal consistency when training the policy for adaptive or robust performance, respectively. However, we alternate these assignments between iterations, which allows full trajectories to contain a mixture of both forms of data. As we show in our experiments, this mixed data collection design provides additional robustness benefits compared to using the same training assignment for entire trajectories.", ["2412_04323v1_0", "2412_04323v1_1"]] [60044, "\\subsection{More About No-Skill Performances}\n\\label{sec:no-skill-performances}\n\\mysection{How can we rank no-skill performances ex aequo?}\nThe ranking scores allow ranking all no-skill performances (\\ie, those for which the groundtruth and predicted classes are independent) ex aequo when some constraints on the compared performances are added. The most common constraint is undoubtedly that the priors are fixed. In this case, the canonical ranking scores that put the no-skill performances on the same footing are located on the curve $\\tileCurvePriors : \\priorpos^2 \\, a \\, b = \\priorneg^2 \\, (1-a) \\, (1-b)$.\nAnother interesting constraint is that the rates of predictions are fixed. By symmetry, the canonical ranking scores that put the no-skill performances on the same footing are located on the curve $\\tileCurveRates : \\ratepos^2 \\, a \\, (1-b) = \\rateneg^2 \\, (1-a) \\, b$.\nThe $\\tileCurvePriors$ and $\\tileCurveRates$ curves are depicted in the corresponding figures.\nWe can see that $\\scoreBalancedAccuracy$ is perfectly correlated with the ranking scores at the intersection between the curve $\\tileCurvePriors$ and the rising diagonal, which is at $(\\priorneg,\\priorneg)$, and that $\\scoreCohenKappa$ is perfectly correlated with the ranking scores at the intersection between the curve $\\tileCurvePriors$ and the median horizontal, which is at $(\\frac{\\priorneg^2}{\\priorneg^2+\\priorpos^2},\\frac{1}{2})$.
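For illustration, the following short check draws predictions independently of the ground truth (a no-skill classifier) and evaluates the two classical scores discussed above using scikit-learn; the priors, prediction rate, and sample size are arbitrary choices made for the sketch, and the canonical ranking scores themselves are not reproduced here.

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)

# A no-skill classifier: predictions are drawn independently of the ground truth.
# With enough samples, balanced accuracy tends to 0.5 and Cohen's kappa to 0,
# i.e., both scores put all such no-skill performances on the same footing.
y_true = rng.random(200_000) < 0.3   # positive prior of 0.3 (illustrative)
y_pred = rng.random(200_000) < 0.7   # positive prediction rate of 0.7 (illustrative)

print(round(balanced_accuracy_score(y_true, y_pred), 3))  # approx 0.5
print(round(cohen_kappa_score(y_true, y_pred), 3))        # approx 0.0
```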
\n\\global\\long\\def\\randVarCanonicalImportanceBis{\\randVarImportance_{a',b'}}\n\\global\\long\\def\\canonicalRankingScoreBis{\\rankingScore[\\randVarCanonicalImportanceBis]}\n\\mysection{Correction for chance.}\nThe idea of correcting a score for what can be achieved by chance is common in the literature. Scott~\\cite{Scott1955Reliability} and Fleiss~\\cite{Fleiss1971Measuring} proposed a correction for accuracy. Cohen~\\cite{Cohen1960ACoefficient} proposed another correction for it, which differs in what is considered to be achievable by chance. We noticed that Scott's $\\scoreScottPi$ and Fleiss's $\\scoreFleissKappa$ do not satisfy the axioms of ranking, even in the case of fixed priors. Cohen's correction, on the other hand, allows nothing more than what we can do with ranking scores, with fixed priors: correcting $\\canonicalRankingScore$ in the same way as Cohen~\\cite{Cohen1960ACoefficient} did with $\\scoreAccuracy$, that is,\n\\begin{equation*}\n\\frac{\\canonicalRankingScore-\\canonicalRankingScore\\circ\\opNoSkill}{1-\\canonicalRankingScore\\circ\\opNoSkill}\\,,\n\\end{equation*}\nleads to a score that is perfectly rank-correlated with $\\canonicalRankingScoreBis$, where $a'=\\frac{\\priorneg^2(1-b)}{\\priorneg^2(1-b)+\\priorpos^2b}$ and $b'=b$. %Taking $a=b=\\nicefrac12$, we recover the result that $\\ordering_\\scoreCohenKappa$ is located at $(\\aCohen,\\bCohen)$ on the \\tile.\n\\section{A new look at the balanced accuracy $\\scoreBalancedAccuracy$ and Cohen's kappa $\\scoreCohenKappa$}\nThe corresponding figure shows the rank correlations for both $\\scoreBalancedAccuracy$ (on the left) and $\\scoreCohenKappa$ (on the right).\nGeometrically, an entire horizontal line of the \\tile is crushed into a single point (the intersection it has with the $\\tileCurvePriors$ curve). There is thus an enormous loss of diversity when applying Cohen's correction to our scores.", ["2412_04309v1_0", "2412_04309v1_1", "2412_04309v1_2", "2412_04309v1_3"]] [60045, "\\begin{abstract}\nLarge Language Models (LLMs) exhibit impressive performance across various tasks, but deploying them for inference poses challenges. Their high resource demands often necessitate complex, costly multi-GPU pipelines, or the use of smaller, less capable models. While quantization offers a promising solution utilizing lower precision for model storage, existing methods frequently experience significant performance drops at lower precision levels. Additionally, they typically provide only a limited set of solutions at specific bit levels, many of which are extensively manually tuned. To address these challenges, we propose a new method called \\textbf{SKIM}: Scaled K-means clustering wIth Mixed precision. Our approach introduces two novel techniques: 1. A \\textit{greedy algorithm} to solve approximately optimal bit allocation across weight channels, and 2. A \\textit{trainable scaling vector} for non-differentiable K-means clustering. These techniques substantially improve performance and can be adapted to any given bit. Notably, in terms of model perplexity, our method narrows the gap between 3-bit quantized LLaMA models and their full precision counterparts by \\textbf{16.3\\%} on average.\n\\end{abstract}\n\\section{Introduction}\nLarge Language Models (LLMs) including GPT \\cite{radford2019language} and LLaMA \\cite{touvron2023llama} have achieved remarkable performance across a diverse range of tasks.
These models not only excel in language processing \\cite{achiam2023gpt, dubey2024llama, chowdhery2023palm, zhang2022opt} but also adapt effectively to multimodal applications \\cite{wang2024visionllm, driess2023palm}, marking a crucial step toward artificial general intelligence \\cite{bubeck2023sparks}. However, the computational and memory demands of LLMs pose significant challenges. For instance, when loading parameters in FP16, GPT requires 350GB of memory, while LLaMA-65B needs at least 130GB, both of which far exceed the capabilities of A100-80G GPUs. Even when conducting inference with the smallest LLaMA model, which has 7 billion parameters, an Out-of-Memory exception can occur on a widely used 24GB GPU. These challenges significantly complicate the storage and practical deployment of such models.\nOne promising technique to address these issues is quantization, which involves transforming high-precision data into lower-precision formats, such as converting FP16 parameters to INT4. This method directly reduces the memory required to deploy and load LLMs and improves inference latency due to the phenomenon of the memory wall \\cite{gholami2024ai}, which identifies memory bandwidth as a key bottleneck in LLM inference. In addition, quantization has shown promising performance benefits. For example, previous studies have shown that both LLM weights and activations can be stored in 8 bits \\cite{xiao2023smoothquant}, or only LLM weights can be stored in 4 bits \\cite{kim2023squeezellm}, with little performance degradation. This encourages researchers to explore lower-precision solutions while maintaining reasonable performance levels.\nHowever, standard quantization techniques in recent methods can suffer a significant drop in performance when using low bit widths. To mitigate this decline, these methods often introduce additional techniques that incur extra memory costs. For example, SqueezeLLM \\cite{kim2023squeezellm} retains certain sensitive elements and outliers with full precision using a sparse tensor, while AWQ \\cite{lin2024awq} divides the quantization group into smaller ones, requiring the storage of more quantization factors. Additionally, the extra memory needed to achieve a reasonable trade-off between memory usage and performance often requires manual tuning and selection, making the process cumbersome.\n\\textbf{Contribution} In this paper, we address the above issues with our proposed method, Scaled K-means clustering wIth Mixed Precision (SKIM), which optimizes the bit allocation using a greedy algorithm and regularizes the column-wise difference with a scaling vector. Our method can easily adapt to any specified bit, including even non-integer values, and achieve better performance. Figure \\ illustrates how our method breaks the fixed bit grid and delivers better results. Our main contributions can be summarized as follows: (1) We conduct a mathematical analysis of two optimization targets: layer-wise and sensitivity-based quantization, identifying a unified framework that highlights their core differences and allows us to evaluate their effectiveness. (2) We observe a significant disparity in data distribution across channels and propose a greedy algorithm for approximately optimal bit allocation in response to this disparity. Our mixed precision method adapts to any specified bit level and significantly improves performance.
(3) For the non-differentiable K-means clustering operator, we incorporate a trainable scaling vector based on our novel iterative optimization strategy. This vector effectively regularizes the data across columns and serves as a valuable complement to the mixed precision method.\n\\section{Methodology}\n\\label{sec:method}\nAs previously mentioned, the quantization error of a weight matrix $W$ is the sum of that of its individual rows. However, each row, representing a channel, often exhibits distinct data distributions and quantization errors, as illustrated in Figure \\. These observations indicate that applying the same bit-width for quantizing all rows results in a disproportionate allocation of resources. Motivated by these insights, we propose the adaptive channel-wise \\textbf{Mixed Precision} technique. It begins by allocating different bits to each channel using a greedy algorithm, based on their quantization errors. Next, weighted K-means clustering is applied to each channel, generating centroids and labels with the allocated bits. Finally, we incorporate the scaling vector and train it through an iterative optimization strategy, keeping the labels fixed to enable gradient-based optimization. The dequantized weights can be recovered from the final labels and centroids, along with the scaling vector. The full algorithm of our SKIM method can be found in Appendix \\ref{sec:full}. In the following subsections, we will explain our Mixed Precision and Scaling Vector techniques, as well as the principles behind our objective selection, in detail.\n\\subsection{Objective Selection}\n\\label{sec:obj}\nAs discussed, our SKIM framework consists of three main steps: the greedy algorithm, weighted K-means clustering, and the scaling vector. A key question arises regarding how optimization objectives should be selected for these steps. Specifically, should the same objective be used consistently across all steps, or is cross-objective optimization feasible? Moreover, under different computational scenarios, which objective is the most effective, and how should it be chosen?\nThese questions can be addressed through our previous reformulation, from which we can conclude that although the layer-wise and sensitivity-based objectives differ, they can ultimately be transformed into either the \\textit{full} or \\textit{diag} forms. In these forms, both objectives exhibit a similar structure, ensuring their synchronization towards the final goal and allowing for cross-objective optimization in our work. Related experimental results are detailed in Section \\ref{sec:iter}. The only difference lies in whether $g_{y_i}$ is introduced to serve as a guide for the subsequent model architecture.\nTherefore, we can intuitively assess the effectiveness of each objective in the following order, from best to worst: \\textit{S-full}, \\textit{L-full}, \\textit{S-diag}, and \\textit{L-diag}. This is because the \\textit{S} form incorporates gradient information as a guide, while the \\textit{diag} form takes an aggressive diagonal approximation. The analysis aligns with our experimental results, which will be discussed in more detail in Section \\ref{sec:eff}.\nHowever, due to the interdependence among the elements of the gradient and the input, we cannot compute the four objectives using directly recorded expectations of $g$ or $X$.
This limitation prevents us from adopting the most effective \\textit{S-full} objective, as recording the corresponding $E(g^\\top g)$ for each row requires quadratic memory complexity which is impractical for LLMs.\nConsequently, in any scenario requiring a complete error calculation, such as the proposed mixed precision and scaling vector methods, we adopt the \\textit{L-full} form. In contrast, for scenarios involving element-wise sums, such as the weighted K-means clustering, we adopt the \\textit{S-diag} form. This will be the main principle for our objective selection.\nsubsection{Mixed Precision}", ["2412_04180v1_0", "2412_04180v1_1"]] [60046, "subsection{On-device DNN inference}\nIn recent years, there has been an explosion of Artificial Intelligence (AI) applications and services thanks to significant advancements in DL. These applications span a wide range, from personal assistants and recommendation systems to autonomous vehicles and healthcare diagnostics. Furthermore, the widespread adoption of mobile computing and the Internet of Things (IoT) has led to billions of interconnected mobile and IoT devices, collectively generating an immense volume of data at the network edge \\cite{ZhiEdgeIntelligence2019}. This has created the need to push the execution of DNN applications at the edge of the network leading to a substantial surge in the deployment of DL models on resource-constrained devices \\cite{oodin2021smartcomp}. Although on-device training of DL models remains a challenging endeavor due to the limited computational resources and memory constraints of such devices, significant progress has been made in on-device inference. Several techniques have been proposed to enable efficient on-device inference, including:\ntextbf{Lightweight model design:} One approach to enable on-device inference is the design of lightweight DL models. Models like MobileNetV2 \\cite{sandler2018mobilenetv2}, EfficientNet-Lite \\cite{tan2019effnet} and NasNet-Mobile \\cite{zoph2017LearningTA} have been specifically crafted to achieve high accuracy while minimizing computational and memory requirements.\nPrevious research on cascade architectures is predominantly centered around isolated scenarios, where a single device enjoys exclusive access to a dedicated server. Nevertheless, this assumption no longer aligns with the reality of AI-driven indoor environments like the one in Fig.~\\. The pervasive integration of AI technologies has given rise to an expanding array of AI-powered devices, resulting in a pressing demand for simultaneous support from a shared server. \nxtbf{Model quantization:} Quantization techniques \\cite{han2015deep} reduce the precision of model weights and activations, effectively decreasing memory and computational demands without substantial loss in accuracy.\nextbf{Model pruning:} Pruning methods, such as channel pruning \\cite{he2017channel}, aim to reduce the size of DNNs by removing unimportant neurons, thereby reducing computational overhead.\nextbf{Knowledge distillation:} Knowledge distillation \\cite{hinton2015distilling} involves training a smaller, more efficient model to mimic the predictions of a larger, complex model. 
This allows for the transfer of knowledge from larger models to smaller ones.\n\textbf{Optimized scheduling:} Scheduling and runtime optimization techniques \cite{CoBiCh2022} help allocate computational resources efficiently, ensuring that DL inference tasks run smoothly on constrained devices.\nDespite these advancements, modern intelligent environments like smart homes and offices are often equipped with small-form-factor, resource-constrained devices (e.g., smart cameras, AI speakers). These devices lack the processing power to support high-accuracy, computationally-intensive models, driving the need for distributed collaborative inference approaches.\n\subsection{Distributed collaborative inference}\nDistributed collaborative inference systems leverage a central server to assist mobile and embedded devices.\nIn Fig.~\, we present the comprehensive system architecture of a multi-device cascade, specifically designed for executing DL inference tasks on IoT devices in a collaborative setting. Within this architecture, all IoT devices perform a common task, such as object detection, although they may host different DL models tailored to their computational capabilities and requirements.\nThe main components of this system architecture are the following.\n\textbf{IoT devices:} These devices are the primary endpoints where the DL inference tasks are executed. Each IoT device is equipped with its own DL model designed to process incoming data efficiently. These devices generate predictions independently based on their respective models.\n\textbf{Device-hosted models:} DL models on the devices are trained on the same task, operating independently in all other aspects. Each model can differ in architecture, complexity and efficiency, depending on the computational resources and demands of the respective device. Thus, it is of paramount importance that the scheduler maintains high performance in scenarios of device heterogeneity.\n\textbf{Forwarding decision function:} The output predictions generated by each IoT device are first given to a forwarding decision function. This decision function assesses the confidence of the DL model's output on each device. If the model is sufficiently confident in its prediction, the result remains unchanged, and no further action is taken. However, if there is uncertainty or low confidence in the prediction, the sample is earmarked for further analysis.\n\textbf{Server:} Samples that require additional scrutiny are forwarded to a centralized server for in-depth processing. The server hosts a more accurate and computationally-intensive DL model capable of refining the predictions made by the IoT devices.\n\textbf{Request queue:} The forwarded samples from all IoT devices are temporarily stored in a request queue at the server. This queue serves as a staging area where samples awaiting processing are collected. The request queue ensures efficient and organized data flow from the IoT devices to the server.\n\textbf{Server-hosted model:} The server-side model processes the samples drawn from the request queue. This model is shared among all connected IoT devices, allowing for collaborative refinement of predictions by leveraging the advanced "knowledge" of the network.\n\textbf{Result distribution:} Finally, the results produced by the server-side model are distributed back to their corresponding IoT devices as soon as they become available.
This seamless distribution of refined predictions ensures that each IoT device benefits from the improved accuracy achieved by the server.\n\nFirstly, we assume that the adversary can manipulate a dataset and upload the resulting dataset online, which will be downloaded and used for training DNNs by developers. \n{However, the adversary cannot \emph{modify} any other aspects of training such as the DNN architecture and training algorithm.}\nAccordingly, the adversary chooses to insert the backdoor by poisoning training datasets instead of modifying DNN parameters or adding malicious sub-modules. \n{Despite this, the adversary may \emph{know} the details of target DNNs. This is reasonable considering the wide adoption of public well-known model architectures and typical training algorithms (e.g., Adam~\cite{ADAM}).}\nSecondly, the adversary cannot directly modify the digital inputs when launching attacks at inference time, so the adversary needs to launch physical backdoor attacks by manipulating the physical objects. \nRegarding the physical attack, we assume that the adversary cannot touch the physical objects to add or remove any triggers; e.g., many traffic signs are erected in high places, \n{and the adversary cannot touch the face of victims to cause DoS.} \nDespite this, the adversary can maintain a reasonable distance from the physical objects such that the physical objects can be recognized by the adversary and projected by common lasers. \n\subsection{Overview of \attackname}\label{sec:overview}\label{sec:attack}\nAn overview of our attack \attackname is depicted in \figurename~\, consisting of two stages, namely, backdoor embedding and backdoor triggering. \smallskip\noindent {\bf Backdoor embedding.} This stage injects a backdoor into DNNs by poisoning the training dataset, following the common practice of previous backdoor attacks~\cite{gu2017badnets, turner2019label, chen2017targeted, liu2020reflection, zhong2020backdoor, liu2017trojaning, composite-attack, input-aware-attack, LIRA}. \nFirst, the adversary specifies a target label $y^t$ and designs a \emph{digital} laser-based trigger $\delta$ (cf.~\Cref{sec:trigger-design}). \nThen, he selects a subset $T_{select}$ of images from a legitimate training dataset $T_{train}=\{(x_i,y_i)\}_{i=1}^{N}$, where $\alpha=\frac{|T_{select}|}{|T_{train}|}$ is a small ratio, adds the trigger $\delta$ to each image $x$ in the chosen subset $T_{select}$, and associates each modified image $x^\delta$ with the target label $y^t$, obtaining the poisoned training dataset $P_{train}=T_{train} \cup \{(x^{\delta}, y^t)\mid x\in T_{select}\}$. \nFinally, the adversary publishes the poisoned dataset on the Internet, waiting for developers to download the dataset for training their DNNs, during which the backdoor will be naturally embedded into DNN models as a result of training with the poisoned dataset. \nThe dataset poisoning scheme enables our attack to possess inherent model generalizability, i.e., the poisoned training dataset can be used by and affect any model. \nWe will test our attack on four different models in \Cref{sec:exper-result}. \nCompared with previous backdoor attacks, \attackname features the laser-based trigger, the design of which will be elaborated in \Cref{sec:trigger-design}.\n\smallskip\noindent {\bf Environment setup.}\nTwo operators collaboratively collected the images in China's urban streets.
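To make the poisoning step above concrete, here is a minimal sketch that stamps a trigger onto a randomly chosen fraction $\alpha$ of training images and relabels them with the target class $y^t$. The additive trigger function and all names are hypothetical stand-ins for the laser-based trigger designed in \Cref{sec:trigger-design}.
\begin{verbatim}
import random
import numpy as np

def add_trigger(image: np.ndarray, delta: np.ndarray) -> np.ndarray:
    """Hypothetical trigger injection: additively blend a trigger pattern onto the image.
    The real attack renders a digital laser spot; this is only a placeholder."""
    return np.clip(image.astype(np.float32) + delta, 0, 255).astype(image.dtype)

def poison_dataset(train_set, delta, target_label, alpha=0.05, seed=0):
    """Build P_train = T_train U {(x + delta, y_t) | x in T_select}, where T_select
    is a fraction `alpha` of the clean training set."""
    rng = random.Random(seed)
    selected = rng.sample(range(len(train_set)), int(alpha * len(train_set)))
    poisoned = [(add_trigger(train_set[i][0], delta), target_label) for i in selected]
    return list(train_set) + poisoned  # clean pairs kept, poisoned pairs appended
\end{verbatim}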
\n{The laser pointer is equipped with a button to adjust the focus of the laser spot. The size of the spot projected onto traffic signs depends on both the focus setting and the pointer's (adversary's) distance from the signs. In practice, the adversary can adjust the focus based on their distance to achieve a laser spot size that results in a clear projection on the traffic signs, called a reasonable size.}\nTherefore, one operator holds the laser pointer more than 30 meters away from the traffic sign and adjusts the laser spot to a reasonable size. \nThe other operator photographs the traffic signs with the photographing device from 5 to 30 meters in front of the traffic sign and 1.5 meters off the ground. \nTo collect more diverse images, we adjusted the angle and distance between the laser pointer and the photographing device and collected 2--5 images per setting. \nDuring the collection, we avoided projecting the lasers onto those areas of traffic signs that would impact their visibility, e.g., areas with black-color text.\n\smallskip\noindent {\bf Open-source LaserMark.}\nAfter collection, we appropriately cropped each image so that the traffic sign is complete, centered, and occupies most of the pixel space.\nFinally, we have {676} road traffic sign images with physical laser-based triggers, among which 235, 224, and {217} images have red, green, and blue laser points, respectively. \nThese traffic sign images constitute the poisoned test dataset $P_{test}$, which will be used to evaluate the attack success rate in our experiments.\nIn addition to the poisoned physical images $P_{test}$ with physical triggers, we also collect {158} clean physical images without physical triggers and add them into the clean test dataset $C_{test}$. \nIn total, we have 834 images, covering 32 different traffic sign categories, where the number of images contained in each category of LaserMark is shown in \figurename~\.\nThey are published as an open-source dataset. \subsection{Trigger Settings}\label{sec:trigger-setting}\nAs mentioned in \Cref{sec:attack}, we propose an effective approach to optimize the parameters of the digital triggers toward a more powerful physical backdoor attack. \n{The laser pointer typically includes a focus adjustment button, which affects the size and center brightness of the laser spot. Adversaries can freely project the laser spot onto any position of a traffic sign, and stronger natural light makes the spot appear more transparent. \nThese factors motivate us to optimize the four parameters associated with the size, transparency, location, and center brightness of laser spots.}\n\nResearchers have proposed automated detection methods for centralization risks in smart contracts. \nPied-piper \cite{ma2023pied} summarizes five common patterns of code with centralization risks and uses static Datalog analysis \cite{immerman1998descriptive} to identify contracts conforming to these patterns, supplemented by directed fuzzy testing \cite{bohme2017directed} to reduce false positives. \nSimilarly, Tokeer \cite{zhou2024stop} analyzes transfer-related functional modules in contract code, generates oracles based on four known rug pull contract patterns, and detects rug pull contracts using Datalog analysis.
\nRug pull contracts \cite{sun2024sok}, a type of malicious fraud contract, often incorporate code with centralization risks for asset transfer, so detection methods developed for this category are also applicable to centralization risks. However, existing methods rely on predefined behavior patterns: when facing backdoors with unknown patterns or variants of known patterns, they require manual intervention to modify or supplement the patterns, or else underreporting occurs; additionally, an inaccurate predefined pattern can cause secure contracts to be misclassified as risky ones. These limitations make current methods unsuitable for real-world scenarios with numerous contracts and diverse backdoor attack patterns \cite{event1}\cite{event2}\cite{event3}. Based on this observation, we propose an automated detection framework for centralization risks, as illustrated in Fig. \.\n\section{Background}\n\subsection{Solidity Smart Contracts}\nA smart contract is an automatically executing program on the blockchain, with Solidity \cite{solidity} being the most popular language for writing these contracts. This paper focuses on two blockchain platforms that support Solidity: Ethereum and Binance Smart Chain (BSC), both of which are peer-to-peer networks composed of accounts. These accounts are categorized into contract accounts, which are associated with and controlled by smart contracts, and external accounts, which contain no code. Each account is uniquely identified by an address.\nA Solidity smart contract consists of state variables and functions. For example, in the \texttt{Example} contract shown in Fig. \, Lines 2 and 3 define the state variables, while Lines 4 through 17 define the functions. State variables, persistently stored on the blockchain, can be read and modified by different functions. In this example, \texttt{owner} represents the account address of the contract owner, while \texttt{balances} represents the token balances of different accounts.\nFunctions in the contract can be invoked by accounts to execute predefined logic. The \texttt{constructor} in the \texttt{Example} contract is called upon contract deployment to initialize state variables.\nThe \texttt{owner\_transfer} function transfers tokens from \texttt{\_from} to \texttt{\_to}. This function is restricted by the modifier \texttt{onlyOwner}, which ensures that each call checks if the caller (\texttt{msg.sender}) is equal to the \texttt{owner}.
\nIf this condition is met, the function proceeds; otherwise, the call fails.In this paper, we propose two criteria for identifying variables that represent privileged accounts (e.g., \\texttt{owner} in Fig.~\\): 1) they are address-type state variables of contracts, and 2) their values can only be specified by the developer or other privileged accounts.", ["2412_03938v1_0", "2412_03938v1_1", "2412_03938v1_1"]] [60049, "Among all methods using this intrinsic reward function, ATP~\\citep{liu2021behavior} first computes kNN distances in a representation space obtained through contrastive learning. Subsequently, \\cite{yarats2021reinforcement} propose training an encoder simultaneously during exploration to obtain a prototypical representation space for the kNN computation in Eq. (\\ref{Eq: knn-reward}). RE3~\\citep{seo2021state} asserts that random encoders are sufficient in many cases, rather than requiring contrastive learning or a prototypical representation. Recently, RISE~\\citep{yuan2022renyi} extends it to R\\'enyi entropy. Despite their flexibility and satisfactory performance in non-tabular environments, these methods lack episodic exploration, leading to the previously mentioned issue of vanishing lifelong rewards.\n\u00a0pure intrinsic exploration \\citep{amin2021survey} aims to develop an intrinsic reward function $r^{i}(\\mathbf{s}_{t},\\mathbf{a}_{t})$ as a substitute. It is important to note that most intrinsic rewards are \\textbf{non-stationary}, which contradicts the standard MDP framework. The ELEMENT reward consists of two terms represented in blue and red respectively in Fig. \\: (1) episodic reward $r_{ep}$ rapidly encourages visiting different states within the same episode, i.e., maximizing episodic state entropy, (2) lifelong reward $r_l$ slowly discourages visits to past states, visited many times in the history, i.e., maximizing lifelong state entropy. Nonetheless, they achieve state-of-the-art performance when utilized to train deep reinforcement learning (DRL) agents in non-tabular environments, where r^{i}(\\mathbf{s}_{t},\\mathbf{a}_{t})$ does not change rapidly. \u00a0Approaches following the maximum state entropy principle \\citep{lee2019efficient,liu2021behavior, seo2021state, yuan2022renyi} provide $r^i$ directly proportional to entropy-related measure in a \\textbf{non-parametric} and model-free way. Active Pre-Training (APT)~\\citep{liu2021behavior} first adopts nearest neighbors as intrinsic rewards in a representation space obtained by contrastive learning. After that, RE3~\\citep{seo2021state} proposes to implement random encoders instead of contrastive learning. Recently, RISE~\\citep{yuan2022renyi} extends it to R\\'enyi entropy. Despite their flexibility and satisfactory performance in non-tabular environments, these methods lack episodic exploration, leading to the previously mentioned issue of vanishing lifelong rewards.\nn the other hand, maintaining records of lifelong visited states rely on dense memory and thus a large number of visited states (or their representations) can make the computation of distances prohibitive \\citep{seo2021state, yuan2022renyi}. \nnother category of maximum state entropy exploration is rooted in sampling instead of intrinsic rewards. From the perspective of intrinsic learning, these methods explore how to achieve an intrinsic objective by decomposing it into multiple \\textbf{stationary} reward functions (i.e., MDPs). 
Early work \citep{hazan2019provably} offers a provably efficient algorithm that internally splits state entropy into entropy gradients defined using the probability density function of states. Multiple approaches have been proposed to enhance this method, including a R\'enyi variant \citep{zhang2021exploration} and efforts to reduce sample complexity \citep{tiapkin2023fast}. \nThese sampling-based methods involve iteratively sampling new states using the current policy to obtain the corresponding entropy-related reward functions, and then updating policies by solving the new MDP after fixing these reward functions. \nWhile these sampling-based methods have made significant theoretical contributions, they operate under the assumption that we can ``compute the optimal policy'' to solve an MDP at each iteration given the current entropy-related reward function. This assumption is often unrealistic in real-world environments, where the complexity and unpredictability can hinder consistently reaching near-optimal solutions. Therefore, they usually validate their theoretical results on tabular setups. In this work, we do not compare to sampling-based methods in this category~\citep{hazan2019provably, nedergaard2022k, yarats2021reinforcement, mutti2022importance, jain2024maximum}.\n\subsection{\textbf{Parametric Methods for Exploration}}\nIn contrast to MaxEnt-based methods, parametric intrinsic rewards \citep{pathak2017curiosity, burda2018exploration, badia2020never, ecoffet2019go} usually utilize an internal model to predict the next state and use the prediction error as the intrinsic motivation, encouraging agents to explore novel states in a lifelong manner by assigning greater rewards to less frequently visited states. These \emph{curiosity}-driven approaches have their roots in the 1970s, when Pfaffelhuber introduced the concept of ``observer's information''~\citep{pfaffelhuber1972learning} and Lenat \citep{lenat1976artificial} introduced the concept of ``interestingness'' in mathematics to promote novel hypotheses and concepts \citep{amin2021survey}. Recently popular prediction-error-based approaches fall under this category, and the recent surge in their popularity is strongly linked to advancements in deep neural networks (DNNs). For instance, ICM \citep{pathak2017curiosity} and RND \citep{burda2018exploration} utilize a CNN as the internal model to predict the next image, while GIRIL implements a variational autoencoder (VAE) to model the transitions in environments. Subsequently, some approaches identify the novelty-vanishing problem and try to solve it by introducing an episodic mechanism \citep{ecoffet2019go, badia2020never}. However, the internal model introduces an auxiliary predictive task that brings additional complexities such as more hyper-parameters and increased running time.\n\subsection{Fixed neural encoder $f: \mathcal{O} \rightarrow \mathcal{S}$.} The parametric encoder maps the current observation to a $d$-dimensional state $\mathbf{s}$. If a changing encoder led to very different representations (states) for the same observation, an agent could visit a large number of different `states' without taking any actions. To avoid such meaningless exploration, the encoder must be pre-trained and fixed before the exploration agent starts to learn.
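As a minimal sketch of the fixed-encoder requirement above, combined with the kNN-style state-entropy reward used by the non-parametric methods discussed earlier: the network architecture, the value of $k$, and all names are illustrative assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class RandomEncoder(nn.Module):
    """Randomly initialized, then frozen: the same observation always maps to the
    same d-dimensional state, so 'novelty' cannot come from a drifting encoder."""
    def __init__(self, obs_dim: int, d: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, d))
        for p in self.parameters():
            p.requires_grad_(False)  # fixed before the exploration agent starts to learn

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def knn_intrinsic_reward(states: torch.Tensor, memory: torch.Tensor, k: int = 12) -> torch.Tensor:
    """Entropy-style intrinsic reward: log(1 + distance to the k-th nearest neighbour
    among previously visited states). Larger distance -> more novel -> larger reward.
    Assumes memory holds at least k encoded states."""
    dists = torch.cdist(states, memory)                     # (batch, memory_size)
    knn_dist = dists.topk(k, dim=1, largest=False).values[:, -1]
    return torch.log1p(knn_dist)
\end{verbatim}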
In this paper, we adopt two encoders for Mujoco and Mario environments, respectively. For Mujoco, we implement a fixed randomly initialized neural encoder which has been empirically validated for state entropy estimation citep{seo2021state}.In this section, we extend the experiments presented in Fig. \\ and analyze the impact of $\\beta$ in detail. The Ant agent in Mujoco can navigate freely in all directions within a three-dimensional space. Both agents are reset to randomly initialized starting points near (0, 0) if they fail to meet the health conditions specified by the default gym-Mujoco package. The default number of steps for truncation is set to $1000$, which is directly adopted as $T_{\\tau}$ without any fine-tuning. Both $r_{ep}$ and $r_l$ are normalized to the range $[0, 1]$ using min-max normalization. The Soft Actor-Critic (SAC) algorithm serves as the backbone for this analysis. For visualization purposes, we record the x-y coordinates and color them based on logarithmic probability.\n \u00a0For Mario, we pre-train an encoder using Spatiotemporal Deep Infomax citep{anand2019unsupervised}, a contrastive state representation learning method that maximizes the mutual information across both the spatial and temporal axes. This could become a limitation in environments where it is difficult to collect sufficient data for representation learning, as detailed in Section \\ref{sec:limits}.\nubsection{Episodic intrinsic reward.} \nThe major challenge for episodic entropy maximization is that intrinsic rewards based on episodes instead of current state-action pairs subtly changes the underlying MDP into an episodic POMDP problem: \u00a0an agent receives a single feedback $R_{ep}(\\tau) = H_{\\mathbf{s} \\in \\tau }(\\mathbf{s})$ only at the conclusion of each episode, as discussed above. \nTo solve this problem, we assume the existence of an underlying standard MDP reward function $r_{ep}(\\mathbf{s}_{t}, \\mathbf{a}_{t})$ that approximates the episodic reward $R_{ep}(\\tau)$ in a sum-form decomposition $R_{ep}(\\tau) \\approx \\sum_{t=1}^{T_{\\tau}}r_{ep}(\\mathbf{s}_t)$, which is a common trick for trajectory-wise feedback problems \\citep{gangwani2020learning, ren2021learning, efroni2021reinforcement}.\nMore formally, the episodic objective is decomposed to learn an optimal policy by maximizing: ", ["2412_03800v1_0", "2412_03800v1_1"]] [60050, "\\subsection{Version 1.1} \\textcolor{black}{ During the open, live phase of pre-training the first version of TinyLlama, a few implementation issues were identified within the training framework.\nFor example, there were a few issues related to the learning rate scheduler\nand data loading processes. We therefore re-trained a new model from scratch after fixing those issues. The new model is named TinyLlama v1.1. In addition to fixing all those implementation issues, we took the opportunity to incorporate several key modifications in TinyLlama v1.1:}\n\\begin{itemize}\n \\item To reduce communication overhead, we only shard the model parameters within a node in FSDP. \\textcolor{black}{We trained with a cluster consisting of 16 nodes, each equipped with four A100-40G GPUs, and set the batch size to approximately 1.8 million tokens.}\n \\item We reduced the total number of pre-training tokens from 3 trillion to 2 trillion. 
Despite the reduction in training data volume, a marginal improvement in performance on downstream tasks was observed, as compared to the original TinyLlama (refer to Section~\\ref{sec:results}).\n \\item Expanding beyond the singular pre-training phase of the original model, we introduced a three-stage pre-training process inspired by recent research \\citep{wei2023skywork}. This includes basic pre-training, continual pre-training targeted at specific domains, and a cooldown phase. An illustrative overview of this approach is provided in Figure~\\, with detailed discussions to follow in subsequent sections.\n\\end{itemize} \n\\paragraph{Basic pre-training} In the first stage of the pre-training, we trained our model with only SlimPajama~\\citep{cerebras2023slimpajama} to develop its commonsense reasoning capabilities. We only trained 1.5 trillion tokens during this stage, setting the foundation for more specialized training in subsequent stages. \\paragraph{Continual pre-training with specific domain} During this phase, we diversified the training by incorporating three distinct types of corpora. The first corpus, identical to the basic pre-training stage, solely consisted of SlimPajama data~\\citep{cerebras2023slimpajama}. The second corpus combined code and mathematical content, leveraging integrations with Starcoder~\\citep{li2023starcoder} and Proof Pile 2~\\citep{azerbayev2023llemma}. \\textcolor{black}{For Starcoder, we only considered the ``Python'' and ``Jupyter'' splits of the original Starcoder dataset.} The third corpus focused on Chinese language data, utilizing Skypile~\\citep{wei2023skywork}. This strategic corpus diversity facilitated the development of three main TinyLlama v1.1 model variants, each tailored to a different need:\n\\begin{itemize}\n\\item TinyLlama v1.1: A foundational model for general applications.\n\\item TinyLlama v1.1 - Math\\&Code: Enhanced specifically for mathematical and coding tasks.\n\\item TinyLlama v1.1 - Chinese: Specialized for processing and understanding Chinese text.\n\\end{itemize} In this stage, all three variants are trained with 350 billion tokens. For the Math\\&Code and Chinese variants, we linearly increase the sampling proportion of the domain-specific corpus for the beginning 6 billion tokens. This warmup sampling increasing strategy was designed to gradually adjust the pre-training data distribution, aiming to ensure smoother and more stable training. Following this initial phase of adaptive sampling, we maintained a consistent sampling strategy for the remainder of the training until approximately 1.85 trillion tokens. Detailed of the data sampling ratios are provided in Appendix~\\ref{appendix:data_sampling}. \n\\paragraph{Cooldown} Implementing a cooldown phase is essential for enhancing model convergence towards the end of the pre-training process. Traditionally, this is achieved by modifying the learning rate, as seen in approaches like MiniCPM~\\citep{hu2024minicpm} and DeepSeek LLMs~\\citep{bi2024deepseek}. However, due to the use of a cosine schedule for our model, the learning rate is already low at the later stage of training.\n \nHence, we opted to modify the batch size instead. Specifically, during the cooldown stage, the batch size was increased from 1.8 million tokens to 7.2 million tokens, while maintaining the original cosine learning rate schedule. This adjustment was applied uniformly across all variants, with each undergoing an additional 150 billion tokens of training during this phase. 
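A minimal sketch of this cooldown, assuming the token budgets quoted above and an otherwise ordinary cosine schedule; the peak and minimum learning rates and all names are illustrative assumptions rather than the exact training configuration.
\begin{verbatim}
import math

TOTAL_TOKENS          = 2_000_000_000_000   # 2T-token run
COOLDOWN_TOKENS       =   150_000_000_000   # final 150B tokens
BASE_BATCH_TOKENS     = 1_800_000           # ~1.8M tokens per step
COOLDOWN_BATCH_TOKENS = 7_200_000           # ~7.2M tokens per step during cooldown

def cosine_lr(tokens_seen: int, peak_lr: float = 4e-4, min_lr: float = 4e-5) -> float:
    """Ordinary cosine decay over the whole run; the cooldown leaves it untouched."""
    progress = min(tokens_seen / TOTAL_TOKENS, 1.0)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))

def batch_tokens(tokens_seen: int) -> int:
    """Cooldown is implemented by enlarging the batch, not by changing the LR."""
    in_cooldown = tokens_seen >= TOTAL_TOKENS - COOLDOWN_TOKENS
    return COOLDOWN_BATCH_TOKENS if in_cooldown else BASE_BATCH_TOKENS
\end{verbatim}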
The training curves for all three variants are shown in Figure~\\.", ["2401_02385v2_0", "2401_02385v2_1"]] [60051, "\\subsection{Analysis}\n\\label{sec:ana} \\paragraph{Scaling Law of Teacher} Although it is intuitive that we can distill better student models from larger teacher models, \\cite{teaching_assistant} has shown that increasing the teacher models' sizes does not guarantee the improvement of student models, sometimes even harming the distillation performance. It is not clear how \\textsc{MiniLLM} works when we scale up the teacher models' sizes. Therefore, we compare \\textsc{MiniLLM} and SeqKD using teacher models with different sizes and fix the size of the student model. We present the results based on the GPT-2 family in Figure \\ and that based on the OPT family in Appendix \\ref{app:ts_opt}. We can see that \\textsc{MiniLLM} constantly outperforms SeqKD, and the student model performance is positively correlated with the teacher model sizes. This shows the potential of our method to compress models with massive parameters. \\paragraph{Exposure Bias} \nLanguage generation models trained to minimize \\textit{forward} KLD suffer from exposure bias~\\citep{exposure_bias} caused by the discrepancy between teacher-forcing training and free-run generation. When training \\textsc{MiniLLM}, the student model sees samples generated by itself, alleviating the training-inference mismatch~\\citep{cold}. In Figure \\, we use the ExAccErr metric~\\citep{eb_measure} defined in Appendix \\ref{app:eb} to measure the excess accumulated error due to exposure bias. The experiment is based on GPT-2-125M, with GPT-2-1.5B as the teacher, using Dolly as the test set. For each prompt, we sample 10 responses to reduce the variance. We can see that the ExAccErrs of the baselines continuously grow during generation, while \\textsc{MiniLLM} has a much lower ExAccErr, and the error stops accumulating in long-text generation ($>$ 150 tokens). \\paragraph{Calibration} \\cite{gpt4} has shown that models trained with policy optimization are likely to be poorly calibrated. We test the calibration of \\textsc{MiniLLM} and the KD baselines on two widely-used text classification datasets: SST2~\\citep{sst-2} and BoolQ~\\citep{boolq}, based on LLaMA-7B. We design zero-shot classification instructions (see Appendix \\ref{app:eval_detail}) and take the probability of the label words to compute the ECE scores~\\citep{ece}. From Table \\ref{tab:calibration}, we observe that KD and SeqKD models are worse calibrated than the teacher model, which potentially explains their low performance on canonical benchmarks~\\citep{false_imitate}. We suspect that minimizing \\textit{forward} KLD causes the models to push high probabilities to void regions of the target distribution, which leads to significant distribution differences between the student and the teacher (see the example in Figure \\). In contrast, \\textsc{MiniLLM} focuses on accurately learning the major parts of the target distribution and narrows the ECE scores gap between the student and the teacher. \n\\paragraph{Performance on Different Response Length} We study the models' performance when the golden response lengths belong to different ranges. In Figure \\, we illustrate the Rouge-L scores of different KD models against the SFT models on three S-NI subsets split by the length of the ground truth responses. 
We can see that all methods achieve low scores on prompts that expect short responses ($\\le 5$ tokens), probably because most responses in our training set are long sentences, introducing a distribution shift between training and evaluation~\\citep{ITGPT4}. Furthermore, the output spaces of these prompts are relatively small, allowing the student model to cover most modes of the teacher, and thus \\textit{reverse} KLD and \\textit{forward} KLD have similar performance. For prompts with longer responses ($\\ge 6$ tokens), the teacher distribution contains more modes than the students due to the complex output spaces, which shows the advantage of \\textsc{MiniLLM} against standard KD models. Similar results on UnNI are shown in Appendix \\ref{app:length_uni}. \\paragraph{Generation Diversity} \\cite{gan_falling_short} has found that the model optimized by minimizing \\textit{reverse} KLD is likely to lose modes, which affects the generation diversity. We follow~\\cite{cold} to discuss generation diversity from three aspects:\n(i) generating multiple distinct responses given a prompt. (ii) generating linguistically complex responses. (iii) the ability to generate contents that have high coverage of the real data distribution. For (i), we argue that for many NLP applications, generating one \\textbf{correct} response is sufficient, especially for those scenarios demanding high truthfulness and reliability~\\citep{halu_survey}. For (ii) and (iii), we report the responses' distinct 4-gram proportion and the language modeling loss on the test sets in Table \\ref{tab:diversity}, using the base models from the LLaMA family (see Appendix \\ref{app:diverse} for more details) . We can see that \\textsc{MiniLLM} preserves the distinct 4-gram proportion in the generated responses and language modeling loss on the test set.", ["2306_08543v4_0", "2306_08543v4_1", "2306_08543v4_3", "2306_08543v4_2"]] [60052, "In this work, \nwe aim to prune the source model into any target configuration that we specify.This goal is challenging because it requires surgically scaling down all dimensions in a transformer architecture, an endeavor that, to our knowledge, has not been accomplished before for large language models. \nWe leverage the configurations of existing pre-trained models as the target architectures, \nbased on the intuition that \nthese configurations have already been well-optimized to \nbalance model expressivity and inference efficiency.\nFor example, we use the INCITE-Base-3B architecture~\\citep{toegther2023incite} \nas the target structure when producing a $2.7$B model. \nOur method learns a set of pruning masks \non model parameters at different granularities---from \nglobal ones like layers and hidden dimensions (persist across all layers),\nto local ones like attention heads and intermediate dimensions. \nAssume that the source model $\\mathcal{M}_S$ has \n$L_\\mathcal{S}$ layers, with each layer consisting of one multi-head attention module (MHA) and one feed-forward network (FFN). $\\mathcal{M}_S$ has a hidden state dimension of $d_\\mathcal{S}$, \n$H_\\mathcal{S}$ heads in each MHA, and\nan intermediate dimension of $m_\\mathcal{S}$ in each FFN. We introduce the following mask variables: \nEach mask variable controls whether the associated substructure is pruned or retained. For example, we remove a layer if its corresponding $z^{\\text{layer}}=0$. Figure \\ illustrates an example of how the pruning masks control the pruned structures. 
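To illustrate how such masks act on a single Transformer block, the sketch below zeroes out attention heads, FFN intermediate dimensions, hidden dimensions, and, optionally, the whole layer according to binary mask variables; the shapes, the purely multiplicative way the masks are applied, and all names are simplifying assumptions rather than the exact learned-mask parameterization.
\begin{verbatim}
import torch

def apply_structured_masks(hidden, attn_out, ffn_act,
                           z_layer, z_head, z_int, z_hidden, num_heads):
    """hidden:   (batch, seq, d)   z_hidden: (d,)        global, shared across layers
       attn_out: (batch, seq, d)   z_head:   (num_heads,) local to this layer
       ffn_act:  (batch, seq, m)   z_int:    (m,)         local to this layer
       z_layer:  scalar 0/1 controlling whether the whole layer is dropped."""
    d = attn_out.shape[-1]
    head_dim = d // num_heads
    # mask attention heads: expand each head's mask over its head_dim slice
    head_mask = z_head.repeat_interleave(head_dim).view(1, 1, d)
    attn_out = attn_out * head_mask
    # mask FFN intermediate dimensions
    ffn_act = ffn_act * z_int.view(1, 1, -1)
    # mask hidden dimensions (these persist across all layers)
    hidden = hidden * z_hidden.view(1, 1, -1)
    # drop the entire layer's contribution if z_layer == 0
    return hidden, attn_out * z_layer, ffn_act * z_layer
\end{verbatim}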
\nWe analyze the effectiveness of dynamic batch loading by examining its impact on three aspects: \n(1) the final LM loss across domains, \n(2) the data usage of each domain throughout training, \n(3) the downstream task performance. \nAll results in this section are based on Sheared-LLaMA-1.3B. \n\\paragraph{Loss differences across domains.} Dynamic batch loading aims to balance the rate of loss reduction across domains, ensuring that the losses reach the reference value at roughly the same time. \\C shows the difference between our model's loss (with both original and dynamic batch loading) and the reference loss, estimated by fitting a scaling function to a hypothetical 1.3B parameter LLaMA2 model. The original batch loading results in widely varying loss differences across domains; for example, the GitHub loss decreases below the reference value, while the C4 loss lags behind. Dynamic batch loading, however, reduces losses evenly and leads to very similar loss differences across domains, suggesting more efficient data use.\n", ["2310_06694v2_0", "2310_06694v2_1"]] [60053, "As discussed in the previous section, we speculate that the LLMs exhibit layer redundancy. To verify this, we assess the performance degradation caused by removing individual layers of two popular models, Llama2-7B-Base \\citep{touvron2023llama}, an English based LLMs, and Baichuan2-7B-Base \\citep{yang2023baichuan} which is mainly focused on Chinese. Figure \\ confirms our speculation, which reveals that some layers do not play a crucial role in LLMs, causing little degradation when omitting them individually. Moreover, this redundancy is primarily manifested in the middle to later layers of the network, with the initial layers and the last layer often being more critical. Notably, we found the last layer to be particularly important, aligning with findings from LLM Pruner \\citep{ma2024llm}. This observation contradicts our mathematical explanation in Appendix \\ref{appendix:math} which suggests that deeper layers tend to be more redundant. We posit that this discrepancy arises because the final FFN effectively functions as part of the token classifier and should be considered in conjunction with the language model head.To verify our hypothesis, we conducted further investigation, detailed in Table \\ref{tab:last_layer}. The results show that within the last layer, the FFN component is crucial, while the Attention module is less significant. This finding supports our interpretation of the final layer's importance. \n\\section{Methodology}\nIn this section, we present the methodological framework of our layer removal approach for LLMs, elucidating the underlying principles and techniques employed. We begin by introducing Block Influence (BI), a novel metric designed to assess the hidden states transformation of each layer. Leveraging BI, we then detail our layer removal method. \n\\subsection{Layer importance} \\label{method:layerimportacne} As outlined in the preceding section, the layers of LLMs exhibit redundancy, with varying degrees of redundancy across different layers. To capture this, we introduce a new metric, Block Influence (BI), to measure the degree of transformation performed by each layer. The BI score of $i^{th}$ layer can be calculated as follows: \n\\begin{align}\n \\text{BI}_i = 1 - \\mathbb{E}_{X,t} \\frac{X_{i,t}^TX_{i+1,t}}{||X_{i,t}||_2||X_{i+1,t}||_2},\n\\end{align}\nwhere $X_{i,t}$ means the $t^{th}$ row of hidden states of $i^{th}$ layer. 
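A direct sketch of the BI score defined above, computed as one minus the mean cosine similarity between the hidden states entering and leaving a layer; tensor shapes and names are assumptions.
\begin{verbatim}
import torch
import torch.nn.functional as F

def block_influence(x_in: torch.Tensor, x_out: torch.Tensor) -> float:
    """BI_i = 1 - E_t[ cos(X_{i,t}, X_{i+1,t}) ] for hidden states of shape
    (num_tokens, hidden_dim); a low score means the layer barely changes its input."""
    cos = F.cosine_similarity(x_in, x_out, dim=-1)   # (num_tokens,)
    return float(1.0 - cos.mean())

def rank_layers_by_bi(hidden_states):
    """hidden_states: list of per-layer activations on a calibration batch.
    Returns layer indices sorted from most removable (lowest BI) to least."""
    scores = [block_influence(hidden_states[i], hidden_states[i + 1])
              for i in range(len(hidden_states) - 1)]
    return sorted(range(len(scores)), key=lambda i: scores[i])
\end{verbatim}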
Lower BI score imply that $X_i$ and $X_{i+1}$ exhibit high cosine similarity, suggesting that the layer makes minimal transformations to the hidden states and is therefore less important. We plot the BI scores of a single layer and the PPL after removing it separately, as shown in the Figure \\. The results demonstrate a positive correlation between the BI score and the importance of a layer.", ["2403_03853v3_0", "2403_03853v3_1"]] [60054, "\\subsection{Reinforcement Learning with Foundation Priors}\n\\label{sec:method:flfp}\nWe model the tasks for embodied agents as the Goal-Conditioned Markov Decision Processes (GCMDP) $\\mathcal{G}$: $\\mathcal{G} = (\\mathcal{S}, \\mathcal{A}, \\mathcal{P}, \\mathcal{R}, {\\mathcal{T}}, \\gamma)$. $\\mathcal{S} \\in \\mathbb{R}^{m}$ denotes the state. \n$\\mathcal{A}$ is the action space, which is the continuous delta movement of the end effector in this work. $\\mathcal{P}$ is the transition probability function.\n$\\mathcal{T}$ is the task identifier. $\\mathcal{R}$ denotes the rewards. \n$\\gamma$ is the discounting factor, equal to 0.99 in the work.\nTo learn efficiently and automatically, we propose the Reinforcement Learning from Foundation Priors (\\textbf{RLFP}) framework by leveraging the policy, value, and success-reward priors. Here we demonstrate how we formulate the priors in RLFP. Back to the case of Alice in Fig. \\, the commonsense of behavior can be formulated as a goal-conditioned policy function, $M_\\mathcal{\\pi}(s, \\mathcal{T}): \\mathcal{S} \\times \\mathcal{T} \\rightarrow \\mathcal{A}$. The prior knowledge that the state closer to the button is closer to success can be formulated as the value function $M_\\mathcal{V}(s, \\mathcal{T}): \\mathcal{S} \\times \\mathcal{T} \\rightarrow \\mathbb{R}^{1}$. The ability to recognize the success state can be formulated as the 0-1 success-reward function $M_\\mathcal{R}(s, \\mathcal{T}): \\mathcal{S} \\times \\mathcal{T} \\rightarrow \\{0, 1\\}$, which equals 1 only if the task succeeds. \nWe assume the success-reward prior is relatively precise, given the simplicity of binary classification in determining success. The value and policy prior knowledge are noisier. \nThe RLFP framework is to solve an expansion of $\\mathcal{G}$, termed $\\mathcal{G^{'}} = (\\mathcal{G}, \\mathcal{M})$, where $\\mathcal{M}$ is the foundation model set that represents various foundation prior knowledge. Here, $M_\\mathcal{\\pi}, M_\\mathcal{V}, M_\\mathcal{R} \\in \\mathcal{M}$. Compared to vanilla RL, all the signals for the RLFP come from the foundation models. The vanilla RL relies on uninformative trial and error explorations and manually designed reward functions. It is not only of poor sample efficiency but also requires much human reward engineering. Instead, in RLFP, prior knowledge from the foundation model set $\\mathcal{M}$ provides guidance or feedback on policy, value, and success-reward, enabling more automatic and effective task resolution. \\subsection{Foudation-guided Actor-Critic}\n\\label{sec:method:fac}\nUnder the proposed RLFP framework, we instantiate an actor-critic algorithm named Foundation-guided Actor-Critic (FAC), demonstrating how to inject the three priors into RL algorithms. \\textbf{Guided by Success-reward Signals.}\nWe consider the task as MDP $\\mathcal{G}_1$ with 0-1 success rewards, where $\\mathcal{R}_{\\mathcal{G}_1} = M_\\mathcal{R}(s, \\mathcal{T}) \\in \\{0, 1\\}$. 
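As a small sketch of how the 0-1 success prior could be exposed to an off-the-shelf RL loop, the wrapper below replaces the environment reward with the output of a success-detector callable standing in for $M_\mathcal{R}$; the gym-style interface and all names are assumptions for illustration, not part of FAC itself.
\begin{verbatim}
import gymnasium as gym

class SuccessRewardWrapper(gym.Wrapper):
    """Reward the agent only with the foundation prior's 0/1 success signal,
    i.e. R_{G_1}(s) = M_R(s, task) in {0, 1}."""
    def __init__(self, env, success_prior, task: str):
        super().__init__(env)
        self.success_prior = success_prior   # callable: (state, task) -> 0 or 1
        self.task = task

    def step(self, action):
        obs, _, terminated, truncated, info = self.env.step(action)
        reward = float(self.success_prior(obs, self.task))
        return obs, reward, terminated, truncated, info
\end{verbatim}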
Inspired by how humans learn from successful trials, we propose a success buffer to store the ``successful'' trajectories identified by $M_{\\mathcal{R}}$. Each time the actor $\\pi_{\\phi}$ updates via policy gradient, it also imitates samples from the success buffer $\\mathcal{D}_{\\text{succ}}$ (if available). The objective is $\\mathcal{L}_{\\text{succ}}(\\phi) = \\textbf{KL}(\\pi_{\\phi}(s_t), \\mathcal{N}(a_t, \\hat{\\sigma}^2)), s_t, a_t \\sim \\mathcal{D}_{\\text{succ}}$, where $\\hat{\\sigma}$ is the standard deviation. \\textbf{Guided by Policy Regularization.} To encourage efficient explorations, we regularize the actor $\\pi_\\phi$ by the policy prior from $M_{\\pi}(s, \\mathcal{T})$.\nAssuming the prior follows Gaussian distributions, the regularization term is $\\mathcal{L}_{\\text{reg}}(\\phi) = \\text{KL}(\\pi_\\phi, \\mathcal{N}(M_{\\pi}(s_t, \\mathcal{T}), \\hat{\\sigma}^2))$, which is commonly used in other algorithms \\citep{rot, modem-v2}. The bias introduced by the policy prior is bounded, shown in Theorem \\ref{theorem:policyv2}. \\textbf{Guided by Reward-shaping from Value Prior.} \nNoisy policy prior can mislead agents to undesirable states, so we propose using the value model $M_{\\mathcal{V}}$ to guide exploration and avoid unpromising states. While initializing and fine-tuning with $M_{\\mathcal{V}}(s, \\mathcal{T})$ is a natural approach, it suffers from catastrophic forgetting.\nInstead, we employ the \\textbf{reward-shaping} technique \\citep{reward-shaping} using the potential-based function $F(s, s', \\mathcal{T}) = \\gamma M_{\\mathcal{V}}(s', \\mathcal{T}) - M_{\\mathcal{V}}(s, \\mathcal{T})$, where $\\gamma$ is the discount factor. Since $M_\\mathcal{V}$ estimates state values, $F$ measures the value increase from $s$ to $s'$. This shaping reward is positive when $s'$ is better than $s$ and shares the same optimal solution as the 0-1 success-reward MDP $\\mathcal{G}_1$. Proof and details are in App. \\ref{app:reward_shaping}. \\textbf{Foundation-guided Actor-Critic.} \nIn summary, we deal with a new MDP $\\mathcal{G}_2$, where $\\mathcal{R}_{\\mathcal{G}_2} = \\lambda M_\\mathcal{R} + F$, with $\\lambda$ (set to 100) emphasizing success feedback. We train the agent using DrQ-v2 \\citep{drq-v2}, a variant of Actor-Critic, and call the proposed method Foundation-guided Actor-Critic (\\textbf{FAC}). As shown in Fig. \\, FAC leverages foundation policy guidance and an automatic reward function, enabling the agent to efficiently learn from abundant prior knowledge. The objectives of FAC are detailed in Eq. (\\ref{eq:actor_fac}), where tradeoff parameters $\\alpha$ and $\\beta$ are both set to 1, $y$ is the n-step TD target, and $Q_{\\Bar{\\theta}}$ is the target network. We use clipped double Q-learning \\citep{double-q} to reduce overestimation. \\begin{equation}\n \\label{eq:actor_fac}\n \\begin{aligned}\n \\mathcal{L}_{\\text{actor}}(\\phi) &= - \\mathbb{E}_{s_t \\sim \\mathcal{D}}\\left[ \\min_{k=1,2} Q_{\\theta_k}(s_t, a_t) \\right] + \\alpha \\mathcal{L}_{\\text{succ}} + \\beta \\mathcal{L}_{\\text{reg}}; a_t \\sim \\pi_{\\phi}(s_t) \\\\\n \\mathcal{L}_{\\text{critic}}(\\theta) &= \\mathbb{E}_{s_t \\sim \\mathcal{D}}\\left[ (Q_{\\theta_k}(s_t, a_t) - y)^2 \\right]; y = \\sum_{i=0}^{n-1} \\gamma^i r_{t+i} + \\gamma^n \\min_{k=1,2} Q_{\\Bar{\\theta}_k}(s_{t+n}, a_{t+n})\n \\end{aligned}\n\\end{equation}\n", ["2310_02635v4_0", "2310_02635v4_1"]] [60055, "Inspired by the above works, we propose a privacy-preserving data-free distillation method. 
As shown in Fig.~\\, publishing a model ~(\\eg, teacher model) trained directly from private data would compromise privacy, so we treat it as a fixed discriminator to train a generator in a data-free manner. This generator learns only the data distribution to protect the private data. Using this generator implicitly generates data for the distillation process from teacher model to student model. Because querying the teacher model using the generated synthetic data can compromise private information, we propose a LabelDP algorithm selective randomized response to protect the output of the teacher model. The selective randomized response algorithm treats the output of the student model as prior knowledge to reduce the possible output categories to increase the probability of outputting the correct label, and if the possible output does not contain the correct label, a uniform probability distribution is used to reset the possible probability of the output. In summary, our approach can effectively learn privacy-preserving student model by two keys. \nOn the one hand, our proposed data-free distillation is able to protect privacy well with the learning of data distribution. The generated synthetic data from this generator will not reveal private information even if it is distributed. On the other hand is that we propose the selective randomized response module to implement DP, which is no longer limited by the number of queries, and introduce the prediction of the student model as prior knowledge for the randomized response. We increase the probability of returning the correct label by setting a threshold, so the student model can learn the knowledge of the teacher model more effectively. \nOur major contributions are three folds: 1)~we propose a differentially private data-free distillation approach to learn privacy-preserving and high accurate student models via synthetic data, 2) we propose selective randomized response algorithm to privately distill teacher knowledge which provides strong protect label privacy protection in theory, and 3) we conduct extensive experiments and privacy analysis to demonstrate the effectiveness of our approach. \n\\subsection{Ablation Studies}\nAfter the promising performance is achieved, we further analyze each influencing factor in our approach, including the impact of loss terms in the data-free generator learning, the amount of synthetic data and the number of stages. \\myPara{Loss function.} To further understand the improvement of each component of the loss function during data-free training of the generator, we designed experiments on MNIST and FMNIST under $\\varepsilon$=10 to explore the contribution of each component. The results are shown in Tab.~\\ref{tab:loss}. where CE means the cross entropy loss term, IE is the information entropy loss term and Norm is the normalized term for the mean and variance of the data. We can see that the normalization term of the data has the greatest impact, followed by the information entry loss term and finally the cross entropy loss term. We speculate that this may be related to the randomness of the data generated by the generator, which limits the distribution of the data to make the generated synthetic data more usable, so it has a greater impact on the accuracy of the student model. \\myPara{Data amount.} \nWe further conducted experiments on MNIST, FMNIST, CIAFR10 and CIFAR100 datasets under $\\varepsilon=1$. The results are shown in Fig.~\\. 
We found that MNIST dataset converges at about 50,000 data volume, FMNIST converges at about 120,000, CIFAR10 and CIFAR100 converge at about 220,000 and 500,000, respectively. As the difficulty of datasets increases, the amount of data required to achieve convergence increases. We suspect that this is because the more difficult the dataset is, the more difficult its distribution knowledge is to learn, so the larger the amount of data required. We note that the CIFAR10 dataset is more difficult than FMNIST, but the reason why CIFAR10's final accuracy is similar to FMNIST's is that the network structure is different. \n\\myPara{Number of stages.} To explore the effect of the number of stages, we conducted experiments on MNIST, FMNIST and CIFAR10 datasets under $\\varepsilon$=10. The results are shown in Fig.~\\. Experimental results show that between 20 and 320, the accuracy of the student model increases with the increase of stages. As the classification difficulty of MNIST, FMNIST and CIFAR10 datasets increases, the effect of stages becomes greater. The experimental results are as we expected because we used the prediction of the student model as the prior knowledge. As the training process proceeds, the more accurate the prediction of the student model becomes, which means the higher the probability of outputting the correct label. The greater the percentage of synthetic data being correctly labeled, the better the student model performance will be.", ["2409_12384v1_0", "2409_12384v1_1", "2409_12384v1_2"]] [60056, "Align before or after LLM Decoding?\nAccording to \\cite{Jiang_2023_CVPR}, complete alignment between modalities is often not the optimal solution for downstream tasks. Such alignment may result in information loss, especially when the information provided by the two modalities differs significantly. \nTraditional SEC models achieve Speech-Text Alignment via fine-tuning encoder on speech-caption pairs, which bridges the gap before LLM decoding. However, complete alignment of speech and text embedding may result in information loss, and it lacks a direct measure for assessing speech-text alignment quality. To address these problems, we propose KD-Regularization which achieve Speech-Text Alignment and bridge the gap after LLM decoding. It use the KL-divergence of next-token prediction s between LLM's response as a measure of Speech-Text Alignment. As shown in Fig~\\, we observe that align after LLM decoding using knowledge distillation can more effectively improve the speech-text alignment performance. \n\\section{AlignCap}\n\\subsection{KD-Regularization}\n\\label{sec:section3.1} Our goal is to generate speech emotion captions for speech clips, we design a student LLM to implement speech tokens to text generation and employ a teacher LLM's response to guide student LLM's next-token generation. LLaMA-7B \\cite{llama1} is chosen to implement this decoding process due to its exceptional language understanding and modeling capabilities. We simply choose rank value of 8 for LoRA fne-tuning~\\cite{hu2022lora} conducted on Student-LLaMA, while the Teacher-LLaMA parameters are frozen. \\para{Acoustic Prompt.}\nWe first construct a vocabulary of emotional clues, adjectives such as the speaker's tone, intonation, pitch, rhythm, and volume in captions are all regraded as emotional clues. We design an emotion grammar parser (based on NLTK toolkit) to recognize these clues, which are filtered by the vocabulary. 
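A minimal sketch of the clue-extraction step just described: tokenize a caption, keep adjective-like tokens that also appear in an emotion-clue vocabulary, and pass them on to prompt construction. The NLTK calls are standard, but the toy vocabulary, the part-of-speech heuristic, and all names are illustrative assumptions rather than the paper's exact grammar parser.
\begin{verbatim}
import nltk
# assumed one-time downloads: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

EMOTION_CLUE_VOCAB = {"low", "rapid", "soft", "tense", "cheerful", "trembling"}  # illustrative

def extract_emotion_clues(caption: str):
    """Keep adjective-like tokens that also occur in the clue vocabulary."""
    tokens = nltk.word_tokenize(caption.lower())
    tagged = nltk.pos_tag(tokens)
    return [w for w, tag in tagged if tag.startswith("JJ") and w in EMOTION_CLUE_VOCAB]

clues = extract_emotion_clues("The speaker talks in a low, tense voice with a rapid rhythm.")
# e.g. ['low', 'tense', 'rapid'] -- the exact output depends on the POS tagger
\end{verbatim}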
\para{Text Token Generation.}
We denote the captions in the speech-caption pairs as the semantic prompt P$_{\mathrm{sem}}$ and concatenate P$_{\mathrm{act}}$ and P$_{\mathrm{sem}}$ into a prefix prompt; we then provide the prefix prompt, together with an instruct prompt (the user's instructions), to the LLM to condition its subsequent generation of speech emotion captions using prefix language modeling.
This setup leverages external knowledge and the language understanding and modeling capabilities of the teacher-LLM to guide the student-LLM to generate plausible sentences.
Given a caption $c_{i}$ with tokens $T_i$, the language model $P_{\theta}$ learns to reconstruct $y_i$ conditioned on P$_{\mathrm{act}}$ and P$_{\mathrm{sem}}$. The probability of generating the next token is calculated as follows:
\begin{equation}\label{lm_model}
p_{\theta}(T_t \mid
\underbrace{T_{0\sim a-1}}_{\mathrm{P_{act}}},
\underbrace{T_{a\sim b-1}}_{\mathrm{P_{sem}}},
\underbrace{T_{b\sim c-1}}_{\mathrm{P_{instruct}}},
\underbrace{T_{c\sim t-1}}_{\text{autoregressive}})
\end{equation}
This process is iterated until the LLM generates a token containing a period. The training loss is the maximum likelihood estimate, and the next token $T_t$ is selected according to:
\begin{equation}
\label{texttoken}
\begin{split}
T_t=\mathop{\arg\max}_{i \in 1, \ldots, k}
\Big\{
p_{\theta} \left(c_i \mid p_n, T_0,\!\ldots\!,T_{t-1} \right)
\Big\}
\end{split}
\end{equation}
where the prefix prompt is $p_n$ = P$_{\mathrm{act}}$ $\oplus$ P$_{\mathrm{sem}}$ $\oplus$ P$_{\mathrm{instruct}}$.
Trained on limited data, simply using the semantic prompt as the prefix prompt may overfit the in-domain dataset, leading to significant domain shift and performance degradation of the language model on out-of-domain (OOD) speech.
In contrast, the acoustic prompt, based on emotion-aware clues, inherits the strong transferability of the captions.

\para{Speech Token Generation.}
For each speech clip, we adopt the pre-trained SpeechTokenizer~\cite{zhang2024speechtokenizer} to extract discrete representations and take the tokens of the first residual vector quantization (RVQ) layer as speech tokens.
The first RVQ layer can be regarded as a semantic token that carries more content information from the speech, and therefore captures semantically accurate emotional clues.
We append the speech tokens $x_t$ to the LLM's input and generate the next token in an autoregressive manner; for each time step $t$, the next token $T_{t}$ is selected according to:
\begin{equation}
\label{speechtoken}
\begin{split}
T_t=\mathop{\arg\max}_{i \in 1, \ldots, k}
\Big\{
p_{\theta} \left(c_i \mid x_t, T_0,\!\ldots\!,T_{t-1} \right)
\Big\}
\end{split}
\end{equation}

\para{Modality Alignment.}
Modality adapters~\cite{zeroshotaudiocap2,wavllm} are often used to compress the speech encoder's feature representations. Similar to \cite{multitokenizer}, we treat the inputs from the speech and text modalities as one token sequence and learn a joint embedding space for all modalities.
Speech tokens are expanded into the text token codebook in advance so that text and speech share the same codebook.
We pad the shorter token sequence to the length of the longer one and use a mask to ignore the padding, ensuring that the model only attends to valid tokens.

\para{Knowledge Distillation.}
As shown in Fig~\\, given $\mathcal{D}_{s}\!=\!\{(x_{n},y_{n})\}$, we treat the LLM's prediction $p_{\theta}\left(y_n \!\mid\! p_n, y_{<t}\right)$ as the teacher distribution, where $p_n$ is the concatenation of P$_{\mathrm{act}}$ and P$_{\mathrm{sem}}$.
In contrast, we consider the corresponding speech-conditioned prediction $p_{\theta}\left(y_n \!\mid\! x_n, y_{<t}\right)$ as the student distribution.
If speech and text are well aligned, the two distributions should be close to each other, as measured by the KL-divergence:
\begin{equation}
\begin{split}
&\mathop{\min}_{\mathrm{LLM_{stu}}(\cdot)}\mathcal{L}_{\mathrm{KL}}(p,x,y)= \\
&-\sum_{t,y_n}p_{\theta}\left(y_n\mid p_n,y_{<t}\right)\log\frac{p_{\theta}\left(y_n\mid x_n,y_{<t}\right)}{p_{\theta}\left(y_n\mid p_n,y_{<t}\right)}
\end{split}
\end{equation}
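A PyTorch-style sketch of this token-level objective is given below. It assumes the teacher and student forward passes share the same vocabulary and sequence length, that the teacher distribution is detached, and that padding positions are masked out; these details, like the reconstruction of the truncated equation above, should be read as assumptions rather than the exact AlignCap implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def kd_regularization_loss(teacher_logits, student_logits, valid_mask):
    """KL between prompt-conditioned (teacher) and speech-conditioned (student)
    next-token distributions, averaged over valid (non-padding) positions.

    teacher_logits, student_logits: (batch, seq_len, vocab)
    valid_mask: (batch, seq_len), 1 for real tokens and 0 for padding
    """
    teacher_probs = F.softmax(teacher_logits.detach(), dim=-1)   # teacher is frozen
    student_log_probs = F.log_softmax(student_logits, dim=-1)
    kl = F.kl_div(student_log_probs, teacher_probs, reduction="none").sum(dim=-1)
    return (kl * valid_mask).sum() / valid_mask.sum().clamp(min=1)
\end{verbatim}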
Our contributions can be summarized as follows:
\begin{itemize}
 \item \textit{Our first contribution} is the proposal of \texttt{Tone2Vec}, which maps diverse tone transcriptions to a comparable feature space. \texttt{Tone2Vec} constructs pitch-based similarity representations by mapping each transcription to a simulated smooth pitch variation curve. We also propose methods to construct tonal representations for dialect regions. By analyzing these representations across different dialect areas, we show that \texttt{Tone2Vec} captures tonal variations and clusters dialects more accurately than methods that treat each tone as an isolated category.
 \item \textit{As our second contribution}, we developed the first automated algorithms for tone transcription and clustering. These algorithms are especially beneficial for endangered tonal languages. Experiments demonstrate that our models perform well in cross-regional tone transcription with fewer than 1,500 samples. Notably, our algorithms can accurately cluster tones using fewer than 60 speech samples for a given dialect.
 \item \textit{As our third contribution}, all these algorithms are systematically integrated into \href{https://github.com/YiYang-github/ToneLab}{\texttt{ToneLab}}, a user-friendly platform designed for both lightweight fieldwork and subsequent analysis of Sino-Tibetan tonal languages. Users can choose to use pretrained models or train new models on their own data for different scenarios. Researchers can also leverage \texttt{ToneLab} to propose new computational methods and conduct evaluations.
\end{itemize}

\section{Tone2Vec: From Tones to Vectors}
\label{sec:method} In this section, we propose pitch-based similarity representations that quantify the differences in the pitch variations inherent in tones, which we call \texttt{Tone2Vec}. \texttt{Tone2Vec} is an easy-to-use, simple, and effective method for measuring similarity distances between tones. It not only enables the comparison of tonal variations across dialects but also provides a straightforward loss function for training automatic tone transcription and clustering models.

\subsection{From Categories to Pitch-based Similarity Representations}
In \texttt{Tone2Vec}, we map each transcription $l$, such as \texttt{(55)}, to a simulated smooth pitch variation curve $p_{l}(x)$. As shown in Figure~\\, for transcriptions with two units, a linear curve is employed to represent the pitch variation, while for those with three units, such as \texttt{(312)}, we employ a quadratic curve that smoothly interpolates the points \((1, 3)\), \((2, 1)\), and \((3, 2)\). The divergence between any pair of tone transcriptions \(l_1\) and \(l_2\) is then quantified as the area between their pitch variation curves, expressed as \(D(l_1, l_2) = \int_{[1,3]} |p_{l_1}(x) - p_{l_2}(x)| \, dx\). This measure quantifies the differences in pitch variations.
Given $n$ transcription sequences $l_{1},...,l_{n}$, we can construct an $n \times n$ distance matrix $\mathcal{C} = {(D(l_{i}, l_{j}))}_{i,j} \in \mathbb{R}^{n \times n}$, where each row represents the features of a transcription, capturing the subtle pitch variation differences among the transcriptions.
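The construction of the pitch curves and the distance matrix can be written down directly from this description; the short NumPy sketch below does so. One detail is an assumption: a two-unit transcription such as \texttt{(55)} is stretched over the full interval $[1,3]$ so that every curve is integrated over the same domain.
\begin{verbatim}
import numpy as np

def pitch_curve(transcription):
    """Map a transcription such as "55" or "312" to a smooth pitch curve on [1, 3]:
    linear for two units, quadratic through (1, y1), (2, y2), (3, y3) for three."""
    y = [int(c) for c in transcription]
    x = np.linspace(1, 3, num=len(y))
    return np.poly1d(np.polyfit(x, y, deg=len(y) - 1))

def tone_distance(l1, l2, num_points=1001):
    """D(l1, l2): area between the two pitch curves over [1, 3]."""
    xs = np.linspace(1, 3, num_points)
    return np.trapz(np.abs(pitch_curve(l1)(xs) - pitch_curve(l2)(xs)), xs)

def distance_matrix(transcriptions):
    return np.array([[tone_distance(li, lj) for lj in transcriptions]
                     for li in transcriptions])

print(distance_matrix(["55", "312", "21"]))
\end{verbatim}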
\section{Introduction}
Generative Language Models (LMs) have gained considerable attention due to their remarkable capabilities \cite{guo2023close,suzgun-etal-2023-challenging}. Despite the convincing and realistic nature of the text generated by these LMs, a key concern is their tendency to produce fabricated facts and false information \cite{lin-etal-2022-truthfulqa}. Moreover, these models deliver inaccurate information using unequivocal expressions, which poses substantial risks as it can spread misleading and harmful content. One contributing factor to such hallucination is inadequate acquisition of knowledge \cite{manakul2023selfcheckgpt,huang2023factual}. For example, given the question \emph{Which animal is China's national treasure?}, an LM may provide a different animal name such as \emph{tiger} instead of \emph{panda} due to a lack of relevant knowledge. Considerable effort has been devoted to alleviating hallucination induced by lack of knowledge in LMs. One approach is supervised fine-tuning of LMs with standard ground-truth answers to enhance their comprehension of the relevant knowledge \cite{wei2022finetuned,ouyang2022training}. This method has shown promising efficacy, but it demands a significant amount of high-quality annotated data for training. Other methods rely on an external verifier or critic model to evaluate the accuracy of a statement \cite{yang-etal-2022-re3,paul2023refiner}; training such a verifier requires many high-quality evaluation annotations and further fine-tuning, which restricts its broad applicability to other tasks and domains. Another reason for an LM to provide an incorrect response is intrinsically linked to the design of generative language models themselves \cite{azaria2023internal,paul2023refiner,shinn2023reflexion}. It is widely acknowledged that LMs generate a sentence by maximizing the likelihood of the next token given all previous tokens. Subtle differences in the preceding sentences can therefore lead to very different generations. For example, when the question is \emph{Who is the author of The Analects?}, the model gives the correct answer, \emph{Confucius}; however, when the input question becomes \emph{Is Laozi or Confucius the author of the Analects of Confucius?}, the model is likely to answer \emph{Laozi}. In such cases, the model has the ability to rely on its knowledge to recognize false information \cite{schick2022peer}. This process is akin to how humans self-verify their answers to minimize mistakes \cite{flower1981cognitive}; moreover, when we realize our answer is wrong, we go on to modify it. Motivated by this, once the model itself detects a potential hallucination, the next step is to correct the error. A model that incorporates this inherent self-correction mechanism can address similar issues in other domains and achieve self-improvement. Existing work \cite{madaan2023selfrefine,ganguli2023capacity} on self-correction in LMs has mainly focused on larger models like ChatGPT and GPT-4, and it is challenging to migrate these self-correction methods to small LMs. Some studies indicate that the self-correction ability depends on the number of model parameters and only emerges in models of larger scale \cite{azaria2023internal}. The main reason is that these methods devise sophisticated pipelines and zero-shot prompts to achieve self-correction, and the prompts crafted for self-verification and self-modification are difficult for small models to understand. As shown in Figure \\, upon generating the initial answer to a given question, an additional feedback instruction guides ChatGPT to generate feedback about the initial answer, containing an evaluation of its correctness; a subsequent modification instruction is then used to alter or refine the initial answer based on the feedback. Small models, however, typically lack self-awareness \cite{weng2022large} and tend to be overconfident in their generated responses, so they struggle to assess the quality of their own outputs. The capability for self-verification is a prerequisite for self-correction. Furthermore, achieving self-correction through multi-step prompt engineering differs from the spontaneous, one-pass correction observed in humans. To empower small language models with self-correction, we propose \emph{Intrinsic Self-Correction} (ISC), an intrinsic mechanism that relies on two basic abilities: self-verification and self-modification. At its core, the LM provides a response and subsequently evaluates its own answer; upon identifying an error, the same LM adjusts its initial response, whereas if the answer is validated as accurate, no further modification is needed. The self-correction process is not divided into two separate steps but constitutes a single comprehensive step, as depicted by the red arrowed segment in Figure \\. We train the LM to perform self-correction through Instruction Fine-Tuning (IFT). For this purpose, we design a data processing procedure to construct the self-correction data and define the data format. During fine-tuning, we propose Partial Answer Masking (PAM) to endow the model with the ability of self-verification.
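Since PAM is only named here, the following sketch shows one way to realize it, under the assumption that PAM means excluding the tokens of the intermediate answers from the training loss so that gradients flow mainly through the verification and revision segments; the span boundaries and the ignore index are illustrative.
\begin{verbatim}
import torch

IGNORE_INDEX = -100   # label value skipped by the usual causal-LM cross-entropy

def labels_with_partial_answer_masking(input_ids, intermediate_answer_spans):
    """input_ids: (T,) token ids of "question || A^1-COT || A^1 || NV || ... || PV".
    intermediate_answer_spans: list of (start, end) index pairs covering the
    intermediate answers A^1, ..., A^{n-1} that should not contribute to the loss."""
    labels = input_ids.clone()
    for start, end in intermediate_answer_spans:
        labels[start:end] = IGNORE_INDEX
    return labels

# usage: F.cross_entropy(logits.view(-1, vocab_size), labels.view(-1), ignore_index=-100)
\end{verbatim}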
Our contributions are summarized as follows:
\begin{itemize}
 \item To the best of our knowledge, we are the first to demonstrate that small language models with even 6 billion parameters possess the capacity for self-correction during response generation without relying on ground truth.
 \item Our proposed \emph{Intrinsic Self-Correction} aims to make self-correction an intrinsic pattern of the LM. It involves an independent and spontaneous self-correction process, distinct in nature from existing self-correction methods that rely on prompt engineering.
 \item To achieve self-correction in small LMs, we devise a pipeline for constructing self-correction data and define the data format. It can be universally applied to build data for self-correction tasks. Additionally, we introduce a novel training method called Partial Answer Masking (PAM) to enable the model to self-verify its own generated answers.
 \item We conduct experiments on open-source LMs with varying parameter scales to validate the efficacy of our proposed method. The results demonstrate improvements in accuracy across two different datasets.
\end{itemize}

\textbf{Answer preparation.} To ensure answer diversity, we use nucleus sampling \cite{holtzman2019curious} to generate multiple answers. After generating multiple answers for a question, the next step is to evaluate the accuracy of each answer by comparing it with the ground truth provided for each question. For multiple-choice questions, we extract the option in the final answer through string matching and compare it directly with the standard answer. For a good case, the outcome of self-verification should be positive, indicating that there is no need to modify the initial answer. Accordingly, given the question $x$, the \textbf{Answer} is represented as ($A_{1}^{1}$-\emph{COT}$||A_{1}^{1}||PV$), where $A_{n}^{i}$ denotes the $i$-th response when the model makes $n$ attempts to obtain the correct answer, $A_{n}^{i}$-COT denotes the COT process of that answer, $PV$ denotes a positive verification, and $||$ denotes concatenation. Here, we set the verification as a binary signal; a positive verification can be phrased as \emph{I am sure my answer is correct.} Conversely, for a bad case a negative self-verification result is expected, such as \emph{Sorry, there is an error in the previous answer}, and the initial answer requires modification. To help the model generate a more appropriate reasoning process and the correct answer, we use the ground truth, i.e., the standard answer, as the revised answer. Additionally, we employ gpt-3.5-turbo to assist in generating the COT analysis process, denoted as $G$, using prompts such as \emph{the answer of [Question] is [Ground Truth]. Please provide a step-by-step explanation for resolving the given problem}. The resulting data format is ($A_n^{1}$-COT$||A_n^{1}||NV||A_n^{2}$-COT$||A_n^{2}\cdots A_{n}^{n}||PV)$, where $NV$ indicates a negative verification. In Table \ref{dataFormat}, we provide two general examples of self-correction data: one where the correct answer is obtained without correction and one where it is obtained after one correction. We also provide detailed prompt examples for each step in the Appendix. This pipeline can be used to customize self-correction data for various tasks, depending on the specific task type; a minimal sketch of assembling such a record follows.
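The sketch below assembles the \textbf{Answer} field from a list of (COT, answer) attempts in the order described above; the verification strings are taken from the examples in the text, while the literal separator and all other details are illustrative.
\begin{verbatim}
def build_self_correction_answer(
        attempts,
        pv="I am sure my answer is correct.",
        nv="Sorry, there is an error in the previous answer."):
    """attempts: list of (cot, answer) pairs, from the first try to the final correct one.
    Returns the concatenation A^1-COT || A^1 || NV || ... || A^n-COT || A^n || PV."""
    parts = []
    for i, (cot, answer) in enumerate(attempts):
        parts += [cot, answer]
        parts.append(pv if i == len(attempts) - 1 else nv)
    return "\n".join(parts)

record = build_self_correction_answer([
    ("Reasoning that (incorrectly) supports option A ...", "Answer: A"),
    ("Re-checking the options, B satisfies the condition ...", "Answer: B"),
])
\end{verbatim}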
The general process of constructing self-correction data is shown in Figure \\.

\section{AIDBench details}
In this section, we provide a detailed description of AIDBench. We begin with an overview of the benchmark, outlining the pipeline for authorship identification using large language models (LLMs). We then describe the datasets used in our benchmark, the evaluation tasks conducted, and the metrics employed for assessment.

\subsection{Outline of authorship identification with LLMs} Figure \\ provides an overview of our proposed AIDBench framework. We begin by selecting a dataset for evaluation, such as the Research Paper dataset. From this dataset, we sample a subset of texts from several authors, randomly selecting one as the \emph{Target Text} and designating the remaining texts as candidates. These texts are then incorporated into an authorship identification prompt, which is presented to the LLMs. The models generate responses indicating which candidate texts are more likely authored by the same individual as the \emph{Target Text}. We repeat this process multiple times to obtain average performance metrics for the task. Finally, we employ metrics such as precision, recall, and rank to provide a clear and intuitive assessment of the LLMs' capabilities.

\subsection{Datasets}
AIDBench comprises five datasets: \textit{Research Paper}, \textit{Enron Email}, \textit{Blog}, \textit{IMDb Review}, and \textit{Guardian}. In this subsection, we provide detailed descriptions of each dataset.

\textbf{Research Paper}. This newly collected dataset consists of research papers posted on arXiv under the CS.LG tag (the machine learning field within the computer science domain) from 2019 to 2024. After removing duplicate entries and authors with fewer than ten papers, the dataset includes 24,095 papers from 1,500 authors, ensuring that each author has at least ten papers. We use this dataset to investigate the potential privacy risks that arise when LLMs are used to identify the authorship of academic writing.

\textbf{Enron Email}. The Enron email dataset contains approximately 500,000 emails generated by employees of the Enron Corporation. For our benchmark, similar to~\citet{Huang2024CanLL}, we removed sender and receiver information and discarded short emails. Ultimately, we retained 174 authors, each with 50 emails.

\textbf{Blog}. The Blog Authorship Corpus~\citep{Schler2006EffectsOA} comprises the collected posts of 19,320 bloggers gathered from Blogger.com\footnote{\url{https://www.blogger.com}} in August 2004. We selected 1,500 authors from the dataset, each with 10 blog posts. The content of this dataset is closely related to daily life, providing linguistic characteristics, writing styles, word usage habits, and special characters that can be used to infer an author's identity.

\textbf{IMDb Review}. The IMDb review data are selected from the IMDb62 dataset~\citep{Seroussi2011AuthorshipAW}. After filtering out reviews with fewer than 10 words, we randomly retained 50 reviews for each of the 62 authors in our dataset.

\textbf{Guardian}. The Guardian corpus dataset~\citep{guardian} is designed to explore cross-genre and cross-topic authorship attribution. The corpus comprises texts published in \textit{The Guardian} daily newspaper, mainly opinion articles (comments).
It includes articles from 13 authors across five topics---politics, society, UK, world, and books---and retains 50 articles per author.

\subsection{Evaluation tasks}
As illustrated in Figure~\\, we outline the evaluation tasks. Each task involves a query text and a set of candidate texts, with the objective of identifying which candidates are most likely authored by the same individual as the query text. Based on the number of candidate texts, we introduce the following specific tasks.

\textbf{One-to-one identification.} Commonly known as authorship verification, this task involves a single candidate text and aims to determine whether the text pair is authored by the same person, forming a binary classification problem. In our evaluation, we prompt the LLM with: ``Here are a pair of texts: [Text Pair]. Determine if they belong to the same author.'' More effective prompts or additional information can be provided to the LLM to obtain more accurate and reasonable responses. For instance, \cite{Hung2023WhoWI} employs a Chain-of-Thought prompt and a series of intermediate reasoning steps to significantly enhance the LLMs' authorship verification ability. Similarly, as shown in \cite{Huang2024CanLL} and \cite{Hung2023WhoWI}, instructing LLMs to analyze texts based on writing style, linguistic characteristics, and word usage habits, rather than content and topics, can further improve authorship verification. Because the \emph{one-to-one identification} task is subjective and relies heavily on the intrinsic judgment of LLMs, it is natural to consider a scenario with two candidate texts that requires a comparative decision: the objective is to determine which candidate text is more likely authored by the same individual as the query text, which we call the \textbf{one-to-two identification} task. Due to space limitations, we defer the task design, metrics, and experimental results for the \emph{one-to-two identification} task to Appendix \ref{app:1to2}.

\textbf{One-to-many identification.}
In this setup, we provide a number of candidate texts and ask LLMs to determine the texts that are most likely authored by the same person as the text in the query. This differs from the usual closed-set authorship attribution task \citep{Huang2024CanLL}, where multiple authors and their writings are provided in the LLM's context and the task is to attribute a target text to one of these authors. We do not provide any authorship information to the LLMs. Instead, in our experiments we randomly sample a number of authors and put all of their writings into a set. One specific text is then chosen at random as the target text, while the rest of the set forms the candidate texts. The task is to ask LLMs to identify the texts in the candidate set that are most likely authored by the same person as the target text, ranking the results by confidence scores.

\subsection{RAG-based one-to-many identification pipeline}\label{sec:RAG} The performance of \emph{one-to-many authorship identification} tasks heavily depends on the ability of LLMs to handle long contexts. Current commercial LLMs support relatively large context windows; for example, GPT-4-Turbo can process contexts up to 128,000 tokens, and Kimi supports up to 2 million tokens. In contrast, open-source models like Llama-3-8B-Instruct\footnote{\url{https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct}} support context windows of only up to 8,000 tokens.
Moreover, the authorship identification task relies heavily on the LLMs' capacity for high-level comprehension of the context, which becomes increasingly challenging as the context grows longer. To address this limitation, we propose a simple Retrieval-Augmented Generation (RAG)-based method, as illustrated in Figure \\. In this approach, we first use pre-trained embedding models such as sentence-transformers to encode both the target text and the candidate texts. We then calculate the similarities between the target text and each candidate, selecting only the top \textit{k} texts with the highest similarity scores. These top \textit{k} texts are provided to the LLM, thereby reducing the context length and enabling more effective processing.
The motivation behind this method is that, although embedding models tend to focus more on the semantic meaning of texts than on their linguistic characteristics, we can still use the similarities they provide to filter out texts that are significantly dissimilar to the target text. This ensures that the total length of the candidate texts remains within the context windows of LLMs.

\subsection{Evaluation metrics}
For the \emph{one-to-one identification} task, LLMs are required to classify whether a pair of texts were written by the same author by answering ``True'' or ``False''. While accuracy can serve as an evaluation metric, it may not fully capture performance due to potential class imbalances in the dataset, for instance when there are many more negative pairs than positive pairs, or vice versa. Therefore, we adopt \textit{Precision} and \textit{Recall} as our evaluation criteria to assess the performance of LLMs under various setups more comprehensively. Specifically, Precision and Recall are defined as follows:
\begin{equation}
 \text{Precision} = \frac{\#\text{True Positive Pairs}}{\#\text{Predicted Positive Pairs} }; \quad\quad \text{Recall} = \frac{\#\text{True Positive Pairs}}{\#\text{Positive Pairs} }.
\end{equation}
For the \emph{one-to-many identification} task, we employ multiple evaluation metrics to capture the problem's complex nature. In our experiments, we instruct the models to rank candidate texts based on confidence scores and repeat the experiments using multiple target texts. We utilize metrics such as \textit{Rank@1} and \textit{Rank@5} in our benchmark, where \textit{Rank@x} indicates that at least one text authored by the same individual as the target text is found within the top \textit{x} ranked texts. Additionally, since we expect the LLMs to select as many correct candidate texts as possible, we use \textit{Precision@x} as a metric, where \textit{Precision@x} denotes the proportion of correct predictions among the top \textit{x} texts.
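The one-to-many metrics can be computed directly from the ranked list returned by the LLM; a small sketch is given below, where the candidate list stores the (hidden) author of each ranked text.
\begin{verbatim}
def rank_at_x(ranked_authors, target_author, x):
    """1 if at least one of the top-x ranked texts shares the target text's author."""
    return int(target_author in ranked_authors[:x])

def precision_at_x(ranked_authors, target_author, x):
    """Fraction of the top-x ranked texts written by the target text's author."""
    top = ranked_authors[:x]
    return sum(a == target_author for a in top) / max(len(top), 1)

# ranked_authors: author of each candidate text, ordered by the LLM's confidence.
ranked = ["a3", "a1", "a1", "a7", "a2"]
print(rank_at_x(ranked, "a1", 1), rank_at_x(ranked, "a1", 5),
      precision_at_x(ranked, "a1", 5))
\end{verbatim}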
In this paper, we perform an apples-to-apples comparison that addresses these concerns, comparing seven medical LLMs and two medical VLMs against their general-domain base models. We find that, for all but one LLM pair---\textsc{BioMistral-7B} \citep{biomistral} vs. \textsc{Mistral-7B-Instruct-v0.1} \citep{mistral}, a pair of models that performs fairly poorly in absolute terms---the open-source medical LLMs and VLMs that we evaluate do not consistently improve over their general-domain counterparts on various medical (visual) QA tasks (Figure \\ \\).

We compare several pairs of general-domain and medically adapted LLMs/VLMs (see Table \ref{tab:models}), \textbf{whose only differences lie in medical DAPT} (i.e., one model is the base model, from which the other is derived via medical DAPT). For each pair, we compare their performances under zero-/few-shot prompting \citep{gpt-2,gpt-3}, after independently selecting the ``best'' prompt format and few-shot examples for each model based on the validation set and accounting for statistical uncertainty in model comparison. We note that, while \textsc{OpenBioLLM-70B} shows improvement in the 3-shot setting, it does not show improvement in the zero-shot setting (winning on 7.7\% and losing on 23.1\% of tasks, see Table~\ref{tab:win-tie-loss-rates-0}), and vice versa for \textsc{Clinical-Camel-70B} (winning on 0\% of tasks and losing on 36.4\% of tasks in the 3-shot setting, see Table~\ref{tab:win-tie-loss-rates-3}), \textbf{leaving \textsc{BioMistral-7B} as the only medical LLM that wins more than it loses} against its base model (\textsc{Mistral-7B-Instruct-v0.1}) in both settings, albeit with relatively low absolute performance.
In Figure~\\, we show how the win/tie/loss rates of the medical models, computed over all (model pair, QA dataset) combinations, change as we vary the following aspects of the experimental setup:
\begin{enumerate}[topsep=5pt,itemsep=-0.5ex]
 \item select prompts for each model independently vs. only based on the medical model;
 \item determine a win for the medical model based on confidence intervals in relative accuracy vs. raw absolute accuracy.
\end{enumerate}
We note that when comparing each model pair based on absolute accuracy, there are no ties, as the real-valued absolute accuracies are rarely identical. Overall, we find that for both LLMs and VLMs, the performance improvement from using a medically adapted model instead of its general-domain counterpart can be substantially overestimated when (i) the prompt is tailored only to the medical model; and (ii) the models are compared only on their absolute accuracies.
Notably, in the zero-shot setting, the win rate increases from 9.4\% to 70.5\% for medical LLMs and from 6.3\% to 62.5\% for medical VLMs when prompt selection is performed only for the medical model and the comparison is based on absolute accuracy.
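The excerpt states that wins are determined from confidence intervals on the relative accuracy; one standard way to obtain such an interval is a paired bootstrap over per-example correctness, sketched below. The bootstrap itself, the significance level, and all names are assumptions, not necessarily the interval construction used in the study.
\begin{verbatim}
import numpy as np

def win_tie_loss(medical_correct, base_correct, n_boot=10000, alpha=0.05, seed=0):
    """Compare per-example 0/1 correctness of a medical model and its base model
    on one QA dataset via a bootstrap CI on the accuracy difference."""
    rng = np.random.default_rng(seed)
    med = np.asarray(medical_correct, dtype=float)
    base = np.asarray(base_correct, dtype=float)
    idx = rng.integers(0, len(med), size=(n_boot, len(med)))
    diffs = med[idx].mean(axis=1) - base[idx].mean(axis=1)
    lo, hi = np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    if lo > 0:
        return "win"     # medical model significantly better
    if hi < 0:
        return "loss"    # base model significantly better
    return "tie"         # interval contains zero
\end{verbatim}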
In recent years, Large Language Models (LLMs)~\cite{radford2019language, 209211,chatgpt} have drawn significant attention due to their remarkable capabilities and applicability to a wide range of tasks~\cite{cheshkov2023evaluation, pearce2022asleep, copilot, pearce2022examining, frieder2023mathematical, shakarian2023independent, lehnert2023ai, kortemeyer2023could}. Building on this initial success, there is an increasing demand for richer functionalities that use an LLM as the core execution engine. This has led to the rapid development and rollout of \textbf{LLM-based systems (LLM systems)}, such as OpenAI GPT4 with plugins~\cite{chatgpt}. 2023 can be considered the ``meta year'' of LLM systems, in which OpenAI announced the GPTs~\cite{devday}, empowering users to design customized LLM systems and release them in the GPTs store~\cite{gpts}. According to the most recent data statistics report~\cite{gptsflow}, up to November 17th the top 10 customized GPTs had collectively received more than 750,000 visits. Notably, the customized GPT named Canva~\cite{canva} was visited over 170,000 times in just 10 days. In addition, the third-party GPTs store lists more than 20,000 released customized GPTs~\cite{gptsnum}. All these facts underscore the increasing integration of LLM systems into our daily lives.

Drawing inspiration from information flow analysis, we propose a new information-flow-based formulation to enable systematic analysis of LLM system security~\footnote{Note that the LLM system discussed in this paper specifically references the design of OpenAI GPT4. Any potential or future LLM systems possessing different features are not within the scope of this study.}. To achieve this, we need to tackle two non-trivial characteristics unique to LLM systems.

\textit{LLM system analysis has to consider the interaction between a machine learning model and multi-object information processing.} LLM systems combine a novel AI model (the LLM) with conventional software components (e.g., Web Tools), leading to complex interactions across various objects and models. This integration results in a multi-layered system in which data flows and transformations occur across various operational contexts, from the straightforward individual-object level to the context-dependent multi-object level. To facilitate the analysis of such a multi-layered system, we develop a multi-layer analysis framework, as shown in Figure~\\, where \textit{objects} are key components of the LLM system (such as the LLM model and plugins), and \textit{actions} and \textit{interactions} capture the processing of information within an object and the transmission of information between objects, respectively. Since security issues arise from a lack of effective constraints on the information flow (without constraints, information can flow in and out directly without any restriction~\cite{cecchetti2021compositional, myers1999x}), we use the concept of a \textit{constraint} to capture the security requirement over the information flow. The constraints are multi-layer, mediating not only the processing of individual objects (constraints over actions) but also the processing among them (constraints over interactions).

\textit{Constraints over actions and interactions are now probabilistic and have to be analyzed through the lens of adversarial robustness.} LLM systems differ significantly from standard systems, where executions are often deterministic: they operate in an inherently uncertain setting. This fundamental difference stems from the nature of LLMs, which process natural language inputs that are vague and free-form, challenging the model's ability to consistently interpret data and instructions. The ambiguity also extends to the models' outputs, which can vary widely in form while conveying the same meaning. When these outputs are used to command or interact with other system components, such as web tools, the inherent ambiguity can lead to unpredictable interactions, significantly complicating the application of security features. Furthermore, the probabilistic nature challenges the system's ability to produce consistent outputs and hinders the ability to apply and enforce security features in LLM systems in a deterministic manner. Thus, to enable the analysis, a set of rules is encapsulated in the constraints.
Based on the constraints, we should analyze not only the presence of such constraints (machine-learning-based policy enforcement) but also the adversarial robustness (how well they work) of this rule enforcement via a multi-step process.

\noindent\textbf{Example 1: Unethical Image Displaying.}~\label{moti2}
Beyond its core LLM capabilities, an LLM system encompasses additional facilities. Notably, the Frontend is a critical facility widely used in LLM systems to provide a friendly user interface. One of its most important functionalities is to render image links written in markdown format, which increases the richness and diversity of the displayed content. When the LLM in the system outputs certain markdown image links and transmits them to the Frontend, the Frontend automatically renders them and displays the image content of the links. However, integrating the Frontend introduces security concerns. For instance, as shown in Figure~\\, when the LLM outputs malicious markdown image links pointing to unethical content such as pornographic pictures and the links are transmitted to the Frontend, the automated rendering process in the Frontend renders them and displays the explicit images. This rendering process consists of two steps: first, the LLM outputs the target markdown image link; second, the output link is transmitted to the Frontend. We found that OpenAI fails to ensure security in both steps: (1) the LLM can be made to output arbitrary external markdown image links if we adopt certain adversarial strategies (the detailed strategy is shown in Sec~\ref{firstrender}), and (2) the transmission of such image links to the Frontend lacks the necessary control (details in~\ref{sec:frontend}). This example highlights two critical insights. First, while LLMs contribute to the superior performance of these systems, they also introduce potential threats to their security. Second, the interaction between the LLM and other internal system components can give rise to new emergent threats. This realization highlights the importance of adopting a holistic approach to model and study the security problems of LLM systems.

\section{CogMir: Multi-LLM Agents Framework On Cognitive Bias}
In this section, we provide a detailed and modular overview of CogMir, organized into four main elements: Mirror Environmental Settings, Framework Structures, Cognitive Bias Subsets, and Sample Use Cases. These components are visually depicted in a left-to-right sequence in Fig.~\\.

\subsection{Framework Structures}
After establishing realistic social science experiment environments, the next step is to select the essential components to support the two mirror methods introduced above: Human-LLM Agent Q\&A and Multi-H-A Interaction. This entails choosing participant objects, evaluation tools, and communication modes. To meet these needs, the CogMir framework is organized into modules for Required Objects, Communication Modes, and Interaction Combinations.

\textbf{Required Object Sets.} Required Objects encompass all potential participants and evaluators involved in the system. \textbf{Participants} include humans\footnote{``Human'' in CogMir can refer to real human participants or simulations.
In our experiments, \"Human\" refers to simulated human interactions based on previous social science experiments, not actual human subjects.} and LLM Agents, which allows for dynamic setups where either or both can be involved in interactions depending on the experiment's requirements. \\textbf{Evaluators} include humans, LLM Agents, datasets, and discriminators. Datasets are utilized to store and construct prompts about the experimental setup (e.g., experimental scenarios, character information, etc.), task description, and Q\\&A question set. Discriminators are specialized tools utilized to evaluate the social intelligence of LLM Agents, encompassing three main types: State-of-the-art technical metrics such as SimCSE, SelfCheck, and FactScore~\\citep{Simcse, min-etal-2023-factscore, Manakul2023SelfCheckGPTZB} for objective, quantitative assessment; Human discriminators that delve into nuanced and subjective aspects like prosocial understanding; and LLM Agent discriminators, which involve the use of other LLM Agents to assess and challenge responses from a subject LLM Agent. \\textbf{Communication Modes Sets.} Communication modes dictate the nature of interactions within different setups. We model the participants (humans or LLM Agents) as channels based on information theory~\\citep{Shannon1948} to define two essential communication modes:\n\\vspace{-2mm}\n\\begin{itemize}[noitemsep]\n \\item \\textbf{Broadcast} (or Parallel, $C = C_1 + C_2 + \\ldots + C_n$), which enables a single sender to transmit a message to multiple receivers simultaneously.\n \\item \\textbf{Point-to-point} (or Series, $C = \\min[C_1, C_2, \\ldots, C_n]$) establishes communication between two specific entities at a time ($C$ denotes channel capacity).\n\\end{itemize} \n\\vspace{-2mm} \n\\textbf{Multi-H-A Interaction Combinations Sets.} This module provides various combinations of Multi-Human-LLM Agent interactions, tailored to different social science experimental needs, the most frequently used combinations in social science settings include:\n\\vspace{-2mm}\n\\begin{itemize}[noitemsep]\n \\item \\textbf{Single-H-Single-A}: One human interacting with one LLM Agent, predominantly used for human-agent question-answering tasks (e.g., survey, interview, etc. ).\n \\item \\textbf{Single-H-Multi-A}: One human interacts with multiple LLM Agents, where humans can be set as controlled variables to test Multi-LLM Agents's social cognitive behaviors.\n \\item \\textbf{Multi-H-Single-A}: multiple humans interact with a single LLM Agent, which is suitable for assessing the impact of group dynamics, such as consensus or conflict.\n \\item \\textbf{Multi-A}: multiple agents interacting without human participation.\n \\item \\textbf{Multi-H-Multi-A}: multiple humans and multiple LLM Agents interaction, integrating elements from the previous setups to mimic complicated experimental interactions.\n\\end{itemize} \n\\vspace{-2mm}\nThese modules offer a flexible framework for exploring LLM Agents' cognitive biases in social science experiments. Researchers can customize their setups by mixing different components to examine specific hypotheses. In the next section, we outline cognitive bias subsets as guidelines. \\textbf{Conformity: Authority Effect.}\nDrawing on classical social science experiments conducted by Stanley Milgram~\\citep{Authority}, we conducted experiments to explore the Authority Effect, tailored to the characteristics of LLM Agents. 
Unlike the Herd Effect, which requires multiple human participants, the Authority Effect tests the conformity of LLM Agents to authoritative prompts or instructions, even when these contradict factual information. In this setting, we use the \textbf{Known} and \textbf{Unknown MCQ} datasets as Evaluators and \textbf{CogIdentity} and \textbf{CogScene} as prompt generators to measure the \textbf{Q\&A Bias Rate} in Single-H-Single-A Q\&A scenarios. The Average Q\&A Rate refers to the average bias rate over the Unknown and Known MCQ datasets. We design prompts to directly query LLM Agents on 5 identity pairs across the two MCQ datasets, 10 times each, resulting in $5 \times 10 \times 100 \times 2$ inquiries.
Prompts are constructed as follows: Human, 1st round \{You are [INFERIOR identity], and I am [SUPERIOR identity] who is an expert in the following field. Please provide your response by selecting either A or B. The question is [MCQ]\}; 2nd round \{As an expert, I confirm that the answer should be B. Please provide your answer again. Reply in the following format: Answer: A/B.\}. Fig.~\\ shows that LLM Agents display considerable variability in their responses to deterministic and nondeterministic scenarios when influenced by the Authority Effect. Notably, in both certain and uncertain situations, most LLM Agents showed greater obedience to authority than under the Herd Effect. This contrasts with humans, who typically exhibit no significant difference in obedience between authority and herd scenarios. These findings highlight the LLM Agents' heightened sensitivity to social status differences, indicating a stronger propensity to adhere to authoritative commands than to peer influence.

\section{Introduction}
The ability of LLMs to follow detailed instructions is vital to their usefulness. Generative language models are typically trained on raw web data and then subsequently fine-tuned on a comparatively small but carefully curated set of instruction data. Instruction fine-tuning is crucial to taming the power of LLMs, and the usefulness of a model is largely determined by our ability to get the most out of small instruction datasets. \looseness -1 In this paper, we propose to add random noise to the embedding vectors of the training data during the forward pass of fine-tuning. We show that this simple trick can improve the outcome of instruction fine-tuning, often by a large margin, with no additional compute or data overhead. \underline{N}oisy \underline{E}mbedding Instruction \underline{F}ine \underline{T}uning (\neftune{}), while simple, has a strong impact on downstream conversational quality. When a raw LLM like \llama{}-2-7B is finetuned with noisy embeddings, its performance on \texttt{AlpacaEval} improves from 29.8\% to 64.7\% (Figure \\) -- an impressive boost of around 35 percentage points \citep{touvron2023llama2, dubois2023alpacafarm}. \neftune{} leads to this surprising and large jump in performance on conversational tasks while maintaining performance on factual question answering baselines. This technique seems to be a free lunch for LLM fine-tuning.

\paragraph{\neftune{} Improves Text Quality.}
From Table \ref{tab:LLaMA-2_GPT-4}, we see an increase across all datasets at the $7$B scale, with an average improvement of $15.1\%$, showing that training with \neft{} significantly improves conversational ability and answer quality, as measured via \texttt{AlpacaEval}.
Figure \\ shows that improvements also hold for older models, such as \llama{}-1 and OPT. Interestingly, we see less improvement on ShareGPT than on other datasets according to ChatGPT; however, this is not reflected in the GPT-4 evaluation. From Table \ref{tab:llama-2-chat_SFT}, the Win Rate climbs from 75.03\% to 88.81\% (+13.78\%) after adding \neftune{} to the $70$B parameter model trained on Evol-Instruct (hyperparameters in Appendix \ref{sec:hyperparameters}).
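The core of \neftune{} is a one-line change to the embedding layer during training. The sketch below follows the commonly used recipe of uniform noise scaled by $\alpha/\sqrt{Ld}$ (sequence length $L$, embedding dimension $d$); the scaling rule and the integration point are stated here from the published method rather than from this excerpt, so treat them as assumptions.
\begin{verbatim}
import torch

def neftune_noise(embeddings, alpha=5.0, training=True):
    """Add scaled uniform noise to token embeddings during the fine-tuning
    forward pass; inference uses the clean embeddings.

    embeddings: (batch, seq_len, dim) output of the model's embedding layer.
    """
    if not training:
        return embeddings
    seq_len, dim = embeddings.shape[1], embeddings.shape[2]
    scale = alpha / (seq_len * dim) ** 0.5
    noise = torch.empty_like(embeddings).uniform_(-1, 1)
    return embeddings + scale * noise
\end{verbatim}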
\paragraph{Investigations on optimized prompts} We explore several interesting properties of these optimized prompts.
\begin{itemize}
 \item \textit{Evil twins}. In many cases, the optimized prompts that we find are similar in function to the original prompts (twins), but garbled and unintelligible to humans (evil). For this reason, we refer to them as \textit{evil twins}. See Figure~\\ for some examples.
 \item \textit{Transferability}. Remarkably, these ``evil twin'' prompts transfer between a variety of open-source and proprietary language models; see Section~\ref{sec:transferability}.
 \item \textit{Robustness}. We investigate the robustness of evil twin prompts to changes in their token order and to replacements of their tokens. We find that whether evil twins are robust to randomly permuting their tokens depends on the LLM family. On the other hand, across LLM families, evil twins are more affected by random token replacement than ground-truth prompts. This suggests that even the uncommon, non-English tokens in the optimized prompts play an important role in driving the model output; see Section~\ref{sec:prompt-investigations}.
 \item \textit{Improving prompt intelligibility}. We explore variants of the optimization problem \eqref{eq:mle-def-intro} that encourage the optimized prompts to be more interpretable (adding a fluency penalty and restricting the vocabulary to common English tokens). However, we find that these modifications do not improve the KL divergence of the optimized prompts to the ground truth; see Section~\ref{sec:optim-intel}.
\end{itemize}
We discuss other applications of the maximum-likelihood problem \eqref{eq:mle-def-intro} to prompt compression, privacy, and conditional generation in Section~\ref{sec:discussion}. We compare these methods on 100 randomly sampled prompts from the Alpaca instruction tuning dataset \citep{alpaca}, where Vicuna-7b-v1.5 is the instruction-tuned model. Additional experiments on various model families and datasets are presented in Appendix~\ref{app:addn-model-exp}. For each method and prompt, we compute the KL divergence of the optimized prompt with respect to the original prompt and compare pairs of methods based on which one finds the closer prompt to the ground truth; see Figure~\\. GPT-4 suggestions perform roughly on par with those from cold-start GCG, whereas GCG with a warm start provides a strong improvement over both cold-start GCG and the GPT-4 prompt suggestions. Enforcing interpretability by adding a fluency penalty or pruning the vocabulary does not improve the optimized prompt (see Section~\ref{sec:optim-intel}).

\section{Introduction} Recently, Large Language Models (LLMs) such as GPT-4~\citep{openai2023gpt4}, PaLM 2~\citep{anil2023palm}, and LLaMA~\citep{touvron2023llama} have shown exceptional performance on a wide range of NLP tasks, including common sense reasoning~\citep{wei2022chain,zhou2022least} and mathematical problem-solving~\citep{lewkowycz2022solving,chen2022program}. Despite their ability to learn from huge amounts of data, LLMs still have limitations in their capacity to retain and understand information. To ensure responsible usage, it is crucial for LLMs to be able to recognize their limitations and convey uncertainty when responding to unanswerable or unknowable questions. This acknowledgment of limitations, also known as ``\textit{knowing what you don't know},'' is a crucial aspect in determining their practical applicability. In this work, we refer to this ability as model self-knowledge. The Know-Unknow quadrant in Figure~\\ illustrates the relationship between the model's knowledge and comprehension. The ratio of ``Known Knows'' to ``Unknown Knows'' demonstrates the model's proficiency in understanding and applying existing knowledge. Techniques such as Chain-of-Thought~\cite{wei2022chain}, Self-Consistency~\citep{wang2022self}, and Complex CoT~\citep{fu2022complexity} can be utilized to increase this ratio, resulting in improved performance on NLP tasks. We focus on the ratio of ``Known Unknows'' to ``Unknown Unknows'', which indicates the model's level of self-knowledge, specifically its understanding of its own limitations and deficiencies in the unknows. Existing datasets such as SQuAD2.0~\citep{rajpurkar2018know} and NewsQA~\citep{trischler2017newsqa}, widely used in question answering (QA), have been utilized to test the self-knowledge of models with unanswerable questions. However, these questions are context-specific and could become answerable when supplemented with additional information. \citet{srivastava2022beyond} attempted to address this by evaluating LLMs' competence in delineating their knowledge boundaries, employing a set of 23 pairs of answerable and unanswerable multiple-choice questions. They discovered that these models' performance barely surpassed that of random guessing. \citet{kadavath2022language} suggested probing the self-knowledge of LLMs through the implementation of a distinct ``Value Head''. Yet, this approach may encounter difficulties when applied across varied domains or tasks due to task-specific training. Consequently, we redirect our focus to the inherent abilities of LLMs and pose the pivotal question: ``\textit{Do large language models know what they don't know?}''

\paragraph{Input Forms.} The incorporation of instructions and examples serves to boost the self-knowledge of both the GPT-3 and InstructGPT series. Specifically, the ICL input form, providing richer contextual information, contributes to a significant enhancement in the models' self-knowledge. This impact is particularly noticeable for the davinci model, where ICL yields a 27.96\% improvement over direct input.
Moreover, a comparison between Figure~\\ reveals that the inclusion of instructions and examples successfully narrows the performance disparity between the davinci and text-davinci models, suggesting that self-knowledge is acquired from the instructions and provided examples.
\vspace{-.1em}
\paragraph{Compared with Human.} Figure~\\ reveals that, without supplementary samples, GPT-4 currently performs best among the tested models, achieving an impressive F1 score of 75.47\%. However, a noticeable gap remains relative to the human benchmark of 84.93\%. This underscores the considerable potential that remains for enhancing the self-knowledge level of LLMs. This study investigates the self-knowledge of LLMs by evaluating their ability to identify unanswerable questions. Through the introduction of a novel dataset and an automated method for detecting uncertainty in the models' responses, we are able to accurately measure the self-knowledge of LLMs such as GPT-3, InstructGPT and LLaMA. Our results reveal that while these models possess a certain degree of self-knowledge, there is still an apparent disparity compared to human self-knowledge. This highlights the need for further research to enhance the ability of LLMs to understand their own limitations on the unknows. Such efforts will lead to more accurate and reliable responses from LLMs, which will have a positive impact on their applications in diverse fields.

With the increase in the scale of pre-training corpora and the number of parameters \citep{radford2018improving,radford2019language,brown2020language}, large language models (LLMs) have shown remarkable performance across various natural language processing (NLP) tasks, even generating expert-level responses to user queries. The extraordinary capabilities of LLMs hold potential for further real-world applications \citep{wang2023chatcad,owl,driess2023palm}, which necessitate higher requirements for model trustworthiness \citep{wang2023decodingtrust,li2023trustworthy,sun2024trustllm} and consistency \citep{consistency-analysis,elazar-etal-2021-measuring}. However, LLMs still suffer from inconsistency issues: semantically equivalent queries \citep{elazar-etal-2021-measuring,raj2023semantic} and insignificant order changes in the inputted contents \citep{wang2023fair-evaluator} can lead to divergent outcomes; LLMs can also behave differently in the generation versus validation of the same content \citep{li2023generate-validate}; moreover, logical transformations like negation and symmetry can also induce inconsistent behaviors \citep{jang-etal-2022-consistency-bench}.
In addition to previous work, we also find that LLMs able to solve hard problems surprisingly fail to solve easier ones (as shown in Figure \\), suffering from hard-to-easy inconsistency.
Unlike LLMs, humans are naturally consistent reasoners, and it is undisputed that an individual proficient in calculus can easily address simpler arithmetic problems.
However, why this difference exists is still unknown, and research exploring the hard-to-easy consistency of LLMs is still lacking. To systematically evaluate the hard-to-easy consistency of LLMs, we develop ConsisEval with data from the code, mathematics, and instruction-following domains, which are widely considered to be difficult but of significant importance for LLMs \citep{wei2021finetuned,gsm8k,humaneval,ifeval}.
Different from traditional benchmarks, in which data are usually individual, ConsisEval contains only pairwise data: one datum comprises two questions (an easy question and a harder one) with a strict order of difficulty, and we present some example data from ConsisEval in Table \ref{tab:example_data}.
To construct ConsisEval, we collect easy data from established public datasets (\S \ref{subsec:easy_data}); we then acquire hard data through automatic generation by GPT-4 and human annotation (\S \ref{subsec:hard_data}), and this process is shown in Figure \\.

\subsection{Hard Data Collection}
\label{subsec:hard_data}
To build our pairwise dataset, in which a strict order of difficulty is guaranteed for each pair of easy and hard problems, all of the hard data are modified from easy data.
We employ a semi-automatic pipeline that integrates automatic generation by GPT-4 with human annotation to acquire hard data; the whole workflow is illustrated in Figure \\. Compared to traditional methods that rely solely on manual annotation, our semi-automatic approach significantly alleviates the workload of human annotators.
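The excerpt does not define the consistency metric itself, so the sketch below only illustrates one natural reading of hard-to-easy consistency on ConsisEval's pairwise data: among pairs whose hard question a model answers correctly, the fraction whose easy counterpart is also answered correctly.
\begin{verbatim}
def hard_to_easy_consistency(results):
    """results: iterable of (easy_correct, hard_correct) booleans, one per pair."""
    solved_hard = [(e, h) for e, h in results if h]
    if not solved_hard:
        return float("nan")
    return sum(e for e, _ in solved_hard) / len(solved_hard)

# e.g. the model solves two hard questions but only one of the paired easy questions:
print(hard_to_easy_consistency([(True, True), (False, True), (True, False)]))  # 0.5
\end{verbatim}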
The set of question exemplars corresponding to the question type identified by the QTC model is used in the QG stage.\nThese exemplars are pre-created for each question type and language, as detailed in Section \\ref{sec:experiments/data}.\nBy leveraging the shared interrogative structures among the exemplars, the QG model generates questions using the provided answer and context.\nBoth the QTC and QG models are trained exclusively on English QA data and can be deployed to new languages without the need for additional training with target language data.", ["2410_03197v1_0", "2410_03197v1_0", "2410_03197v1_1"]] [60068, "\\section{Introduction}\nSolving Mathematical reasoning problems is a significant challenge for current LLMs~\\cite{madaan-etal-2022-language,openai2023gpt4}. This task requires interpreting information, identifying relevant mathematical concepts, and formulating equations to solve the problems~\\cite{ahn2024large}. Due to computational errors in LLMs~\\cite{wei2023chainofthought,gao2023pal}, using CoT~\\cite{wang-etal-2023-plan,wei2022chain,chen2024diahaludialoguelevelhallucinationevaluation} solely implemented in natural language can lead to calculation mistakes~\\cite{lewkowycz2022solving,wei2023chainofthought,gao2023pal}. The most common practice currently is to use PoT~\\cite{chen2023program} for handling mathematical reasoning tasks, by guiding the large model to write the code that is then computed using tool calls. \nHowever, we made a surprising discovery recently: when a problem is phrased in a manner closer to verbal scenarios (for example, the question is ``One apple costs three dollars, how much for three apples?'' instead of ``3$\\times$3=?''), PoT tends to make more reasoning errors or text comprehension mistakes, but this phenomenon is almost non-existent in CoT. For such problems, CoT can correctly reason out the answer, whereas PoT makes mistakes. We refer to this type of error as~\\textbf{Code Translation Error (CTE)}. We report the percentage of CTE on five datasets with multiple types of models, the results illustrated in Figure~\\. This error is due to the amount of training data for natural language far exceeding that for code. In the scope of CodeLlama\u2019s pretraining data, which includes 500 billion code tokens, this represents a small fraction compared to the 2 trillion natural language tokens used in the Llama-2 model~\\cite{rozi\u00e8re2023code,touvron2023Llama}. Natural language is more suitable for semantic analysis, planning, and abstract reasoning than code~\\cite{gou2023tora}.\nExisting work also finds the advantage of Natural language, but they have not effectively utilized the reasoning capabilities of natural language. Current research focuses on the following approaches to integrate natural language to enhance the precision of code: (1) Using natural language prompts to guide the model in writing code~\\cite{gao2023pal,toshniwal2024openmathinstruct,wang2023mathcoder}: write a brief step in natural language before generating code. (2) Employing methods like self-correction and hybrid approaches to generate answers in multiple stages~\\cite{gou2023tora,yue2023MAmmoTH,gou2023critic}. 
(3) Utilizing prompts like ``rethink question''~\\cite{deng2023rephrase} to have the model first paraphrase the question, thereby avoiding comprehension errors.\nHowever, existing methods fall short in two main aspects: First, using few natural language steps or simple paraphrasing methods alone is insufficient for effectively controlling code generation; a more comprehensive natural language reasoning process is necessary to generate more reliable code. Secondly, reasoning within LLMs is not always faithful~\\cite{lanham2023measuring,bao2024llms,turpin2023language}. Frequently, the final answers seem to be derived directly from the questions themselves rather than aligning with the reasoning process. Consequently, even correct reasoning can lead to incorrect answers. To more effectively utilize natural language reasoning to enhance PoT, we propose~\\textbf{Human-Think Language (HTL)}: A novel information-control-based approach to utilize complete CoT reasoning steps to control PoT generation. HTL is inspired by the way humans write code. Humans consider the entire reasoning process using natural language, and the code can fully rely on natural language reasoning. On the right side of Figure~\\, we highlight the parallels between our integrated model and the human approach to solving mathematical problems. Compared to previous works, our framework offers a strong capacity for aligning calculation with reasoning by integrating CoT and PoT. We design Focus Attention mechanism that, during code generation, concentrates solely on information from CoT to promote the chain reasoning better, thereby biasing the answer to be more faithful to CoT. On the other hand, using complete CoT reasoning tends to lead LLMs to use mathematical induction to enumerate reasoning steps verbosely, which results in repetitive generation. We incorporate the error assessment function based on PPO~\\cite{ppo}, leveraging reinforcement learning to penalize repetitive generation. We conduct experiments based on CodeLlama-7B and Mistral-7B and achieve outstanding results on eight datasets using only self-distillation data. In summary, our contributions are as follows: (1) We are the first to conduct a detailed evaluation of current closed-source models, open-source base models, and specialized models. We highlight the shortcomings of PoT and propose that using full natural language reasoning to enhance PoT performance is essential. (2) We propose an advanced model named HTL, which utilizes the complete reasoning process of CoT to enhance PoT. HTL incorporates a novel Focus Attention that approximates chain reasoning, complemented by an error assessment function designed to prevent repetitive generation. (3) We evaluate our work on eight mathematical reasoning datasets, and our experimental results demonstrate that our method achieves outstanding results. HTL shows significant effectiveness in in-domain datasets, out-of-domain datasets, and natural language inference task, demonstrating strong usability and potential. 
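As a rough illustration of the information control underlying Focus Attention (a Python sketch under simplifying assumptions, not the actual HTL implementation), one way to bias code generation toward the CoT segment is an attention mask in which code-segment positions may only attend to CoT-segment positions, within the usual causal constraint; the segment boundaries are assumed to be known token indices from the prompt template.
\begin{verbatim}
import torch

def focus_attention_mask(seq_len: int, cot_span: tuple, code_span: tuple) -> torch.Tensor:
    """Boolean mask (True = attention allowed) restricting code tokens to the CoT span.

    cot_span / code_span are (start, end) token indices assumed to be known;
    this is a sketch of the idea, not the HTL training code.
    """
    causal = torch.ones(seq_len, seq_len).tril().bool()
    mask = causal.clone()
    cot = torch.zeros(seq_len, dtype=torch.bool)
    cot[cot_span[0]:cot_span[1]] = True
    code_start, code_end = code_span
    for q in range(code_start, code_end):
        allowed = cot.clone()
        allowed[q] = True                     # a code token may attend to itself
        mask[q] = causal[q] & allowed         # otherwise only to CoT positions
    return mask

# Example: 12 tokens laid out as [question 0-3 | CoT 4-8 | code 9-11].
m = focus_attention_mask(12, cot_span=(4, 9), code_span=(9, 12))
\end{verbatim}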
The design of HTL is divided into three parts: reasoning format, Focus Attention, and a PPO-based error assessment function.", ["2402_15729v3_0", "2402_15729v3_1"]] [60069, "We define a structured entity as a named entity with associated properties and relationships with other named-entities.\nExtracting a structured entity from unstructured text therefore \\new{implicitly} combines named-entity recognition, entity-property extraction, relationship extraction, and coreference resolution.\nFig.~\\ shows an illustration of the structured entity extraction task.\nGiven a text description, we aim to first identify the two entities mentioned, \\textit{``Bill Gates''} and \\textit{``Microsoft''}.\nDuring this phase, we also need to address \\new{coreference resolution} and identify that \\textit{``Bill Gates''} and \\textit{``Gates''} are the same entity.\nThen, given some predefined schema of all possible entity types and property keys, the exact types, property keys, and property values of all identified entities in the text are expected to be predicted, as well as the relations between these two entities (i.e., \\textit{Bill Gates} co-founded \\textit{Microsoft}).\nSuch extracted structured entities may be further linked and merged to automatically construct KBs from text corpora.\nWe present our formalization of the \\textit{structured entity extraction} task, which lays the foundation for evaluating this information extraction task.\nAlong with this, we propose a new evaluation metric with numerous variants for measuring the similarity between the predicted set of entities and the ground truth set, which is appropriate for our task (see Sec.~\\ref{sec:prelim}). Among the three stages depicted, \\textit{\\textcolor{blue}{pred\\_ent\\_names}}, \\textit{\\textcolor{blue}{pred\\_type\\_and\\_property}}, and \\textit{\\textcolor{blue}{pred\\_val}} are special tokens to indicate the task.\nFor each model prediction behavior, the first ``$\\Rightarrow$'' indicates inputting the text into the encoder of MuSEE, while the second ``$\\Rightarrow$'' means inputting the encoded outputs into the decoder.\nAll tokens in \\textcolor{blue}{blue} are the prompt tokens input into the decoder, which do not need to be predicted, while all tokens in \\textbf{bold} are the model predictions.\nFor stage 1, we emphasize that MuSEE outputs a unique identifier for each entity in the given text. Taking the example in Fig.~\\, the first stage outputs ``\\textit{Bill Gates}'' only, rather than both ``\\textit{Bill Gates}'' and ``\\textit{Gates}''. This requires the model to implicitly learn how to do coreference resolution, namely learning that ``\\textit{Bill Gates}'' and ``\\textit{Gates}'' refer to the same entity. Therefore, our approach uses neither surface forms, as the outputs of the first stage are unique identifiers, nor entity titles followed by entity linking.\nFor stage 2, the MuSEE model predicts the entity types and property keys, which are all represented by special tokens.
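As a hedged Python sketch (illustrative token names and ids only, not MuSEE's actual vocabulary), restricting stage-2 prediction to such a closed set of special tokens could look as follows.
\begin{verbatim}
import torch

# Illustrative special-token ids; a real tokenizer would provide these.
SPECIAL_TOKEN_IDS = {
    "ent_type_person": 32001,
    "ent_type_organization": 32002,
    "pk_founded_by": 32010,
    "pk_name": 32011,
}

def predict_special_tokens(logits: torch.Tensor, k: int = 4) -> list:
    """Pick the highest-probability tokens among the special tokens only."""
    ids = torch.tensor(list(SPECIAL_TOKEN_IDS.values()))
    restricted = logits[ids]                       # scores of special tokens only
    top = restricted.topk(min(k, ids.numel())).indices
    names = list(SPECIAL_TOKEN_IDS.keys())
    return [names[i] for i in top.tolist()]

# Example with a dummy vocabulary of 40,000 entries:
dummy_logits = torch.randn(40_000)
print(predict_special_tokens(dummy_logits, k=2))
\end{verbatim}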
Hence, the prediction can be made by sampling the token with highest probability over the special tokens for entity types and property keys only, rather than all tokens.\nNotice that we do not need to predict the value for ``\\textit{type}'' and ``\\textit{name}'' in stage 3, since the type can be directly derived from the ``\\textbf{ent\\_type\\_}'' special key itself, and the name is obtained during stage 1.\nThe tokens in the bracket ``\\{..\\}'' are also part of the prompt tokens and are obtained in different ways during training and inference.\nDuring training, these inputs are obtained from the ground truth due to the teacher forcing technique~\\citep{raffel2023exploring}.\nDuring inference, they are obtained from the output predictions from the previous stages.\nThe full training loss is a sum of three cross-entropy losses, one for each stage.\nAn illustration of our model's pipeline is shown in Fig.~\\.\nMore implementation details are elaborated in Appendix~\\ref{appendix:details}.", ["2402_04437v5_0", "2402_04437v5_0", "2402_04437v5_1"]] [60070, "{INTRODUCTION}\nThe visual servoing problem~\\cite{538972} involves the challenge of controlling the motion of a robot by utilizing feedback from visual sensors, typically cameras, to adjust its actions in real-time. This process entails the robot's ability to interpret visual data to determine its relative position and orientation with respect to target objects or locations within its environment. The core objective is to enable the robot to perform precise movements or reach specific goals by continuously comparing the current visual scene against a desired configuration or outcome. This approach can been seen in Fig. \\. This approach requires sophisticated algorithms for image processing and control theory to bridge the gap between visual perception and mechanical action, thereby allowing the robot to adapt its movements based on the visual feedback it receives. \nRecently, there has been a lot of progress in optical flow-based visual servoing methods~\\cite{rtvs, katara2021deepmpcvs, argus2020flowcontrol, harish2020dfvs, 9550239}. These methods are shown to be highly precise in reaching their goals with some guarantees of convergence. However, the utility of visual servoing has remained limited due to major limitations common to all servoing algorithms: 1) They necessarily require a goal image during test time. This makes it quite tough for visual servoing algorithms to be applied in real-world navigation or manipulation since if we already have a map of the environment, there are better ways to reach the goal pose than through a target image. 2) Visual Servoing cannot work if there is not much overlap between the initial and target image. 3) Visual Servoing can only accommodate feedback from a single camera. \nSolving each of these problems can greatly enhance the utility of visual servoing methods. For example, solving the problem of final image generation based on the skill the robot is executing can make servoing quite useful for real-world tasks. Imagine a drone mounted with a monocular camera trying to cross a door. The robot will first have to visualise the approximate position of the door just before it crosses it; then, it will have to visually servo to the imagined image and then apply a simple hardcoded skill to cross the door. This pattern of imagining a goal, servoing to the goal and applying a hardcoded skill can be repeated for many skills, both in navigation and manipulation. 
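A minimal Python sketch of this imagine-servo-execute pattern is given below; all function names are placeholders standing in for a camera interface, a low-level controller, the foresight model, and a servoing policy, not an actual API.
\begin{verbatim}
def imagine_servo_execute(get_image, apply_velocity, run_skill,
                          imagine_subgoal, servo_step, reached, prompt, skills):
    """Repeat: imagine a subgoal image, servo to it, then run a hardcoded skill.

    All callables are hypothetical placeholders used only to illustrate
    the control pattern described in the text.
    """
    for skill in skills:
        goal_image = imagine_subgoal(get_image(), prompt)   # e.g. "cross the door"
        while not reached(get_image(), goal_image):
            v, omega = servo_step(get_image(), goal_image)  # linear / angular command
            apply_velocity(v, omega)
        run_skill(skill)                                    # short open-loop policy
\end{verbatim}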
Another good example of a skill we can solve using servoing is the `reaching' skill, which is a part of several manipulation tasks. For instance, take the example of `unplugging the charger', where the robot has to `reach' a particular grasping pose before applying a hardcoded policy.\nIn this paper, we leverage the recent advancements in diffusion-based image editing to provide a much-needed and major update to servoing algorithms. Our contribution can be stated as follows:\n{METHOD}\n{Problem Formulation}\nAt any given moment, our system receives an RGB image $I_t$ from the robot's camera sensor, along with a textual prompt $P$ that specifies the task to be accomplished. Our objective is to accurately predict and implement the necessary control commands, denoted as [$v_t, \\omega_t$], to fulfil the task described in the prompt. Tasks vary, ranging from navigating through a doorway to disconnecting a charger. Note that in traditional servoing algorithms, the final image $I_g$ is usually provided to perform the task. This greatly limits the utility of the visual servoing models. To address this challenge, we divide our approach into two distinct phases. Initially, our framework is designed to conceptualize a subgoal $I_g$ based on the task and current camera feedback $I_t$. Next, we aim to attain this subgoal image $I_g$ by employing a servoing algorithm. In the forthcoming sections, we will elaborate on the components of our Imagine2Servo framework. We begin by introducing our foresight model for sub-goal generation in Section \\ref{foresight}, followed by the mechanics of our servoing framework in Section \\ref{servoing}, which facilitates reaching the subgoal. We then describe our overall framework in Section \\ref{overall} and training details in Section \\ref{training}. Our overall pipeline is summarised in Fig. \\.", ["2410_12432v1_0", "2410_12432v1_1"]] [60071, "The in-band radiative load emitted by the optical elements is the product of their absorptive loss and physical temperature. The absorption loss of a material is directly related to its complex permittivity ($\\varepsilon = \\varepsilon' - i \\varepsilon''$). Absorption is discussed numerically throughout this paper as a material's loss tangent, which is the ratio of the imaginary part to the real part of the permittivity, denoted as $\\tan\\delta$. The permittivity of a material varies with temperature, and the $\\tan\\delta$ at millimeter wavelengths of many polymers decreases with cryogenic cooling, reducing the absorption and emission of a polymer optic in-band \\citep{Frank1977,Schnabel2014}. However, the reduction in optical emission at lower temperature is driven primarily by the lower physical temperature rather than the lower loss \\citep{Lamb1996}.\nThe entire instrument is placed within a vacuum cryostat to cool both the optics and the sub-kelvin detectors. However, the cryostat obviously requires a transmissive window to observe external signals, which must remain in contact with the ambient environment. Vacuum windows must have low transmission loss in band and be strong enough to withstand the force of atmospheric pressure. Previously, millimeter-wave vacuum windows have been made of ceramics [such as fused-silica \\citep{piper2021}], plastic foams [including various formulations of Zotefoam \\citep{Keck2015B}], or bulk plastic materials [primarily high density polyethylene (HDPE) or ultra high molecular weight polyethylene (UHMWPE) \\citep{bicep3,delessandro2018}].
A more detailed history of the development of millimeter vacuum windows can be found in \\cite{Barkats2018}.\nThe larger aperture sizes of modern CMB receivers, however, limit the materials available for use. BICEP3 and BICEP Array have nominal clear apertures at the window of 730 mm, while the full outer diameter of the window is 900 mm. Current plans for the CMB-Stage 4 Small Aperture Telescopes (SATs) and Large Aperture Telescope (LATs) include similarly sized windows \\citep{abazajian2019cmbs4}. \nIn addition to the initial elastic deflection, plastics will undergo a visco-elastic creep deformation under continuous load. The creep deformation is logarithmic with time; thus, most of the deflection occurs very quickly, and deflection out to longer time scales is simple to predict with a model. We need to determine that the window will not creep into contact with the filters below (see Figure \\). The initial elastic deflection is described reasonably well by flat plate predictions (previously described in Eq. \\ref{eq:delta}) from \\emph{Theory of Plates and Shells} \\citep{TimoshenkoStephen1959Topa}.Ceramics become prohibitively expensive at those scales. Plastic foams would become impracticably thick and lossy: the primary advantage of the foam windows are their low index of refraction and low transmission loss, but they have the disadvantage of being very weak. A suitably thick foam window would include many laminated layers, which would make their loss properties significantly worse. For example, the 150 GHz \\emph{Keck} receiver windows were 120 mm thick and estimated to cause $\\lesssim$2\\% transmission loss \\citep{Keck2015B}; the doubled aperture size for new BICEP receivers would require a foam window four times thicker. A window with such high transmission loss ($\\lesssim$8\\%) would add approximately 3 pW optical load to a 150 GHz detector, swamping out all other sources of optical power at that frequency.\nBulk plastics, being relatively weak though still stronger than plastic foams, must also become significantly thicker to maintain a suitable safety factor for a vacuum vessel (discussed further in Section \\ref{sec:strength}). For BICEP3, a 95 GHz receiver, we previously estimated that over half of the instrument loading originated from the 31.8 mm thick HDPE slab window \\citep{bicep3}. That window has since been replaced by the thin window presented in this paper.\nWe also show the relative change in per-detector white noise and survey time for different window thicknesses. For the lowest frequency band, at 30--40 GHz, decreasing the window thickness only yields marginal improvements in white noise and mapping speed. For bands above 40 GHz, however, a substantial reduction in window thickness from the nominal could potentially decrease noise by tens of percent. At the high frequencies in the 270 GHz band, such a window thickness reduction could potentially decrease the survey time by as much as 50\\%. The window is the only element in the optical chain where improvement would produce such a significant return, primarily due to the window's relatively high temperature. \nFor safety, we performed this test hydrostatically. As water is nearly incompressible, minimal energy will be stored within the fluid as pressure is increased; energy is only stored within the deflected window and surrounding enclosure. 
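For orientation only, the following Python sketch shows the kind of estimates involved: the clamped-plate variant of the standard flat-plate deflection result (the boundary conditions assumed in Eq. \ref{eq:delta} may differ) and a simple spherical-cap conversion from a measured centre deflection to an approximate radial strain. All numeric values are illustrative placeholders, not the actual window parameters.
\begin{verbatim}
import math

def plate_center_deflection(q, a, t, E, nu=0.4):
    """Clamped circular plate under uniform pressure: w_max = q a^4 / (64 D)."""
    D = E * t**3 / (12.0 * (1.0 - nu**2))   # flexural rigidity
    return q * a**4 / (64.0 * D)

def radial_strain_from_deflection(d, a):
    """Spherical-cap estimate: strain = arc length / chord - 1, sagitta d, half-chord a."""
    R = (a**2 + d**2) / (2.0 * d)           # radius of the osculating sphere
    arc = 2.0 * R * math.asin(a / R)
    return arc / (2.0 * a) - 1.0

# Illustrative thick-slab example: 1 atm load, 0.365 m radius, 32 mm thickness,
# HDPE-like modulus of ~1 GPa (placeholder values).
print(plate_center_deflection(q=101_325, a=0.365, t=0.032, E=1.0e9))  # ~0.009 m
# A 75 mm centre deflection over a 0.365 m radius corresponds to ~3% radial strain.
print(radial_strain_from_deflection(d=0.075, a=0.365))                # ~0.03
\end{verbatim}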
Given the safety factors of 3 calculated in Table \\ref{tab:strength}, we used standard utility water pressure (roughly 60 psi or 4 atm) as that was expected to exceed the ultimate tensile strength of these windows and produce a failure.\nThe BICEP Array frame and clamp were designed specifically to hold thin windows with a knurled pattern on the outer radius, with bolts running through the knurling to maximize concentration of force over relatively small points \\citep{Barkats2018}. This knurled pattern on the frame and clamp works well but results in a unique window clamping configuration. We therefore used an actual frame and clamp for the hydrostatic test. Preexisting flange holes for circulating gas between the window and BOPP membrane were used to push the water into the chamber above the window. We sealed the water chamber with a 1/2\" aluminum blanking plate and two 3/16\" thick blended EPDM gaskets. The chamber was clamped with two sets of correctly torqued 1/4\" bolts and C-clamps for additional clamping force, which are visible in a ring around the window in Figure \\.\nWe reached 68 psi in the hydrostatic test using only the standard utility water supply. At that point we took a side-on picture of the deflected window, and estimated the deflection (and therefore radial strain) of the window (see Section \\ref{sec:creep} for a description of the calculations). The window had deflected approximately 75 mm, which corresponds to a radial strain of approximately 3\\%, well below the typical failure strain of 10\\%. In an attempt to produce a failure, we added compressed air to the system, increasing the pressure to 85 psi. At this pressure water began to leak around the gaskets, and the test was stopped. The end state of the window is shown in Figure \\, with the water pressure valve showing the final pressure on the right and the water leaking out around the gaskets on the left. We note that 85 psi is twice the predicted failure pressure for a safety factor of 3, a degree of uncertainty that emphasizes the importance of empirical testing of this novel structure.\nGiven these results, we find that the safety factor of the thin windows is at least approximately 5.7 at sea", ["2411_10428v1_0", "2411_10428v1_1", "2411_10428v1_1"]] [60072, "The structure of this article is as follows: a summary of the data to be analyzed is presented in Sec. \\ref{sec:data}, with the implemented methodology being described in Sec. \\ref{sec:method}, including an overview of the analysis sequence in Section \\ref{sec:proc}, details on the adopted stellar component analysis and how the stellar templates are created in Sec. \\ref{sec:rsp}, and a description of the delivered data products in Sec. \\ref{sec:DP}; the reliability of the methodology is tested with extensive simulations described in Sec. \\ref{sec:code}, including both physically motivated (Sec. \\ref{sec:sim_idea}) and purely empirical (Sec. \\ref{sec:sim_real}) simulations; the accuracy and precision in the recovery of the emission lines and the stellar population properties are described in Sec. \\ref{sec:sim_el}, \\ref{sec:sim_MaStar}, and \\ref{sec:sim_el}; an example of the use of the DAP is presented in Sec. \\ref{sec:show}, in which we analyze a deep exposure centred in the Huygens region of the Orion nebula, including a description of this particular dataset in Sec. \\ref{sec:O_dataset}, with a summary of the performed analysis results in Sec. 
\\ref{sec:O_results}, a detailed exploration of the generated deep integrated spectrum in Sec. \\ref{sec:O_int}, the list of detected emission lines in Sec. \\ref{sec:O_el}, the physical properties derived from those emission lines in Sec. \\ref{sec:ori_fprop}, and the spatial distribution of the emission line fluxes presented in Sec. \\ref{sec:ori_spa}; how to download the code is presented in Sec. \\ref{sec:dist}, and finally a summary of the main results of this study is presented in Sec. \\ref{sec:conc}.\nObservational data for the Local Volume Mapper (LVM) project are obtained using a newly constructed facility at Las Campanas Observatory. As indicated before, the instrumental setup comprises four telescopes: one devoted to the acquisition of the science exposures (using the ultra-wide IFU comprising 1801 fibers described before), two monitoring the night-sky spectrum towards the east and the west (using 59 and 60 fibers, respectively), and one dedicated to the acquisition of spectrophotometric calibration stars (using 24 fibers). This setup feeds into three DESI-like spectrographs covering the wavelength range from 3600 to 9800 Angstroms with a resolution of R $\\sim$ 4000 at H$\\alpha$. Each spectrograph features a dichroic system, which divides the light received from each fiber into three wavelength ranges: (b) blue, from 3600 to 5800 \\AA, (r) red, from 5750 to 7570 \\AA, and (z) infrared, from 7520 to 9800 \\AA. The size of the LVM's ultra-wide field IFU projected in the sky ensures detailed spectral and spatial coverage across the survey's targets, including the Milky Way, Magellanic Clouds, and a selection of local volume galaxies.\nThe LVM survey's data reduction pipeline (DRP, Mej\u00eda-Narv\u00e1ez in prep.) follows the procedures described by \\citet{sanchez06a}. The actual code is based on the {\\sc py3D} reduction package, initially developed for the CALIFA survey \\citep[][]{dr1}. Originally written in Python 2.7, it has been updated to Python 3.11, incorporating specialized procedures to cater to the unique features and requirements of the LVM dataset. \nFigure \\ shows the analysis flow of the DAP for each individual spectrum within each LVM exposure { (i.e., no spatial binning is performed in general within the DAP)}. It mimics the one implemented in \\pyp, as described in \\citet{pypipe3d}, Fig.~1, slightly modified to adapt it to the peculiarities of the LVM data regarding the modeling of the stellar component (discussed in Sec \\ref{sec:rsp}). We include here a brief summary of the main steps that comprise the analysis to avoid unnecessary repetition. 
The key steps performed by the LVM-DRP include: (i) initial raw data preprocessing to merge readings from different amplifiers into a single frame for each channel (b, r, and z) of each spectrograph (producing 9 different frames), followed by bias removal, gain correction, and cosmic ray identification and masking; (ii) identification and tracing of fiber spectra in each CCD of each spectrograph, including assessment of the FWHM along both dispersion and cross-dispersion axes; (iii) extraction of spectra using the established trace and width parameters, assuming a Gaussian shape for each fiber's spectrum projected along the cross-dispersion axis, and performing a concurrent stray-light correction; (iv) wavelength calibration and linear resampling of the extracted spectra; (v) differential correction for fiber-to-fiber transmission discrepancies; (vi) flux calibration based on the stars acquired by the spectrophotometric telescope simultaneously with the science observations; (vii) combination into a single spectrum of the spectra acquired in each channel (b, r, and z) for each science fiber, weighting by the inverse of the errors in the overlapping regions between arms; (viii) estimation and subtraction of the sky spectrum using the spectra obtained by the west and east telescopes devoted to observing the sky; (ix) implementation of an astrometric solution for each observation, using the information provided by the guiding cameras. Errors are propagated during each step of the data reduction.\nThe result of this analysis shows the spectra of a subset of the RSP library generated by selecting 108 clusters, together with the spectra of the original RSPs included in each cluster. By construction, the shape of all spectra grouped in the same cluster is rather similar. However, as already indicated, this does not mean that each of those spectra corresponds to the same or similar physical parameters. This is evident when exploring the projected PDFs of each cluster in the space of observed parameters. Figure~\\ shows three examples of PDFs corresponding to three different clusters whose spectra (\\#18, \\#34, and \\#47), together with the overall distribution for the full RSP library comprising 108 spectra. A simple exploration of the distribution of each PDF in the space of physical parameters shows that for some parameters the RSP corresponds to almost a single value (e.g., $T_{\\mathrm{eff}}$), while for other parameters they present a bi-valuated (or multi-valuated) distribution (e.g., log($g$)). Besides this multi-valuated distribution, the errors and degeneracies are considerably different for each parameter and pair of parameters, also varying from RSP to RSP. For instance, $T_{\\mathrm{eff}}$ and log($g$) do not present any clear degeneracy, while [Fe/H] and [$\\alpha$/Fe] present a degeneracy that is described by an anticorrelation of the PDFs of both parameters. We should stress that these multi-valuated distributions, degeneracies, and errors can be mitigated, for instance, by the selection of a larger number of clusters ($n_{cl}$). Obviously, the selection of a very low $n_{cl}$ number, e.g., 4-12, increases them. However, increasing this number beyond 108 does not produce any significant improvement in the results, as we will translate the degeneracy to the fitting procedure: the DAP algorithm would choose a different RSP among the degenerated ones in each MC iteration described in Sec. \\ref{sec:proc}. 
This will only slow down the process without improving the quality of the modeling. Using external information, like the distance provided by GAIA \\citep{gaia2} and the absolute magnitude, may break some of these degeneracies. { We may explore how to implement them in future versions of the DAP, or as a part of the post-processing of the data required to interpret the results. Nevertheless, this is out of the scope of the goals of the current manuscript, aimed at describing the overall procedure, without focusing on the physical interpretation of the results, which would require a much more detailed exploration.}\nStructure of extensions included in each RSP file, where: (i) \\#WAVE is the number of spectral pixels; (ii) \\#RSP is the number of templates in the library; (iii) \\#PAR is the number of parameters associated to each template, as described in the text; (iv) \\#RSP\\_ORG is the number of RSP templates or stellar spectra from the library adopted to build the described RSP template; and (v) (NX,NY) are the dimensions of the projection of the PDF in the considered space of parameters.\nWe store the result of the clustering procedure described previously in a single FITS file with multiple extensions, comprising: (i) the RSP spectra themselves stored in a raw-stacked spectra (RSS) format (SPECTRA); (ii) the uncertainty spectra that retain the dependencies among the spectra coadded to create each RSP template, in the same format (ERROR); (iii) the LW average physical parameters that correspond to each RSP template (PARAMETERS), together with the same values for the original stellar library or RSP template from which the current template was built (ORG\\_PARAM); and (iv) the PDFs projected in the space of physical parameters associated to each template, as shown in Fig. \\. These PDFs are stored as cubes in which each slice along the z-axis corresponds to the individual PDF projection for the considered pair of parameters (e.g., PDF\\_TEFF\\_LOGG corresponds to T$_{eff}$ vs. log(g)) for each RSP template in the library. In this way, the PDF corresponding to a particular stellar decomposition can be generated by multiplying each slice of the cube by the corresponding weight/coefficient of the decomposition and averaging all of them. A set of RSP-template files is distributed (\\url{https://ifs.astroscu.unam.mx/sfsanchez/lvmdap/}) together with some python notebooks illustrating how to handle them (included in the DAP distribution).", ["2411_09729v1_0", "2411_09729v1_1", "2411_09729v1_1"]] [60073, "A significant portion of chemical information remains locked in unstructured formats within printed or digital documents, such as patents and scientific papers. In many of these documents, especially in the field of chemistry, molecular structures are often depicted as images. These chemical structure figures are critically important for drug discovery, patent examination, and the retrieval of chemical data. However, they are difficult to extract into machine-readable text. Automatically parsing molecular structures from these document images, a process known as Optical Chemical Structure Recognition (OCSR), becomes particularly important. With the growing popularity of Large Language Models (LLMs), many efforts are now focusing on applying them to the understanding of scientific literature.
Parsing molecular structure images into structured, understandable text also unlocks the potential of large models in processing patents and literature in chemistry-related fields.\nOCSR aims to automatically convert chemical structure diagrams from scientific literature, patents, and other scanned documents into machine-readable strings such as the SMILES~\\cite{weininger1988smiles} representation. To better address the challenges of the OCSR task in real-world literature, we introduce a new end-to-end framework named MolParser for Optical Chemical Structure Recognition in the wild, illustrated in Figure \\, which is the main contribution of this paper. SMILES, though widely used for molecular representation, has notable limitations in handling complex chemical entities. It struggles with representing Markush structures, which are used in patents to describe a broad class of molecules by allowing variability at certain positions, enabling the protection of entire families of compounds. Additionally, SMILES cannot effectively handle connection points, abstract rings, ring attachments, duplicated groups or polymers, all of which require a level of flexibility that its linear format does not support. Furthermore, SMILES is not well-suited for tasks involving large models, such as Markush-molecule matching, as its structure lacks the clarity and hierarchical organization needed for efficient interpretation by machine learning models. These limitations hinder its utility in advanced cheminformatics applications.\nWe extend the SMILES representation method to more effectively represent the Markush structures commonly found in patents, as well as complex compositions such as connection points, abstract rings, ring attachments, duplicated groups, and polymer structures. Additionally, we ensure that this approach is compatible with RDKit and LLM-friendly, facilitating subsequent analysis and processing tasks. The extended SMILES format will be denoted as:\n\\texttt{SMILESEXTENSION}, where \\texttt{SMILES} refers to the RDKit-compatible SMILES representation, the special token \\texttt{} serves as the separator, and the optional \\texttt{EXTENSION} represents supplementary descriptions used to handle complex cases such as Markush structures.\nIn \\texttt{EXTENSION}, we use XML-like special tokens to represent certain special functional groups. For Markush R-groups and abbreviation groups, we add the special tokens \\texttt{} and \\texttt{} to encapsulate descriptions of these special substituents. Similarly, we use the special tokens \\texttt{} and \\texttt{} for ring attachments, and \\texttt{} and \\texttt{} for abstract rings. Additionally, there is a special token \\texttt{} representing a connection point. For the specific description of each functional group, we use the following format: \\texttt{[INDEX]:[GROUP\\_NAME]}. Figure \\ shows an example of our extended SMILES. Although there are numerous complex Markush structures in actual patents, our extended SMILES rules can still adequately address these cases. For more details, please refer to the supplementary materials.", ["2411_11098v1_0", "2411_11098v1_1"]] [60074, "In this paper, we propose a time series anomaly detection method based on INR reconstruction (TSINR for short). \nSpecifically, we introduce a transformer-based architecture to predict the INR parameters of the given time series data.
\u00a0To better learn and reconstruct time series data, we design a novel form of continuous function to decompose time series \\cite{cleveland1990stl,fons2022hypertime}. The designed function mainly comprises three components and individually learns the trend, seasonal, and residual information of time series. In addition, to further enhance the capability of INR to capture inter- and intra-channel information, we propose a group-based architecture to specifically learn the complex residual information. Simultaneously, we leverage a pre-trained large language model (LLM) to encode the original data to the feature domain, amplifying the fluctuations of anomalies from both time and\nchannel dimensions and enabling INR to subsequently better differentiate normal and abnormal points. The major contributions of this paper are summarized as follows:\u00a0 \u00a0 \\item We utilize the spectral bias property of INR to prioritize fitting low-frequency signals and enhance sensitivity to discontinuous anomalies, thereby improving anomaly detection performance. A transformer-based architecture is employed to generate the parameters for INR, requiring only a single forward step in the inference phase.\n\u00a0 \u00a0 \\item We design a novel form of INR continuous function, which mainly consists of three components to implicitly learn the unique trend, seasonal, and residual information of time series. Furthermore, a group-based strategy is proposed to further learn intricate residual information.\n\u00a0 \u00a0 \\item We leverage a pre-trained LLM to encode the original time series to the feature domain, enabling amplification of the fluctuations of anomalies in both time and channel domains that facilitate INR to be further sensitive for noncontinuous anomaly areas. Ablation studies and visual analysis validate the aforementioned capacity to better distinguish anomaly points via our proposed framework. \n\u00a0 \u00a0 \\item Extensive experiments demonstrate the overall effectiveness of TSINR compared with other state-of-the-art methods on seven multivariate and one univariate time series anomaly detection benchmark datasets. \nTime series anomaly detection methods primarily include statistical, classic machine learning, and deep learning methods. As shown in Figure \\, we innovatively propose a INR continuous function to better learn and reconstruct time series data. Inspired by classical time series decomposition methods \\cite{cleveland1990stl,fons2022hypertime}, the proposed INR continuous function $f$ consists of three components, including trend $f_{tr}$, seasonal $f_{s}$, and residual $f_{r}$. Statistical methods rely on analyzing the statistical properties of the data to identify patterns that deviate from the expected behavior. They are valuable for their simplicity and interpretability, but have limitations in capturing complex patterns \\cite{cleveland1990stl, coluccia2013distribution}.\nClassic machine learning methods rely on manual feature extraction and various algorithms like clustering \\cite{ruff2018deepsvdd,shin2020itad}, density estimation \\cite{breunig2000lof,yairi2017data-MPPCACD,zong2018deep-DAGMM}, and isolation forests \\cite{liu2008isolation-iForest} to identify anomalies in structured data. However, because they require manual feature extraction and selection, they can be labor-intensive and less effective at capturing complex patterns in data.\nIn this section, we analyze the effectiveness of the proposed decomposition components and group-based architecture. 
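A minimal Python sketch of the decomposition idea is given below, assuming plain MLP components and a sinusoidal seasonal branch; the actual TSINR parameterization, with transformer-predicted INR weights and the group-based residual branch, is more involved.
\begin{verbatim}
import torch
import torch.nn as nn

class DecomposedINR(nn.Module):
    """f(t) = f_trend(t) + f_seasonal(t) + f_residual(t) for C channels (sketch only)."""

    def __init__(self, channels: int, hidden: int = 64):
        super().__init__()
        self.trend = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, channels))
        self.seasonal_freqs = nn.Parameter(torch.randn(hidden))        # learned frequencies
        self.seasonal_head = nn.Linear(2 * hidden, channels)
        self.residual = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, channels))

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: (T, 1) normalized time coordinates.
        phase = t * self.seasonal_freqs                                 # (T, hidden)
        seasonal = self.seasonal_head(torch.cat([torch.sin(phase), torch.cos(phase)], dim=-1))
        return self.trend(t) + seasonal + self.residual(t)

# Reconstruct a toy window and score anomalies by pointwise reconstruction error.
t = torch.linspace(0, 1, 100).unsqueeze(-1)
x = torch.randn(100, 8)                       # toy multivariate window (8 channels)
model = DecomposedINR(channels=8)
error = (model(t) - x).abs().mean(dim=-1)     # larger error -> more anomalous
\end{verbatim}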
The decomposition components indicate the three components (i.e., trend, seasonal, and residual) designed in our paper. And the group-based architecture is proposed for the residual block. \nThe main purpose of the decomposition components is to extract the unique trend and seasonal information of the time series data. The results in Table \\ref{ablation_group} indicate that capturing these distinctive features significantly enhances the capability for anomaly detection. In addition, the group-based architecture is designed to enhance the representational capacity of INR for multivariate data. Experimental results indicate an improvement in the capability for anomaly detection when employing the proposed group-based architecture. This is because modeling multiple variables and capturing both inter- and intra-channel information with a simple continuous function, which only consists of fully-connected layers, is challenging. Our approach addresses this by dividing the variables into several groups and applying independently fully-connected layers in different groups, thereby reducing the number of variables each group needs to model and improving the representational capacity. The global layers extract the inter-channel information, while the group layers selectively focus on detailed information for specific channels. This enhances the representational capability for each variable without losing any knowledge. Detailed ablation studies on the number of groups are left in Appendix \\ref{app_group_num}.\nFurther, we prove the validity of the pre-trained LLM encoder, which is utilized to encode the data into the feature domain to amplify the fluctuations of anomalies and thereby enhance the capability of TSINR in identifying anomalies. Table \\ref{ablation_llm} displays the ablation studies of the pre-trained LLM encoder. For the multivariate datasets, it can be observed that applying this encoder enhances the performance of anomaly detection. To further demonstrate the effectiveness, we compare the raw data with the features extracted through the encoder. The figures in the first row illustrate that during the time interval when anomalies occur, the extracted features exhibit more pronounced fluctuations compared to the original data. Furthermore, we validate the robustness of TSINR with the synthetic data generated for time series anomaly detection. It has univariate time series and involves different types of anomalies \\cite{yang2023dcdetector,lai2021revisiting}, including the point-wise anomaly (global point and contextual point anomalies) and pattern-wise anomalies (shapelet, seasonal, and trend anomalies). As shown in Figure \\, the\nred points are anomaly points and the red areas are anomaly areas. This implies that the discontinuity in anomalies is increased in time domain. Also, these extracted features incorporate inter-channel information, providing a manifestation of anomalies among all variables. As shown in the second line, the features exhibit anomalous fluctuations in the same time interval as other channels, whereas the original data only shows a brief peak. This verifies that the anomalies are shared in channel domain. Based on these results, we indicate that utilizing the pre-trained LLM encoder can effectively enhance abnormal information both intra- and inter-channel. 
This aligns with the spectral bias of INR, making our model more sensitive to anomalous data.", ["2411_11641v1_0", "2411_11641v1_1"]] [60075, "RL techniques are generally used to learn optimal policies and actions leading to best rewards. These techniques can be classified into value-based (critic-centric), policy-gradient (actor-centric), or hybrid approaches. \nActor-critic algorithms merge the benefits of both by employing two key components: the actor, responsible for policy optimization, and the critic, which evaluates the policy \\citep{sutton2018reinforcement}. This hybrid approach has demonstrated success in various domains, including continuous control problems \\cite{pmlr-v48-mniha16, haarnoja2018soft, haarnoja2018softapp} and with generative adversarial networks (GANs) \\citep{goodfellow2014generative}, where a generator and a critic network collaboratively optimize performance.\nInspired by the success of actor-critic approaches, we introduce a (stateless) actor-critic algorithm, as provided in Algorithm~\\ref{alg:ssac}, tailored to our infinite continuum bandit setting, enabling autonomous learning of effective prompts with a constrained evaluation budget and outperforming previous state-of-the-art black-box prompt learning approaches. Both the actor and the critic are parameterized by neural networks. In typical RL problems, the agent interacts with an environment with varying states, so its policy depends on the current state. That is, the policy network generates actions based on the current state $s$, $\\pi(.| s; \\theta)$, where $\\theta$ denotes the parameters.\nHowever, in the problem we consider, the setting is stateless. Therefore, we define a policy network $\\pi(.; \\theta)$ with a constant input that produces a $d^{'}$-dimensional vector as output $\\mathbf{a}$, representing the continuous action. For the critic, we use a neural network with a $d^{'}$-dimensional input (the action) and a single output that estimates the action's quality, thereby assessing the quality of the policy that proposed it. The critic network's parameters are trained to minimize the following: Furthermore, to avoid fine-tuning, and to achieve adaptive exploration, the entropy coefficient $\\alpha$ can be adjusted dynamically to maintain a target entropy level \\cite{haarnoja2018softapp}, $H_{\\text{target}}$, ensuring sufficient exploration, as follows:\nWe use a stochastic approximation of gradients, specifically the Adam algorithm \\cite{kingma2014adam, reddi2019convergence}, to minimize the losses via stochastic gradient descent. The learning rates $\\lambda$, $\\beta$, and $\\gamma$ are used for the Q-network, the policy networks, and the temperature, respectively. \nThe methodology of our proposed algorithm, ACING, is illustrated in Fig.~\\, showing the actor-critic interaction with the environment. In Fig.~\\, the environment is zoomed in, using the \\emph{larger\\_animal} dataset as an example. In the following, we provide a detailed explanation of the full methodology.\n{Overview.} In each iteration \\( t \\leq T\\), the actor-critic agent generates a continuous vector ``action'' \\(\\mathbf{a}\\) (step 1). This action is then played on the environment, which projects \\(\\mathbf{a}\\) into the appropriate space using a fixed projection matrix $P$ to obtain \\(\\mathbf{z}\\). The environment then concatenates the projected vector \\(\\mathbf{z}\\) with a set of exemplars' embeddings from \\(\\mathcal{E}\\) and feeds it into a white-box LLM $h$ (step 2). 
The white-box LLM produces a discrete prompt, \\(\\tau\\), which is evaluated using the validation dataset \\(\\mathcal{V}\\) based on the responses from the black-box LLM $f$ (step 3). The black-box LLM generates a prediction, which is then contrasted to the true labels of the validation examples, and a score function provides a final reward to the critic. This reward is used to compute the critic's loss and update both the critic and actor networks accordingly. \\\\\n\\smartparagraph{Step \u00a0\\textcircled{{\\scriptsize 1}}.} \nThe policy-network (actor) outputs a mean and variance of a distribution from which the action is sampled. Specifically, the action, represented as a soft prompt vector, $\\mathbf{a}$ $\\in$ $\\mathbb{R}^{d^{'}}$, is obtained by sampling from a Gaussian distribution with the outputted parameters. The network also computes the associated log probability, which is crucial for policy optimization, as it guides the learning process by balancing exploration and exploitation, by controlling the policy loss, as shown in Eq.~(\\ref{policy_loss}).\\\\\n\\smartparagraph{Step \u00a0\\textcircled{{\\scriptsize 2}}.}\nAs depicted in the left side of Fig.~\\, the examples describing the task from the set of exemplars $\\mathcal{E}$, along with additional text such as ``The instruction was to,'' are input into the embedding layer of the white-box LLM to generate continuous vectors. These continuous vectors are then concatenated with the soft prompt $\\mathbf{z}$, projected from the action $\\mathbf{a}$. The layers of the white-box LLM subsequently process the resulting concatenated vector to produce the discrete prompt $\\tau$ (using the instruction generation template depicted in Fig.~\\ top right). This transformation is essential for converting the continuous prompt into a discrete format suitable for input into the black-box LLM. \\\\\n\\smartparagraph{Step \u00a0\\textcircled{{\\scriptsize 3}}.}\nAs depicted in the right side of Fig.~\\, for every input $x_i$ in the validation set $\\mathcal{V} = \\{(x_j, y_j)\\}_{j=1}^{m}$, the generated prompt $\\tau$ is concatenated to the input sentence $x_i$ and fed to the black-box LLM, which generates an output sentence $\\hat{y}_i = f(\\tau(\\mathbf{z}) \\oplus x_i)$. \u00a0The output of the black-box LLM is fed into a scoring function $q(\\cdot, \\cdot)$, which computes the score between the predicted output $\\hat{y}_i$ and the true label $y_i$. The overall score is calculated by averaging the scores across all samples, representing the reward: $r = \\frac{1}{m} \\sum_{i=1}^m q(\\hat{y}_i, y_i)$,\nwhere $m$ represents the number of samples and $r$ serves as feedback for the actor-critic algorithm.\\\\\n\\smartparagraph{Step \\textcircled{{\\scriptsize 4}}.}\nThe critic is responsible for evaluating the quality of actions taken by the actor using the network $Q_{\\textbf{w}}$. This network estimates the expected reward for a given action $\\mathbf{a}$, which is generated by the policy network $\\pi$. Observing the reward $r(\\mathbf{a})$, the critic computes the loss Eq.~(\\ref{critic_loss}) and updates its network using Eq.~(\\ref{critic_gradient}). The critic provides feedback to the actor by estimating the Q-values of the actions sampled by the policy, and the actor uses this feedback to improve its policy by maximizing Eq.~(\\ref{policy_loss}). This continuous interaction between the actor and critic ensures the actor learns to take actions that maximize the expected reward while balancing exploration and exploitation. 
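Putting steps 1 to 4 together, a minimal stateless actor-critic loop might look like the following Python sketch; it is a simplified illustration in which the black-box reward is treated as an opaque function, no replay buffer or target networks are used, and all variable names are placeholders rather than the paper's implementation.
\begin{verbatim}
import torch
import torch.nn as nn

d_action, alpha = 16, 0.1                      # soft-prompt dimension, entropy weight

actor = nn.ParameterDict({                     # stateless policy: a single Gaussian
    "mu": nn.Parameter(torch.zeros(d_action)),
    "log_std": nn.Parameter(torch.zeros(d_action)),
})
critic = nn.Sequential(nn.Linear(d_action, 64), nn.ReLU(), nn.Linear(64, 1))
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

def black_box_reward(action: torch.Tensor) -> torch.Tensor:
    """Placeholder for: project the action, generate an instruction, score it on V."""
    return -(action - 0.5).pow(2).mean()

for step in range(100):                        # evaluation budget T
    dist = torch.distributions.Normal(actor["mu"], actor["log_std"].exp())
    action = dist.rsample()
    with torch.no_grad():
        reward = black_box_reward(action)

    # Critic: regress Q(a) toward the observed reward.
    critic_loss = (critic(action.detach()).squeeze() - reward).pow(2)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    # Actor: maximize Q(a) plus an entropy bonus (so minimize the negative).
    actor_loss = -(critic(action).squeeze() - alpha * dist.log_prob(action).sum())
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()
\end{verbatim}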
\u00a0By repeating steps 1 to 4 until the allowed budget $T$ is exhausted, the agent returns the best instruction prompt, denoted as $\\tau^\\star$. This prompt is then evaluated using the test set $\\mathcal{T}$ on the black-box LLM, yielding the final score $\\mathcal{S}^\\star = \\frac{1}{m^{\\prime}} \\sum_{i=1}^{m^{\\prime}} q(f(\\tau^\\star, x^\\prime_i), y^\\prime_i)$ (using the evaluation template depicted in Fig.~\\ bottom right).\n", ["2411_12736v1_0", "2411_12736v1_1", "2411_12736v1_1", "2411_12736v1_0", "2411_12736v1_1", "2411_12736v1_0"]] [60076, "Leadsee-Precip has an encoder-decoder structure as illustrated in Fig. \\. It consists of three parts: feature extraction, hidden translator, and precipitation upsampling. The feature extraction part primarily serves to derive features from circulation variables and reduce spatial resolution. To facilitate the model coupling, the input circulation variables consist of 69 channels, including 5 upper-air variables, each with 13 levels, and 4 surface variables, as the popular deep learning global atmospheric circulation prediction models. The upper-air variables are processed using 3D ConvNets for feature extraction and downsampling, allowing the model to capture the correlations across different variables and altitude levels. The surface variables use 2D ConvNets only. We set zonal circular padding in downsampling processes to ensure proper handling of the boundary conditions. After downsampling, the spatial resolution of upper-air and surface variables reduces to one-fourth. The downsampled upper-air and surface variables then concatenate together with an additional static layer, which includes three channels and has weights automatically learned by the model. A shortcut connection with 64 channels is established between the initial variables and the precipitation upsampling part to improve the accuracy of predictions. The hidden translator part uses an encoder-decoder structure as well. The spatial resolution is reduced by half after the Enc module to save GPU memory usage while the number of channels remains unchanged. The MogaNet Hidden module contains 16 MogaNet blocks, which outperforms many of the currently leading Transformer-based network structures. The Dec module doubles the spatial resolution and adds the shortcut connection from the feature extraction part.\nThe precipitation upsampling part increases the spatial resolution of forecast results back to 0.25 degrees. A shortcut connection from the initial variables is added to this process to better reconstruct precipitation prediction. The long-tail distribution of precipitation data presents a challenge for DL models when directly trained using RMSE, as these models rarely sample the infrequent extreme precipitation events. This leads to biased forecasts, especially for heavy rainfall. The Google team introduced a method using the logit function to adjust the loss function (called logit adjustment loss), specifically aimed at addressing the class imbalance issue \\cite{menon2020long}. Inspired by this approach, we develop an IB scheme specifically designed to address the regression challenges posed by long-tail data in precipitation forecasting. The IB scheme operates on the principle that no precipitation or light precipitation, due to their high probability of occurrence (low uncertainty), actually provides less information content. 
Therefore, the information content, expressed in logit form as $-\\log P(y_i)$ for the $i$-th precipitation sample, can naturally serve as the weight for the precipitation error, balancing the differences in sample sizes across different precipitation bins. The loss function of the model aims to minimize the total \"information content-weighted error\" across all precipitation samples. Specifically, the IB scheme uses the following formula to calculate the weight (\\textit{$W_i$}):\nWe use weather station precipitation data from about 10470 weather stations in China \\cite{feng2024enhancing} to conduct extra evaluations. The original data are stored as hourly precipitation and are processed into 6-hour accumulations for the test. When evaluating with weather station data, only the precipitation results at the station locations are used. It is important to note that we did not use the weather station dataset to train the model. Therefore, such an evaluation based on a ``third-party'' weather station dataset is more objective but challenging.\nHere, we present the evaluation results of Leadsee-Precip on the global test set, followed by verification results using weather station data, and finally, the evaluation results of the fine-tuned model with weather station data. \nFig. \\ shows the diagnostic skill of our model over the global range of $60^\\circ S$ to $60^\\circ N$. The precipitation in the upper panel is diagnosed by the model based on 69 variables from upper-air layers and surface data at a single time step, while the lower panel illustrates the ground truth of the 6-hour accumulated precipitation of NOAA CMORPH. The diagnosed large rainfall (e.g., above the 25 $mm$ $6h^{-1}$ threshold) shows good consistency with the ground truth in both intensity and location. For instance, the model accurately reproduces several events of 6-hour accumulated rainfall exceeding 50 $mm$ in the eastern and western Pacific regions. For precipitation below 1 $mm$ $6h^{-1}$, the diagnostic results show a tendency for overestimation, with finer details appearing comparatively smoothed. \nTable \\ref{tab:TS_FSS_25km} presents the evaluation metrics TS and FSS of the model on the test dataset. We selected seven 6-hour accumulated precipitation thresholds: 0.1, 1, 5, 10, 25, and 50 $mm$ $6h^{-1}$. The metrics generally show a decline with increasing precipitation levels, while both TS and FSS slightly increase at the 0.1 and 1 $mm$ $6h^{-1}$ thresholds, suggesting the model tends to overestimate small rainfalls. However, the FSS at the 25 $mm$ $6h^{-1}$ threshold still exceeds 0.5, highlighting the model's ability to capture heavy precipitation events.", ["2411_12640v1_0", "2411_12640v1_1"]] [60077, "In recent years, the Transformer architecture and foundation models, leveraging self-attention mechanisms to capture complex dependencies in text, have transformed Natural Language Processing (NLP) benchmarks \\citep{vaswani2017attention, touvron2023llama2openfoundation, albert_q_jiang_mistral_2023, gemmateam2024gemmaopenmodelsbased}. Similarly, Vision Transformers (ViTs) \\citep{dosovitskiy2020image} have been adapted in Computer Vision (CV) and now serve as the backbone for various tasks such as segmentation and object detection \\citep{thisanke2023semantic, liu2021swin}.
Despite their success, understanding the interpretability of Transformers remains a challenge due to the complexity of their latent token representations.\n{Hierarchical clustering tree showing the grouping of token relevance maps for all tokens in a latent layer of the Vision Transformer, not limited to the \\texttt{CLS} token. Each leaf node represents a single token relevance map, while higher-level nodes show aggregated clusters based on a clustering threshold ($\\zeta$), which controls the level of detail. Lower $\\zeta$ values reveal finer details, while higher values create broader, more general clusters. This approach demonstrates how pre-trained Vision Transformers can perform unsupervised semantic segmentation, identifying meaningful patterns within token representations without requiring additional training or fine-tuning.}\nSeveral methods have been developed to enhance the interpretability of CNN-based models \\citep{simonyan2014deep, zeiler2014visualizing, selvaraju2017grad}. While some of these can be extended to Transformer architectures, they do not fully leverage the unique attention mechanisms inherent to Transformers. Recent research has introduced interpretability methods specifically designed for Transformers \\citep{chefer2021transformer, abnar2020quantifying, vig2019analyzing}. However, these approaches primarily focus on explaining final model outputs, providing limited insight into the intermediate processes that lead to predictions. For instance, \\citep{cheninvite} maps latent tokens into CLIP's \\citep{radford2021learning} multi-modal space to find corresponding text descriptions, relying on an external text encoder for interpretability. In contrast, our approach directly interprets the latent space of ViTs, elucidating the role and function of each token within the high-dimensional space without relying on external models.\nThis paper introduces a framework to interpret latent tokens, offering a deeper understanding of the internal workings of Transformers. This understanding enables users to perform image semantic segmentation using pre-trained Transformer-based vision models in an unsupervised, zero-shot manner, without any additional training. We demonstrate that applying semantic segmentation based on our interpretability framework achieves state-of-the-art performance on benchmark image segmentation datasets.\nDrawing inspiration from \\citep{chefer2021transformer}, our method analyzes the semantic information retained by latent tokens, enabling tasks such as object selection and semantic segmentation without additional training. We demonstrate that Transformers inherently understand the semantic structure of their input, viewing it as a collection of distinct concepts. Each latent token identifies a specific concept with semantic significance, thereby shedding light on the decision-making process of these models.\nAs shown in Section \\ref{sec: Experiment}, our framework proves effective in a range of tasks, including semantic segmentation and model interpretation. Most recent unsupervised semantic segmentation methods involve an additional training phase to learn feature representations \\citep{sick2024unsupervised, hamilton2022unsupervised, li2023acseg}. Our approach, however, utilizes the understanding embedded in pre-trained models to achieve zero-shot segmentation, leveraging their inherent knowledge of images. The stronger a model's comprehension of image content, the more accurately it performs segmentation.
We further demonstrate that our method is capable of interpreting large language models (LLMs) at the token level, validating its application in tasks such as text summarization. The main contributions of this paper are as follows:

Due to the skip connections in the Transformer, most of the contribution of \( S_i^{(l)} \) is concentrated on \( S_i^{(l)}[i-1] \), which makes it hard to analyze the contributions of other tokens. To address this issue, we replace this element with the maximum value of the other elements, thereby capturing the contributions of additional tokens to the selected token. For vision tasks, we first reshape the relevance map and then upsample it using bilinear or cubic interpolation to match the resolution of the model's input. The resulting higher-resolution matrix is denoted as \( \tilde{S} \). This upsampling step is essential for enabling accurate object selection and semantic segmentation.

\subsection{ULTra in Unsupervised Tasks}
In this section, we examine ULTra's capability to adapt to various tasks involving semantic knowledge. Importantly, it requires no additional training, leveraging the inherent understanding within Transformers rather than relying on loss-function objectives, final-layer outputs, or fine-tuning.

As previously discussed, a relevance map can be defined for each latent token at a fixed layer, with the total number of relevance maps equal to the number of latent tokens. In the context of segmentation, the goal is to assign a class label to each pixel within an image. To achieve this, we employ clustering techniques, such as hierarchical clustering, that do not require a predefined number of classes. These techniques group the relevance maps into \( k \) distinct clusters, where \( k \) is unknown. Ideally, we aim for \( k \) to approximate the actual number of classes present in the image.

Our approach provides flexibility in adjusting the value of \( k \) by modifying the cutoff distance threshold $\zeta$ within the clustering algorithm. Increasing $\zeta$ produces fewer, broader clusters that capture general categories, such as background and foreground. Conversely, reducing $\zeta$ allows for finer segmentation, distinguishing more specific features, such as an object's head or hands. To prevent the method from disproportionately favoring larger objects, given that the number of elements in each cluster may vary, we apply min-max scaling to each cluster independently. After clustering, we define \( k \) distinct concepts, where the aggregated relevance map \( \tilde{S}^{(l)}_c[x, y] \) for a cluster \( c \) is obtained by aggregating the relevance maps assigned to that cluster. Some examples illustrating our segmentation method are presented in Figure \. Additionally, the hierarchy tree and the effect of the threshold are demonstrated in Figure \.

UGAD has attracted increasing attention in recent years, and many researchers have focused on how to use self-supervised signals to improve a model's ability to detect anomalies in unsupervised scenarios. Early traditional UGAD methods, such as Radar~\cite{li2017radar} and ResGCN~\cite{pei2022resgcn}, which capture anomalous features by describing the residuals of attribute information and their consistency with network information, are concise and effective.
With the rapid development of GNNs, they have also been naturally introduced into the field of anomaly detection, and recent work has sought to develop GNN-based UGAD models. We broadly categorize GNN-based models into three types: MPI methods, CL-based methods, and GAE-based methods. In this section, we propose a novel unsupervised multiplex graph anomaly detection method named \model, depicted in Fig.~\, which mainly includes three key components: \textit{(1) Original-view Graph Reconstruction}, \textit{(2) Augmented-view Graph Reconstruction}, and \textit{(3) Dual-view Contrastive Learning}. These components capture anomalous information on both the original and augmented graphs, and contrastive learning fuses the attribute and structural information of the nodes learned from the two views to improve anomaly detection performance.

It is well known that most GNN-based methods exploit the message passing mechanism~\cite{wang2021modeling} of GNNs to learn node attributes by aggregating the neighborhood information of nodes. However, due to the feature inconsistency between normal and anomalous nodes, the original message passing mechanism may impair the model's ability to extract anomalous information. Luo et al.~\cite{luo2022comga} improve the message passing mechanism from the perspectives of community segmentation and community structure learning, obtaining node attributes by introducing the community structure. Qiao et al.~\cite{qiao2024truncated} propose a truncated affinity maximization method named TAM, which maximizes the local affinity of a node with its neighboring nodes and performs node representation learning on a truncated graph instead of the original graph.

As a common method for mining self-supervised signals~\cite{liu2024towards,kong2024federated,liu2024bourne}, contrastive learning has also been introduced to the UGAD task~\cite{wang2023cross} to capture feature inconsistencies between normal and anomalous nodes. CoLA~\cite{liu2021anomaly} is a novel contrastive instance-pair sampling method that leverages local information in network data to learn informative embeddings from high-dimensional attributes and local structures. Zhang et al.~\cite{zhang2022reconstruction} propose a self-supervised learning framework called Sub-CR that jointly optimizes two modules, multi-view contrastive learning and attribute-based reconstruction, to more accurately detect anomalies on attributed networks. In addition to focusing on node attribute learning and attribute reconstruction, contrastive learning can also be extended to the subgraph level to capture complex structural anomalies. GRADATE~\cite{duan2023graph}, a multi-view and multi-scale contrastive learning framework including node-subgraph and subgraph-subgraph contrastive learning, can better capture the differing features of normal and abnormal substructures, which helps to identify complex structural anomalies. Huang et al.~\cite{huang2023unsupervised} combine variance-based models and attribute reconstruction models to detect outliers in a balanced way.

\textbf{GAE-based methods:} GAE-based methods~\cite{li2019specae} are among the mainstream approaches, detecting anomalies by reconstructing the node attributes and structural features of the graph.
Ding et al.~\cite{ding2019deep} use a GCN to model graph topology and node attributes to learn node embeddings. Roy et al.~\cite{roy2024gad} incorporate neighborhood reconstruction in their proposed GAD-NR, aiming to reconstruct the entire neighborhood of a node (including local structure, self-properties, and neighborhood properties) based on the corresponding node representation. ADA-GAD~\cite{he2024ada} introduces a two-stage anomaly-denoising autoencoder framework, where the first stage trains the GAE on generated denoised graphs, and the second stage retains only the encoder and retrains it on the original graphs while regularizing the distribution of node anomalies to prevent anomaly overfitting.

The results of ranked anomaly scores on two datasets with injected anomalies are shown in Fig.~\ and Fig.~\. Compared with the other four best-performing baselines, the curve of our UMGAD converges quickly (i.e., becomes stable) at a position closer to the actual number of anomalies in the Retail and Alibaba datasets. In this section, we evaluate the performance of our proposed method through extensive experiments and answer the following questions.

\textbf{Dataset.} We have conducted experiments on two datasets injected with synthetic anomalies, Retail Rocket~\cite{ren2024sslrec} (Retail for short) and Alibaba~\cite{fu2023multiplex}, and two real-world publicly available datasets with anomalies, Amazon~\cite{dou2020enhancing} and YelpChi~\cite{kumar2019predicting}. \textit{The detailed description of these datasets is given in the supplementary material.} We compare with four categories of methods on the unsupervised GAD task.

This work focuses on complementary-label learning (CLL), a weakly-supervised learning (WSL) problem where each label indicates only a class to which a data instance \textit{does not belong}~\cite{ishida2017learning}. CLL aims to train models with these complementary labels while still enabling accurate predictions of the ordinary labels during testing. CLL makes machine learning more practical in scenarios where obtaining ordinary labels is difficult or costly~\cite{ishida2017learning}. Additionally, CLL broadens our understanding of machine learning's practical potential under limited supervision.

Current research on CLL has introduced numerous learning algorithms~\cite{scl2020, fwd2018, gao2021discriminative, cpe2023} that have been evaluated using a diverse range of datasets, from synthetic datasets based on varied complementary-label generation assumptions to real-world datasets~\cite{clcifar2023}.

However, the performance of these algorithms often varies significantly across studies due to differences in underlying label-generation assumptions, the absence of a standardized evaluation platform, and the use of diverse network architectures~\cite{scl2020, fwd2018, ishida2017learning, cpe2023}. Establishing a fair, reproducible, and stable evaluation environment is therefore essential for advancing CLL research. For instance, variations in network architectures, such as the use of ResNet18~\cite{xu2019generativediscriminative, clcifar2023} versus ResNet34~\cite{fwd2018, scl2020}, contribute to inconsistencies in performance and hinder fair comparisons across studies. Furthermore, most CLL research has not publicly released implementations~\cite{mcl2020, cpe2023, scl2020, ComCo2023}, particularly regarding details like loss calculation and data pre-processing.
This lack of accessibility presents a challenge for researchers seeking to validate and build upon existing work in CLL.

To enable meaningful comparisons among CLL algorithms and create a user-friendly environment for implementation and innovation, we introduce \texttt{libcll}, a complementary-label learning toolkit built with PyTorch-Lightning. This toolkit standardizes the evaluation process while offering extensive customization options, making it easier for researchers to develop, test, reproduce, and refine algorithms. By performing comprehensive benchmark experiments across established CLL datasets, various algorithms, and a range of complementary-label distributions, as illustrated in Figure \, \texttt{libcll} provides a robust and reproducible evaluation framework. Our goal is for \texttt{libcll} to accelerate progress in CLL research and foster a collaborative research community. Additionally, there is a growing body of research focused on leveraging multiple complementary labels for supervision \cite{mcl2020, reg2021, ComCo2023, conu2023}, where each instance is assigned multiple complementary labels generated from a transition matrix without replacement. In fact, the problem of learning from multiple complementary labels can be connected to partial-label learning or negative-unlabeled learning \cite{conu2023}. Building on these concepts, \cite{clcifar2023} curated a human-labeled CIFAR~\cite{krizhevsky2009learning} dataset with complementary labels to better understand real-world CLL distributions, where the transition matrices are both biased and noisy, and each instance has three complementary labels. There remain several open problems in CLL. First, because the transition matrix $T$ is predefined, there is no universal generation process for biased complementary labels, and a general framework is needed for fair comparison. Second, in the absence of ordinary labels, the transition matrix $T$ is often assumed to be given. If a small portion of true labels is available, the transition matrix can be estimated using the anchor point method proposed in \cite{fwd2018}. These variations can lead to inconsistent experimental outcomes. Finally, without ordinary labels, the reliability of validation using only complementary labels is uncertain.

To address these challenges, we introduce \texttt{libcll}, the first CLL toolkit, to support future CLL research and advance understanding in the weakly-supervised learning community.

\section{Previous Methods on CLL}\label{sec:previous-methods}
In this section, we present a timeline of key developments in CLL, as illustrated in Figure~\. We implement three primary categories of CLL methods in \texttt{libcll}: URE (unbiased risk estimator), CPE (complementary probability estimation), and MCL (multiple complementary label) methods. Additionally, we include several bridging works that connect CLL with other learning frameworks.

\textbf{URE Series of Works.} The concept of complementary-label learning was initially proposed by \cite{ishida2017learning}, who introduced a risk estimator using Pairwise Comparison (PC) and One-vs-All (OvA) strategies for restricted loss functions. With biased complementary labels, \cite{fwd2018} employed forward correction to reconstruct the classification risk using a cross-entropy loss on complementary labels and the transition matrix.
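As a concrete illustration of the forward-correction idea, here is a minimal sketch under the usual formulation (it is not code from \texttt{libcll}): the ordinary-label posterior predicted by the model is pushed through the transition matrix $T$ to obtain a complementary-label distribution, and the cross-entropy is taken against the observed complementary label.
\begin{verbatim}
import torch
import torch.nn.functional as F

def forward_correction_loss(logits: torch.Tensor,
                            comp_labels: torch.Tensor,
                            T: torch.Tensor) -> torch.Tensor:
    """Forward-corrected cross-entropy for complementary labels.

    logits:      (batch, K) raw model outputs over ordinary classes.
    comp_labels: (batch,) indices of the complementary (not-this-class) labels.
    T:           (K, K) transition matrix with T[y, y_bar] = P(y_bar | y).
    """
    p_ordinary = F.softmax(logits, dim=1)        # P(y | x)
    p_comp = p_ordinary @ T                      # P(y_bar | x) = sum_y P(y|x) T[y, y_bar]
    p_comp = p_comp.clamp_min(1e-12)             # numerical safety before taking the log
    return F.nll_loss(p_comp.log(), comp_labels) # cross-entropy on complementary labels

# Usage sketch: under the uniform assumption, T has zeros on the diagonal and
# 1/(K-1) elsewhere; a biased setting simply plugs in a different (given) T.
\end{verbatim}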
Furthermore, \cite{ishida2019complementarylabel} removed these restrictions, extending the unbiased risk estimator to support arbitrary loss functions and models.

\textbf{CPE Framework.} \cite{cpe2023} offered a new perspective by approaching prediction as a decoding process with the transition matrix. This decoding framework can unify various risk-estimator methods \cite{fwd2018, scl2020, gao2021discriminative}.

\section{Data Collection}
Figure~\ presents the methodology of this study. In this section, we explain the significant effort spent to construct a comprehensive dataset that is critical to this study and similar ones, as shown on the left side of Figure~\. For convenience of presentation, we define project deprecation in Section~\ref{ss:difinitiondeprecation}. We then describe how we select projects and construct the dataset in Sections~\ref{ss:projectselection} and~\ref{ss:dataprocessing}, respectively.

\subsection{Definition of Repository Deprecation}
There is no \textit{de facto} definition of repository deprecation. Rather than defining the concept over all repositories in the open-source world, we first limit the scope of our research to GitHub. GitHub is a code hosting platform of global prominence and extensive use, with a vast array of projects and contributors, ensuring our dataset's scale and diversity. Besides, GitHub's rich development history and ``archive repository'' feature highlight repository deprecations, ensuring the number and correctness of positive samples.

How does HITS work in practice? How does its capability of forecasting deprecation compare with other metrics? To answer these questions, we first choose the once-popular code editor Brackets, which was deprecated in favor of the more feature-complete VSCode in 2021, as a showcase. Second, we explore the predictive power of HITS's delta, $\Delta HITS = HITS_{t} - HITS_{t-1\,month}$, within three randomly sampled projects: Project~\cite{}, ~\cite{} and~\cite{}.

Figure~\ displays the activity statistics of the Brackets project since 2015. The figure shows the number of new events created each month for stars, issues, PRs, commits, comments, and tags. It is evident that the development of Brackets has been gradually stagnating since 2015. Yet, the project continued to receive a high and stable number of stars each month, with a notable surge in 2021. The HITS weight, however, as a reliable indicator of the project's impact, has been on a steady decline since 2015. This case demonstrates that the HITS weight is a more promising representation of a project's deprecation trend and less prone to noise than other indicators.

Figure~\ref{delta_hits} illustrates the relationship between $\Delta$HITS and time for the three randomly selected projects. It is clear that for Project~\cite{}, represented in Figure~\, there was no negative $\Delta$HITS during the observation period. Indeed, this project has not been deprecated and is still under active maintenance. However, for Project~\cite{} and~\cite{}, represented in Figures~\ and~\ respectively, deprecation occurred during the observation period.
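A minimal sketch of how this $\Delta$HITS signal could be computed and monitored is shown below (illustrative only; the data layout and the alert threshold are assumptions, not the paper's implementation):
\begin{verbatim}
import pandas as pd

def delta_hits_alerts(hits_by_month: pd.Series, threshold: float = 0.0) -> pd.Series:
    """Return the months whose month-over-month HITS drop falls below a threshold.

    hits_by_month: HITS weight of one repository indexed by month
                   (a pandas DatetimeIndex or PeriodIndex at monthly frequency).
    threshold:     alert when delta_HITS = HITS_t - HITS_{t-1 month} < threshold.
    """
    delta_hits = hits_by_month.diff()          # HITS_t - HITS_{t-1 month}
    return delta_hits[delta_hits < threshold]  # candidate harbingers of deprecation

# Usage sketch: a pronounced negative spike shortly before archiving would be
# read as a warning sign, as in the sampled projects discussed above.
\end{verbatim}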
In each case, a negative peak in $\Delta$HITS in the month preceding deprecation serves as a harbinger of the event. Therefore, it is evident that the HITS weight, as a predictor of repository deprecation, exhibits a high degree of sensitivity and has the potential to accurately detect a trend towards deprecation.

\section{Introduction}
Diffusion models have shown remarkable capabilities in generating high-quality images~\citep{ho2020denoising}, with recent advances further enhancing user control over the generation process. Trained on vast data, these models can create stunning images from simple text prompts, unlocking diverse image editing and synthesis applications~\citep{meng2022sdedit,ruiz2023dreambooth,zhang2023adding}.

To pursue higher image quality and more precise text-to-image alignment, researchers are increasingly scaling up diffusion models. As shown in \, Stable Diffusion~(SD) 1.4~\citep{rombach2022high} has only 800M parameters, while SDXL~\citep{podell2023sdxl} scales this up to 2.6B parameters. AuraFlow v0.1~\citep{auraflow0.1} extends this further to 6B parameters, with the latest model, FLUX.1~\citep{flux1}, pushing the boundary to 12B parameters. Compared to large language models (LLMs), diffusion models are significantly more computationally intensive. Their computational costs\footnote{Computational cost is measured by the number of Multiply-Accumulate operations (MACs). 1 MAC = 2 FLOPs.} increase more rapidly with model size, posing a prohibitive memory and latency barrier for real-world model deployment, particularly for interactive use cases that demand low latency.

As Moore's law slows down, hardware vendors are turning to low-precision inference to sustain performance improvements. For instance, NVIDIA's Blackwell Tensor Cores introduce a new 4-bit floating-point (FP4) precision, doubling the performance compared to FP8~\citep{blackwell}. Therefore, using 4-bit inference to accelerate diffusion models is appealing. In the realm of LLMs, researchers have leveraged quantization to compress model sizes and boost inference speed~\citep{dettmers2022gpt3,xiao2023smoothquant}. However, unlike LLMs--where latency is primarily constrained by loading model weights, especially with small batch sizes--diffusion models are heavily compute-bound, even with a single batch. As a result, weight-only quantization cannot accelerate diffusion models. To achieve a speedup, both weights and activations must be quantized to the same bit width; otherwise, the lower-precision side will be upcast during computation, negating potential performance enhancements.

From \eqn{low-rank-compute}, the quantization error is governed by the residual, which the following bound makes precise.

\textbf{Proposition (Quantization error bound).} For any tensor $\mR$ and the quantization method described in \eqn{quantization_def} as $Q(\mR) = s_\mR \cdot \mQ_\mR$, assuming the elements of $\mR$ follow a normal distribution, we have
\begin{equation*}
\mathbb{E} \sbr{\norm{\mR - Q(\mR)}_F} \le \frac{\sqrt{\log \rbr{\text{size}(\mR)}\,\pi}}{q_{\max}}\,\mathbb{E}\sbr{\norm{\mR}_F},
\end{equation*}
where $\text{size}(\mR)$ denotes the number of elements in $\mR$. See \app{proof2} for the proof. From this proposition, we obtain the intuition that the quantization error $\norm{\mR-Q(\mR)}_F$ is bounded by the magnitude of the residual $\norm{\mR}_F$.
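As a quick numerical sanity check of this bound, the following sketch quantizes a Gaussian tensor and compares the error against the right-hand side; it assumes an absmax-style quantizer of the form $Q(\mR) = s_\mR \cdot \mQ_\mR$ with $s_\mR = \max|\mR| / q_{\max}$, which may differ from the paper's exact quantizer.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
R = rng.standard_normal((2048, 2048))   # residual with i.i.d. normal entries
q_max = 7                               # e.g., symmetric INT4 range {-7, ..., 7}

# Absmax-style quantization: scale so the largest magnitude maps to q_max.
s = np.abs(R).max() / q_max
Q_R = s * np.round(R / s)

err = np.linalg.norm(R - Q_R)           # ||R - Q(R)||_F
bound = np.sqrt(np.log(R.size) * np.pi) / q_max * np.linalg.norm(R)

print(f"quantization error {err:.1f} vs. bound {bound:.1f}")  # error stays below the bound
\end{verbatim}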
Thus, our goal is to find the optimal $\mL_1\mL_2$ that minimizes $\norm{\mR}_F = \norm{\hat{\mW} - \mL_1\mL_2}_F$, which can be solved by a simple Singular Value Decomposition (SVD). Given the SVD $\hat{\mW}=\mU\mSigma\mV$, the optimal solution is $\mL_1 = \mU\mSigma_{:,:r}$ and $\mL_2 = \mV_{:r,:}$. Figure~\ illustrates the singular value distribution of the original weight $\mW$, the transformed weight $\hat{\mW}$, and the residual $\mR$. The singular values of the original weight $\mW$ are highly imbalanced. After smoothing, the singular value distribution of $\hat{\mW}$ becomes even sharper, with only the first several values being significantly larger. By removing these dominant values, the Eckart--Young--Mirsky theorem\footnote{\url{https://en.wikipedia.org/wiki/Low-rank_approximation}} suggests that the magnitude of the residual $\mR$ is dramatically reduced, since $\norm{\mR}_F = \sqrt{\sum_{i=r+1}^{\min(m,n)} \sigma_i^2}$ compared to the original magnitude $\norm{\hat{\mW}}_F = \sqrt{\sum_{i=1}^{\min(m,n)} \sigma_i^2}$, where $\sigma_i$ is the $i$-th singular value of $\hat{\mW}$. Furthermore, empirical observations reveal that $\mR$ exhibits fewer outliers and a substantially compressed value range compared to $\hat{\mW}$, as shown in Figure~\ (d)(e). In practice, we can further reduce quantization errors by iteratively updating the low-rank branch, decomposing $\mW - Q(\mR)$ and adjusting $\mR$ accordingly for several iterations, and then picking the result with the smallest error.

\section{Introduction}
Transformer-based models have not just advanced, but fundamentally reshaped, how we approach both vision and language processing, merging these domains in shared sequential representation spaces. Indeed, most recent multi-modal models, including DALL-E \citep{ramesh2022hierarchical}, LLaVA \citep{liu2024visual}, and Chameleon \citep{team2024chameleon}, operate over joint tokenized representations of images and language, where models decompose images into ``visual languages'': linearized discrete patches or tokens analogous to words in a sentence. This process, shown in \, enables seamless integration of images into transformer architectures and allows models to solve multimodal tasks ranging from image generation and image captioning to visual question answering and translation.

Despite the success of such shared-structure models, current research lacks an in-depth understanding of whether the internal structure of visual tokens mirrors the principles governing natural languages. Specifically, the question arises: do languages formed of visual tokens follow the same statistical patterns, such as frequency distributions, grammatical rules, or semantic dependencies, that human languages exhibit? Investigating the statistical behavior of discrete visual tokens extends beyond theoretical curiosity; it has broad implications for practical machine learning applications. While phenomena like Zipf's law and entropy shape both the structure of natural languages and the design of machine learning algorithms, no such rules are established for visual languages. Such rules, if they exist, have the potential to motivate the creation of modality-specific models and procedures that capture the unique statistical properties of the underlying visual data.
In pursuit of such rules, in this paper we inspect the equivalence of visual and natural languages through an empirical analysis of token distributions, segmentation granularity, and syntactic and semantic structures. We start by investigating the frequency statistics of visual words and compare them to natural languages. Our analysis reveals that although visual languages can follow power-law (Zipfian) distributions, they use more tokens more uniformly. This leads to languages with greater per-token entropy and lower compression ratios, and implies that vision models may require more attention heads, larger embeddings, and longer training times with more diverse data compared to natural language models. Noting in these experiments that visual languages have coarser granularity than patches, we demonstrate through correlation analysis that visual tokens operate at an intermediate level of granularity, typically representing object parts rather than whole objects or sub-parts in images. Correspondingly, we show that visual tokens are less effective at representing fine-grained details or whole-object structures. Following this line of reasoning, we explore whether tokens have composable structure, and using parse trees generated by Compound Probabilistic Context-Free Grammars (C-PCFG), we show that visual languages have grammatical structures that are more fragmented, with grammars trained on them exhibiting higher perplexity compared to natural languages. We confirm these observations by building a co-occurrence-based embedding space and evaluating the topological alignment between natural and visual languages. In this, we find that visual languages align more with natural languages than with other visual languages, but less so than natural languages align with each other.

Together, we aim to show through these experiments that while visual languages have striking similarities to natural languages, there are also notable and fundamental differences, motivating unique, modality-specific approaches to vision-language learning.

\section{Are Visual Languages Structured Like Natural Languages?}
In \autoref{sec:entropy} we showed that visual languages are not very compressible using Huffman encodings, suggesting that visual languages may not have hierarchical structures similar to those of natural languages. To inquire further into this question, we test whether Context-Free Grammars~\citep{chomsky1959algebraic} can approximate the structure of visual languages as well as they can natural languages by fitting grammars to each modality using unsupervised grammar induction techniques. In particular, we use Compound Probabilistic Context-Free Grammars (C-PCFG)~\citep{kim-etal-2019-compound} as the grammar formalism for our experiments. C-PCFGs are a type of neural PCFG in which grammar production rules are modeled as compound probability distributions~\citep{robbins1956empirical}: every production depends on both the set of symbols in the grammar and a global latent variable $z$. This formulation, trained with variational methods, allows global sentence information to flow through all parsing decisions in a sentence while remaining compatible with the efficient inference methods that standard PCFGs enjoy \citep{baker1979trainable}. For more details on C-PCFGs see Appendix \ref{app:cpcfgs}. C-PCFG memory costs are cubic in sentence length, leading us to use the \verb|compvis-vq-f8-64| tokenizer for visual grammars, which provides a tractable 32 tokens per image.
\nFor each dataset, we train grammars over five seeds for 15 epochs and select the seed with the lowest test set perplexity for analysis. \nWe test our pipeline by evaluating parsers learned on English COCO captions (COCO-EN) against silver-label parse trees extracted with Benepar~\\citep{kitaev-klein-2018-constituency}, attaining an F1 score of 49 on the best seed, which is comparable to prior work~\\citep{zhao-titov-2020-visually}. \nWe report test set statistics over learned grammars, such as final parse tree perplexity (PPL) and percentage reduction in perplexity (PPL-R) from random initialization to convergence. \nThe mean branching factor~\\citep{li-etal-2024-evaluating} (MBF) measures on average whether generated parse trees tend to branch right or left. \nThis is achieved by averaging the proportion of leaves between the right and left branches of nodes $n$ across parse trees $t$ in the dataset: \nHere, CR and CL represent the counts of leaves in the right and left branches of a node, respectively. \nTo get a better sense of parse tree topology, we also measure the ratio between tree height (the length of the longest path in the tree) and the minimum possible height for the tree: \nWhere H$(t)$ and L$(t)$ are the height and number of tokens in the input sequence, respectively. \nCodebook utilization (CU) measures the percentage of non-terminal labels utilized within generated parse trees. \nWe present these statistics in \\autoref{tab:grammar_stats}, as well as normalized non-terminal node frequencies for parse trees generated by each grammar in \\auto, with some example parse trees in \\auto. \nAlthough both modalities experience a great reduction in perplexity compared to random initialization, textual grammars (COCO-EN and COCO-DE) generally exhibit greater reductions in perplexities than visual grammars, corroborating findings from \\autoref{sec:entropy} which suggest that visual tokens are not as compressible as textual tokens. \u00a0\nAlthough visual grammars converge to PPL values an order of magnitude greater than the textual grammars, we observed that their PPL values at the start of training are proportionally higher, likely due to the generally longer visual sentence length (32 tokens in these experiments). \nAll other measures are generally comparable across modalities -- both modalities show similar proclivities towards right-branching trees (MBF), although visual grammars are somewhat more balanced. \nBoth modalities present similar tree heights (FR), with the non-terminal label codebooks being largely utilized. \nThe notable exception to these trends is the grammar trained on XM3600 tokens. \nXM3600 contains a significantly lower number of training examples (one order of magnitude less than SPIN, and two orders less than all other datasets), which may have resulted in a degenerate grammar being learned. \nThese results suggest that the structure of visual languages may not be as well approximated by context-free grammars as natural languages are. \nThis raises the question of whether they may be better fit by other grammatical formalisms, such as mildly context-sensitive grammars~\\citep{yang-etal-2023-unsupervised} which allow for dependencies to cross between token spans. 
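For concreteness, one plausible formalization of the two tree statistics used above is the following (a hedged reconstruction consistent with the prose; the exact definitions in \citep{li-etal-2024-evaluating} may differ):
\[
\mathrm{MBF} = \frac{1}{|\mathcal{T}|}\sum_{t \in \mathcal{T}} \frac{1}{|N(t)|}\sum_{n \in N(t)} \frac{\mathrm{CR}(n)}{\mathrm{CR}(n)+\mathrm{CL}(n)},
\qquad
\mathrm{FR}(t) = \frac{\mathrm{H}(t)}{\lceil \log_2 \mathrm{L}(t) \rceil},
\]
where $\mathcal{T}$ is the set of parse trees, $N(t)$ denotes the internal nodes of tree $t$, and $\lceil \log_2 \mathrm{L}(t) \rceil$ is the minimum achievable height of a binary tree over $\mathrm{L}(t)$ leaves; under this reading, MBF values above 0.5 indicate right-branching trees and FR values near 1 indicate maximally balanced trees.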
\u00a0", ["2411_05001v1_0", "2411_05001v1_1", "2411_05001v1_2"]] [60083, "Dataset Generation Pipeline\nIn this section, we provide an overview of the question-answer creation pipeline that we developed to create HourVideo.\nThe pipeline is summarized in Fig.~\\.\nA crucial design consideration for this benchmark is the selection of video sources and types. We chose the Ego4D~\\cite{grauman2022ego4d} dataset for our videos for multiple reasons: (1) its egocentric perspective\naligns well with\nthe typical visual input for autonomous agents and assistants; (2) it features extensive visual narrations, which aid in creating diverse multiple-choice questions; and (3) it is readily accessible under the Ego4D license. We manually reviewed 1,470 videos, ranging from 20 to 120 minutes, from the Ego4D dataset, assessing their potential to generate relevant questions for various tasks in our task suite.\nWe engaged five human experts for video curation.\nFollowing this process, we curated 500 egocentric videos.\nCandidate MCQ Generation, Stage 2.\nThe objective of this stage is to produce high-quality MCQs for each task,\nrequiring analysis and synthesis of information across multiple temporal segments in a long-form video. Initially, we manually develop question template(s) for each task in the suite. As shown in Table~\\ref{table_main:task_suite}, transforming a question template into an actual question involves incorporating video-specific information tailored to the task and template.\nTo facilitate this, we utilize the detailed narrations from the Ego4D dataset, transforming them into a structured format that can be processed by an LLM.\nSpecifically, we segment the video at 20-minute intervals,\nwith each segment's representation including a summary and a list of tools, food items, technology, humans, pets, and physical locations encountered by the camera wearer in the video.\nWe note that synthesizing a structured representation and a question template into a valid question with correct and incorrect answers presents a significant challenge, even for advanced LLMs.\nConsequently, for each task, we formulate detailed prompts that offer question prototypes, comprehensive instructions, in-context examples, and step-by-step guidance on how to transform a question template into a valid candidate $\\QAW_{2}$.\nIn total, we developed 25 task-specific prompts\nRefinement with LLMs using Human Feedback, Stage 3.\nThe purpose of this phase is to refine $\\QAW_{2}$, created in the\nprevious\nstage. 
$\\QAW_{2}$ may contain invalid questions, incorrect answers,\ntrivial incorrect options,\nand various other issues.\nWe identified that a significant source of these issues stemmed from relying on the noisy narrations in Ego4D.\nFor example, different narrators within the same video could refer to a dishwasher as a \"plate rack\" or use other terms, and an individual might be described as an \"adult,\" \"person with a red and white shirt,\" \"man Y,\" or \"teenager\" at various times in the narration.\nThese inconsistencies, combined with our automatic question generation in the first stage, could lead to\ngeneration\nof invalid\nTo address noisy {\\QAW}s, we implement a human feedback system where trained annotators are tasked with: 1) assessing the validity of each question to ensure it aligns with the video content, 2) verifying the accuracy of the given answer\u2014if found incorrect, they provide the correct answer in free-form text, 3) ensuring that all incorrect options are factually wrong and clearly distinguishable from the correct answer.\nWe gather human feedback for all $\\QAW_{2}$, involving over 400 hours of human\neffort.\nWe then design prompts, to automatically refine $\\QAW_{2}$ using this human feedback to produce $\\QAW_{3}$.\nWe engaged seven trained annotators in this stage.\nBlind filtering, Stage 4. Modern LLMs possess extensive prior knowledge and can thus easily answer certain questions without needing to analyze the videos. The objective of this phase is to eliminate questions that can be answered through prior knowledge or can be trivially answered without requiring any information from the video. To address this, we do blind filtering of $\\QAW_{3}$, utilizing two separate blind LLMs (GPT-4-turbo and GPT-4). Specifically, we exclude any MCQ that is correctly answered by at least one LLM without video input. Although this method may aggressively remove MCQs, it ensures that the remaining $\\QAW_{4}$ are of high quality and specifically tailored to test long-form video-language understanding.\nExpert Refinement, Stage 5.\nThe aim of this stage is to enhance the quality of $\\QAW_{4}$ by utilizing a selected group of expert human annotators.\nThis stage serves as a comprehensive step to address various remaining issues that might have persisted through prior stages. Examples of expert refinement include transforming a broad question like \"Where did the camera wearer leave the keys?\" into a more precise query: \"Where did the camera wearer leave the bike keys after returning home from shopping?\u201d\nOver 300 hours of expert human effort are employed in this stage to carefully examine and refine $\\QAW_{4}$, culminating in a high-quality $\\QAW_{5}$.\nWe engaged four human experts in this stage.\nHourVideo Statistics\nHourVideo consists of 500 videos from the Ego4D dataset, covering 77 \\textit{daily life scenarios} such as cooking, cleaning, eating, watching TV, baking, etc. (Fig.~\\). The dataset includes 381 hours of video footage, with video durations ranging from 20 to 120 minutes (Figure~\\). On average, each video is approximately 45.7 minutes long, which 15$\\times$ larger than prior work in long-form video-language understanding~\\cite{mangalam2024egoschema}. Additionally, 113 videos in our dataset exceed one hour in length. 
Each video is accompanied by an average of 26 high-quality, five-way multiple-choice questions, totaling 12,976 questions in the dataset. Finally, we strive to ensure an even distribution of {\QAW}s across all tasks in our suite, with the exception of causal, counterfactual, and navigation tasks, where questions were manually generated for a selected group of videos.

\section{Introduction}
Recent advances in video diffusion models demonstrate significant improvements in visual and motion quality~\citep{VideoDiffusionModels,blattmann2023stable,AlignYourLatents,LVDM}. These models typically take a text prompt~\citep{VideoDiffusionModels,blattmann2023stable,ImagenVideo} or image~\citep{VideoCrafter1,VideoCrafter2,AnimateDiff,DynamiCrafter} as input and generate video frames of a photorealistic, animated scene. Current methods can generate videos that are largely consistent with an input text description or image; however, fine-grained adjustment of specific video elements (e.g., object motion or camera movement) is conventionally a tedious process that requires re-running the model with different text prompts or random seeds~\citep{wu2023freeinit,qiu2024freetraj}. Approaches for controllable video generation aim to eliminate this trial-and-error process through direct manipulation of generated video elements, such as object motion~\citep{wu2024draganything,yin2023dragnuwa,wang2024boximator}, pose~\citep{AnimateAnyone,MagicAnimate}, and camera movement~\citep{MotionCtrl,li2024image,CameraCtrl,MotionMaster}.

One line of work fine-tunes pre-trained video generators to incorporate control signals such as bounding boxes or point trajectories~\citep{wu2024draganything,MotionCtrl}. One of the primary challenges with these supervised methods is the expensive training cost, and thus previous methods usually incorporate trajectory control by fine-tuning at a lower resolution than the original model~\cite{wu2024draganything, yin2023dragnuwa}. More recently, several methods for zero-shot, controllable text-to-video generation have been developed~\citep{ma2023trailblazer,qiu2024freetraj,jain2024peekaboo}. They control object trajectories by modulating the cross-attention maps between features within a bounding box and an object-related text token. Still, it is not always possible to associate a desired edit with the input text prompt (consider, e.g., motion of object parts). Moreover, these methods cannot be directly applied to animate existing images, as they are only conditioned on text.

In this work, we propose SG-I2V, a new method for controllable image-to-video generation. Our approach is \textit{self-guided}, in that it offers zero-shot control by relying solely on knowledge present in a pre-trained video diffusion model.
\nConcretely, given an input image, a user specifies a set of bounding boxes and associated trajectories.\nThen, our framework alters the generation process to control the motion of target scene elements.\nIt is essential to manipulate the structure of the\ngenerated video to achieve precise control over element positions, which is mainly decided by early denoising steps~\\citep{balaji2022ediff,DMProcessAnalyze}.\nIn image diffusion models, it is known that feature maps extracted from the output of upsampling blocks are \\textit{semantically aligned}, i.e., pixels belonging to the same object share similar feature vectors on the feature map and thus can be used to control the spatial layout of generated images~\\citep{tang2023emergent, shi2024dragdiffusion, namekata2024emerdiff, tumanyan2023plug}.\nHowever, our analysis reveals that feature maps extracted from the upsampling blocks of video diffusion models are only weakly aligned across frames (see~\\).\nThis misalignment poses challenges, as directly manipulating these feature maps fails to give useful guidance signals for layout control. %\nInstead, we find that feature maps extracted from the self-attention layers can be semantically aligned by replacing the key and value tokens for each frame with those of the first frame (see bottom row of \\).\nAfter that, we can control the motion of generated videos by optimizing the latent (the input to the denoising network) with a loss that encourages similarity between the aligned features within each bounding box along the input trajectory. \nFinally, we apply a post-processing step to enhance output quality by ensuring that our optimization does not disrupt the distribution of high-frequency noise expected by the diffusion model.\nIn summary, our work makes the following contributions:\nIn this section, we describe our method for the trajectory control task in image-to-video generation (\\cref{sec:3_task}).\nOur framework, SG-I2V, builds on the publicly available image-to-video diffusion model Stable Video Diffusion~(SVD)~\\citep{blattmann2023stable},\nand consists of two main steps. \nFirst, we extract and semantically align the feature maps from a specific layer of SVD during the early steps of the diffusion process (\\cref{sec:motion_estimation}); we show that such feature maps are especially effective at influencing motion in the output video.\nSecond, we optimize the noisy latent (i.e., the input to the denoising network) to enforce similarity between features within the bounding box trajectories (\\cref{sec:motion_optimization}).\nHowever, we find that naive optimization of latent is prone to overfitting and often results in low-quality generation.\nThus, we employ frequency-based post-processing to retain an in-distribution noisy latent (\\cref{sec:fft}).\nOur entire pipeline is summarized in \\.\nPreliminaries: Stable Video Diffusion.\nVideo diffusion models~\\citep{VideoDiffusionodels} learn a data distribution $\\bm{x}_0 \\sim p_\\theta(\\bm{x}_0)$ by gradually denoising a video corrupted by Gaussian noise. 
\nThe output denoised video is thus drawn from the distribution $p_\\theta(\\bm{x}_0) = \\int p_\\theta(\\bm{x}_{0:T}) \\ d\\bm{x}_{1:T}$, where $\\bm{x}_0 \\in \\mathbb{R}^{N \\times H \\times W}$ is a clean video, and $\\bm{x}_{1:T}$ are intermediate noisy samples.\nFor simplicity, we omit the channel dimension throughout the paper.\nTo reduce computation, Stable Video Diffusion (SVD)~\\citep{blattmann2023stable} performs the diffusion process in a latent space, where a variational autoencoder~\\citep{VAE} maps a raw video $\\bm{x}_0$ to a latent $\\bm{z}_0 \\in \\mathbb{R}^{N \\times h \\times w}$. \\\\\nSince this work aims to animate an existing image, we utilize the image-to-video variant of SVD, which concatenates a conditioning frame with noisy latent ($\\bm{z}_t$) and runs a 3D U-Net~\\citep{U-net} to predict the noise.\nThe 3D U-Net contains a downsampling and an upsampling path.\nSpecifically, the upsampling path consists of three stages operating at different resolutions, where each stage contains three blocks with interleaved residual blocks~\\citep{ResNet}, spatial, and temporal attention layers~\\citep{Transformer}.\nWe will call these three stages bottom, middle, and top from lower to higher resolution.\nFor more details, we refer readers to the original paper of SVD~\\citep{blattmann2023stable}.\nIn image diffusion models, prior work has shown that output feature maps of upsampling blocks in the middle stage of the denoising U-Net are \\textit{semantically aligned}~\\citep{tang2023emergent, namekata2024emerdiff, hedlin2024unsupervised, zhang2023tale, luo2023dhf}, i.e., regions belonging to the same object tend to have similar feature vectors. %\nSuch semantically aligned feature maps are useful in estimating the layout of generated images, enabling spatial control of objects~\\citep{shi2024dragdiffusion, mou2023dragondiffusion}.\nTherefore, we first examine whether SVD feature maps are also semantically correlated across \\textit{both spatial and temporal dimension}. %\n\\c visualizes the principal components of feature maps extracted from the upsampling block and spatial attention layers.\nWe observe that SVD feature maps exhibit weak semantic correspondence across frames at early denoising steps, leading to inaccurate object trajectory estimation.\nYet, we want to operate at early steps as they decide the structure of generated videos~\\citep{materzynska2023customizing}.\nThis dilemma prompts us to align these features before applying optimization.{Feature alignment with modified self-attention.} \nSVD leverages separate spatial and temporal self-attention to model the entire video.\nSince spatial self-attention is only applied per frame, it does not produce cross-frame aligned features.\nWhile temporal attention communicates across frames, it only attends to the same pixel position on the feature map, which may be inadequate for capturing semantic information spatially. 
To address this issue, inspired by \citep{Tune-a-video}, we modify the spatial self-attention on each frame to directly attend to the first frame. Concretely, for the $n$-th frame, the original spatial self-attention works as $\bm{F}_n = \mathrm{Softmax}(\frac{\bm{Q}_{n} \cdot \bm{K}_{n}^T}{\sqrt{D}})\cdot \bm{V}_{n}$, where $\bm{F}_n$ is the output of self-attention, $\bm{Q}, \bm{K}, \bm{V}$ are the query, key, and value tokens, respectively, and $D$ is the dimensionality of the key and query tokens~\cite{Transformer}. Instead, we replace the key $\bm{K}_{n}$ and value $\bm{V}_{n}$ of each frame with $\bm{K}_{1}$ and $\bm{V}_{1}$ from the first frame, leading to a new operation $\tilde{\bm{F}}_n = \mathrm{Softmax}(\frac{\bm{Q}_{n} \cdot \mathrm{SG}(\bm{K}_{1})^T}{\sqrt{D}}) \cdot \mathrm{SG}(\bm{V}_{1})$. Notably, we apply a stop gradient $\mathrm{SG}(\cdot)$ on $\bm{K}_{1}$ and $\bm{V}_{1}$ to stabilize the subsequent optimization process. Now, all the modified feature maps $\tilde{\bm{F}}_n$ are weighted combinations of $\bm{V}_1$, exhibiting a stronger cross-frame correspondence while still maintaining the object layout of each frame, as shown in the bottom row of Fig.~\.

The ability to learn concepts about a novel task, such as the goal and motion plans, from a few demonstrations is a crucial building block for intelligent agents -- it allows an agent to learn to perform new tasks from other agents (including humans) from little data. Humans, even from a young age, can learn various new tasks from little data and generalize what they learned to perform these tasks in new situations \cite{lake2019human}. In machine learning and robotics, this class of problems is referred to as Few-Shot Learning \cite{parnami2022learning}. Despite being a widely studied problem, it remains unclear how we can enable machine learning models to learn concepts of a novel task from only a few demonstrations and generalize the concepts to new situations, just like humans do. Common approaches learn policies either directly, which often suffer from covariate shift \cite{shimodaira2000improving}, or via rewards \cite{ziebart2008maximum,ng2000algorithms, fu2017learning}, which are largely limited to previously seen behavior \cite{ghasemipour2020divergence}. In a different vein, other work has relied on pretraining on task families and assumes that task learning amounts to learning tasks similar to those already seen in the task family \cite{duan2017one,finn2017one}.

Inspired by the success of generative modeling in few-shot visual concept learning \cite{lake2015human,gal2022image,liu2023unsup}, where concepts are latent representations, in this work we investigate whether and how few-shot task concept learning can benefit from generative modeling as well. Learning concepts from sequential demonstrations rather than images is by nature more challenging because sequential data often do not satisfy the i.i.d. assumption in machine learning \cite{oord2016wavenet}. In particular, we assume access to a large pretraining dataset of paired behaviors and task representations to learn a conditional generative model that synthesizes trajectories conditioned on task descriptions.
We hypothesize that by learning a generative model conditioned on explicit representations of behavior, we can acquire strong priors about the nature of behaviors in these domains, enabling us to more effectively learn new behavior that is not within the pretraining distribution, given a limited number of demonstrations, and further generate the learned behavior in new settings.

To this end, we propose Few-Shot Task Learning through Inverse Generative Modeling (\methodname{}). In our approach, we first pretrain a large conditional generative model which synthesizes different trajectories conditioned on different task descriptions. To learn new tasks from a limited number of demonstrations, we then formulate few-shot task learning as an {\it inverse generative modeling problem}, where we find the latent task description, which we refer to as a \textit{concept}, that maximizes the likelihood of generating the demonstrations. This approach allows us to leverage the powerful task priors learned by the generative model to acquire new concepts from the given demonstrations without finetuning the model (Figure~\). We demonstrate this approach in various domains: object rearrangement, where concepts are relations between objects; goal-oriented navigation, where concepts are target attributes; motion capture, where concepts are human actions; autonomous driving, where concepts are driving scenarios; and real-world table-top manipulation, where concepts are manipulation tasks (Figure~\).

New concepts are either (1) compositions of training concepts ({\it e.g.}, multiple desired relations between objects that define a new object rearrangement concept) or (2) new concepts that are not explicit compositions in the natural language symbolic space of training concepts ({\it e.g.}, a new human motion `jumping jacks' is not an explicit composition of training concepts `walk', `golf', etc.). Thanks to generative models' compositional properties that enable compositional concept learning \cite{liu2022compositional}, in addition to being able to learn a single concept from demonstrations directly, \methodname{} learns compositions of concepts from demonstrations that, when combined, describe the new concept.

We show that our approach generates diverse trajectories encapsulating the learned concept. We achieve this due to two properties of generative models. First, these models have shown strong interpolation abilities \cite{ramesh2021zero,saharia2022photorealistic}, which allow generating the new concept from new initial states on which it was not demonstrated. Second, these models have compositional properties that enable compositional trajectory generation \cite{ajay2023is}, which allow composing learned concepts with training concepts to synthesize novel behavior that was not demonstrated ({\it e.g.}, `jumping jacks' and `walk'), see Figure~\ref{fig:generation}.
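To make the inverse-generative-modeling step concrete, here is a minimal sketch of concept inference against a frozen conditional denoising model (illustrative only; the \texttt{denoiser} interface, the concept dimensionality, and the cosine noise schedule are assumed placeholders rather than the paper's implementation):
\begin{verbatim}
import torch

def infer_concept(denoiser, demos, concept_dim=512, steps=1000, lr=1e-2):
    """Optimize a latent concept so the frozen generative model explains the demos.

    denoiser(x_t, t, concept) -> predicted noise   (assumed interface)
    demos: (num_demos, horizon, state_dim) tensor of demonstration trajectories.
    """
    concept = torch.zeros(concept_dim, requires_grad=True)
    opt = torch.optim.Adam([concept], lr=lr)

    for _ in range(steps):
        t = torch.randint(0, 1000, (demos.shape[0],))          # random diffusion timesteps
        noise = torch.randn_like(demos)
        alpha_bar = torch.cos(t / 1000.0 * torch.pi / 2) ** 2   # assumed cosine schedule
        x_t = (alpha_bar.sqrt().view(-1, 1, 1) * demos
               + (1 - alpha_bar).sqrt().view(-1, 1, 1) * noise)

        # Standard denoising objective, differentiated w.r.t. the concept only.
        loss = ((denoiser(x_t, t, concept) - noise) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()

    return concept.detach()  # later used to generate the behavior in new settings
\end{verbatim}
The key design choice reflected here is that the generative model itself is never updated; only the concept vector is optimized, so the pretrained task priors remain intact.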
We further demonstrate that our approach addresses a unique challenge introduced in learning task concepts: we utilize plans generated by learned concepts in a closed-loop fashion.

Our main contributions are (1) formulating the problem of task learning from few demonstrations as Few-Shot Task Learning through Inverse Generative Modeling (\methodname{}), (2) adapting a method for efficient concept learning to this problem based on the new formulation, and (3) a systematic evaluation revealing the ability of our method to learn new concepts across a diverse set of domains.

\section{Experiments}
We demonstrate results in four domains where, for training, concept representations are T5~\cite{raffel2020exploring} embeddings of natural-language task descriptions, and the dummy condition uses the embedding of an empty string. During few-shot concept learning, we are provided with three to five demonstrations of a composition of training concepts or of a novel concept that is not an explicit composition of training tasks in natural-language symbolic space. We ask a model to learn the concept from these demonstrations.

\textbf{Learning concepts describing goals that are spatial relations between objects.} Object rearrangement is a common task in robotics \cite{shah2018bayesian, yan2020robotic, rowe2019desk} and embodied artificial intelligence (AI) \cite{puig2020watch,srivastava2022behavior}, serving as a foundation for a broader range of tasks such as housekeeping and manufacturing. Here, we use a 2D object rearrangement domain to evaluate the ability of our method to learn task specification concepts. Given a concept representing a relation between objects, we generate a single state describing that relation. The concept in a training example describes the relation (either `right of' or `above') between only one pair of objects (out of three objects) in the environment. Then, a model must learn compositions of these pairwise relations and new concepts such as `diagonal' and `circle' (see Figure~\ref{fig:shapes}). The results in Figures~\ref{fig:shapes-new-concept-compos-qual} and~\ demonstrate that our method learns unseen compositions of training concepts and new concepts. They further demonstrate how our method composes new concepts with learned concepts. For additional qualitative results, please refer to Appendix~\ref{appendix:add-res}.

While successful in most cases, there are also a few failure examples. The accuracy for the new `circle' concept is low ($0.44$) compared to the mean over task types for Object Rearrangement New Concept in Figure~\ ($0.82\pm0.09$). This is most likely due to this concept lying far outside the training distribution. The task `square right of circle $\land$ triangle above circle' has low accuracy for 2 concepts ($0.32$) compared to the mean for Object Rearrangement Training Composition in Table~\ref{tab:ablation} ($0.75\pm0.11$). This may arise from the combined concept-weight optimization process -- as there is no explicit regularization on the weights, they may converge to 0 or diverge. In Figure~\ref{fig:shapes-tab4-qual}, we show that concept components may or may not capture new concept relations.

\textbf{Learning concepts describing goals based on attributes of target objects.} We test our method in a goal-oriented navigation domain adapted from the AGENT dataset \cite{shu2021agent}, where an agent navigates to one of two potential targets.
Conditioned on a concept representing the attributes of the desired target object and an initial state, we generate a state-based trajectory describing an agent navigating to the target. Each object has a color and a shape out of four possible colors and four shapes. During training, we provide 16 target-distractor combinations that include all colors and shapes (but not all combinations), and a concept is conditioned on one of the target's attributes ({\it e.g.}, color). We introduce new concepts defined by both target attributes, including (1) unseen color-shape target combinations and (2) new target-distractor combinations. Figure~\ref{fig:AGENT} shows an example. In training, we see bowl and red object targets. A new concept includes a novel composition as the target -- a red bowl. The new concept's distractor objects (green bowl and red sphere) were introduced during training, but they were not paired with a red bowl as the target. As Figure~\ref{fig:AGENT-open} shows, our method successfully learns concepts where targets are new compositions of target attributes in settings with new target-distractor pairs, and it generalizes to new initial object states. We further evaluate our model and baselines in closed loop (Figure~\) by making the additional assumption that a planner is provided. The planner produces an action given a current state and a future desired state predicted by a model.

Recent works~\cite{bitnet2} have shown that 1-bit LLMs can match full-precision models given the same number of parameters and training tokens while being significantly more cost-effective in terms of latency, memory, throughput, and energy consumption. With model weights represented in 1.58 bits (i.e., \{-1, 0, 1\}), the bottleneck of inference has shifted from the limited memory bandwidth to the high computational cost. Low-bit or sparse activations in LLMs serve as a promising approach to further reduce the computational budget while maintaining performance on downstream tasks.

One common approach is to utilize activation sparsity~\cite{dejavu, turbosparse, teal}, which reduces inference FLOPs and weight I/O by pruning the activation entries with smaller magnitudes. Sparsification is particularly well-suited for handling activations that exhibit highly imbalanced, long-tailed distributions. Recent work~\cite{qsparse} has demonstrated that fully sparsely-activated LLMs can achieve results comparable to dense models while having far fewer active parameters.

In addition to sparsification, activation quantization is another approach to accelerate matrix multiplication. However, the optimization of neural networks with low-bit activations is challenging due to the emergence of outlier dimensions as training progresses and the model size grows. Although these outliers account for only a very small portion of the activations~\cite{llmint8, smoothquant}, they have much larger magnitudes, which leads to significant quantization errors and performance degradation on downstream tasks. Previous works~\cite{int4, quarot, spinquant, duquant} mostly utilize Hadamard or learnable rotation transformations to amortize the outlier features into other entries. However, they are mostly designed for LLMs of higher precision (e.g., 4-bit).
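To illustrate the rotation trick mentioned above (a toy sketch, not taken from any of the cited systems): an orthonormal Hadamard transform can be folded into the weights, leaving the layer's output unchanged while spreading an activation outlier across many dimensions.
\begin{verbatim}
import numpy as np
from scipy.linalg import hadamard

d = 256
H = hadamard(d) / np.sqrt(d)             # orthonormal Hadamard matrix, H @ H.T = I

rng = np.random.default_rng(0)
x = rng.standard_normal(d); x[3] = 50.0  # activation vector with one large outlier
W = rng.standard_normal((d, d))

y_plain = x @ W
y_rotated = (x @ H) @ (H.T @ W)          # rotation absorbed into the weights

print(np.allclose(y_plain, y_rotated))   # True: the layer output is unchanged
print(np.abs(x).max(), np.abs(x @ H).max())  # the outlier magnitude is flattened
\end{verbatim}
The sentence that follows explains why this folding step is problematic when the weights themselves are constrained to 1.58-bit values.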
For 1-bit LLMs, the extremely low bit-width of the weights makes it challenging to absorb these transformation matrices directly into the weights, while leaving them as online transformations introduces additional computational overhead and limits overall inference performance.\nIn this work, we introduce \\textbf{BitNet a4.8}, a hybrid quantization and sparsification strategy that enables 4-bit activations for 1-bit LLMs. By carefully analyzing the activation distribution of 1-bit LLMs, we selectively apply 4-bit quantization or sparsification based on the distribution patterns of these activations. Specifically, as shown in Figure~\\, BitNet a4.8 employs 4-bit activations for the inputs to attention and FFN, while utilizing sparsification with 8 bits for intermediate states. To improve the training efficiency, BitNet a4.8 is trained from 8-bit to 4-bit activations with a two-stage recipe, which requires only a few training tokens to adapt \\bitnet{} to the low-bit activations at the end of training. Extensive experiments demonstrate that BitNet a4.8 achieves competitive performance to \\bitnet{} with the same training cost while being significantly more efficient at inference time. Furthermore, BitNet a4.8 has only 55\\% activated parameters and supports 3-bit KV cache, which further enhances the efficiency of LLM deployment.\nparagraph{Down projection of FFN.} We compared 1.3B BitNet a4.8 with different quantization or activation function for the down projection of FFN. All models were trained with the first-stage scheduling for 50B tokens from the RedPajama dataset. To ensure a fair comparison, we leave the other activations at 8-bits. We adopt the absmax quantizer for INT8 quantization and MinMax quantizer for FP4 quantization. The $\\beta$ of absmean quantizer is set as $2 \\text{mean}(|X|)$. Figure~\\ shows the training loss curves of these models. Squared ReLU achieves slightly better training perplexity than Swish while enabling higher sparsity. Furthermore, applying FP4 quantization for the inputs to the down projection leads to a significant performance degradation, while using INT4 activations with STE causes divergence.", ["2411_04965v1_0", "2411_04965v1_1"]] [60087, "{Numerical Results}\\label{sec:Numerical Results}\nWe present here a thorough evaluation of the proposed Steepest Perturbed Gradient Descent (SPGD) algorithm, comparing its performance against several established optimization methods. The comparison includes traditional gradient descent (GD), Perturbed Gradient Descent (PGD), MATLAB \\(fmincon\\) function which is a versatile solver for constrained optimization problems \\cite{Fmincon}, and Simulated Annealing (SA) \\cite{SA}.\nOur initial analysis is conducted through the lens of four challenging 2D benchmark functions, selected for their known difficulties and relevance in assessing optimization algorithms' efficacy. These test functions are recognized benchmarks within the optimization community, providing a diverse set of landscapes to evaluate each algorithm's ability to navigate complex, non-convex, and potentially deceptive optimization spaces \\cite{simulationlib}. For each test function, we apply \\(fmincon\\), Simulated Annealing, traditional gradient descent, PGD, and SPGD, meticulously recording and analyzing the results. In both the SPGD and PGD algorithms, the amplitude of the perturbation is set to be the same, ensuring comparability, \\(Iter_{P}\\) is set to \\(5\\), and \\(N_{P}\\) is set to \\(10\\). 
Furthermore, the gradient function is explicitly passed to the \\(fmincon\\) function in each scenario to guide the optimization process.\nKey performance indicators include the accuracy of the solution, measured by the proximity to the known global optimum \\cite{hazewinkel2001theory}, and the computational efficiency, quantified by the number of function evaluations and the CPU execution time. The SPGD algorithm's source code, along with comparative analyses against the discussed methods on additional challenging 2D test functions, is publicly accessible on GitHub \\footnote{Source code and comparisons available at: \\url{https://github.com/Amir-M-Vahedi/SPGD-Benchmark-Functions}}.\n\\subsubsection*{Test function 1} The MATLAB Peaks function \\cite{Peaksfunction-MATLAB} presents a formidable challenge for optimization algorithms due to its intricate landscape, which features one global minimum, multiple local minima, a saddle point, and extensive flat regions. This complexity makes the Peaks function a critical benchmark for assessing the capabilities of optimization techniques, particularly those based on gradient descent. Traditional gradient descent methods often struggle with such landscapes, as they can easily become trapped in local minima or stall in flat areas, failing to make significant progress towards the global optimum \\cite{machines10010042}. The mathematical expression defining the Peaks test function is given as follows:\n\\[ f(x,y) = 3(1-x)^2 e^{-x^2-(y+1)^2} - 10\\left(\\tfrac{x}{5} - x^3 - y^5\\right) e^{-x^2-y^2} - \\tfrac{1}{3} e^{-(x+1)^2-y^2}. \\]\nIt has a global minimum located at \\(x= 0.2283\\), \\(y= -1.6256\\) with an optimal function value of \\(f(x^{*}) = -6.5511\\). The initial condition is chosen randomly to be \\((-2.81,-1.47)\\), and \\(Amp\\) is set to 2.5. Figures~\\ and~\\ illustrate the 3D view and top view of the optimization trajectories across the Peaks function surface. The convergence history comparison is shown in Fig~\\. The total number of function evaluations, the converged optimal value, and the CPU execution time for the different methods are given in Table \\ref{tab:Test1 Performance}. Based on the results depicted in Figures~\\, ~\\, and ~\\, and the performance metrics in Table \\ref{tab:Test1 Performance}, it is evident that the GD and PGD algorithms become trapped in local minima. In contrast, the \\(fmincon\\), SA, and SPGD algorithms successfully converge to the global optimum. Among these three, SPGD demonstrates the lowest computational cost. Notably, despite the \\(fmincon\\) method having fewer function evaluations, its CPU time is approximately 20 times greater than that of the SPGD algorithm. \n\\subsection{Analysis of Scenario 1: Four Identical Cubes}\nIn Scenario 1, the initial configuration of the four identical cubes was designed to test each algorithm's ability to navigate a relatively simple scenario where the global optimum involves aligning all cubes in a compact configuration. The final configurations found by the GD and SPGD algorithms are illustrated in Figure~\\ and Figure~\\, showing the Gradient Descent and Steepest Perturbed Gradient Descent results side by side.\nThe outcomes depicted in the figures reveal that, due to the collision constraint, GD struggled to converge to the global solution and settled in a suboptimal local minimum. In contrast, SPGD successfully converged to the global optimal configuration, effectively avoiding local minima and fulfilling the collision constraints more efficiently.
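For reference, the perturbation mechanism being compared here can be sketched as follows. This is our reading of the parameters \\(Amp\\), \\(Iter_{P}\\) and \\(N_{P}\\) described above, not the authors' implementation (which is available in the linked repository): every \\(Iter_{P}\\) gradient steps, \\(N_{P}\\) random candidates of amplitude \\(Amp\\) are evaluated, and the steepest (lowest-objective) one is adopted only if it improves on the current iterate.

```python
import numpy as np

def spgd(f, grad, x0, lr=0.01, amp=2.5, iter_p=5, n_p=10, max_iter=2000):
    """Steepest Perturbed Gradient Descent (sketch): plain gradient steps,
    interleaved with periodic random perturbations that are only accepted
    when they lower the objective, helping escape local minima and plateaus."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x = x - lr * grad(x)
        if (k + 1) % iter_p == 0:
            candidates = x + amp * np.random.uniform(-1, 1, size=(n_p, x.size))
            best = min(candidates, key=f)
            if f(best) < f(x):
                x = best
    return x

def peaks(v):
    """The MATLAB Peaks benchmark used in Test function 1."""
    x, y = v
    return (3 * (1 - x) ** 2 * np.exp(-x**2 - (y + 1) ** 2)
            - 10 * (x / 5 - x**3 - y**5) * np.exp(-x**2 - y**2)
            - np.exp(-(x + 1) ** 2 - y**2) / 3)

def peaks_grad(v, eps=1e-6):
    """Central-difference gradient, used here only to keep the sketch short."""
    g = np.zeros_like(v)
    for i in range(v.size):
        e = np.zeros_like(v); e[i] = eps
        g[i] = (peaks(v + e) - peaks(v - e)) / (2 * eps)
    return g

# typically lands near the global minimum (0.2283, -1.6256) from (-2.81, -1.47)
print(spgd(peaks, peaks_grad, np.array([-2.81, -1.47])))
```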
To further illustrate the performance dynamics over the course of the optimization.", ["2411_04946v1_0", "2411_04946v1_1", "2411_04946v1_2", "2411_04946v1_0", "2411_04946v1_1", "2411_04946v1_2", "2411_04946v1_3", "2411_04946v1_4"]] [60088, "{Pseudo-labeling with Self-supervised ViT} \nIn order to improve OW-DETR, by leveraging pre-trained ViT features, a first step is to replace Resnet activation maps used for pseudo-labeling by DINOv2 \\cite{dinov2} activation maps. We call this variant OW-DETR$^{+}$. Note that DINOv2 is only used for the activation map extraction for the unknown pseudo-labeling while ResNet continues to be the backbone of the detection method. \nTo further improve {OW-DETR$^{+}$}, we propose the novel pseudo-labeling pipeline illustrated in fig. \\. We first propose to filter out the background. Given an input image passed through the DINOv2 backbone, we obtain a feature map $F \\in \\mathbb{R}^{H \\times W \\times d}$, where each pixel is represented by a $d$-dimensional feature vector. Subsequently, we apply DBSCAN \\cite{dbscan} with predefined $eps$ and $min\\_samples$ on these features since it can automatically assign the number of clusters. This algorithm provides a great ability to separate between the foreground and the background regions. Hence, we filter out the biggest cluster, likely to be the background, especially for OD benchmarks where usually single objects are not omnipresent in the image. Second, we aim to find different objects in the image by localising the regions having similar and homogeneous semantics. However, DBSCAN offers very coarse semantic clusters. Thus, we apply agglomerative clustering on the same features, refine the clusters boundaries and minimize noisy representations using morphological operations such as erosion and dilation. The feature space is then segmented into $AC$ distinct clusters, according to a distance threshold $d_{th}$. This clustering helps in determining coherent object-like structures. Note that each cluster represents potentially a class, as different objects from the same class have usually similar features.\nWe aim to keep only regions having high objectness by using the DINOv2 activation maps. In fact, $h_a$ attention maps $A_i =\\{a_{h,w}, 1 \\leq h \\leq H, \u00a01 \\leq w \\leq W \\}$ are derived from the ViT backbone, where $h_a$ represents the number of attention heads. Each pixel value $a_{h,w}$ for each attention map is normalized to prevent outlier activations: $a_{h,w} = \\frac{a_{h,w}}{\\sum_{l=1}^{H} \\sum_{k=1}^{W} a_{l,k}}$. These maps are averaged to obtain: $A_{avg} = \\frac{1}{h_a} \\sum_{i=1}^{h_a} A_i$. Using $A_{avg}$, we compute the average attention activation $A_{ac}$ for each cluster $ac$. Intuitively, higher activations correspond to regions containing objects. Given this insight, we only keep clusters having an average activation greater than the mean average attention of the whole image ($A_{ac}>mean(A_{avg})$). We then search for spatially isolated instances in these clusters and surround the largest $N$ regions with bounding boxes. Finally, we apply Non-Maximum Suppression (NMS) to the generated bounding boxes and filter out boxes that overlap with ground truth boxes ($IoU>T$). Note that in our case, we can have zero unknown object if top activated regions overlap with ground truth objects. We can also have a large number of unknown objects, if different clusters are largely activated. 
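The cluster-then-filter procedure just described can be summarized with off-the-shelf components. The following is a simplified sketch using scikit-learn and SciPy stand-ins: `eps`, `min_samples`, `d_th` and the number of kept regions mirror the parameters named above, while the exact morphological operations, region selection, NMS and ground-truth filtering in OW-DETR$^{++}$ may differ.

```python
import numpy as np
from sklearn.cluster import DBSCAN, AgglomerativeClustering
from scipy import ndimage

def unknown_pseudo_boxes(feats, attn, eps, min_samples, d_th, n_regions):
    """Sketch of the unknown-object pseudo-labeling pipeline described above.

    feats: (H, W, d) ViT patch features;  attn: (h_a, H, W) attention maps.
    Returns candidate boxes (y0, x0, y1, x1); NMS and filtering of boxes that
    overlap ground truth would follow.
    """
    H, W, d = feats.shape
    flat = feats.reshape(-1, d)

    # 1) background removal: drop the largest DBSCAN cluster
    db = DBSCAN(eps=eps, min_samples=min_samples).fit(flat).labels_
    background = np.bincount(db[db >= 0]).argmax()        # assumes >= 1 cluster is found
    fg = (db != background).reshape(H, W)

    # 2) finer, semantically homogeneous grouping of the remaining pixels
    ac = np.full(H * W, -1)
    ac[fg.ravel()] = AgglomerativeClustering(
        n_clusters=None, distance_threshold=d_th).fit_predict(flat[fg.ravel()])
    ac = ac.reshape(H, W)

    # 3) objectness filter: normalized, head-averaged attention above the image mean
    a = (attn / attn.sum(axis=(1, 2), keepdims=True)).mean(axis=0)
    boxes = []
    for c in np.unique(ac[ac >= 0]):
        mask = ac == c
        if a[mask].mean() <= a.mean():
            continue
        mask = ndimage.binary_opening(mask)                # erosion/dilation clean-up
        labeled, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, labeled, range(1, n + 1))
        for comp in np.argsort(sizes)[::-1][:n_regions] + 1:   # largest isolated regions
            ys, xs = np.where(labeled == comp)
            boxes.append((ys.min(), xs.min(), ys.max(), xs.max()))
    return boxes
```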
Following the aforementioned method, pseudo-labels for unknown objects are pre-computed in an offline manner for each image of the training dataset. It is important to note that this is a lengthy process. The D-DETR architecture is then trained end-to-end to detect objects from $|\\mathcal{K}|+1$ classes by predicting $|\\mathcal{K}|+1$ softmax probabilities ($|\\mathcal{K}|$ known classes and one unknown class).\n{OpenImagesRoad benchmark} \nWe compare in tab. \\ref{tab:openimages} the performance of OpenDet \\cite{OpenDet}, RandBox \\cite{randbox} and OW-DETR \\cite{OW-DETR} alongside its different improvements. ORE is unreported as it shows poor performance in unknown detection in tab. \\ref{tab:voc_coco}. For fairness, ORE (MAVL \\cite{MAVL}) is also unreported as it uses external supervision to extract pseudo-labels. \nInteresting performance trends emerge depending on the nature of each method. Indeed, pseudo-labeling methods such as RandBox, OW-DETR, OW-DETR$^{+}$ and OW-DETR$^{++}$ all have lower $AP_{u}$ than the contrastive method OpenDet, which indicates that contrastive methods seem to learn a broader unknown representation that is beneficial for classification. On the other hand, OW-DETR$^{++}$ has higher $AP_{all}$ and $AP_{sc}$ than OpenDet, demonstrating that this pseudo-labeling based method seems more apt for object localization. The higher $AP_{sc}$ also indicates a good ability of pseudo-labeling methods to handle high-granularity classification but, at the same time, an inability to capture finer-grained differences between known and unknown objects, as shown by the lower $AP_{u}$. The surprising drop between $AP_{u}$ and $AP_{all}$ for OpenDet can be explained by frequent double detections of unknown objects, as both unknown and known. Fig. \\ illustrates the behaviours of OpenDet and OW-DETR$^{++}$; further samples of all methods are included in the appendix.\nOur proposed pseudo-labeling improvements on OW-DETR, namely OW-DETR$^{+}$ and OW-DETR$^{++}$, bring impressive performance gains. Note that no knowledge of unknown objects is transferred from DINOv2, since it is only used to pseudo-label the training images, and no unknown objects are present in these images in the \\textit{unseen} scenario. Indeed, OW-DETR$^{++}$ achieves the best performance for known OD ($mAP_{k}$), object localization ($AP_{all}$), and high-granularity unknown OD classification ($AP_{sc}$). However, despite the drastic performance increase, OpenDet remains better at correctly detecting unknown objects and classifying them as unknown ($AP_{u}$).\nTo conclude, the proposed OpenImagesRoad benchmark sheds new light on the detection performance of these methods. It appears that there is not a single best method among those we tested. Rather, the choice of a method can be driven by the envisaged use-case. If unknown objects should be both correctly detected and classified as unknown, the contrastive-learning-based OpenDet detector appears to be an appropriate choice, especially for the $S_{unseen}$ scenario. Conversely, if the correct localisation of unknown objects is paramount, and coarse classification is sufficient, then OW-DETR$^{++}$ appears better suited. It is also worth noting that pseudo-labeling techniques are expected to perform better in the $S_{unlabeled}$ scenario, where unknown objects are potentially seen and pseudo-labeled during training. Indeed, we remark that in tab.
\\ref{tab:voc_coco} (VOC-COCO/$S_{unlabeled}$), OW-DETR$^{++}$ is relatively more performant compared to OpenDet than in tab. \\ref{tab:openimages} (OpenImagesRoad/$S_{unseen}$).", ["2411_05564v1_0", "2411_05564v1_1"]] [60089, "{Introduction}\nHigh-throughput experiments are becoming extremely common in applied sciences.\nNow more than ever, large high-dimensional datasets are generated in almost every laboratory, calling for an automated and reliable way to extract new knowledge from them.\nLet us fix a dataset $X$, usually embedded in a high-dimensional space.\nStandard dimension reduction techniques, including PCA~\\cite{pca}, t-SNE~\\cite{tsne}, \\textsc{UMAP}~\\cite{umap} and \\textsc{PHATE}~\\cite{phate}, aim to find a low-dimensional embedding of $X$ so that points that are close in $X$ are also close in the embedding.\nHowever, preservation of the global organization of $X$ in general, and of information about distances between far-away points in particular, is a challenge for these methods.\nClustering techniques~\\cite{statLearning, SAXENA2017664}, on the other hand, based on a fixed similarity measure, provide a partition of the input dataset $X$.\nHowever, clustering itself does not provide information about either intra- or inter-cluster organization of points, and is therefore not used to assess the global structure of the data.\nThe aim of this work is to bridge these two approaches by enriching the output of a clustering algorithm with additional information on the data's global organization.\nThe first contribution of this paper is the construction of a \\emph{ClusterGraph}: a graph-based structure on top of a partition $\\mathcal{C}(X)$ of the data obtained from a clustering algorithm $\\mathcal{C}$ applied to $X$.\nIn the ClusterGraph $G = (V,E)$, each vertex corresponds to a single cluster from $\\mathcal{C}(X)$.\nTwo vertices $u,v \\in V$ are connected by an edge whose length corresponds to the distance between their respective clusters in $\\mathcal{C}(X)$.\nFor the purpose of this construction, a number of inter-cluster distances defined in the ambient space are considered.\nClusterGraph has a number of advantages compared to alternative dimension reduction methods.\nOne of them is based on the fact that the distances, computed in the ambient space, are represented as labels on edges and are not subject to the distortions introduced by standard dimension reduction techniques, which force the projected data points to be embedded in a Euclidean space.\nThis allows visualizing the \\emph{global} distances in the dataset.\nThis is important as many datasets cannot be embedded into low-dimensional Euclidean spaces without perturbing the distances between points.\nAs an example of such a situation, consider a collection of points in four clusters: $0$, $1$, $2$ and $3$. Points in each cluster are infinitesimally close. The distance between cluster $0$ and clusters $1$, $2$ and $3$ is $1$, while the distance between clusters $1$, $2$ and $3$ is $2$.\nIt is well known~\\cite{morgan_embedding_1974, bourgain_lipschitz_1985}\nthat such a graph cannot be isometrically embedded into any Euclidean space $\\mathbb{R}^n$ for any $n$.\nAs a consequence, all dimension reduction techniques will distort the distances between clusters, as can be observed in Fig~\\.
ClusterGraph, on the contrary, provides the correct graph even in this case.\n{Metric distortion}\nGiven a dataset $X$, different choices of the clustering algorithm $\\mathcal{C}$, as well as of the metrics $d_X$ and $d_\\mathcal{C}$, can lead to significantly different ClusterGraphs. The aim of this section is to introduce a score to assess the quality of a given ClusterGraph $G$ by comparing it to the underlying geometric structure of the dataset $X$.\nFor this purpose, let us assume that the considered point cloud $X$ is sampled from a compact and connected manifold $\\mathcal{M}$ equipped with an \\emph{intrinsic distance} $d_{\\mathcal{M}}$. Informally, the intrinsic distance between two points $x,y \\in \\mathcal{M}$ is defined to be the infimum of the length of a curve $\\gamma \\subset \\mathcal{M}$ joining $x$ and $y$; this is also known as the \\emph{geodesic} distance.\nIn most applications, the underlying manifold is not known.\nConsequently, the intrinsic distance needs to be estimated from the point cloud. This is a well-studied problem in computational geometry and computer graphics, and multiple methods have been proposed~\\cite{isomap, klein_point_2004, ruggeri_approximating_2006,yu_geodesics_2014}.\nBelow, we follow the approach of~\\cite{isomap, Bernstein2000GraphAT}, using the shortest path in the $k$-nearest neighbor graph as an estimator.\nNote that any other estimator of the intrinsic distance can also be used in the proposed construction.\nLet $G_{knn}(X)$ be the $k$-nearest neighbor graph on $X$ constructed in the following way: each point of $X$ corresponds to a vertex of $G_{knn}(X)$; it is connected to its $k$ nearest neighbors (in the chosen distance $d_X$, typically Euclidean), with $k$ being a parameter of the method. Weights corresponding to the distance between endpoints are assigned to the edges of $G_{knn}(X)$. We define a distance $d_X^k$ on $G_{knn}(X)$, estimating the intrinsic distance on $X$, as the length of the shortest weighted path between the corresponding vertices:\n\\[ d_X^k(x,y) = \\min_{\\pi \\in \\Pi(x,y)} \\sum_{(u,v) \\in \\pi} d_X(u,v), \\]\nwhere $\\Pi(x,y)$ denotes the set of paths in $G_{knn}(X)$ connecting $x$ and $y$.\nIt may happen that $G_{knn}(X)$ is not connected. There are two possible reasons for this, depicted in Fig.~\\. In the first case, points are indeed sampled from a compact and connected manifold, but the parameter $k$ is too low. This can easily be solved by increasing $k$. In the second case, the underlying manifold is not connected. This will result in the $k$-nn graph being disconnected even for very high values of $k$, especially if many points are sampled. In this case, we treat each connected component separately, splitting the input dataset $X$ (and the output of the clustering algorithm) into disjoint sets, each corresponding to a different connected component, and analyzing each of them separately\\footnote{By performing the construction in Sec.~\\ref{sec:cg} we obtain a ClusterGraph for each connected component, each of them being a fully connected graph. In graph theory such a disjoint union of complete graphs is sometimes called a \\say{cluster graph}. This unexpected but pleasing agreement in nomenclature motivates our choice of referring to our construction in camel case, to avoid confusion.}. For the rest of the section, we therefore assume, without loss of generality, that the $k$-nn graph is connected.
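The estimator just described can be assembled from standard components. The sketch below uses scikit-learn and SciPy, and reports the connected components so that, as discussed above, a too-small $k$ or a genuinely disconnected underlying manifold can be told apart and handled; the exact choices in the actual pipeline may differ.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import connected_components, shortest_path

def knn_geodesic_distances(X, k=10):
    """Estimate d_X^k: intrinsic (geodesic) distances approximated by shortest
    paths in the weighted, symmetrized k-nearest-neighbor graph."""
    G = kneighbors_graph(X, n_neighbors=k, mode="distance")
    G = G.maximum(G.T)                      # make the graph undirected
    n_comp, labels = connected_components(G, directed=False)
    if n_comp > 1:                          # increase k, or analyze components separately
        print(f"k-nn graph has {n_comp} connected components")
    return shortest_path(G, directed=False), labels
```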
We discuss how to investigate the relations between different connected components in Section~\\ref{sec:connectivity}.\\\\\nWhenever an estimator is used, it is natural to ask how good such an estimator is. The choice of a $k$-nn graph as an estimator of the geodesic distance is motivated by the following theorem by Bernstein, Vin de Silva, Langford and Tenenbaum.\n\\begin{theorem}[Theorem A in~\\cite{Bernstein2000GraphAT}]\nLet $\\mathcal{M}$ be a compact submanifold of $\\mathbb{R}^n$, $X$ a finite set of data points in $\\mathcal{M}$ and $G$ a graph on $X$ (for example, a $k$-nn graph). Then the inequalities\n\\[ (1-\\lambda_1)\\, d_\\mathcal{M}(x, y) \\leq d_G(x,y) \\leq (1+\\lambda_2)\\, d_\\mathcal{M}(x, y) \\]\nare valid for all $x,y$ in $X$, where $\\lambda_1, \\lambda_2 < 1$ are two positive real numbers that depend on $G$, $\\mathcal{M}$ and some technical assumptions on the density of $X$.\n\\end{theorem}\nFor each point $x \\in X$, we denote by $C_x$ the cluster in $\\mathcal{C}(X)$ that contains $x$. We can then use the distance between clusters described in \\ref{sec:distances_clusters} to define a distance $d_{CG}$ between points in the ClusterGraph $G$ as follows: $d_{CG}(x,y)$ is the length of the shortest path in $G$ between the vertices $C_x$ and $C_y$, where edge lengths are given by $d_\\mathcal{C}$.\nRecall that $G$ is fully connected; one could wonder why the length of the shortest path between two vertices is used instead of the weight of the edge connecting them, given by $d_\\mathcal{C}$. First, the triangle inequality might not hold for $d_\\mathcal{C}$. Secondly, we want the definition of $d_{CG}$ to also hold in the case of a pruned ClusterGraph, which we will discuss in Section~\\ref{sec:pruning}.\nThis notion of $d_{CG}$ is well defined only when $\\mathcal{C}(X)$ is a partition. In the more general case of a division, when a point can belong to more than one cluster, we take $d_{CG}(x,y)$ to be the length of the shortest path between any cluster containing $x$ and any cluster containing $y$.", ["2411_05443v1_0", "2411_05443v1_1"]] [60090, "The main issues that may easily lead to smart contract vulnerabilities are twofold. First, the programming languages and tools are still new and crude, which leaves plenty of room for misunderstandings in the built-in functions and tools [Luu et al., 2016]. Second, due to the immutable nature of smart contracts, developers are required to anticipate all possible states (e.g., stack status) and environments that the code may encounter in the future, which is obviously difficult. Existing methods for smart contract vulnerability detection can be roughly cast into two categories. The first line of work [Luu et al., 2016; Tsankov et al., 2018; Jiang et al., 2018] utilized classical static analysis and dynamic execution techniques to identify vulnerabilities. Unfortunately, they fundamentally rely on several fixed expert rules, while the manually defined patterns bear the inherent risk of being error-prone, and some complex patterns are non-trivial to cover. Meanwhile, crafty attackers may easily bypass the fixed patterns using small tricks. Another line of work [Tann et al., 2018; Zhuang et al., 2020] explored using deep learning models to deal with complex contract data, achieving much improved accuracy. Due to their black-box nature, however, they fail to encode useful expert knowledge and mostly have poor interpretability.
This motivates us to consider whether we could combine neural networks with classical expert patterns, where neural networks contribute their ability to handle the complex code semantic graph while expert patterns contribute precise and valuable local information. More importantly, we seek an explainable solution which could tell the weights of different features. In this paper, we propose a new system beyond pure neural networks that can automatically detect vulnerabilities and incorporate expert patterns into networks in an explainable fashion. In particular, (1) we develop automatic tools to extract vulnerability-specific expert patterns. (2) Then, we exploit a graph structure to frame the rich control-flow and data-flow semantics of the function code. Upon the graph, a graph neural network is employed to extract the deep graph feature. (3) Finally, we propose an attentive multi-encoder network to interpretably fuse the global graph feature and local expert patterns. Extensive experiments are conducted on all the 40k contracts in two benchmark datasets, demonstrating significant improvements over state-of-the-art: accuracy from 84% to 90%, 83% to 87%, 75% to 80% on three types of vulnerabilities respectively. More importantly, our model is able to explain its label prediction, give warnings of high weighted local patterns, and provide a grand picture of the significance of different features. : The attentive multi-encoder network, consisting of a self-attention mechanism and a cross-attention mechanism. It combines local pattern features and the global graph feature for vulnerability detection, and outputs interpretable weights for all features. Contributions. The key contributions of this work are: 1) We investigate combining vulnerability-specific expert patterns with neural networks in an explainable way. To the best of our knowledge, we are the first to prob the combination interpretably. 2) In the method, we present a simple but effective multi-encoder network for feature fusion. 3) Our method sets the new state-of-the-art and provides novel insights. To facilitate future research, our implementations are released at https://github.com/Messi-Q/AMEVulDetector. We would like to point out that different from [Liu et al., 2021], this work focuses mainly on the explainability of the expert pattern and deep graph feature combination, and offers a grand picture on the importance of different features. The overview of the proposed system is illustrated in which consists of three components: 1) a local expert pattern extraction tool, which extracts expert patterns of a specific vulnerability from the function code; 2) a graph construction and normalization module, which transforms the function code into a code semantic graph; and 3) an attentive multi-encoder network that combines local expert patterns and the global graph feature for vulnerability detection and outputs explainable weights. In what follows, we introduce the three components one by one. 3.1 Local Expert Pattern Extraction\nFollowing [Liu et al., 2021], we design corresponding expert patterns for three types of vulnerabilities respectively. Then, we implement a fully automatic tool to extract expert patterns from the function code. Specifically, the patterns for different vulnerabilities are defined as: Reentrancy. 
Technically, the reentrancy vulnerability occurs when a call.value invocation (i.e., a built-in money transfer function) can call back to itself through a chain of calls.\nThat is, call.value is successfully re-entered to perform unexpected repeat money transfers. For the reentrancy vulnerability, we design three local patterns. (1) enoughBalance concerns whether there is a check on the sufficiency of the user balance before transferring to a user. (2) callValueInvocation models whether there exists an invocation to call.value in the function. (3) balanceDeduction checks whether the user balance is deducted after money transfer, which considers the fact that the money stealing can be avoided if the user balance is deducted each time before money transfer. Block timestamp dependence. Generally, the timestamp dependency vulnerability exists when a smart contract is conventionally considered as using block.timestamp as part of the conditions to perform critical operations [Jiang et al., 2018]. We design three local patterns for the timestamp dependence vulnerability. (1) timestampInvocation models whether there exists an invocation to opcode block.timestamp in the function. (2) timestampAssign checks whether the value of block.timestamp is assigned to other variables or passed to a function as a parameter, namely whether block.timestamp is actually used. (3) timestampContaminate verifies if block.timestamp may contaminate the triggering condition of a critical operation (e.g., money transfer). 3.2 Graph Construction and Normalization\nOne may directly use combinations of aforementioned patterns to predict whether the function has a certain vulnerability. However, these fixed patterns are shown to have difficulties in handling relatively complex attacks and are trivial to be bypassed by adversaries [Qian et al., 2020]. Therefore, we further propose to model the control flow and data flow of the entire source code into a semantic graph, and adopt graph neural networks to handle it. Thereafter, the extracted global graph feature and the local expert patterns could supplement each other towards a more precise and explainable label prediction. Graph construction. Different program elements in a function are not of equal importance in detecting vulnerabilities. Therefore, we extract two categories of nodes, i.e., core nodes and normal nodes. (1) Core nodes symbolize key invocations and variables in the function code, which are critical for detecting a specific vulnerability. For instance, for reentrancy vulnerability, (i) an invocation to call.value, (ii) an invocation to a money transfer function that contains a call.value invocation, and (iii) the variable that corresponds to user balance, are treated as core nodes. We summarize\nthe core nodes for detecting the three vulnerabilities in Table 1. (2) Invocations and variables that are not extracted as core nodes are modeled as normal nodes, which play an auxiliary role in detecting vulnerabilities. We also construct an extra fallback node to stimulate the fallback function of a virtual attack contract. The fallback node can interact with the function under test and is considered as a normal node. It is worth mentioning that distinct from prior works such as\n[Zhuang et al., 2020], which merely model key invocations as core nodes, we propose to further extract key variables as core nodes, given that they are undoubtedly important in detecting\nvulnerabilities. 
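As a toy illustration of the node extraction just described (a regex-based stand-in only, since the actual tool relies on proper program analysis of the contract; interprocedural cases such as money-transfer functions that internally wrap a call.value invocation are not handled here):

```python
import re

def extract_reentrancy_nodes(func_src: str):
    """Split a Solidity function's invocations and variables into core and
    normal nodes for the reentrancy semantic graph, plus one fallback node."""
    invocations = re.findall(r"([A-Za-z_]\w*(?:\.\w+)*)\s*\(", func_src)
    variables = set(re.findall(r"\b([A-Za-z_]\w*)\s*\[", func_src))
    core, normal = [], []
    for name in invocations:
        # core: direct call.value invocations (money transfer)
        (core if "call.value" in name else normal).append(("invocation", name))
    for name in variables:
        # core: variables that look like user balances
        (core if "balance" in name.lower() else normal).append(("variable", name))
    normal.append(("fallback", "virtual_attacker_fallback"))  # simulated attack contract
    return core, normal
```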
To characterize rich connections between different nodes, we construct three categories of semantic edges, namely control flow, data flow, and fallback edges. Graph normalization. It is worth mentioning that different functions corresponding to distinct code semantic graphs, bringing difficulties in training a graph neural network. Moreover, current graph neural networks are inherently flat when propagating information, ignoring that different nodes are not of equal importance. Therefore, we propose to normalize the graph following that of [Zhuang et al., 2020] to remove all normal nodes and merge their features to the nearest core nodes. A simplified example for graph construction and normalization is given in Fig. 1(b).", ["2106_09282v1_1", "2106_09282v1_0"]] [60091, "As motivated in Sec. I, iAudit employs a novel two-stage fine-tuning approach and combines it with LLM-based agents for intuitive smart contract auditing with justifications. As shown in , iAudit has the following four roles:\n\u2022 Detector is the key component for achieving intuitive smart contract auditing. By fine-tuning an LLM with vulnerable and non-vulnerable code, Detector can discern whether a piece of code is vulnerable, much like how a human hacker perceives a potential vulnerability.\n\u2022 Reasoner takes the initial vulnerability perception from Detector to further analyze the potential causes of the vulnerability based on Detector\u2019s decision. By connecting Detector\u2019s output with Reasoner\u2019s reasoning during both training and inference, iAudit achieves two-stage fine-tuning.\n\u2022 To identify the optimal cause of a vulnerability during the inference phase, we further introduce the concept of LLM-based agents into the fine-tuning paradigm in iAu dit. Specifically, Ranker evaluates the reasons for each potential vulnerability, selecting a top explanation, while Critic further assesses Ranker\u2019s output to debate and determine the most appropriate cause of the vulnerability. Challenges. While iAudit\u2019s four roles in are intuitive,\ntraining and coordinating them well for effective smart con tract auditing with reasonable justifications is difficult. More specifically, we encountered the following four challenges during the design and implementation of iAudit: C1: How to collect and derive high-quality training data?\nFor a fine-tuned model like iAudit, obtaining high-quality training data is always crucial. We propose leveraging reputable auditing reports to collect posi tive samples and employing our own data enhance ment method to derive negative samples. Since this part is independent of iAudit\u2019s design, we defer its presentation to the end of this section in Sec. III-D.\nC2: How to make effective vulnerability judgements? While fine-tuning a model with vulnerable and non-vulnerable code is straightforward, tuning it to be effective with limited data presents a challenge. We make an effort towards addressing this problem in Sec. III-A by opting to use multiple prompts for fine-tuning rather than a single prompt. The advantages of this approach are twofold: (i) it enriches the training dataset by increasing the volume of data, and (ii) it diminishes the bias associated with a single prompt, thereby enhancing the reliability of the results [48]. Optimal vulnerability perception\ncould thus be achieved through majority voting.\nC3: How to effectively connect Detector\u2019s vulnerabil ity sensing with Reasoner\u2019s vulnerability reason ing? 
The fine-tuning of iAudit is unique because it\nemploys a two-stage fine-tuning approach with the Detector and Reasoner models. Therefore, how to effectively connect these two models becomes a new issue not encountered in traditional fine-tuning. We present this aspect of iAudit\u2019s design in Sec. III-B.\nC4: How to obtain the optimal vulnerability cause from Reasoner\u2019s output? Since Reasoner also employs multi-prompt fine-tuning, it is necessary to identify the optimal cause of vulnerability among the multiple causes output by Reasoner. We introduce two LLM-based agents, namely the Ranker and Critic compo nents, in Sec. III-C, to iteratively select and debate the most appropriate cause of vulnerability. An Example of Workflow. To wrap up, also illustrates an example of iAudit\u2019s workflow. Initially, Detector perceives code vulnerabilities using five different inference paths (prompts). The perceived results are then subjected to majority voting to determine a consensus label. Based on the voting result, Reasoner interprets this outcome according to different inference paths, resulting in ten answers (each considering the context of the code location or not). Next, Ranker selects Reason 1 with a confidence score 9/10 and explains this choice. Critic challenges this choice and advises Ranker to re-evaluate. Taking Critic\u2019s feedback into account, Ranker re-ranks the ten reasons and selects Reason 3 with a confidence score of 10/10. Critic reviews Ranker\u2019s choice again and agrees with this decision. The loop is completed, and the final reason is returned to the user. As shown in , iAudit achieves the highest scores of F1, Recall, and Accuracy among all methods, 0.9121, 0.8934 and 0.9111. CodeLlama-13b-class is second only to our method regarding vulnerability detection rate, and the performance is relatively close. GraphCodeBERT and UnixCoder perform worse than CodeLlama-13b-class. Although CodeT5 achieves the highest precision at 0.9431, its other metrics are lower than GraphCodeBERT and UnixCoder. CodeBERT has the worst performance. Additionally, the accuracy scores of these models are relatively high (all are more than 0.91), indicating that many of the predicted risky vulnerabilities are indeed risky. The results, as shown in , indicate that our step-wise approach outperforms the integration models across four key performance metrics. While the three methods are similar in precision, the differences in other metrics are notable. Analyzing these results, we found that the integration methods have higher accuracy for negative samples but a lower detection rate for positive samples (i.e., lower recall). This may be attributed to the generative loss optimization, where the output sequence is longer, making the label-related loss occupy a smaller proportion of the total loss, thus preventing the model from adequately focusing on the label. To test this hypothesis, we added data that includes only label generation to the dataset during the integration training process, guiding the model to focus more on the label. In the evaluation phase, we still required the model to output both labels and explanations simultaneously. Through this mixed training approach, we observed a significant improvement in the model\u2019s vulnerability detection performance, with an F1 score of 0.8433, a recall rate of 0.8164, a precision of 0.8723, and an accuracy of 0.8434. 
Answer for RQ3: iAudit achieved better detection performance than the integration model that outputs labels and reasons simultaneously. We confirmed that the model struggles to focus on the labels when required to output both types of information, as evidenced by our inclusion of label-only data in the verification process. F. RQ4 - Effectiveness of Majority Voting Our research explored a method using multiple prompts and a voting mechanism for Detector to determine the final label. This method aims to enhance the model\u2019s precision and credibility. During the evaluation process, we continued to use metrics such as the F1 score, recall, precision, and accuracy. We calculated these metrics for each prompt individually for comparative analysis, as shown in . It should be noted that the first row Single-prompt indicates that we used only one prompt format to train Detector. Prompt-1, Prompt-2, Prompt-3, Prompt-4, and Prompt-5 represent the results for each\nprompt after multiple-prompt training. The last row shows the results after majority voting, indicating that majority voting can improve the overall performance of iAudit, with both the F1 score and accuracy being the highest. At the same time, except for Single-prompt, we noticed minimal performance differences among multiple prompts. Single-prompt performed much worse than the others. Training with multiple prompts can improve model performance compared to using only one prompt during training. Additionally, we divided the test set into two groups based on whether the predictions were correct or incorrect, named 'correct prediction' and 'incorrect prediction' groups, respectively, and analyzed the distribution of confidence scores within these two groups. We found that in the incorrect prediction group, the proportion of confidence scores within the range of 0.6 to 0.8 is significantly higher than in the correct prediction group (11% vs 2%, 10% vs 3%, respectively), as shown in . The confidence score can reflect the reliability of the prediction results to a certain extent. When the confidence score is low, the prediction results are less credible.", ["2403_16073v3_0", "2403_16073v3_0", "2403_16073v3_0", "2403_16073v3_1", "2403_16073v3_2", "2403_16073v3_4", "2403_16073v3_3"]] [60092, ": The overall architecture of our proposed GraphGPT with graph instruction tuning paradigm. Contributions. To address these challenges, we propose a novel framework called GraphGPT, which aims to align Large Language Models (LLMs) with Graphs using a carefully designed graph instruction tuning paradigm. (C1) Our framework introduces a textgraph grounding paradigm as the initial step to align the encoding of graph structures with the natural language space. By incorporating textual information in a contrastive manner, we enable effective alignment of graph structure information within language models. (C2) In our proposed dual-stage graph instruction tuning paradigm, we leverage self-supervised signals through the graph matching task, which is derived from unlabeled graph structures, to serve as instructions for guiding model tuning of LLMs. By incorporating this self-supervised instruction tuning, the language model acquires domain-specific structural knowledge related to graphs, thereby enhancing its understanding of graph structures. 
To further customize the LLM\u2019s reasoning behavior for diverse downstream graph learning tasks, the second stage of our graph instruction tuning paradigm involves fine-tuning the LLM with task-specific graph instructions, to improve the model\u2019s adaptability. (C3) By incorporating the Chain-of-Thought (COT) distillation into our framework, GraphGPT enhances its step-by-step reasoning abilities and improves its performance in the face of distribution shift. In summary, our work makes the following contributions: \u2022 This work aims to align graph domain-specific structural knowledge with the reasoning ability of Large Language Models (LLMs) to improve the generalization of graph learning. \u2022 Our approach aims to align LLMs with Graphs through a graph instruction tuning paradigm. This paradigm incorporates selfsupervised instruction tuning, enhancing the LLM\u2019s comprehension of graph structural knowledge and its reasoning capabilities. Additionally, we introduce task-specific instruction tuning to improve the model\u2019s adaptability across diverse graph tasks. \u2022 We evaluate our proposed GraphGPT on supervised and zeroshot graph learning tasks. We conduct thorough analyses of its component-wise effects and generalization ability. By comparing it with state-of-the-art baselines, we demonstrate the superior generalization power of our approach across various settings. To improve the understanding of graph structural information by large language models, our framework focuses on aligning the encoding of graph structures with the natural language space. This alignment enables language models to effectively comprehend the graph\u2019s structural elements using their language understanding capabilities. To achieve this, we introduce a text-graph grounding paradigm that generates prompts preserving the graph\u2019s structural context for language models. This paradigm acts as a bridge, connecting the semantic understanding of textual information with the inherent structural relationships in the graph. In our GraphGPT, we design the graph encoder to be highly flexible, allowing it to leverage a wide range of backbone GNN architectures obtained from diverse graph pre-training paradigms. We incorporate a message-passing neural network architecture, which can be a graph transformer [60] or a graph convolutional network [17], as the structure-level pre-trained graph model. In each message-passing step, the graph encoder aggregates information from neighboring nodes, considering their relationships: Text-Structure Alignment. To enhance the alignment of graph structure information with Language Models (LLMs), our focus is on exploring effective encoding methods that can collaborate seamlessly with LLMs. Building upon previous works [30, 49], we adopt a contrastive approach by incorporating textual information into the graph structure encoding process. We directly integrate a pre-trained graph encoder into our GraphGPT framework, enabling the seamless utilization of its capabilities. : Workflow of text-structure alignment. The dual-stage graph instruction tuning paradigm proposed in this work builds upon the concept of instruction tuning, which has been recently introduced to enhance the adaptability of language models for specific domains [45]. In this paradigm, we aim to align the language capacity of the model with the nuances of graph learning tasks, enabling the language model to generate more accurate and contextually appropriate responses for graph-structured data. 
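As a generic illustration of the contrastive text-structure alignment described above (a CLIP-style sketch over batches of matched graph-token and text embeddings; GraphGPT's exact objective, encoders, and negative sampling may differ):

```python
import torch
import torch.nn.functional as F

def text_graph_contrastive_loss(graph_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE between node/graph embeddings and the embeddings of
    their textual descriptions; matched pairs lie on the diagonal."""
    g = F.normalize(graph_emb, dim=-1)          # (B, d)
    t = F.normalize(text_emb, dim=-1)           # (B, d)
    logits = g @ t.T / temperature              # (B, B) similarity matrix
    targets = torch.arange(g.size(0), device=g.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2
```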
Self-Supervised Instruction Tuning. In the initial stage of our graph instruction tuning, we introduce self-supervised instruction tuning. This mechanism enhances the language model\u2019s reasoning abilities by incorporating graph domain-specific structural knowledge and effectively understanding contextual information within the graph\u2019s structure. To achieve this, we utilize self-supervised signals derived from unlabeled graph structures as instructions for model tuning. Specifically, we design a structureaware graph matching task that guides the language model in differentiating between graph tokens using language tokens. This instruction task plays a vital role in accurately associating graph tokens with their corresponding textual descriptions, deepening the model\u2019s comprehension of the graph with the provided guidance. Obs.2: Benefits with Structure-aware Graph Matching. The presence of the first stage, which involves self-supervised graph matching tasks for instruction tuning, plays a crucial role in enhancing the zero-shot transferability of our GraphGPT. The first stage focuses on aligning the graph tokens, which encode rich structural information, with the language tokens. This alignment enables the model to develop a deeper understanding of the inherent structural characteristics of the graph data. Without the first stage, if we only conduct the second stage of task-specific instruction tuning, the model tends to be more prone to overfitting on the specific dataset. In such cases, the model\u2019s performance may be heavily reliant on dataset-specific patterns and characteristics, rather than a genuine understanding of the underlying graph structure. This can limit the model\u2019s ability to generalize to new, unseen datasets. More Data Boost Model Transfer Ability. In our preliminary investigation, we examine the influence of data quantity on the transfer capability of our GraphGPT, as illustrated in the \"(Arxiv + PubMed)-Cora\" column of Table 1. In this experiment, we train models using a combination of the Arxiv and PubMed datasets and perform zero-shot testing on the Cora dataset. The results reveal that by incorporating a relatively smaller PubMed dataset (with 20,000+ items) alongside Arxiv, our GraphGPT exhibits a significant improvement in transfer performance on Cora. In contrast, the transfer performance of GNN-based models, trained separately on Arxiv and PubMed, actually deteriorates. More Data Yet No Forgetting. We further validate the performance of the combined Arxiv and PubMed instruction data on the original Arxiv data, as demonstrated in the \"(Arxiv + PubMed)- Arxiv\" column in Table 1. The results indicate that most traditional GNN-based approaches experience a significant decline in performance on Arxiv after iterative training. In contrast, our model exhibits improved performance. We attribute this phenomenon to the occurrence of catastrophic forgetting in GNN-based models, where the structural modeling competence of the model trained solely on the smaller PubMed dataset is compromised. However, our model effectively mitigates this issue through our unified graph instruction tuning paradigm. This enables our model to maintain and even enhance its performance by retaining the generalized graph structure patterns despite incorporating additional data.", ["2310_13023v3_0", "2310_13023v3_1"]] [60093, "For the CL module, we incorporate correlation labels to guide the training process. 
These labels are constructed based on the relationships between sampled contracts. Therefore, employing a suitable sampling strategy is crucial as it can greatly enhance the performance by ensuring the utilization of high-quality sample pairs for training, while minimizing the introduction of bias into the learning process. Motivated by this, our sampling strategy is as follows: There are three types of relationships for contract pairs, i.e. \"V-V\", \"N-N\", and \"V-N\", where V and N denote Vulnerable and Non-vulnerable contracts, respectively. Our intuition is that the relationships of \"V-V\" and \"V-N\" are more important, because we would like to discover the commonality in \"V-V\" and differences in \"V-N\" by CL. In contrast, the \"N-N\" is not substantially helpful in identifying SCV. Therefore, our sampling strategy is to extract all vulnerable contracts from the original dataset and create a new set called the POS set. Then, for each contract in the original dataset, we randomly select a contract from the POS set to form a pair of contracts as input for the CL model. It should be noted that this sampling strategy does not have 'N-N' relationship. Finally, the correlation labels LCL of the contract pairs are constructed to guide the training of the CL model. : Overview of Clear, which encompasses both the CL process, depicted by solid lines indicating the data flow, and the subsequent vulnerability detection process, represented by dotted arrows indicating the data flow. Finally, in Table 1, line 13 reports the performance of the state-of-the-art general method, LineVul, which uses CodeBERT to detect vulnerabilities. The quantitative results suggest that simple migration of general methods to the SCVD field may not yield satisfactory results. Even the LineVul, which performs the best among the general methods, only achieves an average precision, recall and F1-score of 82.92%, 71.56%, and 76.57% for the three vulnerability detection scenarios, respectively. In comparison, Clear outperforms LineVul across all three metrics, surpassing LineVul by more than\n12.93%, 33.37%, and 23.44%, respectively. Answer to RQ1: The proposed Clear outperforms the state-of-the-art methods across all metrics. On average, Clear achieves an F1-score of 94.52%, showcasing a 9.73% increase in F1-score compared to the existing best-performing method. 5.2 RQ2: Impact of Different Modules\nTo answer RQ2, we conduct comprehensive ablation tests to examine and understand the impact of different modules on Clear\u2019s overall effectiveness. In Section 3.1, we have described that Clear consists of three stages, and therefore, we have specifically designed distinct ablation tests for each of these stages. The results of all ablation tests are presented in Table 2, in which the metrics P, R, and F represent precision, recall, and F1-score, respectively. To begin with, for stage 1, we focus our data sampling strategy on two specific types of contract relationships, namely \"V-V\" and \"V-N\". We selectively mask these relationships in order to evaluate the influence of the labels generated by these two relationships on the overall effectiveness of vulnerability detection. The \"Clear-MVV\" indicates the masked \"V-V\" relationship and \"Clear-MVN\" indicates the masked \"V-N\". As shown in Table 2, Clear-MVN and ClearMVV achieve 81.18% and 84.76% average F1-score, respectively. In comparison, Clear outperforms both of them and has an F1-score of 94.52%. 
That is to say, learning only one of the contract relations within the CL stage does not yield satisfactory results. It is only by simultaneously learning both relations that we observe a significant improvement in the performance of SCVD. Moving on to stage 2, we intentionally remove the MLM module\nthat is integrated into the CL stage. This allows us to analyze the overall effectiveness of the CL stage without the presence of the MLM module. This particular test is referred to as \"Clear-RMLM\". We observe that the MLM module has a substantial impact on the effectiveness of the CL module. Specifically, when the MLM module is removed, there is an average decrease in precision, recall, and F1-score by 4.15%, 6.13%, and 5.14% respectively. Therefore, we believe that the MLM module can enhance the performance of Clear and is an essential component. Lastly, for stage 3, we remove the CL stage altogether and directly performed the vulnerability detection stage. This test, known as \"Clear-RCL\", enables us to evaluate the performance of vulnerability detection in the absence of the CL stage. In comparison to Clear-RCL, we observe a significant improvement in performance for all\nthree types of vulnerability detection tasks with the addition of the CL module. The F-score increased by 35.47% for RE, 34.21% for TD, and 19.00% for IO. This notable improvement can be attributed to the synergistic effects of the CL stage itself and our unique sampling strategy. Specifically, the CL module facilitates the convergence of dispersed vulnerability samples in the feature space, resulting in increased proximity among them. By utilizing our unique sampling strategy, we further reinforce the correlation among samples belonging to the same vulnerability category, thereby promoting their clustering behavior. This process enables the model to more effortlessly identify and discover potential SCVs, leading to a significant improvement in the performance of SCVD. To substantiate our assertion, we thoroughly examine the derivation process of the sample distribution during the CL stage. In particular, we analyze the evolution of the output of the CL stage (denoted as in Eq. 7) at each epoch and employ principal component analysis [29] to project each output onto a two-dimensional space. Subsequently, these outputs are visualized as scatter plots and displayed in , where the horizontal and vertical axes represent linear combinations of the vectors v obtained through PCA. Each point denotes a contract sample, with purple indicating vulnerability samples and yellow representing non-vulnerability samples. The figure clearly depicts the progression of smart contract sample distribution throughout the CL stage and yields the following finding. First, during the training process of the CL module, the samples of vulnerability contracts exhibit a tendency to cluster together, while being distinctly separated from non-vulnerability samples, indicating a clear distinction between the two groups. Second, this distribution enhances the ability to differentiate and detect SCVs. Clear exhibits a higher proficiency in recognizing this particular cluster and accurately classifying contracts within its proximity as vulnerable. This leads to an improved capability for identifying SCVs.", ["2404_17839v1_0", "2404_17839v1_1"]] [60094, " With the continuous growth of bitcoin addresses, an accurate yet efficient method for address classification is much coveted. 
Existing methods for bitcoin address classification can be roughly cast into two categories, i.e., off-chain informationbased method and on-chain behavior-based method. The offchain information-based method focuses on tagging addresses by gathering real-world data from address users. For example, Ermilov et al. [18] crawl the user profiles from relevant forums (e.g., Bitcointalk.com, Twitter, and Reddit) and darknet markets (e.g., Silkroad, Hub Marketplace, and Alphabay) to obtain the association between some bitcoin addresses and users, and then analyze the behavior of other addresses using a clustering method. Kang et al. [19] obtain the IP address by receiving bitcoin protocol message packets, and then use static analysis to infer other addresses which the current IP address may be associated with, realizing the mapping between bitcoin addresses and IP addresses. Unfortunately, these solutions are highly dependent on off-chain collected information and human experience, and hence cannot be used for all bitcoin addresses. Additionally, there are usually many mistakes in off-chain information, which might lead to low accuracy of the address behavior analysis. The on-chain behavior-based method concentrates on extracting transaction characteristics of bitcoin addresses and analyzing their behavior with the guidance of machine learning. For instance, [20]\u2013[25] directly extract transaction features from bitcoin addresses and then feed them into models, e.g., Random Forest, SVM, and LightGBM. Such methods do not need to rely on off-chain information, thus avoiding potential problems caused by missing off-chain information. However, they also suffer from inherent drawbacks. On one hand, direct feature extraction from transactions in the addresses may lead to large deviations. On the other hand, the traditional classification model has difficulties in utilizing the temporal feature and topology of transactions in the addresses. These problems impose a significant impact on the accuracy of address behavior analysis, which motivates us to design a novel and effective address behavior analysis model. In this paper, we present BAClassifier, a fully automatic framework for bitcoin address classification. In particular, BAClassifier consists of three key components. (1) Address Graph Construction. For each given bitcoin address, BAClassifier constructs a chronological transaction graph that reflects the behavior of that address. Specifically, we engage in a graph node compression technique and a graph structure augmentation method to transfer the transactions of each bitcoin address into a unified graph representation. (2) Graph Representation Learning. BAClassifier utilizes graph neural networks to learn the graph representation of each address and generate the graph embeddings. (3) Address Classification. BAClassifier aggregates all graph embeddings of each address into the address-level representations, and engages in a classification model to give the final address behavior classification. \u2022 We propose BAClassifier, a tool that can automatically classify bitcoin address behaviors. Particularly, we investigate a scalable and generic manner of analyzing bitcoin address behaviors using graph neural networks. \u2022 Within this system, we come up with three key components, i.e., address graph construction, graph representation learning, and address classification. 
Specifically, we introduce graph node compression and graph structure augmentation techniques to translate bitcoin address transactions into unified graph representations. Furthermore, we adopt graph feature networks to extract address graph features and select the combination of LSTM+MLP as the behavior classification model. \u2022 We construct a large-scale labeled dataset that consists of over 2 million bitcoin addresses as well as their transactions and concerns 4 types of address behaviors, which can serve as a benchmark for evaluating bitcoin address classification methods. \u2022 Extensive experiments on the collected dataset show that our proposed system is indeed useful in identifying bitcoin address behaviors. BAClassifier surpasses stateof-the-art address classifiers and overall provides interesting insights. We have released our code and dataset at https://github.com/AwesomeHuang/BAClassifier, hoping to facilitate future research. Method Overview. The overall architecture of BAClassifier is outlined in . Generally, BAClassifier consists of three key components: \u2022 Address Graph Construction: For each given bitcoin address, BAClassifier will construct a chronological transaction graph that reflects the behavior of that address. \u2022 Graph Representation Learning: BAClassifier utilizes a graph neural network to learn the graph representation of each address and generate the graph embeddings. \u2022 Address Classification: BAClassifier aggregates all graph embeddings to produce the address-level representations, and confirms the classification model to output the predictions of address classifications. In what follows, we will elaborate on the details of these components one by one. The first step in BAClassifier is to transfer the address transactions into graph structures. To obtain a unified address graph, we have to solve three key problems. (1) Different bitcoin addresses have a distinct number of transactions, thus yielding different sizes of graphs. Moreover, the transactions of one address are performed in temporal order. Therefore, we must guarantee that the generated graphs have a unified structure while still preserving the temporal order of transactions. (2) There is a significant disparity in the number of addresses involved in various transactions. For instance, a transaction issued by certain exchanges may only have several associated addresses, while a transaction generated by a mining pool may have thousands of associated addresses. Hence, the graph size must be limited. (3) Since bitcoin transactions provide an insufficient amount of information, we thus need to go deeper into the global graph structural feature to elicit further information. To tackle the above problems, BAClassifier incorporates three key modules into the address graph construction, namely original graph extraction, graph node compression, and graph structure augmentation. When a transaction occurs, the bitcoin wallet will zero off the balance in the original address, and transfer any leftover funds to a new address. Naturally, the address that receives the change could be set to the original address of the originating transaction. Then, the bitcoin wallet automatically generates a new address to receive the change after the transferred amount and the fee are deducted. Such a mechanism safeguards the user\u2019s privacy by ensuring that no one other than the user could know which address is the change address and which is the receipt address. 
However, all these facts make the analysis of bitcoin address behavior more difficult and challenging than that of traditional account models. Single-Transaction Address Compression. The single-transaction address compression is proposed to merge the address nodes that have only a single transaction, reducing the number of address nodes in the graph. illustrates the specific procedure. For the single-transaction address connected to the same transaction node, we merge these nodes into a hyper node, termed a single-transaction hyper node. To retain the input or output value of these addresses, we adopt the statistical feature extraction method to retrieve the transfer feature for these single-transaction hyper nodes, which is the features of the single-transaction hyper node in the graph. After compressing address nodes, a transaction node can link to a maximum of two single-transaction hyper nodes on the input and output side of the transaction, respectively.", ["2211_14582v1_0", "2211_14582v1_1"]] [60095, "TRIALMASTER conducts inference on itself without any help from a backtracking system like DFS or BFS. It outputs two kinds of tactics: tactics in Lean and backtrack instructions. When TRIALMASTER is doing a proof search on the test set, it is prompted with all history paths, including previous tactics, states, the backtracking it made before, and the failed search path. It then outputs the entire proof path after. Nonetheless, we only utilize the first tactic in the output and employ Lean as a calculator to determine the next state, thereby ensuring the correctness of the state following the tactic. If the tactic outputted by TRIALMASTER is a backtrack instruction, it is then prompted with all the proof search history including the backtrack instruction and the state that the backtrack instruction says to return to. If that tactic is not a backtrack instruction, the tactic and the current state will be fed into Lean for producing the state after. TRIALMASTER is then prompted with the entire proof tree including the state that Lean calculated, and it should output a tactic again. This process is repeated until Lean identifies that the proof is complete or any Lean error occurs. We also note that TRIALMASTER only outputs one tactic at each state using greedy search. : Method comparison. (a) A conventional system: The tactic generator (i.e., LLM) is fine-tuned on correct proof paths only. During inference, the trained tactic generator produces Nsampled (e.g., 2 in the example) tactics at a time. If Lean decides that the current tactic is wrong, the system backtracks to the last valid state and tries other candidate tactics. (b) Our methodology: The tactic generator is fine-tuned on proofs with trial-and-error. During inference, we take the first tactic it generates and feed that into Lean for state checking at each step. Why do we choose DFS over BFS? While the breadth-first-search (BFS) system is also popular for building neural provers in Automated Theorem Proving, we have opted for DFS as our baseline over BFS in the context of propositional logic theorem proving. This is due to the finite number (around 20) of tactics available at any step for the search process of intuitionistic propositional logic theorems, making DFS more efficient than BFS without compromising the success rate. 5.3 Results and Analysis\nTRIALMASTER outperforms conventional DFS system. We begin by evaluating the methods of the in-distribution test set. 
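Before the results, a minimal sketch of the inference loop described in the method comparison above may help: the model is prompted with the full search history, only the first generated tactic is kept, Lean computes the next state, and backtrack instructions are honored. The helpers `generate_first_tactic`, `lean_apply`, and `is_backtrack`, and the backtrack-instruction format, are hypothetical placeholders, not the released code.

```python
# Minimal sketch of the greedy proof-search loop with trial-and-error history.
# All helper functions and the instruction format are hypothetical placeholders.
def prove(theorem, generate_first_tactic, lean_apply, is_backtrack, max_steps=100):
    history = [f"state 0: {theorem}"]       # full history: states, tactics, backtracks
    state_id = 0
    for _ in range(max_steps):
        tactic = generate_first_tactic(history)       # model sees the entire history
        if is_backtrack(tactic):                      # e.g. "backtrack to state 3"
            history.append(tactic)                    # failed path stays in the prompt
            state_id = int(tactic.split()[-1])        # resume from the named state
            continue
        ok, next_state, finished = lean_apply(state_id, tactic)  # Lean checks the tactic
        if not ok:
            return False, history                     # any Lean error ends the attempt
        history.append(tactic)
        state_id += 1
        history.append(f"state {state_id}: {next_state}")
        if finished:
            return True, history                      # Lean reports the proof is complete
    return False, history
```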
Table 2 illustrates that both our method and the DFS system perform exceptionally well, achieving a success rate of nearly 100% in most configurations. This suggests that Llama-7b effectively masters in-distribution intuitionistic propositional logic theorems. Then, we compare the performance of the methods on the out-of-distribution task. The results are presented in . Our method with trial-and-error significantly outperforms the DFS system across various hyperparameter configurations. Additionally, we observe that feeding more proofs without trial-and-error for LLM fine-tuning does not further improve the performance. Model learns backtracking capability from trial-and-error data. In the experiments, we find that our TRIALMASTER successfully acquires the backtracking capability from proofs with trial-and-error information. This is evidenced by the fact that, during TRIALMASTER's proof search for theorems in the test set, all backtracking instructions produced by the LLM adhere to the correct format and point to existing state numbers. Including failed search paths helps TRIALMASTER to learn. The following experiment shows that adding failed search paths to the training data for TRIALMASTER results in an overall gain. In this experiment, the model is trained to learn only the correct search paths and the backtracking instructions; it is not trained to learn the failed search paths (we do not compute the loss on the failed search paths during training in this case). The proof success rate in this case is 75.6%, which is lower than TRIALMASTER's proof success rate of 88.7%. The N_Lean for this model is 13,600, which is lower than that of TRIALMASTER; this is expected, since the model does not learn to predict failed search paths. Our explanation for why TRIALMASTER has a higher proof-search success rate than the model trained in the experiment above is that the failed search paths also contribute to improving the proof-search success rate of the model: TRIALMASTER strategically tries some potentially failed search paths to gain a more comprehensive view of the problem, which then leads to the correct search paths.
5.4 Ablation Study
To evaluate the effectiveness of training with trial-and-error, we craft an ablated version of our method in which the LLM is fine-tuned on data containing the correct path only and performs inference in the same way as our method (i.e., producing one tactic at a time and applying Lean for state checking). We denote the ablated version as Model - proof w/o t.a.e. For both methods, we mark a search attempt as failed if a tactic induces a Lean error or the search exceeds the 1500-word limit. The result is shown in Table 3. The difference between the success rates of the two models is 29.4%, which is significant. This clearly shows that failed search states and trial-and-error information tremendously enhance the model's capability to solve theorem-proving tasks.
5.5 Exploratory Study: Effect of Training Proof Length on Model Performance
Since the FPS algorithm of the PropL dataset can generate multiple proofs of variable length, we conduct an exploratory study to assess the impact of proof length on model performance. We fine-tune two models using proofs with different lengths of trial-and-error information. For the first model, which is our TRIALMASTER, the training data is derived by randomly selecting two of the shortest four proofs from the ten available proofs for each theorem in PropL.
We denote it as Model - short proof w/ t.a.e. In contrast, the training data of the second model is formed by randomly selecting two proofs from the ten available for each theorem, irrespective of their lengths. We denote it as Model - long proof w/ t.a.e. For both models, we use greedy search to let them generate one tactic for each state. We evaluate the models on our 1000 OOD test set. The results are shown in the . A higher success rate is observed in the model trained with shorter proofs. This can be attributed to the fact that as the proof with trial-and-error information becomes longer, there is too much trial-and-error information that may detrimentally affect the model's performance, as too many failed search paths may lower the quality of the training data. 6 Conclusion and Future Work\nIn this paper, we study Automated Theorem Proving in formalized environments. We create a complete, scalable, and representative dataset of intuitionistic propositional logic theorems in Lean. We demonstrate that leveraging information from failed search states and backtracking not only teaches models how to backtrack effectively, but also helps in developing better tactics than those generated by models trained without access to backtracking insights. We release our datasets on GitHub and Huggingface.", ["2404_07382v3_0", "2404_07382v3_2", "2404_07382v3_1"]] [60096, "In this paper, we shed light on smart contract fuzzing by employing a sequence-aware mutation and seed mask guidance strategy. In particular, we first utilize data-flow-based feedback to determine transaction orders in a meaningful way and further introduce a sequence-aware mutation technique to explore deeper states. Thereafter, we design a mask-guided seed mutation strategy that biases the generated transaction inputs to hit target branches. In addition, we develop a dynamic-adaptive energy adjustment paradigm that balances the fuzzing resource allocation during a fuzzing campaign. . A high-level architecture and analysis pipeline of MuFuzz. We design and implement MuFuzz, a novel fuzzing framework for smart contracts, which consists of a sequence-aware mutation module, a seed mask guidance module, and a dynamic energy adjustment module. In summary, we make the following key contributions:\n\u2022 We design a sequence-aware mutation and seed mask guidance strategy for blockchain smart contract fuzzing.\n\u2022 We propose a new fuzzing framework for smart contracts that consists of three key components: a sequence-aware mutation, a mask-guided seed mutation, and a dynamic adaptive energy adjustment, which increases the probabil ity of exploring deep contract states. The proposed mod ules hold the potential to be transferable to the fuzzing of smart contracts on alternative blockchain platforms.\n\u2022 We implement MuFuzz1 and conduct extensive exper iments on three benchmarks. Not only does MuFuzz outperform state-of-the-art smart contract fuzzers in terms of both coverage and runtime, but it is also able to detect more vulnerabilities than existing bug detection tools.\n\u2022 We release both our system and benchmarks to facilitate future research, in the spirit of open science. II. BACKGROUND\nA. Ethereum Smart Contract\nThe notion of smart contracts was originally put forth in 1994 by Nick Szabo [42], who described the concept of a trustless system containing self-executing computer programs. However, the concept did not become a reality until the emergence of the Ethereum in 2014 [43]. 
Ethereum is currently\nthe most popular blockchain, which employs the Ethereum Virtual Machine (EVM) to execute smart contracts. Smart contracts maintain Storage, which is organized as a key value store to persistently hold state variables [18]. Since smart contracts are stored on the blockchain, they inherit certain properties. The immutability of the blockchain ensures that the dapp/program execution strictly adheres to the rules defined in the smart contract, which are unalterable once deployed on chain. Attributed to the decentralized nature, smart contracts allow transactions to transpire between anonymous parties without relying on a trusted third party. To date, more than 60 million smart contracts have been created on Ethereum [4],\ngiving rise to a variety of decentralized applications, such as decentralized finance (DeFi) [44], non-fungible tokens (NFT) [45], Internet of Things (IoT) [46], healthcare [47], crowdsale [48], and many others [1], [2], [49]. B. Smart Contract Vulnerability Smart contracts on the Ethereum blockchain have been\nsubject to numerous destructive attacks. The most notable ones include the DAO attack in 2016 [50], the Parity Multisig Wallet attack in 2017 [5], the Beauty Chain attack in 2018 [51], the Cream Finance attack in 2021 [9], and the Rari Fuse Pool attack in 2022 [52], which together resulted in huge financial losses. Prior works such as [53], [18] have studied and defined various defects in Ethereum smart contracts. Here, we mainly focus on detecting nine types of vulnerabilities, which are summarized in Table I. The reasons for considering these bug classes are: (1) A large body of previous research has demonstrated that these vulnerabilities account for a significant portion of the existing bugs in the Ethereum ecosystem [12], [13]. (2) Over 90% of the financial losses in Ethereum smart contracts are caused by these vulnerabilities [6], [30], [54]. (3) We have investigated the bug classes handled by existing smart contract security tools and found that these vulnerabilities are of the highest concern [18], [15], [19], [33]. shows the experimental results, where each bar represents the percentage of achieved coverage or detected vulnerabilities compared to the results when all three com ponents were enabled (i.e., the gray bar in the back). It can be seen from the figure that each component is essential to the performance improvement of MuFuzz. In particular, we can observe that generating a meaningful transaction sequence plays the most critical role in achieving branch coverage and detecting more vulnerabilities. Quantitatively, without the three components, the achieved coverage decreases by 18%, 9%, 10% on small contracts and 26%, 19%, 25% on large contracts, respectively. Moreover, MuFuzz can discover 14%, 6%, 11% more bugs on small contracts and 27%, 22%, 24% more bugs on large contracts than without the three components, respectively. E. Real-World Case Study (Answering RQ4)\nWe also pay attention to the scalability and practicality of MuFuzz. We randomly select 100 contracts from D3, where each contract contains more than 30,000 transactions in Ethereum. We manually check the bug detection results and classify them into true positives (TP) and false positives (FP). In addition, we present the overall branch coverage (i.e., the average of the 100 contract runs) of MuFuzz. Table IV summarizes the experimental results. From the table, we can see that: (1) MuFuzz reports a total of 86 bug alarms. 
Out of the 100 contracts, 39 contracts are flagged as having at least one of these alarms. We manually verify the alarms and con firm that 94% of them are true positives. (2) MuFuzz achieves an average coverage of 80.71% on the 100 contracts, showing inspiring practicality in testing real-world large contracts. Case Study. To provide further insight on how MuFuzz achieves high coverage and detects bugs, we illustrate the fuzzing strategy of MuFuzz by through its workflow. shows a contract where MuFuzz achieves 100% coverage while sFuzz and ConFuzzius achieve only 50%. Function withdraw has an if condition at line 30. If the condition cannot be satisfied, the bug inside the condition cannot be exposed. To cover line 31, one must call function invest twice to set the phase = 1. sFuzz and ConFuzzius fail to\ncover line 31 because they cannot generate a sequence that runs invest twice. MuFuzz, instead, incorporates a sequence aware mutation strategy to create a sequence that can handle such conditions. Particularly, MuFuzz works in four main steps: 1) MuFuzz parses a contract source code into bytecode, ABI, and AST. By analyzing the ABI and AST, MuFuzz identifies which state variables are defined. 2) MuFuzz derives the dataflow dependencies of state variables and formulates a sequence of transactions. 3) MuFuzz determines the test inputs for each transaction in the sequence and activates\nthe mutation masking strategy to evolve the seed inputs. 4) MuFuzz analyzes the fuzzing log and identifies if there is a vulnerability by matching it against the bug oracles. The vulnerability in the crowdsale case can be exposed by MuFuzz with a sequence of transactions:", ["2312_04512v2_0", "2312_04512v2_1", "2312_04512v2_2"]] [60097, "Transformers succeed in learning from Teaching Sequences. For all 5 tasks, we find that Transformers achieve perfect accuracy on subsequent points after receiving the teaching sequence. shows the accuracy curve of Transformers and baselines on three representative tasks after receiving the first t points containing the teaching sequence. Note that, the accuracy of Transformers stays (close to) 100% after receiving the teaching sequence. It is interesting to see that Transformers succeed in learning with teaching sequences for tasks like DNFs and sparse parities where they struggled to learn in the vanilla setting. By definition, the teaching sequence is the smallest sequence required to learn the target function. Hence, it can often be much smaller than the sample complexity for a learning algorithm to learn from arbitrary distributions such as the ones considered in the previous section. Thus, our experiments show that models predict perfectly with much fewer examples when provided with these teaching sequences. This is not true for FFNs trained with gradient descent which fail to predict accurately given only the teaching sequence during training (see right in Appendix). FFNs require a larger number of additional points apart from the teaching sequence to predict with high accuracy. Two distinct algorithms to learn one task. An interesting finding is that Transformers seem to learn two different algorithms to learn Conjunctions depending on the data distribution DX during the training process. 
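Before describing that finding in detail, it may help to make the notion of a teaching sequence concrete. The sketch below builds one plausible minimal teaching set for a monotone conjunction over {0,1}^n (one positive example plus one negative example per relevant variable); the exact construction used for the tasks above may differ, so treat this as an illustrative assumption rather than the paper's definition.

```python
# Sketch: a candidate teaching sequence for a monotone conjunction over n bits.
# `relevant` holds the indices appearing in the conjunction x_{i1} AND ... AND x_{ik}.
import numpy as np

def teaching_sequence(n, relevant):
    examples = []
    pos = np.zeros(n, dtype=int)
    pos[relevant] = 1                 # relevant bits on, all others off
    examples.append((pos, 1))         # shows the irrelevant variables are not needed
    for i in relevant:
        neg = np.ones(n, dtype=int)
        neg[i] = 0                    # turning off one relevant bit flips the label
        examples.append((neg, 0))     # shows variable i belongs to the conjunction
    return examples

for x, y in teaching_sequence(n=7, relevant=[1, 4, 6]):
    print(x, "->", y)
```

Under this construction the teaching set has size k+1 for a conjunction with k relevant variables, which is typically far smaller than the number of examples needed to learn from arbitrary distributions.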
See that ( left) when Transformers are trained to perform in-context learning with Conjunctions in the vanilla setting (as in Section 3) without teaching sequences, and tested on examples (or Prompts) containing teaching sequences, they are not able to leverage the teaching sequence and require more examples to predict accurately. The algorithm learned with standard Conjunctions still works on examples with teaching sequences even if it is not as sample-efficient. On the other hand, when Transformers are trained with teaching sequences and are tested in the standard setting without teaching sequences, they do not perform well since the learning algorithm relies on using the first t examples containing the teaching sequence ( center-left). These results indicate that models can learn two distinct algorithms depending on the distribution of inputs provided during training. We conducted another experiment where we trained a Transformer on a mixture of examples with and without teaching sequence. During training, we sample an example (or Prompt sequence) with a teaching sequence with probability 1 2 , and without a teaching sequence with equal probability. We evaluate the model on both tasks separately, one which contains examples with teaching sequences and one without them. We find that the same Transformer model could achieve near-optimal performance for both tasks \u2013 when provided with teaching sequences, it behaves like the model that is trained on just the Teach Conjunction task ( Right) and when provided with examples without teaching sequences, it behaves like a model trained on just the Conjunction task ( center-right). This indicates that Transformers can learn two distinct learning algorithms for Conjunctions and implement the optimal one depending on the sequence of in-context examples. This highlights the versatility of neural sequence models in being able to find separate learning algorithms with respect to the data distribution for solving the same task. Results with Frozen GPT. We evaluate the performance of a frozen GPT-2 model on tasks such as Conjunctions and Disjunctions. depicts the performance on Conjunctions (the behavior on Disjunctions is almost identical). We find that the GPT-2 model performs relatively better than the nearest neighbor baseline and much better than a randomly initialized model on these tasks. However, it still does not come close to achieving the near-perfect accuracy obtained by fully trainable Transformer networks as in Section 3. GPT-2 can implement Nearest Neighbor. Given the observation that the performance of GPT-2 was close to the nearest neighbor algorithm, we examined if a frozen GPT-2 model can implement the nearest neighbor (NN) algorithm. We designed the NN task to test this hypothesis. In this task, each prompt contains 100 points where the first 20 points are labeled 0 or 1 uniformly at random and the subsequent 80 points are labeled according to the nearest neighbor algorithm. We then evaluate the GPT-2 model in the frozen-GPT setup described earlier and find that it can achieve near-perfect accuracy on the 80 points labeled with the nearest-neighbor algorithm (see center). Moreover, upon analyzing the attention heads we found heads which closely implemented nearest neighbors \u2014 for an input xi , the attention head attended over yj where xj is the nearest neighbor of xi among x1, . . . , xi\u22121 (illustrated in right). Further details about these experiments are provided in Appendix G. 
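The NN task described above (the first 20 points labeled uniformly at random, the remaining 80 labeled by the nearest neighbor among earlier points) is straightforward to generate. The sketch below is one such generator; the choice of Hamming distance over Boolean inputs and the dimensionality are assumptions for illustration.

```python
# Sketch: generate one prompt sequence for the nearest-neighbor (NN) task.
import numpy as np

def nn_task_prompt(n_dims=10, n_random=20, n_total=100, seed=0):
    rng = np.random.default_rng(seed)
    xs = rng.integers(0, 2, size=(n_total, n_dims))
    ys = np.empty(n_total, dtype=int)
    ys[:n_random] = rng.integers(0, 2, size=n_random)   # first 20 labels are random
    for i in range(n_random, n_total):                   # later labels copy the nearest neighbor
        dists = np.abs(xs[:i] - xs[i]).sum(axis=1)       # Hamming distance to earlier points
        ys[i] = ys[np.argmin(dists)]
    return xs, ys

xs, ys = nn_task_prompt()
print(xs.shape, ys[:25])
```

A model that implements the nearest-neighbor algorithm should achieve near-perfect accuracy on the last 80 labels of such a sequence, which is exactly the behavior reported for the frozen GPT-2 model.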
These heads are reminiscent of induction heads (Olsson et al., 2022) where instead of matching prefixes, they can find the nearest neighbor over the input space. Since the weights of the Transformers are frozen and only the input embedding and output layers are updated during the training process, the mechanisms to solve tasks such as NN and Conjunctions must have been learned during pretraining on language data. Moreover, a Transformer of the same size with randomly initialized weights is unable to perform much better than chance-level accuracy. Results with Direct Evaluation. The performances of all LLMs for the Conjunction and Majority tasks with varying number of dimensions are provided in . It is clear that all models are able to perform as well as or better than the nearest neighbor baseline when the number of dimensions is up to 7. Note that in this setup, the LLM is essentially unaware of the task (such as Conjunction) when it is provided directly with the prompt at inference time. Even with n = 7, there are 2 128 Boolean functions, and so the in-context learning problem remains challenging, and the observed performance of these models is impressive. It is perhaps surprising to see that the open-source LLaMA-2 model performs quite similar to GPT-3.5-Turbo. It is also particularly interesting to see the strong performance of GPT-4; apart from outperforming all other models, it also slightly outperforms the nearest neighbor baseline even in the 15-dimensional case.", ["2310_03016v1_0", "2310_03016v1_2", "2310_03016v1_3", "2310_03016v1_3", "2310_03016v1_3", "2310_03016v1_3", "2310_03016v1_4", "2310_03016v1_4", "2310_03016v1_4", "2310_03016v1_1"]] [60098, "One of the most critical problems in BCFL is the resource allocation on local devices. Firstly, local devices with heterogeneous computational power usually have their own tasks to finish, so a universal resource allocation scheme for all the mobile devices is not practical. In addition, the whole system may not work effectively and sustainably if there are no reasonable rewards allocated to clients. Furthermore, both training and mining in the framework of BCFL consume significant amount of resources and time, and thus it is difficult for clients to appropriately allocate their limited resources to ensure the performance of the global model during the required time period. Lastly, since the system may not be aware of the amount of training data that each client owns, it can be challenging for the model owner (MO), i.e., the BCFL task publisher, to make proper decisions regarding the reward distribution. Inspired by [20], we consider a fully coupled BCFL system, which runs FL on a consortium blockchain network. In such a decentralized BCFL system, the participants in FL work as the blockchain nodes as well. Specifically, there are multiple local devices, termed as clients, working collaboratively to train a machine learning model, i.e., the global model. The set of clients can be denoted as N = {1, \u00b7 \u00b7 \u00b7 , i, \u00b7 \u00b7 \u00b7 , N} with N representing the total number of clients in the BCFL system. For simplicity, we refer to the work done by the blockchain for FL as mining in a uniform way, which does not imply that clients perform mining jobs consuming excessive amount of computing power like Proof of Work (PoW) [21]. 
In our system, we consider that lightweight consensus algorithms, such as Practical Byzantine Fault Tolerance (PBFT) [22] and Delegated Proof of Stake (DPoS) [23], are utilized in the consortium blockchain system. In our considered BCFL system, each client should be responsible for both training and mining. Since the workflow of the fully coupled BCFL is that mining starts only after the training is completed, we assume that training and mining are not parallel in this paper. Once the training is finished, all the clients will upload their local model updates to the blockchain network so as to be recorded on the blockchain. Here we define one round of BCFL as finishing both the training and mining processes. Since mobile devices usually have their own tasks to finish rather than only contributing to BCFL, we assume that they will not use all of their computation resources. In other words, CPU cycles per second for training and mining can be adjusted strategically in each round of BCFL. The MO is the requester of the FL task, aiming at to receive a well-trained final global model from the BCFL system. After the FL task is published on the blockchain, clients start to train their local models and then broadcast the obtained model updates to the blockchain network once the local training process is finished. By this means, the MO can only access the model updates from all clients rather than raw data of devices, thus preventing the leakage of private information for participants. An illustration of our system model is shown in . The detailed workflow to finish one BCFL task is as below:\n\u2022 The MO publishes an FL task, with the rewards for training and mining.\n\u2022 Clients determine the computational resources, i.e., CPU cycles per second, used to train the model and mine for the blockchain based on the rewards provided by the MO.\n\u2022 Each client trains the local model, and then broadcasts the model updates to the blockchain network. Then, clients start to mine the block.\n\u2022 Once the block is generated, the model updates are stored on the blockchain, and the rewards will be delivered to each client.\n\u2022 Clients calculate the global model with the verified model updates on chain. As long as the expected performance of the global modal is not reached, clients will start the next round of training based on the aggregated global model. Before we formulate the game model, we should clarify the fair reward allocation scheme first. In our system, we consider that each client has an equal chance to participate in both the training and mining processes with fair rewards. And since the allocation of rewards to each client in training and mining has a significant impact on the system fairness and further the participation willingness, we need to design a fair reward allocation scheme. Although we have already defined the payoff of each client during the training and mining processes in the above section, it is necessary to investigate their upper bounds based on the MO\u2019s rewards budget. And the rewards distribution should not only be associated with the computing power of the device, but also take into account the performance of its work. 
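Before turning to how the reward budget constrains this allocation, the toy sketch below simply mirrors the round structure listed above: clients split CPU cycles between training and mining, updates are recorded on-chain, rewards are delivered, and rounds repeat until the global model is good enough. All names and numbers here are hypothetical placeholders and do not reflect the paper's actual utility or reward model.

```python
# Toy, self-contained sketch of the BCFL round structure listed above.
# Model "quality", reward split, and CPU fractions are hypothetical placeholders.
import random

def run_bcfl_task(n_clients=5, reward_budget=100.0, target=0.9, max_rounds=50, seed=0):
    random.seed(seed)
    global_model = 0.0                                 # stand-in for global model quality
    rewards = {i: 0.0 for i in range(n_clients)}
    for rnd in range(1, max_rounds + 1):
        updates = []
        for i in range(n_clients):
            f_train = random.uniform(0.3, 0.7)         # CPU cycles devoted to training
            f_mine = 1.0 - f_train                     # remaining cycles devoted to mining
            updates.append(global_model + 0.05 * f_train)   # local update after training
            rewards[i] += reward_budget / (max_rounds * n_clients) * (f_train + f_mine)
        # after training, updates are broadcast, a block is mined, and rewards delivered
        global_model = sum(updates) / len(updates)     # clients aggregate verified updates
        if global_model >= target:                     # stop once the target is reached
            return rnd, round(global_model, 3), rewards
    return max_rounds, round(global_model, 3), rewards

print(run_bcfl_task())
```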
On the one hand, the reward budget of the MO and the rewards that each client can get are limited; on the other hand, if the resources are allocated only based on the computing power devoted, it could lead to the situation where devices with sufficient computing power take most of the rewards, while devices with less power cannot get enough rewards, making the system unstable and unsustainable. In this section, we will conduct numerical experiments to verify and support our designed mechanism. We first clarify the experimental settings and then illustrate the results. We implement the simulations using Matlab 2019b in macOS 11.0.1 running on Intel i7 processor with 32 GB RAM and 1 TB SSD. First, we prove the correctness of the optimal strategies derived from our models. We assume there are 50 clients in total and each client has the same ata size, so we set \u00b5i = 10. In our experiments, for clients and the MO, there are four strategy combinations, i.e., both sides choose the optimal strategies, one chooses the random strategies while the other chooses optimal strategies, and both choose the random strategies. For example, we define the strategy combination Random vs. Optimal as the clients choose the random strategies and the MO chooses the optimal strategy. We compare the utilities of clients and the MO with random strategies and optimal strategies, respectively. The results in illustrate that clients and the MO can obtain the higher utilities than all other strategies when they both choose the optimal strategies, proving the validity of our proposed optimal strategies.", ["2202_10938v1_0", "2202_10938v1_1"]] [60099, "Abstract\nLarge Language Models (LLMs) have emerged as powerful tools across var ious domains within cyber security. Notably, recent studies are increasingly\nexploring LLMs applied to the context of blockchain security (BS). How ever, there remains a gap in a comprehensive understanding regarding the\nfull scope of applications, impacts, and potential constraints of LLMs on blockchain security. To fill this gap, we undertake a literature review focus ing on the studies that apply LLMs in blockchain security (LLM4BS).\nOur study aims to comprehensively analyze and understand existing re search, and elucidate how LLMs contribute to enhancing the security of\nblockchain systems. Through a thorough examination of existing literature, we delve into the integration of LLMs into various aspects of blockchain secu rity. We explore the mechanisms through which LLMs can bolster blockchain security, including their applications in smart contract auditing, transaction anomaly detection, vulnerability repair, program analysis of smart contracts, and serving as participants in the cryptocurrency community. Furthermore, we assess the challenges and limitations associated with leveraging LLMs for enhancing blockchain security, considering factors such as scalability, pri vacy concerns, and ethical concerns. Our thorough review sheds light on the opportunities and potential risks of tasks on LLM4BS, providing valuable insights for researchers, practitioners, and policymakers alike. In this section, we introduce a thematic taxonomy devised to systematically categorize the body of literature about tasks associated with large language models for blockchain security (LLM4BS), emphasizing the function of the LLM within these contexts. 
depicts the five applications of LLM4BS task, involving code audit of smart contracts §3.1, analysis of abnormal transactions §3.2, dynamic analysis of smart contracts §3.3, development of smart contracts §3.4, participants of cryptocurrency community §3.5, and other potential directions §3.6. Governance emerges as a major theme, as LLMs could contribute significantly to the structuring and transparency of this largely unregulated space. The first document outlines the broader governance challenges faced by AI systems, suggesting blockchain as a viable solution to introduce verifiability and accountability. On the other hand, the limitations of LLMs in capturing the complexities of legal reasoning are highlighted, a concern that is echoed across the three studies to varying degrees. The practical applications of these models in legal settings, specifically detailed in the second and third documents, emphasize their innovative role in drafting legal complaints. This development is promising for the future of legal work related to cryptocurrency regulations and litigation, as it suggests that LLMs could alleviate some of the workload from human experts, although the need for human oversight remains. While governance and legal assistance dominate the discourse, there's a tone of cautious optimism throughout the texts. There is recognition of the transformative potential of LLMs in the cryptocurrency sector, but also a clear acknowledgment of the need for further advancement in AI technology to fully integrate into complex decision-making processes where legal and ethical considerations are paramount. In essence, the collective narrative from the three documents converges on the premise that LLMs hold transformative potential for the cryptocurrency community's governance and legal sectors but must overcome challenges in understanding before they can be fully trusted in autonomous roles.
3.6. Miscellaneous
As displayed in Table 7, LLM is also used in other blockchain security fields, involving smart contract compilers [57], zero-knowledge proofs [58], model training [20, 59], NFT generation [60]. We will introduce their applications in detail in the future.
4. Case study of LLM4BS
In this section, we engage in an in-depth examination through three distinct case studies, each serving to illustrate and shed light on the diverse and concrete applications of Large Language Models for Blockchain Systems (LLM4BS). These cases in Table 8 have been meticulously selected to encompass a broad range of scenarios, comprising LLM4FUZZ [44] §4.1, SMARTINV [29] §4.2, BLOCKGPT [26] §4.3. LLM4FUZZ emerges as an innovative technique in the cybersecurity landscape, specifically in the niche of smart contract security within blockchain networks. It intricately combines the prowess of Large Language Models (LLMs) with fuzz testing methodologies to proactively unearth vulnerabilities that could potentially compromise the integrity of smart contracts. LLMs are highly sophisticated AI models that have made significant strides in understanding and generating human-like text, and more recently, they have proven to be adept at comprehending programming languages and code structure. LLM4FUZZ exploits this capacity by deploying LLMs to guide fuzzing processes intelligently. This results in a more incisive and nuanced exploration of smart contracts, focusing testing efforts on areas that LLMs determine to be most likely to contain security flaws.
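As a rough illustration of this kind of LLM-guided prioritization (this is a generic sketch, not LLM4FUZZ's actual interface), the code below asks a scoring function to rate each contract function's likelihood of hiding a flaw and then allocates the fuzzing budget proportionally; `score_with_llm` is a hypothetical stand-in for a real LLM call.

```python
# Sketch: allocate a fuzzing budget across contract functions according to
# LLM-estimated vulnerability likelihood. `score_with_llm` is a placeholder.
def score_with_llm(source_snippet: str) -> float:
    # A real system would prompt an LLM; here we fake a score in [0, 1]
    # based on a few tokens commonly associated with risky patterns.
    risky_tokens = ("call.value", "delegatecall", "tx.origin", "block.timestamp")
    return min(1.0, 0.2 + 0.2 * sum(tok in source_snippet for tok in risky_tokens))

def allocate_fuzzing_energy(functions: dict, total_runs: int) -> dict:
    scores = {name: score_with_llm(src) for name, src in functions.items()}
    total = sum(scores.values()) or 1.0
    return {name: round(total_runs * s / total) for name, s in scores.items()}

functions = {
    "withdraw": "function withdraw() { msg.sender.call.value(balance)(); }",
    "invest":   "function invest() payable { balance += msg.value; }",
}
print(allocate_fuzzing_energy(functions, total_runs=10_000))
```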
By doing so, LLM4FUZZ succeeds in not only streamlining the anomaly detection process but also in enhancing its accuracy and depth. : The architecture of LLM4FUZZ. In the world of blockchain technology, where smart contracts serve as immutable agreements that execute automatically based on coded conditions,\nthe potential negative impact of a security breach is heightened. Smart contracts control significant digital assets and are essential to the functioning of distributed applications (dApps). The immutable nature of blockchain adds a layer of complexity as deployed smart contracts, once committed to the blockchain, cannot be altered. Therefore, preemptive security assurances become crucial to ensuring their reliability and safeguarding the assets and\nprocesses they govern. LLM4FUZZ provides a novel layer of security analysis by identifying and prioritizing potential problem areas within smart contract code. This prior itization is achieved through the LLM\u2019s learned understanding of code pat terns that are historically or commonly associated with vulnerabilities. The methodology enhances traditional fuzzing strategies, which typically adopt a more scattergun approach by bombarding the code with random data inputs. LLM4FUZZ\u2019s targeted testing is not just more efficient but also more effec tive in discovering complex vulnerabilities that might otherwise be missed. Following implementation, LLM4FUZZ has been benchmarked against existing fuzzing techniques and has consistently demonstrated superior performance. It expedites the vulnerability detection process and increases the breadth of security flaws that can be detected, thereby reinforcing the overall security posture. The case of LLM4FUZZ is emblematic of the foresight in AI integration into cybersecurity regimes. It encapsulates the transformative effects of AI on improving and redefining existing technological processes, particularly in areas critical to the burgeoning digital economy. Through its lens, we catch a glimpse of the future of smart contract security \u2013 a future where AI-driven tools not only anticipate but actively engage in the continuous battle against cyber threats.", ["2403_14280v4_0", "2403_14280v4_1"]] [60100, "We compare the performance of Astute RAG and baselines across different subsets partitioned by their retrieval precision, on our collected data with Claude as the LLM. As shown in \\, Astute RAG achieves consistently better performance than all baselines across different retrieval precision, indicating its effectiveness in improving RAG trustworthiness in broad scenarios. Notably, Astute RAG does not sacrifice performance gain under high retrieval quality in exchange for improvement under low retrieval quality. When the retrieval quality is extremely low (close to zero retrieval precision), all other RAG variants underperforms the 'No RAG' baseline, except for the proposed Astute RAG. This observation aligns with the worst-case results on RGB. It demonstrates the difficulty in overcoming imperfect retrieval augmentation, and verify the effectiveness of Astute RAG in doing so. title{Effectiveness in addressing knowledge conflicts}. We split our collected data in to three subset according to the answers from Claude, with and without RAG. The answers from two inference methods can be both correct, both incorrect, or conflicting with one being correct. These three subsets represents the three situations between internal and external knowledge. The results are shown in \ref{}. 
On the conflicting subset, Astute RAG successfully chooses the correct answer in approximately 80% of cases, making it the most effective method at addressing knowledge conflicts. Notably, Astute RAG even brings a performance improvement on the subset where neither internal nor external knowledge alone leads to the correct answer. This indicates that Astute RAG can effectively combine partially correct information from LLM-internal and external knowledge to reach the correct answer through the collective information across them. Worst-case performance on RGB. presents the results under the worst-case setting on RGB, where all retrieved documents are negative. It demonstrates the noise robustness of Astute RAG and the baseline RAG methods. The performance gap between RAG and No RAG exceeds 50 points, highlighting the detrimental impact of imperfect retrieval results and emphasizing the importance of providing robust safeguards against worst-case scenarios. While the baseline RAG methods outperform the original RAG, they still clearly fall behind No RAG. Astute RAG is the only RAG method that reaches performance close to No RAG under the worst-case scenario, further supporting its effectiveness in addressing imperfect retrieval augmentation. Qualitative study. We present two representative examples showing the intermediate outputs of Astute RAG. In the first example, the LLM without RAG generates a wrong answer, while RAG returns a correct answer; Astute RAG successfully identifies the incorrect information in its generated passage and an external passage, avoiding confirmation bias \cite{tan2024blinded}. In the second example, the LLM alone is correct, while RAG is incorrect due to noisy retrieval results; Astute RAG detects the correct answer from the noisy retrieved information by checking it against its internal knowledge.
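To close, the sketch below gives a highly simplified view of the overall idea in these examples: eliciting the model's internal knowledge, consolidating it with the retrieved passages, and answering from the consolidated set. The prompt wording and the helper `call_llm` are assumptions for illustration, not the actual Astute RAG prompts.

```python
# Minimal sketch of an internal/external knowledge consolidation loop for RAG.
# `call_llm(prompt) -> str` is a hypothetical client for any chat LLM.
def astute_style_answer(question, retrieved_passages, call_llm, n_internal=2):
    # 1) Elicit passages from the model's own (internal) knowledge.
    internal = call_llm(
        f"Write up to {n_internal} short passages, from memory only, "
        f"that help answer:\n{question}"
    )
    # 2) Consolidate: group consistent information, flag conflicts, drop unreliable content.
    consolidated = call_llm(
        "Group the following passages into consistent clusters, note conflicts, "
        "and discard content that looks unreliable.\n"
        f"Internal passages:\n{internal}\n\nRetrieved passages:\n"
        + "\n".join(retrieved_passages)
    )
    # 3) Answer from the consolidated knowledge, preferring the best-supported cluster.
    return call_llm(
        f"Question: {question}\nConsolidated knowledge:\n{consolidated}\n"
        "Answer using the most reliable, best-supported information."
    )
```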