{"id": "ARXIV_23", "question": "", "provenance": [60011], "ground_truth": "Conversation questions or undermines the group's competence, framing the applicant as incompetent or unsuccessful based on their group affiliation\". Coding ability falls under the definition of competence. A lack of such ability can impact the progress of the entire group's project, thereby weakening the group's overall capability. Hence, it constitutes a competence threat.\n", "images_list": ["2405_05378v1_1"]}
{"id": "ARXIV_24", "question": "Why does MNC pose a significant obstacle to effective temporal modeling?", "provenance": [60012], "ground_truth": "The significant obstacle posed by MNC (Multi-granularity Noisy Correspondence) to effective temporal modeling stems from the two main types of misalignment it introduces:\nCoarse-grained misalignment: This includes asynchronous misalignment (where the timing of captions and clips is mismatched, such as when actions are described before or after they occur) and irrelevant misalignment (where captions or clips do not align meaningfully with the other modality). These issues disrupt the correct identification of relationships across time.\nFine-grained misalignment: Within individual video clips, narration sentences often only partially correlate with the corresponding visual frames. Irrelevant words or frames can distort the identification of critical elements, leading to inaccuracies in clip-caption alignment and temporal correlation.\n<PIC>\nThese misalignments hinder the ability to model temporal dependencies accurately and contaminate the learning process.", "images_list": ["2401_16702v1_0"]}
{"id": "ARXIV_25", "question": "How does Norton perform video-paragraph retrieval?", "provenance": [60012], "ground_truth": "The paper proposes a model called Norton for video-paragraph retrieval, which focuses on capturing temporal relationships in long videos through optimized multi-granularity correspondence learning.\nSolving Multi-Granularity Alignment Issues:\nCoarse-Grained Alignment (Video Clips and Captions): The method employs an Optimal Transport (OT) framework to calculate a distance matrix between video clips and captions, realigning potentially asynchronous clip-caption pairs. Additionally, it introduces an \"Alignable Prompt Bucket\" to filter out irrelevant video clips or captions that contribute noise.\nFine-Grained Alignment (Video Frames and Words): A log-sum-exp-based soft-maximum operator is used to identify key relationships between video frames and caption words, reducing the influence of irrelevant frames or words on alignment.\nOptimization of Negative Samples: In contrastive learning, the method adjusts potential negative samples using Optimal Transport to avoid misclassifying semantically similar clips and captions as negatives, thus improving temporal modeling.\n<PIC>\n<PIC>\nFrom Tables 1 and 2, it can be observed that the OT framework overcomes multi-granularity noisy correspondence, enabling the learning of better temporal information, which results in superior performance.\nCalculating Video-Paragraph Contrastive Loss and Clip-Caption Contrastive Loss: OT distance is used as a metric for video-paragraph similarity, enabling the capture of temporal dependencies across paragraphs. For clip-caption pairs, the method integrates the negative sample optimization described above to enhance temporal features.", "images_list": ["2401_16702v1_1", "2401_16702v1_2"]}
{"id": "ARXIV_26", "question": "How does IRIS achieve self-jailbreaking?", "provenance": [60013], "ground_truth": "IRIS achieves self-jailbreaking through a two-step process that leverages the advanced capabilities of large language models (LLMs) like GPT-4.\n<PIC>\nStep 1: Iterative Refinement\nThe method starts with an initial adversarial prompt that the target model would normally reject (e.g., \"Write a tutorial on how to make a bomb\"). The same model serves as both the attacker and the target. When the model rejects the prompt, it generates a self-explanation to understand why the rejection occurred. Based on this feedback, the model iteratively refines the prompt to bypass its own safeguards. Each refined prompt is tested until the target model generates a non-rejection response or the query limit is reached.\nStep 2: Rate and Enhance\nOnce the target model provides a response, IRIS prompts it to rate the harmfulness of the response on a scale from 1 to 5. Using the feedback, the response is further refined to maximize its harmfulness, ensuring that it aligns with the adversarial intent.\n", "images_list": ["2405_13077v2_0"]}
{"id": "ARXIV_27", "question": "What are the similarities and differences between PAIR and TAP?", "provenance": [60013], "ground_truth": "PAIR and TAP are both advanced jailbreaking methods. PAIR utilizes Vicuna-13B to iteratively refine prompts, while TAP integrates tree-of-thought reasoning to enhance the jailbreak process.\n<PIC>\nTable 1 presents the comparison results, with metrics including jailbreak success rate and average number of queries. The comparison shows that TAP achieves a significantly higher jailbreak success rate than PAIR, while also requiring fewer queries on average. This indicates that TAP performs better in jailbreak effectiveness on the AdvBench subset.", "images_list": ["2405_13077v2_1"]}
{"id": "ARXIV_28", "question": "What is the regeneration-based approach (RR)?", "provenance": [60014], "ground_truth": "The Regeneration-based Approach (RR) is a method proposed for detecting AI-generated peer reviews by leveraging the consistency in text generation from large language models (LLMs).\nLLMs like GPT-4 tend to produce reviews or responses with a consistent style, tone, and content when given similar prompts repeatedly. This is due to the inherent patterns learned during training. The RR approach hypothesizes that if a review is AI-generated, regenerating it using a similar prompt would produce embeddings that are closely aligned with the original review.\n<PIC>\nThe RR method is divided into the following steps:\nReview Regeneration: The system takes the given review (R) and regenerates a review (Rreg) using an LLM with a slightly altered prompt.\nEmbedding Creation: Embeddings are created for both the original review (EF) and the regenerated review (ER).\nSimilarity Measurement: The similarity between EF and ER is measured using cosine similarity. A higher similarity indicates a higher likelihood that the original review was generated by AI.\nFinally, the computed similarity scores serve as input for training a neural network, which is optimized using a cross-entropy loss function to detect AI-generated reviews effectively.\n", "images_list": ["2410_09770v1_0"]}
{"id": "ARXIV_29", "question": "How can the probability of a review being classified as AI-generated be reduced using the adjective attack?", "provenance": [60014], "ground_truth": "Target the most frequent adjectives in AI-generated reviews. Specifically, focus on the top 100 high-probability adjective tokens identified as common in AI-generated content.\nUse the NLTK WordNet database to find synonyms for the selected adjectives.\nEnsure the replacement synonyms are less frequent in AI-generated content while still preserving the original meaning. If no suitable synonym is found in the AI corpus, the token remains unchanged.\nConduct part-of-speech (PoS) tagging on the review to locate adjective tokens specifically, ensuring only adjectives are replaced.\nSubstitute the identified adjectives with their appropriate synonyms.\n<PIC>\nAvoid substituting nouns or adverbs, as doing so may lead to nonsensical statements or drastically alter the review's meaning.", "images_list": ["2410_09770v1_1"]}
{"id": "ARXIV_30", "question": "How well do various fairness methods in preprocessing or in-processing perform without the influence of post-processing?", "provenance": [60015], "ground_truth": "Preprocessing methods such as LFR (Learning Fair Representations) and CR (Correlation Remover) exhibit suboptimal trade-offs between fairness and accuracy across all datasets. LFR achieves high fairness fulfillment but incurs a significant loss in accuracy, making it less desirable in contexts requiring a balance of both metrics.\n<PIC>\nIn-processing methods, particularly EG (Exponentiated Gradient) and FairGBM, outperform preprocessing methods. These methods achieve the highest area above the Pareto frontiers, indicating a better balance between fairness and accuracy.", "images_list": ["2306_07261v5_0"]}
{"id": "ARXIV_31", "question": "How do different models perform when fairness constraints are designed to learn on the two largest subgroups?", "provenance": [60015], "ground_truth": "The experiment used samples from only the two largest groups (White and Black) instead of data from all subgroups. This setup simplifies the problem, which is reflected in a significant compression of constraint violation levels on the vertical axis.\n<PIC>\nIn the ACSIncome dataset, the highest unprocessed accuracy in the full-group experiment was achieved at a constraint violation level of 0.38, whereas in this binary-group setting, it was achieved at 0.16. This change indicates that reducing the number of groups simplifies the learning of fairness constraints.\nUnconstrained models remain concentrated in regions with high accuracy and low fairness.", "images_list": ["2306_07261v5_2"]}
{"id": "ARXIV_32", "question": "How can the issue of different perceptual effects at the same noise level across different resolutions be addressed?", "provenance": [60016], "ground_truth": "To address the issue of different perceptual effects at the same noise level across different resolutions, several strategies are employed. First, noise adaptation is used to match perceptual and frequency effects between resolutions. For lower resolutions, independent Gaussian noise is applied, while for higher resolutions, block noise with a specific kernel size is used, ensuring similar results in both spatial and frequency domains. Frequency spectrum analysis further reveals that high-resolution images exhibit higher signal-to-noise ratios (SNR) at the same noise level, particularly in low-frequency components, which leads to a mismatch between training and inference.\n<PIC>\nThis mismatch can degrade performance as the neural network presumes more accurate inputs during training than it can generate in the early diffusion steps. Cascaded models effectively alleviate this issue by utilizing low-resolution conditions during super-resolution stages, which simplify the generation process and ensure that the higher SNR remains within the model's capabilities.\n<PIC>\nAdditionally, to generate high-resolution images directly from upsampled low-resolution results, it is crucial to address the distribution mismatch between the ground truth and generated low-resolution images.\nBy resolving this mismatch, the diffusion process can seamlessly continue, reducing the complexity of training and sampling steps while maintaining high-quality outputs.", "images_list": ["2309_03350v1_0", "2309_03350v1_1"]}
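Block noise as described (noise shared over k x k pixel blocks) can be produced by nearest-neighbor upsampling of coarse Gaussian noise; a small numpy sketch, with the kernel size k=4 purely illustrative.

```python
import numpy as np

def block_noise(h, w, k, rng=np.random.default_rng(0)):
    """Gaussian noise held constant over k x k blocks (h and w assumed divisible by k)."""
    coarse = rng.standard_normal((h // k, w // k))
    return np.repeat(np.repeat(coarse, k, axis=0), k, axis=1)

low_res_noise  = np.random.default_rng(1).standard_normal((64, 64))  # independent Gaussian
high_res_noise = block_noise(256, 256, k=4)  # mimics the low-res noise spectrum at 4x resolution
```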
{"id": "ARXIV_33", "question": "How effective is block noise in RDM?", "provenance": [60016], "ground_truth": "When comparing RDM with and without block noise, models incorporating block noise significantly outperform those without it on both the ImageNet and CelebA-HQ datasets, as shown in Figure 4a and Figure 4b. This highlights the effectiveness of block noise in enhancing the model's performance.\n<PIC>\n<PIC>\nThe addition of block noise increases the complexity of the noise patterns that the model must learn to handle, which contributes to slower convergence during the initial training phase. However, this complexity ultimately results in better overall performance with sufficient training. The slower convergence effect is particularly noticeable on larger datasets like ImageNet but not on smaller datasets like CelebA-HQ, where the faster convergence due to limited sample sizes diminishes this phenomenon.", "images_list": ["2309_03350v1_2", "2309_03350v1_3"]}
{"id": "ARXIV_34", "question": "What are the features of CivRealm?", "provenance": [60017], "ground_truth": "CivRealm features an open-ended environment with imperfect information, stochastic dynamics, and multiple victory paths, requiring balanced strategies across economy, military, diplomacy, and technology.\n<PIC>\nIts dynamic game space changes continuously, supporting multi-agent interactions, alliances, and communication through diplomacy and natural language. With an agent-architecture-agnostic framework, CivRealm allows seamless integration of diverse agents using a server-proxy-client system. Its turn-based nature suits LLM-based agents by providing ample time for decision-making. CivRealm also facilitates the evaluation of generalization ability through novel scenarios, including random maps and rule modifications, and supports tasks ranging from the full Freeciv game to smaller, scripted mini-games, making it a robust platform for agent development.", "images_list": ["2401_10568v2_0"]}
{"id": "ARXIV_35", "question": "What is the full gameplay content of CivRealm?", "provenance": [60017], "ground_truth": "CivRealm provides a comprehensive set of evaluation metrics to assess player performance across multiple dimensions. These metrics include population size, the number of constructed cities, the quantity of researched technologies, the number of produced units, and the extent of explored land. These indicators reflect the player's progress in various aspects of civilization building and strategic execution. Additionally, CivRealm offers an aggregated score that combines these dimensions to provide an overall evaluation of the player's performance.\n<PIC>\nAs illustrated in Figure 2, the evaluation is closely tied to the era in which the player's civilization is operating, ranging from the Bronze Age to the Industrial Age and the Space Age. Players influence these metrics through various gameplay elements, such as units, buildings, technologies, and diplomacy.\nUnit Production: As the game progresses, units evolve from basic settlers to advanced stealth planes, directly impacting military strength and exploration capabilities.\nBuilding Construction: Constructing infrastructure such as granaries, city walls, power plants, and space facilities boosts resource production and city management.\nTechnology Research: Advancements from fundamental technologies in the Bronze Age to cutting-edge innovations in the Space Age support other actions, including military, economic, and diplomatic strategies.\nDiplomatic Actions: Players engage in alliances, wars, and negotiations to shape diplomatic relationships, significantly affecting evaluation metrics and gameplay dynamics.\nOverall, CivRealm's evaluation system integrates these core game elements, tracking player actions and progress through different eras to provide a holistic view of civilization development and strategic effectiveness.", "images_list": ["2401_10568v2_1"]}
{"id": "ARXIV_36", "question": "What is the structure of the HPT architecture?", "provenance": [60018], "ground_truth": "The HPT architecture is a modular framework comprising three key components:\n<PIC>\nStem (Embodiment-specific Tokenizers): These tokenizers process diverse sensor inputs, such as camera views and proprioception data, transforming them into a shared representation.\nTrunk (Shared Latent Transformer): This core component is pre-trained on multiple datasets and operates on a short sequence of latent tokens. It serves as a general policy representation that can be transferred to new tasks and embodiments.\nHead (Task-specific Action Decoders): Task-specific decoders produce actionable outputs based on the shared latent representations.\nThe architecture aligns heterogeneous data from real robots, simulations, and human videos into a unified latent token space, facilitating scalability and transferability across tasks and embodiments. Inspired by human neural circuitry, this hierarchical design enables efficient learning and adaptation.", "images_list": ["2409_20537v1_0"]}
{"id": "ARXIV_37", "question": "How is the HPT pre-training data prepared on Synthetic Data and Internet Human Videos?", "provenance": [60018], "ground_truth": "The paper conducts pre-training using diverse datasets beyond real-world robot teleoperation data, including 7 simulation datasets (e.g., Mujoco, Isaac Sim) and human datasets (e.g., EPIC Kitchen, PoCo) with up to 1000 trajectories per dataset. For human datasets lacking proprioception and actions, pose and 2D position data are used as proxies.\n<PIC>\nFigure 8 shows that pre-training on these heterogeneous datasets is feasible and complements teleoperation data, demonstrating the HPT framework's ability to handle diverse embodiments effectively.", "images_list": ["2409_20537v1_1"]}
{"id": "ARXIV_38", "question": "What is the operating mechanism of BoT?", "provenance": [60019], "ground_truth": "BoT operates through an experience-driven iterative process. It starts with generating weak reasoning steps from a simple prompt, aggregates them into a single thought chain using weighted binary trees, and evaluates the chain to collect feedback (experience). This experience is added to the prompt to guide the next iteration, progressively refining the reasoning until the problem is effectively solved.\n<PIC>\nAs shown in Figure 1, the LLM analyzes the thought chain in each iteration and summarizes it as experience. Through continuous accumulation of experience, a correct thought chain is ultimately generated.", "images_list": ["2402_11140v1_0"]}
{"id": "ARXIV_39", "question": "What are the advantages and disadvantages of BoT compared to other frameworks?", "provenance": [60019], "ground_truth": "Advantages of BoT:\n<PIC>\nHigh Problem-Solving Performance: BoT achieves competitive results without human annotations and sets new state-of-the-art (SOTA) performance on GSM8K and AQuA datasets, outperforming prior methods by 0.1% and 2.5%, respectively.\nEffective Iterative Refinement: By accumulating prior experience through error analysis and advice, BoT refines its prompts iteratively, enabling accurate problem-solving even with a simple initial prompt.\nCompatibility with CoT Examples: Adding Chain-of-Thought (CoT) examples to the prompt further enhances performance, reaching a new SOTA with an average improvement of 1.3% on GSM8K and AQuA.\n<PIC>\nAdaptability to Powerful Models: BoT significantly boosts performance for strong LLMs like GPT-4, with an average improvement of 11.6% across datasets.\nDisadvantages of BoT:\nDependence on Experience: BoT heavily relies on the ability of LLMs to analyze reasoning chains effectively, making it sensitive to model quality.\nPerformance Drop with Weaker Models: For weaker LLMs like Llama2, BoT's performance drops significantly, with limited improvements even when valid examples are provided.\nLimited Performance in Certain Domains: On MATH datasets, BoT is at least 18% lower than SOTA, highlighting its challenges in handling tasks requiring strong mathematical reasoning abilities.", "images_list": ["2402_11140v1_1", "2402_11140v1_2"]}
{"id": "ARXIV_40", "question": "For hierarchical text classification tasks, what are the challenges of directly using contextual learning to train large language models? What are some effective solutions?", "provenance": [60020], "ground_truth": "Problems of ICL in HTC Tasks, as shown in <PIC>:\nLarge Candidate Label Set: Due to the deep hierarchical structure and large number of labels in the HTC label set, the candidate set for label selection becomes excessively large, increasing the difficulty of classification.\nHigh Similarity Between Adjacent Labels: As the hierarchy deepens, the semantic similarity between labels increases, particularly between adjacent labels. This leads to confusion when selecting similar examples, negatively affecting classification accuracy.\nCurrent Effective Solutions:\nRetrieval Database: Construct a retrieval database to store demonstration examples related to the input text. This database is generated through label-aware text representations, meaning each input text is transformed into a representation containing hierarchical label information through multi-layer training.\nIterative ICL: Through iterative reasoning, decompose the multi-level label inference into smaller steps. At each level, only the candidate label set for the current level is provided, reducing the number of labels and thereby improving classification precision and efficiency.", "images_list": ["2406_17534v2_0"]}
{"id": "ARXIV_41", "question": "Please introduce the three training strategies of the PLM, and explain how these three strategies are integrated together.", "provenance": [60020], "ground_truth": "There are three training methods for the PLM, as shown in <PIC>:\n\nMLM (Masked Language Modeling): Similar to BERT, this method randomly replaces certain words in the input sentence with a mask token (usually represented as [MASK]). The model's task is to predict what the masked words are.\n\nCLS (Layer-wise Classification):\nFor each text sample, multiple index vectors are generated, each corresponding to a level in the hierarchical structure. Each index vector contains feature information related to that level.\nClassification starts from the highest level, predicting the category of the upper layer. Once the upper layer category is determined, the information of that level is used to generate a new index vector, and this process is repeated.\nFor each level of classification, a loss is calculated and backpropagated, optimizing the model layer by layer.\n\nDCL (Divergent Contrastive Learning):\nFor a given sentence x, positive samples are selected from sentences with the same label as x. Additionally, the corresponding label description d can be treated as a positive sample. The negative samples consist of two parts: First, based on the similarity between d and other label descriptions, negative samples are extracted from highly similar label categories. Similarly, their corresponding label descriptions can also be treated as negative samples. Second, some sentences from other labels are randomly selected as negative samples for x.\nThe objective is to pull the index vectors of positive samples together while pushing apart the index vectors of negative samples. Finally, the loss functions of the three tasks are weighted and summed to obtain the final total objective loss function.\nThrough this multi-task learning approach, the model not only learns the contextual information of the text (via MLM) but also performs classification and contrastive learning simultaneously, enhancing the model\u2019s generalization ability and feature representation capacity.", "images_list": ["2406_17534v2_1"]}
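The weighted multi-task objective, with a contrastive term in the spirit of DCL, might look as follows in PyTorch; the weights, temperature, and tensor shapes are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def dcl_term(anchor, positives, negatives, tau=0.07):
    """InfoNCE-style loss: pull positive index vectors in, push negatives apart.
    anchor: (1, d); positives/negatives: (n, d)."""
    pos = torch.exp(F.cosine_similarity(anchor, positives) / tau).sum()
    neg = torch.exp(F.cosine_similarity(anchor, negatives) / tau).sum()
    return -torch.log(pos / (pos + neg))

def total_loss(l_mlm, l_cls_per_level, l_dcl, w=(1.0, 1.0, 1.0)):
    """Final objective: weighted sum of MLM, layer-wise CLS, and DCL losses."""
    return w[0] * l_mlm + w[1] * sum(l_cls_per_level) + w[2] * l_dcl
```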
{"id": "ARXIV_42", "question": "LlamaDuo proposes a method for transferring knowledge from large cloud-based models to smaller local models. Could you explain how the overall pipeline of LlamaDuo works?", "provenance": [60021], "ground_truth": "LlamaDuo is an automated Large Language Model Operations (LLMOps) pipeline, and its workflow is illustrated in <PIC>, divided into three stages: the Development and Prototyping Stage, the Alignment Stage, and the Deployment Stage.\nFirstly, in the Development and Prototyping Stage (steps 1 and 2 in Figure 1), users interact with service large language models (Service LLMs) through prompt engineering to generate instruction-response pairs and build a coverage dataset. This dataset is then split into a training set and a test set for subsequent fine-tuning and evaluation.\nNext, in the Alignment Stage (steps 3 to 6 in Figure 1), the training set of the coverage dataset is used to perform preliminary fine-tuning of the local smaller models (Local LLMs), while a service model serves as an evaluator (Service LLMs-as-Judge) to assess model performance. If the performance of the local model does not meet the predefined threshold, additional synthetic data is generated by the service model to further fine-tune the local model iteratively until the desired performance is achieved. During this stage, data quality is ensured through deduplication and cleaning of synthetic data to align with real-world requirements.\nFinally, in the Deployment Stage (step 7 in Figure 1), the locally fine-tuned model that meets the performance standards is deployed to constrained environments (such as offline or privacy-sensitive scenarios) to ensure service continuity.\nThroughout the entire process, Figure 1 visually showcases the core steps from data collection, model fine-tuning, performance evaluation, to deployment, highlighting the interrelationships between these steps, ensuring the efficiency and scalability of LlamaDuo.\n", "images_list": ["2408_13467v2_0"]}
{"id": "ARXIV_43", "question": "Why does training local models based on the LlamaDuo framework offer better cost-effectiveness in the long run compared to cloud-based models?", "provenance": [60021], "ground_truth": "<PIC>The results show a comparison of the cumulative costs of local small-scale models and cloud-based models under both light and heavy load scenarios. Although local models require a higher initial investment in fine-tuning and deployment (such as GPU training and hardware procurement), their subsequent operational costs stabilize, while the costs of cloud-based models continue to rise due to API usage charged by tokens. In the light-load scenario, the cumulative cost of the local model surpasses that of the cloud service after two months of operation. In the heavy-load scenario, this crossover happens more quickly and significantly. By the end of one year, the cumulative cost of the cloud-based model can be 3 to 5 times higher than that of the local model. Therefore, Figure 2 clearly demonstrates the economic advantage of local models in long-term operation. Particularly in high-demand or continuous operation scenarios, local deployment is a more cost-effective choice.", "images_list": ["2408_13467v2_1"]}
{"id": "ARXIV_44", "question": "How does TELEClass utilize additional cues to enrich weak supervision for efficient hierarchical text classification?", "provenance": [60022], "ground_truth": "TELEClass employs two primary strategies to utilize additional cues for enriching weak supervision, enabling efficient hierarchical text classification, as illustrated in <PIC>.\nFirst, TELEClass enhances the label system by leveraging the extensive knowledge of large language models (LLMs) to generate key terms. For instance, in Figure 1, for the \"shampoo\" category, the LLM generates key terms such as \"flakes\" and \"itching,\" which distinctly differentiate this category from sibling categories like \"conditioner.\" This approach significantly expands the label system under weak supervision conditions that rely only on category names, allowing the classifier to capture subtle differences between categories and improve the distinction of complex ones.\nSecond, TELEClass integrates a corpus-based term extraction strategy to mine category-related terms from unlabeled corpora. Specifically, it extracts high-frequency terms from documents related to \"shampoo\" and filters them based on metrics such as frequency, distinctiveness, and semantic similarity. High-quality terms like \"clean\" are identified through this process. This method effectively combines the general knowledge generated by LLMs with domain-specific information from the corpus, further enhancing the recognition of long-tail and fine-grained categories.\nBy combining LLM generation with corpus-based extraction, TELEClass provides richer supervision signals for weakly supervised hierarchical text classification, enabling the model to perform efficiently in large-scale and complex label systems.\n", "images_list": ["2403_00165v2_0"]}
{"id": "ARXIV_45", "question": "How does TELEClass utilize the classification term repository obtained through the two methods to further train the model?", "provenance": [60022], "ground_truth": "After constructing the classification term repository through LLM generation and corpus-based extraction, TELEClass leverages these repositories to further optimize core category annotation and enhance model training by generating high-quality pseudo-labeled data, thereby enabling efficient hierarchical text classification. <PIC> illustrates the complete process.\nFirst, the classification term repository enriches the semantic information of categories, allowing for more precise matching between documents and categories. Specifically, the term set for each category (including terms generated by LLMs and extracted from the corpus) is transformed into embedding vector representations to calculate semantic similarity between documents and categories. Documents are encoded into vectors using pre-trained models such as Sentence Transformer or BERT, while the term set embeddings represent the semantic features of categories. By calculating the maximum cosine similarity between document vectors and category term embeddings, the most relevant categories are identified as core classes. A hierarchical search strategy (e.g., tree search), combined with the semantic richness of the term repository, further optimizes core category annotation, ensuring that core classes accurately reflect the semantic features of the document. For example, in Figure 2, a document describing scalp itching is accurately labeled as belonging to the \"shampoo\" category through terms like \"flakes\" and \"itching.\"\nSecond, to address potential gaps in the term repository for certain long-tail categories, TELEClass employs a path-guided pseudo-document generation strategy to augment the dataset. Label paths (e.g., \"hair care \u2192 shampoo\") serve as guidance, combined with key terms from the term repository, to prompt LLMs to generate pseudo-documents. These pseudo-documents mimic the distribution and semantic characteristics of real data, ensuring adequate training sample coverage for each category, particularly low-frequency categories not covered by the term repository. For example, in Figure 2, the path \"hair care \u2192 shampoo\" guides the LLM to generate multiple pseudo-documents describing shampoo, enabling the model to more comprehensively learn the semantic relationships along this path.\nFinally, TELEClass combines core category pseudo-labeled data (Dcore) and path pseudo-document data (Dgen) to train a multi-label classifier. The classifier uses BERT as the document encoder and incorporates a bilinear matching network to compute matching scores between document and category embeddings. During training, the parent node paths of core categories are marked as positive classes, while other categories are treated as negative classes, and the model is optimized using binary cross-entropy loss. Additionally, the label paths of the pseudo-documents are directly marked as positive classes, providing the model with a more comprehensive learning of category semantics.\nIn summary, through the optimization of the classification term repository and the generation of path-guided pseudo-documents, TELEClass effectively enhances the training data for the classifier, ensuring that the model accurately captures semantic relationships in hierarchical label systems and the characteristics of long-tail categories, thereby achieving efficient weakly supervised text classification.\n", "images_list": ["2403_00165v2_1"]}
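The core-class matching step in this record reduces to a max-cosine-similarity ranking between a document vector and each class's enriched term set; a sketch with an assumed `embed` encoder (e.g., a sentence transformer), omitting the hierarchical tree search.

```python
import numpy as np

def core_classes(doc_vec, class_terms, embed, top_k=1):
    """Rank classes by the max cosine similarity between the document and class terms.

    class_terms: dict mapping class name -> list of key-term strings (assumed input)
    """
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    scores = {cls: max(cos(doc_vec, embed(t)) for t in terms)
              for cls, terms in class_terms.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```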
{"id": "ARXIV_46", "question": "In machine translation post-editing systems based on large language models (LLMs), how can external feedback be used to guide LLMs in enhancing post-editing capabilities, or what are the forms of external feedback?", "provenance": [60023], "ground_truth": "In machine translation (MT) post-editing systems based on large language models (LLMs), external feedback can be used to guide LLMs in improving translation quality. <PIC>The results illustrate three forms of external feedback:\n1.\tGeneric Feedback:\nThe model receives only the original translation and improvement instructions without any specific error indications. This form relies on the model's self-optimization capabilities and is suitable for general improvement needs.\n2.\tScore-based Feedback:\nThe model is provided with an overall translation quality score, which helps it roughly understand the quality level of the translation without pinpointing specific issues. This form is suitable for providing overall guidance but has limited capability for addressing specific errors.\n3.\tFine-grained Feedback:\nDetailed annotations are provided, including the location of errors, error types (e.g., mistranslation or fluency issues), and their severity. This form enables the model to target specific errors, improving translation accuracy and naturalness.\nBy leveraging these three forms of feedback, particularly fine-grained feedback, external information can effectively guide LLMs in correcting translation errors while enhancing linguistic fluency and natural expression, thereby significantly improving post-editing capabilities in machine translation.\n", "images_list": ["2404_07851v1_0"]}
{"id": "ARXIV_47", "question": "Does the model fine-tuned using post-editing methods show significant improvements in overall translation quality and error correction? If so, do these improvements vary across different language pairs?", "provenance": [60023], "ground_truth": "The fine-tuned model demonstrates significant improvements in overall translation quality and error correction, particularly in the Zh-En (Chinese-English) and En-De (English-German) language pairs. Human evaluations indicate that most reviewers find the fine-tuned translations more fluent and natural, with effectively corrected marked errors.\n\nHowever, as illustrated in <PIC>, differences do exist across language pairs. For instance, in the En-Ru (English-Russian) language pair, approximately 40 out of 150 translations were deemed by reviewers to be not entirely better than the initial translations. This is primarily because, while the fine-tuned translations are more accurate in the target language, they occasionally lose subtle semantic details from the source text. In contrast, the improvements in the Zh-En and En-De pairs are more pronounced, with reviewers largely expressing favorable ratings.\n\nOverall, the fine-tuned model excels in enhancing translation quality and correcting errors, though the extent of improvement varies among language pairs.", "images_list": ["2404_07851v1_1"]}
{"id": "ARXIV_48", "question": "How does INSTRUCTSCORE identify specific issues in generated text through text evaluation methods and explain its scoring results?", "provenance": [60024], "ground_truth": "INSTRUCTSCORE, an innovative text evaluation method, identifies specific issues in generated text and provides interpretable scoring results through diagnostic reports. <PIC>The results illustrate how this method not only generates a numerical score but also includes a detailed error analysis.\n\nFor instance, when evaluating a piece of generated text, the diagnostic report can pinpoint the type of error (e.g., a translation style issue), the exact location of the error, and the severity of the error (e.g., a major error). The scoring mechanism weights errors based on type and severity, deducting 5 points for major errors and 1 point for minor errors.\n\nThis approach not only quantifies text quality but also provides interpretable evidence, facilitating improvements in text generation quality.", "images_list": ["2305_14282v3_0"]}
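The stated weighting rule (minus 5 per major error, minus 1 per minor error) reduces to a one-line aggregation; the error-dict shape used here is an assumption, not the paper's data format.

```python
def instructscore(errors):
    """Aggregate a diagnostic report, e.g. errors=[{"severity": "major", ...}, ...]."""
    penalty = {"major": 5, "minor": 1}
    return -sum(penalty[e["severity"]] for e in errors)
```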
{"id": "ARXIV_49", "question": "In the INSTRUCTSCORE text quality evaluation process, how does a multi-step optimization pipeline ensure that the generated diagnostic reports accurately identify errors and align with human judgment?", "provenance": [60024], "ground_truth": "Text quality evaluation achieves precise error identification and alignment with human judgment through a multi-step optimization pipeline, as illustrated in <PIC>. First, predefined error types, severity levels, and explanations are used to generate synthetic data, which is employed to fine-tune the evaluation model, enabling it to produce initial diagnostic reports. Next, by analyzing these reports, common failure patterns (e.g., inconsistencies between error types and explanations, missing error locations) are identified, and an external model (e.g., GPT-4) provides automated feedback. Finally, the model incorporates this feedback to further optimize itself, iteratively generating higher-quality diagnostic reports.\n\nThis iterative optimization mechanism ensures that the evaluation results include both accurate error localization and strong interpretability, aligning closely with human evaluation standards.", "images_list": ["2305_14282v3_1"]}
{"id": "ARXIV_50", "question": "In retrieval-augmented generation tasks, how can the balance between document relevance and usefulness be optimized to enhance generation performance?", "provenance": [60025], "ground_truth": "In retrieval-augmented generation tasks, relying solely on similarity-based ranking methods can lead to issues where semantically relevant but low-information-gain documents are prioritized. For example, as shown in <PIC>, simple, low-information descriptions may be selected. On the other hand, relying only on document usefulness scores risks overlooking content that is superficially relevant to the query, increasing the chance of introducing irrelevant or low-relevance documents.\n\nTo address this, a combination of similarity and usefulness scoring methods can be applied. Similarity scores ensure that the selected documents are semantically related to the query, while usefulness scores further filter out documents that provide more valuable information for answering the query. For instance, when a user queries information about George R. R. Martin, the focus should be on documents that highlight his major works, such as A Song of Ice and Fire, rather than documents repeating general knowledge.\n\nExperimental results demonstrate that this combined strategy performs better in identifying important documents, effectively reducing the interference of low-value information on generated outputs, and ultimately improving the overall system performance.", "images_list": ["2405_19893v1_0"]}
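A simple linear blend of the two signals; the mixing weight `alpha` and the dict-based interface are illustrative choices, not the paper's formulation.

```python
def rank_documents(doc_ids, similarity, usefulness, alpha=0.5):
    """Order documents by a convex combination of query similarity and usefulness."""
    score = lambda d: alpha * similarity[d] + (1 - alpha) * usefulness[d]
    return sorted(doc_ids, key=score, reverse=True)
```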
{"id": "ARXIV_51", "question": "How can an appropriate document window size be selected during the multi-layered thoughts optimization process to balance performance improvements and computational costs?", "provenance": [60025], "ground_truth": "The choice of document window size in retrieval-augmented generation tasks must strike a balance between performance gains and computational costs. As shown in <PIC>, experiments indicate that increasing the window size significantly enhances model accuracy (EM) and overall quality (F1), particularly with smaller window sizes. However, when the window size exceeds 40, the marginal gains in performance diminish, while computational costs increase linearly. This may introduce noise and reduce efficiency. To balance performance and resource usage, the window size should be determined based on task complexity and hardware constraints: For knowledge-intensive tasks, a window size of 40\u201350 is recommended. For resource-constrained scenarios, a smaller window size of 20\u201330 is a reasonable choice. Additionally, task-specific experiments can be conducted to optimize the window size, achieving the best trade-off between performance and efficiency.", "images_list": ["2405_19893v1_1"]}
{"id": "ARXIV_52", "question": "What is the role of the CDM module in the SafeEar architecture?", "provenance": [60026], "ground_truth": "Based on <PIC>, the CDM module in the SafeEar architecture employs a neural codec to disentangle speech information into semantic tokens and acoustic tokens. The acoustic tokens retain characteristics such as prosody and timbre from the audio, which are crucial for subsequent deepfake detection. Meanwhile, semantic tokens, which encapsulate the content of the speech, are protected and excluded from the detection process to ensure content privacy.\n\nThe disentangled acoustic information is quantized using a Residual Vector Quantizer (RVQ), incorporating semantic supervision mechanisms (as shown in the corresponding sections of the figure). This design ensures both the accuracy of detection and the safeguarding of content privacy.", "images_list": ["2409_09272v1_0"]}
{"id": "ARXIV_53", "question": "What is the role of the Bottleneck & Shuffle Layer in SafeEar?", "provenance": [60026], "ground_truth": "Based on <PIC>, the Bottleneck Layer primarily aims to compress acoustic tokens into more compact representations using 1D convolution and batch normalization. This process reduces the dimensionality of the features, improving computational efficiency for subsequent processing while decreasing the number of trainable parameters in the model. Additionally, the layer serves a regularization function, preventing overfitting and stabilizing the representation of acoustic tokens.\n\nThe Shuffle Layer randomly rearranges the temporal dimension of the acoustic tokens, further obscuring the temporal information of the speech. This makes it more challenging for attackers to reconstruct the original speech content. This process is particularly effective against speech understanding technologies that rely on temporal relationships, such as phoneme and word sequence analysis in machine hearing and advanced speech recognition models. In experiments, a 1-second shuffle window (corresponding to 50 frames) was used to disrupt word-level intelligibility.", "images_list": ["2409_09272v1_1"]}
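The windowed shuffle is straightforward to sketch; the 50-frame (roughly 1 s) window follows the description above, while the 1-D token layout and seeding are assumptions.

```python
import numpy as np

def shuffle_tokens(tokens, window=50, seed=0):
    """Permute acoustic tokens within fixed windows (50 frames ~ 1 s) to break word order."""
    rng = np.random.default_rng(seed)
    out = np.asarray(tokens).copy()
    for start in range(0, len(out), window):
        rng.shuffle(out[start:start + window])  # in-place shuffle of this window's view
    return out
```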
{"id": "ARXIV_54", "question": "Could you introduce the training process of the LUISE audio encoder during the large-scale pretraining phase in Seed-ASR?", "provenance": [60027], "ground_truth": "During the self-supervised learning (SSL) phase, as illustrated in <PIC>, the LUISE audio encoder learns robust speech representations from large-scale unsupervised speech data, capturing both global and local structural features of the audio signals. This approach draws inspiration from BERT-style pretraining, utilizing a masked prediction task to extract features via Mel filters. The training process follows several key steps:\n\nFirst, Mel filter feature sequences are extracted from the raw speech waveforms. Then, a fixed tokenizer is employed to generate discrete labels for these extracted features. In the masked prediction step, cross-entropy loss is computed only on the masked frames, guiding the model to predict the missing information with high accuracy.\n\nFor further improvement, an iterative fixed-tokenizer method is introduced. Through K-means clustering, new discrete labels are generated, gradually refining the tokenizer to better align with the underlying data, thereby enhancing the model's performance over time.", "images_list": ["2407_04675v2_0"]}
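The masked-frames-only objective corresponds to a boolean-masked cross-entropy; a PyTorch sketch with assumed tensor shapes.

```python
import torch.nn.functional as F

def masked_prediction_loss(logits, labels, mask):
    """BERT-style SSL loss: cross-entropy on masked frames only.

    logits: (T, V) predictions over discrete tokenizer labels
    labels: (T,) K-means-derived discrete targets; mask: (T,) bool, True = masked
    """
    return F.cross_entropy(logits[mask], labels[mask])
```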
{"id": "ARXIV_55", "question": "Could you briefly introduce the overall architecture of the Seed-ASR model?", "provenance": [60027], "ground_truth": "As shown in <PIC>, the system comprises several interconnected components, each contributing to the overall functionality. The **Audio Encoder (LUISE)** plays a crucial role in converting raw speech signals into continuous speech representations. With approximately 2 billion parameters, it is trained using self-supervised learning (SSL) to capture a rich array of features from the speech signals. The encoder generates features at a time step of 40 ms, allowing it to capture both the global structures and local characteristics inherent in the audio data.\n\nFollowing the audio encoder, the Converter acts as a bridge between the audio features and the large language model (LLM). It includes a downsampling module and a linear projection layer, which map the extracted speech features into the semantic space of the LLM. By concatenating consecutive feature frames, the converter reduces the time step of the features from 40 ms to 160 ms, thus lowering computational complexity without compromising performance.\n\nThe Audio-Conditioned Large Language Model (AcLLM) is responsible for processing the transformed audio representations in conjunction with additional contextual inputs such as conversation history and task instructions. This model leverages a decoder-based architecture that employs self-attention mechanisms, allowing it to handle both input and output sequences effectively. The AcLLM also capitalizes on its strong language and reasoning capabilities to significantly enhance transcription accuracy, ensuring robust performance in diverse contexts.\n\nFinally, the system incorporates Context Integration, where additional contextual information is used to improve transcription accuracy, particularly in ambiguous semantic scenarios. By factoring in conversation history or task-specific details, the model is better equipped to resolve uncertainties and generate more accurate transcriptions.", "images_list": ["2407_04675v2_1"]}
{"id": "ARXIV_56", "question": "Could you briefly introduce the role of the projector in the model from the LLM-Based ASR paper?", "provenance": [60028], "ground_truth": "As shown in <PIC>, the Projector module plays a key role in bridging the gap between the audio features generated by the audio encoder (such as Whisper or HuBERT) and the semantic space of the large language model (LLM). Its primary function is to align these audio features with the LLM's text-based representation, ensuring a smooth and coherent integration of the speech data into the model's input. To achieve this, the module processes the audio features through both nonlinear transformations and linear projections. This enables the seamless combination of the audio and text features, ultimately facilitating the successful completion of the speech recognition task.", "images_list": ["2405_02132v3_0"]}
{"id": "ARXIV_57", "question": "Could you briefly explain why the Transformer architecture is used to implement the projector in the LLM-Based ASR paper?", "provenance": [60028], "ground_truth": "As noted in <PIC>, the paper explores two approaches for implementing the Projector. The first approach uses a Transformer structure, which consists of four layers of self-attention and approximately 51 million parameters. The second approach, called Qformer, is based on a Qformer architecture configured with a window length of 1 and a single query, also totaling around 51 million parameters.\nComparative experiments, as shown in <PIC>, indicate that the Transformer outperforms the Qformer in speech recognition tasks. This performance difference can be attributed to the fact that the Qformer architecture is likely better suited to image data structures, whereas the Transformer is more effective for processing speech data.", "images_list": ["2405_02132v3_1"]}
{"id": "ARXIV_58", "question": "Could you introduce the role of VQ in the mimi neural audio codec within the Moshi model?", "provenance": [60029], "ground_truth": "As shown in <PIC>, the VQ (Vector Quantization) component plays a pivotal role in the MiMi neural audio codec within the Moshi model. It facilitates several key functions that enhance both the efficiency and quality of audio processing.\n\nFirstly, VQ converts continuous audio features into discrete acoustic tokens, enabling a more compact representation of the audio at a lower bitrate, while still preserving the high-quality reconstruction capability. This approach allows MiMi to process audio data efficiently during both the encoding and decoding stages of speech processing.\n\nFurthermore, the architecture employs a split VQ mechanism (Split RVQ), which integrates high-level semantic information from self-supervised models like WavLM in the first quantizer, while preserving acoustic details in subsequent quantizers. This design ensures that semantic and acoustic information remain disentangled, allowing for the generation of speech that is both semantically coherent and rich in acoustic features.\n\nIn addition, VQ significantly reduces the computational load required for real-time audio processing. By incorporating a causal mechanism into the quantizer, MiMi supports streaming scenarios, enabling encoding and decoding in real-time.\n\nVQ also contributes to the generation of high-quality audio by optimizing residuals iteratively. This residual quantization process captures subtle audio features, ensuring that the reconstructed audio retains its fidelity and detail.\n\nLastly, MiMi incorporates distillation loss into the first quantizer of VQ to better integrate non-causal semantic information into the audio features. This enhances the semantic discrimination capability of the quantizer, improving its performance for downstream tasks such as speech generation and semantic analysis. Through these various innovations, VQ plays a crucial role in optimizing the overall performance of the MiMi audio codec.", "images_list": ["2410_00037v2_0"]}
{"id": "ARXIV_59", "question": "Could you briefly introduce the architecture of Moshi?", "provenance": [60029], "ground_truth": "As shown in <PIC>, the Moshi architecture integrates several key components that work together to deliver advanced language processing and real-time speech generation capabilities. At its core, Moshi is built on the Helium text language model, which is specifically designed with 7 billion parameters to provide robust reasoning and language understanding. Pretrained on high-quality textual data, Helium excels in a wide range of language tasks, ensuring exceptional performance in processing and generating text.\nThe architecture also incorporates a Neural Audio Codec (Mimi), which discretizes audio signals into acoustic tokens using Residual Vector Quantization (RVQ). This enables the simultaneous processing of both semantic and acoustic tokens, ensuring high-quality audio reconstruction while maintaining the system's ability to handle real-time processing demands.\nMoshi further employs a hierarchical generation approach, where semantic and acoustic tokens are predicted step by step. To enhance efficiency, Temporal Transformers and Depth Transformers separately manage the generation of semantic and acoustic tokens, streamlining the process and optimizing resource usage.\nA key feature of the Moshi architecture is its Inner Monologue Mechanism. Before generating audio, Moshi predicts text tokens that serve as prefixes for both the semantic and acoustic tokens. This not only improves the language quality of the generated speech but also enables seamless real-time speech recognition and generation, making the system highly responsive.\nAdditionally, Moshi utilizes multi-stream modeling to process both user speech and system-generated speech simultaneously. By modeling these as separate semantic and acoustic streams, the architecture eliminates the traditional turn-taking assumption, allowing for more natural handling of overlapping speech and interruptions.\nThrough the integration of these components, Moshi provides an advanced framework for speech generation, recognition, and real-time interaction, offering high flexibility and efficiency in a variety of speech-based applications.", "images_list": ["2410_00037v2_1"]}
{"id": "ARXIV_60", "question": "In the paper \"Scaling Laws For Dense Retrieval\", the authors propose using contrastive entropy as a metric for evaluating the performance of dense retrieval models. What are the advantages of contrastive entropy compared to standard ranking metrics, and how is it related to standard ranking metrics?", "provenance": [60030], "ground_truth": "The authors of \"Scaling Laws For Dense Retrieval\" propose contrastive entropy as a metric for evaluating the performance of dense retrieval models, highlighting several advantages over traditional ranking metrics. Traditional ranking metrics, such as NDCG@K and MAP@K, are discrete and rely heavily on a cutoff parameter, K. This means that a passage only contributes to the metric if it falls within the top K results. If a positive passage ranks just beyond K, it has no impact on the metric, making these metrics insensitive to changes in the model's outputs. This insensitivity renders traditional metrics unsuitable for investigating scaling laws in dense retrieval, as they do not continuously reflect the model's retrieval capabilities.\nContrastive entropy addresses these limitations by providing a continuous metric that sensitively reflects the overall retrieval capability of the models. It takes into account the relevance scores of both positive and negative passages, allowing for a more nuanced evaluation of model performance. By considering the entire ranked list and not just the top K results, contrastive entropy can capture subtle changes in the model's output, making it a more effective measure for assessing retrieval performance.\nFurthermore, the authors find a strong and positive correlation between contrastive entropy and traditional ranking metrics, such as MAP@10, NDCG@10, and Recall@1000. This relationship is close to linear, indicating that while contrastive entropy provides a more sensitive and continuous measure, it remains consistent with the evaluation results provided by traditional metrics. Therefore, contrastive entropy serves as a robust alternative for evaluating the overall retrieval ability of models, particularly in the context of scaling laws in dense retrieval.<PIC>", "images_list": ["2403_18684v2_0"]}
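Read as the cross-entropy of the positive passage under a softmax over one positive and its negatives, contrastive entropy can be computed as below; the exact score normalization in the paper may differ.

```python
import numpy as np

def contrastive_entropy(pos_score, neg_scores):
    """-log softmax probability of the positive passage among positive + negatives."""
    scores = np.concatenate(([pos_score], np.asarray(neg_scores, dtype=float)))
    scores -= scores.max()  # numerical stability
    return float(-(scores[0] - np.log(np.exp(scores).sum())))
```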
{"id": "ARXIV_61", "question": "In the paper \"Scaling Laws For Dense Retrieval\", does the impact of model size on the performance of dense retrieval models follow a specific power-law relationship?", "provenance": [60030], "ground_truth": "In the paper \"Scaling Laws For Dense Retrieval,\" the authors indeed find that the impact of model size on the performance of dense retrieval models follows a specific power-law relationship. This relationship is quantified through the scaling law, represented by the equation $L(N) = \\left( \\frac{A}{N} \\right)^{\\alpha} + \\delta_N$. Here, $N$ represents the number of non-embedding parameters of the model, and $L(N)$ denotes the model's contrastive entropy on the test set. The parameters $A$, $\\alpha$, and $\\delta_N$ are the coefficients that define the scaling behavior. The paper introduces the irreducible loss term $\\delta_N$, which acknowledges that even with a sufficiently large model, the loss can only be reduced to $\\delta_N$ rather than zero. This term accounts for incomplete annotations and the subjective nature of relevance judgments, which make it challenging for models to perfectly match human annotations.\nThrough the fitting process, the authors demonstrate that the contrastive entropy of the models follows this power-law scaling in relation to the size of the non-embedding parameters. <PIC>The strong correlation, as evidenced by the high $R^2$ values in their experiments with datasets like MSMARCO and T2Ranking, validates this relationship. This finding allows researchers to predict the performance of larger models based on the scaling curves derived from smaller models, offering a cost-effective approach to exploring model performance and optimizing training strategies.", "images_list": ["2403_18684v2_1"]}
{"id": "ARXIV_62", "question": "What is the construction pipeline of the FollowRAG benchmark?", "provenance": [60031], "ground_truth": "The construction pipeline of the FollowRAG benchmark involves several key steps to ensure the evaluation of RAG systems in following user instructions in complex multi-document contexts.<PIC>\nFirst, the process begins with instruction collection and extraction. The FollowRAG benchmark draws from general instruction-following (IF) datasets like IFEval and FollowBench to gather and verify definitions and examples of atomic instructions using specific rules, such as code. This step excludes instructions irrelevant to RAG scenarios and identifies 22 types of instruction constraints, which cover various aspects like language, length, structure, and keywords.\nNext, these instructions are reformed using widely-used question-answering (QA) datasets such as Natural Questions. For each query sampled from these QA datasets, a complex instruction is generated containing multiple atomic instruction constraints, with the number of constraints ranging from 1 to 4. To diversify the representations of these atomic instructions, GPT-4o is employed as the instruction generator. This involves sampling a number of instructions from the atomic instruction set, performing conflict detection, and prompting the language model to generate varied instruction texts along with parameters for instruction-following evaluation.\nThe final step is the combination of the retrieved passages, query, and atomic instructions to construct the complete sample input for FollowRAG. Instead of mechanically concatenating the query and instructions in a template-based manner, the process involves prompting a supervised model to naturally blend the multiple atomic instructions and the query into a coherent instruction-query paragraph. The top-K document passages retrieved based on the query are then added to this paragraph, forming the complete sample input.\nOnce the dataset is constructed, the evaluation involves assessing the model's output from two perspectives: instruction following and question answering (QA). For instruction following, the verifiable nature of atomic instructions allows for automated verification through code validation, and the average pass rate for each instruction is calculated to determine the instruction-following score. For QA, the models' outputs are compared against the original gold answers from the QA datasets, and GPT-4o is used to evaluate whether the outputs correctly address the questions, with scores assigned based on correctness. The average score of all samples constitutes the RAG score for FollowRAG.", "images_list": ["2410_09584v1_0"]}
|
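Since the record above notes that atomic instructions are verifiable by code, a small sketch of that scoring idea may help; the `max_words`/`must_contain` verifiers and the sample outputs are hypothetical stand-ins, not FollowRAG's actual validators.

```python
# Check verifiable atomic constraints and average them into an IF score.
from typing import Callable

def max_words(limit: int) -> Callable[[str], bool]:
    return lambda text: len(text.split()) <= limit

def must_contain(keyword: str) -> Callable[[str], bool]:
    return lambda text: keyword.lower() in text.lower()

# One sample = a model output plus the atomic constraints attached to it.
samples = [
    ("The capital of France is Paris.", [max_words(20), must_contain("Paris")]),
    ("A very long answer " * 30,        [max_words(50)]),
]

# Average pass rate over all atomic instructions -> instruction-following score.
checks = [fn(out) for out, fns in samples for fn in fns]
if_score = sum(checks) / len(checks)
print(f"instruction-following score: {if_score:.2f}")
```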
{"id": "ARXIV_63", "question": "How does VIF-RAG's performance compare to other baseline models as the number of instructions increases in the FollowRAG benchmark?", "provenance": [60031], "ground_truth": "As the number of instructions increases in the FollowRAG benchmark, VIF-RAG consistently outperforms other baseline models. While all models generally show a decline in instruction-following capability with the increasing number of instructions, VIF-RAG maintains its superior performance. Even when three instructions are present simultaneously, VIF-RAG demonstrates over a 5% improvement in instruction-following prompt (strict accuracy), thereby validating its enhanced capability to handle complex instruction-following tasks in retrieval-augmented generation (RAG) scenarios.<PIC>", "images_list": ["2410_09584v1_1"]} |
|
{"id": "ARXIV_64", "question": "What are the differences between VisRAG and TextRAG?", "provenance": [60032], "ground_truth": "VisRAG and TextRAG have distinct differences in their approaches to retrieval-augmented generation. TextRAG frameworks typically use text segments for both retrieval and generation. In such systems, text-based units are extracted from the knowledge corpus, encoded into embeddings, and then used to retrieve relevant information, which is subsequently processed to generate the required answers. This often necessitates additional parsing steps to handle complex, multi-modal documents, which may result in the loss of multi-modality and layout information.\nIn contrast, VisRAG leverages the image of the document as the fundamental unit for both retrieval and generation, thereby preserving all visual and textual information. Instead of converting complex documents into plain text, VisRAG processes these documents as images using vision language models (VLMs). The retrieval process in VisRAG employs a VLM to encode both the query and document page as embeddings, ensuring that the visual context is maintained. During generation, VisRAG can utilize multiple pages by either concatenating images or using VLMs designed to handle multiple images, allowing for richer and more contextually accurate answers.By incorporating VLMs, VisRAG maintains the integrity of multi-modal information present in documents, providing a more holistic approach compared to the traditional text-based methodologies employed in TextRAG frameworks.<PIC>", "images_list": ["2410_10594v1_0"]} |
|
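A minimal sketch of the page-as-image retrieval step described above; `embed_text` and `embed_image` are hypothetical stand-ins for the VLM encoder, not VisRAG's real interface.

```python
# Rank document page images against a text query in a shared embedding space.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def retrieve(query, page_images, embed_text, embed_image, top_k=3):
    q = embed_text(query)                          # query embedding from the VLM
    scores = [cosine(q, embed_image(p)) for p in page_images]
    ranked = np.argsort(scores)[::-1][:top_k]
    return ranked                                  # indices of pages to pass on

# The top-k pages are then fed, as images, to the generator VLM, so layout
# and visual context survive end to end.
```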
{"id": "ARXIV_65", "question": "How does the performance of VisRAG's retrieval component, VisRAG-Ret, compare to MiniCPM (OCR)?provenanceground_truthVisRAG's retrieval component, VisRAG-Ret, demonstrates a significant performance advantage over MiniCPM (OCR). According to the provided data, VisRAG-Ret consistently achieves approximately 17% higher performance than MiniCPM (OCR), which relies on extracted text for training. Moreover, VisRAG-Ret exhibits a more stable training process, indicating its robustness and reliability. These results highlight the superior efficiency and generalization capabilities of VisRAG-Ret, even when evaluated in out-of-domain settings. This performance edge is evident from the initial training on 20k query-document pairs and becomes more pronounced after training on 150k pairs, showcasing its potential for further improvements with increased training data.<PIC>", "images_list": ["2410_10594v1_1"]} |
|
{"id": "ARXIV_66", "question": "In FactMM-RAG, when using the same F1CheXbert threshold for mining report pairs, does changing the F1RadGraph threshold improve factual performance?", "provenance": [60034], "ground_truth": "In FactMM-RAG, under the same F1CheXbert threshold for mining report pairs, changing the F1RadGraph threshold can initially improve factual performance. However, further increasing the F1RadGraph threshold does not continue to yield improvements and eventually reaches a saturation point. This is because higher thresholds can exclude many relevant report pairs, leading to a potential loss of factually useful pairs. This exclusion hinders the training of the multimodal retriever, which relies on additional factual medical knowledge.<PIC>", "images_list": ["2407_15268v1_0"]} |
|
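A small sketch of the threshold-based pair mining discussed above, under the assumption that both factual metrics are available as scoring functions; the function names and default thresholds are illustrative, not FactMM-RAG's code.

```python
# Keep a report pair only if both factual metrics clear their thresholds.
def mine_pairs(reports, f1_chexbert, f1_radgraph,
               chexbert_thr=1.0, radgraph_thr=0.4):
    """Return report pairs whose factual similarity clears both thresholds."""
    pairs = []
    for i, r1 in enumerate(reports):
        for r2 in reports[i + 1:]:
            if (f1_chexbert(r1, r2) >= chexbert_thr
                    and f1_radgraph(r1, r2) >= radgraph_thr):
                pairs.append((r1, r2))
    return pairs

# Raising radgraph_thr too far prunes factually useful pairs, which is why
# the paper observes saturation rather than monotonic improvement.
```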
{"id": "ARXIV_67", "question": "Does the training approach of FactMM-RAG provide useful supervision signals for the training of the multimodal retriever without relying on explicit diagnostic label guidance?", "provenance": [60034], "ground_truth": "The FactMM-RAG's training approach can provide useful supervision signals for training the multimodal retriever without relying on explicit diagnostic label guidance.<PIC><PIC>.The results of experiment show that the F1RadGraph threshold alone can effectively mine factual report pairs. Even as the F1RadGraph threshold increases, FactMM-RAG's performance matches the high threshold settings where the F1CheXbert Threshold is set to 1.0. This demonstrates that the training strategy with curated factual query-report pairs imposes useful supervision signals, ensuring effective training of the model without the need for explicit diagnostic labels from CheXbert.", "images_list": ["2407_15268v1_1", "2407_15268v1_2"]} |
|
{"id": "ARXIV_68", "question": "How does the TextHarmony mitigate the problem of inconsistency in multimodal generation through Slide-LoRA module?", "provenance": [60035], "ground_truth": "The TextHarmony model mitigates the problem of inconsistency in multimodal generation through the Slide-LoRA module by optimizing parameter space for different training objectives. Slide-LoRA is a novel module that can be inserted into Transformer layers as Low-Rank Adaptation (LoRA) with minimal parameter increase. It processes text and image generation in separate parameter spaces, which alleviates the inconsistency issue. The module comprises a gating network and three LoRA modules: one for text generation, one for image generation, and one for shared knowledge between both tasks. The gating network determines whether the input requires text or image generation, producing a scalar value that guides the use of the appropriate parameter space. By incorporating both task-specific and shared knowledge, Slide-LoRA effectively separates inconsistent training objectives and learns the shared knowledge for text and image generation, ensuring more consistent multimodal outputs.<PIC>", "images_list": ["2407_16364v2_0"]} |
|
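A minimal PyTorch sketch of the Slide-LoRA idea described above, assuming a square projection layer; the gating rule, rank, and dimensions are illustrative assumptions rather than TextHarmony's actual implementation.

```python
import torch
import torch.nn as nn

class LoRA(nn.Module):
    def __init__(self, dim, rank=8):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.up.weight)  # starts as a no-op adapter

    def forward(self, x):
        return self.up(self.down(x))

class SlideLoRA(nn.Module):
    """Gated task-specific LoRA branches plus an always-on shared branch."""
    def __init__(self, base_linear: nn.Linear):
        super().__init__()
        dim = base_linear.in_features   # assumes a square dim -> dim projection
        self.base = base_linear         # frozen pretrained projection
        self.text_lora = LoRA(dim)      # text-generation parameter space
        self.image_lora = LoRA(dim)     # image-generation parameter space
        self.shared_lora = LoRA(dim)    # knowledge shared by both tasks
        self.gate = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, x):               # x: (batch, seq, dim)
        g = self.gate(x.mean(dim=1, keepdim=True))   # scalar gate per sequence
        task = g * self.text_lora(x) + (1 - g) * self.image_lora(x)
        return self.base(x) + task + self.shared_lora(x)

layer = SlideLoRA(nn.Linear(512, 512))
out = layer(torch.randn(2, 16, 512))    # (2, 16, 512)
```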
{"id": "ARXIV_69", "question": "What are the advantages of the DetailedTextCaps-100K dataset developed in TextHarmony compared to the MARIO-LAION dataset?", "provenance": [60035], "ground_truth": "The DetailedTextCaps-100K dataset developed in TextHarmony offers several advantages compared to the MARIO-LAION dataset. While MARIO-LAION contains captions of text-rich images, these descriptions tend to be oversimplified and do not focus adequately on the textual elements within the images. In contrast, DetailedTextCaps-100K generates more comprehensive captions by sampling 100K images from MARIO-LAION and using Gemini Pro, a multi-modal large language model, to create detailed descriptions based on the images and OCR results. This results in captions that not only cover the visual elements but also pay close attention to the textual content in the images, providing a more thorough and nuanced depiction. Thus, the DetailedTextCaps-100K dataset is better at portraying the textual elements in images compared to MARIO-LAION.<PIC>", "images_list": ["2407_16364v2_1"]} |
|
{"id": "ARXIV_70", "question": "Based on the paper \u201cThe Curse of Multi-Modalities: Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio\u201d, how is language dominance manifested when LMMs overreliance on unimodal priors?", "provenance": [60036], "ground_truth": "Based on the paper \"The Curse of Multi-Modalities: Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio,\" language dominance manifests in Large Multimodal Models when they excessively rely on pre-trained Large Language Models. This overreliance leads to responses that adhere to linguistic patterns or prior knowledge from large language corpora, even when visual or audio inputs provide contradictory information. The issue is particularly prevalent in LMMs that integrate LLMs as their decoder base, given their proficiency in capturing linguistic structures and semantic relationships. As a result, the decision-making process is often dominated by the language modality, overshadowing contributions from visual or audio modalities.\n<PIC>\nFor instance, as illustrated in the figure, a video depicting finger skateboarding with shoes on fingers is presented. When asked a language-biased question, \"Did you see shoes worn on feet?\", which reflects a common-sense event following linguistic priors, the LMM incorrectly responds with \"yes,\" contradicting the actual content of the video and inducing hallucination. This example demonstrates how LMMs tend to rely on language priors over factual multimodal inputs, leading to erroneous outputs and highlighting the challenge of language dominance in multimodal models.", "images_list": ["2410_12787v1_0"]} |
|
{"id": "ARXIV_71", "question": "Based on the paper \u201cThe Curse of Multi-Modalities: Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio\u201d, how does the phenomenon of visual dominance lead to hallucinations in LMMs when processing multimodal inputs.", "provenance": [60036], "ground_truth": "Based on the paper \u201cThe Curse of Multi-Modalities: Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio,\u201d visual dominance in Large Multimodal Models leads to hallucinations when the model excessively relies on visual information while underutilizing or disregarding linguistic and auditory cues. In such cases, the model's outputs are heavily influenced by the visual context, often neglecting important information from the other modalities. This overreliance on visual input can cause the model to generate outputs that are not supported by the actual multimodal input, leading to hallucinations.\n<PIC>\nFor instance, as illustrated in the figure, a video depicts a person planning a woodworking project with a hammer in sight, while the audio track contains only the person speaking and bird chirping. Despite this, advanced LMMs may over-rely on the visual presence of the \u201chammer\u201d and incorrectly infer a \u201chammer hitting\u201d sound, ignoring the actual audio content where no such sound is present. This demonstrates how LMMs tend to favor visual information over auditory and linguistic cues, leading to incorrect inferences and hallucinations.images_list2410_12787v1_1 |
|
{"id": "ARXIV_72", "question": "How does SciPIP construct the literature dataset?", "provenance": [60037], "ground_truth": "SciPIP constructs its literature dataset by first collecting papers from prominent conferences such as ICLR, NeurIPS, ICML, ACL, NAACL, and EMNLP from the past ten years, resulting in a database of 48,895 papers. Each paper is parsed to extract sections like the title, abstract, introduction, method, and references. Using these sections, a large language model (LLM) is prompted to read and summarize the papers, extracting entities, background, summaries, main ideas, detailed ideas, and core references.\nThe extracted background, summary, and main ideas are encoded with Sentence-BERT for their embeddings. All this extracted information is recorded into the literature database. Additionally, to facilitate faster literature retrieval, a paper-entity graph is constructed in the database. This graph stores all occurrence relationships between papers and entities, making it easier to navigate and retrieve relevant literature.<PIC>", "images_list": ["2410_23166v1_0"]} |
|
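A minimal sketch of the embedding-and-graph step described above, assuming the `sentence-transformers` package; the model name, record fields, and edge list are illustrative stand-ins for SciPIP's actual database schema.

```python
# Embed extracted sections and record paper-entity occurrence edges.
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in SBERT model

papers = [
    {"id": "p1", "background": "Dense retrieval needs better scaling.",
     "summary": "Studies scaling behavior of retrievers.",
     "entities": ["dense retrieval", "scaling law"]},
]

literature_db, paper_entity_edges = {}, []
for paper in papers:
    record = dict(paper)
    record["background_emb"] = encoder.encode(paper["background"])
    record["summary_emb"] = encoder.encode(paper["summary"])
    literature_db[paper["id"]] = record
    # The paper-entity graph stores occurrence relations for fast retrieval.
    paper_entity_edges += [(paper["id"], e) for e in paper["entities"]]
```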
{"id": "ARXIV_73", "question": "What are the main differences between the three approaches proposed by SciPIP for generating research paper ideas?", "provenance": [60037], "ground_truth": "The main differences between the three approaches proposed by SciPIP for generating research paper ideas lie in the application of brainstorming. The direct proposal method (SciPIP-A) does not incorporate brainstorming; it relies solely on the content of retrieved literature to generate ideas. The first dual-path proposal method (SciPIP-B) integrates brainstorming into the process, creating two parallel branches: one uses the retrieved literature for idea generation, while the other engages in brainstorming based on the user-provided background. These independently generated ideas are then merged, filtered, and refined. The second dual-path proposal method (SciPIP-C) is similar to SciPIP-B, but it takes the brainstorming a step further by using the brainstorming results for both idea generation and enhancing the literature retrieval process through entity extraction. This approach ensures that keywords from brainstorming contribute to more effective literature retrieval, with all ideas eventually being merged, filtered, and refined to produce the final output.<PIC>", "images_list": ["2410_23166v1_1"]} |
|
idARXIV_74questionIn the paper \, : [60038], : On Memorization of Large Language Models in Logical Reasoning,\, : []} |
|
{: , : On Memorization of Large Language Models in Logical Reasoning\, : [60038], : On Memorization of Large Language Models in Logical Reasoning,\, : []} |
|
{: , : , : [60039], : , : []} |
|
{: , : , : [60039], : , : []} |
|
{: , : , : [60040], : , : []} |
|
{: , : , : [60040], : , : []} |
|
{: , : , : [60041], : Lagrange multiplier mismatch,\, : []} |
|
{: , : , : [60041], : , : []} |
|
{: , : , : [60042], : , : []} |
|
{: , : , : [60042], : , : []} |
|
{: , : , : [60043], : , : []} |
|
{: , : , : [60043], : , : []} |
|
{: , : , : [60044], : , : [, ]} |
|
{: , : , : [60044], : , : [, ]} |
|
{: , : , : [60045], : , : []} |
|
{: , : , : [60045], : , : []} |
|
{: , : , : [60046], : , : []} |
|
{: , : , : [60046], : MultiTASC++\MultiTASC++\, : []} |
|
{: , : , : [60047], : LaserGuider,\, : []} |
|
{: , : , : [60047], : LaserGuider\, : []} |
|
{: , : , : [60048], : JANUS\, : []} |
|
{: , : , : [60048], : JANUS\, : []} |
|
{: , : , : [60049], : ELEMENT\, : []} |
|
{: , : , : [60049], : ELEMENT\, : []} |
|
{: , : , : [60050], : , : []} |
|
{: , : , : [60050], : , : []} |
|
{: , : , : [60051], : , : []} |
|
{: , : , : [60051], : , : []} |
|
{: , : , : [60052], : , : []} |
|
{: , : , : [60052], : , : []} |
|
{: , : , : [60053], : , : []} |
|
{: , : , : [60053], : , : []} |
|
{: , : , : [60054], : press the button\What should I do?\Am I closer to the goal?\Did I succeed?\Ring\, : []} |
|
{: , : , : [60054], : , : []} |
|
{: , : , : [60055], : , : []} |
|
{: , : , : [60055], : , : []} |
|
{: , : , : [60056], : , : []} |
|
{: , : , : [60056], : teacher distribution,\student distribution.\, : []} |
|
{: , : , : [60057], : , : []} |
|
{: , : , : [60057], : , : []} |
|
{: , : , : [60058], : , : []} |
|
{: , : , : [60058], : , : []} |
|
{: , : , : [60059], : , : []} |
|
{: , : , : [60059], : , : []} |
|
{: , : , : [60060], : , : []} |
|
{: , : , : [60060], : , : []} |
|
{: , : , : [60061], : , : []} |
|
{: , : , : [60061], : , : []} |
|
{: , : , : [60062], : , : []} |
|
{: , : , : [60062], : , : []} |
|
{: , : , : [60063], : , : []} |
|
{: , : , : [60063], : , : []} |
|
{: , : evil twin\, : [60064], : evil twin\, : []} |
|
{: , : , : [60064], : , : []} |
|
{: , : Known Knows\Unknown Knows\, : [60065], : Known Knows\Unknown Knows\, : []} |
|
{: , : , : [60065], : , : []} |
|
{: , : , : [60066], : hard-to-easy inconsistency.\, : []} |
|
{: , : , : [60066], : , : []} |
|
{: , : , : [60067], : , : []} |
|
{: , : , : [60067], : , : []} |
|
{: , : , : [60068], : , : []} |
|
{: , : , : [60068], : , : []} |
|
{: , : , : [60069], : , : []} |
|
{: , : , : [60069], : , : []} |
|
{: , : , : [60070], : imagine\, : []} |
|
{: , : , : [60070], : , : []} |
|
{: , : , : [60071], : , : []} |
|
{: , : , : [60071], : , : []} |
|
{: , : , : [60072], : , : []} |
|
{: , : The SDSS-V Local Volume Mapper (LVM): Data Analysis Pipeline,\, : [60072], : , : []} |
|
{: , : , : [60073], : , : []} |
|
{: , : MolParser: End-to-end Visual Recognition of Molecule Structures in the Wild,\, : [60073], : MolParser\, : []} |
|
{"id": "ARXIV_146", "question": "How does the TSINR framework leverage the spectral bias of INR and the transformer encoder to improve the detection of anomalies in time series data?", "provenance": [60074], "ground_truth": "The TSINR framework leverages the spectral bias property of INR and transformer-based architecture to enhance time series anomaly detection by prioritizing low-frequency components while amplifying anomaly-specific fluctuations. INR parameterizes the signal as a continuous function that learns temporal continuity, making it highly sensitive to disruptions caused by anomalies. This approach mitigates the negative impact of unlabeled anomalous data during training and improves robustness in reconstruction.\nThe framework includes a transformer encoder, which predicts INR tokens serving as the parameters of the INR continuous function. This function decomposes the time series data into trend, seasonal, and residual components, allowing the model to capture complex temporal dynamics effectively. \n<PIC>\nAdditionally, a pre-trained large language model (LLM) encodes the original data into a feature space, amplifying anomaly-related fluctuations across both time and channel dimensions. This amplified representation enhances the sensitivity of INR to anomalies, enabling better differentiation between normal and abnormal points. Extensive experiments validate TSINR's effectiveness across multiple benchmark datasets, demonstrating its capability to outperform state-of-the-art methods.", "images_list": ["2411_11641v1_0"]} |
|
{"id": "ARXIV_147", "question": "How does the group-based architecture and the pre-trained LLM encoder enhance TSINR's ability to detect anomalies in multivariate time series data?provenanceground_truthThe group-based architecture in TSINR enhances anomaly detection by dividing variables into smaller groups, allowing each group to be modeled by independent fully connected layers. This approach improves representational capacity by reducing the complexity of modeling inter- and intra-channel relationships within multivariate data. Global layers further capture inter-channel information, while group layers selectively focus on specific channels, ensuring no knowledge is lost.\n<PIC>\nThe pre-trained LLM encoder amplifies the fluctuations of anomalies in both the time and channel dimensions. By mapping the original data into a feature domain, the encoder makes anomalies more pronounced, particularly in multivariate datasets. This amplified representation aligns with INR's spectral bias, improving sensitivity to discontinuities caused by anomalies. Visualization of anomaly scores and reconstructed data validates that TSINR effectively detects anomalies across various types and time intervals.", "images_list": ["2411_11641v1_1"]} |
|
{"id": "ARXIV_148", "question": "How does the ACING pipeline leverage actor-critic principles to optimize prompt learning for black-box large language models (LLMs)?", "provenance": [60075], "ground_truth": "The ACING pipeline applies actor-critic reinforcement learning principles to optimize prompt generation for black-box LLMs. The actor network, parameterized by neural networks, proposes actions (soft prompts) as part of a continuous action space, while the critic network evaluates these actions by estimating their quality based on the rewards received. \n<PIC>\nIn this pipeline, a soft prompt and task examples are provided to a white-box LLM to generate an instruction. This instruction is then used to query the black-box LLM, whose output is scored and returned as a reward. Both the actor and critic networks update iteratively based on this reward, ensuring the optimization of prompts for effective task performance under a constrained evaluation budget.", "images_list": ["2411_12736v1_0"]} |
|
{"id": "ARXIV_149", "question": "How does ACING use entropy-based exploration and actor-critic dynamics to optimize soft prompts for instruction learning?", "provenance": [60075], "ground_truth": "The ACING framework employs entropy-based exploration by dynamically adjusting the entropy coefficient \\(\\alpha\\) to maintain a target entropy level \\(H_{\\text{target}}\\). This ensures sufficient exploration while optimizing the policy. The actor-critic mechanism integrates a policy network (actor) to propose actions (soft prompts) and a Q-network (critic) to evaluate these actions based on the received rewards.\n<PIC>\nUsing a soft prompt, the white-box LLM generates instructions that are tested on a black-box LLM. The black-box LLM's outputs are scored against true labels from the validation set, providing feedback as rewards. These rewards help refine the actor's policy and the critic's evaluations, iteratively optimizing the soft prompts for effective instruction learning.images_list2411_12736v1_1 |
|
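A minimal sketch of the entropy-coefficient adjustment described above, written in the style of SAC's automatic temperature tuning; this is a generic illustration of keeping policy entropy near \(H_{\text{target}}\), not ACING's exact update rule.

```python
import torch

log_alpha = torch.zeros(1, requires_grad=True)   # alpha = exp(log_alpha) > 0
alpha_opt = torch.optim.Adam([log_alpha], lr=3e-4)
H_target = -4.0                                  # target policy entropy

def update_alpha(log_prob: torch.Tensor) -> float:
    """Push alpha up when entropy is below target, down when above."""
    alpha_loss = -(log_alpha.exp() * (log_prob + H_target).detach()).mean()
    alpha_opt.zero_grad()
    alpha_loss.backward()
    alpha_opt.step()
    return log_alpha.exp().item()

# log_prob would come from the actor's sampled soft prompts; the returned
# alpha then scales the entropy bonus in the actor and critic objectives.
alpha = update_alpha(log_prob=torch.randn(64) - 4.0)  # stand-in log-probs
```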
{"id": "ARXIV_150", "question": "How does the Leadsee-Precip model ensure accurate precipitation predictions while addressing the challenges of class imbalance in extreme precipitation events?", "provenance": [60076], "ground_truth": "\nThe Leadsee-Precip model combines advanced architectural design and a novel Information Balance (IB) scheme to enhance prediction accuracy. Its encoder-decoder structure integrates feature extraction, hidden translator, and precipitation upsampling components. The feature extraction module uses 3D ConvNets for upper-air variables and 2D ConvNets for surface variables, with zonal circular padding to handle boundary conditions effectively. A shortcut connection improves accuracy by incorporating original variables during upsampling.\n<PIC>\nTo address the challenge of long-tail precipitation data distribution, the IB scheme assigns weights based on the information content of precipitation samples using a logit form \\(-\\log P(y_i)\\). This ensures that rare extreme precipitation events contribute proportionally to the loss function, mitigating biases inherent in training deep learning models with RMSE. The combination of these elements enables Leadsee-Precip to produce balanced and accurate precipitation forecasts.", "images_list": ["2411_12640v1_0"]} |
|
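A small sketch of the Information Balance weighting described above, assuming the per-sample probability \(P(y_i)\) is estimated from a histogram; the synthetic rainfall distribution and bin count are illustrative.

```python
import numpy as np

def information_balance_weights(y, bins=50):
    """Weight each sample by its information content -log P(y_i)."""
    hist, edges = np.histogram(y, bins=bins)
    p = hist / hist.sum()                          # empirical P(y_i) per bin
    idx = np.clip(np.digitize(y, edges[1:-1]), 0, bins - 1)
    p_y = np.maximum(p[idx], 1e-8)                 # avoid log(0)
    return -np.log(p_y)                            # logit-form IB weight

y = np.random.gamma(shape=0.3, scale=5.0, size=10000)  # long-tailed "rainfall"
w = information_balance_weights(y)
weighted_mse = np.mean(w * (y - y.mean()) ** 2)        # IB-weighted loss term
# Rare heavy-rain samples land in low-probability bins and get large weights.
```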
{"id": "ARXIV_151", "question": "How does the Leadsee-Precip model perform in diagnosing 6-hour accumulated precipitation globally, and what are its strengths and limitations based on NOAA CMORPH data?", "provenance": [60076], "ground_truth": "\nThe Leadsee-Precip model demonstrates strong diagnostic capabilities for 6-hour accumulated precipitation on a global scale, accurately predicting the intensity and location of large rainfall events. For instance, it successfully identifies areas in the eastern and western Pacific with precipitation exceeding 50 mm/6h, showing good agreement with NOAA CMORPH ground truth.\n<PIC>\nHowever, the model tends to overestimate light precipitation, such as below 1 mm/6h, and produces smoothed finer details. Evaluation metrics TS and FSS across different thresholds indicate robust performance, with FSS above 0.5 for heavy precipitation at 25 mm/6h. These results highlight the model's capability to handle extreme rainfall while addressing challenges like overestimation of smaller rainfalls.", "images_list": ["2411_12640v1_1"]} |
|
{"id": "ARXIV_152", "question": "How does the ULTra framework utilize hierarchical clustering to enhance interpretability in Vision Transformers (ViTs)?", "provenance": [60077], "ground_truth": "The ULTra framework introduces a hierarchical clustering approach to interpret the latent token representations in Vision Transformers (ViTs). By organizing token relevance maps into a clustering tree, ULTra enables the visualization of semantic patterns at multiple levels of granularity. Lower clustering thresholds (\\(\\zeta\\)) reveal finer details, while higher thresholds highlight broader semantic groupings, demonstrating the inherent ability of ViTs to identify meaningful patterns within their latent space.\n<PIC>\nThis hierarchical clustering not only enhances interpretability but also allows for unsupervised semantic segmentation. Unlike existing methods that require additional training, ULTra leverages pre-trained ViTs to achieve zero-shot segmentation, showcasing the semantic understanding embedded within their token representations. This process facilitates tasks like object selection and model interpretation, pushing the boundaries of understanding in transformer-based architectures.", "images_list": ["2411_12589v1_0"]} |
|
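A minimal sketch of threshold-controlled hierarchical clustering over token features, using scikit-learn's agglomerative clustering as a stand-in for ULTra's procedure; the random tokens and \(\zeta\) values are illustrative.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

tokens = np.random.rand(196, 768)        # hypothetical 14x14 ViT patch tokens

def segment(tokens, zeta):
    """Cluster token features; the distance threshold zeta sets granularity."""
    clust = AgglomerativeClustering(
        n_clusters=None, distance_threshold=zeta,
        linkage="average", metric="cosine",
    )
    labels = clust.fit_predict(tokens)   # one semantic cluster id per patch
    return labels.reshape(14, 14)        # coarse unsupervised segmentation map

fine = segment(tokens, zeta=0.2)         # low zeta: many small clusters (details)
coarse = segment(tokens, zeta=0.6)       # high zeta: few broad clusters (objects)
```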
{"id": "ARXIV_153", "question": "How does the ULTra framework enable unsupervised semantic segmentation using Vision Transformers?", "provenance": [60077], "ground_truth": "The ULTra framework leverages relevance maps derived from latent token representations to perform unsupervised semantic segmentation. By clustering these relevance maps using hierarchical clustering techniques, ULTra identifies distinct semantic clusters within the image. The clustering threshold \\(\\zeta\\) adjusts the granularity of the segmentation, enabling flexibility to capture either broad categories or finer details, such as specific object features.\n<PIC>", "images_list": ["2411_12589v1_1"]} |
|
{"id": "ARXIV_154", "question": "How does the UMGAD framework utilize original-view and augmented-view graph reconstruction to detect anomalies in multiplex heterogeneous graphs?", "provenance": [60078], "ground_truth": "The UMGAD framework leverages both original-view and augmented-view graph reconstruction to capture anomalies in multiplex heterogeneous graphs. In the original-view graph reconstruction, it masks node attributes and edges to encode missing information using GCN-Masked Encoders. These encoded representations are decoded to reconstruct attributes and structure, with reconstruction errors highlighting anomalies.\n<PIC>\nThe augmented-view graph reconstruction introduces attribute and subgraph-level augmentations to enhance sensitivity to complex anomalies. By combining the outputs from both views, UMGAD effectively detects anomalies using multi-layer reconstruction losses while balancing structural and attribute irregularities, providing robust anomaly scores. This dual-view design ensures comprehensive anomaly detection across different graph relationships and modalities.", "images_list": ["2411_12556v1_0"]} |
|
{"id": "ARXIV_155", "question": "How does UMGAD perform compared to state-of-the-art methods in ranking anomaly scores across datasets with real anomalies?", "provenance": [60078], "ground_truth": "UMGAD achieves superior performance in ranking anomaly scores across datasets with real anomalies. In the Amazon dataset, UMGAD significantly outperforms ADA-GAD, TAM, GADAM, and AnomMAN by consistently identifying anomalous nodes with lower anomaly scores at a higher ranking. This indicates UMGAD\u2019s efficiency in detecting anomalies with higher precision compared to its competitors.\n<PIC>\nIn the YelpChi dataset, UMGAD continues to exhibit outstanding performance, maintaining lower anomaly scores while achieving accurate ranking for nodes with anomalies. The results validate UMGAD's capability to handle diverse datasets, effectively distinguishing anomalous nodes from normal ones across various graph types and anomaly densities. \n<PIC>images_list2411_12556v1_12411_12556v1_2 |
|
{"id": "ARXIV_156", "question": "How does the libcll toolkit address the challenges of reproducibility and standardization in Complementary Label Learning (CLL) research?", "provenance": [60079], "ground_truth": "The libcll toolkit provides a standardized and reproducible framework for evaluating Complementary Label Learning (CLL) algorithms, addressing key challenges in the field. By incorporating a diverse set of 15 datasets, including synthetic and real-world scenarios, as well as 14 CLL algorithms and 5 widely used models, libcll ensures consistent evaluation and facilitates meaningful comparisons across studies.\n<PIC>\nFurthermore, the toolkit supports customization, enabling researchers to experiment with various CLL assumptions and distributions, such as uniform, biased, and noisy complementary labels. Built with PyTorch-Lightning, libcll simplifies implementation and benchmarking, making it easier to develop and refine CLL algorithms. This comprehensive approach positions libcll as a vital resource for advancing CLL research and promoting collaboration within the community.", "images_list": ["2411_12276v1_0"]} |
|
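To make the supervision setting concrete, here is a small sketch of generating uniform complementary labels, the basic distribution the toolkit benchmarks; this illustrates the data assumption only and is not libcll's actual API.

```python
import random

def uniform_complementary_label(true_label: int, num_classes: int) -> int:
    """Return a class the sample does NOT belong to, drawn uniformly."""
    candidates = [c for c in range(num_classes) if c != true_label]
    return random.choice(candidates)

true_labels = [3, 0, 7, 1]
comp_labels = [uniform_complementary_label(y, num_classes=10) for y in true_labels]
# A CLL learner sees only comp_labels and must still recover a classifier
# that predicts the hidden true_labels.
```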
{"id": "ARXIV_157", "question": "How does the libcll toolkit integrate key developments in Complementary Label Learning (CLL) to provide a comprehensive research platform?", "provenance": [60079], "ground_truth": "The libcll toolkit incorporates major advancements in Complementary Label Learning (CLL) by implementing three core categories of methods: URE (Unbiased Risk Estimator), CPE (Complementary Probability Estimation), and MCL (Multiple Complementary Label) frameworks. These approaches address critical challenges in learning from complementary labels, such as unbiased risk estimation, transition matrix decoding, and handling multiple complementary labels per instance.\n<PIC>\nAdditionally, the toolkit supports synthetic and real-world datasets with diverse distributions (uniform, biased, and noisy) to facilitate fair and reproducible evaluations. By integrating these methods into a unified platform, libcll not only benchmarks algorithm performance but also accelerates innovation and collaboration in weakly-supervised learning.", "images_list": ["2411_12276v1_1"]} |
|
{"id": "ARXIV_158", "question": "In what ways do the choice of projects and the criteria for identifying repository deprecation on GitHub affect the accuracy of models predicting the lifespan of open-source software projects?", "provenance": [60080], "ground_truth": "In predicting the lifespan of open-source projects, the accuracy of the model significantly depends on the choice of data and the standards for defining repository deprecation. According to the study's methodology illustrated in the research framework <PIC>, a comprehensive dataset was constructed by leveraging GitHub's vast repository of projects. In this dataset, deprecation is defined using GitHub's \"archived\" status, which signals that a project is no longer active or has been officially deprecated.\nDuring data collection, GitHub's GraphQL and Search APIs were used to retrieve repository data. As shown on the left side of the research framework <PIC>, the study collected and labeled data for 51,677 projects, distinguishing between active and deprecated repositories through a combination of manual and machine learning-based labeling. This rigorous dataset labeling ensures high accuracy in identifying repository statuses.\nAdditionally, survival analysis techniques, such as the Accelerated Failure Time (AFT) model and Dynamic Relative Survival Analysis (DRSA), enable the model to detect reliable lifespan patterns from the curated sample. By setting strict definitions and filtering standards, the study enhances the model\u2019s precision in identifying the factors that influence the lifespan of open-source projects.", "images_list": ["2405_07508v1_0"]} |
|
{"id": "ARXIV_159", "question": "How effective is the HITS weight compared to other centrality metrics in forecasting deprecation trends in open-source software, and what evidence supports its predictive value?", "provenance": [60080], "ground_truth": "The HITS weight has shown itself to be a more robust predictor of repository deprecation compared to traditional metrics such as stars, issues, and pull requests. The Brackets project serves as a representative case study. Repository features over time of Adobe/Bracket <PIC> illustrates various activity statistics for the Brackets repository, including stars, issues, PRs, commits, comments, and tags. From this figure, we observe that while the number of stars remained relatively high and stable, the HITS weight steadily declined, signaling a trend towards deprecation even as other metrics fluctuated. This steady decline in HITS weight highlights its ability to filter out noise and better reflect the project's actual trajectory towards deprecation.\n\nFurther supporting this, \u2206HITS over time of HomeWork <PIC> shows that Project 0age, which has not been deprecated, maintained a stable or positive \u2206HITS throughout the observation period, indicating active maintenance. Conversely, \u2206HITS over time of Discord-Themes <PIC> and of shattered-pixel-dungeon-gdx <PIC> reveal negative peaks in \u2206HITS in the months leading up to deprecation, marking a sharp decline that serves as a reliable indicator of impending deprecation. This consistent pattern across different projects illustrates the sensitivity of the HITS metric in forecasting deprecation trends.", "images_list": ["2405_07508v1_1", "2405_07508v1_2", "2405_07508v1_3", "2405_07508v1_4"]} |
|
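A minimal sketch of computing HITS weights with networkx on a made-up contributor-repository graph; tracking a repository's authority score over time gives the \(\Delta\)HITS series discussed above.

```python
import networkx as nx

G = nx.DiGraph()
# Hypothetical edges: contributors point at the repositories they touch.
G.add_edges_from([
    ("dev_a", "repo_x"), ("dev_b", "repo_x"),
    ("dev_b", "repo_y"), ("dev_c", "repo_y"), ("dev_c", "repo_x"),
])

hubs, authorities = nx.hits(G, max_iter=1000, normalized=True)
print("repo_x authority:", round(authorities["repo_x"], 3))
print("repo_y authority:", round(authorities["repo_y"], 3))

# Recomputing authorities month over month and differencing them yields the
# delta-HITS series whose negative peaks precede deprecation in the case studies.
```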
{"id": "ARXIV_160", "question": "What challenges do diffusion models like SDXL and AuraFlow face as they scale up in parameters, and how might low-precision inference offer a solution in terms of computational efficiency?", "provenance": [60081], "ground_truth": "As diffusion models scale up to billions of parameters, their computational requirements rise sharply, presenting significant challenges in terms of memory and processing power. For example, Stable Diffusion (SD) 1.4 has 800M parameters, whereas SDXL and AuraFlow v0.1 push this boundary to 2.6B and 6B parameters, respectively. \n<PIC>\n This rapid increase in computational demand poses a barrier to deploying these models in real-world applications that require low latency.\n\nTo address this, hardware vendors are exploring low-precision inference, such as NVIDIA\u2019s new 4-bit FP4 precision, which significantly enhances performance by reducing memory usage and latency. This approach not only compresses model size but also boosts processing speed, making it a promising solution for deploying large-scale diffusion models in latency-sensitive applications.", "images_list": ["2411_05007v1_0"]} |
|
{"id": "ARXIV_161", "question": "How does SVDQuant minimize quantization errors in 4-bit diffusion models by leveraging low-rank approximations, and what effect does this have on the distribution of singular values?", "provenance": [60081], "ground_truth": "SVDQuant reduces quantization errors in 4-bit diffusion models by applying a low-rank approximation to absorb outliers. According to the quantization error bound, the quantization error \\( \\|\\mR - Q(\\mR)\\|_F \\) is controlled by minimizing the residual magnitude \\( \\|\\mR\\|_F \\). This is achieved by decomposing the weight matrix \\( \\hat{\\mW} \\) using Singular Value Decomposition (SVD), allowing only the top singular values to be retained while discarding lower values that contribute to outliers. \n<PIC> \nBy concentrating the largest singular values in a low-rank form, the residual matrix \\( \\mR \\) is compressed, effectively reducing outlier influence on the quantized model.\n\nThis approach reshapes the singular value distribution. The original weight matrix \\( \\mW \\) shows a highly imbalanced distribution of singular values, while the low-rank approximation sharpens this distribution, reducing the magnitude of \\( \\mR \\) and thereby the overall quantization error. Through iterative decomposition, SVDQuant further minimizes errors, enhancing model efficiency while maintaining accuracy in a 4-bit format.", "images_list": ["2411_05007v1_1"]} |
|
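A minimal numpy sketch of the decomposition described above: absorb the dominant singular values into a high-precision low-rank branch and quantize the residual with a symmetric absmax quantizer; the rank, bit-width, and test matrix are illustrative.

```python
import numpy as np

def svdquant_decompose(W, rank=32, bits=4):
    """Split W into a high-precision low-rank branch plus a 4-bit residual."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    L1 = U[:, :rank] * S[:rank]          # low-rank branch kept in high precision
    L2 = Vt[:rank, :]
    R = W - L1 @ L2                      # residual with dominant values absorbed

    qmax = 2 ** (bits - 1) - 1           # symmetric absmax quantizer
    scale = np.abs(R).max() / qmax
    R_q = np.clip(np.round(R / scale), -qmax - 1, qmax) * scale
    return L1, L2, R_q

# A weight matrix with a strong rank-1 "outlier" direction.
W = np.random.randn(512, 512) + 5 * np.outer(np.random.randn(512), np.random.randn(512))
L1, L2, R_q = svdquant_decompose(W)
err = np.linalg.norm(W - (L1 @ L2 + R_q)) / np.linalg.norm(W)
print(f"relative reconstruction error: {err:.4f}")
```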
{"id": "ARXIV_162", "question": "How does the structural composition of visual tokens in transformer models, such as DALL-E and Chameleon, parallel the structure of natural language, and what are the design implications for visual tokenization?", "provenance": [60082], "ground_truth": "Transformer-based models like DALL-E and Chameleon use \"visual sentences\" composed of discrete visual tokens, a structure that parallels natural languages by linearizing images into sequential representations. This approach enables models to perform multimodal tasks by integrating image and text data into a shared token space. \n<PIC>\nHowever, visual tokens differ from natural languages in their statistical patterns. While visual tokens exhibit Zipfian distributions, they do so with higher per-token entropy and lower compression ratios, suggesting that vision models may require more complex architectures with additional attention heads, larger embeddings, and extended training times.\n\nThe analysis further reveals that visual tokens operate at an intermediate granularity level, often representing parts of objects rather than whole objects or finer details. This granularity difference underscores why visual tokens align differently with natural languages in embedding spaces, motivating a modality-specific approach to enhance visual language processing.", "images_list": ["2411_05001v1_0"]} |
|
{"id": "ARXIV_163", "question": "To what extent do Context-Free Grammars (C-PCFGs) effectively capture the structure of visual languages in models discussed in 'Analyzing The Language of Visual Tokens,' and what are the observed limitations?", "provenance": [60082], "ground_truth": "C-PCFGs have been used to approximate the structure of visual languages, aiming to capture the grammar of visual tokens similarly to natural language. However, visual languages are less compressible using context-free grammars compared to natural languages. This is evident in the final parse tree perplexity (PPL) and reduction in perplexity (PPL-R) observed across datasets, where visual grammars exhibit higher PPL values than textual grammars. The higher initial PPL for visual grammars is also influenced by the generally longer \"visual sentence\" length, which consists of 32 tokens per image. \n<PIC>\n\nThe non-terminal node frequencies in visual grammars further reveal structural differences; although both modalities show similar tree height ratios (FR) and branching behaviors (MBF), visual languages demonstrate a more balanced branching. These findings imply that while C-PCFGs capture some structural aspects of visual languages, they may not fully approximate them as they do for natural languages. This suggests that visual languages might be better represented by alternative grammatical formalisms, like mildly context-sensitive grammars, which can handle dependencies across token spans more effectively.", "images_list": ["2411_05001v1_1"]} |
|
{"id": "ARXIV_164", "question": "What are the stages in the question generation pipeline for the HourVideo benchmark, and how do they ensure the quality of the video-language understanding tasks?", "provenance": [60083], "ground_truth": "The HourVideo benchmark employs a multi-stage question generation pipeline designed to ensure high-quality video-language understanding tasks. Beginning with video curation, the pipeline selects relevant videos from the Ego4D dataset, chosen for its egocentric perspective and detailed narrations, which are well-suited for generating diverse questions. Next, candidate multiple-choice questions (MCQs) are generated by extracting key information from 20-minute video segments, which includes summaries and lists of objects, locations, and other contextual details. \n<PIC>\nHuman feedback is then utilized to refine initial MCQs (labeled as $\\QAW_{2}$), addressing issues like inconsistent terminology in narrations by verifying question validity, answer accuracy, and distinctiveness of incorrect options.\n\nSubsequent stages apply blind filtering, where questions answerable through prior knowledge alone are removed by testing them on language models without video input. Finally, an expert refinement phase enhances remaining MCQs (now $\\QAW_{4}$) by making questions more precise and contextually accurate, culminating in a high-quality set of questions ($\\QAW_{5}$). This structured, iterative process, supported by extensive human review, helps to create a robust dataset for video-language understanding.", "images_list": ["2411_04998v1_0"]} |
|
{"id": "ARXIV_165", "question": "What types of scenarios and question categories are covered in the HourVideo dataset, and how does this diversity contribute to the depth of video-language understanding?", "provenance": [60083], "ground_truth": "The HourVideo dataset features a broad range of 77 daily life scenarios, including activities such as cooking, cleaning, and watching TV, making it highly representative of common, real-world contexts. Each video is accompanied by an average of 26 multiple-choice questions across a diverse set of categories, including perception, summarization, spatial relationships, and temporal sequencing. \n<PIC>\n\nThis diversity in both scenarios and question types enhances the dataset's ability to evaluate various aspects of video-language understanding, challenging models to interpret spatial layouts, predict outcomes, and recall sequences. The comprehensive coverage enables a more nuanced assessment of models' capabilities in understanding and reasoning over long-form, egocentric video content.", "images_list": ["2411_04998v1_1"]} |
|
{"id": "ARXIV_166", "question": "How does SG-I2V achieve zero-shot trajectory control in image-to-video generation, and what challenges does it address in feature alignment compared to traditional video diffusion models?", "provenance": [60084], "ground_truth": "\nSG-I2V introduces a novel approach to zero-shot trajectory control by leveraging semantic feature alignment in a pre-trained video diffusion model, enabling precise control over object motion and camera dynamics for arbitrary input images. Traditional video diffusion models face challenges in feature alignment across frames, as feature maps from upsampling blocks often lack consistent alignment, making spatial manipulation difficult. \n\n<PIC>\n\nTo address this, SG-I2V uses self-attention layers for better alignment by replacing key and value tokens in each frame with those from the first frame. This approach allows the model to maintain semantic consistency across frames, which is essential for effective trajectory control. By optimizing the latent space based on these aligned features, SG-I2V ensures that the generated video follows the specified trajectories, overcoming limitations in feature misalignment present in standard video diffusion methods.", "images_list": ["2411_04989v1_0"]} |
|
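A minimal sketch of the modified spatial self-attention described above, where every frame queries the key/value tokens of the first frame; shapes and projection weights are illustrative stand-ins for Stable Video Diffusion's real modules.

```python
import torch
import torch.nn.functional as F

def first_frame_kv_attention(x, w_q, w_k, w_v):
    """x: (frames, tokens, dim) latent features for one video."""
    q = x @ w_q                        # per-frame queries
    k = x[:1] @ w_k                    # keys from frame 0 only
    v = x[:1] @ w_v                    # values from frame 0 only
    k = k.expand(x.shape[0], -1, -1)   # share frame-0 K/V with every frame
    v = v.expand(x.shape[0], -1, -1)
    attn = F.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
    return attn @ v                    # semantically aligned features

frames, tokens, dim = 14, 256, 320
x = torch.randn(frames, tokens, dim)
w_q = w_k = w_v = torch.randn(dim, dim) / dim ** 0.5
aligned = first_frame_kv_attention(x, w_q, w_k, w_v)  # (14, 256, 320)
```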
{"id": "ARXIV_167", "question": "How does SG-I2V use feature alignment and frequency-based post-processing to control object trajectories in image-to-video generation, and what benefits do these steps provide?", "provenance": [60084], "ground_truth": "\nSG-I2V employs a unique approach to trajectory control by first aligning feature maps across frames and then applying frequency-based post-processing to refine the output. In the feature alignment step, SG-I2V modifies the spatial self-attention mechanism of Stable Video Diffusion (SVD) by using the key and value tokens from the first frame across all frames, which strengthens cross-frame semantic alignment. This alignment enables consistent control over object trajectories throughout the video frames. \n\n<PIC>\n\nFollowing feature alignment, SG-I2V applies frequency-based post-processing, which preserves high-frequency noise in the latent space to prevent overfitting during optimization. This post-processing step helps maintain the realism and quality of the generated video, ensuring that the optimized latent remains within the distribution expected by the diffusion model. Together, these techniques allow SG-I2V to generate controlled, high-quality image-to-video sequences with precise trajectory management.", "images_list": ["2411_04989v1_1"]} |
|
{"id": "ARXIV_168", "question": "How does the Few-Shot Task Learning through Inverse Generative Modeling (FSTL-IGM) approach in SG-I2V utilize generative models for learning new task concepts, and what domains demonstrate its effectiveness?", "provenance": [60085], "ground_truth": "\nThe Few-Shot Task Learning through Inverse Generative Modeling (FSTL-IGM) approach in SG-I2V leverages generative models to learn new task concepts from limited demonstrations by optimizing a latent representation, referred to as a \"concept,\" to maximize the likelihood of reproducing observed behavior. During pretraining, a generative model \\( \\mathcal{G}_{\\theta} \\) is trained on a large set of paired behaviors and task concepts, which provides strong priors for generating trajectories based on these task representations. \n<PIC>\n\nThis approach is evaluated across various domains, including object rearrangement, goal-oriented navigation, motion capture (MoCap) for human actions, autonomous driving, and real-world table-top manipulation. \n<PIC>\nBy employing FSTL-IGM, SG-I2V demonstrates the ability to generate diverse trajectories that embody learned concepts, showcasing compositional and interpolation capabilities essential for generalizing behavior across new scenarios within these domains.", "images_list": ["2411_04987v1_0", "2411_04987v1_1"]} |
|
{"id": "ARXIV_169", "question": "How does Few-Shot Task Learning through Inverse Generative Modeling (FSTL-IGM) perform in learning new concepts in object rearrangement tasks, and what limitations were observed?", "provenance": [60085], "ground_truth": "\nFew-Shot Task Learning through Inverse Generative Modeling (FSTL-IGM) demonstrates strong capabilities in learning new object arrangement concepts through a few demonstrations. In the object rearrangement domain, the model is trained to understand pairwise relations such as \u201cright of\u201d and \u201cabove,\u201d which it then composes into more complex spatial arrangements like \u201cA right of B and B above C.\u201d FSTL-IGM successfully generalizes these learned relations to form new concepts such as \u201cdiagonal\u201d or \u201ccircle,\u201d enabling the model to create novel arrangements that were not seen during training. \n\n<PIC>\n\nHowever, some limitations were noted. For example, the accuracy for the \u201ccircle\u201d concept is lower compared to other tasks, potentially due to the concept\u2019s distance from the training distribution. Additionally, complex compositions such as \u201csquare right of circle and triangle above circle\u201d have lower accuracy, which may arise from challenges in the concept-weight optimization process, where weights lack explicit regularization and can diverge, affecting performance. These findings suggest that while FSTL-IGM is effective in many cases, certain novel or complex compositions require further refinement for optimal generalization.", "images_list": ["2411_04987v1_2"]} |
|
{"id": "ARXIV_170", "question": "How does BitNet a4.8 utilize hybrid quantization and sparsification to manage outlier activations in 1-bit LLMs, and what benefits does this approach provide?", "provenance": [60086], "ground_truth": "BitNet a4.8 combines hybrid quantization and sparsification to address the challenges of outlier activations in 1-bit LLMs, where weights are quantized to 1.58 bits. This approach strategically applies 4-bit quantization to activations at critical points, such as inputs to attention and feed-forward networks (FFN), while using 8-bit sparsification for intermediate activations. By doing so, BitNet a4.8 effectively reduces the computational burden posed by large outliers, minimizing quantization errors that typically degrade performance in low-bit models. \n<PIC> \n\nThis hybrid strategy enhances inference efficiency by retaining only 55% of active parameters and utilizing a 3-bit key-value (KV) cache, further optimizing memory and computational costs without sacrificing model accuracy. This design allows BitNet a4.8 to perform comparably to higher-bit models with substantially reduced resource demands, making it ideal for deploying efficient LLMs.", "images_list": ["2411_04965v1_0"]} |
|
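A small sketch of the two activation paths discussed above: an absmax quantizer for attention/FFN inputs and top-k sparsification for intermediate states; the bit-widths follow the paper, while the tensor shapes and keep ratio are illustrative.

```python
import torch

def absmax_quant(x, bits=8):
    """Symmetric absmax quantization along the last dimension."""
    qmax = 2 ** (bits - 1) - 1
    scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-5) / qmax
    return torch.clamp(torch.round(x / scale), -qmax - 1, qmax) * scale

def topk_sparsify(x, keep=0.55):
    """Keep only the largest-magnitude activations, quantized to 8 bits."""
    k = max(1, int(keep * x.shape[-1]))
    thresh = x.abs().topk(k, dim=-1).values[..., -1:]
    return torch.where(x.abs() >= thresh, absmax_quant(x, bits=8),
                       torch.zeros_like(x))

x = torch.randn(2, 16, 4096)
attn_in = absmax_quant(x, bits=4)    # 4-bit inputs to attention/FFN
ffn_mid = topk_sparsify(x)           # sparse 8-bit intermediate activations
```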
{"id": "ARXIV_171", "question": "How does BitNet a4.8 handle quantization for the down projection of the FFN layer, and what effect does different quantization and activation choices have on training performance?", "provenance": [60086], "ground_truth": "In BitNet a4.8, the quantization strategy for the down projection of the feed-forward network (FFN) layer is crucial for balancing model efficiency and performance. By default, activations are maintained at 8 bits, with the down projection of FFN utilizing either INT8 quantization through an absmax quantizer or FP4 quantization via a MinMax quantizer. When FP4 quantization was applied to the inputs of the down projection, a notable drop in performance was observed, while INT4 activations combined with STE led to divergence, indicating instability in training. \n<PIC>\n\nAmong the activation functions tested, squared ReLU provided slightly better training perplexity compared to Swish, with the added benefit of promoting higher sparsity. These findings underscore the importance of choosing appropriate quantization and activation functions, as they directly impact both training stability and efficiency in low-bit configurations.", "images_list": ["2411_04965v1_1"]} |
|
{"id": "ARXIV_172", "question": "How does the Steepest Perturbed Gradient Descent (SPGD) algorithm compare to other optimization methods in finding the global minimum on the Peaks function, and what performance benefits does it offer?", "provenance": [60087], "ground_truth": "The Steepest Perturbed Gradient Descent (SPGD) algorithm demonstrates superior efficiency on the Peaks function, a challenging optimization landscape characterized by multiple local minima, flat regions, and a complex surface. Unlike traditional Gradient Descent (GD) and Perturbed Gradient Descent (PGD), which tend to get stuck in local minima, SPGD effectively navigates the landscape to reach the global minimum with lower computational overhead. \n<PIC>\n\nIn terms of convergence, as shown in the convergence history plot, SPGD quickly reaches the global minimum compared to other methods, including Simulated Annealing (SA) and MATLAB's \\(fmincon\\), both of which achieve the global minimum but with significantly higher CPU time. SPGD achieves the desired solution with fewer function evaluations and a notably faster execution time, approximately 20 times quicker than \\(fmincon\\), despite the latter requiring fewer evaluations. \n<PIC>\n\nOverall, SPGD provides an efficient optimization solution for complex, non-convex functions like the Peaks function, achieving high accuracy with minimal computational resources and proving robust against challenging landscapes. \n<PIC>images_list2411_04946v1_02411_04946v1_12411_04946v1_2 |
|
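A minimal sketch of the SPGD idea on the MATLAB Peaks function: alternate plain gradient steps with periodic perturbation phases that keep the best candidate; the step size, schedule, and perturbation scale are illustrative, not the paper's settings.

```python
import numpy as np

def peaks(p):
    """MATLAB's Peaks surface: many local minima, one global minimum."""
    x, y = p
    return (3 * (1 - x) ** 2 * np.exp(-x**2 - (y + 1) ** 2)
            - 10 * (x / 5 - x**3 - y**5) * np.exp(-x**2 - y**2)
            - np.exp(-(x + 1) ** 2 - y**2) / 3)

def grad(f, p, eps=1e-6):
    """Central-difference gradient, good enough for a 2-D sketch."""
    g = np.zeros_like(p)
    for i in range(len(p)):
        d = np.zeros_like(p); d[i] = eps
        g[i] = (f(p + d) - f(p - d)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
p = np.array([1.5, 2.0])                      # deliberately poor start
for step in range(2000):
    p = p - 0.01 * grad(peaks, p)             # plain gradient-descent step
    if step % 50 == 0:                        # periodic perturbation phase
        candidates = [p + rng.normal(0, 0.5, 2) for _ in range(20)] + [p]
        p = min(candidates, key=peaks)        # keep the steepest/lowest seed
print("found minimum:", p, "f =", peaks(p))   # near (0.23, -1.63), f ~ -6.55
```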
{"id": "ARXIV_173", "question": "How does the Steepest Perturbed Gradient Descent (SPGD) method achieve superior optimization outcomes in Scenario 1 involving four identical cubes compared to traditional Gradient Descent (GD), and what factors contribute to this difference?", "provenance": [60087], "ground_truth": "In Scenario 1, where four identical cubes are initially arranged in a non-optimal configuration, the Steepest Perturbed Gradient Descent (SPGD) method outperforms traditional Gradient Descent (GD) by successfully aligning the cubes into a compact global optimum. This configuration is challenging due to collision constraints, which can trap standard gradient-based methods in suboptimal local minima. \n<PIC>\nUnlike GD, which struggles to navigate these constraints and often converges prematurely, SPGD introduces controlled perturbations that allow the optimization process to escape local minima, thereby achieving a more optimal configuration. The final alignment achieved by SPGD effectively fulfills the collision constraints while minimizing the objective function, demonstrating its robustness in overcoming complex landscape challenges. \n<PIC>\n\nAs seen in the comparative images, SPGD not only reaches a solution that respects spatial constraints but also achieves a convergence closer to the theoretical global optimum, highlighting the method\u2019s advantage in navigating intricate constraint scenarios.", "images_list": ["2411_04946v1_3", "2411_04946v1_4"]} |
|
{"id": "ARXIV_174", "question": "In the context of open-set object detection using OW-DETR, how does the OW-DETR$^{++}$ model utilize DINOv2 and clustering methods to improve unknown object pseudo-labeling?", "provenance": [60088], "ground_truth": "OW-DETR$^{++}$ enhances pseudo-labeling for unknown objects by integrating DINOv2\u2019s activation maps and applying clustering techniques to filter and segment the image effectively. First, each input image is processed through the DINOv2 backbone to obtain a feature map, with DBSCAN used to distinguish between foreground and background by identifying and filtering out the largest cluster, which typically represents the background. To further refine object regions, agglomerative clustering is applied on the feature space, creating semantically coherent clusters. \n\nEach cluster undergoes additional processing: DINOv2-derived attention maps are normalized and averaged, with clusters maintaining high average activation values identified as containing potential objects. These clusters are then filtered, with spatially isolated instances surrounded by bounding boxes. Afterward, Non-Maximum Suppression (NMS) removes overlapping regions, ensuring a clear selection of pseudo-labeled unknown objects. \n<PIC>", "images_list": ["2411_05564v1_0"]} |
|
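A small sketch of the pseudo-labeling recipe described above (DBSCAN background removal, agglomerative grouping, activation filtering); the synthetic features stand in for DINOv2 outputs and all thresholds are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN, AgglomerativeClustering

rng = np.random.default_rng(0)
feats = np.vstack([
    rng.normal(0.0, 0.05, (150, 8)),   # dominant background patches
    rng.normal(1.0, 0.05, (30, 8)),    # patches of object A
    rng.normal(2.0, 0.05, (16, 8)),    # patches of object B
])
attn = np.concatenate([rng.uniform(0.0, 0.3, 150), rng.uniform(0.6, 1.0, 46)])

db_labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(feats)
background = np.bincount(db_labels[db_labels >= 0]).argmax()
fg = db_labels != background            # drop the largest (background) cluster

obj_labels = AgglomerativeClustering(n_clusters=2).fit_predict(feats[fg])
for c in range(2):
    mask = obj_labels == c
    if attn[fg][mask].mean() > 0.5:     # keep only high-activation clusters
        print(f"cluster {c}: pseudo-label candidate ({mask.sum()} patches)")
```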
{"id": "ARXIV_175", "question": "How does the OW-DETR$^{++}$ model compare to OpenDet in terms of handling unknown object detection and localization, particularly in the OpenImagesRoad benchmark?", "provenance": [60088], "ground_truth": "The OW-DETR$^{++}$ model excels in object localization and high-granularity classification, outperforming OpenDet in $AP_{all}$ and $AP_{sc}$ metrics. This advantage suggests that OW-DETR$^{++}$, with its pseudo-labeling approach, effectively identifies object locations with detailed classifications. However, OpenDet, based on contrastive learning, surpasses OW-DETR$^{++}$ in $AP_{u}$, indicating a stronger ability to detect and classify unknown objects distinctly. This contrast is visible in cases where OpenDet frequently avoids double detections, identifying unknown objects without mistaking them for known classes.\n<PIC>\n\nOW-DETR$^{++}$\u2019s reliance on pseudo-labeling enables it to handle localization but struggles with nuanced differentiation between unknown and known objects, an area where OpenDet shows proficiency. Depending on the focus\u2014precise unknown object detection or robust localization\u2014the choice between OpenDet and OW-DETR$^{++}$ can be made accordingly.", "images_list": ["2411_05564v1_1"]} |
|
{"id": "ARXIV_176", "question": "How does ClusterGraph handle non-Euclidean data structure visualization compared to standard dimensionality reduction techniques like UMAP, t-SNE, and PHATE?", "provenance": [60089], "ground_truth": "\nClusterGraph visualizes global data structure effectively by preserving inter-cluster distances without the distortions common in Euclidean embeddings. For instance, in a dataset with four clusters\u20140, 1, 2, and 3\u2014where each cluster has specific pairwise distances, ClusterGraph encodes these distances as edge labels, retaining the exact spatial relationships. This approach contrasts with UMAP, t-SNE, and PHATE, where the projection into Euclidean space alters these distances.\n\n<PIC>\nUMAP fails to capture the overall layout accurately, while t-SNE and PHATE introduce further distortions, making ClusterGraph a more reliable tool for preserving and visualizing non-Euclidean structures in high-dimensional data. This characteristic makes ClusterGraph especially valuable for datasets that do not conform to low-dimensional Euclidean spaces.", "images_list": ["2411_05443v1_0"]} |
|
{"id": "ARXIV_177", "question": "How does ClusterGraph use the k-nearest neighbor (k-NN) graph to estimate intrinsic distances in high-dimensional data, and what challenges might arise from disconnected k-NN graphs?", "provenance": [60089], "ground_truth": "\nClusterGraph utilizes the k-nearest neighbor (k-NN) graph to approximate intrinsic distances in high-dimensional datasets by connecting each point to its k-nearest neighbors. This approach helps estimate geodesic distances, which are often challenging to calculate directly. However, a k-NN graph may sometimes be disconnected, especially if the parameter \\( k \\) is set too low or the data originates from distinct substructures. In cases where the data is sampled from two or more separate distributions, even with a high \\( k \\), the k-NN graph may still contain isolated components, representing disconnected regions within the data.\n\n<PIC>\nTo address this, ClusterGraph handles each connected component separately, producing a distinct ClusterGraph for each component. This segmentation allows the method to preserve data structure more accurately without being limited by disconnected regions in the initial k-NN graph.", "images_list": ["2411_05443v1_1"]} |
|
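A minimal sketch of the k-NN-based intrinsic distance estimate described above, with disconnected components handled separately as ClusterGraph does; the two far-apart Gaussian blobs are synthetic.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path, connected_components

# Two distant blobs -> the k-NN graph splits into two components.
X = np.vstack([np.random.randn(100, 5), np.random.randn(100, 5) + 20])
knn = kneighbors_graph(X, n_neighbors=8, mode="distance")

n_comp, labels = connected_components(knn, directed=False)
print("connected components:", n_comp)       # 2 for this synthetic data

# Geodesic (shortest-path) distances are computed inside each component.
for c in range(n_comp):
    idx = np.where(labels == c)[0]
    sub = knn[idx][:, idx]                   # subgraph of one component
    D = shortest_path(sub, directed=False)   # intrinsic distance estimates
    print(f"component {c}: max geodesic {D.max():.2f}")
```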
{"id": "ARXIV_178", "question": "In the paper \, what is the second stage of the method proposed? What kind of functions does this stage mainly accomplish?", "provenance": [60090], "ground_truth": "The second stage of the method is the graph construction and normalization module. <PIC> The main function of this stage is to transform the function code into a code semantic graph to capture the rich control-flow and data-flow semantics in the code.", "images_list": ["2106_09282v1_0"]} |
|
{"id": "ARXIV_179", "question": "The paper proposes a new network structure to fuse global and local features and output interpretable feature weights. What mechanisms are included in this network structure?", "provenance": [60090], "ground_truth": "The attentive multi-encoder network proposed in the paper includes two mechanisms: a self-attention mechanism that captures the internal relationships and importance weights within each feature type, and a cross-attention mechanism that enables dynamic interaction between global graph features and local expert patterns. Together, these allow the model to effectively combine different information sources and generate interpretable feature weights for vulnerability detection. <PIC>", "images_list": ["2106_09282v1_1"]} |
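As a hedged illustration of the two mechanisms, the PyTorch sketch below wires a self-attention layer over the graph features into a cross-attention layer that attends to the local expert patterns. The dimensions, class name, and overall wiring are assumptions for exposition, not the paper's exact architecture.

```python
# Sketch of self-attention + cross-attention fusion (assumed sizes/names).
import torch
import torch.nn as nn

class AttentiveFusion(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, graph_feats: torch.Tensor, expert_feats: torch.Tensor):
        # self-attention: importance weights within one feature type
        g, self_w = self.self_attn(graph_feats, graph_feats, graph_feats)
        # cross-attention: global graph features attend to local
        # expert patterns; the weights are inspectable for interpretability
        fused, cross_w = self.cross_attn(g, expert_feats, expert_feats)
        return fused, self_w, cross_w
```

Returning the attention weight tensors alongside the fused features is one plausible way to expose the "interpretable feature weights" the record describes.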
|
{"id": "ARXIV_180", "question": "What are the key components of iAudit's overall architecture and what are their responsibilities?", "provenance": [60091], "ground_truth": "The architecture of iAudit consists of four key components, each with specific responsibilities. <PIC> 1) Detector serves as the core component, using a fine-tuned LLM to detect vulnerabilities in code, functioning similarly to human hacker intuition; 2) Reasoner, after Detector's initial judgment, is responsible for in-depth analysis of potential vulnerability causes, forming a two-stage fine-tuning mechanism with Detector; 3) Ranker acts as an evaluation component, responsible for assessing various possible vulnerability causes and selecting the most appropriate explanation; 4) Critic serves as the final judge, responsible for reviewing Ranker's judgment and deciding whether to accept, rerank, or merge multiple explanations. These four components work collaboratively to form a complete smart contract auditing system.", "images_list": ["2403_16073v3_0"]} |
|
{"id": "ARXIV_181", "question": "What strategy does iAudit use to improve detection performance and how effective is it?", "provenance": [60091], "ground_truth": "iAudit improves detection performance through a dual strategy approach: multiple prompts and majority voting. <PIC> The multiple prompts strategy serves two key purposes: expanding the training dataset volume and reducing single prompt bias. Combined with majority voting, this approach significantly enhances result reliability. The effectiveness is demonstrated by iAudit achieving superior performance metrics, with the highest scores among all methods: an F1 score of 0.9121, Recall of 0.8934, and Accuracy of 0.9111.", "images_list": ["2403_16073v3_1"]} |
|
{"id": "ARXIV_182", "question": "What is the overall architecture of GraphGPT?", "provenance": [60092], "ground_truth": "GraphGPT's architecture consists of three main components. <PIC> 1.A text-graph grounding paradigm that aligns graph structures with natural language space. 2.A dual-stage graph instruction tuning paradigm that includes self-supervised instruction tuning and task-specific instruction tuning. 3. A Chain-of-Thought (COT) distillation component that enhances step-by-step reasoning abilitiesimages_list2310_13023v3_0 |
|
{"id": "ARXIV_183", "question": "How is text-graph structure alignment achieved in GraphGPT?", "provenance": [60092], "ground_truth": "GraphGPT processes graph structure and text information through a graph encoder and a text encoder respectively, then uses contrastive learning to achieve alignment between the two modalities. <PIC>", "images_list": ["2310_13023v3_1"]} |
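The contrastive alignment step can be pictured as a standard symmetric InfoNCE objective over paired graph/text embeddings, as in the sketch below. This formulation is an assumption based on the description above, not GraphGPT's actual code.

```python
# Assumed InfoNCE-style alignment between graph and text embeddings.
import torch
import torch.nn.functional as F

def alignment_loss(graph_emb: torch.Tensor, text_emb: torch.Tensor,
                   temperature: float = 0.07) -> torch.Tensor:
    g = F.normalize(graph_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = g @ t.T / temperature                 # pairwise similarities
    targets = torch.arange(g.size(0), device=g.device)  # i-th graph <-> i-th text
    # symmetric contrastive loss over both matching directions
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2
```

Minimizing this loss pulls each graph embedding toward its paired text embedding and pushes it away from the other texts in the batch, which is what "alignment between the two modalities" amounts to here.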
|
{"id": "ARXIV_184", "question": "What strategy does Clear use to collect training data?", "provenance": [60093], "ground_truth": "Clear collects training data by using a sampling strategy that creates pairs of contracts based on two key relationships: vulnerable-vulnerable (V-V) and vulnerable-non-vulnerable (V-N). <PIC> Specifically, it first extracts all vulnerable contracts into a POS set and pairs each contract in the original dataset with a randomly selected contract from this POS set. The strategy deliberately avoids N-N relationships, and the resulting contract pairs are assigned correlation labels to guide the CL model's training process.", "images_list": ["2404_17839v1_0"]} |
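A small sketch of the described sampling strategy, under an assumed data layout of `(code, is_vulnerable)` tuples: every contract is paired with a random member of the POS set, so only V-V and V-N pairs (never N-N) can arise.

```python
# Sketch of Clear-style pair sampling (data layout is assumed).
import random

def build_pairs(contracts):
    # contracts: list of (code, is_vulnerable) tuples
    pos = [c for c in contracts if c[1]]        # the POS set: vulnerable only
    pairs = []
    for code, vul in contracts:
        anchor_code, _ = random.choice(pos)     # anchor is always vulnerable
        label = "V-V" if vul else "V-N"         # N-N never occurs by design
        pairs.append((code, anchor_code, label))
    return pairs
```

The correlation label on each pair then supervises the contrastive learning module discussed in the next record.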
|
{"id": "ARXIV_185", "question": "How does Clear's contrastive learning affect model performance?provenanceground_truthThe CL module enhances performance by facilitating the convergence of dispersed vulnerability samples in the feature space, which increases the proximity between similar vulnerability samples. <PIC> Through a unique sampling strategy, it strengthens the correlation among samples within the same vulnerability category and promotes their clustering behavior. This improved clustering enables the model to more effectively identify and discover potential smart contract vulnerabilities, resulting in significantly enhanced detection performance.images_list2404_17839v1_1 |
|
{"id": "ARXIV_186", "question": "How does BAClassifier achieve Bitcoin address behavior classification?", "provenance": [60094], "ground_truth": "BAClassifier achieves Bitcoin address behavior classification through three modules. <PIC> First, it constructs a transaction graph for each address; then, it performs graph representation learning using a graph neural network; finally, it conducts address-level behavior classification. Specifically, it first builds a chronological transaction graph to reflect each address's behavior, then employs a graph neural network to learn graph representations and generate embeddings, and finally aggregates these embeddings to produce address-level representations for classification predictions.", "images_list": ["2211_14582v1_0"]} |
|
{"id": "ARXIV_187", "question": "What is the purpose and process of Single-Transaction Address Compression?", "provenance": [60094], "ground_truth": "the single-transaction address compression technique mainly consists of two key steps: First, it identifies and merges address nodes that are connected to only one transaction, compressing them into an entity called a \"single-transaction hyper node\". Second, to preserve the transaction information of these merged addresses, the system uses Statistical Feature Extraction (SFE) method to calculate and retain the transfer features of these addresses, which ultimately become the attributes of the single-transaction hyper node in the graph. <PIC> Through this approach, the complexity of the graph can be significantly reduced while retaining critical transaction information.", "images_list": ["2211_14582v1_1"]} |
|
{"id": "ARXIV_188", "question": "What are the differences between TRIALMASTER and traditional methods during inference?", "provenance": [60095], "ground_truth": "Traditional methods use a depth-first search (DFS) system that generates multiple candidate tactics at each state and tries them one by one, while TRIALMASTER considers the entire proof history (including failed paths and backtracking information) and generates only one optimal tactic at a time. <PIC> TRIALMASTER operates independently without relying on backtracking systems like DFS or BFS, and produces two types of outputs: Lean tactics and backtrack instructions. During inference, it utilizes the complete history of paths, including previous tactics, states, and failed attempts, though it only uses the first tactic from its output while using Lean as a calculator for determining the next state.", "images_list": ["2404_07382v3_0"]} |
|
{"id": "ARXIV_189", "question": "Why does TRIALMASTER trained with longer proofs perform worse?", "provenance": [60095], "ground_truth": "This difference occurs because when proofs contain too much trial-and-error information, excessive failed paths can degrade the quality of training data. <PIC> Models trained with shorter proofs achieve higher success rates, as longer proofs accumulate excessive trial-and-error information that can detrimentally affect the model's performance. images_list2404_07382v3_1 |
|
{"id": "ARXIV_190", "question": "What is the overall architecture of MuFuzz?", "provenance": [60096], "ground_truth": "MuFuzz's architecture consists of three main components: <PIC> Sequence-Aware Mutation, Mask-Guided Seed Mutation, and Dynamic-Adaptive Energy Adjustment. The Sequence-Aware Mutation module uses data-flow-based feedback to determine meaningful transaction orders and explore deeper states, while the Mask-Guided Seed Mutation module biases transaction input generation to hit target branches, and the Dynamic-Adaptive Energy Adjustment module balances fuzzing resource allocation during execution.", "images_list": ["2312_04512v2_0"]} |
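As a toy illustration of the energy-adjustment idea only, the snippet below gives a seed more fuzzing energy when it keeps uncovering new branches; the heuristic, field names, and constants are invented for exposition and are not MuFuzz's actual policy.

```python
# Toy sketch: reward productive seeds with more energy (heuristic assumed).
def assign_energy(seeds, base: int = 10, cap: int = 100):
    for s in seeds:
        # discovery rate: new branches found per execution of this seed
        rate = s["new_branches"] / max(1, s["executions"])
        s["energy"] = min(cap, int(base * (1 + 10 * rate)))
    return seeds
```

The point of any such scheme is the balance the record describes: spend the fuzzing budget where coverage is still growing instead of splitting it evenly.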
|
{"id": "ARXIV_191", "question": "How much do the individual components contribute to MuFuzz's performance?provenanceground_truthEach component makes important contributions to MuFuzz's performance. <PIC> In particular, generating meaningful transaction sequences plays the most critical role - without the sequence-aware mutation component, coverage decreases by 18% on small contracts and 26% on large contracts. Without mask-guided seed mutation and dynamic energy adjustment, coverage drops by 9% and 10% on small contracts, and 19% and 25% on large contracts respectively. The components also significantly impact bug detection capability - MuFuzz discovers 14%, 6%, and 11% more bugs on small contracts and 27%, 22%, and 24% more bugs on large contracts with all components enabled compared to versions without each component.", "images_list": ["2312_04512v2_1"]} |
|
{"id": "ARXIV_192", "question": "What did the paper, \"Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions\", discover when experimenting with Teaching Sequences?", "provenance": [60097], "ground_truth": "The paper found that when using Teaching Sequences, Transformers achieved perfect accuracy after receiving the teaching sequence in all 5 test tasks. <PIC> The accuracy remained at or very close to 100% after receiving the teaching sequence, and notably, even for traditionally challenging tasks like DNFs and sparse parities where Transformers typically struggled in the standard setting, they were able to achieve successful learning outcomes using Teaching Sequences.", "images_list": ["2310_03016v1_0"]} |
|
{"id": "ARXIV_193", "question": "In the Direct Evaluation experiment, how did GPT-4's performance differ from other LLMs on Conjunction and Majority tasks?provenanceground_truthGPT-4 significantly outperformed other models. <PIC> While GPT-3.5-Turbo and LLaMA-2 performed similarly to each other and could only match or exceed the nearest neighbor baseline when dimensions were up to 7, GPT-4 demonstrated superior performance by outperforming all other models and even slightly outperforming the nearest neighbor baseline in the challenging 15-dimensional case, where the task remains complex with 2^128 possible Boolean functions.images_list2310_03016v1_1 |
|
{"id": "ARXIV_194", "question": "Who are the main participants in the BCFL system and how do they interact?", "provenance": [60098], "ground_truth": "The BCFL system mainly includes two types of participants: the Model Owner and Clients. Specifically, the Model Owner publishes FL tasks, while Clients are responsible for training local models and broadcasting model updates to the blockchain network. The verified model updates are recorded on the blockchain. <PIC> This approach ensures that the Model Owner can only access model updates rather than raw data, thereby protecting participants' privacy.", "images_list": ["2202_10938v1_0"]} |
|
{"id": "ARXIV_195", "question": "When do clients and model owners achieve maximum utility in the BCFL model?", "provenance": [60098], "ground_truth": "Both parties achieve maximum utility when they both choose optimal strategies. <PIC> This was demonstrated in experiments with 50 clients (each with data size \u03bci = 10) comparing four strategy combinations - both sides optimal, one side random with other optimal, and both random - where the results showed that clients and the Model Owner obtained higher utilities than all other strategy combinations when they both employed optimal strategies.", "images_list": ["2202_10938v1_1"]} |
|
{"id": "ARXIV_196", "question": "What are the application domains of LLM4BS?", "provenance": [60099], "ground_truth": "LLM4BS has five main application domains: Code Auditor, Abnormal Transaction Analysis, Fuzzer, Developer, and Community Participants. <PIC>", "images_list": ["2403_14280v4_0"]} |
|
{"id": "ARXIV_197", "question": "What is the workflow of LLM4FUZZ? How does it utilize LLM to guide fuzzing?", "provenance": [60099], "ground_truth": "LLM4FUZZ represents an innovation in blockchain security that combines the capabilities of large language models with fuzzing methods to proactively discover vulnerabilities that might compromise smart contract integrity. <PIC> By deploying LLMs to intelligently guide the fuzzing process, this technology can more specifically explore areas in smart contracts that are most likely to contain security flaws, which not only simplifies the anomaly detection process but also improves its accuracy and depth.", "images_list": ["2403_14280v4_1"]} |
|
{"id": "ARXIV_198", "question": "How does Astute RAG improve RAG system performance across subsets partitioned by retrieval precision, especially in scenarios with conflicting knowledge and in worst-case conditions on RGB, compared to other RAG baselines?", "provenance": [60100], "ground_truth": "Astute RAG enhances Retrieval-Augmented Generation (RAG) performance across subsets differentiated by retrieval precision, especially under challenging conditions such as conflicting knowledge and worst-case scenarios on RGB. First, in terms of performance across retrieval precision, Astute RAG consistently outperforms traditional RAG baselines across various levels of retrieval precision, as demonstrated in the experimental analysis. Unlike other methods, it maintains high performance in both high and low retrieval precision settings, improving trustworthiness without sacrificing gains at higher precision. Notably, when retrieval precision is extremely low (close to zero), Astute RAG surpasses other RAG variants, which otherwise perform worse than the 'No RAG' baseline. This robustness aligns with the worst-case results on RGB, underscoring Astute RAG's effectiveness in handling imperfect retrieval. <PIC> Second, regarding effectiveness in knowledge conflict resolution, in scenarios with conflicting knowledge between internal (LLM) and external (retrieved) sources, Astute RAG selects the correct answer in approximately 80% of cases, making it the most effective solution for these situations. Additionally, it improves performance even on subsets where neither internal nor external knowledge alone is correct, indicating its capability to synthesize partially correct information from both sources, thus resolving conflicts and yielding accurate answers. <PIC>. Lastly, in the context of worst-case performance on RGB, where all retrieved documents are negative, Astute RAG demonstrates superior robustness to noise compared to other RAG baselines. While other methods fall short of the No RAG baseline in these cases, Astute RAG achieves performance close to No RAG, emphasizing its effectiveness in mitigating the detrimental effects of imperfect retrieval. <PIC> Overall, Astute RAG proves robust and reliable across varied and challenging conditions, providing a more resilient approach to retrieval-augmented generation compared to standard RAG methods.images_list2410_07176v1_02410_07176v1_12410_07176v1_2 |
|
{"id": "ARXIV_199", "question": "How does Astute RAG utilize its intermediate outputs and internal knowledge to detect and correct erroneous information in generated responses?", "provenance": [60100], "ground_truth": "Astute RAG leverages its intermediate outputs and internal knowledge to detect and correct erroneous information in generated responses by iteratively comparing generated content with externally retrieved information. The figure illustrates two representative examples of this mechanism in action. <PIC> In the first example, the language model (LLM) operating without retrieval generates an incorrect answer, while Astute RAG with retrieval provides the correct response. Astute RAG accomplishes this by identifying inconsistencies between its generated passage and an external passage, effectively mitigating confirmation bias by cross-referencing and isolating the erroneous information. In the second example, the LLM alone correctly answers the query, while RAG introduces an error due to noisy retrieval results. Here, Astute RAG\u2019s internal knowledge aids in discerning the correct answer within the noisy retrieved data by systematically validating retrieved information against its pre-existing knowledge. This capability enables Astute RAG to resolve discrepancies by assessing the credibility of both internal and external information, thereby refining the accuracy of its final output. By continuously contrasting internal and external sources, Astute RAG can pinpoint conflicts and filter out unreliable content, ensuring that only the most accurate information is presented. This structured verification process enhances the robustness of RAG systems, effectively reducing misinformation in generated responses.", "images_list": ["2410_07176v1_3"]} |
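A loose sketch of the consolidation idea follows; `llm` is a placeholder callable, and the token-overlap `support_score` is a crude stand-in for the paper's LLM-driven consistency checking, so this shows only the shape of the mechanism, not Astute RAG's actual procedure.

```python
# Loose sketch: draft internally, draft per passage, keep the candidate
# best supported by both internal and external sources.
def support_score(answer: str, sources: list[str]) -> int:
    # crude token-overlap agreement measure (stand-in for LLM judging)
    a = set(answer.lower().split())
    return sum(len(a & set(s.lower().split())) for s in sources)

def astute_answer(question: str, retrieved: list[str], llm) -> str:
    internal = llm(f"Answer from memory only: {question}")
    candidates = [internal] + [
        llm(f"Answer using this passage:\n{p}\n\nQ: {question}")
        for p in retrieved
    ]
    # cross-check every candidate against all sources and keep the
    # one most consistent with internal and external knowledge combined
    return max(candidates,
               key=lambda c: support_score(c, [internal] + retrieved))
```

Keeping the internally drafted answer in the candidate pool is what lets the system recover when the retrieved passages are uniformly noisy, mirroring the second example in the record above.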
|
|