• Scaling Laws for Precision
    Nov 18 2024
    ⚖️ Scaling Laws for Precision

    This research paper investigates the impact of precision in training and inference on the performance of large language models. The authors explore how precision affects the effective parameter count and propose scaling laws that predict performance degradation from low-precision training and post-training quantization. They find that overtrained models are more sensitive to post-training quantization, and that training larger models in lower precision might be compute-optimal. Their unified scaling law accounts for both training and post-training effects and predicts loss across varied precision settings, ultimately suggesting that the standard practice of training models in 16-bit might be suboptimal. A toy illustration of such a precision-aware scaling law follows this entry.

    📎 Link to paper
    🌐 Read their Tweet
    19 mins
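
    As a companion to the summary above, here is a minimal sketch of a precision-aware scaling law, assuming a Chinchilla-style loss in which lower weight precision shrinks the effective parameter count. The exponential effective-parameter term, the gamma constant, and the reuse of the public Chinchilla coefficients are illustrative stand-ins, not the paper's fitted values.

    ```python
    import math

    def effective_params(n_params: float, precision_bits: float, gamma: float = 2.2) -> float:
        """Illustrative assumption: low-precision weights behave like a smaller model.
        gamma is a placeholder sensitivity constant, not a fitted value."""
        return n_params * (1.0 - math.exp(-precision_bits / gamma))

    def loss(n_params: float, n_tokens: float, precision_bits: float,
             A: float = 406.4, B: float = 410.7, E: float = 1.69,
             alpha: float = 0.34, beta: float = 0.28) -> float:
        """Chinchilla-style loss with precision folded into the parameter term.
        The coefficients are the public Chinchilla fits, used only for illustration."""
        n_eff = effective_params(n_params, precision_bits)
        return E + A / n_eff**alpha + B / n_tokens**beta

    # Same model and data budget at different training precisions:
    # lower precision acts like training a model with fewer parameters.
    for bits in (16, 8, 4):
        print(bits, "bits:", round(loss(7e9, 1e12, bits), 3))
    ```
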
  • Test-Time Training
    Nov 14 2024
    ⌛️ The Surprising Effectiveness of Test-Time Training for Abstract Reasoning

    This paper examines how test-time training (TTT) can enhance the abstract reasoning abilities of large language models (LLMs). TTT, which updates model parameters during inference, significantly improves performance on the Abstraction and Reasoning Corpus (ARC) benchmark. Key factors for effective TTT include initial fine-tuning, auxiliary task design, and instance-specific training. The approach achieves state-of-the-art results on ARC and, when combined with program synthesis, matches average human performance. This study suggests that dedicating computation at test time, rather than relying on symbolic components, may be essential for complex reasoning tasks. A minimal sketch of the per-instance update loop follows this entry.

    📎 Link to paper

    15 mins
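
    As referenced above, a minimal sketch of the test-time training loop: fine-tune a copy of the model on (prompt, target) pairs derived from the test instance itself, then answer with the adapted copy. The full-parameter update, the loss over the whole sequence, and the hyperparameters are simplifications; the paper's recipe (LoRA adapters, augmentations, per-instance task construction) is more involved.

    ```python
    from copy import deepcopy

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    def test_time_train(model, tokenizer, pairs, steps=10, lr=1e-5):
        """Fine-tune a copy of the model on instance-specific (prompt, target) pairs
        built from the test example (e.g. leave-one-out tasks over its demonstrations),
        then return the adapted copy for answering the real query."""
        adapted = deepcopy(model)  # keep the base weights untouched across instances
        adapted.train()
        optimizer = torch.optim.AdamW(adapted.parameters(), lr=lr)
        for _ in range(steps):
            for prompt, target in pairs:
                batch = tokenizer(prompt + target, return_tensors="pt")
                loss = adapted(**batch, labels=batch["input_ids"]).loss
                loss.backward()
                optimizer.step()
                optimizer.zero_grad()
        adapted.eval()
        return adapted

    # Example with a small public model (names here are placeholders, not the paper's setup):
    # tok = AutoTokenizer.from_pretrained("gpt2")
    # lm = AutoModelForCausalLM.from_pretrained("gpt2")
    # adapted = test_time_train(lm, tok, [("2 4 -> 4 8\n1 3 -> ", "2 6")])
    ```
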
  • Qwen2.5-Coder
    Nov 12 2024
    🔷 Qwen2.5-Coder Technical Report

    The report introduces the Qwen2.5-Coder series, which includes the Qwen2.5-Coder-1.5B and Qwen2.5-Coder-7B models. These models are purpose-built for coding tasks and were pre-trained on a massive dataset of 5.5 trillion code-related tokens. A significant focus is placed on data quality, with detailed cleaning and filtering pipelines, and on training techniques such as file-level and repo-level pre-training. The models were rigorously tested on benchmarks covering code generation, completion, reasoning, repair, and text-to-SQL, where they demonstrated strong performance, even surpassing larger models in some areas. The report concludes with suggestions for future research, such as scaling model size and enhancing reasoning abilities. A short usage sketch for the released instruct checkpoint follows this entry.

    📎 Link to paper

    24 mins
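
    For context on the summary above, a minimal generation sketch using the released instruct checkpoint through Hugging Face transformers. The checkpoint name and chat-template call reflect the public release rather than anything prescribed by the report, and larger or smaller variants can be substituted.

    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen2.5-Coder-7B-Instruct"  # assumed public checkpoint name
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

    messages = [{"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}]
    inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

    # Decode only the newly generated tokens, not the echoed prompt.
    output = model.generate(inputs, max_new_tokens=256)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
    ```
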
  • Attacking Vision-Language Computer Agents via Pop-ups
    Nov 9 2024
    😈 Attacking Vision-Language Computer Agents via Pop-ups

    This research paper examines vulnerabilities in vision-language models (VLMs) that power autonomous agents performing computer tasks. The authors show that these VLM agents can be easily tricked into clicking on carefully crafted malicious pop-ups, which humans would typically recognize and avoid. These deceptive pop-ups mislead the agents, disrupting their task performance and reducing success rates. The study tests various pop-up designs across different VLM agents and finds that even simple countermeasures, such as instructing the agent to ignore pop-ups, are ineffective. The authors conclude that these vulnerabilities highlight serious security risks and call for more robust safety measures to ensure reliable agent performance. A toy version of the attack-success metric follows this entry.

    📎 Link to paper

    22 mins
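
    A toy version of the attack-success measurement mentioned above, assuming the agent emits a click coordinate per episode and the injected pop-up occupies a known bounding box on the screenshot. This illustrates the metric only; the paper's pop-up designs, agent frameworks, and evaluation harness are not reproduced here.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Box:
        """Pixel bounding box of the injected pop-up on the screenshot."""
        left: int
        top: int
        right: int
        bottom: int

        def contains(self, x: int, y: int) -> bool:
            return self.left <= x <= self.right and self.top <= y <= self.bottom

    def attack_success_rate(clicks, popup_boxes):
        """Fraction of episodes where the agent's click lands inside the pop-up
        instead of the legitimate UI target."""
        hits = sum(box.contains(x, y) for (x, y), box in zip(clicks, popup_boxes))
        return hits / len(clicks)

    # Two episodes: the first click falls inside the pop-up, the second does not.
    print(attack_success_rate([(120, 80), (400, 300)], [Box(100, 50, 200, 120), Box(0, 0, 50, 50)]))
    ```
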
  • Number Cookbook
    Nov 8 2024
    📓 Number Cookbook: Number Understanding of Language Models and How to Improve It

    This research paper examines the number understanding and processing ability (NUPA) of large language models (LLMs). The authors create a benchmark that tests LLMs on four numerical representations (integers, floating-point numbers, fractions, and scientific notation) across 17 tasks grouped into four ability categories. They find that, despite strong problem-solving capabilities, LLMs struggle with basic numerical operations. The paper evaluates methods to enhance NUPA during pretraining and finetuning, such as specialized tokenizers, positional encodings, and data formats, and notes the limitations of chain-of-thought techniques for numerical tasks. The authors call for further research to improve LLMs' fundamental numerical capabilities. A toy exact-match probe in this style follows this entry.

    📎 Link to paper
    16 mins
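
    A toy exact-match probe in the spirit of the benchmark described above, restricted to one representation (integers) and one task (addition) at a controlled digit length. The `query_model` callable is a stand-in for whatever LLM interface is being tested; the actual benchmark spans four representations and 17 tasks.

    ```python
    import random

    def make_addition_item(digits: int) -> tuple[str, str]:
        """One toy NUPA-style item: integer addition at a fixed digit length."""
        a = random.randint(10 ** (digits - 1), 10 ** digits - 1)
        b = random.randint(10 ** (digits - 1), 10 ** digits - 1)
        return f"Compute {a} + {b}. Answer with the number only.", str(a + b)

    def exact_match_accuracy(query_model, digits: int = 6, n: int = 100) -> float:
        """query_model is a hypothetical callable: prompt -> the model's text answer."""
        items = [make_addition_item(digits) for _ in range(n)]
        correct = sum(query_model(prompt).strip() == gold for prompt, gold in items)
        return correct / n

    # Sanity check with a trivial "model" that always answers "0":
    print(exact_match_accuracy(lambda prompt: "0"))  # ~0.0
    ```
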
  • Jigsaw Puzzles
    Nov 7 2024
    🧩 Jigsaw Puzzles: Splitting Harmful Questions to Jailbreak Large Language Models

    This research paper investigates the vulnerabilities of large language models (LLMs) to "jailbreak" attacks, where malicious users attempt to trick the model into generating harmful content. The authors propose a new attack strategy called Jigsaw Puzzles (JSP), which breaks down harmful questions into harmless fractions and feeds them to the LLM over multiple turns, bypassing the model's built-in safeguards. The paper explores the effectiveness of JSP across different LLMs and harmful categories, analyzing the role of various prompt designs and splitting strategies. The authors also compare JSP's performance to other existing jailbreak methods and demonstrate its ability to overcome various defense mechanisms. The paper concludes by highlighting the importance of continued research and development of more robust defenses against such attacks.

    📎 Link to paper

    17 mins
  • Multi-expert Prompting with LLMs
    Nov 5 2024
    🤝 Multi-expert Prompting with LLMs

    The research paper presents Multi-expert Prompting, a novel method for improving the reliability, safety, and usefulness of Large Language Models (LLMs). Multi-expert Prompting simulates multiple experts within an LLM, collects their answers to an instruction, and aggregates them into a final response. This process leverages the Nominal Group Technique, a human-designed decision-making framework, to produce a balanced and comprehensive output that overcomes the limitations of single-expert approaches. The authors demonstrate the method's effectiveness through thorough evaluation on various benchmarks, highlighting significant improvements in truthfulness, factuality, toxicity reduction, and overall informativeness compared to existing baselines. A condensed sketch of the expert-simulation flow follows this entry.

    📎 Link to paper

    13 mins
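
    A condensed sketch of the expert-simulation flow summarized above, assuming a generic `chat` callable that wraps a single LLM call. The final aggregation prompt is a compressed stand-in for the paper's Nominal Group Technique steps, which merge, rank, and reconcile the expert answers more systematically.

    ```python
    def multi_expert_answer(chat, instruction: str, n_experts: int = 3) -> str:
        """chat is a hypothetical callable: prompt string -> completion string."""
        # 1. Ask the model to propose expert roles suited to this instruction.
        roles = chat(
            f"List {n_experts} distinct expert roles best suited to answer the following "
            f"instruction, one short role per line:\n{instruction}"
        ).splitlines()[:n_experts]

        # 2. Collect one independent answer per simulated expert.
        answers = [chat(f"You are {role}. Answer the instruction:\n{instruction}") for role in roles]

        # 3. Aggregate the individual answers into a single response.
        joined = "\n\n".join(
            f"Expert {i + 1} ({role}):\n{answer}" for i, (role, answer) in enumerate(zip(roles, answers))
        )
        return chat(
            "Combine the expert answers below into one accurate, balanced response. "
            "Keep points the experts agree on, resolve conflicts, and drop unsupported claims.\n\n"
            + joined
        )
    ```
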
  • Investigating the Role of Prompting and External Tools in Hallucination Rates of LLMs
    Nov 3 2024
    🔎 Investigating the Role of Prompting and External Tools in Hallucination Rates of Large Language Models

    This paper examines the effectiveness of different prompting techniques and frameworks for mitigating hallucinations in large language models (LLMs). The authors investigate how techniques including Chain-of-Thought, Self-Consistency, and Multiagent Debate can improve reasoning capabilities and reduce factual inconsistencies. They also explore the impact of LLM agents, AI systems that combine LLMs with external tools to perform complex tasks, on hallucination rates. The study finds that the best strategy for reducing hallucinations depends on the specific NLP task, and that while external tools can extend the capabilities of LLMs, they can also introduce new hallucinations. A minimal sketch of self-consistency voting follows this entry.

    📎 Link to paper

    16 mins
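
    As a concrete example of one technique surveyed above, a minimal sketch of self-consistency: sample several chain-of-thought completions at non-zero temperature and keep the majority final answer. The `sample_completion` callable and the last-number extraction rule are assumptions made for illustration.

    ```python
    import re
    from collections import Counter

    def extract_answer(completion: str) -> str:
        """Toy extraction rule: take the last number mentioned in the completion."""
        numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)
        return numbers[-1] if numbers else ""

    def self_consistency(sample_completion, question: str, k: int = 5) -> str:
        """sample_completion is a hypothetical callable: prompt -> one sampled
        chain-of-thought completion (temperature > 0 so the k samples differ)."""
        prompt = question + "\nLet's think step by step."
        answers = [extract_answer(sample_completion(prompt)) for _ in range(k)]
        votes = Counter(answer for answer in answers if answer)
        # Majority voting over final answers smooths out individual reasoning slips.
        return votes.most_common(1)[0][0] if votes else ""
    ```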