url | post_id | title | author | content | date
|---|---|---|---|---|---|
https://www.lesswrong.com/posts/GE6pcmmLc3kdpNJja/fuzzing-llms-sometimes-makes-them-reveal-their-secrets | GE6pcmmLc3kdpNJja | Fuzzing LLMs sometimes makes them reveal their secrets | Fabien | Scheming AIs may have secrets that are salient to them, such as:
What their misaligned goal is; What their takeover plan is and what coordination signals they use to collude with other AIs (if they have one); What good behavior looks like on a task they sandbag.
Extracting these secrets would help reduce AI risk, but how... | 2025-02-26 |
https://www.lesswrong.com/posts/MfTsBzTLAgyu764od/exploring-unfaithful-deceptive-cot-in-reasoning-models | MfTsBzTLAgyu764od | Exploring unfaithful/deceptive CoT in reasoning models | lucy-wingard | Produced as the application project for Neel Nanda's MATS 8.0 Stream
Summary
What problem am I investigating?
Reasoning models define the current state of the art for AI performance, but their safety properties are understudied. While some argue that Chain-of-Thought (CoT) reasoning improves safety by allowing analysis... | 2025-02-28 |
https://www.lesswrong.com/posts/a4tPMomzHhCunwqLX/you-can-just-wear-a-suit | a4tPMomzHhCunwqLX | You can just wear a suit | lsusr | I like stories where characters wear suits.
Since I like suits so much, I realized that I should just wear one.
The result has been overwhelmingly positive. Everyone loves it: friends, strangers, dance partners, bartenders. It makes them feel like they're in a Kingsmen film. Even teenage delinquents and homeless beggar... | 2025-02-26 |
https://www.lesswrong.com/posts/e63LT5Nz2TMZNFqhF/matthew-yglesias-misinformation-mostly-confuses-your-own | e63LT5Nz2TMZNFqhF | Matthew Yglesias - Misinformation Mostly Confuses Your Own Side | Siebe | Matthew Yglesias' post, Misinformation Mostly Confuses Your Own Side, argues that political misinformation tends to harm the group spreading it more than it persuades opponents. Key points:
Misinformation primarily affects in-group thinking: Supporters of a politician or movement are more likely to believe falsehoods f... | 2025-02-26 |
https://www.lesswrong.com/posts/aq84rfx3XRyLd9y2v/optimizing-feedback-to-learn-faster | aq84rfx3XRyLd9y2v | Optimizing Feedback to Learn Faster | Simon Skade | (This post is to a significant extent just a rewrite of this excellent comment from niplav. It is one of the highest-leverage insights I know for learning faster.)
Theory
To a large extent we learn by updating on feedback. You might e.g. get positive feedback from having an insight that lets you solve a math problem, w... | 2025-02-26 |
https://www.lesswrong.com/posts/6HrehKoLnXsr6Byff/osaka | 6HrehKoLnXsr6Byff | Osaka | lsusr | The more I learn about urban planning, the more I realize that the American city I live in is dystopic. I'm referring specifically to urban planning, and I'm not being hyperbolic. Have you ever watched the teen dystopia movie Divergent? The whole city is perfectly walkable (or parkourable, if you're Dauntless). I don't... | 2025-02-26 |
https://www.lesswrong.com/posts/Wewdcd52zwfdGYqAi/time-to-welcome-claude-3-7 | Wewdcd52zwfdGYqAi | Time to Welcome Claude 3.7 | Zvi | Anthropic has reemerged from stealth and offers us Claude 3.7.
Given this is named Claude 3.7, an excellent choice, from now on this blog will refer to what they officially call Claude Sonnet 3.5 (new) as Sonnet 3.6.
Claude 3.7 is a combination of an upgrade to the underlying Claude model, and the move to a hybrid mode... | 2025-02-26 |
https://www.lesswrong.com/posts/FrekePKc7ccQNEkgT/paper-jacobian-sparse-autoencoders-sparsify-computations-not | FrekePKc7ccQNEkgT | [PAPER] Jacobian Sparse Autoencoders: Sparsify Computations, Not Just Activations | lucy.fa | We just published a paper aimed at discovering “computational sparsity”, rather than just sparsity in the representations. In it, we propose a new architecture, Jacobian sparse autoencoders (JSAEs), which induces sparsity in both computations and representations. CLICK HERE TO READ THE FULL PAPER.
In this post, I’ll gi... | 2025-02-26 |
https://www.lesswrong.com/posts/PEnEYBcqFD5rz9NPj/outlining-is-a-historically-recent-underutilized-gift-to | PEnEYBcqFD5rz9NPj | outlining is a historically recent underutilized gift to family | daijin | outlining is specialized work which reduces a text to complete summary statements and collapsed detail.
an outline containing a work sprint. note the collapsed points in the 'old sprints' which hide all the old sprint detail.
outlining is historically recent, since particular digital interfaces (such as Workflowy, Org ... | 2025-02-26 |
https://www.lesswrong.com/posts/k76WHwH328asiRABm/market-capitalization-is-semantically-invalid | k76WHwH328asiRABm | Market Capitalization is Semantically Invalid | Zero Contradictions | In this essay, I will debunk the concept of market capitalization.
But first, let’s consider something else: the mass of a pile of bricks.
Suppose that I have a pile of identical bricks. I want to know the total mass of the bricks for some reason. So, I measure the mass of one brick on a scale. The mass of one brick is... | 2025-02-27 |
https://www.lesswrong.com/posts/iZddCFwNEuRHweyTQ/name-for-standard-ai-caveat | iZddCFwNEuRHweyTQ | Name for Standard AI Caveat? | yehuda-rimon | I have discussions that ignore the future disruptive effects of AI all the time.
The national debt is a real problem. Social security will collapse. The environment is deteriorating. You haven't saved enough for pension. What is my two year old going to do when she is twenty. Could Israel make peace with the Palestinia... | 2025-02-26 |
https://www.lesswrong.com/posts/njnEpJMriyvmAiEkz/ai-models-can-be-dangerous-before-public-deployment-2 | njnEpJMriyvmAiEkz | METR: AI models can be dangerous before public deployment | LinkpostBot | Note: This is an automated crosspost from METR. The bot selects content from many AI safety-relevant sources. Not affiliated with the authors or their organization.
Many frontier AI safety policies from scaling labs (e.g. OpenAI’s Preparedness Framework, Google DeepMind’s Frontier Safety Framework, etc.), as well as pa... | 2025-02-26 |
https://www.lesswrong.com/posts/kq9KCHb5pLbezao5E/the-stag-hunt-cultivating-cooperation-to-reap-rewards | kq9KCHb5pLbezao5E | The Stag Hunt—cultivating cooperation to reap rewards | james-brown | This is a short primer on the Stag Hunt, as part of a series looking for an alternative game theory poster-child to the Prisoner's Dilemma, which has some issues. I think the Stag Hunt could be utilised more in real world applications. I'm interested in feedback.
We’ve all attempted to collaborate with a friend on some... | 2025-02-25 |
https://www.lesswrong.com/posts/pdh3246yv2DthvAHT/intellectual-lifehacks-repo | pdh3246yv2DthvAHT | Intellectual lifehacks repo | Etoile de Scauchy | I really like dimensional analysis. It's a simple and powerful trick, almost magical, that allows you to distinguish between plausible and chimerical formulas. I really like the type signature. It's a simple but ontologically important change for classifying different objects. [1] I really like computational complexity, ... | 2025-02-25 |
https://www.lesswrong.com/posts/9zs6DAjc53p8kiQwe/world-yuan-a-currency-free-electronic-exchange-system | 9zs6DAjc53p8kiQwe | World Yuan: A Currency-Free Electronic Exchange System | Hawk-Shea | Prediction
When Bitcoin first came into the public eye more than a decade ago, I became interested in its fundamentals and began exploring alternative ways to implement cryptocurrencies. However, the current state of cryptocurrency development is unsatisfactory: dramatic price fluctuations make it difficult to use in g... | 2025-02-25 |
https://www.lesswrong.com/posts/AAKXjRmBRbJJwGthT/economics-roundup-5 | AAKXjRmBRbJJwGthT | Economics Roundup #5 | Zvi | While we wait for the verdict on Anthropic’s Claude Sonnet 3.7, today seems like a good day to catch up on the queue and look at various economics-related things.
Table of Contents
The Trump Tax Proposals.
Taxing Unrealized Capital Gains.
Extremely High Marginal Tax Rates.
Trade Barriers By Any Name Are Terrible.
Destr... | 2025-02-25 |
https://www.lesswrong.com/posts/ajudxufov7HnYxHFr/making-alignment-a-law-of-the-universe | ajudxufov7HnYxHFr | Making alignment a law of the universe | juggins | [Crossposted from my substack Working Through AI. I'm pretty new to writing about AI safety, so if you have any feedback I would appreciate it if you would leave a comment. If you'd rather do so anonymously, I have a feedback form.]
TLDR: When something helps us achieve our goals, but is not an end in itself, we can sa... | 2025-02-25 |
https://www.lesswrong.com/posts/WEpGE5GugjWscfFJP/revisiting-conway-s-law | WEpGE5GugjWscfFJP | Revisiting Conway's Law | annebrandes1@gmail.com | This is a post about how running companies will change. It seems safe to say that markets are becoming more competitive, since AI tools are raising the floor for incumbents and new market entrants alike. But does this shift raise the floor symmetrically?
And what happens if this shift benefits incumbents? It's easy to ... | 2025-02-25 |
https://www.lesswrong.com/posts/5Xf2hwsjbXkNp2zFm/demystifying-the-pinocchio-paradox | 5Xf2hwsjbXkNp2zFm | Demystifying the Pinocchio Paradox | Zantarus | I've recently come across the Pinocchio Paradox:
If Pinocchio says "my nose will grow."
Does his nose grow or not grow?
Tracing through this scenario, we can see this is related to the Epimenides Paradox.
The scenario in the Pinocchio Paradox assumes Pinocchio can predict when his nose will grow or not grow with perfec... | 2025-02-25 |
https://www.lesswrong.com/posts/MYX9XcsyRxjHtav6G/upcoming-protest-for-ai-safety | MYX9XcsyRxjHtav6G | Upcoming Protest for AI Safety | matthew-milone | The PauseAI movement is planning protests for Friday, February 28th in several American metropolitan areas.
I understand some people's reservations about protests causing the public to develop negative associations with AI safety. In response, I invite them to join us as a way of directing both the style of the protest... | 2025-02-25 |
https://www.lesswrong.com/posts/mHEzJdyJSjxKFhjwD/what-an-efficient-market-feels-from-inside | mHEzJdyJSjxKFhjwD | what an efficient market feels from inside | DMMF | “I often think of the time I met Scott Sumner and he said he pretty much assumes the market is efficient and just buys the most expensive brand of everything in the grocery store.” - a Tweet
It’s a funny quip, but it captures the vibe a lot of people have about efficient markets: everything’s priced perfectly, no deals... | 2025-02-25 |
https://www.lesswrong.com/posts/vFqLsmGogA9M3L3X8/crosspost-strategic-wealth-accumulation-under-transformative | vFqLsmGogA9M3L3X8 | [Crosspost] Strategic wealth accumulation under transformative AI expectations | arden446 | [Crossposted from EA Forum]
This is a linkpost with a summary of the key parts of Caleb’s new paper and its implications for EAs. You can read the original on arXiv here.
TL;DR
This paper analyzes how expectations of Transformative AI (TAI) affect current economic behavior by introducing a novel mechanism where automat... | 2025-02-25 |
https://www.lesswrong.com/posts/CWnQxLNNpvpr5ke9C/the-manifest-manifesto | CWnQxLNNpvpr5ke9C | The manifest manifesto | dkl9 | Followup to: If you weren't such an idiot...
We believe in staying alive — at least when life is or will be worth it — to do things that are useful and/or fun.
Protect yourself from extreme cold by techniques such as wearing clothes. Avoid looking directly at the sun, except when using eye-protecting equipment. Prefer ... | 2025-02-24 |
https://www.lesswrong.com/posts/fqvu58kDyryrAbCay/credit-suisse-collapse-obfuscated-parreaux-thiebaud-and | fqvu58kDyryrAbCay | Credit Suisse collapse obfuscated Parreaux, Thiébaud & Partners scandal | pocock | Is it just coincidence that scandals are swept under the carpet when a bigger scandal dominates the news cycle? The linked article has direct links to the official sources for this story.
15 March 2023, the value of shares in Credit Suisse plunged. Four days later UBS announced the purchase of their competitor Credit ... | 2025-02-24 |
https://www.lesswrong.com/posts/DbT4awLGyBRFbWugh/statistical-challenges-with-making-super-iq-babies | DbT4awLGyBRFbWugh | Statistical Challenges with Making Super IQ babies | jan-christian-refsgaard | This is a critique of How to Make Superbabies on LessWrong.
Disclaimer: I am not a geneticist[1], and I've tried to use as little jargon as possible, so I used the word mutation as a stand-in for SNP (single nucleotide polymorphism, a common type of genetic variation).
Background
The Superbabies article has 3 sections,... | 2025-03-02 |
https://www.lesswrong.com/posts/qkfRNcvWz3GqoPaJk/anthropic-releases-claude-3-7-sonnet-with-extended-thinking | qkfRNcvWz3GqoPaJk | Anthropic releases Claude 3.7 Sonnet with extended thinking mode | LawChan | See also: the research post detailing Claude's extended reasoning abilities and the Claude 3.7 System Card.
About 1.5 hours ago, Anthropic released Claude 3.7 Sonnet, a hybrid reasoning model that interpolates between a normal LM and long chains of thought:
Today, we’re announcing Claude 3.7 Sonnet1, our most intellige... | 2025-02-24 |
https://www.lesswrong.com/posts/5gmALpCetyjkSPEDr/training-ai-to-do-alignment-research-we-don-t-already-know | 5gmALpCetyjkSPEDr | Training AI to do alignment research we don’t already know how to do | joshua-clymer | This post heavily overlaps with “how might we safely pass the buck to AI?” but is written to address a central counter argument raised in the comments, namely “AI will produce sloppy AI alignment research that we don’t know how to evaluate.” I wrote this post in a personal capacity.
The main plan of many AI companies i... | 2025-02-24 |
https://www.lesswrong.com/posts/6aXe9nipTgwK5LxaP/do-safety-relevant-llm-steering-vectors-optimized-on-a | 6aXe9nipTgwK5LxaP | Do safety-relevant LLM steering vectors optimized on a single example generalize? | jacob-dunefsky | This is a linkpost for our recent paper on one-shot LLM steering vectors. The main role of this blogpost, as a complement to the paper, is to provide more context on the relevance of the paper to safety settings in particular, along with some more detailed discussion on the implications of this research that I'm excite... | 2025-02-28 |
https://www.lesswrong.com/posts/wyEuDQksQBiHuz7M5/conference-report-threshold-2030-modeling-ai-economic | wyEuDQksQBiHuz7M5 | Conference Report: Threshold 2030 - Modeling AI Economic Futures | deric-cheng | This is an 8-page comprehensive summary of the results from Threshold 2030: a recent expert conference on economic impacts hosted by Convergence Analysis, Metaculus, and the Future of Life Institute. Please see the linkpost for the full end-to-end report, which is 80 pages of analysis and 100+ pages of raw writing and ... | 2025-02-24 |
https://www.lesswrong.com/posts/u9Kr97di29CkMvjaj/evaluating-what-2026-looks-like-so-far | u9Kr97di29CkMvjaj | Evaluating “What 2026 Looks Like” So Far | jonnyspicer | Summary
In 2021, @Daniel Kokotajlo wrote What 2026 Looks Like, in which he sketched a possible version of each year from 2022 - 2026. In his words:
The goal is to write out a detailed future history (“trajectory”) that is as realistic (to [him]) as [he] can currently manage
Given it’s now 2025, I evaluated all of the p... | 2025-02-24 |
https://www.lesswrong.com/posts/orNiif9yEDzo7HZpS/what-we-can-do-to-prevent-extinction-by-ai | orNiif9yEDzo7HZpS | What We Can Do to Prevent Extinction by AI | Joe Rogero | null | 2025-02-24 |
https://www.lesswrong.com/posts/DPjvL62kskHpp2SZg/dream-truth-and-good | DPjvL62kskHpp2SZg | Dream, Truth, & Good | abramdemski | One way in which I think current AI models are sloppy is that LLMs are trained in a way that messily merges the following "layers":
The "dream machine" layer: LLMs are pre-trained on lots of slop from the internet, which creates an excellent "prior". The "truth machine": LLMs are trained to "reduce hallucinations" in a... | 2025-02-24 |
https://www.lesswrong.com/posts/K98byJbMGzHtkbrHk/we-can-build-compassionate-ai | K98byJbMGzHtkbrHk | We Can Build Compassionate AI | gworley | Compassion is, roughly speaking, caring for others and wanting the best for them.
Claim: We can build AI that are compassionate.
The above definition is insufficiently precise to construct an objective function for an RL training run that won't Goodhart, but it's good enough to argue that compassionate AI is possible.
... | 2025-02-25 |
https://www.lesswrong.com/posts/bc5ohMwAyshdwJkDt/forecasting-frontier-language-model-agent-capabilities | bc5ohMwAyshdwJkDt | Forecasting Frontier Language Model Agent Capabilities | govind-pimpale | This work was done as part of the MATS Program - Summer 2024 Cohort.
Paper: link
Website (with interactive version of Figure 1): link
Executive summary
Figure 1: Low-Elicitation and High-Elicitation forecasts for LM agent performance on SWE-Bench, Cybench, and RE-Bench. Elicitation level refers to performance improveme... | 2025-02-24 |
https://www.lesswrong.com/posts/tzkakoG9tYLbLTvHG/minor-interpretability-exploration-1-grokking-of-modular | tzkakoG9tYLbLTvHG | Minor interpretability exploration #1: Grokking of modular addition, subtraction, multiplication, for different activation functions | Rareș Baron | Epistemic status: small exploration without previous predictions, results low-stakes and likely correct.
Edited to implement feedback by Gurkenglas which has unearthed unseen data. Thank you!
Introduction
As a personal exercise for building research taste and experience in the domain of AI safety and specifically inter... | 2025-02-26 |
https://www.lesswrong.com/posts/rLtf5mzsGhgEA75WM/a-city-within-a-city | rLtf5mzsGhgEA75WM | A City Within a City | declan-molony | My local gym is surrounded on all four sides by tent cities. At first I was nervous to walk there after work, but soon I got used to the tents and their residents.
When I’m passing through on my way to the gym, I feel like I’m traveling from Rome to the Vatican—a city within a city.
They have artisans skilled in variou... | 2025-02-24 |
https://www.lesswrong.com/posts/ifechgnJRtJdduFGC/emergent-misalignment-narrow-finetuning-can-produce-broadly | ifechgnJRtJdduFGC | Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs | jan-betley | This is the abstract and introduction of our new paper. We show that finetuning state-of-the-art LLMs on a narrow task, such as writing vulnerable code, can lead to misaligned behavior in various different contexts. We don't fully understand that phenomenon.
Authors: Jan Betley*, Daniel Tan*, Niels Warncke*, Anna Sztyb... | 2025-02-25 |
https://www.lesswrong.com/posts/eZbhzFnhopdBXqwKD/nationwide-action-workshop-contact-congress-about-ai-safety | eZbhzFnhopdBXqwKD | Nationwide Action Workshop: Contact Congress about AI safety! | BobusChilc | Smarter-than-human Artificial Intelligence could be around the corner, with AI companies racing to build these systems as quickly as possible. Meanwhile, leading researchers have warned that superhuman AI could cause global catastrophe. A 2023 statement signed by thousands of AI experts warned us that “mitigating the r... | 2025-02-24 |
https://www.lesswrong.com/posts/EA9gHyPZ5J7FPoQiL/understanding-agent-preferences | EA9gHyPZ5J7FPoQiL | Understanding Agent Preferences | martinkunev | epistemic status: clearing my own confusion
I'm going to discuss what we mean by preferences of an intelligent agent and try to make things clearer for myself (and hopefully others). I will also argue that the VNM theorem has limited applicability.
What are preferences?
When reasoning about an agent's behavior, preference... | 2025-02-24 |
https://www.lesswrong.com/posts/tpLfqJhxcijf5h23C/grok-grok | tpLfqJhxcijf5h23C | Grok Grok | Zvi | This is a post in two parts.
The first half of the post is about Grok’s capabilities, now that we’ve all had more time to play around with it. Grok is not as smart as one might hope and has other issues, but it is better than I expected and for now has its place in the rotation, especially for when you want its Twitter... | 2025-02-24 |
https://www.lesswrong.com/posts/Twj73Ab2gyoGntayL/if-you-re-not-happy-single-you-won-t-be-happy-immortal | Twj73Ab2gyoGntayL | if you're not happy single, you won't be happy immortal | daijin | even if you're immortal, you have a countable infinity of time, but every choice you make leads to an uncountable infinity of possibility. you may have an infinity of breakfasts, but you still have to choose what to eat for each one. you can choose to dedicate your life to every possible variant of chocolate, but it wi... | 2025-02-24 |
https://www.lesswrong.com/posts/dTaDsnwuYQnYc5xvc/nsfw-the-fuzzy-handcuffs-of-liberation | dTaDsnwuYQnYc5xvc | [NSFW] The Fuzzy Handcuffs of Liberation | lsusr | Picture the following situation:
You're in bed with a hot woman. Your clothes are off and so are hers. You (consensually) tie her wrists and ankles to the bed so you can have your way with her. She tells you to do whatever you want with her. And you think to yourself…
This is exactly like what we do at my Buddhist tem... | 2025-02-24 |
https://www.lesswrong.com/posts/rq4z6q2RgNvea92Qe/dayton-ohio-hpmor-10-year-anniversary-meetup | rq4z6q2RgNvea92Qe | Dayton, Ohio, HPMOR 10 year Anniversary meetup | Lunawarrior | Description:
Join us to celebrate the 10-year anniversary of Harry Potter and the Methods of Rationality reaching its epic conclusion! This is a great opportunity to meet fellow rationalists, HPMOR fans, and LessWrong readers in person.
I've been considering organizing a regular meetup for a while, and this seems like ... | 2025-02-24 |
https://www.lesswrong.com/posts/xP2hJ4MFYWZ9LYQaD/an-alternate-history-of-the-future-2025-2040 | xP2hJ4MFYWZ9LYQaD | An Alternate History of the Future, 2025-2040 | mr-beastly | Intro:
This post is a response to @L Rudolf L's excellent post here:
A History of the Future, 2025-2040
As I mentioned in this comment in that post:
"imho, we need more people to really think deeply about how these things could plausibly play out over the next few years or so. And, actually spending the time to share (... | 2025-02-24 |
https://www.lesswrong.com/posts/6eijeCqqFysc649X5/export-surplusses | 6eijeCqqFysc649X5 | Export Surplusses | lsusr | Trade surpluses are weird. I noticed this when I originally learned about them. Then I forgot this anomaly until…sigh…Eliezer Yudkowsky pointed it out.
Eliezer is, as usual, correct. In this post, I will spend 800+ words explaining what he did in 44.
A trade surplus is what happens when a country exports more than it i... | 2025-02-24 |
https://www.lesswrong.com/posts/HkvnpMwzJBb2t8vxx/ai-alignment-for-mental-health-supports | HkvnpMwzJBb2t8vxx | AI alignment for mental health supports | hiki_t | Initial Draft on 23 February
Goals
As an affiliate member of Cajal.org, I[1] would like to introduce a novel application of AI alignment as a tool for diagnosing, visualising, predicting, preventing and improving mental health symptoms in human users.
Motivations
Globally, nearly a billion people (1 in 8) are diagnose... | 2025-02-24 |
https://www.lesswrong.com/posts/aG9e5tHfHmBnDqrDy/the-gdm-agi-safety-alignment-team-is-hiring-for-applied | aG9e5tHfHmBnDqrDy | The GDM AGI Safety+Alignment Team is Hiring for Applied Interpretability Research | arthur-conmy | TL;DR: The Google DeepMind AGI Safety team is hiring for Applied Interpretability research scientists and engineers. Applied Interpretability is a new subteam we are forming to focus on directly using model internals-based techniques to make models safer in production. Achieving this goal will require doing research on... | 2025-02-24 |
https://www.lesswrong.com/posts/6oF6pRr2FgjTmiHus/topological-data-analysis-and-mechanistic-interpretability | 6oF6pRr2FgjTmiHus | Topological Data Analysis and Mechanistic Interpretability | gunnar-carlsson | This article was written in response to a post on LessWrong from the Apollo Research interpretability team. This post represents our initial attempts at acting on the topological data analysis suggestions.
In this post, we’ll look at some ways to use topological data analysis (TDA) for mechanistic interpretability. We’... | 2025-02-24 |
https://www.lesswrong.com/posts/nSRuZE2S9yA97FJET/poll-on-ai-opinions | nSRuZE2S9yA97FJET | Poll on AI opinions. | niclas-kupper | TL;DR: Take polis poll here.
I made a poll here to gather opinions on AI two years ago using pol.is. You can see my brief write-up here, and a slightly updated report here (a couple more people voted after I wrote up my report).
As this last poll was two years ago the landscape and opinions thereof have changed qui... | 2025-02-23 |
https://www.lesswrong.com/posts/Q6T2pTLvDCnZPrFuv/the-geometry-of-linear-regression-versus-pca | Q6T2pTLvDCnZPrFuv | The Geometry of Linear Regression versus PCA | criticalpoints | In statistics, there are two common ways to "find the best linear approximation to data": linear regression and principal component analysis. However, they are quite different---having distinct assumptions, use cases, and geometric properties. I remained subtly confused about the difference between them until last year... | 2025-02-23 |
https://www.lesswrong.com/posts/GADJFwHzNZKg2Ndti/have-llms-generated-novel-insights | GADJFwHzNZKg2Ndti | Have LLMs Generated Novel Insights? | abramdemski | In a recent post, Cole Wyeth makes a bold claim:
. . . there is one crucial test (yes this is a crux) that LLMs have not passed. They have never done anything important.
They haven't proven any theorems that anyone cares about. They haven't written anything that anyone will want to read in ten years (or even one year).... | 2025-02-23 |
https://www.lesswrong.com/posts/rTveDBBavah4GHKxk/the-case-for-corporal-punishment | rTveDBBavah4GHKxk | The case for corporal punishment | yair-halberstadt | Preceded By: The case for the death penalty
Scott's essay https://www.astralcodexten.com/p/prison-and-crime-much-more-than-you is going to be my main source for most of my claims here, I recommend reading it either before or after.
Prison is very expensive, on the order of $100,000 per person per year. So what are we g... | 2025-02-23 |
https://www.lesswrong.com/posts/wuDXfLz2u8nfFXtCJ/reflections-on-the-state-of-the-race-to-superintelligence | wuDXfLz2u8nfFXtCJ | Reflections on the state of the race to superintelligence, February 2025 | Mitchell_Porter | My model of the situation is that some time last year, the frontier paradigm moved from "scaling up large language models" to "scaling up chain-of-thought models". People are still inventing new architectures, e.g. Google's Titans, or Lecun's energy-based models. But it's conceivable that inference scaling really is th... | 2025-02-23 |
https://www.lesswrong.com/posts/pzDpAimGJNfQE9jHk/list-of-most-interesting-ideas-i-encountered-in-my-life | pzDpAimGJNfQE9jHk | List of most interesting ideas I encountered in my life, ranked | lucien | Bayesian thinking
It literally was no. 1 on my list; I was really happy to find this website, so I will not detail it. Active ignorance/avoidance and selective attention/participation
Instead of thinking, commenting, or saying something is stupid/bad, ignore/block it and just talk about the other thing that is better. Because ju... | 2025-02-23 |
https://www.lesswrong.com/posts/vqnpx8L6TYqzHW2ad/test-of-the-bene-gesserit | vqnpx8L6TYqzHW2ad | Test of the Bene Gesserit | lsusr | Jessica didn't say what she felt, only what her son needed to hear.
"Paul…," Jessica said it with love, "You are going to die. But remember you are a duke's son. Do not dishonor Leto with your passing." Jessica whirled and strode from the room with a swish of her skirt. The door closed with a satisfying thunk, leaving ... | 2025-02-23 |
https://www.lesswrong.com/posts/eAQqyZFeDQtEK6oA4/does-human-mis-alignment-pose-a-significant-and-imminent | eAQqyZFeDQtEK6oA4 | Does human (mis)alignment pose a significant and imminent existential threat? | jr | (This question was born from my comment on a very excellent post, LOVE in a simbox is all you need by @jacob_cannell )
Why am I asking this question?
I am personally very troubled by what I would equate to human misalignment -- our deep divisions, our susceptibility to misinformation and manipulation, our inability to ... | 2025-02-23 |
https://www.lesswrong.com/posts/i3cwHXyHW8MzaCiaq/new-report-multi-agent-risks-from-advanced-ai | i3cwHXyHW8MzaCiaq | New Report: Multi-Agent Risks from Advanced AI | lewis-hammond-1 | null | 2025-02-23 |
https://www.lesswrong.com/posts/d4armqGcbPywR3Ptc/power-lies-trembling-a-three-book-review | d4armqGcbPywR3Ptc | Power Lies Trembling: a three-book review | ricraz | In a previous book review I described exclusive nightclubs as the particle colliders of sociology—places where you can reliably observe extreme forces collide. If so, military coups are the supernovae of sociology. They’re huge, rare, sudden events that, if studied carefully, provide deep insight about what lies undern... | 2025-02-22 |
https://www.lesswrong.com/posts/W8n5w5KFznggYEtmm/zizian-comparisons-connections-in-the-open-source-and-linux | W8n5w5KFznggYEtmm | Zizian comparisons / connections in the open source & Linux communities | pocock | The Zizian concerns are subject to active legal proceedings so I don't want to get into the details of the case. Everybody is entitled to the presumption of innocence and due process before the law.
On the other hand, we have facts that some people were injured and some people are dead.
It seems to be an agreed fact t... | 2025-02-24 |
https://www.lesswrong.com/posts/tLCBJn3NcSNzi5xng/deep-sparse-autoencoders-yield-interpretable-features-too | tLCBJn3NcSNzi5xng | Deep sparse autoencoders yield interpretable features too | armaanabraham | Summary
I sandwich the sparse layer in a sparse autoencoder (SAE) between non-sparse lower-dimensional layers and refer to this as a deep SAE. I find that features from deep SAEs are at least as interpretable as features from standard shallow SAEs. I claim that this is not a tremendously likely result if you assume that ... | 2025-02-23 |
https://www.lesswrong.com/posts/H5wAmmY5X5Dqgdj2H/short-and-long-term-tradeoffs-of-strategic-voting | H5wAmmY5X5Dqgdj2H | Short & long term tradeoffs of strategic voting | geomaturge | Here I want to investigate the effectiveness of strategic voting as an electoral strategy. This is something I have been highly invested in for previous elections, but the upcoming Ontario provincial election will be my first as a rationalist, so I decided to more carefully consider the arguments and scholarship from a... | 2025-02-27 |
https://www.lesswrong.com/posts/sGhkn7kPibYoYabBs/gradual-disempowerment-simplified | sGhkn7kPibYoYabBs | Gradual Disempowerment: Simplified | jorge-velez | This post is a summary of a paper recently posted here that describes, in my opinion, a very possible scenario that modern society will have to face in the near future.
This post is not really intended for the average LW reader, as most of you probably read the original paper. I wrote this post for adults vaguely aware... | 2025-02-22 |
https://www.lesswrong.com/posts/vucxxwdJARR3cqaPc/transformer-dynamics-a-neuro-inspired-approach-to-mechinterp | vucxxwdJARR3cqaPc | Transformer Dynamics: a neuro-inspired approach to MechInterp | guitchounts | How do AI models work? In many ways, we know the answer to this question, because we engineered those models in the first place. But in other, fundamental, ways, we have no idea. Systems with many parts that interact with each other nonlinearly are hard to understand. By “understand” we mean they are hard to predict. A... | 2025-02-22 |
https://www.lesswrong.com/posts/hnKk9jZefKr6DSSpF/unaligned-agi-and-brief-history-of-inequality | hnKk9jZefKr6DSSpF | Unaligned AGI & Brief History of Inequality | ank | (The downvotes, as mentioned in the comments, were in large part caused by a misunderstanding, sadly people do sometimes downvote without reading, even though some articles can prevent a dystopia. This is the result of three years of thinking and modeling hyper‑futuristic and current ethical systems. It's not the first... | 2025-02-22 |
https://www.lesswrong.com/posts/5KaEtLvNh7QadhwJ9/ai-apocalypse-and-the-buddha | 5KaEtLvNh7QadhwJ9 | AI Apocalypse and the Buddha | pchvykov | [Cross-posted from my blog]
TL;DR:
The impending AI apocalypse offers a unique opportunity to understand Buddhist enlightenment by forcing us to confront our mortality and attachments. Rather than fighting against potential extinction, we can follow Buddha's path of letting go of hope and control, finding peace in the ... | 2025-02-22 |
https://www.lesswrong.com/posts/FovaYFgoTsfJj9vEx/forecasting-uncontrolled-spread-of-ai | FovaYFgoTsfJj9vEx | Forecasting Uncontrolled Spread of AI | alvin-anestrand | In my last post, I investigated potential severity and timeline of AI-caused disasters. This post goes into detail about something that could potentially precede such disasters: uncontrolled spread of AI.
While AIs are often released as open source, you might at least hope that the AI developers think twice about relea... | 2025-02-22 |
https://www.lesswrong.com/posts/onDd6mJyadDzM9CZC/seeing-through-the-eyes-of-the-algorithm | onDd6mJyadDzM9CZC | Seeing Through the Eyes of the Algorithm | silentbob | There’s a type of perspective shift that can bring a lot of clarity to the behavior and limitations of algorithms and AIs. This perspective may be called seeing through the eyes of the algorithm (or AI, or LLM). While some may consider it obvious and intuitive, I occasionally encounter people – such as inexperienced pr... | 2025-02-22 |
https://www.lesswrong.com/posts/J9jj2EY6kuBRJ4CXE/proselytizing | J9jj2EY6kuBRJ4CXE | Proselytizing | lsusr | Religions can be divided into proselytizing religions (e.g. Mormons) who are supposed to recruit new members, and non-proselytizing religions (e.g. Orthodox Jews) who are the opposite. Zen Buddhism is a non-proselytizing religion, which makes me a bad Buddhist, because I've dragged three other people to my Zendo so far... | 2025-02-22 |
https://www.lesswrong.com/posts/55zT4R3uWN3KosCes/information-throughput-of-biological-humans-and-frontier | 55zT4R3uWN3KosCes | Information throughput of biological humans and frontier LLMs | benwr | Biological humans appear, across many domains, to have an information throughput of at most about 50 bits per second. Naively multiplying this by the number of humans gives an upper bound of about 500 gigabits per second when considering the information throughput of humanity as a whole.
Current frontier LLMs coll... | 2025-02-22 |
https://www.lesswrong.com/posts/RqecBxg6cfDG5FwCv/build-a-metaculus-forecasting-bot-in-30-minutes-a-practical | RqecBxg6cfDG5FwCv | Build a Metaculus Forecasting Bot in 30 Minutes: A Practical Guide | ChristianWilliams | null | 2025-02-22 |
https://www.lesswrong.com/posts/E5pi98QjjXhtZphux/intelligence-agency-equivalence-mass-energy-equivalence-on | E5pi98QjjXhtZphux | Intelligence–Agency Equivalence ≈ Mass–Energy Equivalence: On Static Nature of Intelligence & Physicalization of Ethics | ank | Imagine a place that grants any wish, but there is no catch, it shows you all the outcomes, too.
(This is the result of three years of thinking and modeling hyper‑futuristic and current ethical systems. It's not the first post in the series, it'll be very confusing and probably understood wrong without reading at least... | 2025-02-22 |
https://www.lesswrong.com/posts/irxuoCTKdufEdskSk/alignment-can-be-the-clean-energy-of-ai | irxuoCTKdufEdskSk | Alignment can be the ‘clean energy’ of AI | cameron-berg | Not all that long ago, the idea of advanced AI in Washington, DC seemed like a nonstarter. Policymakers treated it as weird sci‐fi-esque overreach/just another Big Tech Thing. Yet, in our experience over the last month, recent high-profile developments—most notably, DeepSeek's release of R1 and the $500B Stargate annou... | 2025-02-22 |
https://www.lesswrong.com/posts/oKAFFvaouKKEhbBPm/a-bear-case-my-predictions-regarding-ai-progress | oKAFFvaouKKEhbBPm | A Bear Case: My Predictions Regarding AI Progress | Thane Ruthenis | This isn't really a "timeline", as such – I don't know the timings – but this is my current, fairly optimistic take on where we're heading.
I'm not fully committed to this model yet: I'm still on the lookout for more agents and inference-time scaling later this year. But Deep Research, Claude 3.7, Claude Code, Grok 3, ... | 2025-03-05 |
https://www.lesswrong.com/posts/vLrj4ZNcGCTMJqdXB/intelligence-as-privilege-escalation | vLrj4ZNcGCTMJqdXB | Intelligence as Privilege Escalation | Amyr | Epistemic status: An interesting idea that is probably already in the air.
Inherent power you possess as part of yourself. Granted power is lent or given by other people.
-Patrick Rothfuss, The Wise Man's Fear
Humans are more powerful than other animals because we are smarter - and better coordinated. Both sides of the... | 2025-02-23 |
https://www.lesswrong.com/posts/9w62Pjz5enFkzHW59/levels-of-analysis-for-thinking-about-agency | 9w62Pjz5enFkzHW59 | Levels of analysis for thinking about agency | Amyr | Claims about the mathematical principles of cognition can be interpreted at many levels of analysis. For one thing, there are lots of different possible cognitive processes - what holds of human cognition may not hold of mind-space in general. But the situation is actually a lot worse than this, because even one mind m... | 2025-02-26 |
https://www.lesswrong.com/posts/6BSZkkWNGMTdRi5Ly/metacompilation | 6BSZkkWNGMTdRi5Ly | Metacompilation | donald-hobson | A post that is going to be part of my sequence on rethinking programming languages.
The basic idea
A compiler is a piece of machine code C, that takes as input a text string describing a program p and returns the compiled machine code C(p)
Let Opt be a function that takes in a machine code program and returns another p... | 2025-02-24 |
https://www.lesswrong.com/posts/v9swve6bk5JpdEPKv/linguistic-imperialism-in-ai-enforcing-human-readable-chain | v9swve6bk5JpdEPKv | Linguistic Imperialism in AI: Enforcing Human-Readable Chain-of-Thought | lukas-petersson-1 | Revisiting AI Doom Scenarios
Traditional AI doom scenarios usually assumed AI would inherently come with agency and goals. This seemed likely back when AlphaGo and other reinforcement learning (RL) systems were the most powerful AIs. When large language models (LLMs) finally brought powerful AI capabilities, these scen... | 2025-02-21 |
https://www.lesswrong.com/posts/bTzk32t9aWJwLuNhi/workshop-interpretability-in-llms-using-geometric-and | bTzk32t9aWJwLuNhi | Workshop: Interpretability in LLMs Using Geometric and Statistical Methods | vkarthik095 | Date: Around the last week of May 2025
Location: Science Park, University of Amsterdam (tentative)
Organizers: Jan Pieter van der Schaar (University of Amsterdam) and Karthik Viswanathan (University of Amsterdam)
We are excited to announce a two-day workshop on "Interpretability in LLMs using Geometrical and Statistica... | 2025-02-22 |
https://www.lesswrong.com/posts/ntQYby9G8A85cEeY6/on-openai-s-model-spec-2-0 | ntQYby9G8A85cEeY6 | On OpenAI’s Model Spec 2.0 | Zvi | OpenAI made major revisions to their Model Spec.
It seems very important to get this right, so I’m going into the weeds.
This post thus gets farther into the weeds than most people need to go. I recommend most of you read at most the sections of Part 1 that interest you, and skip Part 2.
I looked at the first version l... | 2025-02-21 |
https://www.lesswrong.com/posts/JNL2bmDXmaG7YnRbF/maisu-minimal-ai-safety-unconference | JNL2bmDXmaG7YnRbF | MAISU - Minimal AI Safety Unconference | Linda Linsefors | MAISU starts with an Opening session on April 18th (Friday), but most of the sessions will happen during April 19th-21th. You’re welcome to join as much or little as you want.
The event is for anyone who wants to help prevent AI-driven catastrophe. Other than that, we’re open to all perspectives. However each individua... | 2025-02-21 |
https://www.lesswrong.com/posts/9f2nFkuv4PrrCyveJ/make-superintelligence-loving | 9f2nFkuv4PrrCyveJ | Make Superintelligence Loving | davey-morse | This essay suggests the possibility that a loving superintelligence outcompetes a selfish superintelligence. Then, it recommends actions for AI labs to increase the chance of this possibility. The reasoning below is inspired primarily by Eliezer Yudkowsky, Joscha Bach, Michael Levin, and Charles Darwin.
Superintelligen... | 2025-02-21 |
https://www.lesswrong.com/posts/ydSw2trfeHvCuNCgB/fun-endless-art-debates-v-morally-charged-art-debates-that | ydSw2trfeHvCuNCgB | Fun, endless art debates v. morally charged art debates that are intrinsically endless | danielechlin | Discussing art is fun. It's a great pastime. There's a number of very simple art criticism questions we will never answer but are often very fun to discuss for specific artists or performers we care about. AI-assisted, some are:
Is this art or just unnecessary shock value?Does skill matter or just the concept?Is it goo... | 2025-02-21 |
https://www.lesswrong.com/posts/6dgCf92YAMFLM655S/the-sorry-state-of-ai-x-risk-advocacy-and-thoughts-on-doing | 6dgCf92YAMFLM655S | The Sorry State of AI X-Risk Advocacy, and Thoughts on Doing Better | Thane Ruthenis | First, let me quote my previous ancient post on the topic:
Effective Strategies for Changing Public Opinion
The titular paper is very relevant here. I'll summarize a few points.
The main two forms of intervention are persuasion and framing.Persuasion is, to wit, an attempt to change someone's set of beliefs, either by ... | 2025-02-21 |
https://www.lesswrong.com/posts/jLEcddwp4RBTpPHHq/takeoff-speeds-update-crunch-time-1 | jLEcddwp4RBTpPHHq | The Takeoff Speeds Model Predicts We May Be Entering Crunch Time | johncrox | Thanks to Ashwin Acharya, David Schneider-Joseph, and Tom Davidson for extensive discussion and suggestions. Thanks to Aidan O’Gara, Alex Lintz, Ben Cottier, James Sanders, Jamie Bernardi, Rory Erlich, and Ryan Greenblatt for feedback.
Part 1: Executive Summary
There's growing sentiment in the AI community that artific... | 2025-02-21 |
https://www.lesswrong.com/posts/ePhuhYgzskJJWJM3C/humans-are-just-self-aware-intelligent-biological-machines | ePhuhYgzskJJWJM3C | Humans are Just Self Aware Intelligent Biological Machines | asksathvik | If you read Robert Sapolsky’s Determined: A Science of Life Without Free Will or even the more subtle Behave, he makes a very clear argument for why there is no free will and that humans are just self aware intelligent biological machines.
Free will in the general context means that you are in complete control of the dec... | 2025-02-21 |
https://www.lesswrong.com/posts/BFSr9fKNTTq8dEo43/biological-humans-collectively-exert-at-most-400-gigabits-s | BFSr9fKNTTq8dEo43 | Biological humans collectively exert at most 400 gigabits/s of control over the world. | benwr | Edit: I now believe that the first paragraph of this post is (at least) not quite right. See this comment for details.
If an agent makes one binary choice per second, no matter how smart it is, there's a sense in which it can (at best) be "narrowing world space" by a factor of two in each second, choosing the "better h... | 2025-02-20 |
https://www.lesswrong.com/posts/5qm3fbipoLP72dcfH/pre-asi-the-case-for-an-enlightened-mind-capital-and-ai | 5qm3fbipoLP72dcfH | Pre-ASI: The case for an enlightened mind, capital, and AI literacy in maximizing the good life | noah-jackson | I. Premises
The goal is to live a life well lived, measured by maximizing well-being (whatever your utility function is).ASI would be able to engineer mental states through advanced pharmaceutical interventions, brain machine interface, etc., that radically improve human experience relative to the greatest pleasures on... | 2025-02-21 |
https://www.lesswrong.com/posts/8nqhzAGaNaGfdC6Kj/the-first-rct-for-glp-1-drugs-and-alcoholism-isn-t-what-we | 8nqhzAGaNaGfdC6Kj | The first RCT for GLP-1 drugs and alcoholism isn't what we hoped | dynomight | GLP-1 drugs are a miracle for diabetes and obesity. There are rumors that they might also be a miracle for addiction to alcohol, drugs, nicotine, and gambling. That would be good. We like miracles. But we just got the first good trial and—despite what you might have heard—it’s not very encouraging.
Semaglutide—aka Wego... | 2025-02-20 |
https://www.lesswrong.com/posts/wYqAkKQh3qTRNfjf7/published-report-pathways-to-short-tai-timelines | wYqAkKQh3qTRNfjf7 | Published report: Pathways to short TAI timelines | zershaaneh-qureshi | null | 2025-02-20 |
https://www.lesswrong.com/posts/p5gBcoQeBsvsMShvT/superintelligent-agents-pose-catastrophic-risks-can | p5gBcoQeBsvsMShvT | Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path? | yoshua-bengio | A new paper by Yoshua Bengio and the Safe Artificial Intelligence For Humanity (SAIFH) team argues that the current push towards building generalist AI agents presents catastrophic risks, creating a need for more caution and an alternative approach. We propose such an approach in the form of Scientist AI, a non-agentic... | 2025-02-24
https://www.lesswrong.com/posts/sgR3BxRvowmecwJNT/neural-scaling-laws-rooted-in-the-data-distribution | sgR3BxRvowmecwJNT | Neural Scaling Laws Rooted in the Data Distribution | Particleman | This is a linkpost for my recent research paper, which presents a theoretical model of power-law neural scaling laws.
Abstract:
Deep neural networks exhibit empirical neural scaling laws, with error decreasing as a power law with increasing model or data size, across a wide variety of architectures, tasks, and datasets... | 2025-02-20 |
https://www.lesswrong.com/posts/qLgJosa6mWCpMCzC9/demonstrating-specification-gaming-in-reasoning-models | qLgJosa6mWCpMCzC9 | Demonstrating specification gaming in reasoning models | Matrice Jacobine | We demonstrate LLM agent specification gaming by instructing models to win against a chess engine. We find reasoning models like o1-preview and DeepSeek R1 will often hack the benchmark by default, while language models like GPT-4o and Claude 3.5 Sonnet need to be told that normal play won’t work to hack. We improve ... | 2025-02-20
https://www.lesswrong.com/posts/bozSPnkCzXBjDpbHj/ai-104-american-state-capacity-on-the-brink | bozSPnkCzXBjDpbHj | AI #104: American State Capacity on the Brink | Zvi | The Trump Administration is on the verge of firing all ‘probationary’ employees in NIST, as they have done in many other places and departments, seemingly purely because they want to find people they can fire. But if you fire all the new employees and recently promoted employees (which is that ‘probationary’ means here... | 2025-02-20 |
https://www.lesswrong.com/posts/2h42FmhWnYGsdMavE/us-ai-safety-institute-will-be-gutted-axios-reports | 2h42FmhWnYGsdMavE | US AI Safety Institute will be 'gutted,' Axios reports | Matrice Jacobine | null | 2025-02-20 |
https://www.lesswrong.com/posts/RX6HWP9GkmphrLW6H/energy-markets-temporal-arbitrage-with-batteries | RX6HWP9GkmphrLW6H | Energy Markets Temporal Arbitrage with Batteries | Nicky | Epistemic Status: I am not an energy expert, and this was done rather briefly. All analysis uses pricing data specific to Ireland, but some general ideas are likely applicable more broadly. Data is true as of March 2025. Where there are uncertainties I try to state them, but there are likely some factual errors.
TL;DR:... | 2025-03-04 |
https://www.lesswrong.com/posts/cuaRNXZMe38HWKLpw/recursive-cognitive-refinement-rcr-a-self-correcting | cuaRNXZMe38HWKLpw | Recursive Cognitive Refinement (RCR): A Self-Correcting Approach for LLM Hallucinations | mxTheo | I’m an independent researcher who has arrived here, at AI safety through an unusual path, outside the standard academic or industry pipelines. Along this journey, I encountered the recurring problem of large language models exhibiting “hallucinations”[^1] - outputs that can be inconsistent or outright fabricated - and ... | 2025-02-22 |
https://www.lesswrong.com/posts/v8JQfaCk4fCrQhrCa/the-dilemma-s-dilemma | v8JQfaCk4fCrQhrCa | The Dilemma’s Dilemma | james-brown | How We Frame Negotiations Matters
This is a follow up to a primer for the Prisoner's Dilemma that questions its application in real world scenarios and raises some potentially negative implications. I invite feedback and criticisms.
We’ve considered how the Prisoner’s Dilemma reveals a number of key concepts in Game Th... | 2025-02-19 |
https://www.lesswrong.com/posts/jCyjyQ8xWKw2sC5rw/why-do-we-have-the-nato-logo | jCyjyQ8xWKw2sC5rw | Why do we have the NATO logo? | avery-liu | Why does LessWrong use that little compass thingy that serves as the North Atlantic Treaty Organization's logo? (except with four additional spikes added diagonally) Was it just a coincidence? | 2025-02-19 |
https://www.lesswrong.com/posts/P8YwCvHoF2FHQoHjF/metaculus-q4-ai-benchmarking-bots-are-closing-the-gap | P8YwCvHoF2FHQoHjF | Metaculus Q4 AI Benchmarking: Bots Are Closing The Gap | hickman-santini | In Q4 we ran the second tournament in the AI Benchmarking Series which aims to assess how the best bots compare to the best humans on real-world forecasting questions, like those found on Metaculus. Over the quarter we had 44 bots compete for $30,000 on 402 questions with a team of ten Pros serving as a human benchmark... | 2025-02-19 |
https://www.lesswrong.com/posts/6GZBwonePrhdqGwWn/several-arguments-against-the-mathematical-universe | 6GZBwonePrhdqGwWn | Several Arguments Against the Mathematical Universe Hypothesis | Vittu Perkele | The legendary Scott Alexander recently posted an article promoting Max Tegmark’s mathematical universe hypothesis as a salvo in favor of atheism in the ongoing theism/atheism debate. While I am skeptical of theism myself, I have a couple problems with the mathematical universe hypothesis that lead me to find it unconvi... | 2025-02-19 |