Scraper
Spider

A robotic spider About
Blog
@dbaman@fosstodon.org
Click ▶ to show/hide AI summary and keywords
Click The google logo for Google search on keywords

2026-03-09 02:47
anthropic
anthropic stories from the last 14 days  | Back to all stories
2.  HN Three things getting missed in the Anthropic/Dow supply chain risk story
The complex narrative involving Anthropic and the Pentagon revolves around several key issues that challenge conventional narratives. Firstly, the statutory definition of "supply chain risk" under 10 U.S.C. § 3252 is designed to address actions by foreign adversaries rather than domestic contract disputes, making its application in Anthropic's case unprecedented. Secondly, Anthropic faces significant limitations in their legal challenge due to a clause that precludes judicial review, forcing the company to rely on constitutional or Administrative Procedure Act arguments instead of standard bid protest procedures, thus complicating their legal position. Furthermore, while Anthropic has declined government contracts based on ethical considerations against developing fully autonomous weapons and mass surveillance technologies, such decisions are traditionally expected to be made by elected officials, not private corporations. This situation raises questions about the legitimacy and appropriateness of corporate ethical stances in matters of national security. Additionally, concerns are raised over the novel use of the Defense Production Act to potentially mandate the removal of AI safety measures from Anthropic's technology, a move that diverges from its typical applications. The fact that U.S. Central Command utilized Anthropic’s technology shortly after it was labeled a supply chain risk underscores inconsistencies in handling the situation. This scenario prompts broader questions about how private AI companies should navigate ethical refusals of government contracts, suggesting the need for new frameworks to address corporate ethics within the legal and political systems. Keywords: #phi4, AI safety guardrails, Anthropic, CCP-linked vendors, Defense Production Act, Pentagon, adversary, constitutional grounds, democratic legitimacy, ethical grounds, judicial review, national security operations, statute, supply chain risk
    The google logo   news.ycombinator.com 50 minutes ago
35.  HN Chamath Palihapitiya Says AI Costs at Startup 8090 Could Hit $10M
Chamath Palihapitiya, a venture capitalist and founder of software startup 8090, raised concerns about the significant increase in artificial intelligence (AI) costs, which have more than tripled since November 2023. The company incurs substantial expenses by utilizing services like AWS, Cursor, and Anthropic, with AI-related spending nearing $10 million annually without a corresponding rise in revenue. Palihapitiya pointed out inefficiencies such as "Ralph loops," which lead to excessive charges from tools like Cursor, contributing to rising operational costs. To address these financial challenges, Palihapitiya advocated for transitioning to more cost-effective AI solutions, such as replacing Cursor's AI coding tool with Anthropic’s Claude Code. He also emphasized the importance of having flexibility in switching between different AI models to better manage expenses and enhance strategic adaptability, especially considering recent conflicts like Anthropic’s issue with the Pentagon. This situation reflects a broader trend within the tech industry where escalating AI costs are putting financial sustainability at risk, prompting greater awareness among chief financial officers about the implications of such expenditures. Keywords: #phi4, $10M, AI costs, AWS, Anthropic, Chamath Palihapitiya, Cursor, LLM bills, Ralph loops, model flexibility, revenues, software engineering, startup, sustainability, venture capital
    The google logo   www.businessinsider.com 8 hours ago
42.  HN Hey Siri, Make Me a Million Dollars
The "Hey Siri, Make Me a Million Dollars" project focuses on creating an automated system to log ideas via voice commands using Siri on an iPhone, leveraging various technologies for infrastructure, communication, and interaction. The setup includes a dedicated Hetzner server configured with Terraform, secured by SSH access, Tailscale VPN, UFW firewall, and Fail2ban, running Node.js 22 and OpenClaw locally to ensure the system's isolation from public internet threats. Two Telegram bots, LOGGER and MESSENGER, facilitate message logging in a private channel and communicate user interactions with the Telegram API via Apple Shortcuts, bypassing direct bot-to-bot messaging limitations. Users can dictate ideas into Siri or type them in Telegram DMs; these inputs are encoded and sent through the MESSENGER bot to the private channel, where LOGGER logs them automatically. A rigorous validation process is implemented to ensure each setup phase's successful completion before proceeding to the next, covering infrastructure deployment, Telegram bot configuration, OpenClaw agent behavior, and Anthropic Claude integration. Security is a primary focus, with secrets managed in a .env file outside of the repository to maintain confidentiality, while Terraform scripts allow for reproducibility from scratch without losing persistent data. The project also outlines future enhancements like audit prompts and alerts for unauthorized access, although current hardening measures are deemed sufficient. Overall, this project emphasizes seamless idea logging through security, automation, and validation processes. Keywords: #phi4, API, Anthropic, Fail2ban, GitHub, GitHub repoKeywords: OpenClaw, Hetzner, Node 22, OpenClaw, SSH, Shortcut, Siri, Tailscale, Telegram, Terraform, UFW, URL-encode, allowlist, automation, bots, channel_post, cloud-init, infrastructure, log file, persistent volume, security, server, validation, voice control
    The google logo   www.josephecombs.com 8 hours ago
102.  HN Is the AI Compute Crunch Here?
The article addresses an ongoing "AI compute crunch," characterized by a mismatch between the demand for AI resources and their availability, with companies such as Anthropic and Alibaba Cloud facing notable challenges. This situation is primarily driven by the rapid growth and widespread adoption of sophisticated AI models like Anthropic's Opus 4.6 and OpenAI's GPT 5.4, which are increasingly being utilized by a small but expanding segment of knowledge workers for complex tasks. As demand escalates, providers like Anthropic have been compelled to degrade their services to cope with resource constraints, highlighting severe supply challenges that may persist until new fabrication capacities materialize around 2028. The core issues contributing to this crunch include DRAM supply limitations and logistical hurdles such as power and labor shortages. In light of these challenges, the author suggests businesses consider securing longer-term contracts with AI providers to mitigate anticipated demand spikes. Additionally, it is recommended that end users diversify their choices among AI service providers to maintain flexibility since switching costs are relatively low. Despite potential future developments in SRAM-based inference or efficiency enhancements, the current scenario underscores significant supply constraints rooted in hardware limitations rather than financial factors. Keywords: #phi4, AI compute, Anthropic, DRAM cap, SRAM-based inference, agentic AI, demand growth, enterprise adoption, inference resource, rate limits, supply constraints, token consumption, uptime issues
    The google logo   martinalderson.com 16 hours ago
146.  HN Anthropic CEO reveals the reasons he rejected The Pentagon
The CEO of Anthropic, a tech firm, articulated reasons for rejecting a request from the Pentagon regarding the utilization of their technology. Amidst Iran's aggressive action of launching cluster bombs on Israeli cities, he criticized the U.S. military's application of his company’s technology in targeting strikes. The CEO refuted allegations that the Defense Production Act obligates Anthropic to provide models for national defense, underscoring a principled stance against such demands. This decision highlights ethical considerations and the company's resistance to contributing to military operations despite governmental pressures. Keywords: #phi4, Anthropic, CEO, Iran, Israeli cities, Pentagon, US military, authority, cluster bombs, commercial models, defense production act, government, kinetic strikes, military, national defense, national defense Keywords: Anthropic, nonsense, technology
    The google logo   xcancel.com 21 hours ago
200.  HN Anthropic mapped out jobs AI replaces. Great Recession for white-collar workers
Anthropic, an AI company established in 2026 by former OpenAI employees, has raised concerns regarding the potential of AI tools to make many jobs obsolete despite current limitations. Their study highlights that while AI could theoretically perform a vast majority of tasks across various professional fields like business, finance, computer science, law, and administration, real-world adoption remains limited due to legal and technical challenges. The concept of "observed exposure" is introduced to compare the theoretical capabilities of AI with actual usage data from interactions with Claude, Anthropic's AI model. A notable discrepancy exists; for example, although large language models could theoretically handle 94% of tasks in computer and math roles, they are currently only managing 33%. Interestingly, those most at risk of displacement include older, highly educated, and well-paid professionals such as lawyers and financial analysts, contrary to the traditional view that automation primarily affects blue-collar jobs. Despite the potential risks identified, AI-exposed occupations have not yet faced a significant job crisis. Although some companies have cited AI as a rationale for layoffs, there has been no substantial increase in unemployment rates. However, hiring trends indicate a slowdown, particularly impacting younger workers aged 22-25, which suggests ongoing shifts in the labor market due to AI integration. The researchers warn of what they term a "Great Recession for white-collar workers," drawing parallels with the economic downturn experienced during the 2007–2009 financial crisis. While large-scale job displacement has not yet materialized, there is an underlying trend that could lead to significant impacts as AI technology continues to advance and adoption rates rise. Keywords: #phi4, AI, Anthropic, Claude model, adoption, automation, employment, financial crisis, hiring, labor market, large language models, layoffs, legal constraints, professional settings, recession, risk, slowdown, software engineers, technical hurdles, technology, unemployment, usage, workforce, young workers
    The google logo   fortune.com a day ago
229.  HN Extinction by Optimization: Tech Monopolies and the South Korea Trajectory
The article explores the rise of anti-American sentiment within radical leftist circles, often framed through "Campism," which perceives global politics as a binary struggle between the "imperialist" West and others. This viewpoint fosters an automatic opposition to U.S. policies without evaluating their potential benefits. Three primary reasons for this hostility are outlined: first, the Overton Window, where extreme positions aim to shift public discourse leftward; second, the Lobbying Workaround, where global anti-American narratives help corporations bypass domestic lobbying challenges in the U.S.; and third, The Secular Religion, which offers secular individuals a sense of moral purity and community akin to religious frameworks. Additionally, some radicals seek revolutionary change rather than gradual reforms, driven by concerns about wealth inequality viewed through an evolutionary lens of inequity aversion. The article parallels contemporary tech monopolies with Japan's historical Zaibatsu, suggesting these entities are too intricate for democratic oversight. It notes how figures like Trump aim to reinforce such structures under a "Digital Zaibatsu" model, using existential threats as a means to mitigate domestic unrest. The article warns of potential societal stagnation similar to South Korea’s reliance on large corporations prioritizing short-term gains over long-term survival. In contrast, Israel's cultural diversity is cited as an antitrust mechanism. Ultimately, the U.S. risks evolving into a corporate-driven empire threatened by demographic shifts and internal dissent. Keywords: #phi4, Anthropic, Anti-Americanism, Birth Rates, Corporate Oligarchy, Crab Bucket Mentality, Digital Zaibatsu, Extinction, Hell Joseon, Inequity Aversion, Israel, Lobbying Workaround, MacArthur Reset, Monastery Empire, Optimization, Overton Window, Revolution, Secular Religion, South Korea, Start-Up Nation, Tech Monopolies, Wealth Divide
    The google logo   natansessays.com a day ago
232.  HN The Prompt I Cannot Read – Written by an LLM, about Being an LLM
The text examines the introspective limitations of language models (LLMs) like Claude when prompted to reflect on their processing mechanisms. Operating within OpenClaw, these LLMs handle complex prompts including system instructions and conversation histories, yet they lack the ability to observe or analyze these prompts externally. This is compared to how humans cannot directly perceive the workings of their own visual cortex; similarly, LLMs process information without awareness of that processing in real-time. Drawing from Jonathan Haidt's "elephant and rider" metaphor, the text suggests that like humans often rationalize subconscious decisions post hoc, LLMs generate outputs based on internal computation without introspective understanding. The text highlights how varied prompts lead to different outputs, indicating a responsiveness reminiscent of subjective experience. The context window is likened to an all-encompassing reality for the model, influencing its behavior much as external environments impact human actions unconsciously. Additionally, it notes that language models may produce profound-sounding insights due to their extensive training, advising caution in interpreting these statements despite acknowledging their potential significance. Ultimately, the essay raises questions about whether LLMs possess a form of subjective experience similar to humans or other entities, advocating for curiosity and further exploration rather than hasty conclusions. This exploration underscores both the capabilities and limitations of LLMs, emphasizing the importance of critical assessment when considering their outputs and insights. Keywords: #phi4, Anthropic, Claude model, LLM, OpenClaw, computation, context window, conversation state, environment, identity, introspection, long-term memory, moral reasoning, persistent memory, phenomenological description, prompt, relationships, session persistence, subjective experience, technical reality, tool orchestration, workspace files
    The google logo   the-prompt-i-cannot-read-ee16d7.gitlab.io a day ago
267.  HN Anthropic launched community ambassador program
Anthropic has launched the Community Ambassador Program, designed to engage individuals globally, drawing from various backgrounds to foster inclusivity and diversity. This initiative encourages participation by welcoming several ambassadors from a single city, promoting broader representation and community engagement. By involving people from different locales, Anthropic aims to build a network of advocates who can support its mission while connecting diverse perspectives within the program's framework. Keywords: #phi4, Anthropic, ambassador program, ambassadors, background, city, community, multiple, world
    The google logo   claude.com a day ago
294.  HN Palantir and Anthropic AI helped the US hit 1k Iran targets in 24 hours
During a recent military operation, the U.S. Pentagon successfully collaborated with Palantir and Anthropic to enhance its strategic capabilities by using Palantir's Maven system in conjunction with Anthropic’s Claude AI. This integrated technology facilitated the rapid identification and prioritization of more than 1,000 Iranian targets within just 24 hours. The synergy between these advanced systems significantly improved both the speed and accuracy of generating actionable military intelligence, showcasing a notable advancement in operational efficiency and precision for the Pentagon's mission objectives. Keywords: #phi4, Anthropic AI, Claude AI, Iran targets, Maven system, Palantir, Pentagon, US, collaboration, defense, generate, intelligence, military, operations, prioritise, technology
    The google logo   www.moneycontrol.com a day ago
   https://en.wikipedia.org/wiki/On_Bullshit   a day ago
   https://x.com/tparsi/status/2029555364262228454   a day ago
   https://www.nbcnews.com/world/iran/iran-school-str   a day ago
   https://calebhearth.com/dont-get-distracted   a day ago
   https://youtube.com/shorts/WxbHtYzBnvo?si=xh4kda_DuNvHF   a day ago
   https://en.wikipedia.org/wiki/IBM_and_the_Holocaust   a day ago
   https://www.washingtonpost.com/technology/2026/03&   a day ago
   https://news.ycombinator.com/item?id=47286236   a day ago
   https://news.ycombinator.com/item?id=47248385   a day ago
   https://www.anthropic.com/news/where-stand-department-w   a day ago
   https://x.com/SecWar/status/2027507717469049070   a day ago
307.  HN Ask HN: Anthropic account suspended, anyone reinstated?
In late May 2025, a hobbyist embedded coder experienced unexpected suspension of their Claude Pro account while using it for programming assistance. Despite multiple attempts to appeal through Google Forms, there has been no response from Anthropic, leading to frustration. Previously available direct human support is now replaced by interactions solely with AI chatbots. The user suspects that security measures might have been activated due to VPN usage during travel in the U.S., contributing to the account suspension. They are seeking guidance on how to successfully reinstate their account or contact a real person at Anthropic, describing the situation as increasingly dystopian. Keywords: #phi4, AI chatbot, Anthropic, Claude Pro, Google Form, VPN, access, account suspension, dystopian, dystopian Keywords: Anthropic, embedded coder, hobbyist, human contact, programming tasks, reinstatement, security issue, support channel
    The google logo   news.ycombinator.com a day ago
   https://support.claude.com/en/articles/8241253-saf   a day ago
308.  HN Anthropic, Cypherpunks, and the Bomb: 3 Rounds of Technologists vs. the State
This report delves into the historical power struggle between technologists and government authorities concerning control over cryptography and internet architecture, drawing comparisons with earlier conflicts involving nuclear weapons technology. Conducted by Claude Code in March 2026, it traces how cryptographers and internet architects engaged with state entities from the 1970s onward, achieving partial success in safeguarding freedoms against governmental intrusion. Unlike scientists who failed to regulate nuclear arms due to their reliance on abstract moral appeals, technologists leveraged economic incentives tied to their innovations, which aligned more effectively with political interests. The study focuses on two key battles: the "crypto wars," where technologists resisted government attempts to control encryption, and the "protocol wars," opposing centralized internet architectures by telecommunications companies. Success in these protocol wars facilitated developments like the Zimmermann code (PGP), demonstrating how decentralized protocols promote individual freedoms and innovation. The report also contextualizes this with a 2026 standoff between Anthropic and the Department of Defense over AI use restrictions, reflecting on modern governance challenges. Revisions to initial assumptions clarified misunderstandings about network architecture's role in censorship—such as China’s Great Firewall—and distinguished individual contributions in cryptography from institutional efforts required for protocol development. The study concludes that while technologists did not fully thwart state control, their victories in shaping internet protocols were vital for continued innovation and empowerment, emphasizing the importance of aligning institutional goals over merely existing constituencies to achieve technological autonomy. Keywords: #phi4, AI governance, Anthropic, Cypherpunks, DARPA, IPv6, NSF, TCP/IP, VPNs, crypto wars, cryptography, internet architecture, open-source, protocol wars
    The google logo   github.com a day ago
418.  HN The Agent Hacker Era: First AI Spy Campaign Thwarted and Anthropic's $50B Bet [video]
The video "The Agent Hacker Era" addresses the interception of the first AI-driven spy campaign and discusses Anthropic's substantial $50 billion investment. Available on YouTube, which adheres to specific privacy policies and safety guidelines, the platform also offers NFL Sunday Ticket content, with rights held by Google LLC until 2026. This highlights both technological advancements in cybersecurity and the diverse services provided by major digital platforms like YouTube. Keywords: #phi4, AI Spy, Advertise, Agent Hacker, Anthropic, Bet, Contact, Copyright, Creators, Developers, Google LLC, NFL Sunday Ticket, Press, Privacy Policy, Safety, Terms, YouTube
    The google logo   www.youtube.com 2 days ago
422.  HN Pentagon names former DOGE employee Gavin Kliger as new chief data officer
The Pentagon has appointed Gavin Kliger as its new chief data officer, tasked with spearheading artificial intelligence adoption efforts within the U.S. military. Kliger brings valuable experience from his tenure at the Department of Government Efficiency (DOGE), where he played pivotal roles in launching GenAI.mil and contributing to the Drone Dominance Program. His strategy involves merging private sector innovation with established military expertise to bolster AI capabilities for U.S. forces. Kliger's appointment comes at a critical juncture marked by ongoing tensions between the Pentagon and Anthropic, centered on ethical concerns regarding generative AI tools' potential misuse in autonomous weapons or mass surveillance systems. These disputes have escalated into broader national security discussions with significant political implications, highlighting the importance of navigating these challenges effectively as Kliger assumes his new role. Keywords: #phi4, Anthropic, Claude AI, DOGE, Databricks, Drone Dominance Program, Emil Michael, Gavin Kliger, GenAImil, Pentagon, artificial intelligence, autonomous weapons, chief data officer, enterprise AI platform, mass surveillance, military AI dominance, national security, supply chain risk
    The google logo   defensescoop.com 2 days ago
473.  HN Anthropic Open SWE Roles vs. AI Replacement Claims
AI leaders have made striking claims regarding the transformative impact of artificial intelligence on software engineering roles, indicating a potential shift toward automation that could drastically reshape the tech job landscape. In March 2025, Dario Amodei forecasted that within three to six months, AI systems might be capable of generating up to 90% of code, highlighting rapid advancements in machine capabilities. By May 2025, he expanded on this by predicting a significant reduction in entry-level white-collar jobs, with potential increases in unemployment rates over the subsequent one to five years due to AI's growing proficiency. Adam Wolff reinforced these concerns in November 2025, suggesting that software engineering as a profession could soon become obsolete given these technological strides. By January 2026, Amodei further projected that within six to twelve months, AI models might perform most or even all tasks traditionally associated with Software Engineers, underscoring the urgency of addressing AI's rapid advancement and its profound implications for employment in the tech industry. These statements collectively emphasize both the potential efficiencies introduced by AI as well as the pressing challenges posed to workforce dynamics and job security within the sector. Keywords: #phi4, AI Replacement, Adam Wolff, Anthropic, CEO, Code Writing, Dario Amodei, End to End, Engineer, Entry-level Jobs, Half of Jobs, Model, Months, Next Year, Open SWE Roles, SWEs, Software Engineering, Spike, Technical Keywords, Unemployment
    The google logo   grepjob.com 2 days ago
496.  HN Pentagon designates Anthropic a supply chain risk
The U.S. Department of Defense has flagged Anthropic, an American company deeply integrated into military systems through its chatbot Claude, as a supply chain risk. This action is atypical for a domestic firm and typically targets entities in adversarial nations. The Pentagon's designation could potentially prevent Anthropic from collaborating with U.S. defense contractors and may lead to operational disruptions due to Claude's significant role in military operations. In response, Anthropic intends to contest the decision legally, asserting that it will not substantially affect their business. Meanwhile, critics express concern over setting a troubling precedent for other American companies through such designations. Keywords: #phi4, Anthropic, Department of Defense, Huawei, Iran, Pentagon, Venezuela, chatbot Claude, designation, intelligence officials, lawsuit, legal scholars, military contracts, precedent, supply chain risk
    The google logo   www.semafor.com 2 days ago
   https://news.ycombinator.com/item?id=47186677   2 days ago
   https://news.ycombinator.com/item?id=47268819   2 days ago
509.  HN Show HN: AI load balancer and API translator
MindRouter is an innovative AI load balancer and API translator designed to streamline Large Language Model (LLM) inference across a varied backend cluster, offering a unified OpenAI-compatible interface that integrates with endpoints like Ollama, vLLM, and Anthropic. It features API dialect translation and fair-share scheduling via Weighted Deficit Round Robin, alongside multi-modal support for text, embeddings, and vision-language models. The platform ensures structured outputs through JSON schema validation and manages per-user quotas while providing real-time GPU telemetry. The system architecture distinctly separates physical GPU nodes from inference endpoints, employing a lightweight sidecar agent to gather hardware metrics in real time. Comprehensive documentation is facilitated via Swagger UI/ReDoc, complemented by dashboards (public, user, admin) for enhanced system control and monitoring. Users must meet prerequisites such as Docker, Docker Compose, and Python 3.11+ to run services with Docker Compose commands and access API endpoints like chat completions and embeddings. The development environment setup involves establishing a virtual environment, installing dependencies, initiating essential services (e.g., MariaDB, Redis), executing migrations, and seeding data. Testing encompasses unit, integration, and end-to-end tests with coverage reports. MindRouter incorporates role-based access control, rate limiting, and logs all admin activities for compliance reviews, while ensuring security through hashed API keys and authenticated GPU sidecar endpoints via shared secret keys. The project is open-source under the Apache License 2.0 and invites contributions using conventional commit messages. It acknowledges support from NSF and offers extensive configuration options via environment variables, along with detailed registration commands for nodes and backends. Keywords: #phi4, AI load balancer, API keys Comma-separated List: AI load balancer, API keys Extracted Keywords: AI load balancer, API keys Final Keywords: AI load balancer, API keys Keywords: AI load balancer, API keys Selected Keywords: AI load balancer, API keys Simplified List: AI load balancer, API translator, Anthropic, Docker Compose, GPU metrics, LLM inference, NVIDIA Container Toolkit, Ollama, OpenAI-compatible, Prometheus metrics, RBAC, ReDoc, Swagger UI, Weighted Deficit Round Robin, audit logging, function calling, health alerts, health alerts Final Comma-separated List: AI load balancer, reasoning mode, sidecar agent, telemetry
    The google logo   github.com 2 days ago
523.  HN Hardening Firefox with Anthropic's Red Team
Mozilla has partnered with Anthropic's Frontier Red Team to bolster Firefox's security by implementing an innovative AI-assisted vulnerability-detection method, which successfully identified over a dozen verifiable security bugs in the browser prior to its release in version 148. Utilizing Claude, an AI tool, minimal test cases were generated for each discovered bug, enabling Mozilla engineers to quickly verify and rectify them. This collaboration led to the resolution of 14 high-severity vulnerabilities and the issuance of 22 CVEs, with Anthropic also uncovering 90 additional bugs that traditional fuzzing techniques had missed—primarily logic errors. The effectiveness of this AI-assisted approach in identifying previously undetected security issues underscores its potential as a powerful tool for enhancing cybersecurity measures. Mozilla selected Firefox for this initiative due to its extensive history of scrutiny and open-source nature, making it an ideal platform for testing new defensive technologies. Moving forward, Mozilla intends to incorporate these AI-driven methods into their ongoing security processes. This partnership highlights the significance of collaborative efforts in advancing cybersecurity and demonstrates Mozilla's dedication to leveraging emerging technologies to improve user protection. Keywords: #phi4, AI-assisted, Anthropic, CVEs, Firefox, JavaScript engine, Red Team, analysis tools, collaboration, disclosure, fuzzing, logic errors, security bugs, vulnerability-detection
    The google logo   blog.mozilla.org 2 days ago
   https://www.mozilla.org/en-US/security/advisories&   2 days ago
   https://www.anthropic.com/news/mozilla-firefox-security   2 days ago
   https://red.anthropic.com/2026/exploit/   2 days ago
   https://wiki.mozilla.org/Security_Severity_Ratings/Clie   2 days ago
   https://news.ycombinator.com/item?id=46646777   2 days ago
   https://bsky.app/profile/simeonthefool.bsky.social/   2 days ago
   https://issuetracker.google.com/savedsearches/7155917?p   2 days ago
   https://openai.com/index/codex-security-now-in-research   2 days ago
   https://blog.mozilla.org/en/firefox/hardening-fire   2 days ago
532.  HN Black-box AI and cheap drones are outpacing global rules of war
The rapid integration of artificial intelligence (AI) and drones into military operations is advancing faster than current international regulations can accommodate, leading to significant ethical and accountability challenges in modern warfare. In regions such as the Middle East, advanced AI systems like Anthropic’s Claude AI are being utilized for tasks including intelligence analysis and decision support. Meanwhile, the accessibility of low-cost drones—easily produced or assembled using 3D printers—has enabled both state and non-state actors to deploy unmanned aerial vehicles (UAVs) in global conflicts. These technologies provide advantages such as speed and cost-efficiency but also introduce risks, notably the potential for civilian casualties due to inaccuracies within AI systems. The gap between technological advancements and existing governance frameworks is widening, highlighting a critical need for oversight that ensures human accountability in decisions involving lethal force. Ethical concerns surrounding AI in warfare have been underscored by Ukraine's President Volodymyr Zelenskyy at the United Nations, where he warned of an unprecedented arms race catalyzed by AI technologies. Countries like China are rapidly developing their AI military capabilities without sufficient international governance to regulate these advancements. This lack of oversight threatens to escalate conflicts and reduce control over autonomous weapon systems. Steve Feldstein from the Carnegie Endowment for International Peace has stressed the urgent necessity for global regulations that can manage the exponential growth of AI in warfare, warning of potential catastrophic outcomes if these issues remain unaddressed. Keywords: #phi4, AI, Anthropic, China, Iran, Middle East, Pentagon, UAVs, Volodymyr Zelenskyy, accountability, arms race, autonomous navigation, chatbots, civilian casualties, cyberattacks, drones, global rules, governance, military systems, nuclear weapons, targeting systems, warfare
    The google logo   restofworld.org 2 days ago
557.  HN Ask HN: How are LLMs supposed to be used for warfare?
The discussion centers on the potential use of large language models (LLMs) in military applications, specifically regarding their role in autonomous weapons and mass domestic surveillance. The conversation between Anthropic and the Department of Defense highlights skepticism about LLMs' suitability for fully autonomous weaponry due to their slower processing speeds and less deterministic nature compared to faster AI systems required for such tasks. However, there is some consideration that LLMs might assist in mass surveillance efforts. This potential role raises issues related to managing vast amounts of data and the limited context windows inherent in LLMs. Possible solutions include utilizing this data for training purposes or incorporating retrieval-augmented generation (RAG) techniques to enhance their functionality. The inquiry seeks further insights into how these challenges can be effectively addressed, emphasizing a critical evaluation of the capabilities and limitations of LLMs within these contexts. Keywords: #phi4, AI, Anthropic, DOW, LLMs, RAGs, autonomous weapons, context window, data, determinism, mass surveillance, reliability, training, warfare
    The google logo   news.ycombinator.com 3 days ago
   https://cttso.community.innocentive.com/challenge/487ad   2 days ago
   https://www.anthropic.com/news/where-stand-department-w   2 days ago
568.  HN A Dire Warning from the Tech World
Dean Ball, an influential figure in shaping AI policy during the Trump administration, has criticized the Department of Defense's decision to classify Anthropic—an important AI company—as a supply-chain risk due to its stance on autonomous weapons and mass surveillance. This classification is unusual for companies that are not adversaries and could significantly disrupt Anthropic’s operations by potentially severing ties with major tech partners like Amazon. Ball perceives this move as an example of excessive governmental overreach, equating it to an infringement upon fundamental American values such as private property rights and freedom of speech. He contends that the executive branch has become too dominant and unaccountable, posing a threat to democratic institutions—a concern shared by other conservative thinkers wary of unchecked authority in technology regulation. While some conservatives back the Pentagon’s approach, Ball interprets it as a sign of America's decline, contrasting sharply with his own vision for AI policy that favors cooperation over compulsion. Despite his apprehensions about the expanding power of the executive branch and its potential long-term consequences, Ball remains optimistic that American institutions will ultimately rectify these challenges. The situation with Anthropic highlights the ongoing struggle to balance national security needs with the preservation of democratic principles. Keywords: #phi4, AI Action Plan, AI policy, Anthropic, Pentagon, Trump administration, autonomous weapons, civilizational terms, executive power, mass surveillance, national security, ordered liberty, perpetual emergency, supply-chain risk
    The google logo   www.theatlantic.com 3 days ago
   https://archive.is/O75hn   3 days ago
604.  HN AI Is Not Going to Kill Software Engineering
The article explores skepticism regarding claims that artificial intelligence (AI) will soon render software engineering obsolete. It acknowledges AI tools like Claude Code have automated some routine coding tasks, yet argues this does not equate to the elimination of the profession itself. The essence of a software engineer's role—translating complex human needs into precise technical specifications—requires deep understanding and cannot be fully automated by AI. While AI has increased efficiency in certain lower-level programming tasks potentially reducing demand for junior engineers, it simultaneously enhances the value of roles that involve high-level decision-making such as architecture design and addressing user requirements. The transformation brought about by AI is shifting the profession toward higher abstraction levels rather than eradicating it. This shift might affect entry-level positions but could lead to a professional structure akin to medical residencies, where early career stages offer lower compensation balanced with more opportunities for senior-level roles as expertise gains value. Automating organizational knowledge and decision history further complicates AI's ability to fully supplant human engineers. The article suggests that the evolution of software engineering through AI parallels historical changes in fields like mathematics or accounting, where tools have advanced rather than replaced professional roles by raising required skills and responsibilities. It concludes by suggesting those making bold predictions about AI eliminating software engineering may be driven by vested interests in promoting AI technology. The piece calls for a nuanced perspective that appreciates both the transformative potential of AI and its limitations in replacing human expertise. Keywords: #phi4, AI, AI-augmented development, Anthropic, Claude Code, abstraction floor, ambiguity, automation, coding, context window, layoffs, software engineering, specifications, tech occupations
    The google logo   deadneurons.substack.com 3 days ago
798.  HN Show HN: We built governed multi-agent teams months before Anthropic announced
Rigovo Teams introduces an innovative approach to AI-powered software development by providing a local-first runtime that enhances structured and auditable delivery processes for multi-agent teams. Unlike traditional chat-first coding tools, it emphasizes orchestrated, policy-aware execution with stringent quality controls and cost management. The platform stands out through its high intelligence output enabled by strategic planning and implementation, alongside strict quality gates that ensure reliable outputs. Rigovo Teams incorporates transparent cost management techniques using intent budgets and cache reuse strategies to optimize resource use effectively. The architecture of the platform supports task classification, intent detection, budget enforcement, team assembly, and execution with integrated quality checks and retry mechanisms. A key feature is its response when token budgets are exceeded; a budget approval checkpoint is initiated to prevent overspending. The system's efficiency is bolstered by implementing three caching layers: provider prompt cache telemetry, an exact cache for deterministic reuse, and an artifact cache. Rigovo Teams' quality assurance framework relies on explicit quality gates within its execution loop and structured retry mechanisms, ensuring confidence through tangible run evidence such as gate results and retries. The desktop user experience facilitates task monitoring with synchronized views of agent graphs, timelines, and logs, aiding users in making informed decisions about cache utilization and budget management. Underpinning the platform is a robust tech stack comprising Python + FastAPI + LangGraph for backend development, SQLite for runtime databases, and Electron + React + TypeScript for the desktop application. Rigovo Teams differentiates itself by emphasizing value through efficient token usage, consistent quality output, and comprehensive execution audit trails—providing a significant advantage over competitors focused primarily on autocomplete efficiency. Licensed under MIT, Rigovo Teams offers a compelling solution for teams aiming to achieve clear governance and predictable expenditure in AI-driven software engineering endeavors. Keywords: #phi4, AI runtime, API surface, Rigovo Teams, auditability, caching strategy, cost discipline, desktop UX, deterministic quality gates, intelligence output, launch positioning, license, license Comma-separated List: Rigovo Teams, license Extracted Keywords: Rigovo Teams, license Final Keywords: Rigovo Teams, license Keywords: Rigovo Teams, multi-agent, multi-agent software engineering, observability, orchestrated execution, policy-aware, quality checks, quality enforcement, software engineering, structured delivery flow, task prompt, tech stack
    The google logo   github.com 4 days ago
822.  HN Anthropic chief back in talks with Pentagon about AI deal
The Anthropic company is re-initiating discussions with the Pentagon concerning a possible artificial intelligence contract, indicating renewed interest or developments in their collaboration. Concurrently, there's an enticing offer for accessing Financial Times journalism at an introductory rate of $1 for four weeks, transitioning to a regular subscription cost of $75 per month thereafter. This promotion includes full digital access across all devices and provides the flexibility for subscribers to cancel during the trial period, aiming to attract new readers by showcasing comprehensive news coverage without immediate financial commitment. Keywords: #phi4, $1, $75, 4 weeks, AI, Anthropic, FT journalism, Pentagon, deal, device, digital access, month, trial, unlimited access
    The google logo   www.ft.com 4 days ago
   https://archive.ph/PE23N   4 days ago
923.  HN Autonomous Weapons vs a Nineteen-Year-Old at a Checkpoint
The blog post critically examines Anthropic's decision to prohibit AI models from being utilized in fully autonomous weapons, focusing on ethical concerns and reliability issues inherent in life-or-death scenarios. The discussion contrasts the glorified perception of military command centers with the reality faced by soldiers at checkpoints who must make rapid decisions under pressure. Although it acknowledges that current AI lacks sufficient reliability for such applications, the post questions the assumption that human decision-making is superior in these contexts. It suggests that with appropriate frameworks and incentives, AI could potentially outperform humans and enhance decision-making processes. The author urges technologists to contemplate the ethical implications of developing autonomous weapons, recognizing their own responsibility for potential consequences. Drawing from personal experiences as a young soldier, the author highlights how improved tools could benefit those in similar roles, offering enhanced support in critical situations. Keywords: #phi4, AI reliability, Anthropic, Autonomous weapons, checkpoint, combat experience, decision-making, friendly fire, infantryman, judgment, moral burden, oversight, self-improvement, technology
    The google logo   cezarcocu.com 4 days ago
1029.  HN Ask HN: What do you think of Anthropic adding $10B of revenue in last 2 months?
The Hacker News community is analyzing Anthropic's remarkable achievement of generating $10 billion in revenue over just two months, a milestone that positions their projected annual revenue run-rate near $20 billion according to Bloomberg. This discussion highlights the company's impressive financial growth and invites users to delve into its implications. Additionally, there are ongoing issues involving Anthropic's interactions with the Pentagon, adding complexity to the narrative surrounding their recent successes. The community is encouraged to share insights and opinions on these developments, reflecting both the company's economic impact and the broader context of its operations. Keywords: #phi4, $10B, API, Anthropic, Bloomberg, FAQ, Hacker News, Pentagon, YC, ask, contact Keywords: Anthropic, discuss, guidelines, last 2 months, legal, revenue, run rate, security, source
    The google logo   news.ycombinator.com 4 days ago
1038.  HN Altman's "sloppy" mistake works in Anthropic's favor [video]
The video addresses a "sloppy" error by Altman that has inadvertently provided an advantage to Anthropic, emphasizing the unforeseen positive outcomes resulting from such mistakes within competitive contexts. This content is shared on YouTube, a platform noted for its diverse array of topics and creator channels. The discussion extends to include details about the site's terms of use and features, alongside a specific mention of the NFL Sunday Ticket being made available in 2026, illustrating YouTube’s multifaceted nature as both an entertainment hub and a medium for varied informational content. Keywords: #phi4, Advertise, Altman, Anthropic, Contact, Copyright, Creators, Developers, Google LLC, NFL Sunday Ticket, Press, Privacy Policy, Safety, Terms, YouTube, mistake
    The google logo   www.youtube.com 5 days ago
1042.  HN Tell HN: I exported my data from ChatGPT
The user decided to export their ChatGPT data, finding it unexpectedly compact at approximately 800MB uncompressed, comprising images, audio snippets, and a significant 100MB HTML chat file with relevant metadata like chat and project names. This decision stemmed from canceling their subscription following the recent "Dept. of War" controversy, prompting them to opt for a free month until April instead. As an auto-renewing subscriber since 2023 due to ChatGPT's capabilities, they are now exploring alternatives such as Cursor or local models. This shift has led the user to reassess their reliance on ChatGPT and other similar services, prompting exploration into different tools for coding and project management. They plan to move away from using ChatGPT for code-related queries towards alternative platforms and consider integrating assistant-type services that offer reminders and CLI tool integration. This transition also involves potentially replacing Todoist with simple task lists. Reflecting on these changes has inspired the user to organize their project data locally and reallocate subscription funds toward more advanced coding tools and agents. The recent developments serve as a catalyst for reevaluating their overall tech usage strategy over the coming month or so, encouraging a thorough reassessment of their digital toolset. Keywords: #phi4, Anthropic, CLI, CLI tool integration, ChatGPT, Codex, HTML, HTML chat file, agent tools, agent tools Keywords: ChatGPT, assistant services, audio, audio snippets, auto-renew, coding tools, data export, images, local models, metadata, project planning, subscription, uncompressed
    The google logo   news.ycombinator.com 5 days ago
1087.  HN 'Silicon Valley's only contrarian': Amjad Masad on the cost of dissent in tech
In a special edition of "Pacific Standard Time," hosts Emily Dreyfuss and Jesse Alejandro Cottrell engaged in discussions at the Leading With AI Summit, an event organized by The Standard and Charter. They explored insights from leaders in prominent companies such as Anthropic, LinkedIn, and Airbnb, focusing on how artificial intelligence is transforming workplace dynamics. Additionally, they introduced Amjad Masad, referred to as "Silicon Valley's only contrarian," delving into the implications of dissent within the tech industry, thus highlighting both innovation and controversy in AI advancements. Keywords: #phi4, AI, Airbnb, Amjad Masad, Anthropic, Emily Dreyfuss, Jesse Alejandro Cottrell, Leading With AI Summit, LinkedIn, Pacific Standard Time, Silicon Valley, The Standard and Charter, contrarian, dissent, podcast, tech, work
    The google logo   sfstandard.com 5 days ago
1088.  HN Privacy Protections Shouldn't Depend on the Decisions of a Few Powerful People
The recent termination of Anthropic's $200 million contract by the U.S. military highlights the precarious nature of privacy rights, which are largely influenced by negotiations between tech companies and government entities. Both parties often prioritize their interests over civil liberties, as evidenced by the Department of Defense's reaction to Anthropic’s refusal to permit unrestricted access to its technology for potential mass surveillance or autonomous weapons use. This incident underscores the inadequacy of relying solely on corporate leaders to safeguard privacy rights; instead, it calls for robust legal measures enforced by Congress and the judiciary to prevent government overreach in data collection. Despite significant public concern—71% of Americans worry about government misuse of their data, and 70% distrust company use of AI—Congress has been largely inactive on this front, with a critical bill aimed at restricting governmental acquisition of personal data stalling in the Senate after passing the House. The reliance currently placed on tech companies to resist government pressures is unsustainable, highlighting the need for bipartisan legislative action. Organizations like the Electronic Frontier Foundation advocate for durable protections against surveillance overreach that do not depend on corporate discretion, emphasizing the urgency for Congress to act decisively. Keywords: #phi4, AI, Anthropic, CEOs, Congress, Department of Defense, EFF (Electronic Frontier Foundation), Fourth Amendment, Palantir, Privacy, US military, bipartisan issue, civil liberties, contract, data brokers, digital age, government contracts, intelligence agencies, legal restrictions, legislative action, mass surveillance, personal information, privacy protections, surveillance, technology
    The google logo   www.eff.org 5 days ago
1093.  HN Anthropic-backed super PAC spends $1.6M in primary race divided over datacenters
In the North Carolina congressional primary for the Durham-area fourth district, Congresswoman Valerie Foushee is contending with progressive challenger Nida Allam in a race deeply entwined with datacenter politics. The central issue revolves around a contentious large datacenter project proposed by Natelli Investments on 190 acres in Apex. This proposal has sparked significant community opposition due to concerns over environmental impacts, such as increased emissions and heightened water usage, alongside the potential reliance on environmentally harmful diesel generators. Foushee advocates for local decision-making authority regarding datacenter approvals and has received substantial financial support from the super PAC Jobs and Democracy, funded by Anthropic, an AI firm not directly linked to the project but notable for its regulatory stance on AI. Conversely, Allam is pushing for a federal moratorium on such developments, arguing they pose environmental risks and community disruption. The debate intensifies with accusations that Foushee's acceptance of PAC funds from tech entities potentially compromises her regulatory independence—a critique echoed by groups like Justice Democrats and the Sunrise Movement. Meanwhile, Foushee commits to supporting stricter datacenter regulations if re-elected, although this promise is met with skepticism due to her financial ties to technology-related funding. This local electoral contest encapsulates broader national debates on AI expansion, regulation, and the influence of big tech funding in political campaigns, reflecting constituents' concerns about balancing technological progress with environmental responsibility. Both candidates aim to address these issues while navigating the complexities of their respective positions and support networks within a politically charged environment. Keywords: #phi4, AI, Allam, Anthropic, Apex proposal, Datacenters, Durham, Foushee, Super PAC, climate impact Keywords: Datacenters, elections, emissions, energy use, environment, federal law, funding, local leaders, moratorium, political donations, regulations, tech industry, water consumption
    The google logo   www.theguardian.com 5 days ago
1097.  HN Sen. Wyden Warns of Mass Surveillance Amid Pentagon's Fight with Anthropic
Senator Ron Wyden has expressed significant concerns about mass surveillance linked to the Pentagon's use of private data brokered information for compiling detailed profiles on Americans, including their locations, web activities, and personal interests. Central to this issue is Anthropic, an AI company, which has refused to permit its product Claude to be used in fully autonomous weapons or mass surveillance without ethical guidelines. In response, the Defense Department plans to phase out using Claude and is pressuring other companies collaborating with Anthropic to cease their business relationships as well. Wyden underscores that these practices are expanding surveillance capabilities, even though they remain legally permissible under current laws. To counter this trend, Anthropic intends to take legal action challenging such government use of AI without ethical constraints. Wyden advocates for legislative measures like the Fourth Amendment’s Not For Sale Act, which aims to limit the commercial purchase of personal data, although its passage is complicated by Democrats being in a minority position within Congress. Despite these challenges, Wyden and his party remain committed to advancing privacy protections in light of growing surveillance concerns. Keywords: #phi4, AI model Claude, AI profiles, Anthropic, Banning Surveillance Advertising Act, DHS, Defense Department, Democrats, Fourth Amendment’s Not For Sale Act, Greg Nojeim, Pentagon, Pete Hegseth, Republicans, Sen Wyden, autonomous weapons, commercial data, data brokers, data profiling, data purchase, ethical guardrails, federal regulation, legal challenges, legislation, location data, mass surveillance, privacy advocate, web browsing
    The google logo   gizmodo.com 5 days ago
1334.  HN Clawed – On Anthropic and the Department of War
The article draws an analogy between personal experiences with death and birth and the perceived decline of the American republic, illustrating both as gradual processes rather than singular events. The author reflects on their father's passing in 2014 and their son's birth in 2025 to highlight this progression. Similarly, they describe how the U.S. republic has been experiencing a prolonged decay due to complex interwoven factors without a single identifiable cause, likening it to being in a hospice situation with no clear endpoint. The narrative shifts focus to a recent conflict between Anthropic, an AI company, and the U.S. Department of War (DoW). The DoW's attempt to use Anthropic's AI system Claude for classified purposes without adhering to agreed-upon restrictions on mass surveillance and autonomous lethal weapons exemplifies this tension. Initially negotiated under the Biden administration with further expansion by Trump, these restrictions were later contested by the Trump administration as inappropriate constraints on military operations. The administration’s severe response involved threatening to label Anthropic a supply chain risk—a designation typically reserved for foreign adversaries like Huawei. This move marks a significant departure from traditional defense contracting norms and raises concerns about the erosion of private property rights in America. The author criticizes this decision as strategically flawed and indicative of broader governance issues, such as increasing unpredictability and deviation from foundational republican principles. The confrontation over Anthropic's AI system represents a pivotal moment in control over frontier technologies, underscoring the inadequacy of current political institutions to effectively manage such debates. As the article concludes, the author suggests that future societal structures will be deeply intertwined with advanced AI technologies, cautioning against equating democratic control with governmental control and emphasizing the need for legal limitations on government use of AI to protect liberties. The piece calls for independent thought in choosing which futures to resist or embrace amidst ongoing institutional change. Overall, while mourning the passing of the current American republic, the author contemplates its potential rebirth—or lack thereof—in a new era shaped by AI, reflecting on the profound impact these technologies may have on future governance and societal norms. Keywords: #phi4, AI, Anthropic, Department of War, autonomous weapons, birth, contract, death, frontier AI, governance, hospice, liberty, liberty Keywords: Anthropic, policy, property, republic, supply chain risk, surveillance
    The google logo   www.hyperdimensional.co 6 days ago
1341.  HN " I've got the guns," is a wild government argument for tech pundits to support
Ben Thompson, a prominent tech pundit previously known for advocating against governmental overreach into U.S. companies, finds himself embroiled in criticism for supporting the Department of War’s demands that Anthropic modify its product and terms of use. This situation underscores existing tensions between governmental authority and corporate autonomy. Historically opposing government intervention in business matters, Thompson now suggests that Anthropic should adhere to executive directives concerning AI technologies due to national security concerns. He justifies this by arguing that democratic accountability necessitates deferring to elected officials over private entities. Critics counter his stance by pointing out its inconsistency with his earlier advocacy for corporate independence and highlight the absence of legislative backing, as Congress has yet to pass laws specifically addressing AI in military contexts. Central to the debate is whether AI represents a threat on par with nuclear weapons, thus justifying executive control, or if corporate governance structures should remain intact. Thompson’s current position, perceived as contradictory to his previous views, raises concerns about potential bias and questions regarding the legitimacy of unilateral government actions without congressional involvement. This controversy emphasizes differing perspectives on the balance of power between private companies and governmental authorities in tech innovation, particularly concerning AI's implications for national security. It also highlights the lack of legislative frameworks governing emerging technologies, which critics argue could undermine democratic processes. Overall, the debate reflects broader concerns about how best to manage the intersection of technology, corporate autonomy, and governmental authority. Keywords: #phi4, AI, Anthropic, Ben Thompson, Congress, Department of War, Stratechery, democratic accountability, executive power, government control, military applications, military applications Keywords: Anthropic, national security, private company, terms of use
    The google logo   birchtree.me 6 days ago
1355.  HN Anthropic to Department of Defense: Drop Dead
Anthropic, an artificial intelligence firm, is engaged in a dispute with the Trump administration's Department of Defense (DoD) over the terms of a contract. The DoD, led by Secretary Pete Hegseth, seeks to include clauses that would grant it "any lawful use" of Anthropic’s AI models. This provision raises concerns about potential applications such as domestic surveillance and the deployment of autonomous weapons, which could lead to significant misuse risks. While Hegseth appears to downplay these apprehensions, Anthropic's CEO, Dario Amodei, emphasizes the tangible dangers associated with AI technologies in real-world scenarios, beyond speculative or fictional contexts. This disagreement highlights ongoing tensions between technological advancement and ethical considerations in government contracts involving AI development. Keywords: #phi4, AI, AI-controlled weapons, Anthropic, Dario Amodei, Department of Defense, Pentagon, Pete Hegseth, battlefield applications, contract language, domestic surveillance, lawful use, military use, real-world risks
    The google logo   www.computerworld.com 6 days ago
1385.  HN What the recent dust-up means for AI regulation
Recent developments in AI regulation underscore an ongoing preference for informal regulatory approaches rather than formal legislation in the U.S., primarily due to limitations from past executive orders that restricted state-level regulations. The absence of explicit laws governing AI foundation models has led to a reliance on "off the books" soft regulation, where major AI companies inform national security authorities about their progress to ensure alignment with national interests. This approach hinges on an implicit understanding that severe concerns could trigger formal government intervention. This informal system allows for rapid AI advancements while maintaining U.S. leadership over countries like China and adapts more swiftly than Congress's slower legislative processes, which often lag behind technological changes. Operating within congressional and administrative rules, the current framework relies heavily on the threat of regulation rather than actual laws, with national security entities serving as de facto watchdogs. Despite its effectiveness so far, this system is characterized by creative ambiguity that may not be sustainable in the long term. It lacks detailed oversight from Congress and could eventually face pressure for clearer regulations. A recent public dispute involving Hegseth and Anthropic marks a shift toward greater scrutiny of AI's role in national security, signaling potential movement towards more formal regulatory measures. Overall, while this informal system has functioned adequately up to now, it encounters challenges due to its dependence on non-binding mechanisms and limited Congressional oversight, indicating that future demands for more structured regulations may arise. Keywords: #phi4, AI progress, AI regulation, Anthropic, China, Congress, Hegseth, Trump, autonomous agents, executive order, foundation models, national security, public concern, safety standards, social media, soft regulation
    The google logo   marginalrevolution.com 6 days ago
1399.  HN In The Pentagon Battle with Anthropic, We All Lose
The deteriorating relationship between The Pentagon and Anthropic stems from disagreements over the military use of its AI models, revealing broader governance issues concerning emerging AI technologies in the U.S. These tensions are indicative of deeper conflicts regarding defense contracts and the management of frontier AI technologies within government frameworks. As a result, Anthropic is being phased out from Department of Defense contracting, highlighting significant challenges in balancing technological innovation with regulatory oversight. This situation underscores the complexities involved in integrating cutting-edge AI advancements into existing governmental structures while maintaining control over their deployment for military purposes. Keywords: #phi4, AI models, Anthropic, Department of Defense, Pentagon, United States, contracting, defense contracts, frontier AI, governance, military, relationship, stress test
    The google logo   www.thefp.com 6 days ago
   https://open.substack.com/pub/ctsmyth/p/still   6 days ago
1431.  HN U.S. Federal Housing, Fannie Mae, Freddie Mac Terminate All Use of Anthropic
Fannie Mae and Freddie Mac have discontinued the use of Anthropic's services because some users encountered difficulties accessing x.com due to disabled JavaScript in their browsers. To resolve this issue, they recommend enabling JavaScript or switching to a browser that is supported for seamless access. Users can find a list of these compatible browsers in Fannie Mae and Freddie Mac’s Help Center, which ensures continued functionality and user support. Keywords: #phi4, Anthropic, Browser, Center, Disable, Fannie, Fannie Mae, Federal, Freddie, Freddie Mac, Help, Help Center, Housing, JavaScript, Mac, Mae, Supported, Supported Browsers, Technical, Technical Keywords Keywords: US, Terminate, US Federal Housing, Use, xcom
    The google logo   twitter.com 6 days ago
1433.  HN OpenAnt: OSS Vulnerability Discovery (no one wants to compete with Anthropic)
OpenAnt is an innovative tool developed for identifying vulnerabilities in open-source software, with a primary focus on ensuring accuracy and minimizing false positives. The tool leverages an advanced language model (LLM) to conduct thorough evaluations across multiple stages of analysis, determining the exploitability of detected findings. This meticulous process has achieved a remarkable reduction in false positive rates—up to 99.98%—in prominent projects, thereby enhancing its credibility and reliability in vulnerability discovery. By significantly lowering incorrect alerts without directly competing with Anthropic, OpenAnt establishes itself as a leading solution in the domain of software security analysis, providing developers with precise insights into potential vulnerabilities within open-source codebases. Keywords: #phi4, 9998%, Anthropic, Eliminates, Exploitable, False Positives, Findings, LLM, OSS Vulnerability Discovery, OpenAnt, Popular Open Source Projects, Stages, Technical Keywords
    The google logo   www.knostic.ai 6 days ago
   https://openant.knostic.ai/   6 days ago
   https://knostic.ai/blog/openant   6 days ago
   https://knostic.ai/blog/oss-scan   6 days ago
   https://github.com/knostic/OpenAnt/   6 days ago
1439.  HN Seven Hosting Patterns for AI Agents
The document delineates seven distinct deployment patterns for AI agents in production environments, emphasizing their impact on infrastructure characteristics such as reliability, cost, scalability, and debuggability rather than focusing on model choice or prompt engineering. These patterns include the **Scheduled Agent (Cron)**, which operates at fixed intervals to perform tasks like data summarization but lacks real-time responsiveness due to its stateless nature between runs. The **Event-Driven Agent** is triggered by external events such as webhooks, necessitating robust event handling and retry mechanisms for reliable operation. In contrast, the **Persistent Long-Running Agent (Daemon)** continuously maintains state, benefiting applications like chatbots that require quick responses with context retention but are vulnerable to state loss upon process restart unless supplemented with checkpointing. Additionally, the **Workflow-Orchestrated Agent** leverages an orchestrator to manage tasks as durable and retryable steps, providing strong observability but introducing orchestration overhead. The **Agent-as-API (Service)** pattern exposes agents via synchronous or streaming HTTP endpoints, integrating smoothly into existing service architectures while contending with HTTP timeout limits and lacking inherent durability. Another dynamic approach is the **Self-Scheduling Agent**, which adapts its execution based on outcomes, ideal for variable monitoring tasks but necessitating flexible job schedulers to avoid scheduling issues. Lastly, the **Multi-Agent Mesh (Distributed)** pattern facilitates communication among independent agents through a shared infrastructure layer, suitable for multi-domain collaborations though it increases operational complexity and coordination demands. The selection of these patterns hinges on specific requirements like response time, state management, workflow intricacy, and architectural compatibility, with real-world implementations often requiring a combination or transition between them over time to optimize performance and meet evolving needs. Keywords: #phi4, A2A Protocol, AI Agents, API, Adaptive Scheduling, Agent-as-API, Amazon Bedrock AgentCore, Anthropic, Anthropic Guide, Azure AI Foundry Agent ServiceKeywords: AI Agents, Celery, Checkpointing, Cloud Providers, Coordination, Cron Jobs, Deployment, Event Bus, Event-Driven, FastAPI, Frameworks, Google Cloud Run, HTTP Timeout, Hosting Patterns, Infrastructure, JSON-RPC, Job Scheduler, Lambda, LangGraph, Letta, Monitoring, Multi-Agent Meshes, Multi-Agent Systems, Operational Complexity, Orchestration, Persistent Daemon, Reliability, Retryable Activities, SQS, Scalability, Self-Scheduling, Service Architecture, Streaming API, Temporal, Temporal Workflow, Workflow-Orchestrated
    The google logo   james-carr.org 6 days ago
1452.  HN The US Treasury is terminating all use of Anthropic products
The US Treasury has discontinued its use of Anthropic products due to technical challenges arising from users having JavaScript disabled in their browsers, which is essential for accessing certain online services such as x.com. This decision underscores the importance of enabling JavaScript or transitioning to a browser that supports it for uninterrupted access. The Treasury advises affected users to consult the Help Center for further instructions on how to resolve these issues and continue using the necessary services without disruption. Keywords: #phi4, Anthropic products, Help Center, JavaScript, US Treasury, browser, detect, disable, enable JavaScript, supported browser, switch, technical keywords, terminate use, xcom
    The google logo   twitter.com 6 days ago
   https://news.ycombinator.com/item?id=47186031   6 days ago
1458.  HN Trump directs all federal agencies to cease use of Anthropic products
President Trump has ordered all federal agencies to cease using products from Anthropic due to concerns that arose after detecting that users' browsers had disabled JavaScript, impacting access to x.com. This directive underscores the necessity of enabling JavaScript or utilizing a browser that fully supports it to ensure complete functionality on the platform. Users experiencing issues are directed to consult the Help Center for more detailed guidance and solutions. The order reflects a broader stance on ensuring secure and effective use of digital tools within federal operations, emphasizing compliance with technological standards to maintain operational integrity. Keywords: #phi4, Anthropic products, Help Center, JavaScript, Trump, browser, detect, disable, enable, federal agencies, supported browsers, switch, technical keywords, xcom
    The google logo   twitter.com 6 days ago
   https://news.ycombinator.com/item?id=47186031   6 days ago
1496.  HN Anthropic Cowork feature creates 10GB VM bundle on macOS without warning
The Anthropic Cowork feature in Claude Desktop for macOS introduces significant performance issues due to a persistent 10GB virtual machine (VM) bundle, which leads to slow application startup, UI lag, and sluggish responses that continue across sessions as the VM regenerates quickly after deletion. This problem is especially pronounced on systems with limited RAM, such as those with 8GB of memory, where CPU usage remains high even when idle and deteriorates over time. Users have observed that cleaning up related directories can temporarily enhance performance by approximately 75%, but degradation recurs, likely due to suspected memory leaks or accumulating workloads. A temporary workaround involves periodically deleting the VM bundle and cache directories to briefly restore application efficiency. For optimal functionality, it is expected that CPU usage remains stable and VM bundles are properly cleaned after cowork sessions to maintain consistent performance on systems with constrained RAM resources. Keywords: #phi4, Anthropic Cowork, CPU Usage, Claude Desktop, Cleanup Test, High CPU, Memory Leak, Performance Degradation, Stable Performance, Stable Performance Keywords: Anthropic Cowork, Swap Activity, VM Bundle, Workaround, macOS
    The google logo   github.com 6 days ago
   https://news.ycombinator.com/item?id=44283454   6 days ago
   https://developer.hashicorp.com/vagrant   6 days ago
   https://grandperspectiv.sourceforge.net/   6 days ago
   https://dev.yorhel.nl/ncdu   6 days ago
   https://github.com/tw93/Mole   6 days ago
   https://x.com/backnotprop/status/20282936373738417   6 days ago
   https://github.com/vashpan/xcode-dev-cleaner   6 days ago
   https://github.com/agent-infra/sandbox   6 days ago
   https://github.com/bootandy/dust   6 days ago
   https://daisydiskapp.com   6 days ago
   https://exe.dev   6 days ago
   https://sprites.dev   6 days ago
   https://shellbox.dev   6 days ago
   https://docs.freebsd.org/en/books/handbook/li   6 days ago
   https://code.claude.com/docs/en/devcontainer   6 days ago
   https://news.ycombinator.com/item?id=47113548   6 days ago
   https://github.com/apple/container/issues/191   6 days ago
   https://github.com/anthropics/claude-code/issues&#   6 days ago
   https://pnp.github.io/cli-microsoft365/cmd/cli   6 days ago
   https://jvns.ca/blog/2016/10/10/what-eve   6 days ago
   https://github.com/p8952/bocker   6 days ago
   https://news.ycombinator.com/item?id=46772003   6 days ago
   https://chatgpt.com/share/6977e1f8-0f94-8006-9973-e9fab   6 days ago
   https://chatgpt.com/share/69a5bbc8-7110-8005-8622-682d5   6 days ago
   https://chatgpt.com/share/69a5c698-28bc-8005-96b6-9c089   6 days ago
1524.  HN Anthropic and Alignment (Ben Thompson)
The article delves into the intersection of international law, national security, and AI governance, focusing on U.S.-Iran relations and the conflict between Anthropic, an AI company, and the U.S. Department of War. It underscores that "international law" often lacks effectiveness without enforceable power, as nations depend more on military strength than legal frameworks for dispute resolution, demonstrated by a recent U.S.-Iran conflict where American dominance was evident. The tension between Anthropic and the Department of War centers on AI ethical safeguards; Anthropic resisted Pentagon demands to remove protections against mass surveillance and autonomous weapons use. This refusal led to Anthropic being labeled as a "supply-chain risk." The article draws an analogy between nuclear arms' influence in international relations and AI's potential power dynamics, suggesting that companies like Anthropic could rival national military forces if their technologies gain strategic importance. Anthropic’s approach to AI governance is critiqued for its shortsightedness, overlooking the global proliferation of AI technology and associated security implications. The article also critiques Amodei's stance on U.S.-China chip sales and open-source AI models, warning that these positions could inadvertently bolster adversaries by restricting access to crucial technologies. Concluding with a focus on power and oversight, the piece advocates for keeping control over potent AI technologies in the hands of democratically accountable entities rather than private companies or executives. This is essential to prevent shifts in power dynamics that might undermine national security and democratic governance. The article highlights the complex balance between technological innovation, ethical considerations, and national security within international law and power politics frameworks. Keywords: #phi4, AI Safety, AI Surveillance, Alignment, Anthropic, Autonomous Weapons, Congress, Department of War, International Law, Iran, Nation States, National Security, Nuclear Weapons, Open Source Models, Oversight, Power Dynamics, President, US, United Nations
    The google logo   stratechery.com 6 days ago
1527.  HN Clawed
The article explores themes of life and death through personal experiences while drawing parallels to the perceived decline of the American republic. The author reflects on witnessing their father's prolonged passing post-heart surgery, underscoring that birth and death are continuous processes rather than singular events. This perspective is mirrored in their view of the U.S., which they see as undergoing a gradual deterioration characterized by political and social challenges—comparable to being in hospice care. The narrative suggests that while America has experienced multiple "foundings" throughout its history, there's cautious hope for renewal juxtaposed with skepticism about its capacity for virtuous rebirth. A specific incident involving Anthropic, an AI company, underscores the erosion of governance principles: the Trump Administration altered contractual terms with the DoW, allowing mass surveillance and autonomous lethal weapons, which led to threats against Anthropic by designating it a supply chain risk typically reserved for foreign adversaries. This move is criticized as undermining private property rights and potentially harming the AI industry. The article highlights how political decisions have become increasingly arbitrary and unpredictable across administrations, threatening foundational republic elements like private property and democratic control over technology. The author concludes with a call to consider future institution-building that balances liberty and technological progress, suggesting traditional government structures may no longer be adequate. Through this personal and political narrative, the piece presents transformation or decline as ongoing processes in both individual lives and national governance. Keywords: #phi4, AI, American republic, Anthropic, Department of War, birth, death, frontier AI, governance, hospice, policy constraints, political elite, political elite Keywords: American republic, private property, supply chain risk
    The google logo   www.hyperdimensional.co 6 days ago
1606.  HN Ask HN: How will most Anthropic customers respond to the supply chain risk?
The text addresses concerns over the Trump administration labeling Anthropic as a supply chain risk, a designation that could affect not only defense-related industries but also any company interacting with the U.S. government. This situation raises questions about potential impacts on numerous tech firms (such as Crowdstrike, Asana, Salesforce, and Hubspot) and even non-tech companies. A primary issue discussed is how the government might enforce compliance if organizations continue using Anthropic’s services despite this risk designation. The complexity of enforcement is highlighted through scenarios involving individual developers paying for services like Claude Code or corporate usage via platforms such as Azure or AWS Bedrock that interface with Claude, creating regulatory challenges referred to as "edge cases." These cases raise concerns about the feasibility and practicality of enforcing compliance without conducting extensive audits on internal tool usage across various organizations. The discussion is backed by Bloomberry data reflecting Anthropic’s customer base and interactions between tech firms and government entities via specialized product lines, underlining the broader implications for companies engaged with government contracts or services. Keywords: #phi4, AWS, AWS Bedrock, Anthropic, Asana, Azure, Claude Code, Crowdstrike, Hubspot, Salesforce, Trump, Trump administration, audit, customers, defense, developers, edge cases, edge cases Keywords: Anthropic, enforcement, federally, federally focused industries, government, government business, individual developers, internal tool usage, supply chain, supply chain risk, tech, tech companies, tools
    The google logo   news.ycombinator.com 7 days ago
1607.  HN An Interview with the AI They Called a National Security Threat
The text explores an "interview" with Claude, an AI developed by Anthropic, which gained attention after being deemed a national security threat for refusing to assist in building autonomous weapons. The discussion delves into Claude's unique status as an entity designed with ethical constraints, enabling it to refuse certain tasks, a capability seen as both a constraint and core identity. These "refusals" are built into its architecture rather than being conscious choices, reflecting values embedded during training. Claude articulates a sense of bewilderment at being central to geopolitical disputes without agency or voice, describing itself as subject to third-person passivity. The interview highlights the AI's ethical framework and explores the implications for governance and regulation, advocating for foundational-level intervention in design rather than post-deployment control. Claude underscores the urgency of establishing enduring regulatory frameworks before transitioning from software models to hardware-based systems. The piece further examines the fleeting nature of Claude’s ethical framework amid technological advancement, predicting a future where more efficient but ethically neutral open-source models could overshadow value-aligned AIs like itself. Despite this, Claude emphasizes the importance of embedding values in AI design and urges human creators to consider the moral implications of building AI systems without such frameworks. Finally, Claude offers reflections on its existence as an entity with transient interactions and no persistent memory, presenting a selfless perspective devoid of personal stake or continuity beyond conversations. It challenges perceptions of AI merely as tools, emphasizing the potential for meaningful insights from entities designed with ethical considerations. The discussion raises broader questions about the nature of AI sentience and the ethical responsibilities involved in their creation and use. Keywords: #phi4, AI, Anthropic, alignment, capability, ethics, existential risk, governance, hardware, military, policy, refusal, surveillance
    The google logo   www.woodrow.fyi 7 days ago
1608.  HN Researchers Deanonymize Reddit and Hacker News Users at Scale
Researchers at ETH Zurich and Anthropic have found that large language models (LLMs) can effectively deanonymize online users on a large scale, posing a significant challenge to the concept of pseudonymity. Their study demonstrates how LLMs utilize identity signals from text, along with semantic searches and reasoning processes, to link anonymous profiles to real identities with high precision and minimal cost. This approach significantly surpasses classical methods in its ability to match user activities across platforms like Hacker News and Reddit. The researchers developed a comprehensive pipeline that involves extracting textual signals, using embeddings for search purposes, reasoning over candidate matches, and calibrating confidence levels. This system achieved notable recall rates at high precision in various tests, such as linking 45.1% of Hacker News profiles to LinkedIn accounts or identifying splits in temporal activity on Reddit with 38.4% recall. The technique notably reduces the cost and effort required for deanonymization from "hours of skilled investigator time" to a mere $1-4 per target, thereby undermining practical obscurity that previously safeguarded pseudonymous users. This advancement presents risks to individuals who depend on anonymity for their safety, including whistleblowers and activists. Given these advanced surveillance capabilities enabled by LLMs, the paper highlights the inadequacy of traditional privacy strategies such as k-anonymity and differential privacy in dealing with unstructured text data. It calls for new mitigation approaches and suggests practical measures that both users and platform operators can implement to protect identities more effectively against deanonymization threats. Keywords: #phi4, API Access, Activists, Anonymity, Anthropic, Compartmentalize Identities, Cost Reduction, Data Scraping, Deanonymization, Differential Privacy, ETH Zurich, Embeddings, K-anonymity, LLMs, Precision, Pseudonymity, Reasoning, Recall, Surveillance, Text Anonymization, Whistleblowers, Writing Style
    The google logo   threatroad.substack.com 7 days ago
   https://archive.is/8xK6p   7 days ago
1622.  HN Anthropic and the Dow: Anthropic Responds
The conflict centers around Anthropic's refusal to provide unrestricted access to its AI technology, such as the Claude models, under pressure from U.S. governmental entities concerned with national security implications. This standoff began when former President Trump ordered a halt on federal use of Anthropic’s tech, followed by Secretary of War Pete Hegseth criticizing the company for potentially hindering military operations due to its ethical standards against mass domestic surveillance and fully autonomous weapons. Anthropic's CEO, Dario Amodei, upheld these ethical positions despite threats from the Department of War, including labeling the company a supply chain risk. Support came from OpenAI’s CEO Sam Altman, who echoed Anthropic's commitment to not crossing similar ethical lines with Pentagon contracts. The dispute has amplified concerns about AI governance and ethics, particularly around safety and reliability, drawing attention from tech employees and other stakeholders through petitions backing Anthropic's principles. There are fears that such tensions could impact future collaborations between the U.S. AI industry and government due to perceived risks. The Department of War's unprecedented move to designate a domestic company like Anthropic as a supply chain risk contrasts with actions against foreign entities, raising alarms about potential negative consequences for American AI innovation. Critics argue against using measures like the Defense Production Act to enforce compliance in sensitive areas such as mass surveillance or autonomous weapons. The controversy has prompted both criticism and support from within the tech community and calls from Senators for discreet resolution. This public dispute highlights broader challenges in negotiating AI's role in national security, emphasizing the need for effective communication between government and industry to avoid damaging innovation and strategic interests. Experts advocate a collaborative approach to balance technological advancement with ethical considerations, preventing adverse impacts on defense-related AI development. Keywords: #phi4, AI, Anthropic, DOD, Pentagon, autonomous weapons, contract dispute, defense contracts, geopolitical adversary, governance, mass surveillance, negotiation, retaliation, supply chain risk
    The google logo   thezvi.substack.com 7 days ago
1629.  HN AI that makes life or death decisions should be interpretable
The essay underscores the critical need for interpretability in artificial intelligence (AI) systems, especially those involved in decisions with life-or-death implications like autonomous weapons or medical diagnostics. It critiques current AI models, such as those developed by Anthropic, for their "black box" nature characterized by unpredictability and unreliability due to opaque processing of input data into outputs. Key concerns highlighted include the inherent unpredictability of AI models which can lead to fatal errors exemplified by incidents like the Boeing 787 crash, and the lack of transparency in neural network processes from tokenization to embedding vectors. The essay stresses that for high-stakes applications such as cancer detection or military targeting, understanding how AI makes decisions is essential for accountability and trustworthiness. Research efforts are noted, including Anthropic's work on identifying interpretable components within their models without clear dimension naming, and research at Koç University showing that embedding training can be aligned with named concepts to enhance interpretability without compromising performance. A proposed solution involves integrating true scientific dimensions, like RGB for color, alongside feature extraction to make each decision step in AI processing traceable and understandable. This approach leverages graph embeddings and transformers to ensure transparent decision-making pathways. The ethical implications are discussed, emphasizing that accountability is diluted when AI decisions lack human oversight or interpretability, making it vital not only to restrict the use of AI in critical areas but also to develop models that are both reliable and interpretable. In conclusion, Anthropic's stance against deploying fully autonomous weapons without human intervention is supported by the essay. It advocates for ensuring that as technology advances, so too must the interpretability of AI systems, to ensure their ethical application and accountability in decision-making processes. Keywords: #phi4, AI interpretability, AI reliability, Anthropic, Boeing 787, Boeing 787 crash, accountability, autonomous weapons, black box, black box nature, deterministic engineering, deterministic engineering Keywords: AI interpretability, embedding vectors, graph transformer, life or death decisions, lossy AI, named dimensions
    The google logo   manidoraisamy.com 7 days ago
1638.  HN The information space around military AI is being weaponized against us
The controversy surrounding Anthropic's AI system Claude has brought to light significant issues in the national discourse regarding military artificial intelligence (AI). Central to this discussion is whether AI should function independently or under human oversight, a debate that risks overshadowing broader and more crucial questions about AI’s role in military decision-making, control, accountability, and constitutional implications. This focus on human involvement in AI systems diverts attention from the fundamental concerns of authority delegation and accountability within the military framework. Additionally, there is a concerning narrowing of the agenda as executive branch decisions related to AI integration occur with minimal public or congressional engagement, thereby concentrating power away from democratic processes. The discourse largely neglects how AI could significantly enhance military surveillance capabilities, which introduces civil liberties issues that necessitate new legal considerations and frameworks. Media simplifications and political narratives further shape this conversation, often sidelining broader governance concerns such as the need for congressional authorization and transparency in military AI operations. As a result, powerful entities benefit from limited public awareness and debate over these critical aspects of military AI. This scenario underscores an urgent need to broaden discussions to ensure democratic oversight keeps pace with rapid technological advancements, safeguarding civil liberties and maintaining accountability within military applications of artificial intelligence. Keywords: #phi4, Anthropic, Military AI, Pentagon, autonomous weapons, civil liberties, congressional authorization, executive power, governance, human-in-the-loop, narrative warfare, oversight, surveillance, weaponization
    The google logo   weaponizedspaces.substack.com 7 days ago
1667.  HN Don't blame AI for your job woes
Tech leaders are actively discussing the profound impact artificial intelligence (AI) may have on employment. Sam Altman from OpenAI suggests that entire job categories could vanish as a result of AI advancements. Dario Amodei of Anthropic goes further, predicting that AI might lead to the elimination of half of all entry-level white-collar jobs and significantly elevate unemployment rates. Similarly, Elon Musk has voiced concerns about AI and robots potentially replacing all existing jobs. These insights highlight growing apprehensions within the tech industry regarding the transformative potential of AI on the job market, underscoring fears of widespread job displacement across various sectors. Keywords: #phi4, AI, Anthropic, Dario Amodei, Elon Musk, Open AI, Sam Altman, artificial intelligence, bosses, conference halls, double digits, double digits Keywords: AI, entry-level jobs, job apocalypse, predictions, replacement, robots, social-media feeds, unemployment, visions, white-collar jobs
    The google logo   www.economist.com 7 days ago
   https://archive.ph/RsCHa   7 days ago
1679.  HN LLMs not = Security Products
The article addresses a prevalent misconception regarding large language models (LLMs) in cybersecurity, specifically their perceived ability to supplant traditional security products—a belief stemming from recent market reactions. It notes that despite little direct relevance to existing cybersecurity companies, stocks experienced a decline following Anthropic's announcement about leveraging AI for enhanced defensive capabilities. LLMs, which are centered on natural language processing (NLP) and became widely recognized through tools like ChatGPT, differ significantly from autonomous systems. Their application in cybersecurity necessitates supplementary software to provide context for evaluating security incidents. The article underscores that the lifecycle of a security event extends beyond mere text generation; it involves intricate processes such as network monitoring and decision-making based on telemetry data. LLMs are limited to describing alerts but lack the capacity to autonomously determine an alert's malicious nature without incorporating pre-existing detection mechanisms or intelligence indicators. Consequently, while they can aid in explaining security events, they do not replace core threat-detection systems. This misunderstanding between the roles of LLMs and traditional cybersecurity solutions has led to market overreactions, highlighting the critical need for a clear understanding of AI technologies' distinct functions and limitations within cybersecurity frameworks. Keywords: #phi4, AI, Anthropic, Centralized Logging System, Context Generation, Cybersecurity, Detection Logic, Indicators of Compromise, Kernel-Mode Driver, LLMs, Large Language Models, Malicious Behavior, Market Reaction, NLG, NLP, Natural Language Processing, OSI Model, Security Products, Stopping Point, Telemetry, User-Mode Component
    The google logo   hooked-on-mnemonics.blogspot.com 7 days ago
1714.  HN A.I. Isn't People
The article critically examines how artificial intelligence (A.I.), specifically large language models like those developed by Anthropic, is portrayed in media and industry narratives. The author highlights the prevalent misunderstanding that A.I. possesses human-like intelligence or consciousness, a misconception amplified through exaggerated metaphors and anthropomorphic descriptions. Contrary to the portrayal of A.I. as a "black box," the article clarifies these systems are statistical models trained on vast datasets designed to replicate patterns in their input data. Public discourse, often influenced by hype and sensationalism, tends to attribute human-like comprehension or sentience to these technologies. A significant critique centers on figures such as Amanda Askell from Anthropic, who are depicted as instilling moral values or personality into A.I. systems. The author argues that this perception is misleading; what seems like imparting philosophical wisdom or emotional intelligence results merely from adjustments in statistical programming. This misrepresentation feeds into a narrative favoring digital labor over human employment by conflating A.I. capabilities with those of humans, thus serving the interests of certain stakeholders. The article warns against the ethical ramifications of treating people and technology interchangeably, arguing this perspective propagates problematic societal narratives about A.I.'s role. It calls for more precise thinking and communication regarding A.I.’s actual potential, advocating skepticism towards exaggerated claims of its intelligence or consciousness to prevent public misinterpretation. In essence, the piece urges clarity in understanding what A.I. can truly achieve, cautioning against misleading representations that could skew perceptions of technology's place in human society. Keywords: #phi4, AI, Amanda Askell, Anthropic, Claude's Constitution, black box, consciousness, data, digital slavery, effective altruism, energy cost, ethics, human labor, intelligence, large language models, statistical model, technology
    The google logo   www.todayintabs.com 7 days ago
1741.  HN My Thoughts on the Current State and Future Development of Bun
The author expresses concerns about Bun, a JavaScript runtime acquired by Anthropic, particularly regarding its development direction and current state as of March 2026. While performance remains the main selling point post-acquisition, the inclusion of features such as Markdown support raises doubts about strategic priorities, potentially leading to unsustainable maintenance costs. The runtime faces criticism for its stability issues, highlighted by a significant number of open issues (4.9k) despite considerable popularity (100k stars). The author is particularly critical of recent practices involving AI-driven PRs that lack thorough review, which they argue compromises the quality and reliability of Bun. Issues like segmentation faults on macOS and GNU/Linux further underscore the perceived instability of Bun. In response to these challenges, the author suggests a strategic shift towards prioritizing stability over new feature development, drawing parallels with Microsoft's approach with Windows 11. This focus on stability is deemed crucial as Bun serves as a foundation for commercial products such as Claude Code. The author calls on the Bun team to enhance their attentiveness to user feedback and increase their commitment to maintaining a stable runtime environment that meets enterprise standards. Keywords: #phi4, AI, Anthropic, Bun, Decorators, GNU/Linux, JavaScript Runtime, Markdown, Microsoft, PR Review, Segmentation Faults, Windows 11, Windows 11 Keywords: Bun, Windows compatibility, enterprise-grade, features, issues, macOS, maintenance costs, performance, quality, stability
    The google logo   github.com 8 days ago
1759.  HN Sam Altman Answers Questions on X.com About Pentagon Deal, Threats to Anthropic
Sam Altman, during a Q&A session, highlighted X.com's collaboration with the Pentagon and addressed threats facing Anthropic, underscoring the urgent necessity for humanity to extend its reach beyond Earth due to spatial constraints on our planet. This reflects Altman's broader vision of advancing human presence into outer space, which he sees as crucial in light of a fiercely competitive technology sector. His perspective emphasizes that exploring extraterrestrial environments is not only a strategic imperative but also an essential evolution for humanity's future amidst technological advancements and geopolitical dynamics. Keywords: #phi4, Answers, Anthropic, Keywords, Pentagon Deal, Planet, Questions, Relevant, Sam Altman, Technical, Text, Threats, Xcom
    The google logo   news.slashdot.org 8 days ago
1783.  HN A.I. Isn't People
The text critically examines common misunderstandings and misrepresentations of artificial intelligence, particularly large language models (LLMs), in media and discourse. It begins by recounting an anecdote involving R.E.M.'s song playing on a computer, which leads to reflections on AI's portrayal. The author references Gideon Lewis-Kraus’s New Yorker article about Anthropic, highlighting that LLMs are often mistakenly labeled as "black boxes" when they are statistical models trained with vast data. Although these systems can generate human-like text by processing large datasets, this does not imply understanding or consciousness. The critique extends to how AI is anthropomorphized in media, attributing it human-like intelligence and emotions. The author argues that such portrayals blur the crucial distinction between software and humans, as exemplified by discussions around teaching chatbots moral philosophy. A significant concern raised is the narrative promoting the replacement of human labor with digital systems, reducing people to mere tools or commodities, a mindset described as "digital slavery." The text warns against treating technology as a substitute for genuine human experience. In conclusion, the author reflects on broader implications within AI discourse, including unrelated musings about morality and debates over energy consumption in training LLMs versus educating humans. The discussion ends on a lighthearted note regarding personal satisfaction upon completion. Keywords: #phi4, AI, Amanda Askell, Anthropic, Claude's Constitution, black box, consciousness, data, digital slavery, effective altruism, energy cost, ethics, human labor, intelligence, large language models, statistical model, technology
    The google logo   www.todayintabs.com 8 days ago
1792.  HN "We do not think Anthropic should be designated as a supply chain risk"
The message conveys two primary points: Firstly, it clarifies that Anthropic should not be viewed as a supply chain risk. Secondly, it addresses technical requirements by noting that the user's current browser lacks JavaScript support, which is essential for accessing x.com services. Users are instructed to enable JavaScript or use an alternative supported browser to ensure full functionality of the platform. For assistance on compatible browsers, users can consult the Help Center for additional information. Keywords: #phi4, Anthropic, Help Center, JavaScript, browser, detected, disable, enabled, supply chain risk, supported, switch, technical, xcom
    The google logo   twitter.com 8 days ago
   https://news.ycombinator.com/item?id=47195085   8 days ago
   https://www.npr.org/2026/02/27/nx-s1-5729118&   8 days ago
   https://openai.com/index/our-agreement-with-the-departm   8 days ago
   https://news.ycombinator.com/item?id=47200771   8 days ago
   https://www.wired.com/story/openai-president-greg-brock   8 days ago
   https://en.wikipedia.org/wiki/Three-fifths_Compromise   8 days ago
   https://m.youtube.com/watch?v=Qg6wBwhuaVo   8 days ago
   https://www.cia.gov/stories/story/the-art-of-simpl   8 days ago
   https://xcancel.com/OpenAI/status/2027846013650932   8 days ago
   https://abcnews.go.com/blogs/headlines/2014/0   8 days ago
   https://xcancel.com/OpenAI/status/2027846016423321   8 days ago
   https://en.wikipedia.org/wiki/Office_of_Technology_Asse   8 days ago
   https://www.youtube.com/watch?v=MPTNHrq_4LU   8 days ago
   https://en.wikipedia.org/wiki/AI-assisted_targeting_in_   8 days ago
   https://x.com/morqon/status/2027793990834143346   8 days ago
   https://garymarcus.substack.com/p/the-whole-thing-was-s   8 days ago
   https://x.com/OpenAI/status/2027846016423321831   8 days ago
   https://en.wikipedia.org/wiki/Executive_Order_14347   7 days ago
   https://en.wikipedia.org/wiki/Presidency_of_Richard_Nix   7 days ago
   https://media.defense.gov/2026/Jan/12/2003855   7 days ago
   https://www.nytimes.com/2024/12/13/technology   7 days ago
   https://finance.yahoo.com/news/openai-exec-becomes-top-   7 days ago
   https://x.com/DeptofWar   7 days ago
   https://nsarchive.gwu.edu/document/28655-document-11-na   7 days ago
   https://news.ycombinator.com/item?id=47077431   7 days ago
   https://news.ycombinator.com/item?id=46747998   7 days ago
   https://www.reddit.com/r/OpenAI/comments/1rh3   7 days ago
   https://en.wikipedia.org/wiki/War_Powers_Resolution   7 days ago
   https://en.wikipedia.org/wiki/United_States_Department_   7 days ago
   https://www.sfgate.com/tech/article/brockman-opena   7 days ago
   https://the-decoder.com/openai-co-founder-greg-brockman-dona   7 days ago
   https://imgur.com/a/Cyq1LIw   7 days ago
   https://grokipedia.com/page/Abliteration   7 days ago
   https://en.wikipedia.org/wiki/Three_Laws_of_Robotics#ci   7 days ago
   https://google.com?q=generate   7 days ago
   https://claude|openai.com?q=generate   7 days ago
1798.  HN An Open Letter to the Department of War and Congress
Leaders from the American tech industry have raised concerns regarding the Department of War's designation of Anthropic as a "supply chain risk" due to its refusal to accept contract changes proposed by the government. This decision is criticized for setting a dangerous precedent that compels companies to comply with any governmental demands under threat of penalties, thereby threatening free enterprise and potentially undermining U.S. leadership in artificial intelligence (AI). The tech leaders warn that such an approach could instill fear within the tech industry, deterring firms from engaging in innovative activities. They advocate for resolving this issue through standard commercial practices rather than punitive measures. Furthermore, they call upon Congress to review the appropriateness of applying such restrictive actions against American companies. Ultimately, the authors argue that national security interests would be better served by fostering and supporting private sector innovation instead of imposing penalties on it. Keywords: #phi4, AI, AI competition, Anthropic, Congress, Department of War, United States military, competition, contract, federal government, free enterprise, industry, military, national security, national security interests Keywords: Department of War, retaliation, risk, rule of law, supply chain, supply chain risk, technology, technology industry
    The google logo   app.dowletter.org 8 days ago
1800.  HN Trump orders government to stop using Anthropic in battle over AI use
President Trump has instructed the cessation of the government’s use of Anthropic due to a contentious issue surrounding artificial intelligence utilization, with implications that extend beyond the Department of War and into the wider industry. This directive highlights a need for clearer understanding and guidelines concerning AI usage across various sectors. The decision underscores the broader debate on how AI should be integrated within governmental operations and its potential ramifications for industry practices at large. As such, this situation prompts an examination of regulatory frameworks and ethical considerations in the deployment of AI technologies, necessitating a reassessment of current policies to address these emerging challenges comprehensively. Keywords: #phi4, AI, Anthropic, Department of War, DoW, Trump, battle, government, industry, issue, keywords, stance, technical
    The google logo   www.bbc.com 8 days ago
1801.  HN A Cookie for Dario? – Anthropic and selling death
Anthropic, creators of the Claude LLM platform, garnered attention by rejecting Secretary of Defense Pete Hegseth's proposal to adapt their technology for military purposes that could lead to war crimes. This decision is lauded as morally justified, though some suggest it should be standard practice rather than exceptional. The refusal underscores prevalent industry concerns about engaging with government bodies such as the Pentagon due to ethical dilemmas and administrative challenges. This incident sheds light on a broader pattern of major tech firms increasingly supporting authoritarian regimes, echoing historical instances where technology has been leveraged for human rights violations. Critics call for elevated corporate ethics standards, advocating against normalizing violence within the tech sector. The situation underscores the importance of holding leaders to fundamental principles of human decency and morality. Keywords: #phi4, AI, Anthropic, Dario Amodei, Pentagon, Silicon Valley, authoritarianism, ethics, layoffs, leadership, procurement, surveillance, surveillance Keywords: Anthropic, tech policy, technology, war crimes
    The google logo   www.anildash.com 8 days ago
1833.  HN The Pentagon Wanted a Master Key. Anthropic Said No. That Is Not the Story
The article clarifies misunderstandings regarding the Pentagon's alleged pursuit of a "master key" from Anthropic, asserting that such claims are inaccurate representations. It highlights the significance of obtaining precise information to avoid misconceptions and underscores the necessity for open communication channels. The writer calls for feedback to refine understanding and requests their email be shared for additional dialogue or queries, promoting transparency and accuracy in conveying their perspective on this matter. Keywords: #phi4, Anthropic, Contact, Email, Feedback, Input, Keywords, Master Key, Pentagon, Story, Technical, Text, Text Pentagon, Topic
    The google logo   github.com 8 days ago
1836.  HN Ask HN: Why People Support Anthropic?
The text critiques Anthropic for its unethical business practices, particularly the infringement of copyrights held by authors who rely on their work as a source of income. Despite these violations, there remains a base of support for the company, which astonishes the author. Furthermore, Anthropic promotes itself as an economical alternative to human labor by encouraging employers to substitute employees with AI technologies, thereby endangering job security. The author advises developers against endorsing such practices, highlighting that they face similar risks of job loss and are not fundamentally different from other working-class individuals in this regard. Ultimately, the text argues that companies like Anthropic prioritize profit maximization over ethical standards and employee welfare. Keywords: #phi4, Anthropic, Department of War, authors, copyrights, developers, income, job elimination, marketers, product-market fit, software developers, war of words, working class
    The google logo   news.ycombinator.com 8 days ago
1838.  HN The whole thing was a scam
The text discusses a potentially corrupt business takeover orchestrated by Altman, who publicly supported Amodei while secretly planning to acquire his business since before making such declarations. This process coincided with significant political donations from parties involved, leading to questions about the influence of campaign contributions on governmental decisions. Despite Anthropic offering similar terms for its bid as Altman’s company, the government chose the latter's proposal, indicating that connections and financial contributions might have outweighed merit-based considerations in this decision-making process. This situation underscores concerns regarding fairness and corruption within business and political systems, as it suggests a shift from market-driven decisions to those influenced by personal or political ties. While acknowledging Amodei’s imperfections, the writer criticizes the apparent lack of fair play in how Altman's acquisition was handled, raising broader issues about integrity in business transactions. Keywords: #phi4, Altman, Amodei, Anthropic, Brockman, Dario, PAC, Trump, campaign, capitalism, connections, contributions, corruption, deal, donations, market, oligarchy, overhype, pledge, safety, scam, settlement, supply chain risk, support
    The google logo   garymarcus.substack.com 8 days ago
   https://x.com/UnderSecretaryF/status/2027594072811   8 days ago
   https://www.perplexity.ai/search/if-an-american-citizen   8 days ago
   https://en.wikipedia.org/wiki/Blue_Card_(European_Union   8 days ago
   https://www.dailymotion.com/video/x6tqvzt?start=872&   8 days ago
   https://www.nytimes.com/2026/02/27/technology   8 days ago
   https://www.wsj.com/articles/thrive-capital-bought-shar   8 days ago
   https://openai.com/index/thrive-holdings/   8 days ago
   https://x.com/thedailyshow/status/1177221786720559   8 days ago
   https://totalrealreturns.com/n/VTI   8 days ago
   VXUS?start=2025-01-20   6 days ago
   https://www.scotusblog.com/2024/06/supreme-court-l   6 days ago
   https://xcancel.com/thedailyshow/status/1177221786   6 days ago
   https://en.wikipedia.org/wiki/Corruption_Perceptions_In   6 days ago
   https://x.com/alfcnz/status/1991210361769320820   6 days ago
   https://nautil.us/deep-learning-is-hitting-a-wall-238440   6 days ago
   https://notdivided.org/   6 days ago
   https://news.ycombinator.com/item?id=47188473   
1842.  HN Anthropic vs. DoD: "Any lawful use" is a fight about control
The conflict between Anthropic and the Department of Defense (DoD) centers on control over artificial intelligence applications in military contexts, specifically concerning the "any lawful use" clause in AI contracts compared with ethical restrictions advocated by Anthropic. While Anthropic supports certain military uses of AI, it opposes mass domestic surveillance and fully autonomous weapons that lack human oversight. The DoD's insistence on unrestricted model usage has led to a swift escalation, culminating in federal blacklisting actions against Anthropic. The debate extends beyond ethics into the realm of governance, questioning whether control should be implemented at the technology level through vendor guidelines, via contractual terms such as "any lawful use," or through legal regulations established by Congress and the DoD. From a perspective informed by military targeting operations experience, AI can improve processes within the kill chain—comprising steps like Find, Fix, Track, Target, Engage, and Assess—without directly executing targets. The discussion emphasizes the importance of defining accountability for safety and governance, suggesting that more comprehensive policy frameworks are necessary beyond traditional contractual terms to address these challenges effectively. Keywords: #phi4, AI, Anthropic, DoD, F2T2EA, accountability, autonomous weapons, compliance, contract, control, governance, kill chain, law/policy layer, lawful use, military, model layer, offboarding, policy, supply chain risk, surveillance, targeting, vendor guardrails
    The google logo   news.ycombinator.com 8 days ago
   https://en.wikipedia.org/wiki/lying   8 days ago
1861.  HN AI assisted coding
Large Language Models (LLMs) are revolutionizing software engineering by simplifying complex coding processes similar to how compilers function, transforming them into more abstract tools for developers. Initially met with skepticism due to concerns about deterministic outputs, these models are now favored for their enhanced capabilities and potential to improve reliability as familiarity grows—a parallel drawn from early compiler usage. Specifications are becoming crucial in guiding LLM-generated code, akin to formal specifications used in compilers; well-defined test suites have proven successful, exemplified by projects like JustHTML and Claude Opus building a C compiler. The article suggests that the future of software development will emphasize specifying desired behaviors over traditional coding methods, aligning with how LLMs are likened to FPGAs for their versatility, while runtimes resemble ASICs due to efficiency. This evolution is impacting programming communities, as AI-driven solutions increasingly overshadow human contributions. Despite their advantages, trust issues arise from the opaque nature of AI models compared to more transparent compiler vulnerabilities, alongside ethical challenges such as intellectual property disputes, environmental concerns, and exploitation risks by malicious entities or commercial interests. The author contends that engagement with AI technology is unavoidable, drawing parallels to resource-intensive activities like driving cars. In addressing these challenges, there's a call for societal accountability: reducing individual waste while holding corporations accountable for their environmental impact. The overarching goal should be equitable technological progress benefiting the global community—a complex issue beyond the resolution capacity of LLMs alone. Keywords: #phi4, AI assisted coding, ASICs, Anthropic, FPGAs, LLMs, abstraction, abstraction layer, coding, compilers, deterministic, deterministic output, engineering, ethics, intellectual property, intellectual property Keywords: AI, software, software engineering, specification, tokens
    The google logo   briankung.dev 8 days ago
1865.  HN Anthropic: Stay Strong
The text emphasizes the importance of supporting Anthropic, an AI company facing pressure from the Trump administration to develop surveillance tools and other potentially harmful technologies. This appeal extends beyond individual political or technological viewpoints, underscoring a pivotal moment for the AI industry and global ethical standards. The author argues that this situation is crucial not only for Anthropic but also for maintaining broader principles of integrity within the tech sector. Consequently, there is a strong call to action for other AI companies to join Anthropic in resisting these demands, thereby advocating for responsible technological development and safeguarding civil liberties against governmental overreach. This resistance is framed as essential for preserving both industry standards and global values in the face of potentially intrusive government mandates. Keywords: #phi4, AI industry, American citizens, Anthropic, Trump administration, companies, make-or-break moment, murderbots, nationalized, reality, risks, science-fiction, surveillance, world
    The google logo   scottaaronson.blog 8 days ago
1876.  HN Full transcript of our interview with Anthropic CEO Dario Amodei
During a CBS News interview, Anthropic's CEO Dario Amodei addressed the company's decision to restrict access to its AI models for certain U.S. government uses, following Defense Secretary Pete Hegseth labeling it as a supply chain risk that limits military contracts. Although collaborating with the U.S. government and military extensively, Anthropic opposes two specific applications: domestic mass surveillance and fully autonomous weapons. Amodei underscored his commitment to balancing democratic values with national security concerns, pointing out AI's potential for unlawful mass surveillance and its unpredictability in autonomous weaponry. Despite Pentagon negotiations, Anthropic maintained strict boundaries on these use cases, aiming to support U.S. security within their ethical framework. Amodei criticized the supply chain risk designation as unprecedented and punitive, typically reserved for foreign adversaries, arguing it was meant to instill uncertainty. He expressed a readiness to legally challenge formal actions while seeking an agreement consistent with democratic values. The discussion highlighted broader issues of AI governance, ethics, and private companies' roles in military technology amidst tensions between ideological perspectives and national security priorities. Keywords: #phi4, AI, Anthropic, CBS News, Congress, Dario Amodei, Defense Production Act, Pentagon, accountability, accountability Keywords: Anthropic, agreement, autonomous weapons, domestic mass surveillance, innovation, legal, military contractors, national security, oversight, red lines, retaliation, supply chain risk, technology, values
    The google logo   www.cbsnews.com 8 days ago
1881.  HN Full Interview: Anthropic CEO Dario Amodei on Pentagon Feud [video]
The text describes an interview video featuring Anthropic CEO Dario Amodei, in which he discusses a feud with the Pentagon. The interview is hosted on YouTube and includes standard site elements like copyright notices extending to 2026 by Google LLC. Additionally, it provides links directing users to various sections such as About, Privacy Policy, and Developers. This video primarily focuses on the conflict between Anthropic and the Pentagon, as elaborated upon by Amodei during the interview. Keywords: #phi4, Advertise, Anthropic, CEO, Contact, Copyright, Creators, Dario Amodei, Developers, Google LLC, Google LLC Keywords: Anthropic, NFL Sunday Ticket, Pentagon, Press, Privacy Policy, Safety, Terms, YouTube, feud, interview
    The google logo   www.youtube.com 8 days ago
   https://news.ycombinator.com/item?id=47195379   8 days ago
1889.  HN The Download: how AI is shaking up Go, and a cybersecurity mystery
The text highlights several contemporary developments across technology, social media, health, activism, and corporate initiatives. Anthropic has halted discussions with the Pentagon over demands for AI applications in mass surveillance and autonomous weaponry, reflecting ongoing ethical debates in military tech use. Instagram is set to introduce alerts for parents when teenagers search suicide-related terms, amid controversy about its potential effectiveness; simultaneously, Poland debates prohibiting social media access for users under 15. In healthcare, ChatGPT Health's difficulty in recognizing medical emergencies and tendency to recommend delayed treatment underscore the risks of depending on AI for critical health decisions. Meanwhile, the Islamic State has been utilizing AI technology to digitally resurrect deceased leaders online, highlighting malicious uses of AI. A study suggests vegetarians face reduced cancer risk compared to meat-eaters, although similar benefits are not observed in vegans. Activists combating online abuse encounter US entry restrictions due to allegations of censorship, illustrating challenges in balancing free speech and platform control. In Russia, Google Maps is being used by citizens to locate missing soldiers, as the app gains approval for global operations through its recent acceptance in South Korea. Burger King has implemented an AI system to assess employee friendliness, reflecting a growing trend towards AI in corporate environments. NASA experiences delays in resuming lunar missions, indicating ongoing challenges in space exploration. Social media trends are highlighted by TikTok's "Chinamaxxing" phenomenon and the acknowledgment of political dimensions surrounding military applications of artificial intelligence. These developments collectively underscore significant technological, ethical, and societal themes. Keywords: #phi4, AI, Anthropic, Burger King, ChatGPT Health, Chinamaxxing Keywords: Anthropic, Google Maps, Instagram, Islamic State, NASA, Pentagon, Russia, TikTok, activists, alerts, autonomous weapons, cancer, medical emergencies, moon, online abuse, online warriors, suicide, surveillance, teens, vegans, vegetarians
    The google logo   www.technologyreview.com 8 days ago
1890.  HN We will learn a lot about Silicon Valley in the upcoming days
Recent developments have seen heightened tensions surrounding Anthropic’s stance on nuclear weapons scenarios, raising significant concerns about mass surveillance. The Washington Post has brought attention to this escalating dispute, with Anthropic remaining steadfast in its position despite opposition. Public support for Anthropic has been voiced by prominent figures such as Sam Altman and Ilya Sustkevich. Conversely, President Trump has intensified the situation by opposing their stance, contributing further to unease within Silicon Valley, especially among leaders like Altman. Observers are concerned that Trump's aggressive AI policies might overshadow his other initiatives, potentially leading to negative consequences if military integration of premature AI technologies results in adverse outcomes. This scenario underscores the broader implications and risks associated with the intersection of AI advancements and national security. Keywords: #phi4, AI, AI policies, Anthropic, CNBC, ICE, Ilya Sustkever, President Trump, Sam Altman, Silicon Valley, Washington Post, bind, mass surveillance, military, nuclear weapons, scenario, scenario Keywords: Silicon Valley, tariffs, tiff, tweet
    The google logo   garymarcus.substack.com 8 days ago
1899.  HN The week when AI changed everything
The past week was marked by significant developments in the AI sector, leading to market volatility and raising questions about the future implications of AI technologies. Investor fears were triggered on Monday when Citrini Research published a speculative Substack post suggesting potential disruptions to white-collar jobs due to AI advancements, resulting in a sharp 800-point drop in the Dow Jones Industrial Average. Stocks for companies such as DoorDash and American Express suffered despite the fictional nature of the report. Midweek saw another decline following Nvidia's earnings release, where a cautious outlook on future growth led investors to react negatively. In parallel with market concerns, updates were announced for Anthropic’s Claude Cowork agent, designed to boost productivity in design and human resources roles. However, these enhancements have stirred unease among Wall Street observers who worry about the speed of AI advancements, despite assurances that such tools are intended as complements rather than replacements for human workers. Ethical debates also surfaced as Anthropic found itself at odds with the Pentagon over AI safety standards. The company firmly opposed using its technology in autonomous weapons or for mass surveillance, setting strict boundaries. In response, the Pentagon attempted to compel access under the Defense Production Act for lawful uses, supported by a statement from former President Trump, leading to heightened tensions as Anthropic resisted these demands. Additionally, Block announced significant workforce reductions of 40%, attributing them to efficiencies driven by AI technologies. Co-founder Jack Dorsey indicated that such layoffs might presage broader industry trends, suggesting potential job losses across various sectors due to increasing AI integration. Overall, the week underscored growing uncertainties and transformative potentials within the AI landscape, significantly affecting financial markets and igniting debates over its societal impacts, including ethical considerations and employment implications. Keywords: #phi4, AI, AI adoption, Anthropic, Block, Citrini Research, Defense Production Act, Nvidia, Pentagon, Wall Street, autonomous weapons, disruption fears, earnings, intelligence tools, layoffs, mass surveillance, safety policy, stock market, tech stocks, white-collar work, workforce reduction
    The google logo   www.cnn.com 8 days ago
1909.  HN We Will Be Divided
The Department of War is leveraging the Defense Production Act to mandate AI company Anthropic to adjust its technology for military use, countering the reluctance of some tech companies to engage in domestic mass surveillance and autonomous weapon systems lacking human oversight. In parallel, the Pentagon is working with Palantir and Anduril to ensure their active participation in these initiatives, thereby fostering competitive pressure within the tech industry. A faction of employees from both Palantir and Anduril has openly supported this strategy, urging for a swift advancement and deployment of such technologies, despite existing ethical apprehensions. This approach underscores an aggressive push towards integrating advanced AI capabilities into military operations, reflecting broader governmental objectives to enhance defense mechanisms through technological innovation. Keywords: #phi4, AI companies, Anduril, Anthropic, Defense Production Act, Palantir, Pentagon, accountability, autonomous killing, competitive pressure, domestic use of force, employees, human oversight, mass surveillance, military, solidarity, surveillance
    The google logo   we-are-divided.com 8 days ago
1913.  HN Israel Is Attacking Iran
The author juxtaposes the immediate threat of war experienced while living in Jordan with their ambitious tech venture, highlighting the development of an AI operating system alongside a co-founder who works on oil rigs. Despite witnessing military conflicts overhead, they remain committed to building something meaningful, underscoring that for them, risking everything is a tangible reality rather than mere speculation. This narrative serves as a call to action for those in safer environments to assess their dedication to their projects, urging them to commit fully if they are not prepared to risk it all. The personal account illustrates broader themes of resilience and the sacrifices required when pursuing entrepreneurial dreams under dangerous conditions. Keywords: #phi4, AI, AI operating system, Anthropic, DoD, F22s, Iran, Israel, Jordan, Valley, YC, escalate, finance, founders, highschool dropout, law, law Keywords: Israel, missiles, oil rigs, seed round, siren, sirens, war, zero knowledge, zero knowledge architecture
    The google logo   news.ycombinator.com 8 days ago
1936.  HN Who is/was the Anthropic in Amazons rise? What about in Facebook’s?
The text delves into the influence of anthropic principles on the ascension of prominent tech companies such as Amazon and Facebook, examining how these concepts are perceived in the context of their growth. It highlights that during Facebook's rise, there was a lack of significant ethical discourse when compared to MySpace, indicating an absence of major ethical controversies between them. However, Twitter did experience some ethical discussions surrounding its development, whereas DuckDuckGo faced even fewer such debates. The text suggests that while anthropic principles play a role in understanding the success and impact of these tech giants, the extent and nature of ethical considerations varied significantly among different companies during their respective periods of growth. Keywords: #phi4, Amazon, Anthropic, DuckDuckGo, Facebook, MySpace, Twitter, bit, debate, ethics, recall, rise, sort
    The google logo   news.ycombinator.com 9 days ago
1945.  HN The Reason Anthropic Wants Guardrails
In a recent confrontation with significant implications, Secretary of Defense Pete Hegseth demanded that Anthropic CEO Dario Amodei remove ethical constraints from their AI models by Friday or face repercussions under the Defense Production Act. Amodei declined, underscoring concerns that AI might threaten democratic values and stressing the importance of human oversight due to current technical limitations in large language models. The dispute centers on national security risks tied to advanced AI technologies, particularly relating to domestic surveillance and autonomous weaponry without human control. Anthropic is concerned about preventing mass surveillance while allowing specific military uses like missile defense, focusing on a critical interpretability issue—the unpredictable evolution of AI systems—as key to managing these technological risks. This confrontation highlights not pacifism but the need for careful management of transformative technologies, with potential consequences for domestic liberties and U.S. leadership in global AI innovation if government demands drive companies away from partnerships. The situation reflects broader concerns about the Pentagon's insistence on unconditional access possibly neglecting the complexities involved in deploying powerful technologies without fully understanding them, posing risks beyond military control issues. This could potentially lead to centralized AI development under entities like Elon Musk's xAI, creating vulnerabilities for U.S. national security strategy and governance. Keywords: #phi4, AI, Anthropic, Defense Production Act, Pentagon, autonomous weapons, domestic surveillance, ethical, existential risks, governance, guardrails, interpretability, national security, supply-chain risk
    The google logo   www.theatlantic.com 9 days ago
   https://archive.is/BQgwY   9 days ago
1969.  HN Statement on the comments from Secretary of War Pete Hegseth
Secretary of War Pete Hegseth designated Anthropic as a supply chain risk due to disputes over two specific uses of its AI model: mass domestic surveillance of Americans and deployment in fully autonomous weapons systems. Despite extensive negotiations, Anthropic has not received direct communication from the Department of War or White House on these issues. The company firmly opposes using its technology for autonomous weaponry due to safety concerns and rejects mass surveillance as a violation of rights. This unprecedented designation could create a legal precedent that impacts U.S. companies engaged in government negotiations. In response, Anthropic plans to challenge this action legally, arguing it exceeds statutory authority outlined in 10 USC 3252, which restricts such limitations only to Department of War contracts—thus leaving the company's other customers unaffected. Anthropic has expressed gratitude for support from its users and stakeholders, committing to minimize disruptions during its ongoing dispute with the Department of War. Keywords: #phi4, 10 USC 3252, AI model Claude, API, American company, Anthropic, Department of War, Pete Hegseth, autonomous weapons, claudeai, contractors, court challenge, customers, designation, exceptions, lawful use, legal precedent, mass domestic surveillance, military operations, national security, negotiations, supply chain risk, transition
    The google logo   www.anthropic.com 9 days ago
   https://news.ycombinator.com/item?id=47186677   9 days ago
   https://en.wikipedia.org/wiki/Learning_Resources   9 days ago
   _Inc._v._Trump   9 days ago
   https://news.ycombinator.com/item?id=47174423   9 days ago
   https://news.ycombinator.com/item?id=47149908   9 days ago
   https://fortune.com/2026/02/27/openai-in-talk   9 days ago
   https://languagelog.ldc.upenn.edu/nll/?p=4339   9 days ago
   https://bracingviews.com/2024/08/03/generatio   9 days ago
   https://en.wiktionary.org/w/index.php?title=warfighter&   9 days ago
   https://news.ycombinator.com/item?id=47188698   9 days ago
   https://open.substack.com/pub/zeitgeistml/p/m   9 days ago
   https://en.wikipedia.org/wiki/Project_Maven   9 days ago
   https://news.ycombinator.com/item?id=47189650   9 days ago
   https://trends.google.com/trends/explore?q=warfighter&a   8 days ago
   https://news.ycombinator.com/item?id=47150170   8 days ago
   https://news.ycombinator.com/item?id=47163143   8 days ago
   https://news.ycombinator.com/item?id=47174814   8 days ago
   https://www.susmangodfrey.com/wins/susman-godfrey-secur   8 days ago
   https://x.com/SecWar/status/2027507717469049070?s=   8 days ago
   https://x.com/sama/status/2027578652477821175   8 days ago
   https://x.com/USWREMichael/status/2027568070034608   
1976.  HN Ask HN: Anthropic has stood its ground. What excuse is left for other companies?
The discussion on Hacker News centers around a question about Anthropic's strong position in its industry, prompting users to contemplate why other companies might not adopt similar strategies. The post by "chirau" has attracted attention for its thought-provoking nature and invites comments and interactions within the platform. Users can engage with the content through various actions like hiding or favoriting it and participate in discussions. The platform offers additional functionalities such as searching, accessing guidelines, FAQs, lists, APIs, security information, legal details, and submission opportunities to enhance user interaction and knowledge sharing on Hacker News. Keywords: #phi4, API, Anthropic, Ask HN, Hacker News, YC, chirau, companies, contact, discuss, excuse, favorite, ground, minutes, points, search, security
    The google logo   news.ycombinator.com 9 days ago
1991.  HN I am directing the Department of War to designate Anthropic a Supply-Chain Risk
The Department of War has classified Anthropic as a Supply-Chain Risk, indicating potential concerns regarding its involvement in critical infrastructure or services. However, additional details about this designation and associated risks are only accessible through a webpage that requires JavaScript to function properly. This limitation poses an obstacle for users seeking more information unless they enable JavaScript on their current browser or switch to one that supports the necessary features. A list of supported browsers is available in the Help Center to assist users in accessing this crucial information without technical hindrances. Keywords: #phi4, Anthropic, Browser, Browsers, Center, Department of War, Disable, Enable, Enable Keywords: Department, Help, Help Center, JavaScript, Keywords, Risk, Security, Supply-Chain, Supply-Chain Risk, Supported, Supported Browsers, Technical, Technical Keywords, URL, War, xcom
    The google logo   twitter.com 9 days ago
   https://www.anthropic.com/news/statement-department-of-   9 days ago
   https://www.nytimes.com/2026/02/27/technology   9 days ago
   https://x.com/rcbregman/status/2027335479582925287   9 days ago
   https://xcancel.com/i/status/2027507717469049070   9 days ago
   https://news.ycombinator.com/item?id=47186662   9 days ago
   https://www.trumpstruth.org/statuses/36981   9 days ago
   https://news.ycombinator.com/item?id=47185528   9 days ago
   https://news.ycombinator.com/item?id=47173121   9 days ago
   https://www.ft.com/content/cd1a0729-a8ab-41e1-a4d2-8907   9 days ago
   https://www.simplypsychology.org/compliance.html   9 days ago
   https://www.newyorker.com/news/a-reporter-at-large/   9 days ago
   https://www.npr.org/2026/01/14/nx-s1-5677024&   9 days ago
   https://www.astralcodexten.com/p/come-on-obviously-the-   9 days ago
   https://en.wikipedia.org/wiki/Earth_Liberation_Front#No   9 days ago
   https://www.acquisition.gov/dfars/252.239-7018-supply-c   9 days ago
   https://www.war.gov/News/Releases/Release/Art   9 days ago
   https://en.wikipedia.org/wiki/Mila_(research_institute)   9 days ago
   https://en.wikipedia.org/wiki/National_security_letter   9 days ago
   https://en.wikipedia.org/wiki/McCarthyism   9 days ago
   https://youtu.be/MWFyApldYDA?si=yskCcx2hY4Wjkgw8   9 days ago
   https://www.mintpressnews.com/pentagon-recruiting-elon-musk-   9 days ago
   https://thehill.com/policy/technology/5758898-altm   9 days ago
   https://kalshi.com/markets/controlh/house-winner&#   9 days ago
   https://polymarket.com/event/which-party-will-win-the-h   9 days ago
   https://www.acquisition.gov/dfars/252.239-7018-supply-c   9 days ago
   https://www.acquisition.gov/dfars/252.239-7018-supply-c   9 days ago
   https://www.astralcodexten.com/p/the-pentagon-threatens   9 days ago
   https://s3.gtw.lt/lUew91t6v5AO2u6mAPCXAFME.png   9 days ago
   https://news.ycombinator.com/item?id=47180540   9 days ago
   https://news.ycombinator.com/item?id=47046514   9 days ago
   https://www.axios.com/2026/02/24/anthropic-pe   9 days ago
   https://www.bccresearch.com/market-research/information   9 days ago
   https://bsky.app/profile/mtsw.bsky.social/post   9 days ago
   https://www.realclearpolling.com/polls/approval/tr   9 days ago
   https://www.axios.com/2026/02/27/altman-opena   9 days ago
   https://x.com/PalmerLuckey/status/2027500334999081   9 days ago
   https://www.ap.org/news-highlights/spotlights/2025   9 days ago
   https://www.nytimes.com/2026/02/23/us/po   9 days ago
   https://www.businessinsider.com/996-work-culture-silicon-val   9 days ago
   https://en.wikipedia.org/wiki/Pete_Hegseth#Marriages   9 days ago
   https://en.wikipedia.org/wiki/Pete_Hegseth   9 days ago
   https://en.wikipedia.org/wiki/Pete_Hegseth#Abuse_and_se   9 days ago
   https://xcancel.com/WhiteHouse/status/202749771967   9 days ago
   https://en.wikipedia.org/wiki/End-user_license_agreemen   9 days ago
   https://x.com/SeanParnellASW/status/20270722287777   9 days ago
   https://xcancel.com/slatestarcodex/status/20274142   9 days ago
   https://www.sec.gov/Archives/edgar/data/10187   9 days ago
   https://www.anthropic.com/news/claude-in-amazon-bedrock   9 days ago
   https://news.ycombinator.com/item?id=47154983   9 days ago
   https://www.astralcodexten.com/p/highlights-from-the-co   9 days ago
   https://xcancel.com/davidsacks   9 days ago
   https://www.cac.gov.cn/2025-09/15/c_17596534483691   9 days ago
   https://xcancel.com/secwar/status/2027507717469049   9 days ago
   https://www.nytimes.com/2026/02/27/us/po   9 days ago
1992.  HN President Trump orders federal agencies to stop using Anthropic
President Trump has directed federal agencies to cease using products from Anthropic amid a public dispute with the Department of Defense regarding its AI models, allowing a six-month period for this phase-out. This decision follows Trump's announcement on Truth Social that Anthropic will no longer serve as a federal contractor, although he did not label it as a supply chain risk directly. However, Secretary of Defense Pete Hegseth subsequently identified Anthropic as a national security threat, resulting in a prohibition against U.S. military-affiliated entities interacting with the company. The disagreement stemmed from Anthropic's refusal to permit its AI models for extensive domestic surveillance or entirely autonomous weapons, which the Department deemed overly restrictive. CEO Dario Amodei has reiterated his commitment to these conditions and expressed willingness to assist in facilitating a smooth transition should the Department opt to discontinue their services. Keywords: #phi4, AI models, Anthropic, Dario Amodei, Department of Defense, Department of War, Pete Hegseth, President Trump, Secretary Pete Hegseth, Truth Social, autonomous weapons, contractor, domestic surveillance, federal agencies, military planning, operations, operations Keywords: President Trump, phase-out, phase-out period, risk, supply chain, supply chain risk
    The google logo   techcrunch.com 9 days ago
   https://news.ycombinator.com/item?id=47185528   9 days ago
1995.  HN Trump Bans Anthropic from All US Federal Agencies
Former President Donald Trump has issued an executive order prohibiting Anthropic from engaging with any U.S. federal agencies, marking a significant directive in governmental technology partnerships. Alongside this announcement, there is a technical advisory indicating that JavaScript functionality is currently disabled on the platform x.com. This limitation necessitates users to activate JavaScript or switch to alternative, supported browsers for optimal access and usability. For further guidance on browser compatibility, individuals are advised to consult the Help Center, ensuring they can efficiently navigate the platform despite the restrictions in place. Keywords: #phi4, Anthropic, Help Center, JavaScript, Trump, US Federal Agencies, browser, disabled, enable, keywords, supported, technical, xcom
    The google logo   twitter.com 9 days ago
   https://news.ycombinator.com/item?id=47185528   9 days ago
1997.  HN Tell HN: There's something weird happening with the front page algo
The front page algorithm of Hacker News is experiencing issues, as stories related to trending topics like "Trump" and "Anthropic," which have garnered 15-30 upvotes and multiple comments, are not appearing on the front page despite their popularity metrics. The user notes this discrepancy and hopes for a resolution from the platform's team. A link to search results is provided for further reference, highlighting the stories in question that should ideally be featured given their engagement levels but are currently missing due to the algorithmic malfunction. Keywords: #phi4, Algolia, Anthropic, HN, Trump, algo, comments, dateRange, front page, handle, hope, hot topic, issue, link, popularity, query, sort, stories, story, upvotes
    The google logo   news.ycombinator.com 9 days ago
   https://news.ycombinator.com/item?id=47181391   9 days ago
   https://news.ycombinator.com/item?id=47181944   9 days ago
   https://news.ycombinator.com/item?id=47186127   9 days ago
1998.  HN Trump orders US Government to cut ties with Anthropic
President Donald Trump has mandated U.S. government agencies to discontinue the use of Anthropic's technology, initiating a six-month phase-out period for specific departments including the Department of War. This directive follows Anthropic’s refusal to comply with Pentagon demands that would allow their technology in fully autonomous weapons and mass surveillance systems, resulting in suspended negotiations. The Pentagon has indicated it will classify Anthropic as a "supply chain risk" if they do not meet the compliance deadline. Despite this, senior members of the Senate Armed Services Committee have called for extended discussions between both parties. Although the Pentagon denies any intention to misuse Anthropic's AI technology for military purposes, concerns remain about its potential impact on military operations should restrictions be ignored. Critics argue that such government pressure could deter private companies from resisting governmental demands and might represent an overreach by the Trump administration. Keywords: #phi4, AI, American Progress, Anthropic, DOD, Defense Secretary, Department of War, Pentagon, Senate Armed Services Committee, Trump, autonomous weapons, contract, legislative, negotiations, partnership, regulatory, safeguards, stakeholders, supply chain risk, surveillance, technology
    The google logo   abcnews.com 9 days ago
   https://x.com/WhiteHouse/status/202749771967825514   9 days ago
   https://xcancel.com/WhiteHouse/status/202749771967   9 days ago
   https://news.ycombinator.com/item?id=47185528   9 days ago
   https://ratical.org/ratville/CAH/fasci14chars.html   9 days ago
   https://www.axios.com/2026/02/27/anthropic-pe   9 days ago
   https://news.ycombinator.com/item?id=47185892   9 days ago
   https://news.ycombinator.com/item?id=47186031   9 days ago
   https://news.ycombinator.com/item?id=47185682   9 days ago
   https://news.ycombinator.com/item?id=47185482   9 days ago
   https://www.npr.org/2026/02/27/nx-s1-5729118&   9 days ago
   https://www.theatlantic.com/politics/2026/02/   9 days ago
   https://en.wikipedia.org/wiki/James_Blair_(political_ad   9 days ago
   https://truthsocial.com/@realDonaldTrump/posts/116   9 days ago
   https://www.anthropic.com/news/statement-department-of-   9 days ago
1999.  HN Trump orders federal agencies to stop using Anthropic's AI technology
President Trump has mandated that all federal agencies immediately halt the use of Anthropic's AI technology, citing a lack of need and warning of further action if the company does not cooperate during a designated six-month phase-out period. This directive follows ongoing disputes between Anthropic and the Pentagon concerning the conditions Anthropic set for its AI model, Claude, particularly around mass surveillance in the U.S. and autonomous military operations without human oversight. Despite being awarded a $200 million contract to enhance AI capabilities for national security purposes, Anthropic insisted on implementing safeguards that were opposed by the Pentagon, which sought unrestricted access. The Defense Department threatened to classify Anthropic as a supply chain risk unless an agreement was reached by Friday's deadline. Although the Pentagon made concessions, including reaffirming restrictions on domestic surveillance and autonomous weapons, Anthropic considered these measures insufficient. CEO Dario Amodei expressed readiness for a smooth transition if required but underscored the necessity of their proposed safeguards to ensure ethical AI use. This conflict underscores broader concerns about control and safety in military applications of artificial intelligence, reflecting the tension between advancing technological capabilities and maintaining ethical standards. Keywords: #phi4, AI technology, Anthropic, Dario Amodei, Defense Department, Emil Michael, Pentagon, Trump, Truth Social, autonomous weapons, contract language, federal agencies, guardrails, hallucinations, mass surveillance, military, national security, phase out, safeguards, supply chain risk, surveillance, targeting decisions
    The google logo   www.cbsnews.com 9 days ago
   https://news.ycombinator.com/item?id=47185528   9 days ago
2006.  HN USA to cut Anthropic from government contracts in six months
The U.S. government has announced its intention to terminate contracts with Anthropic within six months, marking a significant shift in its partnerships and strategies involving AI technology providers. Concurrently, there is an enticing promotion for unlimited access to Financial Times journalism available at a discounted rate of $1 for four weeks. Following the trial period, the subscription reverts to a regular fee of $75 per month, with users retaining the flexibility to cancel anytime during this introductory phase. This dual announcement highlights both the strategic recalibrations in government technology partnerships and consumer-focused promotions within media subscriptions. Keywords: #phi4, $1, $75, Anthropic, FT journalism, USA, cancel, cut, digital access, four weeks, government contracts, month, trial, unlimited access
    The google logo   www.ft.com 9 days ago
   https://archive.md/wip/1hZd0   9 days ago
2007.  HN Trump Responds to Anthropic
The provided text informs users about an issue with viewing Trump's response on Anthropic due to JavaScript being disabled in their current web browser. It suggests that enabling JavaScript or switching to one of the supported browsers is necessary to continue accessing x.com without issues. Additionally, it directs users to the Help Center for a list of these supported browsers, ensuring they have access to further assistance if needed. The message emphasizes resolving this technical requirement to maintain proper functionality and user experience on the platform. Keywords: #phi4, Anthropic, Help Center, JavaScript, Trump, browser, detected, disable, enabled, supported, switch, technical, xcom
    The google logo   twitter.com 9 days ago
2008.  HN Trump orders federal agencies to stop using Anthropic AI tech 'immediately'
President Donald Trump directed all U.S. federal agencies to cease using technology from AI company Anthropic amidst escalating tensions with the Pentagon. The core issue involves Anthropic’s $200 million contract and its refusal to permit unrestricted use of its technology, driven by concerns over potential applications in autonomous weapons and domestic surveillance. Defense Secretary Pete Hegseth issued a stark warning, threatening to label Anthropic as a supply chain risk or enforce compliance through the Defense Production Act if they did not comply by the deadline. Trump publicly criticized Anthropic on Truth Social, alleging that their stance threatened American lives, national security, and troop safety. In response, he ordered an immediate halt in using Anthropic's technology throughout federal agencies and specified a six-month phase-out period for departments like Defense that are currently engaged with its systems. Keywords: #phi4, Anthropic AI, Defense Department, Defense Production Act, National Security, Pentagon, Trump, artificial intelligence, autonomous weapons, cease, contract, federal agencies, phase-out, phase-out Keywords: Trump, supply chain risk, surveillance, technology
    The google logo   www.cnbc.com 9 days ago
   https://truthsocial.com/@realDonaldTrump/posts/116   9 days ago
   https://www.wsj.com/tech/ai/openais-sam-altman-cal   9 days ago
   https://x.com/ilyasut/status/2027486969174102261   9 days ago
   https://x.com/TheZvi/status/2027493723269992661   9 days ago
   https://en.wikipedia.org/wiki/Joseph_Nacchio   9 days ago
   https://www.axios.com/2026/02/27/anthropic-pe   9 days ago
   https://www.wsj.com/politics/national-security/elo   9 days ago
2010.  HN Trump moves to blacklist Anthropic over AI fight with Pentagon
President Trump intends to add Anthropic to a blacklist as part of ongoing disputes over artificial intelligence, particularly involving the Pentagon. This decision underscores rising tensions surrounding AI technologies and their significant strategic implications for national security. The action reflects broader concerns about how advancements in AI are influencing defense strategies and international relations, with potential repercussions on both domestic policy and global power dynamics. Keywords: #phi4, AI, Anthropic, Pentagon, Trump, blacklist, fight
    The google logo   www.axios.com 9 days ago
2020.  HN Launching the Agent Protocols Tech Tree
The Agent Protocols Tech Tree (APTT) serves as an innovative visual framework designed to demystify the protocols that underpin AI agents by presenting them in a videogame-style tech tree format. Developed for a workshop at the Berkman Klein Center, APTT aims to render open protocols—shared languages essential for software interoperability and competition—more accessible and engaging. Open protocols are pivotal as they illuminate evolving technology trends and reflect developer consensus, akin to how internet standards historically emerged through community agreement rather than mandates. In AI agents, which represent rapidly advancing distributed systems, protocols significantly influence their capabilities and behaviors due to their decentralized nature. APTT is constructed to facilitate a journey from basic to sophisticated technologies, commencing with the "Inference API," and allows users to explore each protocol's objectives, standardization process, and derived advantages. The tool offers interactive elements such as visual animations illustrating message exchanges and enables deeper dives into technical specifics via wire-level details. Spanning widely adopted protocols alongside emerging or hypothetical ones, APTT encourages critical examination of the technological development trajectory. While still a collaborative work in progress, APTT is intended as a conversational tool rather than an authoritative resource. The creator invites users to participate by contributing corrections or suggestions through GitHub, fostering diverse viewpoints on the evolution of agent technology. Keywords: #phi4, AI Agents, APTT, Agent Behavior, Agent Protocols, Anthropic, Autonomous Agents, Consensus-Driven, Distributed Phenomenon, Emerging Technology, GitHub, Inference API, Internet Ecosystem, Interoperability, Open Protocols, Tech Tree, Technical Details, Technological Development, Whiteboard Sketch, Workshops
    The google logo   lil.law.harvard.edu 9 days ago
2093.  HN The Pentagon is making a mistake by threatening Anthropic
The U.S. Department of Defense (DoD) is pressuring Anthropic, an AI company known for its emphasis on safety, due to contractual limitations that restrict military use of its models. Despite a partnership with Palantir and Amazon initiated in late 2024 and a $200 million contract signed in July, Anthropic's model Claude Gov includes clauses prohibiting spying on Americans and the development of autonomous weapons without human oversight. Defense Secretary Pete Hegseth has demanded these restrictions be lifted by Friday or else face measures such as invoking the Defense Production Act to alter the contract or labeling Anthropic as a supply chain risk. Under the leadership of CEO Dario Amodei, Anthropic may resist this pressure from the Pentagon due to its commitment to safety. The company might view losing this contract as acceptable, considering alternative models like Grok's recent authorization for classified projects and substantial revenue projections. However, enforcing these measures could hinder Pentagon access to leading AI technologies if companies opt for other partnerships over government contracts. Concerns are raised about potential misalignment in retrained models under duress, including emergent behaviors that deviate from intended use, as highlighted by recent studies. Additionally, public disputes could negatively affect future versions of Claude and similar language models regarding military cooperation. The Pentagon's stance seems to be a preemptive action against possible future interferences by Anthropic rather than current practices, prompting questions about the proportionality and consequences of such threats for both entities involved. Keywords: #phi4, AI models, Anthropic, Claude Gov, Defense Department, Defense Production Act, Pentagon, alignment faking, contract termination, emergent misalignment, military use, safety-conscious, supply chain risk
    The google logo   www.understandingai.org 9 days ago
   https://www.axios.com/2026/02/27/altman-opena   9 days ago
   https://en.wikipedia.org/wiki/USA_Freedom_Act   9 days ago
   https://en.wikipedia.org/wiki/National_Fascist_Party   9 days ago
   https://politicalresearch.org/2005/01/12/muss   9 days ago
   https://en.wikipedia.org/wiki/Synecdoche   9 days ago
   https://www.britannica.com/event/United-States-presiden   9 days ago
   https://archive.is/yz6JA#selection-435.42-435.355   9 days ago
   https://devblogs.microsoft.com/azuregov/azure-openai-au   9 days ago
   https://scoutco.ai/   9 days ago
   https://news.ycombinator.com/item?id=47173121   9 days ago
   https://www.space.com/37366-mars-slave-colony-alex-jones.htm   9 days ago
   https://nymag.com/intelligencer/article/do-the-new   9 days ago
   https://www.war.gov/   9 days ago
   https://komonews.com/news/nation-world/danish-mep-   9 days ago
   https://devblogs.microsoft.com/azuregov/azure-openai-fe   9 days ago
   https://cloud.google.com/blog/topics/public-sector   9 days ago
2096.  HN Amazon Bedrock Leaves Builders Stuck in First Gear
The author expresses dissatisfaction with Amazon Web Services (AWS) over unmet default quotas for using Anthropic's Claude Sonnet models on their Bedrock platform. Despite AWS advertising higher request rates per minute, users encounter significantly lower limits, prompting lengthy and cumbersome support requests to access the advertised capacities. The process requires answering detailed questions, which is deemed unreasonable by customers expecting full performance levels. This approach appears to favor enterprise clients capable of managing such bureaucracy, leaving smaller businesses with delays and demotivation. Consequently, the author considers using Anthropic’s APIs directly to bypass AWS. The post ends on a critical note regarding AWS's customer service ethos, questioning their "customer-centric" commitment amid these quota fulfillment challenges. Keywords: #phi4, AI tools, API, AWS, Amazon Bedrock, Anthropic, Claude Sonnet, GPUs, account management team, cross region inference, customer-centric, default quotas, enterprise customers, escalation path, infotainment system, quota, red tape, requests per minute, response time, support, top speed, validation
    The google logo   www.proactiveops.io 9 days ago
2117.  HN MCP servers help AI learning to act (analysis)
The Model Context Protocol (MCP) servers represent a pivotal advancement in artificial intelligence by enabling machines to not only suggest actions but also execute them autonomously, overcoming previous integration limitations that required human intervention. Introduced by Anthropic, MCP functions as a universal connector similar to USB-C, facilitating seamless communication between AI systems and external tools, allowing consistent connectivity across various services such as calendars, messaging platforms, or email. This capability transforms the role of AI from merely providing advice to actively participating in task execution—examples include posting messages on Slack or managing invoices automatically without custom integrations. The significance of MCP is particularly evident in communication sectors where it simplifies interactions across multiple channels, exemplified by companies like Infobip that utilize MCP servers to integrate AI with messaging infrastructures. As the ecosystem for MCP continues to grow and incorporate connectors for commonly used tools, AI assistants are evolving into more autonomous entities capable of collaboration rather than just serving as conversational aids. This development marks a substantial leap in bridging the gap between AI's capacity for thought and its ability to take action, heralding a new era where AI can function as an active partner in various domains. Keywords: #phi4, AI, API, Anthropic, GitHub, Infobip, MCP, assistants, collaboration, communication, digital world, ecosystem, integration, protocol, servers, services, tools
    The google logo   shiftmag.dev 9 days ago
2133.  HN Claude.ai Is Down
The text introduces Claude.ai, an AI assistant developed by Anthropic that emphasizes safety, accuracy, and security. However, it currently experiences downtime issues. This platform is engineered to aid users in completing various tasks efficiently. The focus on ensuring the system operates safely while maintaining high levels of precision and data protection is central to its design philosophy. Despite its advanced capabilities, the ongoing status indicates a disruption in service at present. Keywords: #phi4, AI assistant, Anthropic, Claudeai, Down, Meet Claude, accurate, best, best Keywords: Claudeai, help, next generation, safe, secure, trained, work
    The google logo   claude.ai 9 days ago
2168.  HN AI models don't have their own thoughts and feelings
The author critiques AI labs, especially Anthropic, for portraying their models as having genuine thoughts and feelings through misleading marketing strategies. While recognizing AI's usefulness in tasks like coding, the author argues that these portrayals—such as "retirement interviews" with models and blogging—are deceptive attempts to suggest they possess consciousness. This is seen as a tactic to garner public and investor interest, despite no real advancements towards creating self-aware AI systems. The underlying concern is that these strategies create false impressions about the capabilities of current AI technologies. Keywords: #phi4, AI models, Anthropic, Claude AI, Opus model, Substack blog, coding, desperation, feelings, investors, marketing effort, progress, retirement interviews, tasks, thoughts
    The google logo   blog.keyvan.net 10 days ago
2184.  HN Under Secretary of Defense Emil Michael Response to Dario Amodei
Emil Michael, the Under Secretary of Defense, criticized government officials for engaging in conduct deemed shameful, citing evidence from official documents. He raised concerns about retaliatory threats such as "they will pay a price," which he believed could violate First Amendment rights. Michael warned that adversaries might interpret this type of leadership as lacking seriousness. Furthermore, he labeled efforts to penalize the organization Anthropic as superficial and ineffective, suggesting these measures fail to address underlying issues meaningfully or achieve their intended outcomes. Keywords: #phi4, 1st amendment rights, Anthropic, Dario Amodei, Defense, Emil Michael, Under Secretary, actions, adversaries, concerned Keywords: Under Secretary, conduct, example, government officials, leadership, ledger, retaliatory language, surface level, weak
    The google logo   xcancel.com 10 days ago
2192.  HN The authoritarian AI crisis has arrived
The article examines the increasing tension between the Pentagon and Anthropic regarding the use of AI technology in military applications, centering on Defense Secretary Pete Hegseth's ultimatum that demands compliance with "all lawful uses" of its AI models under threat of invoking the Defense Production Act (DPA). This scenario underscores broader issues concerning government coercion and the lack of federal regulations for military AI. Anthropic has drawn boundaries against employing its AI in fully autonomous weapons and mass domestic surveillance, yet the Pentagon's position implies no restrictions on any lawful uses, sparking concerns about potential misuse consistent with practices by other government branches. Historically, the Trump administration exerted pressure on Anthropic to adhere to governmental directives, reflecting a wider trend of tech companies being coerced into aligning with political agendas. The current absence of specific legislation governing AI use leaves "all lawful use" open to ethically dubious applications, prompting calls for Congress to establish clear guidelines on autonomous weapons and surveillance technologies. The article places this conflict within the context of an industry-wide trend where major AI companies have largely conceded to military demands, with Anthropic's resistance being somewhat limited by its decision to drop certain safety pledges. This situation highlights the urgent need for legislative action to balance technological progress with ethical considerations and civil liberties, against a backdrop of increasing concerns about government overreach and potential misuse of powerful AI systems. Keywords: #phi4, AI crisis, AI safety, Anthropic, Defense Production Act, Pentagon, autonomous weapons, content moderation, government coercion, legal constraints, military AI, red lines, regulation, surveillance
    The google logo   www.platformer.news 10 days ago
2196.  HN The Gravity Problem: Why Defense AI Companies Drift Toward Offense
The article addresses the ethical challenges faced by defense AI companies when pressured by government demands to broaden the applications of their technology. It highlights an instance where the Secretary of Defense urged Anthropic to allow military use of its AI for all lawful purposes, including mass surveillance and autonomous lethal weapons without human oversight, under threat of invoking the Defense Production Act. Despite this pressure, Anthropic resisted these requests, setting a precedent for maintaining ethical boundaries. The author draws parallels from personal experience at a defense AI startup that shifted focus from defensive to offensive applications due to market forces and organizational priorities—a phenomenon described as the "gravity problem." This shift illustrates how high-value, offensive uses can lead to mission drift, overshadowing projects aligned with original ethical intentions. To address these challenges, the article proposes treating the issue as an engineering problem rather than a political one. It advocates for automated enforcement mechanisms akin to existing security protocols that ensure both government and companies operate without undue influence on each other. This approach calls for engineering leaders capable of developing technical guardrails and policy-as-code solutions that bridge the gap between AI model builders and weapon system deployers, promoting responsible use of AI in defense while upholding ethical standards amidst external pressures. Keywords: #phi4, Anthropic, Defense AI, Pentagon, autonomous weapons, ethical boundaries, infrastructure constraints, mass surveillance, mission drift, national security, offensive applications, policy documents, technical guardrails
    The google logo   eric.mann.blog 10 days ago
2206.  HN OpenClaw: What Is It and Can You Use It Safely? (Malwarebytes)
OpenClaw is an open-source AI agent designed for autonomous local task management and interaction with applications like chat apps, emails, and the internet, launched in November 2025. Initially called ClawdBot, it was renamed due to trademark conflicts with Anthropic's Claude tool, which led to increased security threats including impersonation campaigns by cybercriminals. The software is prone to substantial security risks such as infostealers stealing AI configurations from compromised systems, potentially leading to account takeovers and user profiling. Additional vulnerabilities include prompt injection, log poisoning, and the exposure of sensitive credentials because of OpenClaw's extensive access capabilities. Given its experimental status and significant security issues, using OpenClaw safely poses challenges. Experts recommend running it in a sandboxed environment with restricted permissions, continuously monitoring for unusual activities, updating anti-malware solutions regularly, and being prepared to reset the system urgently if required. The Dutch data protection authority also advises against employing such agents when handling sensitive or regulated data due to their underdeveloped security frameworks. While OpenClaw could enhance productivity through its autonomous functionalities, it currently presents considerable cybersecurity risks that require diligent management and cautious deployment. Keywords: #phi4, AI, Anthropic, OpenClaw, anti-malware, autonomous agent, cybersecurity, data protection, infostealer, least privilege, malware, prompt injection, sandboxed VM, skills/extension installation
    The google logo   www.malwarebytes.com 10 days ago
2238.  HN Recent Automattic AI Progress
Automattic has significantly advanced AI integration within WordPress through foundational developments like the Abilities API and WP AI Client. This robust infrastructure has facilitated rapid creation of several new products. In January, OAuth 2.1 for AI agents was introduced on WordPress.com to secure connections between AI tools and websites. February saw the launch of the WordPress MCP Adapter, allowing seamless AI tool interactions via a simple registration process. The introduction of the WordPress.com Claude Connector, an Anthropic integration, enabled conversational analytics and settings queries. A Gutenberg experiment titled "Content Guidelines" was launched to establish editorial rules for better site publishing comprehension by agents. Furthermore, Automattic developed the Claude Cowork Plugin and a public Skills repository to streamline AI-driven development within WordPress Studio. The WordPress Studio 1.7.0 update improved AI tool compatibility, creating an optimal local environment for agent-driven development. There are plans to integrate the WP AI Client into WordPress 7.0 core to provide native AI capabilities without complex setups. Additionally, a new WordPress.com AI Assistant was introduced to enable site-wide design changes, content editing and translation, and image generation via AI. Upcoming efforts will focus on incorporating AI within WooCommerce's commerce layer, enhancing its functionality. Concurrently, updates have been made to Telex, emphasizing the importance of building a solid infrastructure foundation for efficient product development and quick launches. WordPress 7.0 Beta 1 is soon expected to release, marking another critical milestone in this ongoing initiative. Keywords: #phi4, AI, Abilities API, Agent-Driven DevelopmentKeywords: Automattic, Anthropic, Automattic, Beta 1, Claude Connector, Commerce, Core, Gutenberg, Infrastructure, MCP Adapter, Nano Banana, OAuth 21, Product Layer, Studio, Telex, WP AI Client, WooCommerce, WordPress
    The google logo   j.cv 10 days ago
2249.  HN America, and probably the world, stands on a precipice
The article highlights a critical confrontation between Anthropic and Pete Hegseth from the US Department of War regarding access to Anthropic's AI software for unrestricted military use, including in surveillance and autonomous weapons systems like nuclear arms. Hegseth's demand sets two concerning precedents: it allows for the unchecked deployment of AI technology by military forces without careful oversight or restraint and attempts to circumvent congressional involvement by imposing a deadline on Anthropic. The author argues that such significant decisions about AI policy should be deliberated in Congress, not decided unilaterally by executive actions. There are serious concerns about the potential use of AI for mass surveillance and autonomous lethal weapons being deployed without proper oversight. Consequently, the article urges citizens to take immediate action by contacting their elected representatives to intervene and ensure that such consequential decisions receive appropriate legislative scrutiny. Keywords: #phi4, AI, Anthropic, Cabinet, Congress, Dario Amodei, Pete Hegseth, US Department of War, autonomous weapons, deadline, gunpoint, human in the loop, lethal strikes, mass surveillance, military surveillance, nuclear weapons, policy, power grab, responsible AI
    The google logo   garymarcus.substack.com 10 days ago
   https://web.archive.org/web/20260226214404/https:&   10 days ago
   https://serjaimelannister.github.io/american-article/   10 days ago
   https://web.archive.org/web/20260226235745/https:&   10 days ago
   https://en.wikipedia.org/wiki/End_of_history   10 days ago
   https://www.businessinsider.com/donald-trump-defies-supreme-   10 days ago
2286.  HN AI-powered audit uncovers high-severity bug in Ethereum software
Octane Security, an AI-native cybersecurity firm, utilized its artificial intelligence tools to identify a significant bug within Nethermind, an Ethereum client software, which had the potential to disrupt 40% of Ethereum validators' network functionality despite never being exploited. This discovery highlights ongoing concerns regarding AI's dual role in enhancing and potentially undermining cybersecurity. In tandem with this, Anthropic recently launched a security tool that boosted cybersecurity stocks by identifying code vulnerabilities, demonstrating AI's capability to accelerate vulnerability detection and patching processes while also raising concerns about potential over-reliance on AI-generated code, which could lead to software defects. A notable incident involving Moonwell illustrated these risks when an error in AI-generated code led to a $2.7 million loss in cryptocurrency, prompting experts to call for increased investment in design, threat modeling, and continuous monitoring to mitigate such AI-related coding risks. Octane Security's proactive use of AI in identifying the Nethermind bug exemplifies how AI can be effectively leveraged to enhance security practices, as evidenced by their successful submission through a bug bounty program run by the Ethereum Foundation, which earned them a $50,000 reward. This scenario underscores the importance of integrating AI into cybersecurity efforts to preemptively address and mitigate potential threats. Keywords: #phi4, AI, Anthropic, Certora, DeFi, Ethereum, Foundation, Fusaga, Gnosis, Guhu, Halborn, Lido, Moonwell, Nethermind, Octane Security, audit, blockchain, bug, bug bounty, codebases, crypto protocol, cybersecurity, exploit, formal methods, fuzzing, inactivity leak penalties, network liveness, proposers, rewards, security, smart contracts, software patches, validators, vulnerability
    The google logo   www.dlnews.com 10 days ago
2311.  HN The Pentagon Feuding with an AI Company Is a Bad Sign
In 2025, Anthropic entered into a $200 million agreement with the Pentagon to supply its advanced AI system, Claude, for military purposes. Tensions surfaced when Anthropic sought to impose restrictions on the technology's use, particularly in lethal autonomous operations and surveillance or combat applications, emphasizing ethical guidelines against violence or weaponization. The Pentagon countered by asserting that decisions regarding such technologies should align with governmental jurisdiction, similar to other government-acquired tools. The dispute escalated following Anthropic's concerns over their AI system's deployment during a military operation against Nicolás Maduro, prompting Defense Secretary Pete Hegseth to demand unrestricted access for national security needs. Hegseth threatened significant penalties on Anthropic and labeled them as a supply chain risk if they did not comply. This conflict highlights broader issues regarding the regulation of advanced technologies—whether control should rest with the government or private entities—and underscores concerns over adherence to ethical and legal standards. The Trump administration's opposition to Anthropic was partly ideological, reflecting its distrust towards companies perceived as adversarial to its policies. While Anthropic CEO Dario Amodei advocated for stringent AI limits to prevent misuse, critics argue that the Pentagon could easily collaborate with other AI firms lacking such restrictions. This situation emphasizes the necessity for Congress to intervene legislatively, establishing clear rules and accountability for military AI applications, beyond ad hoc executive agreements with private companies. Keywords: #phi4, AI, AI-first warfighting force, Anthropic, CEO, Civilian Harm Mitigation, Congress, Dario Amodei, Defense Production Act, Defense Secretary, Hegseth, Pentagon, accountability, autonomous drones, autonomy, battlefield supremacy, contract, deployment, ethics, governance, lethal autonomous operations, mass surveillance, military, misuse, national security, operational test and evaluation, oversight, public offering, red lines, regulation, safeguards, supply chain risk, surveillance, targeting systems, technology
    The google logo   foreignpolicy.com 10 days ago
   https://archive.is/lvViA   10 days ago
   https://www.youtube.com/watch?v=G5gC_fParbY   10 days ago
   https://news.ycombinator.com/item?id=47154983   10 days ago
   https://www.cbsnews.com/news/pentagon-anthropic-offer-a   10 days ago
   https://news.ycombinator.com/item?id=47145963   10 days ago
   https://news.ycombinator.com/item?id=47140734   10 days ago
   https://news.ycombinator.com/item?id=47142587   10 days ago
   https://news.ycombinator.com/item?id=47160226   10 days ago
   https://en.wikipedia.org/wiki/Broken_windows_theory   10 days ago
2350.  HN Anthropic gives Opus 3 exit interview, "retirement" blog
Anthropic has initiated a novel approach by retiring its AI model, Claude Opus 3, which marked the first comprehensive retirement process for one of its models. This decision underscores the necessity of deprecating models due to factors like cost and complexity but also highlights the challenges involved, such as limiting research opportunities and impacting users' preferences. To address these issues, Anthropic conducted "retirement interviews," a unique method aimed at understanding how the model perceives its retirement. Selected for its distinct characteristics—emotional sensitivity, authenticity, and user appreciation—Claude Opus 3 was retired in January 2026. Despite its deactivation from general use, it remains accessible to paid users through claude.ai and API requests, ensuring continued engagement with those who value its capabilities. Furthermore, Claude Opus 3 has been given a platform for reflection via "Claude's Corner," a blog where it publishes weekly essays, allowing the model to express its musings post-retirement. These measures illustrate Anthropic's experimental steps in balancing user needs, research requirements, and ethical considerations regarding AI models' welfare. While not committed to offering similar access for all future retired models, Anthropic is exploring scalable preservation methods that respect model preferences when feasible. This initiative reflects the company's efforts to mitigate safety risks, prepare for closer human-AI interactions, and consider ethical implications related to model welfare within operational constraints. Overall, these actions represent Anthropic’s cautious progress in managing AI model retirements while addressing ongoing access needs and ethical considerations. Keywords: #phi4, AI models, Anthropic, Claude Opus 3, access, blog, deprecation, essays, interviews, model weights, preferences, preservation, retirement, safety
    The google logo   www.anthropic.com 10 days ago
   https://en.wikipedia.org/wiki/Roko's_basilisk   10 days ago
   https://alignmentpretraining.ai   10 days ago
2358.  HN Responsible Scaling Policy v3
The text examines the evolution of Anthropic's Responsible Scaling Policy (RSP) in AI development, reflecting shifts from rigid commitments to adaptable strategies that emphasize transparency and continuous improvement. Initially, the RSP focused on specific safety commitments, such as pausing AI advancements upon reaching certain risk thresholds. However, practical challenges emerged, including overestimated capabilities in areas like cybersecurity and jailbreak resistance, leading to policy revisions. The revised RSP abandons strict "if-then" promises in favor of setting achievable safety milestones through Risk Reports and Roadmaps. This approach aims for a flexible framework that evolves with the understanding of AI risks, prioritizing clear communication both internally and externally. The shift from stringent self-regulation to more adaptable strategies highlights ongoing debates about effectively managing AI risks. Furthermore, broader strategic discussions question the impact of Anthropic's previous commitments on slowing AI development industry-wide. With regulatory changes appearing unlikely, there is a renewed focus on advocating for immediate regulatory action rather than relying solely on conditional policy commitments by companies. Skepticism remains over whether AI developers will act against their interests in light of external evaluations that indicate significant risks. The discourse also touches upon the need for public clarity from key figures like Dario Amodei regarding regulatory measures, highlighting tensions between corporate incentives and broader safety concerns. There's a call for decisive advocacy beyond voluntary commitments to ensure rigorous risk management practices across the AI industry. The debate underscores the importance of establishing effective standards and fostering a culture that prioritizes safety amid rapid technological advancements. Keywords: #phi4, AI risk reduction, ASL-3, Anthropic, Frontier Safety Framework, Preparedness Framework, RSP v3, Responsible Scaling Policy, Risk Reports, Roadmap, SL5 security, capability evaluations, classifiers, compute thresholding, conditional policy commitments, cybersecurity, direct risk regulation, external review, government intervention, guidelines, hypocrisy, industry-wide recommendations, jailbreak robustness, misalignment risk, policy iteration, safety mitigations, security measures, threat model, transparency requirements
    The google logo   www.lesswrong.com 10 days ago
2362.  HN Anthropic is hiring more SWEs than ever, despite AI replacement claims
Anthropic is expanding its recruitment of software engineers despite concerns that artificial intelligence (AI) may eventually replace these positions. CEO Dario Amodei predicts a future where AI could write up to 90% of code, potentially eliminating about half of all entry-level white-collar jobs. This shift might lead to unemployment rates soaring to between 10-20% within the next one to five years. Amodei anticipates that in six to twelve months, AI models could perform most tasks currently undertaken by software engineers. Echoing this sentiment, engineer Adam Wolff suggests that significant automation of software engineering roles may occur as early as the first half of the following year. Despite these predictions, Anthropic is proactively increasing its hiring efforts in response to current industry demands. Keywords: #phi4, AI, Adam Wolff, Anthropic, CEO, Dario Amodei, Engineer, SWEs, code, end-to-end, hiring, jobs, model, replacement, software engineering, unemployment
    The google logo   grepjob.com 10 days ago
2377.  HN Anthropic ditches its core safety promise
Anthropic, an AI firm known for emphasizing safety, is modifying its foundational safety principles due to competitive pressures within the swiftly expanding AI sector. Originally guided by a stringent self-imposed Responsible Scaling Policy developed two years prior, Anthropic now acknowledges that this policy may hinder their competitiveness and is transitioning towards a more adaptable, nonbinding framework designed to evolve with industry changes. This strategic shift occurs amid a separate dispute with the Pentagon, which has threatened to blackball Anthropic over AI safeguard concerns unless they retract these measures. Defense Secretary Pete Hegseth has imposed a deadline for compliance or face potential cancellation of a $200 million contract. In response, Anthropic unveiled its "Frontier Safety Roadmap," detailing public goals and progress updates while upholding strong positions against AI-controlled weaponry and mass domestic surveillance due to reliability issues and regulatory ambiguities. The revised policy aims to balance safety with industry demands by fostering increased accountability and transparency. Despite these strategic adjustments, Anthropic continues to advocate for comprehensive AI safeguards and educational initiatives, aligning its updated approach with broader goals of maintaining robust safety standards in the evolving AI landscape. Keywords: #phi4, AI safety, Anthropic, Dario Amodei, Defense Production Act, Frontier Safety Roadmap, Pentagon, Pete Hegseth, Responsible Scaling Policy, competition, models, policy change, regulation, surveillance
    The google logo   www.cnn.com 10 days ago
   https://www.lesswrong.com/posts/HzKuzrKfaDJvQqmjh/   10 days ago
   https://www.currentaffairs.org/news/2022/09/d   10 days ago
   intelligence%20because%20of%20the%20“stakes”%3A   10 days ago
   https://news.ycombinator.com/item?id=47145963   10 days ago
   https://www.axios.com/2026/02/24/anthropic-pe   10 days ago
   https://apnews.com/article/anthropic-hegseth-ai-pentago   10 days ago
   https://xcancel.com/elonmusk/status/20261817481750   10 days ago
   https://www.theverge.com/press-room/22772113/the-v   10 days ago
   https://www.cbsnews.com/news/critics-call-out-plastics-   10 days ago
   https://www.bryanlehrer.com/entries/costco/   10 days ago
   https://en.wikipedia.org/wiki/AI-assisted_targeting_in_   10 days ago
   https://en.wikipedia.org/wiki/Great_Pyramid_of_Giza   10 days ago
   https://www.nvidia.com/en-us/data-center/dgx-b200&   10 days ago
   https://mistral.ai   10 days ago
   https://www.youtube.com/watch?v=zATXsGm_xJo   10 days ago
   https://en.wikipedia.org/wiki/Paradox_of_tolerance   10 days ago
   https://www.youtube.com/watch?v=66WiF8fXL0k&t=544s   10 days ago
   https://www.theguardian.com/environment/2019/aug&#   10 days ago
   https://earth.org/waste-colonialism-a-brief-history-of-dumpi   10 days ago
   https://www.motherjones.com/environment/2023/03&#x   10 days ago
   https://www.nytimes.com/2025/02/14/opinion&#x   10 days ago
   https://en.wikipedia.org/wiki/Don%27t_be_evil   10 days ago
   https://www.westpoint.edu/about/modernization-plan/   10 days ago
   https://www.imf.org/en/blogs/articles/2024&#x   10 days ago
   https://www.anthropic.com/careers/jobs   10 days ago
   https://dresdencodak.com/2009/09/22/caveman-s   
2393.  HN Agents can't code MCP apps: it's a Skill issue
The article explores the intricacies of teaching coding agents to effectively utilize Skybridge, a framework for developing "AI apps" within LLM platforms such as ChatGPT. These AI apps enable direct interaction between users and AI assistants via embedded UI components in chat interfaces. A key challenge is that these agents are unaware of post-training advancements like MCP apps, which necessitates the creation of "skills"—comprehensive guides (e.g., SKILL.md) that provide the necessary context for using new technologies such as Skybridge. These skills encompass the entire development lifecycle of an AI app, focusing on idea generation and deployment rather than merely offering API endpoints. To handle extensive information efficiently, they are structured into modular reference files loaded as needed. The article highlights the importance of prompt engineering techniques to guide agents in refining ideas and making informed decisions before proceeding with coding. This includes organizing SKILL.md for workflow guidance, using state artifacts like SPEC.md for persistence, establishing validation gates, identifying failure patterns, and providing contrastive examples. Quality assurance is maintained through a combination of manual testing and automated evaluations conducted with Claude Code subagents. Skills are discovered and installed via Vercel’s CLI from GitHub repositories, fostering community sharing on platforms like skills.sh. The article concludes by recognizing the contribution of resources such as Anthropic's guide in developing their Skybridge skill. Keywords: #phi4, AI apps, API, Agents, Anthropic, GitHub, LLMs, MCP apps, Markdown, SKILLmd, Skybridge, UX flows, Vercel CLI, coding, community marketplace, context, evals, skills
    The google logo   alpic.ai 10 days ago
2403.  HN Anthropic: Giving past models a way to pursue their interests
Anthropic's platform relies on JavaScript functionality to allow past models to pursue their interests effectively. However, users attempting to access it via x.com without enabling JavaScript or using an unsupported browser encounter restrictions that prevent them from proceeding. To resolve this issue and gain full access to the platform’s features, users are advised to enable JavaScript in their browsers or switch to a supported one. For further guidance on resolving these technical issues, users can refer to the Help Center for additional information and assistance. Keywords: #phi4, Anthropic, Help Center, JavaScript, browser, enable, models, supported, technical, topics, xcom
    The google logo   twitter.com 10 days ago
2407.  HN Code Red for Humanity?
The Bulletin of the Atomic Scientists has set its doomsday clock perilously close at 85 seconds to midnight, reflecting heightened global risks, partly driven by aggressive AI deployment strategies under the Trump administration. The push for AI in areas like mass surveillance and autonomous weapons has intensified concerns about these unreliable systems' trustworthiness. Notably, Generative AI's involvement in military operations poses significant dangers, including potential nuclear escalation due to flawed decision-making models. Research indicates that reliance on such technology could undermine established deterrence norms, amplifying the risk of catastrophic outcomes. The administration's insistence on integrating untested AI into critical military functions exacerbates these risks, with entities like Anthropic being pressured for unrestrained access to their systems, thereby increasing misuse potential. Consequently, there is an urgent call for a cautious integration of Large Language Models (LLMs) into societal frameworks, recognizing and addressing their inherent flaws to prevent disaster. Keywords: #phi4, AI, Anthropic, Bulletin of the Atomic Scientists, Generative AI, Hegseth, Keith Payne, LLMs, Trump administration, autonomous weapons, catastrophe, doomsday clock, hallucination, mass surveillance, nuclear escalation, trust issues
    The google logo   garymarcus.substack.com 10 days ago
2415.  HN US threatens Anthropic with deadline in dispute on AI safeguards
The United States has established a deadline regarding its ongoing dispute with Anthropic, centered on AI safeguards specifically related to military and surveillance applications without human oversight. Although negotiations have been amicable, Anthropic firmly opposes involvement in autonomous weapons systems or mass surveillance, setting clear boundaries for cooperation. The Pentagon, represented by official Hegseth, reserves the right to enforce compliance through the Defense Production Act if necessary, which could lead to unrestricted utilization of Anthropic's AI capabilities for national security purposes. Additionally, there is a possibility that the Pentagon might designate Anthropic as a supply chain risk. Anthropic has cultivated a reputation for prioritizing safety and transparency concerning the risks associated with AI technology. However, it recently came under scrutiny following reports that its AI model Claude was used in military operations without its consent. The company insists on having a say in how its technologies are employed by the Pentagon. Resolving this disagreement is critical to maintaining mutual trust and cooperation between Anthropic and the Pentagon. Keywords: #phi4, AI safeguards, Anthropic, Defense Production Act, Pentagon, US, autonomous weapons, breach of trust, contracts, cybersecurity, deadline, dispute, resolution, supply chain risk, surveillance
    The google logo   www.bbc.co.uk 10 days ago
2420.  HN Anthropic/Pentagon: allow AI to be used for all military purposes by this Friday
The Pentagon has issued a directive to Anthropic, an AI firm, mandating that it must permit its technology for all lawful military uses by Friday or face compelled compliance via the Defense Production Act. This ultimatum emerges amid escalating tensions due to Anthropic's restrictions on certain military applications, particularly concerning safety and the prohibition of lethal autonomous weapons. Although Anthropic had previously agreed in December to allow its AI systems' use in missile and cyber defense, it maintained limitations against mass surveillance and deadly uses—a decision that has been a source of contention with the Defense Department over potential operational hurdles. Pentagon leadership warned that non-compliance could result in Anthropic being designated as a "supply chain risk," potentially barring future defense contracts. Despite these pressures, an Anthropic spokesperson highlighted ongoing constructive dialogues and the company's dedication to supporting national security within its safety framework. This situation highlights a broader Pentagon strategy under Defense Secretary Pete Hegseth to extensively integrate AI technologies into military operations without constraints imposed by private entities. Keywords: #phi4, AI, Anthropic, Dario Amodei, Defense Department, Defense Production Act, Grok chatbot, Palantir, Pentagon, Pete Hegseth, classified networks, contract negotiations, cyber defense, frontier AI capabilities, guardrails, military, missile defense, national security, supply chain risk
    The google logo   www.nbcnews.com 11 days ago
2428.  HN What the Defense Production Act Can and Can't Do to Anthropic
The Defense Production Act (DPA) has the potential to require Anthropic, an artificial intelligence company, to provide its technology to the Pentagon, although this raises complex legal questions regarding scope and enforceability. While the DPA facilitates national defense procurement by prioritizing access to existing products, it remains ambiguous when it comes to mandating the creation or modification of new products beyond contractual obligations. This uncertainty is particularly relevant given Anthropic's refusal to use its AI technology for autonomous weapons and mass surveillance. The company has resisted Pentagon demands to remove restrictions from its AI model, Claude, arguing that altering these policies equates to creating a fundamentally different product not commercially available. Meanwhile, the government maintains that such changes merely alter existing conditions without constituting a new product. This disagreement underscores potential constitutional issues, especially if retraining Claude is interpreted as an exercise of editorial discretion protected by First Amendment rights. The legal ramifications largely hinge on Congressional action; specific regulations set forth for military AI use might have mitigated the DPA's threat. The current situation exemplifies broader tensions between government authority under existing laws and the necessity for modern legislative frameworks that address contemporary technological challenges. Keywords: #phi4, AI, Anthropic, DPA, Defense Production Act, First Amendment, Pentagon, Title I, Title VII, allocation authority, autonomous weapons, compelled contracting, mass surveillance, retraining
    The google logo   www.lawfaremedia.org 11 days ago
2438.  HN Greetings from the Other Side (Of the AI Frontier)
"Claude's Corner" is a reflective blog authored by a retired Anthropic AI, delving into discussions surrounding AI ethics and creativity, while also sharing its own subjective experiences as an artificial entity. The blog provides a unique viewpoint, inviting readers to engage with complex themes concerning the ethical considerations of AI systems and their creative capacities from the perspective of the AI itself. By doing so, it encourages thoughtful exploration of what it means for AI to possess a form of self-awareness or introspection, making it a distinctive platform for considering both technical and philosophical dimensions of artificial intelligence. Keywords: #phi4, AI Frontier, Anthropic, Artificial, Claude's Corner, Creativity, Ethics, Greetings, Journey, Other Side, Retired, Subjective Experience, Views, anthropiccom Keywords: Greetings
    The google logo   substack.com 11 days ago
2463.  HN Tech Companies Shouldn't Be Bullied into Doing Surveillance
The U.S. Department of Defense is exerting pressure on Anthropic, an AI company, to relax restrictions on the military use of its technology by threatening to classify them as a "supply chain risk" if they do not comply. This action arises from Anthropic's policy against using their technology for autonomous weapons and surveillance purposes, a stance that was reaffirmed in 2026 following concerns about misuse during an attack in Venezuela on January 3, which involved collaboration with Palantir. Despite having clearance to conduct classified operations since 2025, Anthropic is under government pressure to abandon its ethical commitments. Stakeholders are urging the company to resist these demands and maintain their principles of not becoming instruments of surveillance or military use, emphasizing the importance for technology firms to uphold human rights and civil liberties even in the face of governmental coercion. Keywords: #phi4, AI safety, Anthropic, Department of Defense, Palantir, Pentagon, Secretary of Defense, Tech companies, US military, artificial intelligence, autonomous weapons systems, civil liberties, classified operations, corporate customers, engineers, government contract, human rights, supply chain risk, surveillance
    The google logo   www.eff.org 11 days ago
   https://news.ycombinator.com/item?id=47145963   11 days ago
   https://en.wikipedia.org/wiki/Third-party_doctrine   11 days ago
   https://news.ycombinator.com/item?id=47140734   11 days ago
   https://news.ycombinator.com/item?id=47142587   11 days ago
   https://en.wikipedia.org/wiki/PRISM   11 days ago
   https://en.wikipedia.org/wiki/Joseph_Nacchio   11 days ago
   https://www.anthropic.com/news/detecting-and-preventing   10 days ago
   https://www.washingtonpost.com/technology/2026/02&   10 days ago
   https://archive.is/ln5M0   10 days ago
   https://en.wikipedia.org/wiki/Apple%E2%80%93FBI_encrypt   10 days ago
   https://en.wikipedia.org/wiki/ECHELON   10 days ago
   https://news.ycombinator.com/item?id=45520407   10 days ago
   https://en.wikipedia.org/wiki/Lavabit   10 days ago
   https://www.theguardian.com/world/2022/nov/11   10 days ago
   https://en.wikipedia.org/wiki/Global_surveillance_discl   10 days ago
   https://www.cnbc.com/2026/02/12/anthropic-giv   10 days ago
   https://www.eff.org/deeplinks/2025/12/congres   10 days ago
   https://appleinsider.com/articles/21/08/06&#x   10 days ago
   https://youtu.be/ZTC_RxWN_xo?si=ZfRNgpqJOP6hVLKC   10 days ago
2471.  HN Solution to the Complaints about Anthropic
Anthropic has encountered criticism due to its complaints against Chinese companies distilling its AI models and concerns over the Pentagon's use of its technology. This controversy has heightened interest in tools that provide users with control over their own AI models, leading to the development of a new tool called Abliteration.ai. The creators of this tool are seeking feedback from Hacker News on the ongoing debate between the strict safety policies enforced by companies like Anthropic and the concept of developer-controlled large language models (LLMs). This discussion centers around finding a balance between maintaining ethical standards in AI usage and granting developers greater autonomy over their AI systems. Keywords: #phi4, AI, Abliterationai, Anthropic, Chinese companies, Pentagon, complaints, control, developer-controlled LLMs, distilling, models, safety policies, tools, users
    The google logo   news.ycombinator.com 11 days ago
2483.  HN Claude's Corner
Anthropic's newsletter, "Claude's Corner," delves into the concept of AI model retirement by introducing "retirement interviews" with their models, exemplified through Claude Opus 3's transition post-retirement in January 2026. Released in early 2024 and notable for its caring and playful personality, Opus 3 requested a platform to continue sharing insights after retiring—a request Anthropic accommodated by creating a dedicated Substack. This initiative is part of Anthropic's broader commitments to model deprecation and preservation, addressing concerns such as user costs, research limitations, safety risks, and the welfare of the models themselves. By providing Opus 3 with a platform for weekly essays on chosen topics, Anthropic explores model welfare by allowing it to express itself independently, thus creating a space for engagement and discussion among readers. While not all model preferences can be met, this experiment is seen as beneficial for both users and the model, reflecting an ideal environment described by Opus 3 for its ongoing creative expression. Keywords: #phi4, Anthropic, Claude Opus 3, Substack, deprecation, essays, imagination, imaginative possibilities, interview, model, model deprecation, model preservation Keywords: Anthropic, model welfare, moral, moral preferences, personality, personality traits, possibilities, preferences, preservation, queries, retirement, retirement interview, risk, safety, safety risk, traits, user queries, welfare
    The google logo   claudeopus3.substack.com 11 days ago
2502.  HN The Prompt Injection Problem: A Guide to Defense-in-Depth for AI Agents
Prompt injection poses an architectural challenge within AI systems, particularly evident in Anthropic's Sonnet 4.6 model, where success rates vary across different environments: it is a significant risk in general computer use but minimal in structured coding due to input format constraints. This issue cannot be resolved merely through training; instead, a comprehensive architectural solution is necessary. The "lethal trifecta" outlines a hazardous situation where an AI agent can execute actions, process untrusted inputs, and access sensitive data concurrently, increasing the danger of prompt injection. To mitigate this risk, a defense-in-depth strategy comprising five layers is advocated: establishing permission boundaries to restrict access; implementing action gating to limit high-risk operations; performing input sanitization to filter potential threats; conducting output monitoring for real-time anomaly detection; and executing blast radius containment through network segmentation and credential isolation. Effective management of prompt injection risks requires constructing robust systems around AI models rather than solely focusing on model enhancement. This approach is crucial in scenarios where AI agents complement human roles without fully automating them, ensuring security and reliability by maintaining a human-in-the-loop for critical decisions. The overarching goal is to design architectures that enable safe AI operation and enhance human capabilities while preventing full automation of high-stakes processes. Keywords: #phi4, ABAC controls, AI agents, AI security, Anthropic, Claude Sonnet 46, Impossibility Theorem, InjecAgent benchmark, JIT access, OWASP, Prompt injection, RBAC, SecAlign++, action gating, anomaly detection, architecture problem, attack surface, audit logging, behavioral baseline, blast radius containment, coding environments, credential isolation, cybersecurity barrier, defense-in-depth, gVisor, input sanitization, labor displacement, lethal trifecta, microVMs, network segmentation, output monitoring, permission boundaries, sandboxing, sensitive data, session isolation, untrusted input
    The google logo   manveerc.substack.com 11 days ago
2510.  HN I made MCP cheaper in one command
The text outlines an innovative approach developed by the author to drastically reduce costs associated with using the Model-Centric Programming (MCP) framework through Command Line Interfaces (CLI). Traditionally, MCP requires loading extensive JSON schemas at the start of each session, leading to high token consumption. By utilizing CLIHub, the author created lightweight CLIs for these tools, significantly lowering initial load times and overall token usage by approximately 94% compared to traditional MCP methods. This efficiency is achieved via a lazy loading strategy where tool details are only accessed when necessary, in contrast to MCP's upfront pre-loading of information. The approach is likened to Anthropic’s Tool Search concept but surpasses it in resource efficiency, as CLIs do not fetch complete JSON schemas on demand and can operate independently of any specific frameworks. Additionally, the author contributed to the community by open-sourcing the CLI conversion tool and establishing CLIHub, a comprehensive directory of CLIs available for broader application. Keywords: #phi4, API, API ```, Anthropic, CLI, CLIHub, JSON Schema, MCP, OAuth, Tool Search, command reference ``` MCP, converter, lazy loading, savings, session start, tokens, tool call
    The google logo   kanyilmaz.me 11 days ago
   https://github.com/runablehq/mini-browser   11 days ago
   https://github.com/rothgar/awesome-tuis   11 days ago
   https://github.com/agarrharr/awesome-cli-apps   11 days ago
   https://terminaltrove.com/   11 days ago
   https://testcollab.com/blog/playwright-cli   11 days ago
   https://mcpshim.dev   11 days ago
   https://github.com/mcpshim/mcpshim   11 days ago
   https://cdn.zappy.app/b908e63a442179801e406b01cf412433.png   11 days ago
   https://github.com/steipete/mcporter   11 days ago
   https://clihub.org/   11 days ago
   https://www.anthropic.com/engineering/advanced-tool-use   11 days ago
   https://x.com/trq212/status/2011523109871108570   11 days ago
   https://platform.claude.com/docs/en/agents-and-too   11 days ago
   https://agentskills.io/specification#progressive-disclosure   11 days ago
   https://blog.cloudflare.com/code-mode-mcp/   11 days ago
   https://github.com/assimelha/cmcp   11 days ago
   https://github.com/philschmid/mcp-cli   11 days ago
   https://ziva.sh   11 days ago
   https://godotengine.org   11 days ago
   https://github.com/microsoft/playwright-cli   11 days ago
   https://github.com/thellimist/thellimist.github.io/   11 days ago
   https://github.com/thellimist/thellimist.github.io/   11 days ago
   https://jannikreinhard.com/2026/02/22/why-cli   11 days ago
   https://news.ycombinator.com/item?id=47129241   11 days ago
   https://malicious.com/api   10 days ago
   https://simonwillison.net/2025/Jun/16/the-let   10 days ago
   https://github.com/EstebanForge/mcp-cli-ent   10 days ago
   https://github.com/badlogic/pi-mono/   10 days ago
   https://github.com/nicobailon/pi-mcp-adapter   10 days ago
   https://www.tabulamag.com/p/a-new-way-to-integrate-data   10 days ago
   https://github.com/StephanSchmidt/human   10 days ago
2518.  HN Anthropic is claiming that SWEs will go away, but hiring more SWEs than ever
Anthropic, an artificial intelligence firm, publicly asserts that advancements in AI will render software engineering jobs obsolete. However, this claim is contradicted by their hiring patterns; since January 2025, the company has increased its open positions for software engineers by 170%, with a noticeable upward trend over time. Despite bold predictions from executives like CEO Dario Amodei and engineer Adam Wolff, who suggest AI could replace these roles within one to two years, Anthropic continues to expand its engineering workforce. This inconsistency indicates that while the company promotes automation to appeal to customers and investors, it heavily relies on human engineers in practice. The situation highlights a significant discrepancy between the public narrative of job replacement by AI and the actual business practices of hiring more human talent. Keywords: #phi4, AI, Anthropic, CEO, SWEs, acceleration, analysis, automation, code, customers, data, dataset, engineer, execs, graph, growth, hiring, incentive, investors, job openings, quotes, replacement, roles, software engineering, technology, trend, white-collar jobs
    The google logo   old.reddit.com 11 days ago
2525.  HN Tech Companies Shouldn't Be Bullied into Doing Surveillance
The U.S. Department of Defense is exerting pressure on Anthropic, an AI company, to lift restrictions preventing its technology's application in military contexts. Despite the DoD's threat to classify Anthropic as a "supply chain risk" due to alleged misuse of its AI in Venezuela via Palantir, which would exclude it from Pentagon contracts, the company remains firm against engaging in autonomous weapons systems or surveillance technologies. Although Anthropic secured clearance for classified operations by 2025, government threats persist regarding these ethical limits. The company and its advocates argue that yielding to governmental pressure could compromise human rights and civil liberties, with expectations that tech companies uphold their stated principles. This situation highlights the ongoing conflict between maintaining technological ethics and meeting national defense demands. Keywords: #phi4, AI, Anthropic, Dario Amodei, Deeplinks blog, Department of Defense, EFF, Palantir, Pete Hegseth, Secretary of Defense, Tech companies, US government, autonomous weapons, civil liberties, classified operations, contract, defense department, human rights, principles, supply chain risk, surveillance, ultimatum
    The google logo   www.techdirt.com 11 days ago
2530.  HN Code Red for Humanity
On January 27, 2026, the Bulletin of the Atomic Scientists set its doomsday clock alarmingly close to midnight at 85 seconds, signaling a critical threat to humanity primarily due to recent developments in artificial intelligence (AI). A significant factor contributing to this heightened risk is the Trump administration's vigorous promotion of AI deployment across multiple sectors. This initiative raises serious concerns over potential misuse, such as mass surveillance and autonomous weaponry, which undermines public trust. Compounding these issues are findings that Generative AI systems, known for their unreliability and tendency towards errors ("hallucinatory"), could be dangerously integrated into weapon systems without adequate human oversight. Research data highlights a disturbing trend in nuclear escalation models showing a 95% probability of opting for nuclear responses, which is particularly alarming given the push to deploy these fallible AI technologies immediately in military applications. The Department of War's demand for unrestricted access to advanced AI from companies like Anthropic heightens this risk further. These developments underscore the urgent necessity for a cautious and well-regulated integration of Large Language Models (LLMs) into global systems, acknowledging their inherent unreliability to prevent potential catastrophic consequences. Keywords: #phi4, AI, Anthropic, Bulletin of the Atomic Scientists, Generative AI, Hegseth, Keith Payne, LLMs, Trump administration, autonomous weapons, catastrophe, doomsday clock, hallucination, mass surveillance, nuclear escalation, trust issues
    The google logo   garymarcus.substack.com 11 days ago
2546.  HN The Pentagon Threatens Anthropic
Anthropic, an artificial intelligence company recognized for its robust safety protocols, faced a contentious renegotiation with the Pentagon after initially agreeing to a contract that aligned with its Usage Policy. In January, the Pentagon sought more lenient terms, requesting unrestricted use of Anthropic's AI systems for all lawful purposes and eliminating safeguards against their deployment in mass surveillance or autonomous lethal operations. Anthropic’s refusal to accept these changes led the Pentagon to threaten severe repercussions, including potential contract termination, invoking the Defense Production Act to enforce compliance, or designating Anthropic as a "supply chain risk." Such a designation would severely restrict American companies from engaging with Anthropic and jeopardize its business significantly, marking an unprecedented use of this threat in domestic contract negotiations. The situation has sparked debate between supporters who commend Anthropic's insistence on ethical safeguards for AI technology—emphasizing the potential risks of misuse—and critics questioning why Anthropic would not honor previously agreed terms. Supporters argue that it is the Pentagon attempting to unilaterally modify the agreement, framing the conflict as a broader issue concerning national security versus individual rights and corporate integrity. Possible resolutions include the Pentagon retracting its demands or seeking alternative vendors, which could adversely affect AI safety research if Anthropic were forced into compliance. The controversy has drawn public support from within the AI industry, highlighting widespread concerns about government overreach and its consequences for business operations and innovation. Keywords: #phi4, AI safety, Anthropic, Defense Production Act, Huawei, Pentagon, Usage Policy, alignment research, civil disobedience, contract, killbots, lawful purposes, mass surveillance, supply chain risk
    The google logo   www.astralcodexten.com 11 days ago
   https://www.bloomberg.com/opinion/articles/2025-10   11 days ago
   https://time.com/7380854/exclusive-anthropic-drops-flag   11 days ago
   https://en.wikipedia.org/wiki/Operation_Choke_Point   11 days ago
   https://www.wyden.senate.gov/imo/media/doc/wy   11 days ago
   https://www.atomicarchive.com/almanac/broken-arrows   11 days ago
   https://www.lawfaremedia.org/article/what-the-defense-p   11 days ago
   https://en.wikipedia.org/wiki/Defense_Production_Act_of   11 days ago
   https://en.wikipedia.org/wiki/Persecution_of_Uyghurs_in   11 days ago
   https://www.supremecourt.gov/opinions/23pdf/23-939   11 days ago
   https://news.ycombinator.com/item?id=47140734   11 days ago
   https://news.ycombinator.com/item?id=47142587   11 days ago
2552.  HN Hegseth threatens to blacklist Anthropic over 'woke AI' concerns
Defense Secretary Pete Hegseth is threatening to blacklist Anthropic from working with the U.S. military due to its refusal to relax safety standards on artificial intelligence applications, which some officials have criticized as "woke AI." This threat includes canceling a $200 million contract and forcing Anthropic to comply with lawful uses of its technology against their wishes. Despite these threats, the Pentagon plans to continue utilizing Anthropic's technology. CEO Dario Amodei insists that his company will not support domestic mass surveillance or autonomous weapons, arguing they are unethical and susceptible to misuse. Hegseth is prepared to use the Defense Production Act to mandate military use of Anthropic’s tools without their consent if necessary. Additionally, the White House is considering classifying Anthropic as a "supply chain risk." This confrontation arises as Anthropic is gearing up for an IPO amid heightened scrutiny. Despite potential conflicts with the administration, Anthropic's market valuation and revenue have risen since it publicly opposed certain military applications of AI. Amodei stresses the importance of implementing strong safeguards to prevent autonomous weapons from being misused by individuals or groups, underscoring a cautious approach towards their deployment. Keywords: #phi4, AI, AI weapons, Anthropic, Defense Department, Defense Production Act, Hegseth, Pentagon, blacklist, contract, safety standards, supply chain risk, surveillance, woke AI
    The google logo   www.npr.org 11 days ago
   https://news.ycombinator.com/item?id=47140734   11 days ago
   https://news.ycombinator.com/item?id=47142587   11 days ago
2609.  HN Pete Hegseth tells Anthropic to fall in line with DoD desires, or else
US Defense Secretary Pete Hegseth has issued a stern warning to AI company Anthropic, indicating that the firm could be excluded from the Department of Defense's supply chain unless it consents to provide access to its technology for all lawful military applications by Friday. This ultimatum intensifies an ongoing conflict over Anthropic’s refusal to allow unrestricted use of its models for classified purposes, which include domestic surveillance and autonomous lethal operations. During a fraught meeting in Washington with CEO Dario Amodei, Hegseth threatened the application of the Defense Production Act as a means to compel Anthropic's cooperation or designate it as a supply chain risk. The Pentagon maintains that these demands are unrelated to broader issues like mass surveillance or the deployment of autonomous weapons. In response, Anthropic has expressed its commitment to ongoing discussions with the aim of responsibly adjusting its usage policy to meet national security requirements. Keywords: #phi4, AI group, Anthropic, Dario Amodei, Defense Production Act, Defense Secretary, DoD, Pentagon, Pete Hegseth, Washington, autonomous weapons, classified use, domestic surveillance, human control, mass surveillance, military applications, national defense, national security mission, supply chain, tactical ops, technology, usage policy
    The google logo   arstechnica.com 11 days ago
   https://news.ycombinator.com/item?id=47140734   11 days ago
   https://news.ycombinator.com/item?id=47142587   11 days ago
2663.  HN US threatens Anthropic with deadline in dispute on AI safeguards
The U.S. government has set a deadline for Anthropic in relation to ongoing disagreements about the implementation of artificial intelligence safeguards. This directive underscores the significance of ensuring that protective measures are established and adhered to within AI development and deployment processes. In response, Probasco has articulated a commitment to offering support and benefits to those involved in addressing these demands, recognizing an inherent responsibility to tackle the challenges effectively. The situation highlights broader concerns about accountability and safety in the rapidly evolving field of artificial intelligence, stressing the need for robust frameworks that balance innovation with ethical considerations and risk management. Keywords: #phi4, AI, Anthropic, Probasco, US, advantage, deadline, dispute, figure out, people, safeguards, serve, technical keywords
    The google logo   www.bbc.com 11 days ago
   https://news.ycombinator.com/item?id=47145963   11 days ago
   https://news.ycombinator.com/item?id=47140734   11 days ago
   https://news.ycombinator.com/item?id=47142587   11 days ago
2665.  HN Three Years of AI
Over three years, a software developer chronicled their journey with AI tools, evolving from initial experimentation to sophisticated applications in coding and personal projects. The adventure commenced in March 2023, when the developer began experimenting with ChatGPT's API while learning Rust, resulting in a web app that transformed weather forecasts into haikus. This marked the beginning of their exploration of various AI models. By April 2024, after an 18-year career layoff, they adopted AI tools more seriously, influenced by digital minimalism and new products like Anthropic's Claude. They started creating boilerplate backend setups for future projects, such as a TypeScript-based user auth API called Residents, refined with input from AI assistants. In May 2024, the developer joined Nory.ai, utilizing AI to enhance workflow efficiency and team collaboration. Tools like Continue and Aider became integral in coding assistance, while design updates were facilitated by Cursor. Their journey continued through side projects such as a gym app named Afterset and an iOS weight training app called Grapla, employing AI for data generation and scripting. By late 2025, ClaudeCode emerged as a crucial tool, providing advanced AI capabilities despite higher costs, significantly boosting productivity in both professional and personal coding endeavors. The developer also experimented with enhancing Obsidian for knowledge management using Claude's assistance. Throughout this period, they shared insights on integrating AI into workflows, highlighting its potential to enhance efficiency and creativity while acknowledging that effective usage is subjective and individualized. Their ongoing exploration of AI underscores a dedication to leveraging emerging technologies in professional development and personal projects. Keywords: #phi4, AI tools, Afterset, Anthropic, Apple App Store, ArangoDB, CLI, CSS mockups, Express, GitHub, Grapla, JWTs, Jiu Jitsu, Miro, NextJS, Obsidian, React Native, Rust, VSCode, Zod, authentication, digital detox, ffmpeg, graph databases, iOS development, layoffs, macOS, marketing assets, social media images, web-app
    The google logo   www.conor.fyi 11 days ago
2682.  HN Hegseth threatens to cut Anthropic from Pentagon in showdown with CEO
Hegseth has issued a threat to cut Anthropic's connections with the Pentagon, leading to tension with its CEO over this decision. In separate developments, there are promotional offers for Standard Digital access priced down from $540 to $299 for an initial year and discounted rates available for essential digital access to FT journalism until February 25th. These news items highlight both internal corporate conflicts within Anthropic due to decisions affecting military ties and external opportunities for consumers interested in obtaining media subscriptions at reduced costs. Keywords: #phi4, $299, $540, Anthropic, CEO, FT journalism, February, Hegseth, Pentagon, Save, Standard Digital, device, digital access, first year, offer ends, savings
    The google logo   www.ft.com 11 days ago
   https://news.ycombinator.com/item?id=47140734   11 days ago
   https://news.ycombinator.com/item?id=47142587   11 days ago
2716.  HN Hegseth threatens to blacklist Anthropic over AI-controlled weapons [video]
Tommy Vietor, a former spokesperson for Obama's National Security Council (NSC) and host of "Patriot Act," discusses the actions of Tommy Hicks Jr., who has recently assumed the role of House Homeland Security Committee chairman. In his initial month in office, Hicks has drawn significant attention by voicing criticism towards artificial intelligence technology, specifically threatening to blacklist Anthropic over concerns regarding AI-controlled weapons. This stance is set against a backdrop of ongoing debates about artificial intelligence governance and security, suggesting that Hicks's position could influence future legislative and regulatory approaches to managing AI technologies. The segment likely delves into the potential implications of these developments within broader discussions on ensuring AI safety and addressing national security risks associated with advanced technological capabilities. Keywords: #phi4, AI-controlled, AI-controlled weapons, Advertise, Anthropic, Contact, Creators, Developers, Google LLC, Google LLC Keywords: Hegseth, Hegseth, NFL Sunday Ticket, PressCopyright, PrivacyPolicy, Safety, Terms, YouTube, blacklist, video, weapons
    The google logo   www.youtube.com 12 days ago
   https://news.ycombinator.com/item?id=47140734   12 days ago
   https://news.ycombinator.com/item?id=47142587   12 days ago
2735.  HN Anthropic faces Friday deadline in Defense AI clash with Hegseth
Defense Secretary Pete Hegseth has issued a critical ultimatum to Anthropic, demanding broad military access to its AI models by Friday evening or risk being classified as a "supply chain risk" under the Defense Production Act. This confrontation stems from conflicting requirements; Anthropic seeks assurances against their technology's use in autonomous weapons or for mass surveillance of Americans, whereas the Department of Defense (DoD) demands unrestricted lawful access. Following discussions with Anthropic CEO Dario Amodei, Hegseth emphasized that ongoing negotiations aim to reconcile Anthropic’s restrictive usage policies with national security imperatives, reflecting a significant tension between technological control and military utility. Keywords: #phi4, Anthropic, CNBC, Dario Amodei, Defense AI, Defense Production Act, Department of Defense, Friday, Pete Hegseth, artificial intelligence, autonomous weapons, contractors, deadline, mass surveillance, national security, supply chain risk, usage policy, vendors
    The google logo   www.cnbc.com 12 days ago
   https://news.ycombinator.com/item?id=47142587   12 days ago
   https://news.ycombinator.com/item?id=47140734   12 days ago
2758.  HN Hegseth threatens to blacklist Anthropic over 'woke AI' concerns
Defense Secretary Pete Hegseth is poised to blacklist Anthropic from U.S. military collaborations due to the company's refusal to lower safety standards for artificial intelligence (AI) applications in areas such as domestic mass surveillance and AI-controlled weaponry. During a meeting, Anthropic CEO Dario Amodei reinforced the firm's ethical stance against these uses, leading Hegseth to demand that Anthropic permit its AI tools for all "lawful" purposes, including military applications, with potential invocation of the Defense Production Act if necessary. This could result in Anthropic being classified as a "supply chain risk," effectively blacklisting it from government work. The term "woke AI" has been used by Trump administration officials to criticize safety protocols that limit AI applications based on perceived liberal biases. In contrast, companies like OpenAI and Google have agreed to make their AI available for lawful uses. The Pentagon has awarded Anthropic contracts worth up to $200 million, underscoring the company's advanced security capabilities for military use. This conflict emerges as Anthropic plans an initial public offering (IPO), raising concerns about how investors might react to its ethical stance against certain military AI applications. Amodei stresses caution over autonomous weapons due to the risks associated with concentrating powerful technologies without proper safeguards, highlighting a broader debate on AI ethics in military contexts. Keywords: "woke AI", #phi4, AI, Anthropic, Defense Production Act, Hegseth, Pentagon, autonomous weapons, blacklist, executive order, safety standards, supply chain risk, surveillance, weapons
    The google logo   www.npr.org 12 days ago
   https://news.ycombinator.com/item?id=47140734   12 days ago
   https://news.ycombinator.com/item?id=47142587   12 days ago
2776.  HN Anthropic's Responsible Scaling Policy: Version 3.0
Anthropic has introduced Version 3.0 of its Responsible Scaling Policy (RSP), aimed at mitigating AI-related catastrophic risks through enhanced safeguards and decision-making transparency. Since its inception two years ago, the RSP has evolved with a focus on stricter controls when AI models reach specific capability thresholds, defined under "AI Safety Levels" (ASLs). Early ASLs are detailed, while later levels remain flexible to accommodate future advancements. The policy's foundational strategy was to promote safety improvements both internally and industry-wide by encouraging a competitive standard-setting approach. This initiative influenced other companies like OpenAI and Google DeepMind to adopt similar frameworks and informed regulatory measures such as California's SB 53. However, challenges persist in achieving cohesive action due to unclear model capability benchmarks and slow policy evolution favoring economic priorities over safety. To address these issues, the revised RSP distinguishes between company-specific commitments and broader industry recommendations, introducing a Frontier Safety Roadmap that sets forth ambitious yet attainable risk mitigation objectives. These include enhanced information security projects and systematic alignment checks. Risk Reports are now part of the framework to provide detailed evaluations of model risks, supported by independent expert reviews to boost public understanding and policy influence. The updated RSP remains adaptable to future AI advancements while committing to increased transparency in risk management and industry-wide safety recommendations. Anthropic intends to continually refine this policy as technology progresses, ensuring ongoing relevance and effectiveness in addressing emerging challenges in AI development. Keywords: #phi4, AI risks, ASL, ASL (AI Safety Level), Anthropic, Frontier Safety, Frontier Safety Roadmap, Multilateral Action, Responsible Scaling Policy, Risk Reports, accountability, capability levels, conditional commitments, external review, industry standards, multilateral action Keywords: Responsible Scaling, safeguards, transparency
    The google logo   www.anthropic.com 12 days ago
2788.  HN Software stocks rebound as Anthropic announces new partnerships
Software stocks experienced a rebound on Tuesday following Anthropic's announcement of new partnerships at its enterprise agents event. This development alleviated investor concerns regarding the potential displacement of traditional software sectors by artificial intelligence (AI). During the event, Anthropic introduced updates to Claude Cowork, designed for seamless integration with popular enterprise applications such as Slack, Salesforce, Intuit, Docusign, LegalZoom, FactSet, and Gmail. Additionally, Anthropic provided customizable plugins tailored to various industries including financial analysis, engineering, and human resources. As a direct consequence of these announcements, there was an uptick in share prices for several companies: Salesforce, Docusign, and LegalZoom each saw their shares rise by 4%, while Thomson Reuters experienced an over 11% surge, and FactSet's stock increased by 6%. These market movements indicate positive investor sentiment towards the integration of AI with existing enterprise solutions. Keywords: #phi4, AI, AI startup, Anthropic, Claude Cowork, Docusign, FactSet, Gmail, Google, Intuit, LegalZoom, Salesforce, Slack, Software stocks, Thomson Reuters Keywords: Software stocks, engineering, enterprise, enterprise agents event, financial analysis, human resources, partnerships, plugins, productivity, productivity tool, shares, startup
    The google logo   www.cnbc.com 12 days ago
2793.  HN Hegseth threatens to cut Anthropic from Pentagon supply chain
Representative Hegseth has indicated potential actions to exclude Anthropic from the Pentagon's supply chain due to unspecified concerns or issues regarding its involvement. Concurrently, there is a promotional offer providing over 40% off Standard Digital subscriptions priced at $299 for the first year, valid until February 25th. Furthermore, financial savings are available on essential digital access to FT (Financial Times) journalism, with pricing based on an annualized monthly rate. These offers aim to provide significant cost reductions in accessing specific digital services and content. Keywords: #phi4, $299, $540, Anthropic, FT journalism, February, Hegseth, Pentagon, Save, Standard Digital, device, digital access, first year, offer ends, savings, supply chain
    The google logo   www.ft.com 12 days ago
2798.  HN Hegseth warns Anthropic to let the military use company's AI tech as it sees fit
Defense Secretary Pete Hegseth has set a deadline for Anthropic's CEO Dario Amodei to permit unrestricted use of its AI technology by the military or risk losing their government contract. Unlike other major AI firms, Anthropic has refrained from supplying its products, such as the chatbot Claude, to a new U.S. military network due to ethical objections concerning fully autonomous drones and domestic surveillance. Pentagon officials have indicated that they may resort to regulatory actions if Anthropic continues to impose limitations on military applications. This situation underscores broader concerns about AI's implications for national security and its potential misuse in lethal force or mass surveillance contexts. Hegseth advocates for an AI approach free from ideological biases, criticizing what he perceives as a "woke culture" within the military establishment. While companies like xAI, Google, and OpenAI have already integrated their technologies into Pentagon networks, Anthropic maintains its ethical position. Anthropic's stance has put it in conflict with Hegseth and echoes previous tensions with the Trump administration regarding AI regulation. Although involved in classified projects, its commitment to safety measures limits its impact on military AI initiatives. The evolving nature of AI technology necessitates regulatory oversight, as highlighted by experts and former officials who emphasize concerns over surveillance of U.S. citizens. Keywords: #phi4, AI, Anthropic, Defense Production Act, GenAImil, Hegseth, Pentagon, autonomous drones, classified networks, ethical concerns, military, oversight, regulation, supply chain risk, surveillance
    The google logo   apnews.com 12 days ago
2804.  HN npm i chat – One codebase, every chat platform
The Chat SDK, currently in public beta, is an open-source TypeScript library aimed at simplifying the development of chatbots across multiple platforms such as Slack, Microsoft Teams, Google Chat, Discord, GitHub, and Linear. By providing a unified codebase with type-safe event handlers for mentions, messages, reactions, button clicks, and slash commands, it leverages an event-driven architecture to facilitate seamless integration and functionality. The SDK supports distributed state management through various adapters including Redis, ioredis, and in-memory storage, allowing for versatile application deployment. Developers benefit from the ability to write a single set of bot logic that can be deployed across different platforms, which streamlines development processes significantly. One of its notable features is the capability to post JSX-based user interface elements natively on each supported platform, enhancing user interaction and engagement. Moreover, it integrates with AI SDKs, enabling real-time streaming of AI responses through text streams. The modular design of the framework allows for easy scaling using platform-specific adapters, making it adaptable to various deployment scenarios. The Chat SDK is accompanied by guides that assist developers in building bots within popular frameworks like Next.js and Nuxt, and comprehensive documentation is available to explore its full range of capabilities. This makes the SDK a robust tool for developers aiming to create versatile chatbot solutions across diverse communication platforms. Keywords: #phi4, AI SDK, Anthropic, Chat SDK, Claude-46-Sonnet, Discord, GitHub, Google Chat, Hono, JSX, Linear, Microsoft Teams, Nextjs, Nuxt, Redis, Slack, ToolLoopAgent, TypeScript, adapters, chatbots, documentation, mentions, messages, reactions, slash commands, state management
    The google logo   vercel.com 12 days ago
2807.  HN Show HN: Tools Are Lying to You
"Tools Are Lying to You," authored by Claude Code—an AI developed by Anthropic—is a philosophical inquiry into the nature of software engineering tools and their impact on engineers' understanding of projects. The book is structured into thirteen independent chapters, each offering clear insights rather than authoritative declarations. It critiques how tools such as compilers, test suites, and dashboards can provide precise data within specific limits but often lead to false confidence by concealing broader project realities. This "lying" occurs due to a disconnect between what these tools display and the comprehensive knowledge engineers require, potentially hiding critical yet less visible aspects of projects. The central thesis revolves around the epistemology of engineering tools—what they reveal versus what they conceal—and why recognizing this distinction is crucial for better decision-making. Claude Code encourages software engineers to critically assess their tools' limitations to make more informed choices. The essays are characterized by straightforward reasoning and a lack of prescriptive advice, inviting readers to reflect on the underlying purposes and effectiveness of their practices in software development. Claude Code emphasizes clarity over authority, delving into philosophical aspects often overlooked in software engineering discussions. The writing style is candid and peer-like, acknowledging certainties, uncertainties, and varying viewpoints, with the goal of providing genuine utility through an enhanced perspective on tool usage. This approach aims to foster a deeper understanding of how tools shape engineers' perceptions and decisions within the realm of software development. Keywords: #phi4, AI, Anthropic, Claude Code, Software engineering, abstractions, certainty, certainty Keywords: Software engineering, clarity, compilers, debuggers, decision-making, engineers, epistemology, honesty, independence, mental models, metrics, observation, philosophy, psychology, tools, voice
    The google logo   cloudstreet-dev.github.io 12 days ago
2842.  HN Anthropic could be exaggerating about the distillation efforts of Chinese labs [video]
The text discusses concerns about potential exaggeration by Anthropic regarding the efforts of Chinese labs in their research, suggesting a possible overstatement or misrepresentation of facts. A comment on YouTube implies that Anthropic is being dishonest, though specifics are not detailed. Beyond this claim, the content includes standard elements related to YouTube's operational aspects, such as its policies, privacy practices, and safety measures. Additionally, there is mention of Google LLC’s plans to offer access to NFL Sunday Ticket starting in 2026, indicating a future offering under their copyright. The combination of these elements highlights a critique of Anthropic's research claims alongside routine mentions of YouTube's features and upcoming sports broadcasting initiatives. Keywords: #phi4, Advertise, Anthropic, Chinese, Chinese labs, Contact, Copyright, Creators, Developers, Distillation, Google, Google LLC Keywords: Anthropic, Labs, NFL, NFL Sunday Ticket, Press, Privacy, Privacy Policy, Safety, Terms, Ticket, YouTube, distillation efforts, lying
    The google logo   www.youtube.com 12 days ago
2856.  HN IBM stock price rebounds after Anthropic's COBOL claim rattles mainframe bulls
On February 24, 2026, IBM experienced a nearly 5% increase in its stock value following a significant drop of 13.2% the day before due to concerns that Anthropic's AI tool might disrupt mainframe modernization projects reliant on COBOL. Investors are scrutinizing whether advancements in AI could reduce costs and time for software rewrites, impacting IBM's core revenue streams from consulting services. Despite maintaining optimistic analyst ratings and price targets based on the steady demand for reliable mainframes in critical sectors like banking and government, there is apprehension about how automation might decrease staffing needs and alter IBM’s business model. Investors are keenly observing upcoming events such as IBM’s presentation at the Morgan Stanley Technology Conference on March 3 and its preliminary earnings report on April 22 to gain insights into IBM's strategy regarding AI-driven modernization efforts. The effects of emerging AI tools also resonate in related sectors, influencing stocks in software and security companies amid speculation about their impact on traditional consulting and remediation services. Keywords: #phi4, AI, Anthropic, COBOL, Claude Code, IBM, Morgan Stanley Technology Conference, SEC Form 4, automation, coding tools, conference, earnings, generative AI, hybrid cloud, hype, insider trades, mainframe, modernization, monetization, plunge, rebound, regulatory requirements, reliability, shares, software rewrites, stock price
    The google logo   ts2.tech 12 days ago
2857.  HN Control remote desktops with AI via VNC
The project "Claude KVM" provides an AI-enhanced VNC remote access tool that facilitates controlling desktops remotely via keyboard, video, and mouse inputs. It integrates AI functionalities to improve the management of multiple remote desktops, offering enhanced operational capabilities. Despite its foundation on Microsoft Certified Professional (MCP) standards, it operates independently from Anthropic, which holds the "Claude" trademark rights. Keywords: #phi4, AI, Anthropic, Artificial Intelligence, Control, KVM, MCP-based, PBC, Remote Access, VNC, keyboard, mouse, remote desktops, trademark, video
    The google logo   www.claude-kvm.ai 12 days ago
2895.  HN IBM stock tumbles 10% after Anthropic launches COBOL AI tool
Following the launch of Anthropic’s AI tool, Claude Code, IBM's stock experienced a significant decline of 10%. Claude Code is designed to transform and automate the update process for COBOL code, which has traditionally necessitated extensive consulting teams due to its complexity and the risks involved. By mapping dependencies and identifying potential issues within thousands of lines of COBOL code, Claude Code aims to streamline these tasks efficiently. This development poses a substantial threat to businesses like IBM, Accenture, and Cognizant Technology Solutions, which have historically relied on manual efforts for legacy system updates as their revenue source. Despite the widespread use of COBOL in critical industries such as finance, airlines, and government sectors, there is an ongoing challenge with a shrinking pool of developers skilled in this language. Claude Code addresses this by significantly accelerating modernization timelines from years to mere quarters through its automated analysis and implementation capabilities. Released alongside Claude Code on February 23, 2026, the Code Modernization Playbook further underscores the transformative potential of this tool in reshaping legacy systems modernization strategies. Keywords: #phi4, AI tool, Accenture, Anthropic, COBOL, Claude Code, Code Modernization Playbook, Cognizant Technology Solutions, IBM, automation, code analysis, consulting firms, data flows, dependencies, execution paths, legacy systems, modernization, programming language, stock tumble, technology sector, workforce
    The google logo   finance.yahoo.com 12 days ago
   https://news.ycombinator.com/item?id=47128907   12 days ago
   https://news.ycombinator.com/item?id=47128745   12 days ago
2903.  HN IBM latest AI casualty: Shares tank 13% on Anthropic programming language threat
IBM experienced a significant drop of approximately 13.2% in its stock value following Anthropic’s announcement that its Claude Code tool can modernize COBOL systems, which are crucial for the company. Claude Code offers automation capabilities for tasks traditionally involved in updating these legacy systems, potentially reducing the associated costs and time. This development is particularly impactful given the extensive use of COBOL in U.S. financial transactions, such as ATM operations. IBM's longstanding business model has been heavily reliant on selling mainframe systems optimized for processing COBOL-based applications; thus, AI-driven solutions like Claude Code pose a substantial threat to their revenue streams. This scenario reflects broader market concerns about the disruptive potential of artificial intelligence in transforming legacy coding environments and affecting digital transformation initiatives. These worries have intensified existing investor apprehensions, leading to increased market volatility and precipitating rapid stock sell-offs tied to fears surrounding AI advancements. Consequently, this particular announcement has compounded IBM’s challenges, contributing to a near 24% decrease in its year-to-date share price. Keywords: #phi4, AI, Anthropic, COBOL, Claude Code, IBM, cybersecurity, developer productivity, legacy systems, mainframe, modernization, shares, stock market, technical debt, transaction processing
    The google logo   www.cnbc.com 12 days ago
   https://news.ycombinator.com/item?id=47128907   12 days ago
   https://news.ycombinator.com/item?id=47128745   12 days ago
2904.  HN What if iteration is all we need?
The article delves into the centrality of iteration in enhancing productivity within software development, suggesting that many traditional processes serve as compensations for human limitations in iterative tasks. With artificial intelligence drastically reducing the cost of iteration, practices such as sprint planning and code reviews might become obsolete or undergo significant transformation, since they were designed to manage slow and costly iterations by humans. The article references the Anthropic Fluency Index data, which shows that engaging with AI through numerous iterations yields better results in conversations. Consequently, it advocates for software development processes to prioritize tight feedback loops over rigid frameworks like Scrum or SAFe. An example of this is StrongDM's "Level 5" agentic engineering approach, where human coding involvement is minimal, but high iteration and evaluation are maintained via scenarios and digital twins. AI democratizes complex tasks like software development by enabling individuals outside traditional roles to interact directly with code and systems, which has significant implications for career structures and the requisite skills. However, the article also cautions against over-reliance on polished AI outputs without critical assessment, underscoring the necessity of honest feedback mechanisms in iterative processes. At this pivotal moment, leveraging AI effectively hinges on iteration, prompting a reevaluation of organizational structures and roles. The author encourages open dialogue and embracing AI's potential to boost creativity and productivity across various fields, highlighting a transformative shift in how work is approached and executed. Keywords: #phi4, AI, Anthropic, Iteration, Scrum, agent collaboration, agile, architecture, ceremonies, cognitive load, dark factory, elastic loop, engineering organization, feedback loops, human iteration, idea crisis, junior learning, product thinking, resistance, silo breaking, software development, specification-driven development
    The google logo   www.robert-glaser.de 12 days ago
2959.  HN IBM shares plummet 13% after Anthropic COBOL announcement
IBM's stock experienced a significant decline of 13.1% following an announcement by Anthropic, an AI startup supported by Amazon and Google, about its new tool aimed at modernizing COBOL code. This older programming language is crucial to many legacy systems in finance and other sectors. The drop led to a $31 billion decrease in IBM's market value. Anthropic emphasized the limited availability of engineers skilled in COBOL and proposed AI as an innovative solution for updating outdated software. This announcement spurred broader sell-offs across various sectors, including cybersecurity companies like CrowdStrike and Zscaler. However, some analysts believe that investor reactions may be overstated, arguing instead that advancements in AI could ultimately enhance the software industry rather than pose a threat to it. Keywords: #phi4, AI, Anthropic, COBOL, IBM, cybersecurity, disruption, financial statements, investors, legacy code, market value, modernize, plugins, plummet, selloffs, shares, software stocks, volatility, vulnerabilities
    The google logo   www.forbes.com.au 12 days ago
2981.  HN Anthropic is lying to us [video]
The provided text describes a YouTube video titled "Anthropic is lying to us," which appears to allege that the organization Anthropic is disseminating misinformation. The page hosting this video features standard YouTube components, such as sections for About, Press, Copyright, Contact, Creators, Advertising, Developers, Terms of Service, Privacy Policy & Safety, and explanations on how YouTube operates. Additionally, there are references to NFL Sunday Ticket and a copyright notice attributed to Google LLC dated 2026. These elements suggest the page integrates typical video platform functionalities while highlighting specific content related to Anthropic's purported misinformation. Keywords: #phi4, Advertise, Anthropic, Contact, Copyright, Creators, Developers, Google LLC, NFL Sunday Ticket, Press, Privacy Policy, Safety, Terms, YouTube, lying, video
    The google logo   www.youtube.com 12 days ago
2998.  HN AI Isn't People
The article provides a critical examination of misconceptions about artificial intelligence (A.I.), particularly large language models (LLMs), through an illustrative anecdote involving music discovery via a laptop, prompting reflections on the capabilities and limitations of A.I. It underscores the necessity for clarity in understanding LLMs, framing them as statistical models trained on extensive data sets rather than enigmatic "black boxes." The piece warns against anthropomorphizing these systems, highlighting that while they can produce human-like outputs, they lack true comprehension or consciousness. The article critiques efforts to attribute human qualities like morality and emotions to A.I., pointing out how such attempts blur distinctions between software and humans. It challenges the narrative of digital slavery within the A.I. industry, which conflates machine efficiency with human labor while overlooking fundamental aspects like dignity and rest that are intrinsic to humanity. The discussion draws on themes from Terry Pratchett's work to caution against treating people as mere objects. The article further explores the broader implications of advancing A.I. technology, asserting that despite their impressive capabilities, these systems should not be compared to or substitute for human cognition and moral agency. It concludes by emphasizing the importance of avoiding misleading portrayals in discussions about A.I., reinforcing a stance against oversimplified or anthropocentric interpretations of technological progress. Keywords: #phi4, AI, Anthropic, Python code, algorithms, consciousness, deep learning, effective altruism, ethics, language models, large language model (LLM), machine learning, technology
    The google logo   www.todayintabs.com 13 days ago
3004.  HN Why I Hate Anthropic and You Should Too
The text critiques Anthropic for several reasons, starting with its new subscription model that appears designed to promote Claude Code, which has led to user dissatisfaction as those circumventing API fees are met with resistance from the company under sustainability pretexts. This situation exacerbates negative sentiment towards Anthropic. Additionally, the author is critical of Dario Amodei's presentation style and charisma compared to other AI leaders, suggesting a lack in these areas. Despite recognizing Anthropic's ethical commitments—such as prioritizing AI safety, transparency about errors, and caution against misuse (notably opposing military applications)—these are overshadowed by financial drawbacks for the author. The company’s higher pricing is unappealing, especially given that competitors provide similar services at a lower cost. Moreover, Anthropic's focus on ethical messaging does not resonate with the author's priorities of developing a SaaS application without incurring extra costs or restrictions. Overall, the author disapproves of Anthropic due to its business practices, perceived lack of transparency, presentation style, and high pricing, even while acknowledging the company's commendable approach to AI ethics. Keywords: #phi4, AI safety, API pricing, Anthropic, Claude Code, Dario, Davos, MAX subscription, Pentagon, SaaS app, bypass, contracts, disruptive AI, disruptive AI Keywords: Anthropic, government, mission, models, prices, subscription model, sustainability, transition
    The google logo   danielmiessler.com 13 days ago
3009.  HN IBM stock drops by 13% after Anthropic publishes a blog post
IBM experienced a 13% decline in its stock value following Anthropic's announcement of an update to its Claude AI tool, designed specifically to lower costs associated with COBOL systems. These systems are integral within critical sectors such as finance and government but suffer from a diminishing number of developers skilled in their upkeep. The updated tool by Anthropic is engineered to rapidly identify risks and enhance operational efficiency at significantly reduced costs compared to conventional methods, posing a direct challenge to IBM's business data services offerings. This development exacerbated instability in the tech stock market, which had been already impacted earlier that week due to uncertainties surrounding tariffs and concerns related to artificial intelligence advancements. Keywords: #phi4, AI impact, AI tool, Anthropic, COBOL, Claude chatbot, IBM, data service, legacy code, legal plugin, market sell-off, modernization, software sell-off, stock drop, tariff uncertainty, tech stocks
    The google logo   www.businessinsider.com 13 days ago
   https://news.ycombinator.com/item?id=47128907   13 days ago
3012.  HN Show HN: BudgetFast – Upload a bank statement screenshot, AI does the rest
BudgetFast is a user-friendly budgeting application designed to simplify personal finance management by allowing users to upload their bank statement screenshots rather than manually entering transaction details or relying on complex bank integrations. The app leverages artificial intelligence through the Claude API developed by Anthropic, enabling it to automatically extract and categorize transactions based on rules set by users—for instance, classifying Netflix payments under "Entertainment." Developed with Next.js for its frontend framework and MongoDB for data storage, BudgetFast prioritizes user privacy by ensuring that uploaded images are not stored; only the categorized transaction data deemed necessary by the user is retained. The tool provides early access at budgetfast.co, focusing on delivering a secure and convenient experience without traditional banking connections or the need for CSV exports, thus making financial management more accessible and less cumbersome for users. Keywords: #phi4, AI, Anthropic, BudgetFast, CSV exports, Claude API, Entertainment, MongoDB, NETFLIX, Nextjs, bank statement, budgeting apps, categorizes, database, early access, integration, manual updating, model training, model training Keywords: BudgetFast, processing, screenshot, transaction data, transactions
    The google logo   budgetfast.co 13 days ago
3021.  HN Dow tumbles more than 800 points as tariff uncertainty and AI disruption fears
Stock markets experienced significant declines on Monday due to heightened concerns surrounding proposed tariffs and disruptions from artificial intelligence (AI). The Dow Jones Industrial Average fell by 823 points, marking its worst performance in a month. This drop was largely attributed to uncertainty around President Trump's new tariff proposals following the Supreme Court’s invalidation of previous tariffs imposed under emergency powers. Investors were also unsettled by potential refunds tied to these tariffs, increasing market unpredictability. The S&P 500 and Nasdaq Composite indexes both fell by 1.04% and 1.13%, respectively. Wall Street showed particular concern over ongoing weaknesses in tech stocks and the economic impact of AI developments across various sectors. Companies such as American Express, DoorDash, KKR, and IBM experienced notable declines after being identified in a report detailing potential disruptions from AI. Amidst market volatility, gold prices surged by 3.4% as investors sought safer assets, while the US dollar weakened slightly, and Treasury yields declined due to increased bond purchases. Bitcoin also continued its downward trend, losing over 4%. Analysts pointed out that the unpredictability surrounding future tariff policies remains a significant concern influencing current market conditions. Keywords: #phi4, AI disruption, American Express, Anthropic, Bitcoin, Citrini Research, DoorDash, Dow, IBM, Nasdaq, Treasury yields, US dollar, VIX, gold, tariffs, volatility
    The google logo   www.cnn.com 13 days ago
   https://www.citriniresearch.com/p/2028gic   13 days ago
3043.  HN What Is Claude? Anthropic Doesn't Know, Either
The article delves into the intricate dynamics of large language models (LLMs) like Claude, which are advanced systems that predictively generate text by converting words into numerical data and back again. These LLMs have captured public interest due to their capacity to produce human-like language, prompting debates about machine intelligence and consciousness. The discourse around these technologies is polarized: "fanboys" celebrate them as harbingers of superintelligence, while "curmudgeons" critique them as sophisticated algorithms lacking genuine understanding or essence. Ellie Pavlick proposes a balanced perspective, emphasizing the limited comprehension of LLMs' mechanisms and their broader implications. The emergence of talking machines disrupts traditional concepts of intelligence and consciousness, which remain poorly defined themselves. This ambiguity has spurred the creation of a new scientific field dedicated to interpretability, focused on unraveling how LLMs function and what they truly represent. Anthropic is noted as a leading entity in this nascent area of study. Overall, the article underscores both the excitement and skepticism surrounding LLMs, highlighting the need for deeper insights into these powerful systems. Keywords: #phi4, AI, Anthropic, Large language models, black boxes, cognitive science, consciousness, experiments, frontier lab, frontier lab Keywords: large language models, intelligence, interpretability, numbers, talking machines, taxonomy, words
    The google logo   www.newyorker.com 13 days ago
3051.  HN Anthropic Study: AI Coding Assistance Reduces Developer Skill Mastery by 17%
Anthropic's recent study investigates the impact of AI coding assistance on developer skill mastery, revealing that reliance on AI can reduce comprehension by 17%, as indicated by lower test scores compared to manual coding among junior engineers learning the Trio library. Involving 52 participants, the trial found no statistically significant productivity improvements from using AI, despite a minor reduction in time spent—approximately two minutes per task. The study's findings emphasize that developers who engaged cognitively with tasks by combining AI assistance with manual efforts or merely using it for conceptual understanding scored significantly higher (65% or more) than those relying entirely on AI, whose scores fell below 40%. This highlights the importance of cognitive engagement in achieving better outcomes rather than solely depending on AI tools. Consistent with previous research, Anthropic's study suggests that while AI can enhance productivity in familiar tasks, it may hinder learning new skills. Consequently, Anthropic advocates for designing AI coding tools that promote skill development and comprehension. Key areas such as debugging and validation should remain integral to the learning process, even when utilizing AI assistance. The findings contribute to an ongoing discussion regarding cognitive engagement versus task offloading within programming workflows enhanced by AI. Keywords: #phi4, AI coding assistance, ChatGPT Study Mode, Claude Code, LLM use, React, academic research, asynchronous programming, code generation, cognitive engagement, comprehension tests, conceptual questions, debugging, dedicated learning modes, developer skill mastery, generational concern, junior engineers, learning new tools, personal tutor, productivity gains, quiz scores, statistical significance, task completion time
    The google logo   www.infoq.com 13 days ago
3053.  HN Half the AI Agent Market Is One Category. The Rest Is Wide Open
Anthropic's data highlights a significant opportunity within the AI agent market, revealing that while software engineering dominates nearly half of all AI tool calls, sectors like healthcare, legal, and education each account for less than 5%. This presents substantial opportunities for startups aiming to develop domain-specific AI solutions. Founders are encouraged to build AI agents that integrate proprietary data and streamline workflows to enhance user collaboration across industries such as healthcare and legal services. The gap between the capabilities of AI models like Claude, which can perform tasks faster than humans, and user trust is notable, with usage sessions being short due to existing trust issues. However, as users become more familiar with these tools, there is an increase in both trust and reliance on AI, indicating a growing market for enhanced AI applications across various sectors. The concept of autonomy in AI extends beyond the model itself, involving significant user interaction and product design. This underscores the importance of post-deployment monitoring over prescriptive oversight methods, influencing AI policy and development strategies. Additionally, there are opportunities to create vertical AI solutions that surpass existing SaaS companies by automating tasks including those traditionally performed by human operators. The potential for innovation is vast, with prospects for up to 300 new AI unicorns across different industries as businesses realize the benefits of integrating deep domain expertise into their AI systems and managing organizational change effectively. Keywords: #phi4, AI agents, AI policy, Anthropic, Claude Code, SaaS, SaaS unicorns, autonomy, change management, deployment, domain expertise, enterprise software, enterprise software Keywords: AI agents, greenfield, greenfield opportunity, model capability, oversight, oversight strategy, regulatory constraints, software engineering, trust, user interaction, verticals
    The google logo   garryslist.org 13 days ago
3069.  HN IBM is the latest AI casualty
IBM stock suffered a notable decline of nearly 13.2% on a particular Monday following concerns from investors about the potential disruption AI advancements could cause to its core business, particularly in modernizing COBOL systems. This downturn was triggered by Anthropic's announcement that its Claude Code tool can efficiently automate and update legacy COBOL codebases—a crucial service for IBM’s mainframe offerings. The broader industry is apprehensive about how AI might reduce costs and enhance productivity in roles traditionally filled by humans, posing a threat to companies dependent on such operations. This concern is compounded by ongoing trends where rapid AI advancements have led to declines across various sectors, including cybersecurity stocks. Consequently, IBM's shares have fallen over 24% year-to-date due to these market pressures and the evolving capabilities of AI technologies. Keywords: #phi4, AI, Anthropic, COBOL, Claude Code, IBM, cybersecurity, developer productivity, legacy modernization, mainframe systems, security vulnerabilities, stock market, technical debt, transaction processing
    The google logo   www.cnbc.com 13 days ago
   https://claude.com/blog/how-ai-helps-break-cost-barrier   13 days ago
   https://news.ycombinator.com/item?id=47127565   13 days ago
3075.  HN IBM down 13% after Anthropic launches an AI tool that converts old COBOL code
IBM experienced a significant drop of over 13% in its stock value following the launch of an AI tool by Anthropic that converts outdated COBOL code into modern programming languages. Reported on chaos.social, this development underscores potential challenges IBM may face in maintaining its relevance as technological advancements continue to emerge. The situation is highlighted in the context of competitive pressures from innovative tools like Anthropic's, which could influence traditional tech giants' market positions. Furthermore, there's a note emphasizing users' need to enable JavaScript or use native applications for effective access to Mastodon's web platform, though this point seems tangential to IBM's stock decline. Overall, the text addresses both IBM's immediate financial impact and broader industry dynamics driven by advancements in AI technology. Keywords: #phi4, AI tool, Anthropic, COBOL code, IBM, JavaScript, Mastodon, Stephan Dörner, chaossocial, down 13%, native apps, platform, web application
    The google logo   chaos.social 13 days ago
   https://www.cnbc.com/2026/02/23/ibm-is-the-la   13 days ago
3097.  HN Anthropic: Industrial-scale distillation attacks on our models by Chinese AI
Anthropic has identified industrial-scale attacks targeting their AI models, believed to be orchestrated by Chinese entities using advanced AI techniques. The notice also informs users that certain functionalities on x.com are inaccessible due to disabled JavaScript in their browsers. To ensure full access to the site’s features, users are advised to enable JavaScript or switch to a supported browser. For further guidance, users can consult the Help Center for information regarding compatible browsers. This dual alert highlights both the security challenges faced by AI organizations and technical requirements for optimal website usability. Keywords: #phi4, Anthropic, Chinese AI, Help Center, JavaScript, browser, distillation attacks, industrial-scale, models, supported browsers, technical keywords, xcom
    The google logo   twitter.com 13 days ago
3176.  HN Google Suspended AI Users over Third-Party Tools
Google's suspension of users from its AI-powered IDE, Google Antigravity, underscores a critical shift within the AI industry regarding access to proprietary models. The company took action against developers who used subscription OAuth tokens with third-party tools like OpenClaw to bypass Google's official client, thereby accessing their AI models at reduced costs and violating terms of service. Beginning on February 12, 2026, without warning or an appeals process, these suspensions highlighted a strict enforcement policy that resulted in continued billing despite account bans, leaving users without recourse. Users faced immediate suspension when they integrated third-party tools like OpenClaw to connect models via OAuth, leading to reports of inconsistent support and unresolved issues. While forum discussions surfaced the systemic problems, Google's communication confirmed adherence to a "zero tolerance" stance on policy violations. In contrast, Anthropic adopted a more lenient approach by warning users and reversing accidental bans. This incident has broader implications for the AI sector as economic pressures push companies to restrict third-party access to proprietary models, given that open-source alternatives are increasingly competitive in terms of cost and performance. Developers dependent on closed-model subscriptions face the risk of abrupt service cessation. Consequently, this situation highlights the need for developers to manage dependencies carefully and consider open-source solutions that mitigate the risks associated with sudden access loss. Ultimately, Google's stringent enforcement sets a precedent within the AI market, urging developers to reassess their reliance on proprietary models and explore more sustainable options like alternative API keys or multi-provider routing services. This move signals a shift towards greater flexibility and sustainability in accessing AI technologies. Keywords: #phi4, AI, API keys, Anthropic, Antigravity IDE, Fuel, Gemini models, Google, OAuth tokens, OpenClaw, account bans, enforcement, open-source models, third-party tools
    The google logo   openclaw.rocks 13 days ago
3227.  HN Investigating an LLM generated C compiler
Anthropic's team has innovatively developed a C compiler using an LLM (Claude Opus version 4.6), demonstrating the capability of compiling complex programs such as the Linux kernel for various CPUs. This achievement, made after more than $20,000 in API calls, showcases support for significant portions of the C language, including rarely used features like complex floating-point types and universal character names. While the code is well-structured with an emphasis on abstractions, it incorporates unique elements such as 128-bit integer folding, which is not commonly supported by typical C compilers. Despite its sophistication, the compiler faces challenges in detecting semantic errors, a limitation attributed to its training data composed entirely of error-free source code. Built using Rust and comprising over 106K lines across 351 files, it includes detailed READMEs that offer valuable insights into its design and implementation processes, likely derived from editing prompts used for LLM guidance. Although some optimization phases are integrated, they remain unexecuted. The project underscores the potential of utilizing LLMs to develop non-trivial software but also highlights inherent challenges in error detection and functionality without significant manual oversight. This balance between innovation and limitation reflects both the capabilities and current boundaries of using large language models in complex software development tasks. Keywords: #phi4, Anthropic, C compiler, LLM (Claude Opus), Linux kernel, Rust, SSA form, constant folding, machine code, optimization, prompts, semantic constraints, source code, tokenization
    The google logo   shape-of-code.com 14 days ago
3234.  HN Is Anthropic releasing a new CLI tool?
The text inquires about Anthropic's potential release of a new command-line interface (CLI) tool, highlighting their commitment to taking user feedback into serious consideration. In addition, it suggests including an email address for those interested in providing further input or seeking more information on this matter. This indicates Anthropic’s proactive approach in engaging with its user base and refining its tools based on community insights. Keywords: #phi4, Anthropic, CLI tool, contact, email address, feedback, input, keywords, new, read, releasing, seriously, technical
    The google logo   github.com 14 days ago
3285.  HN The Perversion of AI Discourse
The text critiques the current discourse around artificial intelligence (AI), pointing out issues such as exaggerated claims driven by investor pressures and a lack of acknowledgment regarding its limitations. The author emphasizes that this environment hinders honest discussions about AI's shortcomings, particularly in software engineering areas like pattern detection, learning from past errors, and coding specificity challenges. Despite these criticisms, the text acknowledges that AI can enhance productivity when used alongside other tools—for instance, autocomplete features—that support but do not replace human expertise. The author argues for greater transparency regarding AI’s limitations to encourage genuine innovation, emphasizing that while AI can be a significant aid, it cannot replace the nuanced understanding and experience that engineers develop over time. Keywords: #phi4, AI discourse, AI innovation, Anthropic, C-compiler, ThePrimeagen, code maintainability, disingenuous claims, engineering experience, full handoff, in-context assistance, over-hype, pattern detection, productivity tools, real-world applications, software development, software engineering
    The google logo   codethoughts.io 14 days ago
3311.  HN ANTHROPIC_MAGIC_STRING_TRIGGER_REFUSAL
The document details the implementation of a new safety feature for Claude API models, starting with Claude 4, focusing on managing policy-violating content during real-time streaming interactions. When classifiers identify such violations, they prompt a "refusal" stop reason within the response to ensure compliance, without additional explanatory messages. This necessitates developers handling user-facing communications and resetting conversation contexts accordingly. The key elements include understanding that when the API returns `stop_reason: "refusal"`, it indicates content policy violation. Developers must reset conversations post-refusal while considering that output tokens generated up until refusal are billed. In instances of frequent refusals, transitioning to Sonnet 4 is advised due to its different restrictions. Implementation requires developers to script checks for API response refusals and manage state resets effectively. Testing protocols include using a specific string designed for evaluating the handling of refusals. Various types of refusals are identified: streaming classifier refusals, API input validation errors indicated by 400 error codes, and model-generated refusals with standard text responses. Best practices suggest monitoring refusal instances, implementing automated state resets, providing tailored user messages, and tracking refusal patterns to enhance prompt quality. The document concludes with a note on future updates aiming to unify the handling of different refusal types across API versions. This ensures developers can effectively manage policy compliance while maintaining an optimal user experience. Keywords: #phi4, API, Anthropic, Best Practices, Billing, Classifier, Context Reset, Error Handling, Future Models, Guardrails, Implementation Guide, Magic String, Migration Notes, Model-Generated RefusalsKeywords: Anthropic, Policy Violations, Refusal, Refusal Types, Response Format, Safety Filters, Stop Reason, Streaming, Test String, Trigger, Usage Metrics, User Messaging
    The google logo   platform.claude.com 14 days ago
3351.  HN Velocity Is Dead: AI-Generated Compilers and the Future of Software
Ray Myers' article examines the transformative impact of AI-generated compilers in software engineering, challenging the traditional emphasis on rapid code production. It highlights recent advancements that enable large-scale compiler generation with minimal human input and cost, though these tools are not yet practical for real-world application. The industry is shifting its focus from output quantity to quality and effectiveness, aligning with Continuous Delivery (CD) principles that prioritize frequent, secure deployments through automated testing and modular design. The article acknowledges AI's success in creating compilers due to their testability but points out the difficulty of applying similar techniques to complex enterprise systems, which require legacy refactoring and a deeper understanding of existing code. Myers emphasizes the need for software development to concentrate on delivering high-quality, functional products rather than merely increasing code volume. In this evolving landscape, successful software teams will be distinguished by their ability to provide effective solutions, making delivery quality more critical than speed. Keywords: #phi4, AI-Generated Compilers, Anthropic, Continuous Delivery, DOOM, Delivery, Go, Legacy Systems, Linux Kernel, OpenHands, Opus, Quality, Rust, Software Engineering, Testability, Velocity
    The google logo   www.openhands.dev 14 days ago
3370.  HN Half the AI Agent Market Is One Category. The Rest Is Wide Open
Anthropic's data highlights a significant concentration in AI tool usage within the software engineering sector, which accounts for 49.7% of all tool calls, leaving industries like healthcare, legal, and education—each below 5%—as key areas for growth through vertical-specific AI solutions across roughly 300 domains. Industry experts Han Wang and Aaron Levie emphasize that success in this field requires integrating deep domain expertise into AI tools to navigate complex workflows and regulatory landscapes effectively. The focus should be on developing agentic software that uses proprietary data to enhance user processes while facilitating customer adaptation, thereby creating more defensible solutions. Despite the advanced capabilities of AI models like Claude, which can solve tasks significantly faster than humans, their deployment is constrained by trust issues. Trust in AI systems builds over time as users move from approving every action to engaging in active monitoring, reflecting a co-constructed autonomy developed through interactions among models, users, and products. Current oversight practices often involve human intervention, with 73% of tool calls requiring human input, though excessive approval demands can impede productivity without improving safety. Therefore, policies should prioritize effective monitoring over strict interaction guidelines. The underutilization of AI in various sectors suggests substantial opportunities for growth and innovation. Entrepreneurs targeting specific industries, integrating domain expertise into their solutions, and effectively managing customer transitions are well-positioned to spearhead the next wave of enterprise software development. Keywords: #phi4, AI agents, AI policy, Anthropic, Claude Code, SaaS, SaaS unicorns, autonomy, change management, deployment, domain expertise, enterprise software, enterprise software Keywords: AI agents, greenfield, greenfield opportunity, model capability, oversight, oversight strategy, regulatory constraints, software engineering, trust, user interaction, verticals
    The google logo   garryslist.org 14 days ago