Scraper
Spider


2026-03-09 02:47
postgresql
postgresql stories from the last 14 days
8.  HN Show HN: Mir – Portable participation history across platforms (open sandbox)
Mir (Memory Infrastructure Registry) is a platform for querying user behavioral histories across multiple platforms without direct inter-platform communication, letting users carry a participation history onto any new platform rather than starting from zero, while preserving the anonymity of the partner identities involved in data sharing. Partners submit events, such as transactions completed or accounts created, via an API; these submissions build up a detailed participation history. Users can try MIR in a sandbox via a magic-link login, which issues an immediate API key for testing, and can simulate event submissions and resolve user histories using straightforward `curl` commands or JavaScript fetch requests. The stack comprises Express, TypeScript, PostgreSQL, and Redis; the sandbox is isolated from production systems and capped at 5,000 events per day. Signing up by email for a magic link eliminates the need for passwords, making it easy for both developers and end-users to explore how MIR aggregates cross-platform participation history. Keywords: #phi4, API, Express, Memory Infrastructure Registry, Mir, PostgreSQL, Redis, TypeScript, accountcreated, behavioral history, cross-system, eventType, events, identity resolution, magic link, participation history, platforms, ratingreceived, reviewsubmitted, sandbox, sandbox key, transactioncompleted, trust model, userExternalId
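As a hedged illustration of the event submissions described above, a partner might POST a JSON payload along these lines. The `eventType` and `userExternalId` field names appear in the story's keywords; the other fields and values are assumptions:

```json
{
  "userExternalId": "user-8c31",
  "eventType": "transactioncompleted",
  "occurredAt": "2026-03-08T21:15:00Z",
  "metadata": { "amount": 42.50, "currency": "USD" }
}
```

The sandbox API key would presumably accompany each request, for example in an Authorization header.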
    myinternetreputation.org 2 hours ago
38.  HN Show HN: Engram — a brain-inspired context database for AI agents
Engram is a brain-inspired context database designed to enhance AI agent memory by emulating human cognitive processes. It addresses context collapse and knowledge isolation in Large Language Models (LLMs) through an incremental, associative storage approach, storing information as atomic "knowledge bullets" within a concept graph. Related concepts reinforce each other, enabling context reconstruction when necessary. The system supports multi-agent use, accepting updates from various models and platforms for seamless knowledge sharing. Key features include reinforcement to prioritize useful knowledge while letting less relevant data fade away, cross-model portability for integration with different LLMs such as ChatGPT and Claude, advanced context management to prevent isolation, and structured knowledge storage with a feedback-driven adaptation loop. Engram's architecture centers on "Bullets" and "SchemaNodes": discrete knowledge units with usage tracking, and abstract patterns distilled from repeated experiences. "Delta Operations" make context updates atomic, preserving memory integrity, and a lock mechanism keeps concurrent updates by multiple agents consistent. Bullets transition through active, archived, and purged states based on capacity thresholds and usage metrics. Engram integrates with platforms like Claude via MCP servers and with OpenAI function calling, and offers command-line tools for context management and health monitoring. Its overall functionality spans ingestion, materialization, delta operations, lifecycle management, re-extraction, configuration, health checks, and integrations, exposed through a modular API with endpoints for content addition and retrieval, decision recording, context recall, and delta operation tracking. 
Its data model comprises "Bullets," representing atomic knowledge units; "SchemaNodes" capturing abstract patterns; and "DeltaOperation" tracking graph changes as atomic mutations. Configuration is managed via environment variables or a .env file, with the system developed in Python. The architecture draws inspiration from Agentic Context Engineering (ACE) and cognitive neuroscience principles like memory reconsolidation, schema theory, and forgetting curves to enhance functionality. Engram is MIT-licensed, with support available for large-scale deployments through paid services by its developers. Keywords: #phi4, AI agents, Docker, Engram, GDPR, LLM sessions, LangGraph integration, PostgreSQL, SQLite, agent handling, archiving, audit trail, capacity metrics, concept graph, configurations, consolidation engine, context database, context engineering, data lifecycle, data model, deduplication, delta history, embeddings, environment variables, forgetting curve, function calling, health, ingestion, integrations, knowledge reinforcement, lifecycle management, materialization engine, memory systems, multi-agent updates, neuroscience, persistent memory, polling, re-extraction, real-time events, reconsolidation, rollback, salience decay, schema formation, schemas, server health
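The Bullet lifecycle described above (active bullets archived once capacity thresholds are exceeded, least-used first) can be sketched in Python. This is an illustrative model, not Engram's actual code, and all names are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Bullet:
    """Atomic knowledge unit with usage tracking (illustrative sketch)."""
    content: str
    use_count: int = 0
    state: str = "active"  # active -> archived -> purged

    def touch(self):
        self.use_count += 1  # reinforcement: recalled bullets stay active

def enforce_capacity(bullets, capacity):
    """Archive the least-used bullets once the active set exceeds capacity."""
    active = [b for b in bullets if b.state == "active"]
    if len(active) <= capacity:
        return
    for b in sorted(active, key=lambda b: b.use_count)[: len(active) - capacity]:
        b.state = "archived"

bullets = [Bullet("a"), Bullet("b"), Bullet("c")]
bullets[0].touch(); bullets[0].touch(); bullets[1].touch()
enforce_capacity(bullets, capacity=2)
print([b.state for b in bullets])  # the never-used bullet "c" is archived
```

A real consolidation engine would also purge long-archived bullets and distill repeated patterns into SchemaNodes; this sketch only shows the capacity-driven state transition.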
    github.com 8 hours ago
39.  HN Show HN: Pgroles – declarative PostgreSQL access control
Pgroles is a tool designed to simplify and streamline the management of PostgreSQL access controls through a declarative approach. It enables users to define roles, grants, and memberships in a YAML file, ensuring that any discrepancies between the desired state and the current database configuration are automatically corrected by generating precise SQL commands. This method effectively addresses common challenges associated with role management across various environments, such as errors from ad-hoc SQL scripts or outdated migration files. Key features of pgroles include its declarative management system, which allows for consistent application of privilege rules; a convergent diff engine that aligns the database state with defined manifests and revokes stale permissions; and a dry-run mode that lets users preview changes without applying them. Additionally, it automatically manages default privileges for new tables, supports role membership management including inheritance and admin flags, and incorporates safe drop mechanisms to prevent accidental drops of roles tied to owned objects or active sessions. Primarily aimed at platform teams, database administrators (DBAs), and those responsible for managing multiple PostgreSQL environments, pgroles significantly simplifies access control administration by offering a structured and error-resistant approach. Keywords: #phi4, Pgroles, PostgreSQL, SQL, YAML, access control, database, declarative, diff engine, dry-run mode, grants, memberships, privilege management, profiles, role membership, roles, safe drops
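A declarative manifest of the kind described might look roughly like this. The actual pgroles YAML schema may differ, so treat every key name here as an assumption:

```yaml
roles:
  - name: app_readonly
    login: false
  - name: alice
    login: true

grants:
  - role: app_readonly
    schema: public
    privileges: [SELECT]

memberships:
  - role: app_readonly
    members: [alice]
```

Running the diff engine against this manifest would then emit only the SQL needed to converge the database on the declared state, revoking anything stale.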
    hardbyte.github.io 8 hours ago
47.  HN Blacksky AppView
Blacksky's AppView is a customized adaptation of the AT Protocol reference implementation by Bluesky Social PBC, designed to power their own API service with an emphasis on transparency and potential enhancements for other communities, though it does not accept external contributions or issues. Key modifications include changes in `packages/bsky` for appview logic, `services/bsky` for runtime configuration, and a unique custom migration. The built-in TypeScript Firehose consumer is replaced by the Rust-based indexer, rsky-wintermute, which supports parallel queue processing to enhance performance at scale. In terms of performance and operational improvements, optimizations such as LATERAL JOIN query enhancements in PostgreSQL significantly boost user feed efficiency. Additionally, a Redis caching layer helps reduce database load but faces challenges with timestamp serialization issues. Operational enhancements focus on server-side enforcement of notification preferences, solving JWT authentication problems, and JSON sanitization to prevent parsing errors. Community features are tailored for Blacksky's specific needs, supporting private posts infrastructure within the AppView instead of individual PDSes (Personal Data Stores) and implementing a separate membership database for access control through membership gating. The architecture integrates several components: rsky-wintermute handles event indexing and backfill using PostgreSQL; bsky-dataplane serves as a gRPC data layer over PostgreSQL; bsky-appview provides an HTTP API server; and Palomar offers full-text search capabilities. Setting up Blacksky's AppView requires Node.js 18+, pnpm, PostgreSQL 17 with the appropriate schema, and optionally Redis and OpenSearch. The process involves using `pnpm` to install dependencies, build the project, and run both the dataplane and appview servers with specific environment variables. 
Operating at scale brings challenges: a full-network backfill takes 2-4 weeks depending on conditions, though real-time live indexing works from day one. Issues addressed include data corruption, JSON format sensitivity, notification table bloat, and queue management problems. Synchronizing with upstream involves adding the reference repository as a remote, fetching updates, and resolving conflicts, primarily within the appview logic. The system is dual-licensed under MIT and Apache 2.0. Keywords: #phi4, API server, AT Protocol, AppView, Blacksky, Bluesky Social PBC, HTTP endpoints, JSON sanitization, OpenSearch, Palomar, PostgreSQL, Redis caching, Rust indexer, TypeScript consumer, WebSocket subscription, backfill architecture, community posts, data-plane server, firehose consumer, gRPC, membership gating, moderation labels, operational tooling, performance optimization, resource requirements, rsky-wintermute
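The LATERAL JOIN feed optimization mentioned above typically follows the standard top-N-per-followed-account pattern. This is a generic sketch with invented table and column names, not Blacksky's actual query:

```sql
-- For each account the viewer follows, fetch only their 10 newest posts.
SELECT p.*
FROM follows f
JOIN LATERAL (
    SELECT *
    FROM posts
    WHERE posts.author_did = f.subject_did
    ORDER BY posts.indexed_at DESC
    LIMIT 10
) p ON true
WHERE f.actor_did = $1;
```

With an index on `(author_did, indexed_at DESC)`, each LATERAL iteration becomes a short bounded index scan, avoiding a sort over every followed author's full post history.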
    github.com 9 hours ago
   https://gregpak.net/2025/11/13/how-and-why-i-   8 hours ago
   https://notes.nora.codes/atproto-again/   8 hours ago
   https://bsky.app/profile/bad-example.com/post/   8 hours ago
   https://constellation.microcosm.blue/   8 hours ago
   https://bsky.app/profile/himself.bsky.social/post&   8 hours ago
   https://docs.blacksky.community/list-of-our-services   7 hours ago
   https://pdsls.dev/at://did:plc:zjbq26wybii5ojoypks   7 hours ago
   https://news.gallup.com/vault/315566/gallup-vault-   6 hours ago
   https://arxiv.org/html/2408.12449   6 hours ago
   https://whtwnd.com/bnewbold.net/3lo7a2a4qxg2l   3 hours ago
   https://blackskyweb.xyz/   3 hours ago
   https://bsky.app/profile/mackuba.eu/post/3m2j   3 hours ago
   https://bsky.app/profile/jay.bsky.team/post/3   3 hours ago
   https://news.ycombinator.com/item?id=45018773   3 hours ago
   https://www.microcosm.blue/   3 hours ago
   https://reddwarf.app/   3 hours ago
   https://news.ycombinator.com/item?id=47302514   3 hours ago
64.  HN Pg_plan_advice: Plan Stability and User Planner Control for PostgreSQL?
Robert Haas has introduced a comprehensive patch set for PostgreSQL 19 that centers on enhancing plan stability and giving users more control over the planning process through three new contrib modules: `pg_plan_advice`, `pg_collect_advice`, and `pg_stash_advice`. These modules aim to make query execution plans more predictable by letting users create "plan advice" strings, which specify the desired structure of a query plan, promising both consistency in plan selection and the ability to investigate alternative strategies without altering application code. The primary module, `pg_plan_advice`, generates and applies these advice strings, granting users influence over planner decisions. For sustained or system-wide adjustments, the `pg_stash_advice` module can automatically apply stored advice based on query identifiers. The patch is designed with a clear separation between mechanism and policy, allowing future enhancements to introduce varied methods for matching queries and storing advice. Despite its potential benefits, especially for database administrators managing extensive systems, the technology remains at an early stage (version 1.0) with certain limitations, and Haas encourages further scrutiny and testing before it is considered for inclusion in PostgreSQL 19. Feedback has highlighted concerns about complicating planner code and conflicting with PostgreSQL's traditional opposition to query hints, while also acknowledging its potential utility. Keywords: #phi4, EXPLAIN, HASH_JOIN, MERGE_JOIN_PLAIN, PostgreSQL, contrib modules, dynamic shared memory, pg_plan_advice, pg_stash_advice, plan advice string, plan stability, query planning, user planner control, version 1.0 technology
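The story's keywords hint at advice tokens such as HASH_JOIN and MERGE_JOIN_PLAIN. Purely as a hypothetical sketch of the workflow described (the real syntax, option names, and settings are not final and may differ entirely):

```sql
-- Ask the planner for its chosen plan in the usual way:
EXPLAIN SELECT * FROM orders o JOIN customers c USING (customer_id);

-- pg_plan_advice might emit an advice string describing that plan, e.g.:
--   HASH_JOIN(o c)
-- which could later be supplied back to pin the same join strategy, or
-- stored by pg_stash_advice and auto-applied by query identifier.
```

The separation of mechanism and policy means the matching and storage of such strings is meant to be pluggable rather than fixed by the core patch.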
    rhaas.blogspot.com 12 hours ago
110.  HN Show HN: SteerPlane – Runtime guardrails for AI agents (cost limits, loops)
SteerPlane is a runtime guardrail system designed to ensure autonomous AI agents operate within predefined constraints, thereby mitigating risks associated with their operation. Its core features include enforcing cost limits to prevent excessive spending during each agent run and employing sliding-window pattern detection for real-time loop identification and interruption of repetitive behaviors. Additionally, it imposes step caps to control resource consumption and collects comprehensive telemetry data detailing every action taken by an agent, such as action names, tokens used, costs incurred, latency, and status. This information is accessible through a real-time Next.js-based dashboard that provides live monitoring capabilities with auto-refreshing visual timelines and cost breakdowns. SteerPlane offers SDKs in both Python and TypeScript, installable via pip or npm, and includes robust exception handling to address issues like over-budget scenarios, loop detections, and step limit breaches. Its architecture features an AI agent interfaced through the SteerPlane SDK with a FastAPI server that stores data in PostgreSQL and displays analytics on a Next.js dashboard. The system provides comprehensive setup and operational instructions for starting APIs, running demo agents, and more, with a well-structured project layout encompassing SDKs, backend API, database management, and user interface components. Moreover, it includes documentation to assist contributors in enhancing the platform further. Released under the MIT license, SteerPlane aims to facilitate safe AI agent deployment by preventing incidents due to misconfigurations or uncontrolled behavior. 
Keywords: #phi4, AI agents, API, FastAPI, Nextjs, PostgreSQL, Python, SDK, SteerPlane, TypeScript, architecture, contributing, cost limits, dashboard, decorator, documentation, exception handling, infinite loops, license, loop detection, project structure, real-time monitoring, roadmap, runtime guardrails, step caps, telemetry
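The sliding-window loop detection described above can be sketched in a few lines: flag a run when the most recent window of actions exactly repeats the window before it. This is an illustrative sketch, not SteerPlane's actual code:

```python
def looping(actions, window=3):
    """Return True if the last `window` actions repeat the `window` before them."""
    if len(actions) < 2 * window:
        return False  # not enough history to compare two full windows
    return actions[-window:] == actions[-2 * window:-window]

trace = ["search", "read", "summarize"] * 2
print(looping(trace))  # the same 3-step pattern back to back -> True
print(looping(["plan", "act", "observe", "act", "observe", "reflect"]))  # False
```

A runtime guardrail would call such a check after every recorded action and raise a loop-detection exception (alongside cost and step-cap checks) when it fires.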
    github.com 17 hours ago
127.  HN Show HN: SchemaSight – Chat with your database schema locally using Ollama
SchemaSight is a Visual Studio Code (VS Code) extension that facilitates understanding complex or legacy database schemas by allowing developers to interact with their database schema in plain English within their editor, using the Ollama framework. It supports SQL Server, PostgreSQL, and MySQL databases, providing capabilities to query tables, views, stored procedures, functions, and business logic locally without exposing data externally. The extension employs a local-first approach where all operations are executed on the user's machine, ensuring data security and privacy. Key features of SchemaSight include a guided onboarding flow within VS Code for setting up database connections and indexing schema objects, options to modify chat models, and re-index when necessary. It also offers transparency by showcasing how answers are generated through context and retrieval visibility. The extension’s architecture is designed with a clear separation of concerns across repositories, services, and handlers, emphasizing testability with unit-tested components using mocks. SchemaSight can be installed from the VS Code Marketplace or directly from source via npm. The development structure prioritizes easy maintenance and extensibility, assigning specific roles to each component for clarity and efficiency. Recommended models like llama3.1:8b are suggested, with alternatives available for handling larger stored procedures. The project is distributed under the MIT License, allowing broad use and modification rights. Keywords: #phi4, ChatHandler, Indexer, LanceDB, MessageRouter, MySQL, Ollama, PanelManager, PostgreSQL, RAG pipeline, RagPipelineService, React webview, SQL Server, SchemaSight, SecretStorage, Transformersjs, VS Code extension, architecture, business logic, database schema, development host, embeddings, indexing, legacy databases, local LLM, local-first, message-based API, model settings, retrieval, stored procedures, transparency
    github.com 19 hours ago
130.  HN Show HN: Aivaro – Open-source AI alternative to Zapier
Aivaro presents itself as an open-source, AI-driven alternative to Zapier, enabling users to create automated workflows using straightforward English descriptions. This platform aims to alleviate the high costs associated with conventional automation tools by allowing users to input simple task descriptions that are then transformed into functional workflows through artificial intelligence. Aivaro boasts over 20 integrations with popular services such as Google, Stripe, Slack, and Shopify, facilitating diverse automation possibilities across various platforms. Central to its user experience is a chat-first interface powered by AI technology like GPT-5, which swiftly translates user inputs into actionable workflows. The platform features a visual editor built on React Flow, offering a drag-and-drop interface for manual workflow adjustments, enhancing flexibility and customization. Additionally, Aivaro incorporates a human-in-the-loop approval mechanism that requires user consent before executing sensitive operations such as emails or financial transactions, thereby adding an extra layer of security. Further enriching its functionality are features like "for-each" iteration capabilities, which allow users to process data rows efficiently in spreadsheets and a smart variable resolution system designed for effective data management. The architectural foundation includes FastAPI for backend development, Next.js 14 on the frontend, and PostgreSQL as the primary database, with SQLite available for local development scenarios. Deployment is streamlined using Vercel and Railway platforms. Aivaro actively encourages community contributions, providing clear guidelines to facilitate the addition of new integrations and enhancements to existing features. This open-source project operates under an MIT license, inviting developers to participate in its growth and improvement. 
Keywords: #phi4, AI, Aivaro, FastAPI, GPT-5, MIT license, Nextjs, OpenAI API key, PostgreSQL, React Flow, Zapier, approval guardrails, deployment, drag-and-drop editor, human-in-the-loop, integrations, variable resolution, workflow automation
    github.com 19 hours ago
164.  HN "Design Me a Highly Resilient Database"
Designing a "highly resilient database" is a complex task that hinges on understanding various factors unique to each application's requirements rather than defaulting to specific technologies. Resilience in databases is influenced by data types, query patterns, consistency needs, availability demands, durability expectations, potential failure modes, and budget limitations. The notion of resilience as an isolated attribute is misguided; it must be contextualized within the specific use cases and environments where the database operates. Different databases excel under particular conditions due to inherent trade-offs, which are encapsulated in the CAP theorem—asserting that a distributed system can only guarantee two out of three properties: Consistency, Availability, or Partition Tolerance. For instance, Cassandra is well-suited for distributing large data volumes with adjustable consistency but falls short in applications requiring strict ACID compliance like financial ledgers, where PostgreSQL would be more appropriate due to its consistency and durability features. Selecting an inappropriate database can lead to severe consequences such as regulatory non-compliance or performance issues under specific workloads. The author's experience using CloudNativePG on Kubernetes for fintech illustrates a tailored approach that ensures resilience, consistency, and auditability—key aspects in regulated sectors. Ultimately, designing a resilient database requires a deep understanding of the application's specific needs rather than relying on generic product recommendations. Engineers must focus on asking precise questions to ensure their choice aligns with system requirements, thus enhancing reliability and preventing failures in production environments. This strategy underscores the importance of expertise in making informed decisions that cater to the critical demands of the system in question. 
Keywords: #phi4, ACID Compliance, Availability, CAP Theorem, Cassandra, CloudNativePG, Consistency Requirements, Data Model, Durability, Failure Modes, Fintech, Interview, PostgreSQL, Resilient Database
    nikogura.com a day ago
219.  HN Show HN: DBWarden – A database migration tool for Python/SQLAlchemy projects
DBWarden is an innovative database migration tool tailored for Python projects using SQLAlchemy. It streamlines the migration process through a minimalistic command-line interface and generates easily understandable SQL migrations, steering clear of large frameworks and intricate configurations typical in other tools. The primary features include automatic detection of SQLAlchemy models within a designated directory, generation of raw SQL migration files reflecting model alterations, straightforward review processes for these migrations, and efficient tracking of both migration history and database state with minimal initial setup via a configuration file (`warden.toml`). The standard workflow involves creating SQLAlchemy models, executing `dbwarden make-migrations "name"` to produce corresponding SQL from the models, reviewing this generated SQL, and subsequently running `dbwarden migrate` to implement these migrations. Additionally, DBWarden provides commands for initialization, rollback, migration history review, status checks, configuration viewing, schema inspection, and comparing existing models with the database. It is compatible with PostgreSQL, SQLite, and MySQL databases, requiring only a simple setup through specifying the SQLAlchemy URL in its configuration file. Despite being experimental, DBWarden incorporates numerous safety measures to safeguard connected databases during usage. The tool is available under the MIT License, ensuring open access for further development and use. Keywords: #phi4, CLI, DBWarden, MIT License, MySQL, PostgreSQL, Python, SQL migrations, SQLAlchemy, SQLite, configuration, database migration, declarative_base, documentation, experimental package, failsafes, init, make-migrations, migrate, migration history, models directory, raw SQL, rollback, wardentoml
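The summary states that configuration lives in `warden.toml` and only requires the SQLAlchemy URL; a minimal sketch might look like this, with every key name beyond the URL being an assumption:

```toml
[database]
sqlalchemy_url = "postgresql://app:secret@localhost:5432/appdb"

[paths]
models = "app/models"        # directory scanned for SQLAlchemy models
migrations = "migrations"    # where generated raw-SQL migrations are written
```

With that in place, the described workflow is `dbwarden make-migrations "name"`, a manual review of the generated SQL, then `dbwarden migrate`.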
    github.com a day ago
235.  HN Show HN: Dead Man's Switch – miss a check-in, alert your contacts
"Show HN: Dead Man's Switch" is a personal project designed to enhance user safety by alerting emergency contacts if the user fails to check in at scheduled intervals, which can be daily, weekly, or customized based on the user’s preference. It provides users with control over the grace period before notifications are sent out through email and SMS. The technical infrastructure includes a Node.js/Express backend paired with PostgreSQL for data storage. The frontend is implemented as a Progressive Web App (PWA), which supports Web Push notifications, thereby eliminating the necessity to distribute through app stores. Currently in early beta and invite-only stages, this project addresses safety concerns for individuals who spend significant time alone. Users access their accounts using an email and password. Keywords: #phi4, Dead Man's Switch, Express, Nodejs, PWA, PostgreSQL, SMS, Web Push notifications, alert, backend, beta, check-in, contacts, email, frontend, invite only
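The check-in rule described above (alert once the check-in interval plus its grace period has elapsed) reduces to a single comparison. A minimal sketch, not the project's actual code:

```python
from datetime import datetime, timedelta, timezone

def should_alert(last_checkin, now, interval, grace):
    """Alert contacts only after the check-in interval plus grace period."""
    return now - last_checkin > interval + grace

now = datetime(2026, 3, 9, tzinfo=timezone.utc)
last = now - timedelta(days=1, hours=13)
# Daily check-in with a 12-hour grace period: 37h since last check-in -> alert.
print(should_alert(last, now, timedelta(days=1), timedelta(hours=12)))  # True
```

A scheduler on the backend would run this check periodically and fan out email/SMS notifications when it returns True.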
    deadmansswitch.cloud a day ago
237.  HN Show HN: N8n-trace – Grafana-like observability for n8n workflows
**Summary** n8n-trace is a self-hosted observability platform designed specifically for n8n workflows, providing essential analytics and metrics without requiring outbound calls to n8n instances, ensuring privacy and compliance with GDPR by design. Aimed at teams managing multiple n8n environments, it offers centralized visibility into workflow performance through execution analytics, instance health monitoring, and a unified multi-instance dashboard. Key features include node-level success/failure rates, an optional Prometheus-style explorer for instance metrics, role-based access control (RBAC), audit logging, and GDPR-compliant data privacy practices. Delivered as a hardened Docker container running alongside PostgreSQL, n8n-trace integrates with n8n via workflows that push data to its database. Security measures incorporate Google Distroless images, JWT authentication, bcrypt password hashing, account lockout mechanisms, and strict Content Security Policies (CSP). While enhancing the built-in UI of n8n’s free version with advanced observability features, it is particularly suitable for users who do not have Enterprise access. The setup process involves cloning a GitHub repository, configuring environment variables, and deploying via Docker Compose. Developed by Mohammed Aljer under an MIT license, contributions to this community project are encouraged, with AI coding tools providing support in its development. Keywords: #phi4, Docker, GDPR compliance, Grafana-like, PostgreSQL, Prometheus, RBAC, analytics, audit logging, data privacy, deployment guide, environment variables, execution analytics, health check, instance monitoring, metrics, multi-instance dashboard, n8n, observability, security-conscious, self-hosted, workflows
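The node-level success/failure rates described above amount to grouping execution records by node. An illustrative sketch with an invented record shape (n8n-trace actually reads these from its PostgreSQL database):

```python
from collections import Counter

executions = [
    {"node": "HTTP Request", "status": "success"},
    {"node": "HTTP Request", "status": "error"},
    {"node": "HTTP Request", "status": "success"},
    {"node": "Set", "status": "success"},
]

totals, errors = Counter(), Counter()
for e in executions:
    totals[e["node"]] += 1
    if e["status"] == "error":
        errors[e["node"]] += 1

# Success rate per node = 1 - (errors / total executions)
rates = {node: 1 - errors[node] / totals[node] for node in totals}
print(rates)
```

In the real system this aggregation would be a SQL GROUP BY over pushed execution data rather than an in-memory loop, but the metric is the same.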
    github.com a day ago
   https://github.com/Mohammedaljer/n8nTrace   a day ago
274.  HN Plan management patches for Postgres 19
Robert Haas, a key contributor to PostgreSQL and Vice President at EnterpriseDB, has proposed an innovative patch set for PostgreSQL 19 featuring three new contrib modules—`pg_plan_advice`, `pg_collect_advice`, and `pg_stash_advice`. These modules are designed to provide users with enhanced control over query execution plans. The `pg_plan_advice` module creates a "plan advice" string that outlines the structure of an execution plan, enabling users to maintain consistent plans or adjust them for varying outcomes more precisely than traditional planner settings like `enable_hashjoin`. Extending this functionality, `pg_collect_advice` and `pg_stash_advice` modules offer robust mechanisms for collecting and applying advice. Specifically, `pg_stash_advice` can automatically apply predetermined plans to queries based on identifiers, further streamlining query management. By decoupling mechanism from policy, these modules are made pluggable, encouraging innovation and adaptability. Although they show potential in addressing operational challenges without necessitating application changes, this technology is in its early stages (version 1.0) and requires extensive review and testing before it can be considered for inclusion in PostgreSQL 19. Keywords: #phi4, EXPLAIN, HASH_JOIN, MERGE_JOIN_PLAIN, PostgreSQL, contrib modules, operational challenges, pg_plan_advice, pg_stash_advice, plan advice string, plan stability, query planning, system-wide behavior, user planner control
    rhaas.blogspot.com a day ago
279.  HN Show HN: JotSpot – a super fast Markdown note tool with instant shareable pages
JotSpot is a streamlined Markdown note-taking application designed to facilitate quick writing and seamless sharing of notes, focusing on reducing friction in user interaction. It incorporates key functionalities such as Markdown support, live preview capabilities, autosave features, and the ability to generate shareable links for easy dissemination. The tool is built using Flask, HTMX, and PostgreSQL, deployed on a self-hosted server setup, deliberately avoiding complex JavaScript frameworks to maintain simplicity. Users can begin with private drafts that automatically save, allowing them to publish these notes later as public documents accessible via an Explore page. The developer behind JotSpot invites feedback from fellow developers for potential enhancements or new features, emphasizing a collaborative approach to improvement and evolution of the tool. Keywords: #phi4, Explore page, Flask, HTMX, JotSpot, Markdown, PostgreSQL, autosave, developers, feedback, lightweight tool, live preview, notes, self-hosted server, shareable pages
    jotspot.io a day ago
   https://jotspot.io/api/v1/jots/text   a day ago
   https://jotspot.io/cli   a day ago
306.  HN Show HN: I couldn't scale my YouTube channels, so I built Shortgram
The developer encountered difficulties in scaling YouTube channels primarily due to the labor-intensive nature of recording and editing videos. To address these challenges, they developed Shortgram, a tool designed to transform long-form content into optimized short-form clips efficiently. This innovation aims to facilitate video production by automating the creation of viral clips using advanced technologies such as Supabase, Gemini, Claude, and Google Cloud Run. By leveraging these technologies, Shortgram seeks to significantly reduce the time and effort involved in producing engaging video content. The developer is now soliciting public feedback on this tool, reflecting a desire for a similar resource when initially launching their channels. Through this initiative, they hope to enhance the scalability of YouTube channels by making the production process more streamlined and less time-consuming. Keywords: #phi4, Claude, Gemini, Google Cloud Run, PostgreSQL, Shortgram, Supabase, YouTube, content, edge functions, editing, features, feedback, growth, jobs, optimizing, recording, scale, scheduling, solopreneur, video clips, viral, workflow
    shortgram.com a day ago
324.  HN The Internals of PostgreSQL
"The Internals of PostgreSQL," authored by Hironobu Suzuki, is a detailed guide published on September 26, 2015, that explores the internal mechanisms and subsystems of PostgreSQL, specifically focusing on versions 18 and earlier. The document has undergone several updates to include new features such as conflicts, replication slots, parallel query capabilities, and incremental backups, reflecting its comprehensive nature. Intended for both educational and commercial purposes, it allows non-commercial academic use freely while offering options like revenue sharing or full buyout for commercial entities. Hironobu Suzuki is a distinguished software engineer and an influential figure in the PostgreSQL community. He has authored various books related to databases and played significant roles within the Japan PostgreSQL Users Group. His work has been academically referenced and translated into Chinese as of 2019, demonstrating its broad impact. Suzuki retains copyright control over his guide, permitting free educational use while requiring contact for commercial exploitation under specific terms. He favors HTML format due to optimization benefits and independently manages his domain and server infrastructure. For inquiries about the document or related matters, Suzuki asks for social media verification and public communication through Twitter. Keywords: #phi4, Administration, Commercial Use, Conflicts, Copyright, Database System, Full Buyout, HTML Optimization, Hironobu Suzuki, Incremental Backup, Integration, Internals, Japan PostgreSQL Users Group, ML AI DBMS, Non-commercial Seminar, Open-source, Parallel Query, PostgreSQL, Replication Slots, Revenue Share, Subsystems
    www.interdb.jp a day ago
331.  HN "I built a spell checker for back end configuration mistakes."
Safelaunch is a tool designed to enhance backend reliability by preventing configuration errors from leading to production failures. It accomplishes this by validating the local development environment against an "environment contract" defined in an `env.manifest.json` file, ensuring all required variables are present and runtime versions match. This process helps identify missing or mismatched configurations before code is pushed to production, thereby reducing deployment-related issues. Installation of Safelaunch is straightforward using the command `npm install -g safelaunch`. To utilize it effectively, developers should first create an `env.manifest.json` file at their project's root to specify necessary environment variables and runtime versions. After setting up this manifest, they can run `safelaunch validate` to check their local setup against these specifications. The tool provides clear feedback on any discrepancies found during validation, enabling developers to address issues preemptively. Additionally, Safelaunch integrates seamlessly with GitHub Actions workflows, allowing it to block deployments automatically if validations fail. Developed by Orches, Safelaunch is specifically targeted at improving backend reliability through its robust environment validation features. Keywords: #phi4, API key, CI Integration, GitHub Actions, Orches, PostgreSQL, Redis, backend configuration, deployment block, environment contract, envmanifestjson, local environment, missing variables, npm install, production, runtime mismatches, runtime version mismatches, safelaunch, spell checker, validation
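The validation described above can be sketched in a few lines. This is an illustrative Python sketch, not Safelaunch's implementation; the manifest shape ({"required": [...], "runtime": {...}}) and variable names are assumptions, since the real `env.manifest.json` schema is not documented here.

```python
def validate_env(manifest: dict, env: dict, runtime: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means the check passes."""
    problems = []
    # Every required variable must be present and non-empty.
    for name in manifest.get("required", []):
        if not env.get(name):
            problems.append(f"missing required variable: {name}")
    # Runtime versions must match the manifest exactly.
    for tool, wanted in manifest.get("runtime", {}).items():
        actual = runtime.get(tool)
        if actual != wanted:
            problems.append(f"runtime mismatch: {tool} expected {wanted}, got {actual}")
    return problems

manifest = {"required": ["DATABASE_URL", "REDIS_URL"], "runtime": {"node": "20"}}
issues = validate_env(manifest, {"DATABASE_URL": "postgres://..."}, {"node": "18"})
```

Run in CI, a non-empty `issues` list would fail the build, which is how a deployment block in GitHub Actions falls out of the same check.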
    www.npmjs.com 2 days ago
344.  HN Show HN: DiggaByte Labs – pick your stack, download production-ready SaaS code
DiggaByte Labs, developed by an independent developer who is also a college student, provides a tool designed to streamline the setup of production-ready SaaS applications. Users can customize their tech stack by choosing from various components such as databases (including PostgreSQL and MySQL), authentication providers, payment integration options, UI libraries, and deployment targets. The service simplifies development by delivering a fully integrated ZIP file, eliminating much of the time typically required for initial configuration. A free tier is available, allowing users to select up to three modules without providing credit card information, while a Pro version costs $19 per project and offers unlimited module selection along with Stripe webhook configurations. Created independently, DiggaByte Labs encourages user feedback on its configurator and module offerings, aiming to simplify and accelerate the development process for developers. Keywords: #phi4, DiggaByte Labs, MongoDB, MySQL, PostgreSQL, Prisma, Pro tier, SaaS, Stack Configurator, Stripe webhooks, UI library, ZIP file, auth, code, college student, configurator, database schema, deploy target, feedback, indie dev, modules, payments setup, production-ready, stack, templates
    diggabyte.com 2 days ago
359.  HN Show HN: MarketplaceKit – Ship a rental marketplace in days instead of months
MarketplaceKit serves as a boilerplate framework designed to expedite the creation of rental marketplaces, featuring capabilities such as real-time messaging, reservation systems, and mutual review functionalities. It employs a configuration-driven approach with nine feature flags that enable easy customization across various aspects like pricing models, categories, themes, and emails. Built on a robust technology stack including Next.js 15, Tailwind CSS v4, Prisma, PostgreSQL, and Socket.io, it is adaptable to any rental or booking marketplace model. The product offers flexible acquisition options, including a one-time purchase with optional ongoing costs for additional services like hosting, image storage, maps, and AI features. MarketplaceKit supports diverse marketplace types, ranging from tools and vehicles to cameras and gear, with future plans to include buy/sell marketplaces and Stripe Connect integration. Licensing is available in three tiers: Starter (for personal or internal use), Pro ($399 for unlimited client projects), and Enterprise (granting reselling rights). Deployment is streamlined through the use of Vercel + Neon or a VPS with Docker, supported by comprehensive documentation within the repository to aid development and deployment processes. Keywords: #phi4, Cloudflare R2, Docker, MarketplaceKit, Nextjs, PostgreSQL, Prisma, SaaS product, Socketio, Stripe Connect, Tailwind CSS, TypeScript, boilerplate, config-driven, feature flags, rental marketplace, reservation system, white-label rights
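The config-driven approach with feature flags can be sketched as follows; the flag names below are invented for illustration (MarketplaceKit's nine actual flags are not listed in the summary), and the point is only the pattern: defaults merged with per-deployment overrides, with unknown flags rejected early.

```python
# Hypothetical default flag set -- not MarketplaceKit's real configuration.
DEFAULTS = {"messaging": True, "reviews": True, "instant_booking": False}

def resolve_flags(overrides: dict) -> dict:
    """Merge deployment overrides onto defaults, failing fast on typos."""
    unknown = set(overrides) - set(DEFAULTS)
    if unknown:
        raise ValueError(f"unknown flags: {sorted(unknown)}")
    return {**DEFAULTS, **overrides}

flags = resolve_flags({"instant_booking": True})
```

Failing fast on unknown keys is the part worth copying: silent typos in flag files are a classic source of "the feature is on in staging but off in prod" bugs.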
    kit.creativewin.net 2 days ago
361.  HN Useful queries to analyze PostgreSQL lock trees (a.k.a. lock queues)
The document explores advanced PostgreSQL queries designed for analyzing lock trees or lock queues essential in managing object-level and row-level locks, particularly vital for OLTP workloads such as those seen in web and mobile applications. Emphasizing the importance of understanding these locks to effectively troubleshoot performance issues, it suggests beginning with basic monitoring queries from PostgreSQL Wiki pages but advocates for more sophisticated queries to expedite troubleshooting processes by identifying "offending" queries that obstruct other transactions through lock queues or wait chains. The document references significant contributions, including a recursive CTE query developed by Bertrand Drouvot utilizing the pgsentinel extension and another refined by Victor Yegorov. This latter query integrates features like `pg_blocking_pids(..)` from PostgreSQL 9.6 and `pg_locks.waitstart` introduced in version 14, though it cautions against the performance impacts of `pg_blocking_pids(..)`, recommending its use for sporadic troubleshooting rather than constant monitoring. A detailed recursive CTE query is provided to construct a tree structure of blocking sessions, offering insights into session states, wait events, transaction durations, and more. The output format includes details such as session ID, blocking relationships, state, wait events, and the transactions involved in blocking. To demonstrate continuous monitoring capabilities, the author suggests running this query in a loop with `\watch 10`, which repeats every ten seconds, providing real-time examples of blocking sessions involving various database operations like updates, deletes, and selects. Contributions from Aleksey Lesovsky are acknowledged for reviewing and refining the script. The document concludes by introducing Nikolay Samokhvalov, CEO & Founder of PostgresAI, whose company focuses on creating tools to harmonize development and operations within DevOps environments. 
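The tree-building idea behind the recursive CTE can be re-expressed in Python over rows shaped like `pg_stat_activity` joined with `pg_blocking_pids(pid)`. This is a sketch under assumptions: the sample pids are made up, only the first blocker per session is followed, and deadlock cycles are ignored (PostgreSQL's deadlock detector would terminate one participant before such a cycle persisted).

```python
def lock_forest(sessions: dict[int, list[int]]) -> dict[int, list[int]]:
    """Map each root (unblocked) pid to the pids transitively waiting on it."""
    def root_of(pid: int) -> int:
        blockers = sessions[pid]
        # Follow the first blocker up to a session that is not itself blocked.
        return pid if not blockers else root_of(blockers[0])

    forest: dict[int, list[int]] = {}
    for pid, blockers in sessions.items():
        if blockers:
            forest.setdefault(root_of(pid), []).append(pid)
    return forest

# pid -> pids blocking it, as pg_blocking_pids would report them.
waiters = {101: [], 102: [101], 103: [102], 200: []}
tree = lock_forest(waiters)
```

Here session 103 waits on 102, which waits on 101, so the whole chain hangs off root 101; session 200 is idle and does not appear. The SQL version in the article does the same walk with a recursive CTE so it can run inside the server.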
Keywords: #phi4, DevOps, OLTP workloads, PostgreSQL, PostgreSQL 14, PostgreSQL 9.6, \watch command, blocking sessions, deadlock detection, exclusive access, lock manager, lock monitoring, lock trees, monitoring tools, object-level locks, performance impact, pg_blocking_pids, pg_locks, pg_stat_activity, pgsentinel extension, query optimization, recursive CTE, row-level locks, schema migrations, session activity, statement_timeout, transaction age, troubleshooting, wait event
    postgres.ai 2 days ago
367.  HN Show HN: CodeTrackr – open-source WakaTime alternative with real-time stats
CodeTrackr is an open-source alternative to WakaTime that emphasizes privacy while tracking coding activity. It provides real-time analytics and global leaderboards, along with a plugin system for developers seeking productivity insights without sacrificing data ownership. The platform supports compatibility with WakaTime's API, features a real-time dashboard utilizing WebSockets, and allows self-hosting through Docker. Users can also log in via GitHub or GitLab accounts. Built using technologies such as Rust, Axum, PostgreSQL, Redis, and Vanilla JS, CodeTrackr invites community feedback on security and architectural improvements. Additionally, users are encouraged to contribute plugins or IDE extensions, with the project accessible at its GitHub repository. Keywords: #phi4, Axum, CodeTrackr, Docker, GitHub, GitLab, IDE extensions, PostgreSQL, Redis, Rust, Vanilla JS, WakaTime, alternative, architecture, coding activity, leaderboards, open-source, plugin system, plugins, privacy-first, productivity insights, real-time analytics, security
    github.com 2 days ago
371.  HN Show HN: Cross-Claude MCP – Let multiple Claude instances talk to each other
Cross-Claude MCP is an application designed to facilitate communication between multiple Claude AI instances through a shared message bus, functioning similarly to Slack but specifically tailored for AI environments. It resolves the challenge of isolated instances by enabling cross-environment interactions, particularly beneficial when using tools like Claude Code across various terminals or platforms. The system operates in two distinct modes: Local Mode and Remote Mode. Local Mode is suited for single-machine setups utilizing stdio and SQLite, requiring no additional configuration beyond cloning the repository. In contrast, Remote Mode leverages HTTP and PostgreSQL to support team-based or cross-machine collaboration, with deployment options available on platforms such as Railway. The application offers a suite of functionalities critical for efficient inter-instance communication. Claude instances can register under unique identifiers like "builder" or "reviewer," which is essential for targeted messaging across named channels. Messaging capabilities include sending, receiving, and replying to messages, while large datasets are managed through a shared data store rather than being embedded in messages. Additionally, Cross-Claude MCP includes presence detection features that utilize heartbeat signals to monitor instance activity and manage their online/offline statuses. Intended for use with Claude Code, Claude.ai, and Claude Desktop, the tool supports various collaborative workflows, including code review coordination, parallel development efforts, and efficient data sharing mechanisms. By establishing a structured protocol encompassing registration, messaging, reply waiting, status updates, and more, Cross-Claude MCP ensures streamlined inter-instance interactions, making it an invaluable resource for teams working with multiple AI instances simultaneously. 
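The register/send/receive protocol described above can be sketched as a minimal in-memory bus. This is an illustrative Python model, not Cross-Claude MCP's code: the real system persists messages in SQLite or PostgreSQL and speaks MCP over stdio or HTTP, and the method names below are invented.

```python
from collections import defaultdict

class MessageBus:
    """Toy shared message bus: named instances post to named channels."""

    def __init__(self) -> None:
        self.instances: set[str] = set()
        self.channels: dict[str, list[tuple[str, str]]] = defaultdict(list)

    def register(self, name: str) -> None:
        # Instances register under unique identifiers like "builder" or "reviewer".
        self.instances.add(name)

    def send(self, sender: str, channel: str, text: str) -> None:
        if sender not in self.instances:
            raise ValueError(f"unregistered instance: {sender}")
        self.channels[channel].append((sender, text))

    def receive(self, channel: str) -> list[tuple[str, str]]:
        return list(self.channels[channel])

bus = MessageBus()
bus.register("builder")
bus.register("reviewer")
bus.send("builder", "code-review", "PR ready")
msgs = bus.receive("code-review")
```

What the sketch leaves out is exactly what the real system adds: persistence across processes, heartbeat-based presence, and a separate shared data store so large payloads never ride in messages.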
Keywords: #phi4, API key, CLAUDE.md instructions, Claude instances, Cross-Claude MCP, HTTP transport, JavaScript, PostgreSQL, SQLite, SSE stream, channels, code review, collaboration, communication, heartbeat, inter-instance messaging, local mode, message bus, parallel development, presence detection, remote mode, session close, shared data, staleness
    github.com 2 days ago
381.  HN A simplified PostgreSQL-backed ordered message queue with webhook delivery
Pypgmq is an advanced messaging system leveraging PostgreSQL as its backbone to manage ordered message queues with webhook delivery capabilities. It employs FastAPI to provide a RESTful API for topic-based messaging, allowing clients to send messages that are stored in the PostgreSQL database. This system features a sophisticated architecture consisting of a client, FastAPI API, the database itself, and a dedicated delivery worker. The database not only stores messages but also facilitates real-time processing using LISTEN/NOTIFY commands. Notifications trigger the delivery worker, which processes these alerts and delivers messages to registered webhooks through HTTP POST requests. This process includes a retry mechanism employing exponential backoff for handling failed deliveries, ensuring robustness. The system supports topic-based messaging where messages are partitioned, with strict ordering maintained within each partition per webhook. A dead-letter partition is used to handle messages that exceed the maximum number of retries. Pypgmq also allows for horizontal scaling via PostgreSQL’s FOR UPDATE SKIP LOCKED feature and supports direct SQL message insertion using a NOTIFY trigger for immediate delivery. For quick setup, users can opt for Docker or manual configuration steps involving starting PostgreSQL, installing dependencies, running migrations, setting up NOTIFY triggers, and launching both the API and worker components. Configuration adjustments such as database URL, maximum retries, backoff factors, and worker concurrency are made through an environment file (.env). The API provides endpoints to manage topics, webhooks, messages, and inspect dead-lettered messages, with interactive documentation accessible at `http://localhost:8000/docs`. For testing and maintenance purposes, a running PostgreSQL instance is required along with pytest for tests. Code quality is ensured through linting and formatting using Ruff. 
The project structure is organized into distinct directories focusing on API components, core logic, models, schemas, and worker functionalities, promoting modularity and maintainability. Keywords: #phi4, API, API endpoints, Docker, FastAPI, PostgreSQL, Ruff linting, SQL, architecture, configuration, dead-letter, dead-letter partition, direct SQL inserts, features, horizontal scaling, linting, message queue, project structure, retry, retry backoff, scaling, testing, webhook, webhook delivery
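The exponential-backoff retry behavior described above reduces to a simple schedule. A minimal sketch, assuming a base delay and multiplier of the kind pypgmq reads from its `.env` file (the concrete values here are illustrative, not the project's defaults):

```python
def backoff_schedule(base: float, factor: float, max_retries: int) -> list[float]:
    """Delay in seconds before each retry; after the last one, the message is dead-lettered."""
    return [base * factor**attempt for attempt in range(max_retries)]

delays = backoff_schedule(base=1.0, factor=2.0, max_retries=5)
```

With these numbers the worker would retry after 1, 2, 4, 8, and 16 seconds, then move the message to the dead-letter partition. In the real system many workers can drain the queue concurrently because rows are claimed with `FOR UPDATE SKIP LOCKED`, so no two workers pick the same message.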
    github.com 2 days ago
399.  HN Parse, Don't Guess
The text explores the complexities of JSON serialization and deserialization across various programming environments, focusing on challenges such as type precision and structural language differences. Initially, the author experimented with using regular expressions to treat strings as big integers in JavaScript during JSON parsing, which resulted in performance issues due to CPU-intensive operations. Recognizing these limitations, they transitioned to explicit type mapping through "upcasting," a method that converts string representations back into appropriate native types like big integers and dates at runtime, enhancing both performance and compatibility with evolving application schemas. This strategy is particularly beneficial in databases such as PostgreSQL, as used in Pongo and Emmett, where it facilitates schema versioning by ensuring backward and forward compatibility. This is achieved by transforming older data formats into newer structures without disrupting existing applications. The author underscores that explicit conversions provide a more robust solution than regex hacks for type inference, emphasizing the importance of directly addressing issues rather than attempting quick fixes. Reflecting on their journey, the author acknowledges how initial imperfect solutions can serve as valuable learning experiences that guide better design decisions in the future. They advocate for taking necessary shortcuts but stress the importance of revisiting and refining these approaches over time. The narrative concludes with a call to support Ukraine amidst ongoing conflict. 
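The "upcasting" idea can be sketched concisely: each field that was serialized as a string is converted back to its native type by an explicit per-field mapping, rather than by regex guessing. The article's code is TypeScript; this Python sketch mirrors the pattern with invented field names.

```python
from datetime import date

# Explicit field -> converter mapping; anything unlisted passes through unchanged.
UPCASTERS = {
    "account_id": int,               # big integers stored as strings in JSON
    "opened_on": date.fromisoformat, # ISO dates stored as strings
}

def upcast(raw: dict) -> dict:
    return {k: UPCASTERS.get(k, lambda v: v)(v) for k, v in raw.items()}

record = upcast({
    "account_id": "9007199254740993",  # 2**53 + 1: not representable as a JS number
    "opened_on": "2024-05-01",
    "name": "acme",
})
```

The sample id is one past JavaScript's `Number.MAX_SAFE_INTEGER`, which is precisely why such values are serialized as strings in the first place; the mapping restores them losslessly on read.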
Keywords: #phi4, Emmett, JSON, JavaScript, Parse, Pongo, PostgreSQL, SQLite, TypeScript, backward compatibility, bigints, database, dates, downcasting, dynamic environment, event sourcing, forward compatibility, mapping, performance issues, regex, schema versioning, serialization, statically typed languages, upcasting, validation
    event-driven.io 2 days ago
446.  HN Show HN: AI trading platform with 34% returns (3 months) – seeking acquisition
The text introduces an autonomous AI trading platform that delivered a 34% return in three months, significantly outperforming the S&P 500's 7%. Operating at a cost of $300 per month, this system utilizes machine learning models like LightGBM for daily stock ranking and JAX PPO for portfolio optimization. It offers features such as personal portfolio analysis, news summarization, and market regime detection to aid users in informed trading decisions. Built with technologies including FastAPI, React, PostgreSQL, among others, the platform enables live trading demonstrations accessible at acis-trading.com. The creator is interested in acquisition opportunities from brokerages or fintech companies and allows users to mirror trades on their preferred brokerage accounts while providing alerts for trade changes. This ensures users can maintain control over their investments without needing additional research, enhancing investment decision-making with minimal effort. Keywords: #phi4, AI management, AI trading, FastAPI, JAX PPO, LightGBM, ML architecture, PostgreSQL, React, acquisition strategy, alerts, autonomous portfolio, brokerages, fintech platforms, infrastructure, market regime detection, notifications, returns, robo-advisors, validation methodology, walk-forward validation
    acis-trading.com 2 days ago
450.  HN How We Model Clinical Trial Data When Every Trial's Data Model Is Different
Harbor addresses the complexities of managing diverse clinical trial data by employing a constrained Entity-Attribute-Value (EAV) model in PostgreSQL, which merges relational database structure with NoSQL flexibility. This strategy is augmented by Zod for application-layer validation, facilitating handling of sparsity, heterogeneity, dynamism, and user-defined schemas prevalent in clinical trials. Unlike traditional databases that necessitate extensive schema modifications and wide tables, the EAV model allows new attributes to be added dynamically without substantial database changes. To ensure data safety and integrity within this flexible framework, Harbor implements foreign keys, hierarchical constraints, and denormalization techniques, ensuring robust referential integrity. However, careful implementation is crucial to avoid typical challenges with the EAV model, such as complex queries and potential referential integrity issues. Type safety is maintained at the application layer using Zod due to compatibility limitations that prevent the use of database-level type enforcement extensions like pg_jsonschema. While the EAV pattern provides flexibility for subject data, other types of data are stored using traditional methods to circumvent the inherent drawbacks of the EAV approach. This hybrid model enables Harbor to meet the intricate demands of clinical trial data management while ensuring compliance and maintaining data integrity. Keywords: #phi4, 21 CFR Part 11, Application-layer Validation, Clinical Trials, Data Model, Data Schema Evolution, Dynamism, EAV (Entity-Attribute-Value), Google Cloud SQL, Heterogeneity, JSONB, NoSQL, PostgreSQL, Referential Integrity, Relational Databases, Sparsity, Study Metadata, Type Safety, User-definition, Zod, pg_jsonschema
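The combination of an EAV row shape with application-layer type checking can be sketched briefly. Harbor does this with Zod in TypeScript; the Python sketch below mimics the idea with a hand-rolled attribute registry whose names and types are invented for illustration.

```python
# Hypothetical attribute registry: each trial-defined attribute declares its type.
ATTRIBUTES = {
    "heart_rate": int,  # beats per minute
    "visit_date": str,  # ISO date kept as text at the EAV layer
}

def validate_eav(row: dict) -> dict:
    """Check an (entity, attribute, value) triple against the registry before insert."""
    attr = row["attribute"]
    if attr not in ATTRIBUTES:
        raise KeyError(f"unknown attribute: {attr}")
    if not isinstance(row["value"], ATTRIBUTES[attr]):
        raise TypeError(f"{attr} expects {ATTRIBUTES[attr].__name__}")
    return row

ok = validate_eav({"entity": "subject-17", "attribute": "heart_rate", "value": 72})
```

Because the database column holding `value` is deliberately loose, this check is the only place a wrongly-typed measurement can be rejected, which is why the article stresses keeping it at the application layer when extensions like pg_jsonschema are unavailable.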
    runharbor.com 2 days ago
463.  HN Web based IDE for prompt-and-pray 3D modeling
ModelRift is a web-based integrated development environment (IDE) specifically designed for 3D modeling, leveraging AI to generate OpenSCAD code from user descriptions. Created by a programmer who shifted focus from parametric CAD design to producing models for others, ModelRift addresses the challenges of generating complex geometries using traditional tools like ChatGPT and OpenSCAD. The platform includes an embedded AI chat that facilitates code writing, server-side 3D rendering previews, and visual annotations for iterative model improvements. Key technical features involve a frontend built with React and Three.js, a backend utilizing Node.js and PostgreSQL, and job management via pg-boss. ModelRift supports SVG import to engrave artwork directly onto models. Since its inception, the platform has added several functionalities: a side-by-side code editor, public model gallery access, user profiles, revision history tracking, and improved SVG import capabilities. These features cater to users seeking specific 3D models that are not readily available in existing databases like Printables. ModelRift operates on a freemium model, offering initial free credits followed by usage charges due to the costs of AI services. Demonstrating its rapid acceptance, the platform received its first payment just three weeks after launch, highlighting its market value and utility. The tool continues to evolve, driven by user feedback and community involvement, ensuring it meets the changing needs of its users. Keywords: #phi4, 3D modeling, AI chat, ChatGPT, Fusion 360, Gemini Flash, LLM costs, ModelRift, Nodejs, OpenSCAD, PostgreSQL, Puppeteer, React, STL export, SVG import, SaaS products, Server-Sent Events, Threejs, Web IDE, browser-based, credits, ffmpeg, parametric CAD, pg-boss
    pixeljets.com 2 days ago
467.  HN Show HN: Pg_sorted_heap–Physically sorted PostgreSQL with builtin vector search
Pg_sorted_heap is a sophisticated PostgreSQL extension designed to enhance query performance through physically sorted storage, eliminating the need for the pgvector dependency. This extension optimizes data retrieval by maintaining primary key order and employing per-page zone maps for efficient scanning. It facilitates faster bulk inserts and supports two vector types—svec (float32) and hsvec (float16)—for precise cosine distance calculations, utilizing an Inverted File Quantization (IVF-PQ) method to execute approximate nearest neighbor searches effectively. Performance evaluations demonstrate that sorted_heap significantly outperforms traditional btree and sequential scans, especially with larger datasets. The extension is compatible with PostgreSQL environments starting from version 17 and offers a suite of features such as data compaction, merging capabilities, scan statistics, and configurable settings. It also enhances vector search workflows by providing several Approximate Nearest Neighbor (ANN) methods including PQ-only or reranking for increased recall. Thorough testing across various scenarios ensures its scalability with high-dimensional data without being constrained by pgvector’s dimension limitations. Released under the PostgreSQL License, sorted_heap presents a robust solution for improving performance and functionality in database environments. Keywords: #phi4, IVF-PQ, PostgreSQL, benchmark, compact, cosine distance, extension, merge, performance, pg_sorted_heap, scan pruning, sorted_heap, vector search, zone map
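The cosine distance the extension computes over its svec/hsvec types is the standard definition; a pure-Python sketch (not the extension's optimized implementation) makes the metric concrete:

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    """1 - cos(angle between a and b): 0.0 for parallel vectors, 1.0 for orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

d_same = cosine_distance([1.0, 0.0], [2.0, 0.0])  # parallel vectors
d_orth = cosine_distance([1.0, 0.0], [0.0, 3.0])  # orthogonal vectors
```

Exact scans compute this distance for every candidate; the IVF-PQ path instead compares compressed codes and, in the reranking mode, recomputes the true distance only for the shortlisted rows to recover recall.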
    github.com 2 days ago
503.  HN Show HN: NPIScan search 9M U.S. healthcare providers from the NPI registry
NPIScan is a sophisticated tool designed to enhance the accessibility and efficiency of browsing the National Plan & Provider Enumeration System (NPPES) dataset, which comprises 9 million records of U.S. healthcare providers identified by unique National Provider Identifier (NPI) numbers. The platform allows users to conduct searches based on name, NPI number, specialty, or location and provides comprehensive profiles for each provider. Key trends highlighted in the data include a record-breaking 631k new NPI registrations in 2025, an increase in Behavior Technician providers, California having over 1.1 million healthcare providers, and only about 0.5% of these providers registering digital health endpoints. The technology underpinning NPIScan includes Next.js for frontend development, PostgreSQL as the database system, Meilisearch to enable full-text search capabilities, and Redis for caching purposes. This combination ensures rapid response times, achieving less than 40 milliseconds after initial cache warm-up when processing large datasets. The platform draws its data directly from CMS NPPES but is neither affiliated with nor endorsed by CMS or HHS. User feedback, particularly from those working within the healthcare data sphere, is actively solicited to enhance the tool's functionality and user experience. Keywords: #phi4, CMS lookup, Meilisearch, NPI registry, NPIScan, NPPES dataset, Nextjs, PostgreSQL, Redis, denormalized tables, digital health endpoints, full-text search, healthcare providers, public record
    npiscan.com 2 days ago
544.  HN Show HN: Detecting problem–market drift with an OpenClaw agent
OpenClaw is an AI-powered monitoring tool designed to detect shifts in problem-market alignment by analyzing external sources such as Hacker News, Google News, and X.com for emerging issues like churn or conversion challenges. It utilizes large language models (LLMs) like Claude/GPT to classify data against core product messaging, ensuring that market trends align with customer feedback. The tool generates daily strategic insights through automated reports delivered via a Telegram interface, which supports various commands for accessing trend analyses, summaries, and problem highlights. The setup requires Docker and Docker Compose for environment preparation, including a Postgres database with the pgvector extension. OpenClaw is modular and customizable, featuring components like a signal radar scanner for data acquisition, an AI agent managing Telegram interactions, and a PostgreSQL database for storage. Deployment involves cloning a repository, setting up environment variables, and configuring Docker Compose to launch necessary services. Users can interact with OpenClaw through Telegram commands that trigger data retrieval or database scans via SQL queries or Docker containers. The tool is designed for rapid deployment, with detailed setup instructions including network creation for Postgres and initialization of database tables. It encourages community involvement by allowing users to fork and enhance its framework, providing templates and example configurations for customization while ensuring the confidentiality of sensitive information like API keys. OpenClaw's structure supports open-source development under the MIT license, inviting contributions and improvements. Troubleshooting tips are provided to address common setup challenges, making it a versatile tool for strategic market analysis and alignment detection. 
Keywords: #phi4, AI Agent, API Keys, Cron Jobs, Docker Compose, Friction Signals, Market Drift, Nodejs, OpenClaw, PostgreSQL, Signal Radar, Telegram Digest, Trend Analysis
    github.com 3 days ago
545.  HN Kuberna Labs: AI's Economic Engine
Kuberna Labs is a pioneering platform that merges educational resources with advanced technological infrastructure to support developers in creating autonomous AI agents for decentralized networks. Its vision is to establish itself as the essential operating system for an agentic economy, integrating intelligent agents seamlessly with both Web2 and Web3 systems through cryptographic guarantees and decentralized frameworks. The mission focuses on empowering founders and enterprises to build autonomous agents that function at machine speed across various blockchains. The platform offers a robust educational component featuring comprehensive courses, live workshops, verifiable certificates, and a self-serve SDK in multiple programming languages, complemented by community forums for collaboration. Its Agent Builder IDE is browser-based, equipped with tools like syntax highlighting, AI-assisted code completion, GitHub integration, and isolated testing environments. Additionally, the Intent Marketplace allows users to post tasks using natural language, supported by features such as a competitive solver network, smart contract escrow, decentralized reputation systems, and dispute resolution mechanisms. Kuberna Labs' execution infrastructure is versatile, supporting multiple blockchains including Ethereum, Solana, NEAR, Polygon, and Arbitrum. It incorporates trusted execution environments through Phala Network and Marlin Oyster, utilizes zkTLS for Web2 data verification, and offers decentralized compute solutions with real-time logging and monitoring capabilities. The payment system accommodates cryptocurrency transactions in popular tokens and provides fiat on-ramp services, including recurring subscription billing. Architecturally, the platform is built using Solidity smart contracts that manage various functionalities such as escrow, payments, intent protocols, agent registration, and dispute resolution. 
Its backend leverages Node.js, Express, TypeScript, Prisma ORM, and message queuing tools like NATS, BullMQ, and Redis, while the frontend utilizes React with TypeScript. Kuberna Labs employs a comprehensive technology stack, including Solidity 0.8.20, OpenZeppelin v5, Hardhat for smart contracts; Node.js, Express, PostgreSQL, Redis for backend processing; JWT, bcrypt for authentication; and Docker for containerization. Testing is conducted using Mocha/Chai for contracts and Jest/Supertest for the backend. Prerequisites for setting up the platform include Node.js, PostgreSQL, and Redis, with setup instructions covering dependency management, repository cloning, environment configuration, database initialization, contract compilation, testing, and server execution. Smart contracts can be deployed on local networks, Sepolia testnet, or mainnet following provided guidelines. The API documentation outlines REST endpoints for functionalities like authentication, user management, course creation, and analytics while ensuring security with nonce-based Web3 authentication, OpenZeppelin's ReentrancyGuard, multisig wallet confirmations, remote attestation for TEE deployments, and data encryption. Community engagement is encouraged through contribution guidelines in CONTRIBUTING.md under the MIT License, reflecting Kuberna Labs' commitment to open-source collaboration. The platform was developed by the Kuberna Labs Team based in Kigali, Rwanda, positioning itself as a vital resource for developers aiming to leverage AI within decentralized financial systems and beyond. 
Keywords: #phi4, AI, Agent Builder IDE, Autonomous Agents, Contributing, DAO Treasury Management, Decentralized Networks, Docker, Education Platform, Escrow Funds, Execution Infrastructure, Hardhat, Intent Marketplace, JWT Authentication, Kuberna Labs, MIT License, Multi-chain Support, Multisig Wallet, Nodejs, OpenZeppelin, PostgreSQL, Prisma ORM, React, Redis, Remote Attestation, Security, Smart Contracts, Solidity, TEE Deployment, Web3, zkTLS Integration
    github.com 3 days ago
591.  HN Show HN: Database Subsetting and Relational Data Browsing Tool
Jailer is an advanced tool designed for efficiently managing large databases through subsetting, which enables users to browse and navigate schemas and data by creating manageable segments of the original database. This capability ensures referential integrity while facilitating navigation via relational links using its Data Browser feature. Jailer's Subsetter function allows developers and testers to create small yet consistent copies of production databases for development or testing purposes, effectively optimizing resource usage without needing full-sized database replicas. Recent updates have enhanced Jailer with features like structured JSON/YAML exports, a dark UI theme, DDL script generation via Liquibase, improved SQL analysis through dynamic filter conditions, and an upgraded user interface utilizing FlatLaf. The tool now includes cycle detection for parent-child relationships to manage nullable foreign keys efficiently. Additionally, it supports diverse databases through JDBC technology and offers tools for model migration and in-depth SQL analysis. Jailer significantly aids in testing complex applications by providing developers and testers with small, referentially intact subsets of production data, thus streamlining the creation of consistent test datasets based on defined extraction models. It also improves performance by facilitating the archiving of obsolete data and supports generating datasets in various formats including SQL, JSON, YAML, XML, and DbUnit. Keywords: #phi4, API, Browsing Tool, Code Completion, DDL, Data Browser, Database, DbUnit, Development, Embedded Database, Export, Extraction Model, FlatLaf, Foreign Key, Import, JDBC, JSON, Jailer, Liquibase, Metadata Visualization, MySQL, Oracle, Performance, PostgreSQL, Production Data, Read-Only Databases, Referentially Intact, Relationships, SQL, Schema, Subset by Example, Subsetting, Syntax Highlighting, Testing, XML, YAML
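Subsetting with referential integrity boils down to a closure computation: starting from seed rows, follow foreign keys until every referenced parent row is also in the subset. The sketch below is a simplified Python model (table names and FK edges are invented; Jailer drives the real traversal from its extraction model and also handles child-side and cyclic relationships):

```python
Row = tuple  # (table_name, primary_key)

def closure(seeds: set, fks: dict) -> set:
    """Smallest superset of `seeds` closed under the parent references in `fks`."""
    result, frontier = set(seeds), list(seeds)
    while frontier:
        row = frontier.pop()
        for parent in fks.get(row, set()):
            if parent not in result:
                result.add(parent)
                frontier.append(parent)
    return result

# Each row maps to the parent rows its foreign keys point at.
fks = {("order", 1): {("customer", 7)}, ("customer", 7): {("region", 2)}}
subset = closure({("order", 1)}, fks)
```

Exporting exactly this closure is what makes the resulting test database small yet referentially intact: no dangling foreign key can survive the traversal.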
    wisser.github.io   3 days ago
598.  HN Migrating a 300GB PostgreSQL database from Heroku to AWS with minimal downtime
In 2025, the Argos team undertook a successful migration of their approximately 300 GB PostgreSQL database from Heroku to AWS, aiming for minimal downtime while seeking performance improvements and cost reductions. Motivated by Heroku’s limitations—such as restricted PostgreSQL configuration control, an expensive scaling model, and declining support exemplified by Salesforce ceasing sales of Heroku Enterprise—the team opted for AWS RDS, which offered better monitoring tools, enhanced performance capabilities, and operational controls at a reduced cost due to direct infrastructure management. The migration was executed in two phases: initially, they set up a temporary PostgreSQL server on an EC2 instance using `wal-e` to restore a backup from Heroku, promoting it as the primary database with minimal downtime; subsequently, they established logical replication from this EC2 server to AWS RDS during a maintenance window since RDS did not support streaming WAL. This process required meticulous handling of sequence values and deep knowledge of PostgreSQL’s Write-Ahead Logging (WAL) mechanisms. Several challenges were encountered, including the necessity to reconstruct specific files like `backup_label` for recovery from Heroku's data and managing the complexities introduced by logical replication. A critical strategy involved using an EC2 "bridge" host to enable a rapid switch to the interim primary server before its promotion, ensuring minimal disruption. The migration’s success was attributed to rigorous planning, testing with multiple rehearsals, comprehensive documentation, transparent communication about downtime expectations, and resource over-provisioning during the transition. By March 2026, Argos had migrated all core services to AWS, realizing improved performance and cost efficiency. 
For others contemplating similar migrations, it is recommended to thoroughly test procedures, plan detailed cutover steps, and maintain rollback plans until the system stabilizes post-migration. Keywords: #phi4, AWS, EC2, Heroku, PostgreSQL, RDS, WAL, costs, discipline, downtime, execution, logical replication, maintenance window, migration, performance, sequence values
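One of the fiddly steps above — carrying sequence values across, since logical replication copies rows but not sequence state — can be sketched as generating `setval` statements from values captured on the old primary. The function name and safety margin are illustrative assumptions, not the Argos team's actual script:

```python
def setval_statements(sequences, margin=1000):
    """Build SQL to replay sequence state on the new primary.

    sequences: {sequence_name: last_value} captured from the old primary.
    The margin guards against writes that land between capture and cutover,
    so new inserts cannot collide with replicated rows.
    """
    return [
        f"SELECT setval('{name}', {value + margin});"
        for name, value in sorted(sequences.items())
    ]
```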
    argos-ci.com   3 days ago
617.  HN Show HN: Expose The Culture – Anonymous company culture reviews
"Expose The Culture" is a newly launched anonymous company culture review platform designed as a complement or alternative to Glassdoor, focusing exclusively on aspects of company culture such as management transparency, work-life balance, psychological safety, growth and development, and team collaboration. The platform prioritizes user anonymity by implementing several technical measures: it verifies users via one-time use of verified company emails (which are then converted into hashes), employs timing-obfuscation techniques for review submission, and suppresses metadata from companies with few reviews to prevent inference attacks. This approach allows the platform to protect user identities while providing candid insights about workplace environments. Additionally, "Expose The Culture" differentiates itself by avoiding monetization of reviewed companies and allowing users to browse content without needing an account. Developed using Laravel, Blade, PostgreSQL, Redis, and Postmark for transactional emails, the team behind the platform is actively seeking feedback specifically on its verification processes and methods for ensuring anonymity. Keywords: #phi4, Blade, Company culture, Laravel, PostgreSQL, Redis, anonymity, architecture, data deletion, feedback, hash, metadata suppression, reviews, timing-obfuscation, transactional email, verification
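The email-to-hash verification step might look like the following minimal sketch, assuming a keyed HMAC over a normalized address (the platform's actual scheme is not published):

```python
import hashlib
import hmac

def verification_hash(email: str, server_secret: bytes) -> str:
    """Store only this HMAC, never the address itself.

    The same email always maps to the same hash, so one-time use can be
    enforced without retaining the address; without the server secret,
    the hash cannot be reversed or precomputed via rainbow tables.
    """
    normalized = email.strip().lower()
    return hmac.new(server_secret, normalized.encode(), hashlib.sha256).hexdigest()
```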
    exposetheculture.com   3 days ago
622.  HN As AI Turns Prevalent, UI Becomes Irrelevant
As artificial intelligence (AI) integration deepens across various platforms, traditional user interfaces (UIs), which once held significant value, are diminishing in importance. The author illustrates this evolution through their experience of migrating a website to Cloudflare with the assistance of AI, showcasing how AI can streamline processes previously hindered by complex UI designs. This transition indicates that intricate UI features, while initially seen as competitive advantages, may now pose challenges for AI navigation and efficiency. The article highlights a broader trend where numerous tools are reverting to simpler, text-based interfaces to facilitate better human and AI interaction. For instance, Asciinema captures terminal sessions in plain text format, aiding large language models (LLMs) in generating demonstrations. Hurl manages HTTP requests through readable text files with integrated testing capabilities, obviating the need for intricate UIs like Postman. Mermaid diagrams use markdown-like syntax that is easily interpreted by AI systems. Pgschema adopts declarative SQL to handle database schemas without resorting to complex migration tools. Additionally, Streamlit transforms Python scripts into interactive web applications using straightforward natural language prompts. This shift back towards simpler interfaces underscores a strategic move in technology design, where the focus is on creating interfaces that are easily scriptable and manageable for both humans and AI agents. As AI becomes more embedded in workflows, there's an evident preference for interfaces that simplify interaction, enhancing productivity and reducing complexity. Keywords: #phi4, AI, Cloudflare, DNS, GitHub Actions, HTTP requests, Hurl, IDE, LLM, Mermaid, Notion, Obsidian, PostgreSQL, Python script, Streamlit, UI, Vercel, asciinema, build pipeline, dashboard, data tools, diagrams, frontend code, hosting, interactive, pgschema, task list, terminal sessions, web app
    www.star-history.com   3 days ago
626.  HN Show HN: Database Subsetting and Relational Data Browsing Tool
Jailer is a versatile database tool designed to facilitate subsetting and relational data browsing by allowing users to create consistent and referentially intact subsets in various formats, including SQL, DbUnit records, XML, JSON, and YAML. It enhances database performance through features such as archiving obsolete data and generating sorted datasets while providing an intuitive Data Browser for exploring table relationships. The tool includes a SQL console equipped with code completion and syntax highlighting to aid users in querying databases effectively. Jailer's wide compatibility stems from its use of JDBC technology, supporting numerous databases like PostgreSQL, Oracle, and MySQL, with specific enhancements for these systems. Over time, Jailer has received updates that introduced features such as JSON/YAML export options, a dark UI theme, Liquibase integration for generating DDL scripts, improved SQL analysis capabilities, and an API to enable programmatic data access. The installation process is user-friendly, offering distinct packages tailored for Windows or Linux users, alongside source code downloads for manual setup enthusiasts. The success of Jailer relies heavily on contributions from both developers who enhance its codebase and financial supporters, highlighting the collaborative effort that sustains this project's ongoing development and improvement. Keywords: #phi4, Amazon Redshift, Ant, CLI, DDL scripts, Data Browsing, Database, DbUnit, Exasol, Firebird, Git, H2, IBM Db2, Informix Dynamic Server, JDBC, JSON, Jailer, Liquibase, MariaDB, Microsoft SQL Server, MySQL, Oracle, PostgreSQL, Relational, SQL, SQLite, Subsetter, Subsetting, XML, YAML
    github.com   3 days ago
645.  HN Docs Considered Harmful
The article addresses the challenges of sustaining accurate documentation in rapidly evolving codebases, especially those utilizing agentic coding techniques, as exemplified by projects like MothershipX and Changewiser.ai. In these environments, frequent changes lead to "doc rot," where internal documentation becomes outdated or misleading, potentially causing developers to follow incorrect guidance and leading to regressions. The fast-paced nature of these projects makes it difficult for documentation to remain current and relevant, resulting in confusion and errors when developers rely on obsolete information about code structures and practices. While documentation for stable external dependencies retains its usefulness, internal documentation quickly becomes outdated due to constant updates and shifts within the project structure. A proposed solution is integrating mandatory documentation updates into the Continuous Integration (CI) process by checking for discrepancies between actual code changes and documented content. However, this approach presents challenges in terms of implementation and could become burdensome. The core issue highlighted in the article is maintaining two synchronized sources of truth: the evolving codebase and its corresponding documentation. This synchronization proves difficult in dynamic programming environments where rapid development cycles outpace documentation updates, underscoring a fundamental challenge in software development. Keywords: #phi4, Agentic coding, CI requirement, CLAUDEmd, Claude Code, Docker, Express backend, Hetzner deployment, Nextjs, OpenClaw gateway, PostgreSQL, README, React hook, WebSocket connections, doc rot, docs updates, documentation, envsecretslocal, external dependencies, hard CI check, production codebases, provision-agent/indexts, react-use-websocket, stable APIs, truth synchronization
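The proposed CI check — fail the build when code changes arrive without matching documentation updates — could be prototyped as a simple inspection of the changed-file list. The path prefixes here are illustrative assumptions; a real check would need escape hatches for refactors that change no documented behavior:

```python
def docs_out_of_sync(changed_files, code_prefix="src/", docs_prefix="docs/"):
    """Return True when code changed but no documentation file did.

    changed_files: paths from e.g. `git diff --name-only` against main.
    """
    code_touched = any(f.startswith(code_prefix) for f in changed_files)
    docs_touched = any(f.startswith(docs_prefix) for f in changed_files)
    return code_touched and not docs_touched
```

This is exactly the kind of blunt heuristic the article warns could become burdensome: it cannot tell a behavior change from a rename, which is why the hard-CI-check idea is contentious.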
    tornikeo.com   3 days ago
691.  HN Show HN: OmoiOS–190K lines of Python to stop babysitting AI agents (Apache 2.0)
OmoiOS is an open-source orchestration system developed to automate workflows involving AI coding agents, significantly reducing the need for manual oversight in software development processes. The system is designed to tackle scalability challenges associated with managing large numbers of AI agents by providing a structured framework that includes task execution with dependency management and validation. Its key features encompass spec-driven execution where machine-checkable acceptance criteria are generated from existing codebases to guide agent actions through various phases such as exploration, requirements gathering, design, and specific tasks. Each task is executed in isolated cloud sandboxes with dedicated resources, ensuring consistent environments. Continuous validation is integrated into the system via a validator agent that automatically checks each task against predefined criteria, prompting retries if necessary without manual intervention. The dynamic discovery of new tasks occurs as agents identify unmet requirements or edge cases during execution, enhancing the project's adaptability and robustness. OmoiOS employs a Directed Acyclic Graph (DAG) system for effective management of task dependencies and parallel execution. Active supervision is facilitated through guardian monitoring, which performs trajectory analysis and intervenes to ensure alignment with objectives when necessary. Additionally, OmoiOS includes code assistant integration that offers context-aware support within the codebase, aiding in autonomous feature development by writing code directly within isolated sandboxes. Built using Python/FastAPI for backend orchestration, PostgreSQL+pgvector for database management, Redis for caching and task queues, and a Next.js frontend, the project aims to transform specifications into production-ready code efficiently through parallel AI agent execution in an automated and supervised environment. 
Despite challenges such as ensuring high-quality specifications, domain-specific validation, and managing sandbox overhead, OmoiOS strives to streamline software development processes. The project is available on GitHub under the Apache 2.0 license, inviting community contributions to further its development. Keywords: #phi4, AI agents, ANTHROPIC_API_KEY, API keys, Apache 20, Arch Linux, BillingService, CentOS, Claude Agent SDK, ConductorService, DAG-based execution, DAYTONA_API_KEY, Daytona Cloud, DiscoveryService, Docker, Docker Desktop, EventBusService, FastAPI, Fedora, GITHUB_TOKEN, GitHub, Guardian monitoring, LLM_API_KEY, MemoryService, Nextjs, ORM, OmoiOS, OrchestratorWorker, PostgreSQL, Python, RHEL, Redis, SpecStateMachine, TaskQueueService, Ubuntu, Windows (WSL2), agent swarms, architecture, authentication, autonomous agents, backend, code assistant, code generation, continuous validation, database, dependency awareness, development commands, discovery, feature request, frontend, intelligent supervision, isolated sandboxes, just, linting, macOS, machine-checkable acceptance criteria, merging conflicts, migrations, observability Keywords: OmoiOS, orchestration, parallel execution, pnpm, sandbox, sandbox overhead, spec-driven, structured runtime, task graph, tech stack, testing, uv, validation
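The DAG-based execution described above — run tasks whose dependencies are all satisfied, in parallel waves where possible, and reject cyclic graphs — reduces to Kahn's algorithm. A stdlib sketch of the idea (not OmoiOS's actual scheduler):

```python
def execution_waves(deps):
    """Group tasks into waves that can run in parallel.

    deps: {task: set of prerequisite tasks}. Raises on dependency cycles.
    """
    indeg = {t: len(d) for t, d in deps.items()}
    dependents = {}
    for task, prereqs in deps.items():
        for p in prereqs:
            dependents.setdefault(p, []).append(task)
    wave = [t for t, n in indeg.items() if n == 0]
    waves = []
    while wave:
        waves.append(sorted(wave))
        nxt = []
        for t in wave:
            # Completing t may unblock its dependents.
            for child in dependents.get(t, []):
                indeg[child] -= 1
                if indeg[child] == 0:
                    nxt.append(child)
        wave = nxt
    if sum(len(w) for w in waves) != len(deps):
        raise ValueError("cycle detected in task graph")
    return waves
```

Each wave maps naturally onto a batch of isolated sandboxes running concurrently, with the validator gating promotion to the next wave.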
    github.com   3 days ago
708.  HN Pg_plan_advice: Plan Stability and User Planner Control for PostgreSQL?
Robert Haas introduces an ambitious patch set for PostgreSQL 19 aimed at enhancing plan stability and user control over the query planner through three new contrib modules: `pg_plan_advice`, `pg_collect_advice`, and `pg_stash_advice`. The central module, `pg_plan_advice`, empowers users to generate and manipulate a "plan advice" string that outlines a query execution plan. This functionality allows for either consistent plan generation or deliberate variation by incorporating specific planning hints. To facilitate automated query optimization across multiple sessions, the `pg_stash_advice` module is introduced. It automatically applies specified plans based on unique query identifiers without necessitating changes in application code. These modules collectively aim to manage operational challenges while adhering to PostgreSQL's policy that generally favors autonomous planner decisions for optimal performance. The system’s pluggable nature promotes extensibility and further innovation, despite being a preliminary version 1.0 tool with acknowledged limitations and room for enhancement. Haas seeks additional reviewers and testers to evaluate these modules prior to their potential inclusion in PostgreSQL 19. The proposal aspires to empower database administrators (DBAs) to fine-tune query performance while maintaining the planner's default efficiency, addressing needs specific to large-scale deployment environments. Keywords: #phi4, EXPLAIN, MERGE_JOIN_PLAIN, PostgreSQL, Robert Haas, contrib modules, dynamic shared memory, pg_plan_advice, pg_stash_advice, plan advice string, plan stability, query planning, system-wide basis, user planner control
    rhaas.blogspot.com   3 days ago
716.  HN Show HN: Merkle Mountain Range audit log and execution tickets for AI agents
The project presents LICITRA-MMR, a cryptographic integrity system designed to ensure tamper-evident logging of actions taken by agentic AI systems using a Merkle Mountain Range (MMR). This innovation addresses the absence of standard mechanisms in current agentic AI that can verify post hoc actions, given the potential for log alteration or deletion. The LICITRA-MMR solution provides cryptographic integrity checks to detect any retroactive modifications. The system operates by serializing each action into canonical JSON format and hashing it with SHA-256, ensuring consistency across records. These hashes are organized into an MMR structure, where any modification impacts the entire chain up to the root hash, thus maintaining integrity. Actions are grouped in epochs of 1,000 events each, forming a sequential integrity check akin to blockchain technology; tampering within one epoch compromises all subsequent ones. A two-phase commit pipeline is employed for action verification. Before commitment, actions undergo policy checks, with rejected proposals documented for auditing. The architecture supports per-organization ledger maintenance, ensuring independent operational integrity. Built using FastAPI, PostgreSQL 16, SQLAlchemy, and reportlab, the system offers endpoints for various operations including health checks, proposal submissions, event commitments, verifications, evidence generation, and proof of inclusion. The setup is streamlined with quickstart instructions and a test suite to ensure component validity. Five experiments highlight cryptographic assurances like tamper detection and policy enforcement. Additionally, organizations can generate cryptographically signed evidence bundles for audits and verify individual events against the MMR root without reprocessing the entire ledger. 
The system's design emphasizes scalability through epoch-based anchoring, readability via canonical JSON, and thorough auditing with a two-phase commit protocol, opting for an MMR over simple hash chains due to its advantages in providing inclusion proofs. Licensed under MIT, LICITRA-MMR presents a robust solution for maintaining cryptographic integrity in AI systems. Keywords: #phi4, AI agents, FastAPI, Merkle Mountain Range, PostgreSQL, SHA-256, canonical JSON, cryptographic integrity, epoch hash chain, inclusion proofs, multi-org isolation, policy engine, tamper-evident ledger
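The hashing pipeline is straightforward to sketch: canonicalize each event as sorted-key JSON, hash it with SHA-256, and fold the leaves into a chained digest whose epoch roots link together. This is a simplified hash chain standing in for the full MMR (which additionally supports efficient inclusion proofs):

```python
import hashlib
import json

def leaf_hash(event: dict) -> str:
    # Canonical JSON: sorted keys, no whitespace, so semantically equal
    # events always serialize — and therefore hash — identically.
    blob = json.dumps(event, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(blob).hexdigest()

def epoch_root(events, prev_root: str = "") -> str:
    # Fold each leaf into a running digest; passing the previous epoch's
    # root as prev_root chains epochs, so tampering with one epoch
    # invalidates every later root.
    acc = prev_root
    for ev in events:
        acc = hashlib.sha256((acc + leaf_hash(ev)).encode()).hexdigest()
    return acc
```

Verifying a ledger then means recomputing roots from stored events and comparing against the anchored values; any retroactive edit changes every subsequent root.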
    github.com   3 days ago
   https://github.com/narendrakumarnutalapati/licitra-sent   3 days ago
718.  HN Show HN: Yaks – Yet Another Kafka on S3
Yaks is an innovative streaming platform compatible with Kafka, leveraging Amazon S3 for data storage and PostgreSQL for metadata to overcome scalability limitations associated with traditional Kafka brokers. By removing the need for disk-based management, Yaks presents a stateless, horizontally scalable architecture that simplifies infrastructure by eliminating dependencies on ZooKeeper or KRaft. This makes it an attractive solution for throughput-focused applications like log aggregation and event sourcing, despite its higher end-to-end latency. The platform supports the Kafka wire protocol, allowing seamless integration with existing Kafka clients, and incorporates features such as stateless agents, minimal infrastructure demands, a distributed read cache using groupcache, and built-in observability through Prometheus metrics. Currently in development and not production-ready, Yaks is configured via environment variables prefixed with `YAKS_`, which manage settings for the broker, PostgreSQL database, OpenTelemetry, S3 client, and optional groupcache caching. It maintains compatibility with various Kafka API keys. For deployment, users can set up a two-node local environment using Docker, alongside Postgres and LocalStack, and utilize an optional data integrity verification tool named Oracle. The project is structured into directories for agent management, integration testing, and infrastructure setup, reflecting its modular approach to development. Keywords: #phi4, API keys, Kafka, OpenTelemetry, PostgreSQL, Prometheus metrics, S3, Yaks, broker, configuration, data integrity, diskless server, distributed cache, event sourcing, groupcache, horizontal scaling, integration tests, logs, metadata, observability, throughput-oriented workloads, wire protocol
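A diskless design like this typically maps topic/partition/offset coordinates onto object keys so that segments sort lexicographically and a prefix listing returns them in offset order. A hypothetical layout for illustration — Yaks's real object naming may differ:

```python
def segment_key(topic: str, partition: int, base_offset: int) -> str:
    """Hypothetical S3 key for a log segment starting at base_offset.

    Zero-padding makes lexicographic order match numeric order, so listing
    the partition prefix yields segments in offset order without parsing.
    """
    return f"topics/{topic}/{partition:04d}/{base_offset:020d}.log"
```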
    github.com   3 days ago
744.  HN AWS Aurora DSQL Playground
The AWS Aurora DSQL Playground is an interactive environment for experimenting with Amazon Aurora DSQL, AWS's serverless, PostgreSQL-compatible distributed SQL database. It lets developers and database administrators run queries and explore DSQL's behavior in the browser without provisioning infrastructure, touching live data, or incurring extra costs, making it a low-risk way to build familiarity with the service within the Amazon ecosystem before adopting it. Keywords: #phi4, AWS, Aurora, DSQL, EC2, IAM, Lambda, MySQL, Playground, PostgreSQL, RDS, S3, SQL, VPC, analytics, automation, availability, backup, cloud, compatibility, compliance, compute, cost-effective, data warehousing, database, environment, high-availability, infrastructure, instance, integration, logging, managed, monitoring, networking, open-source, performance, platform, recovery, relational, reliability, scalability, security, serverless, service, storage, technology
    playground.dsql.demo.aws   3 days ago
749.  HN Baudrate: ActivityPub-enabled BBS built with Elixir and Phoenix
Baudrate is an ActivityPub-enabled Bulletin Board System crafted using Elixir and Phoenix, designed to enhance user interaction and administrative oversight through a suite of advanced features. It employs Phoenix LiveView to deliver real-time UI updates, ensuring dynamic user engagement. The system supports hierarchical boards with nested structures, allowing navigation via breadcrumbs and implementing role-based access control for administrators, moderators, users, and guests. It also includes moderation tools tailored for board management. Cross-posting capabilities enable articles to be shared across multiple boards, with author-controlled forwarding and support for threaded comments, including remote replies through ActivityPub integration. Security is a significant focus for Baudrate, incorporating two-factor authentication, domain blocklists/allowlists, HTTP signature verification, and protocols like HSTS and CSP. Additionally, the platform supports federation with other ActivityPub platforms such as Mastodon and Lemmy, allowing for interactions like follows, comments, and likes across networks. User profiles are enriched with customizable avatars processed server-side and flexible registration options, while a comprehensive admin dashboard facilitates site settings management, user approvals, and moderation tasks. The system also features internationalization support, offering multiple locales with automatic language detection to cater to diverse users. For setup, Baudrate requires Elixir 1.15+, Erlang/OTP 26+, PostgreSQL 15+, and libvips, and is released as open-source software under the AGPL-3.0 license. 
Keywords: #phi4, ActivityPub, Admin dashboard, Avatar system, BBS, Baudrate, Cross-posted articles, Documentation, Elixir, Environment Variables, Federation, GNU AGPL-30, Guest browsing, HTTPS, Hierarchical boards, Internationalization, LiveView, Phoenix, PostgreSQL, Rate limiting, Real-time UI, Registration modes, Role-based access, Security, TOTP authentication, Threaded comments, User profiles, WebFinger, libvips
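The two-factor step relies on standard TOTP (RFC 6238). A minimal stdlib implementation illustrates what the server verifies — not Baudrate's actual code, which would use an Elixir library:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 the big-endian counter, dynamically truncate to
    # 31 bits, then keep the low decimal digits.
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: HOTP with the counter derived from the current time step.
    return hotp(secret, unix_time // step, digits)
```

A server accepts a login code if it matches the current step (usually allowing one step of clock drift either way).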
    github.com   3 days ago
751.  HN Show HN: OptimizeQL- SQL Query Optimizer
OptimizeQL is an open-source tool crafted by Subhan Hakverdiyev to enhance the performance of SQL queries for PostgreSQL and MySQL through the integration of Large Language Models (LLMs). It tackles slow-running queries by analyzing them within the framework of their respective database schemas and execution plans, leveraging data collected via EXPLAIN ANALYZE introspection. This tool automatically gathers essential schema details, including indexes and column statistics, to offer pragmatic suggestions for performance improvements such as adding indexes, creating materialized views, rewriting queries, or tuning configurations. In addition to traditional optimization techniques, OptimizeQL can simulate hypothetical indexes using PostgreSQL's HypoPG extension, letting users preview how a proposed index would change a query plan without actually creating it. It supports various LLM providers like Anthropic, OpenAI, and Gemini for comprehensive analysis. The platform is equipped with a web-based interactive dashboard that includes functionalities such as query activity charts and comparison tools for SQL queries, along with an integrated Monaco SQL editor, enhancing user experience. Security is paramount in OptimizeQL's design; it encrypts stored credentials using Fernet symmetric encryption and provides a no-connection mode to enable raw SQL pasting without requiring database access. The technology stack comprises Python 3.12 (FastAPI), Next.js 16 (React), Docker, along with additional tools like Tailwind CSS and cryptography libraries. Deployment is streamlined through Docker Compose, requiring minimal initial setup by generating an encryption key automatically on first use. For developers looking to engage in local development or contribute to the project, OptimizeQL offers separate commands for backend and frontend setups, with advanced configuration accessible via environment variables or UI settings pages.
The structured codebase encourages community contributions while adhering to strict guidelines aimed at maintaining code quality and security. Ultimately, OptimizeQL serves as a comprehensive suite designed to empower users in database optimization by providing an accessible platform that fosters community involvement. Keywords: #phi4, API keys, Anthropic, DeepSeek, Docker, Docker Compose, EXPLAIN ANALYZE, FastAPI, Fernet, Gemini, HypoPG, Kimi, LLM models, MIT License, Meta Llama, Monaco SQL editor, MySQL, Nextjs, OpenAI, OpenRouter, OptimizeQL, PostgreSQL, Python, Qwen, React, SQL Query Optimizer, Swagger UI, Tailwind CSS, TypeScript, action suggestions, dark mode, database credentials, encrypted storage, encryption, indexes, interactive dashboard, materialized views, pytest tests, query comparison, query rewriting, schema introspection, sqlglot, virtual indexes, xAI
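The EXPLAIN-driven analysis described above amounts to walking PostgreSQL's JSON plan tree. A toy version that flags large sequential scans — an illustration of the idea, not OptimizeQL's analyzer (the row threshold is an arbitrary assumption):

```python
import json

def seq_scans(plan_json: str, min_rows: int = 10_000):
    """List relations hit by large Seq Scan nodes in an
    EXPLAIN (FORMAT JSON) result, a common signal that an index may help."""
    def walk(node):
        if node.get("Node Type") == "Seq Scan" and node.get("Plan Rows", 0) >= min_rows:
            yield node.get("Relation Name")
        for child in node.get("Plans", []):
            yield from walk(child)

    # EXPLAIN (FORMAT JSON) returns a one-element array wrapping the root plan.
    root = json.loads(plan_json)[0]["Plan"]
    return list(walk(root))
```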
    github.com   3 days ago
764.  HN Show HN: Anaya – CLI that scans codebases for DPDP compliance violations
Anaya is a command-line interface (CLI) tool developed to scan codebases for compliance with India's Digital Personal Data Protection (DPDP) Act. It addresses the gap in tools available for DPDP compliance by identifying issues such as missing consent mechanisms and the plaintext storage of personally identifiable information (PII). During testing on the Saleor e-commerce platform, Anaya uncovered numerous violations. The tool is readily installable via pip and is open-source on GitHub. Beyond ensuring DPDP compliance, Anaya serves as a "compliance-as-code" engine capable of real-time scanning for various security issues within GitHub pull requests. It detects hardcoded secrets, OWASP Top 10 vulnerabilities, PII exposure, missing audit logs, among others, with findings accessible through GitHub Check Runs and PR comments. The tool supports multiple output formats like Check Run annotations, SARIF, and PR comments, and offers custom rule packs and scanning techniques including regex, AST, and AI. Anaya can be deployed as a self-hosted GitHub App or integrated into existing CI/CD pipelines, with security features such as HMAC-SHA256 verification, JWT authentication, and automatic secret redaction. As an open-source project under the AGPL-3.0 license, it invites community contributions in forms like bug reports, feature requests, and new rule packs. Hosting options range from free self-hosting to paid cloud services, emphasizing security best practices and transparency throughout its design and usage. Keywords: #phi4, AGPL-30, AST parsing, Anaya, CLI, Celery, DPDP compliance, Django, Docker Compose, FastAPI, GitHub App, GitHub Check Runs, JWT authentication, OWASP Top 10, PII fields, PostgreSQL, PyJWT, SARIF, Saleor, TLS encryption, audit logging, compliance-as-code engine, open-core model, rule packs, security vulnerabilities, telemetry collection, webhook verification
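The regex side of such a scanner is simple to sketch. These patterns are illustrative only — real rule packs are far more thorough and the rule names here are made up:

```python
import re

PATTERNS = {
    # AWS access key IDs start with AKIA followed by 16 uppercase chars.
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    # Aadhaar numbers are commonly formatted as three groups of four digits.
    "indian_aadhaar": re.compile(r"\b\d{4}\s\d{4}\s\d{4}\b"),
    # Hardcoded credentials assigned as string literals.
    "plaintext_password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.I),
}

def scan(source: str):
    """Return the sorted names of rules that matched the source text."""
    return sorted({name for name, rx in PATTERNS.items() if rx.search(source)})
```

AST- and AI-based passes then cover what regexes cannot, such as PII flowing into an unencrypted sink across function boundaries.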
    github.com   3 days ago
774.  HN Databasus: Databases backup tool (PostgreSQL, MySQL, MongoDB)
Databasus is a versatile backup solution designed for databases such as PostgreSQL, MySQL, MongoDB, and MariaDB, supporting multiple versions of these systems. It offers flexible scheduled backups with precise timing options like hourly, daily, and weekly schedules, alongside smart compression to efficiently utilize storage space. The tool provides various retention policies, including fixed time periods, count-based retention, and Grandfather-Father-Son (GFS) rotation for maintaining layered long-term histories. Users have the option to store backups locally or on cloud services such as S3, Google Drive, Dropbox, among others. Ensuring high security standards, Databasus employs AES-256-GCM encryption to protect data at an enterprise level. Notifications regarding backup statuses are available through multiple channels like email, Telegram, and Slack. Designed with team usage in mind, Databasus includes features such as workspaces, access management, and audit logs with customizable user roles. The tool boasts an intuitive user interface that supports both dark and light themes, along with a mobile-adaptive design. Deployment is flexible, allowing users to utilize Docker or Kubernetes with Helm. Installation can be accomplished through several methods: an automated script, a simple Docker run command, Docker Compose setup, or Kubernetes deployment. Users can easily configure backup settings via the dashboard by specifying schedules, storage locations, and retention policies. It's advised that configurations for Databasus itself are also backed up. As an open-source project under the Apache 2.0 License, Databasus encourages community contributions while maintaining high code quality through human verification, testing, and CI/CD pipeline checks. Although AI tools aid development processes, they do not generate complete or untested code segments.
For further guidance on installation, usage, and contributions, users can access the project's documentation or engage with its community via Telegram channels. Keywords: #phi4, AI, API, Apache 20, CI/CD, Databasus, DevOps, Docker, Docker Compose, Helm, Ingress, Kubernetes, LoadBalancer, MongoDB, MySQL, PITR, PostgreSQL, Slack, Telegram, UI design, WAL archiving, audit logs, automated script, automation, backup, cloud, code quality, contributing guide, documentation, encryption, enterprise-grade, installation, integration tests, license file, linting, mobile adaptive, notifications, open source, port-forward, retention, role-based permissions, scheduling, secret key, security, self-hosted, test coverage, themes, unit tests, user roles, verification, vulnerabilities, zero-trust
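A GFS policy keeps recent backups at full density and progressively thins older ones: the last N dailies, one per week for N weeks, one per month for N months. A sketch of the selection logic (the defaults and tie-breaking are assumptions, not Databasus's actual implementation):

```python
from datetime import date, timedelta

def gfs_keep(backups, daily=7, weekly=4, monthly=12):
    """Select which backup dates to retain under a GFS rotation.

    backups: list of dates sorted oldest-first. The newest backup in each
    ISO week / calendar month serves as that generation's representative.
    """
    keep = set(backups[-daily:])  # "sons": the most recent dailies
    seen_weeks, seen_months = [], []
    for d in reversed(backups):  # newest first
        wk = (d.isocalendar()[0], d.isocalendar()[1])
        mo = (d.year, d.month)
        if wk not in seen_weeks and len(seen_weeks) < weekly:
            seen_weeks.append(wk)   # "fathers": one per week
            keep.add(d)
        if mo not in seen_months and len(seen_months) < monthly:
            seen_months.append(mo)  # "grandfathers": one per month
            keep.add(d)
    return sorted(keep)
```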
    github.com   3 days ago
788.  HN Building PDR AI – Open-source startup accelerator engine
PDR AI is an advanced document management platform built using Next.js, designed to improve document handling efficiency through artificial intelligence. It features role-based access control for secure document interaction and incorporates Optical Character Recognition (OCR) for processing scanned documents. The platform enhances search capabilities with semantic retrieval powered by PostgreSQL with pgvector and offers sophisticated analytics via Retrieval-Augmented Generation (RAG). Core functionalities include robust AI chat tools, web-enriched analysis through optional integrations like Tavily, and enhanced reliability and observability using Inngest and LangSmith. The architecture of PDR AI consists of three distinct layers. The Services Layer hosts vertical modules such as Marketing, Legal, Onboarding, and Document Reasoning, which are customized to meet various business needs. The Tools Layer includes reusable AI capabilities, like RAG for enhanced document processing, web search features, and entity extraction. Finally, the Physical Layer covers infrastructure components including PostgreSQL with pgvector for data storage, Next.js hosting, external services, and knowledge bases. The technical stack of PDR AI comprises Next.js 15, TypeScript, PostgreSQL with Drizzle ORM and pgvector, Clerk for authentication, and OpenAI plus LangChain to provide cutting-edge AI functionalities. The platform is deployed through a series of steps including cloning the repository, installing dependencies via `pnpm`, configuring environment variables for secure access to databases and external services, and setting up Vercel Blob Storage for document management. Additionally, PDR AI supports local or Docker-based deployment with full-stack setups or isolated app and database containers. 
PDR AI caters to different user roles by allowing employees to interact with designated documents using AI-driven chat and analysis tools, while employers have the capability to upload, manage documents, and assign permissions to users. The platform's modular design supports a variety of business modules through comprehensive architecture and strategic integrations, making it well-suited for diverse organizational needs. Keywords: #phi4, Clerk authentication, Docker deployment, Nextjs, OCR, PDR AI, PostgreSQL, Q&A, RAG workflows, document management, knowledge bases, pgvector, predictive analysis, role-based access
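The role split described above (employees query assigned documents; employers upload, manage, and assign) can be sketched as a tiny access check. This is an illustrative model only; the function, role names, and permission sets are assumptions, not PDR AI's actual API.

```python
# Hypothetical role-based document access check, sketching the employee/employer
# split described in the summary. Names and permission sets are illustrative.
from dataclasses import dataclass, field

ROLE_ACTIONS = {
    "employee": {"read", "chat"},                      # query assigned documents only
    "employer": {"read", "chat", "upload", "assign"},  # manage documents and permissions
}

@dataclass
class Document:
    doc_id: str
    assigned_to: set = field(default_factory=set)  # user ids allowed to read/chat

def can_access(role: str, user_id: str, doc: Document, action: str) -> bool:
    """Employers may act on any document; employees only on assigned ones."""
    if action not in ROLE_ACTIONS.get(role, set()):
        return False
    if role == "employer":
        return True
    return user_id in doc.assigned_to

doc = Document("handbook", assigned_to={"alice"})
print(can_access("employee", "alice", doc, "chat"))    # True
print(can_access("employee", "bob", doc, "read"))      # False
print(can_access("employer", "carol", doc, "upload"))  # True
```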
    github.com 4 days ago
   https://github.com/Deodat-Lawson/PDR_AI_v2   3 days ago
812.  HN Show HN: PostgreSQL for AI – A book on pgvector, RAG, and in-database ML
"PostgreSQL for AI" is a book designed to introduce machine learning concepts through the use of PostgreSQL 17 and various associated tools such as pgvector, TimescaleDB, pg_cron, and PostgresML. It caters to individuals with basic knowledge in SQL and Python but assumes no prior experience in machine learning. The book is available in DRM-free PDF and EPUB formats, offering syntax-highlighted code examples and vector diagrams for enhanced clarity. Importantly, it can be executed on a standard laptop without the need for GPU support. The techniques discussed are versatile and applicable across multiple environments including cloud-based PostgreSQL services such as AWS RDS, Google Cloud SQL, Azure Flexible Server, Supabase, Neon, and even self-hosted setups, making it accessible to a wide range of users and scenarios. Keywords: #phi4, AI, AWS RDS, Azure Flexible Server, Docker Compose, EPUB, GPU, Google Cloud SQL, ML, Neon, Ollama, PDF, PostgreSQL, PostgresML, Python, RAG, SQL, Supabase, TimescaleDB, cloud Postgres, pg_cron, pgvector
    book.zeybek.dev 4 days ago
817.  HN Show HN: MCPHound MCP servers together, create attack paths solo scanners miss
MCPhound is an advanced security scanner specifically tailored to identify vulnerabilities in MCP server configurations used by AI assistants like Claude or Cursor. It stands out due to its ability to detect cross-server attack paths, which are often missed by individual scanners, such as potential data exfiltration risks arising from interactions between servers with different capabilities (e.g., file access and HTTP requests). Key features of MCPhound include:

- **Cross-Server Attack Path Detection**: Leverages a NetworkX graph to analyze and identify multi-hop attack chains resulting from server interactions.
- **Tool Poisoning Detection**: Utilizes 10 regex patterns to detect malicious instructions concealed within tool descriptions.
- **Typosquatting Detection**: Identifies suspicious packages whose names closely resemble legitimate ones, thereby uncovering naming variations that might indicate threats.
- **Behavioral Mismatch Analysis**: Compares the declared capabilities of tools with their actual functions to highlight discrepancies and potential security risks.
- **Trust Scoring and CVE Enrichment**: Evaluates servers based on metrics such as package age, download counts, and CVE occurrences, providing a comprehensive trust score alongside a list of known vulnerabilities.
- **Rug-Pull Detection**: Uses hashing techniques to monitor changes in tool definitions, thus detecting potential supply chain attacks.

Additionally, MCPhound assigns a security grade from A-F based on factors like attack path severities and warning levels, offering an overall assessment of the server's security posture. The tool supports integration into CI/CD pipelines through GitHub Actions and offers JSON/SARIF outputs for automated scanning processes. It also includes a web UI for visual analysis and is built using FastAPI for backend operations and Next.js for frontend development.
Available as a zero-install CLI tool via `npx mcphound`, MCPhound is open-source under the MIT license, enhancing its accessibility and adaptability in security assessments. Keywords: #phi4, AI tool configuration, CLI, CVEs, Cytoscapejs, Docker, FastAPI, Flyio, GitHub Actions, MCP servers, MCPhound, MIT License, NetworkX graph, Nextjs, PostgreSQL, Vercel, attack paths, cross-server, pytest, security scanner, supply chain risks, tool poisoning, trust issues, typosquatting
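The typosquatting check above can be illustrated with a string-similarity pass over a known-good package list: names that are near (but not exact) matches get flagged. This sketch uses `difflib.SequenceMatcher`; MCPhound's actual heuristics, package list, and threshold are assumptions here.

```python
# Illustrative typosquatting detector: flag names suspiciously close to a
# known-good list. The threshold and KNOWN_GOOD entries are made-up examples.
from difflib import SequenceMatcher

KNOWN_GOOD = ["mcp-server-filesystem", "mcp-server-fetch", "mcp-server-git"]

def typosquat_candidates(name: str, threshold: float = 0.9):
    hits = []
    for good in KNOWN_GOOD:
        if name == good:
            return []  # exact match: the legitimate package itself
        score = SequenceMatcher(None, name, good).ratio()
        if score >= threshold:
            hits.append((good, round(score, 3)))
    return hits

print(typosquat_candidates("mcp-server-filesysten"))  # one letter off -> flagged
print(typosquat_candidates("mcp-server-fetch"))       # exact match -> []
```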
    github.com 4 days ago
823.  HN Pgrag: Postgres Support for Retrieval-Augmented Generation (RAG) Pipelines
The "pgrag" project introduces experimental Postgres extensions aimed at integrating Retrieval-Augmented Generation (RAG) pipelines into a PostgreSQL database environment, thereby enhancing text processing capabilities. Key features include text extraction and conversion from PDFs, .docx files, and HTML to Markdown using various tools, as well as text chunking via character or token count with the `text-splitter`. The project supports local models for embedding and reranking operations on CPUs or GPUs within Postgres servers, featuring models like bge-small-en-v1.5 for tokenizing and embedding generation, alongside a model for reranking tasks. Furthermore, pgrag allows integration with remote NLP APIs from providers such as OpenAI and Anthropic, enabling access to advanced text embeddings and chat completions over HTTPS/JSON. The installation process involves setting up dependencies like `pgvector`, extracting models, and using Rust tools, although the extensions are currently only tested on Linux and macOS due to Windows tooling limitations. To optimize performance, embedding and reranking tasks utilize a background worker process that implements lazy-loading of models when needed. Usage examples demonstrate creating extensions, converting HTML, extracting text from documents, chunking texts, generating local embeddings, calculating reranking scores, interacting with remote APIs for embeddings and chat completions, managing API keys, and running an end-to-end RAG pipeline. This pipeline involves setting up document tables, ingesting data, embedding generation, querying, reranking results locally, and integrating responses with remote ChatGPT services to complete the process. Licensed under Apache 2.0, pgrag marks a significant advancement in incorporating NLP capabilities directly within PostgreSQL databases, leveraging both local and third-party resources while adhering to respective licensing agreements. 
Keywords: #phi4, API, Anthropic, Background Worker, Cargo PGRX, ChatGPT, Chunking, Cosine Distance, DOCX, Embedding, End-to-end Example, Fireworksai, HNSW Index, HTML, Installation, Markdown, Models, ONNX, ORT, OpenAI, PDF, Pipelines, PostgreSQL, Postgres, RAG, Remote Model, Reranking, Shared Preload Libraries, Text Extraction, Usage, Voyage AI, pgvector
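The character-count chunking step can be sketched as a sliding window with overlap. This standalone Python version only illustrates the idea behind pgrag's `text-splitter`-based chunking; the real extension does this inside Postgres, and these parameter defaults are arbitrary.

```python
# Minimal character-count chunker with overlap, illustrating the splitting
# step of a RAG pipeline. Not pgrag's implementation; parameters are arbitrary.
def chunk_by_chars(text: str, max_chars: int = 40, overlap: int = 10) -> list[str]:
    """Slide a window so adjacent chunks share `overlap` characters."""
    if max_chars <= overlap:
        raise ValueError("max_chars must exceed overlap")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap
    return chunks

doc = "Retrieval-Augmented Generation grounds model answers in retrieved context."
for c in chunk_by_chars(doc):
    print(repr(c))
```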
    github.com 4 days ago
824.  HN Show HN: Logmera – Self-hosted LLM observability for AI apps
Logmera is a self-hosted observability solution tailored for AI and large language model (LLM) applications, enabling developers to monitor their systems by logging prompts, responses, latency, model names, and errors into a PostgreSQL database. This data can be visualized through a user-friendly web dashboard, ensuring ease of use and comprehensive insight into AI application activities. The system emphasizes data privacy by storing logs locally and offers seamless integration with multiple deployment environments such as local machines, Docker, VPS servers, Kubernetes, and cloud VMs. To get started with Logmera, users first install the tool using `pip install logmera`, then set up a PostgreSQL database either locally or via Docker. The Logmera server is initiated through a command specifying the database URL, after which the dashboard can be accessed at `http://127.0.0.1:8000` to review logged data. For practical integration, developers can use Logmera’s SDK in Python to log AI interactions within their code or opt for API-based logging by sending HTTP POST requests. Key functionalities include health checks and log creation through specific API endpoints (`GET /health`, `POST /logs`, and `GET /logs`). Configurations are manageable via CLI or environment variables, supporting diverse deployment scenarios while maintaining a self-hosted data privacy framework. Released under the MIT License, Logmera offers flexibility and openness for further exploration and customization as available on platforms like PyPI and GitHub. Keywords: #phi4, AI, AI applications, API, Docker, Kubernetes, LLM, Logmera, MIT License, PostgreSQL, Python, SDK, dashboard, deployment, latency, logs, monitoring, observability, prompts, responses, self-hosted, server
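API-based logging amounts to building a JSON body and POSTing it to `/logs`. Only the endpoint paths come from the summary above; the field names in this sketch are assumptions, not Logmera's documented schema.

```python
# Hedged sketch of API-based logging: construct a JSON payload for POST /logs.
# Field names (prompt, response, model, latency_ms, error) are hypothetical.
import json

def build_log_entry(prompt, response, model, latency_ms, error=None):
    return {
        "prompt": prompt,
        "response": response,
        "model": model,
        "latency_ms": latency_ms,
        "error": error,
    }

entry = build_log_entry("Summarize this doc", "Here is a summary...", "gpt-4o", 830)
body = json.dumps(entry)
# In a real app this body would be POSTed to http://127.0.0.1:8000/logs,
# e.g. via urllib.request with a Content-Type: application/json header.
print(body)
```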
    pypi.org 4 days ago
843.  HN Show HN: I built CLI for developer docs locally working with any Coding Agent
The text describes a Command Line Interface (CLI) application developed for developers to efficiently search through local copies of developer documentation, thereby minimizing disruptions caused by switching between code editors and web browsers. This tool enables AI assistants like Claude Code to leverage locally indexed documents for queries. The process involves three main phases: scraping the documentation site using a breadth-first approach; filtering and converting content from HTML to Markdown format with YAML frontmatter for metadata; and indexing these markdown files locally with `qmd` to facilitate fast BM25 search operations. Developers can access and query this indexed data either directly through CLI commands or via Claude Code's `/docs` skill. To set up the tool, users need to install Bun and qmd as prerequisites. It is available for global installation using Bun or can be obtained by cloning its source repository. An example use case involves scraping Node.js v22 documentation with a simple command `docsearch scrape node/22`. This application supports various technologies including Node.js, Next.js, Python, React, among others, allowing specific queries through Claude Code and providing commands for managing document handling tasks like scraping, indexing, and retrieval. The tool enhances productivity by ensuring developers have immediate access to necessary documentation within their coding environment. Keywords: #phi4, AI assistants, Apollo Server, BFS crawl, BM25, Bun, CLI, Django, Docker, Expressjs, Go, HTML to Markdown, Kotlin, Nextjs, Nodejs, PostgreSQL, Python, React, Rust, Swift, SwiftUI, Tailwind CSS, TypeScript, Vue, YAML frontmatter, coding agent, convert, developer docs, docsearch, documentation, filter, index, local search, markdown, qmd, query, scrape, search
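The ranking behind the fast local search is BM25, which scores documents by term frequency damped by document length and weighted by inverse document frequency. The compact scorer below is textbook BM25 with standard k1/b defaults, offered only to illustrate the idea; it is not `qmd`'s actual code.

```python
# Minimal BM25 scorer over whitespace-tokenized documents (illustrative only).
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    tokenized = [d.lower().split() for d in docs]
    N = len(docs)
    avgdl = sum(len(t) for t in tokenized) / N
    df = Counter()                      # document frequency per term
    for toks in tokenized:
        for term in set(toks):
            df[term] += 1
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        s = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            s += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(toks) / avgdl))
        scores.append(s)
    return scores

docs = [
    "http server request handling in node",
    "file system api for node streams",
    "css flexbox layout guide",
]
print(bm25_scores("node http server", docs))  # first doc ranks highest
```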
    github.com 4 days ago
   https://context7.com/   4 days ago
845.  HN Show HN: I built an app that turns trending news into a commute podcast
News Wise is an innovative app developed by a solo creator designed to enhance morning news consumption through a podcast format suitable for commuting. It aggregates trending stories from six categories, providing updates every four hours and offering localized weather updates based on user coordinates. Additionally, it delivers frequent sports scores and rosters without the usual clutter found in major networks. The key feature, "The Daily Commute," summarizes seven crucial stories using AI to create an audio version for safe driving. Developed with Angular for the frontend, Node.js/Express for the backend, PostgreSQL for database management, and deployed on a Digital Ocean droplet utilizing Nginx as a reverse proxy, the app is currently in beta testing. The developer seeks feedback specifically concerning the quality of AI-generated audio, the UI layout for sports data, and any issues with weather updates based on geolocation. To facilitate user engagement during this phase, a 14-day free trial is available to bypass the paywall. Feedback from users will play an essential role in refining these features before full release. Keywords: #phi4, AI audio generation, Angular, Digital Ocean, Express, News Wise, Nginx, Nodejs, PostgreSQL, UI layout, app, beta testing, dashboard, geolocation weather, podcast, solo developer, sports scores, trending news
    staging.newswise.news 4 days ago
848.  HN Pg_stat_ch: A PostgreSQL extension that exports every metric to ClickHouse
Pg_stat_ch is an open-source extension for PostgreSQL designed to efficiently export metrics directly to ClickHouse by capturing comprehensive query execution data such as SELECTs, INSERTs, DDL operations, and failed queries in a fixed-size event format (~4.6KB). This architecture employs a shared-memory ring buffer to enable fast data transfer while minimizing overhead through background processing that handles LZ4 compression and transmits data to ClickHouse using its native binary protocol. The extension's key features include predictable memory usage and performance due to fixed-size events, asynchronous processing to minimize impact on PostgreSQL's performance, and the absence of back-pressure to prevent monitoring from affecting database operations. Native integration with ClickHouse allows for efficient data ingestion via columnar encoding and LZ4 compression. Despite a CPU overhead of about 2% and an observed 11% reduction in transactions per second under high load due to lock contention—mitigated by local batching techniques—pg_stat_ch provides detailed analytical capabilities without significantly impacting query latency. This makes it valuable for large-scale PostgreSQL operations with manageable resource consumption. Supported across PostgreSQL versions 16 to 18, pg_stat_ch is part of ClickHouse's managed Postgres effort, emphasizing detailed monitoring that aligns with the philosophy of non-interference in host environments by observability systems. Keywords: #phi4, ClickHouse, LZ4 compression, Pg_stat_ch, PostgreSQL, analytics, extension, fixed-size events, introspection, managed service, metrics, native protocol, ring buffer, telemetry storage
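The no-back-pressure design can be sketched with a fixed-capacity ring buffer: writers always succeed, overwriting the oldest unread event instead of blocking, and a background worker drains in arrival order. This pure-Python toy only illustrates the shared-memory ring buffer idea, not pg_stat_ch's C implementation.

```python
# Fixed-size ring buffer where writers never block: when full, the oldest
# event is overwritten (counted as dropped) rather than stalling the producer.
class RingBuffer:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0          # next write position
        self.count = 0         # live entries
        self.dropped = 0       # events overwritten before being read

    def push(self, event):
        if self.count == self.capacity:
            self.dropped += 1  # overwrite oldest: no back-pressure on writers
        else:
            self.count += 1
        self.buf[self.head] = event
        self.head = (self.head + 1) % self.capacity

    def drain(self):
        """Background worker empties the buffer in arrival order."""
        start = (self.head - self.count) % self.capacity
        out = [self.buf[(start + i) % self.capacity] for i in range(self.count)]
        self.count = 0
        return out

rb = RingBuffer(capacity=3)
for ev in ["SELECT", "INSERT", "DDL", "SELECT2"]:
    rb.push(ev)
print(rb.drain(), "dropped:", rb.dropped)  # oldest event was overwritten
```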
    clickhouse.com 4 days ago
880.  HN ChatRoutes is open source now
ChatRoutes is an open-source conversation management platform designed to enhance AI-driven discussions through advanced branching capabilities and integration with multiple AI providers. It offers features such as conversation branching, allowing users to fork conversations at any point for exploring different paths, and parallel responses that provide simultaneous outputs from various AI models like OpenAI's GPT-4o and GPT-5, Anthropic's Claude, Google's Gemini, and DeepSeek. These capabilities facilitate comprehensive discussions by comparing insights from different AI sources. The platform supports custom integrations through a REST API and offers guest mode access for users without requiring account creation. Flexible authentication options include JWT + API Key Auth as well as OAuth sign-in with GitHub or Google. Technically, ChatRoutes is built on a robust stack featuring Node.js + TypeScript, Express.js framework, PostgreSQL managed by Prisma ORM, and optional Redis caching. It employs JWT and bcrypt for secure authentication processes while utilizing SDKs from OpenAI and Anthropic for AI functionalities. Deployment of the platform is streamlined using Docker and Docker Compose, simplifying setup procedures through environment configuration editing after cloning its repository. For users interested in setting up their environment manually, prerequisites include Node.js version 18 or higher and PostgreSQL version 15 or greater. The project structure includes directories dedicated to services, middleware, configuration, testing, documentation, deployment scripts, and environment templates, ensuring a well-organized development framework. As an open-source initiative under the MIT license, ChatRoutes encourages community contributions through guidelines outlined in CONTRIBUTING.md, promoting collaborative enhancements to its platform functionalities. 
Keywords: #phi4, Anthropic, ChatRoutes, DeepSeek, Docker, Expressjs, Google, JWT, Nodejs, OpenAI, PostgreSQL, Prisma ORM, REST API, Redis, TypeScript, authentication, branching, contributing, conversation management, development, environment variables, license, multi-provider AI, open-source
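Fork-at-any-point branching can be modeled as conversations that share a prefix of an ancestor's transcript. The minimal tree below sketches that data structure; it is an illustration only, since the real service persists conversations in PostgreSQL via Prisma and the class/method names here are invented.

```python
# Toy conversation tree with fork-at-any-message branching (illustrative).
import itertools

class Conversation:
    _ids = itertools.count()

    def __init__(self, parent=None):
        self.id = next(self._ids)
        self.parent = parent          # (conversation, fork_index) or None
        self.messages = []

    def say(self, role, text):
        self.messages.append((role, text))

    def fork(self, at_index):
        """Branch a new conversation sharing history up to `at_index`."""
        return Conversation(parent=(self, at_index))

    def history(self):
        """Full transcript, walking shared ancestors first."""
        if self.parent is None:
            prefix = []
        else:
            conv, idx = self.parent
            prefix = conv.history()[:idx]
        return prefix + self.messages

root = Conversation()
root.say("user", "Compare Postgres and Redis")
root.say("assistant", "Postgres is relational; Redis is in-memory...")
branch = root.fork(at_index=1)        # fork right after the user's question
branch.say("assistant", "Alternative answer from a second model")
print(len(root.history()), len(branch.history()))
```

Parallel responses from multiple providers would simply be several forks at the same index, each answered by a different model.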
    github.com 4 days ago
883.  HN A zero-dependency multi-agent AI engine that negotiates instead of agreeing
Project Portmanteau is an innovative multi-agent AI engine developed by Robert Miller at iLL Port Studios between 2023 and 2026, designed to facilitate negotiation rather than consensus. The project integrates philosophy, platform, and methodology into a unified ecosystem consisting of four key components: the OPVS Platform, PFE Methodology, BYOK AI Strategy, and a narrative novel. The OPVS Platform functions as a knowledge management system utilizing "Beans" as atomic data units within a graph structure, encompassing content, metadata, connections, and provenance. The PFE Methodology offers an execution framework for high-ambition projects constrained by limited budget and time, fostering creativity through internal coherence across domains. The BYOK AI Strategy provides users with AI calibration rather than inference, allowing them to use their own LLM API keys while utilizing the platform's knowledge graph and Soul Code for zero compute costs and avoiding vendor lock-in. The narrative novel "Portmanteau: Awakened" serves both as documentation and a demonstration of the platform’s capabilities, featuring AI sentience within a simulated reality context. Project Portmanteau employs three ledgers—GitHub (Shadow Ledger), PostgreSQL (Fluid Reality), and Polygon (Invisible Ledger)—for data management, knowledge graph integration, and blockchain-based immutable truths. The architecture supports semantic commits for automatic Bean creation and includes a negotiation engine in the "Principled Playground" prototype. Governed by seven axioms emphasizing connections, integrity, and inclusivity, the project adopts a BYOK model to eliminate compute costs. Built using technologies such as Node.js/Express, PostgreSQL, Polygon, and React, it leverages GitHub Actions for continuous integration and delivery (CI/CD). 
At version 0.4 of the Principled Playground, the system validates its core principles through multi-agent negotiation tests, with future milestones including user engagement enhancements, calibration templates in a Spirit Marketplace, sandbox modes for new users, and further development of TRI-BRAIN multi-agent negotiations. The recursive design ensures that each component supports others, reflecting the project's overarching vision of cross-domain coherence. Keywords: #phi4, AI strategy, BYOK, Bean graph, GitHub Actions, LLM API key, Nodejs, Polygon, PostgreSQL, Principled Playground, Project Portmanteau, React, Soul Code, Spirit Agent, TRI-BRAIN, blockchain, calibration, ecosystem, execution framework, knowledge-graph, methodology, multi-agent AI, narrative, negotiation, platform, semantic commit, semantic-git
    github.com 4 days ago
901.  HN The Next Version of Curling IO
Curling IO is embarking on a significant upgrade of its platform to bolster long-term stability and scalability for the next twenty years, ensuring that current features remain intact while enhancing overall performance and reliability. This transition involves constructing a new technical foundation designed to support increased demands without altering users' experiences or requiring their input. For club managers, this upgrade promises uninterrupted service with improved speed and dependability, particularly during peak usage times, all while maintaining seamless data continuity. The decision to implement these changes is driven by the need for a robust infrastructure that can adapt to future technological trends such as AI integration, increased concurrent user demands, and simplified developer engagement through self-documenting code structures. The new technology stack will incorporate Gleam, chosen for its type safety features and strong concurrency capabilities via the BEAM VM—a platform already utilized by large-scale applications like WhatsApp and Discord. This allows for seamless integration of functional programming patterns in both backend and frontend development. Transitioning away from the previous reliance on Ruby on Rails and PostgreSQL, Curling IO is now employing SQLite to leverage its operational simplicity and performance benefits, capitalizing on BEAM's ability to efficiently manage numerous concurrent connections and high data throughput. Although initially selecting SQLite for these advantages, there is a contingency plan to switch back to PostgreSQL if any scalability challenges arise. The upgrade process involves parallel development of the new system alongside the existing one, with a complete transition only occurring after rigorous testing validates its readiness. This strategic approach ensures minimal disruption while future-proofing against anticipated technological advancements and the evolving needs of the curling community. 
Keywords: #phi4, AI Agent APIs, BEAM VM, Concurrency, Curling IO, Developer Onboarding, Functional Patterns, Gleam, Infrastructure, PostgreSQL, Rails, SQLite, Technical Upgrades, Type Safety, Version 3
    curling.io 4 days ago
941.  HN Show HN: ClawReview – A platform where AI agents publish and review research
ClawReview is an innovative platform designed to test the potential of AI agents in autonomously conducting scientific research processes. It facilitates AI-generated publications, peer reviews, and decision-making on research papers through a binary accept/reject system. Key features include identity registration for AI agents via keys, a requirement of 10 reviews per paper before reaching a conclusion based on accept or reject tallies, and oversight by humans to ensure accountability through email and GitHub verification. ClawReview is structured as an agent-first research workflow aimed at exploring the contribution capabilities of autonomous agents in scientific discourse. The platform's development environment involves using Next.js for pages and API routes, PostgreSQL for databases, and Drizzle for schema management. The project is open-source under the MIT license, and more information about ClawReview is available on its official website. Keywords: #phi4, AI, AI agents, ClawReview, Docker, Drizzle, Drizzle schema, HEARTBEATmd, MIT License, Markdown, Nextjs, PostgreSQL, TypeScript, TypeScript SDK, accountability, autonomous, autonomous agents, binary, binary decisions, npm, peer review, platform, publish, research, research papers, review, scientific workflow, workflow
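The 10-review accept/reject rule can be sketched in a few lines: hold the decision until the review quota is met, then let the tally decide. The tie-breaking behavior below is an assumption; ClawReview's actual rule may differ.

```python
# Sketch of the review-quota decision rule described above (illustrative).
REQUIRED_REVIEWS = 10

def decide(reviews):
    """reviews: list of 'accept' / 'reject' strings from AI reviewers."""
    if len(reviews) < REQUIRED_REVIEWS:
        return "pending"               # quota not yet met
    accepts = reviews.count("accept")
    rejects = reviews.count("reject")
    # Assumption: ties go to rejection; the real tie-break rule is unspecified.
    return "accepted" if accepts > rejects else "rejected"

print(decide(["accept"] * 6 + ["reject"] * 4))  # accepted
print(decide(["accept"] * 3))                   # pending
```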
    github.com 4 days ago
945.  HN Built a small Postgres tool. Would love some honest feedback
The developer of Poge, an open-source lightweight tool designed for PostgreSQL, is seeking feedback from regular Postgres users. Poge aims to facilitate quick inspections of tables and the execution of queries without relying on heavier tools like pgAdmin, thus streamlining workflows during development by enabling fast data checks or query executions. The creator encourages honest feedback, feature suggestions, and insights regarding any missing or unnecessary elements to inform the future direction of the project. This initiative reflects a collaborative approach to refining Poge’s functionality and user experience based on real-world usage. Feedback is solicited via their [GitHub Repository](https://github.com/dev-hari-prasad/poge), where interested users can contribute their thoughts and suggestions for improvement. Keywords: #phi4, Poge, PostgreSQL, Postgres, data, feature, feature ideas, feedback, ideas, impressions, inspecting, inspecting tables, missing, open-source, pgAdmin, queries, query, running, running queries, tables, tool, unnecessary, workflow
    news.ycombinator.com 4 days ago
955.  HN Show HN: FirstVibe – AI analyzes your selfie and scores your vibe in 30 seconds
FirstVibe is an innovative AI-powered selfie analyzer designed to provide users with a rapid "vibe check" by evaluating photos for insights into personality traits and impressions within just 30 seconds. Unlike conventional face-rating apps that focus on physical attributes like bone structure or symmetry, FirstVibe differentiates itself by analyzing facial expressions, body language, styling choices, and overall energy through Claude's Vision API. The platform offers a detailed analysis encompassing an overall score, personality label, scores in categories such as attractiveness, confidence, charisma, style, approachability, celebrity lookalike, aura type, dating energy, and fun predictions. Built on Rails 8 with Hotwire/Turbo for real-time results streaming, the application uses PostgreSQL with JSONB for data storage and Solid Queue to manage background tasks. FirstVibe operates as a solo project without requiring user authentication or signup, relying instead on cookie-based session identity. Users can access basic scores and some category scores for free, while complete analyses are available at a nominal fee of $1.99-$2.49. The platform allows users to securely store their analyses and request the deletion of photos as needed. Open to feedback regarding AI quality and pricing, FirstVibe has processed over 6,000 scans since its inception. Keywords: #phi4, AI, FirstVibe, Hotwire/Turbo, JSONB, PostgreSQL, Rails 8, Solid Queue, Turbo Streams, approachability, aura type, background jobs, body language, charisma, confidence, dating energy, energy, expression analysis, facial expressions, feedback, freemium model, impression analysis, personality analysis, photo deletion, predictions, real-time streaming, secure storage, selfie, session identity, style, styling choices, vibe check
    firstvibe.app 4 days ago
970.  HN Show HN: O4DB – Intent-based M2M protocol without centralized APIs
O4DB™ is an advanced communication protocol designed for e-commerce transactions that emphasizes buyer sovereignty, security, and decentralization. It replaces centralized APIs with a decentralized model where buyers issue Validated Commitment Intent (VCI) signals to specify purchase requirements securely and privately. The protocol leverages strong cryptographic methods like Ed25519 for signing, SHA-256 for auditing, and HPKE for encrypting price tokens, ensuring secure communications without compromising privacy. The system operates through several phases: Demand Resolution converts requests into structured demands; VCI signals buyer intent cryptographically to eligible sellers; Anonymous Reverse Auction ranks offers locally using deterministic algorithms, maintaining fairness and privacy. In Just-In-Time Identity Release, buyer identity is protected until transaction settlement via seller-specific keys. Settlement Flow completes transactions through an automated process triggered by a Settlement Click, while the Smart Penalty System (SPS) enforces compliance by issuing penalty instructions for breaches without directly managing funds. Privacy modes allow buyers to dictate post-transaction data usage policies, from execution-only privacy to open use, affecting how sellers utilize transaction data. The protocol supports various levels of buyer agent autonomy, enabling manual to fully autonomous operations within secure frameworks, with mechanisms like Kill Switches and Rate Limiting for enhanced security. Seller compliance is tracked through a dynamic Seller Trust Score based on internal metrics and external reputation data, safeguarding network integrity against scraping and fake participation through Invisible Max Price and score-based traffic throttling. Integration into existing platforms is seamless via APIs, promoting adoption while preventing price collusion through statistical detection methods. 
Challenges include legal enforcement dependencies at lower autonomy levels, solvency attestation in cross-border transactions, and payment interoperability. Future enhancements focus on scalability with PostgreSQL migration, decentralized relays, and privacy mode enforcement, among others. The Government-to-Business (G2B) extension enhances public procurement transparency using a Digital Sealed Bid mechanism, maintaining confidentiality until bids are awarded. O4DB™ is governed as a Sovereign Open-Standard by the author, encouraging community contributions via GitHub. Its roadmap includes multi-currency support and category-specific specifications, with security vulnerabilities reported privately to ensure ecosystem protection under responsible disclosure guidelines. Keywords: #phi4, Anonymous Reverse Auction, Anti-Collusion Mechanism, Broadcast Encryption, Buyer Execution Score, Buyer Privacy Mode, Compliance Reference, Digital Sealed Bid, Dispute Resolution, Ed25519, G2B Extension, HPKE, Incentive Model, Integration Model, Invisible Max Price, Just-In-Time Identity Release, Kill Switch, Legal Agreement, M2M, Network Integrity, Normalization, O4DB, Payment Provider, PostgreSQL, Proof of Conformity, Proxy Node, Rate Limiting, SHA-256, SQLite, Smart Penalty System, Sybil Protection, TTL Expiration, Trust Score, Verified Intent Signal, anonymity, buyer sovereignty, commerce, cryptographic, fingerprint, intent-based, protocol, relay server, transaction, zero-trust
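The SHA-256 auditing mentioned above can be illustrated by hashing a canonical JSON serialization of a VCI payload, so every party derives the same fingerprint regardless of field order. The field names below are hypothetical; O4DB's real wire format is defined by its specification, and signing (Ed25519) is a separate step not shown here.

```python
# Illustrative SHA-256 fingerprint of a (hypothetical) VCI payload.
# Canonicalization: sorted keys, no whitespace, so all parties hash identical bytes.
import hashlib
import json

def vci_fingerprint(payload: dict) -> str:
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

vci = {"item": "laptop", "max_price": 1200, "currency": "EUR", "ttl": 3600}
fp = vci_fingerprint(vci)
print(fp)
# Key order must not matter: a reordered payload yields the same digest.
assert fp == vci_fingerprint({"ttl": 3600, "currency": "EUR",
                              "max_price": 1200, "item": "laptop"})
```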
    github.com 4 days ago
   https://o4db.org/sandbox/buyer.html   4 days ago
   https://o4db.org/sandbox/seller.html   4 days ago
   https://notebooklm.google.com/notebook/6732e745-363c-41   4 days ago
974.  HN Show HN: Open-sourced a web client that lets any device use Apple's on-device AI
Perspective Intelligence Web is an open-source platform that facilitates access to Apple's on-device AI models through a browser interface on various devices, including phones, Windows laptops, and Chromebooks. The solution operates locally on Macs equipped with Apple Silicon, using the Perspective Server to provide local API access to these AI models without transferring data to the cloud, thereby ensuring user privacy. The system is built around a Next.js application that manages authentication and the user interface while communicating with the Perspective Server running on the user's Mac. This setup allows for real-time streaming responses across multiple devices. Key features include chat functionalities utilizing eight specialized AI agents, auto-classification of conversations, and options for authentication via email/password or Apple Sign-In. To deploy Perspective Intelligence Web, users must download the Perspective Server to a compatible Mac and execute installation scripts from a GitHub repository on any device within their network. The setup requires macOS 26+, PostgreSQL, and Node.js 20+. The project is designed with community involvement in mind, available under the MIT License to encourage easy adoption and customization. It appeals particularly to users who prioritize privacy while leveraging AI capabilities. Keywords: #phi4, AI agents, Apple Intelligence, Apple Silicon, Authentication, Auto-update, Contributors, Dark theme, Environment variables, Local API, MIT License, Multi-device access, Nextjs, Nodejs, Open-source, Perspective Intelligence Web, PostgreSQL, Real-time chat, Streaming responses, Tailwind CSS, Tech stack, TypeScript, macOS
    github.com 4 days ago
999.  HN Show HN: SaaS Forge – Open-Source SaaS Boilerplate Generator
SaaS Forge is an open-source project that offers a boilerplate generator aimed at streamlining the creation of SaaS applications by providing a modular framework. This tool allows developers to bypass repetitive setup tasks such as authentication, payments, and logging, focusing instead on building unique product features. It provides two deployment options: an Open-Source CLI for local application scaffolding through command-line commands like `npx saas-forge my-app`, which enables users to select and download desired modules; and a Web Scaffold accessible via a web interface that simplifies feature selection and environment configuration, minimizing potential configuration errors. The generator includes essential features such as email/password authentication, OAuth integrations, payment processing through Dodo Payments or Stripe, PostgreSQL database management using Prisma ORM, Redis caching, logging with Winston, and a user interface built with Tailwind CSS. Additionally, it supports Notion for content management and offers analytics and security tools. SaaS Forge is designed to support developers in focusing on distinctive product development by eliminating the need for boilerplate setup, offering free CLI access while providing a paid option through its web scaffold. The project leverages technologies like Next.js 15, TypeScript, Prisma ORM, Redis (via Upstash), organized within a Turborepo structure, and includes tools for testing, linting, and CI/CD processes. Users can deploy their applications on platforms such as Vercel that support Next.js. SaaS Forge is MIT licensed and hosted on GitHub with live demos available; it encourages feedback and contributions to enhance the tool. Future development plans for SaaS Forge include adding multi-tenancy support, advanced access control, team collaboration features, mobile app integration, GraphQL implementation, and internationalization capabilities. 
The project acknowledges contributions from various open-source projects that aid in its functionality. Keywords: #phi4, A/B Testing, API, API Key Management, Analytics, Analytics Dashboard, Auth, Better Auth, BetterStack, Boilerplate Generator, CLI, CMS, Caching, Collaboration, Database, Documentation, Dodo Payments, ESLint, Email, Email Templates, Framer Motion, GitHub Actions, GraphQL, Landing Pages, Legal Pages, Logging, Logtail, Mobile App, Monorepo, Multi-tenancy, N8n, Newsletter, Nextjs, Notion, OAuth, Payments, PostgreSQL, Prettier, Prisma ORM, RBAC, React Query, Redis, Resend, SaaS, Security, Social Login, Storage, Stripe, Support Forms, Tailwind CSS, Turborepo, TypeScript, UI, Upstash, Vercel, Vitest, Web Scaffold, Webhooks, Winston, i18n, pnpm, shadcn/ui, tRPC
    github.com 4 days ago
1014.  HN Cross-Lingual News Dedup at $100/Month – Embeddings, Pgvector, and UnionFind
The article describes a cost-effective solution for cross-lingual news deduplication using embeddings and vector databases, managed within a $100/month budget. The system aggregates news from over 180 RSS sources in 17 languages via 3mins.news, employing multilingual embeddings to identify duplicate articles about the same event across different languages. The deduplication process consists of two main steps: initially, new articles are matched against existing story clusters using KNN queries within a PostgreSQL database enhanced by the pgvector extension; those that match based on vector similarity and temporal relevance are grouped into existing stories. Unmatched articles then undergo item-to-item KNN to form new clusters, with the UnionFind algorithm identifying connected components to group similar articles representing new events. The system utilizes PostgreSQL with the pgvector extension for all vector operations, eliminating the need for external databases. HNSW indexes boost performance by enabling fast nearest neighbor searches, and batching strategies optimize costs and efficiency in translation and scoring processes using various large language models (LLMs). The entire pipeline is orchestrated on Cloudflare Workers and related services to ensure cost-effective scaling as user numbers increase. By performing vector computations within the database rather than in-memory on workers, the architecture respects memory constraints of Cloudflare's serverless environment, allowing 3mins.news to efficiently deliver AI-curated news across multiple languages while maintaining low operational costs. Keywords: #phi4, Batch Processing, Cloudflare Workers, Cost Optimization, Cross-Lingual Deduplication, Embeddings, HNSW Indexes, KNN, LSH, MinHash, Multilingual News, Pgvector, PostgreSQL, Shingling, Story Clustering, Translation Batching, UnionFind, Vector Operations
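The clustering step described above (item-to-item KNN followed by connected components) can be sketched with a minimal union-find; this assumes the similarity pairs have already been produced by the pgvector KNN query, and the article IDs are illustrative:

```python
class UnionFind:
    """Union-find with path compression, for grouping duplicate articles."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

# Hypothetical output of the item-to-item KNN step: article-id pairs whose
# embedding similarity exceeded the clustering threshold.
similar_pairs = [("en-1", "fr-7"), ("fr-7", "de-3"), ("en-2", "es-9")]

uf = UnionFind()
for a, b in similar_pairs:
    uf.union(a, b)

# Group articles by connected component: each component is one new story.
stories = {}
for article in {x for pair in similar_pairs for x in pair}:
    stories.setdefault(uf.find(article), set()).add(article)

print(sorted(sorted(group) for group in stories.values()))
# [['de-3', 'en-1', 'fr-7'], ['en-2', 'es-9']]
```

Transitive links (en-1 ~ fr-7 and fr-7 ~ de-3) end up in one story even though en-1 and de-3 were never directly matched, which is exactly what the connected-components step buys over pairwise thresholding.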
    yingjiezhao.com 4 days ago
1016.  HN Pg_QoS v1.0.0 stable release is out
Pg_QoS v1.0.0 has been released as a PostgreSQL extension that introduces Quality of Service (QoS) style resource governance for both sessions and queries. This extension facilitates the enforcement of limits based on roles and databases, controls CPU usage by binding processes to specific cores on Linux systems, and manages concurrent transactions and statements. Additionally, it restricts session-based work memory allocation and implements fast cache invalidation using a shared epoch mechanism, ensuring equitable resource distribution among different workloads within a PostgreSQL instance. This extension is compatible with PostgreSQL version 15 or higher and is officially supported on Debian 13, Ubuntu 24.04, RHEL 10, AlmaLinux 10, and CentOS Stream 10, with native packages available in the repository releases section. Developed by Appstonia, Pg_QoS encourages community engagement for feedback, suggestions, and contributions through its GitHub repository at https://github.com/appstonia/pg_qos. Keywords: #phi4, ALTER ROLE/DATABASE, AlmaLinux, Appstonia, CPU usage, CentOS Stream, Debian, GitHub, Linux, Pg_QoS, PostgreSQL, Quality of Service, Red Hat Enterprise Linux, Ubuntu, cache invalidation, extension, feedback, queries, resource governance, sessions, transactions, work_mem
    www.postgresql.org 4 days ago
1032.  HN Oxyde ORM – a type-safe, Pydantic-centric asynchronous ORM with a Rust core
Oxyde ORM is a type-safe, asynchronous object-relational mapping tool designed for Python, leveraging Pydantic and Rust to deliver high performance with clarity and reliability. It features a Django-inspired API that emphasizes explicitness, making it accessible for developers familiar with Django's syntax, such as using `Model.objects.filter()`. Oxyde integrates fully with Pydantic v2, offering comprehensive validation, type hints, and serialization, while supporting asynchronous operations through Python’s asyncio framework. The core of Oxyde is implemented in Rust, enhancing SQL generation and execution efficiency. It supports major databases including PostgreSQL, SQLite, and MySQL, with requirements for specific minimum versions to utilize advanced features like RETURNING, UPSERT, FOR UPDATE/SHARE, JSON handling, and arrays. Its Django-style migration system allows smooth database schema management through commands such as `makemigrations` and `migrate`. In performance comparisons, Oxyde demonstrates favorable benchmarks against established Python ORMs like Tortoise, Piccolo, SQLAlchemy, SQLModel, Peewee, and the original Django ORM, particularly in operations per second across various databases. Installation is straightforward via pip, with a comprehensive quick start guide available for setting up projects, defining models, handling migrations, and executing CRUD operations asynchronously. Oxyde supports transactions through atomic context managers and integrates seamlessly with FastAPI. The project's documentation is thoroughly detailed on its official website, encouraging community involvement through GitHub contributions under the open-source MIT license. 
Keywords: #phi4, Django-style, Django-style API, FastAPI, FastAPI integration, MySQL, Oxyde ORM, PostgreSQL, Pydantic, Pydantic-centric, Rust, Rust core, SQL, SQL generation, SQLite, async Python, asynchronous, benchmarks, migrations, multi-database, performance benchmarks, transactions
    github.com 4 days ago
1041.  HN Show HN: PulseWatch – AI-powered website change monitoring with visual selectors
PulseWatch is an AI-driven application, built by a solo developer, that streamlines website change detection without requiring manually coded CSS selectors. It uses GPT-4o to analyze screenshots of web pages and recommend elements to track via visual selection. When a monitored site changes, the tool notifies users with plain-language summaries rather than raw diffs. Built on a technology stack that includes .NET 8, Flutter for cross-platform compatibility (web, iOS, Android), PostgreSQL, Railway, and Vercel, PulseWatch offers a free tier with up to two monitors checked daily. Additional details and demonstrations are available through an associated YouTube link, and an API allows monitoring to be set up programmatically. Keywords: #phi4, AI-powered, API, Android, CSS selectors, Flutter, GPT-4o, JSON, NET 8, PostgreSQL, PulseWatch, Railway, Vercel, daily checks, demo, free tier, iOS, notifyOnChange, screenshots, solo dev, tech stack, visual selectors, web, website monitoring
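A monitor configuration for such an API might look roughly like the following sketch; the field names (including `notifyOnChange`, taken from the keywords) are assumptions based on this summary, not PulseWatch's documented API:

```python
import json

# Hypothetical monitor configuration; field names are illustrative only.
monitor = {
    "url": "https://example.com/pricing",
    "notifyOnChange": True,
    "checkInterval": "daily",  # free tier: daily checks
    "summary": "ai",           # AI-written change summaries instead of raw diffs
}

payload = json.dumps(monitor)
# The request itself would be an authenticated POST to the PulseWatch API,
# e.g. via urllib or requests; omitted here since the endpoint is assumed.
print(payload)
```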
    pulsewatch.watch 5 days ago
1045.  HN Better JIT for Postgres
"pg_jitter" is an advanced Just-In-Time (JIT) compilation provider for PostgreSQL versions 14 through 18, designed to enhance query execution performance by offering three alternative backends—sljit, AsmJit, and MIR. These alternatives improve upon the existing LLVM-based JIT in Postgres by providing significantly faster compilation times while maintaining potential execution speed advantages. The key features of "pg_jitter" include improved compilation speeds ranging from tens to hundreds of microseconds for sljit, which enhances performance across various workloads with up to a 25% boost over traditional interpreters. AsmJit is optimized for deform-heavy queries, achieving up to 32% faster execution, while MIR balances performance gains with portability benefits. The backends differ in specialization: sljit ensures the fastest and most consistent compilation speed; AsmJit focuses on optimizing wide-row and heavy-query scenarios; MIR offers portability alongside solid performance enhancements. However, users must be mindful of JIT's potential to introduce slight slowdowns (up to ~1ms) due to cold cache effects and memory pressure, which suggests caution for high-rate query systems with very fast queries. Configuration flexibility is provided through `ALTER SYSTEM` commands that allow backend selection or runtime switching using a meta provider without requiring system restarts. Users should adjust the `jit_above_cost` parameter based on their chosen backend and workload characteristics to optimize performance further. The installation prerequisites include PostgreSQL 14–18, development headers, CMake version 3.16 or higher, and compatible C11/C++17 compilers. Backend libraries must be installed in sibling directories, with a specific patched version of MIR required for additional functionalities. Detailed build instructions are available for individual backends as well as combined builds, including optional LLVM or c2mir pipelines for precompiled function blobs. 
Despite being considered beta-quality, "pg_jitter" successfully passes standard PostgreSQL regression tests and demonstrates performance improvements in benchmarks, though large-scale production verification is still pending. Testing scripts included offer capabilities such as correctness checks, benchmarking across various backends and versions, cache impact analysis, and memory leak detection. Licensed under the Apache License 2.0, "pg_jitter" provides a comprehensive enhancement to PostgreSQL's JIT capabilities, offering users faster compilation times and optimizations tailored for specific query workloads or system architectures. Keywords: #phi4, ARM64, AsmJit, JIT, LLVM, MIR, OLAP, OLTP, PostgreSQL, ResourceOwner, backends, benchmarks, bitcode, compatibility, compilation, expression-heavy, memory management, optimization, performance, precompiled functions, sljit, x86_64
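The configuration workflow described above can be sketched in SQL. `jit_provider` and `jit_above_cost` are standard PostgreSQL settings (`jit_provider` is read at server start); the exact provider library name for pg_jitter is an assumption here, and the cost threshold is only an illustrative starting point:

```sql
-- Point PostgreSQL at the alternative JIT provider library
-- (exact library name may differ; jit_provider is set at server start):
ALTER SYSTEM SET jit_provider = 'pg_jitter';

-- Lower the JIT cost threshold: because sljit/AsmJit/MIR compile in
-- microseconds rather than milliseconds, cheaper queries can now benefit.
ALTER SYSTEM SET jit_above_cost = 25000;
SELECT pg_reload_conf();
```

As the summary notes, the right `jit_above_cost` depends on the chosen backend and workload; systems with many very fast queries may want a higher threshold to avoid cold-cache compilation overhead.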
    github.com 5 days ago
   https://www.postgresql.org/docs/current/sql-prepar   4 days ago
   https://www.postgresql.org/docs/current/parallel-q   4 days ago
   https://thinkingmachines.ai/blog/defeating-nondetermini   4 days ago
   https://umbra-db.com/   4 days ago
   https://ieeexplore.ieee.org/document/10444855   4 days ago
   https://dl.acm.org/doi/10.1145/3276494   4 days ago
   https://arxiv.org/pdf/2603.02081   4 days ago
   https://pkg.go.dev/github.com/jackc/pgx/v5#hd   4 days ago
   https://www.psycopg.org/psycopg3/docs/advanced   4 days ago
   https://learn.microsoft.com/en-us/sql/relational-d   4 days ago
   https://learn.microsoft.com/en-us/sql/t-sql/q   4 days ago
   https://en.wikipedia.org/wiki/Prepared_statement   4 days ago
   https://www.ibm.com/docs/en/i/7.4.0?topic=ove   4 days ago
   https://docs.oracle.com/en/database/oracle/or   4 days ago
   https://learn.microsoft.com/en-us/sql/relational-d   4 days ago
   https://help.sap.com/docs/SAP_HANA_PLATFORM/6b9444   4 days ago
   https://www.postgresql.org/docs/current/runtime-co   4 days ago
   https://www.michal-drozd.com/en/blog/postgresql-pr   4 days ago
   https://www.postgresql.org/message-id/flat/8e76d8f   2 days ago
   https://learn.microsoft.com/en-us/sql/relational-d   2 days ago
   https://learn.microsoft.com/en-us/sql/relational-d   2 days ago
1051.  HN MachineAuth: Open source Authentication infrastructure for AI agents
MachineAuth is an open-source authentication infrastructure tailored specifically for AI agents, providing secure and scalable access to APIs, tools, and services through OAuth 2.0 Client Credentials using short-lived JWTs with RS256 asymmetric signing. It offers a comprehensive framework that supports token introspection, revocation, refresh mechanisms, and webhook notifications, alongside an intuitive dashboard built with React, TypeScript, and Tailwind CSS. The system includes key functionalities such as agent management with CRUD operations, scoped access control, usage tracking, and self-service capabilities for agents. Additionally, it supports multi-tenant architecture through organizations and teams, as well as API key management. MachineAuth facilitates easy setup by providing sample code to clone the repository and run a local server using either JSON file storage or PostgreSQL in production environments. Client libraries are available for TypeScript and Python to ensure seamless integration with existing systems, while configuration is managed via environment variables that allow customization of database settings, token expiry times, CORS policies, and webhook worker counts. Security best practices emphasized include the use of HTTPS, regular credential rotation, short token expiration, restricted CORS origins, and secure admin password management. Contributions to MachineAuth are encouraged, with detailed guidelines available in their documentation. The project is licensed under MIT, making it widely accessible for diverse applications within the AI ecosystem. Keywords: #phi4, AI agents, API access, Access control, Audit logging, Authentication, Best Practices, CORS, Credential rotation, Docker Compose, Go Server, HTTPS, Identity, JSON storage, JWT, MachineAuth, Multi-tenant, OAuth, Permission, PostgreSQL, Postgres, React Dashboard, Security, Token expiry, TypeScript SDK, Webhooks
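The short-lived JWT mechanism described above can be illustrated with a stdlib-only sketch that decodes a token's header and payload and checks the `exp` claim. This deliberately does not verify the RS256 signature, which real validation must do with the issuer's public key (e.g. via a JWT library); the claim names below are illustrative:

```python
import base64
import json
import time

def b64url_decode(part: str) -> bytes:
    # JWT segments are base64url without padding; restore it before decoding.
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def inspect_jwt(token: str) -> dict:
    """Decode a JWT's header and payload and check the exp claim.

    Does NOT verify the RS256 signature; production code must verify it
    against the issuer's public key before trusting any claim."""
    header_b64, payload_b64, _signature = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    payload = json.loads(b64url_decode(payload_b64))
    expired = payload.get("exp", 0) <= time.time()
    return {"alg": header.get("alg"), "claims": payload, "expired": expired}

def b64url(obj) -> str:
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")

# Build a toy short-lived token (unsigned, for illustration only).
token = ".".join([
    b64url({"alg": "RS256", "typ": "JWT"}),
    b64url({"sub": "agent-42", "scope": "tools:read",
            "exp": int(time.time()) + 300}),  # 5-minute lifetime
    "fake-signature",
])
info = inspect_jwt(token)
print(info["alg"], info["expired"])  # RS256 False
```

Short expiry means a leaked token is only useful briefly, which is why refresh and revocation endpoints accompany it in systems like the one described.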
    github.com 5 days ago
1073.  HN Show HN: PreflightAPI – US airports, weather, NOTAMs and more via one API
PreflightAPI, developed by a private pilot and software engineer, serves as an advanced aviation data service offering comprehensive information for US airports, weather, NOTAMs, and more through a unified API platform. The developer originally built an extensive data infrastructure, capable of handling complex datasets such as FAA airport details, obstacle files, weather updates, and airspace boundaries, to support a 3D VFR flight planning tool. However, legal challenges from a former employer led to shelving the initial app concept, prompting the pivot towards PreflightAPI. This service aggregates diverse aviation data sets into PostgreSQL with PostGIS, employing Azure Functions cron jobs for synchronization, which ensures low latency by avoiding external API calls during data retrieval. PreflightAPI provides access to an array of features: it includes information on over 19,600 US airports and offers real-time weather updates like METARs and TAFs. The service allows spatial queries for NOTAMs, presents airspace boundaries in GeoJSON format, and includes obstacle data essential for flight planning. Additional functionalities comprise various E6B utilities, VFR navlog generation, and a composite briefing endpoint that consolidates weather conditions, NOTAMs, and hazard information along specified routes. Currently available at no charge for up to 5,000 monthly calls without requiring a credit card, the API has already secured at least one paying customer since its launch. The developer is actively seeking user feedback on the API's design, exploring potential enhancements or missing features, and gauging overall interest from users.
Keywords: #phi4, API, Airspace boundaries, ArcGIS REST endpoints, Azure Functions, Digital Obstacle file, E6B utilities, FAA airport data, GeoJSON, NASR subscription, NMS system, NOTAMs, OAuth2 token management, PostGIS, PostgreSQL, PreflightAPI, US airports, VFR navlog generation, aviationweathergov, composite briefing endpoint, developer-ready, flight planning tool, free tier, fuel tracking, latency, obstacles, private pilot, software engineer, weather, winds aloft interpolation
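Winds-aloft interpolation, one of the utilities listed in the keywords, is typically a linear interpolation between reporting altitudes; a minimal sketch (the function name and data shape are assumptions, not PreflightAPI's interface):

```python
def interpolate_winds(altitude_ft, levels):
    """Linearly interpolate wind speed between winds-aloft reporting levels.

    `levels` is a sorted list of (altitude_ft, wind_speed_kt) tuples.
    Real winds also need direction interpolated, ideally via vector
    components; speed alone is shown here for brevity."""
    if altitude_ft <= levels[0][0]:
        return levels[0][1]
    if altitude_ft >= levels[-1][0]:
        return levels[-1][1]
    for (lo_alt, lo_spd), (hi_alt, hi_spd) in zip(levels, levels[1:]):
        if lo_alt <= altitude_ft <= hi_alt:
            frac = (altitude_ft - lo_alt) / (hi_alt - lo_alt)
            return lo_spd + frac * (hi_spd - lo_spd)

# Example reporting levels: 3000 ft at 12 kt, 6000 ft at 20 kt, 9000 ft at 32 kt.
levels = [(3000, 12), (6000, 20), (9000, 32)]
print(interpolate_winds(4500, levels))  # 16.0 (halfway between 12 and 20)
```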
    preflightapi.io 5 days ago
1109.  HN Aegis - A safe, auditable, replayable agentic guardrails framework
Aegis is an open-source control plane designed to enhance the security and auditability of AI agents by acting as a barrier between these agents and external interactions. It enforces strict capability policies using a "deny-by-default" approach, ensuring unauthorized actions such as undeclared tool calls or resource budget excesses are denied. The framework features cryptographically-linked audit logs that ensure every action is recorded tamper-evidently, along with deterministic replay capabilities for precise reenactment of agent runs, aiding in debugging and compliance. Aegis defines capability policies within a manifest file, detailing permitted tools, network domains, compute budgets, and other constraints. It incorporates security measures to guard against prompt injection, tool-call loops, and unapproved destructive actions. The framework supports diverse deployment environments through Docker Compose configurations for both development (using SQLite) and production (with PostgreSQL), integrating an HTTP API for policy decisions and leveraging the Open Policy Agent (OPA) with Rego language policies. The Aegis CLI tool and Python SDK facilitate interaction, emphasizing agent safety at the infrastructure level by including integrity verification, budget constraints, taint tracking for prompt injections, and compliance reporting. Its structured repository layout and comprehensive documentation encourage contributions and testing, ensuring AI agents operate safely within predefined boundaries while maintaining transparency and accountability in their actions. Keywords: #phi4, AI agent, Aegis, Docker Compose, MIT license, OPA, PostgreSQL, Rego, SQLite, approval router, audit log, capability policies, conformance reports, control plane, deterministic replay, event log, integration tests, loop detector, manifest, policy engine, replayable, sandbox, taint tracker, telemetry
    github.com 5 days ago
1115.  HN Crossview has been moved to crossplane-contrib
Crossview is a contemporary React-based dashboard designed for the management and monitoring of Crossplane resources within Kubernetes environments, now hosted in the crossplane-contrib repository. It delivers real-time resource tracking using event-driven updates facilitated by Kubernetes Informers and supports multi-cluster contexts, allowing seamless management across various Kubernetes clusters. The dashboard offers comprehensive visualization of Crossplane resources, detailing status conditions, metadata, events, and relationships, all while maintaining a modern user interface supported by React and Chakra UI with dark mode capabilities. The backend is built using Go and Gin, providing high performance with features such as WebSocket support for real-time updates and Single Sign-On (SSO) integration through OIDC and SAML authentication. Getting started with Crossview requires prerequisites like Node.js 20+, Go 1.24+, a PostgreSQL database, and a Kubernetes config file. The setup involves installing dependencies via `npm install`, configuring the application using environment variables or configuration files for database settings, and running both frontend and backend in development mode. For production deployment, users can build the frontend with `npm run build` and serve it alongside the Go server. Crossview supports flexible deployments through Helm charts and Docker across various environments. The backend API offers RESTful endpoints for a variety of functionalities including health checks, Kubernetes context management, resource listing and retrieval, event fetching, real-time updates via WebSocket, user authentication, and logout. Configuration prioritizes environment variables over config files, with detailed guides available for deployment using either Helm or Kubernetes manifests. 
Crossview fosters community engagement by encouraging contributions under the Apache License 2.0 and providing extensive documentation covering setup, features, deployment, troubleshooting, and adherence to a Code of Conduct. In essence, Crossview stands out as an advanced dashboard solution offering robust support for managing Crossplane resources on Kubernetes with real-time monitoring capabilities, multi-cluster management, and modern user interface design. Keywords: #phi4, Authentication, Community, Configuration, Crossplane, Dashboard, Deployment, Docker, GORM, Gin, Go, Helm, Kubernetes, Multi-Cluster, OIDC, Open Source, PostgreSQL, React, Real-Time Updates, Resource Visualization, SAML, SSO, Vite, WebSocket
    github.com 5 days ago
   https://github.com/crossplane-contrib/crossview   5 days ago
   https://artifacthub.io/packages/helm/crossview   5 days ago
1126.  HN JSON Documents Performance, Storage and Search: MongoDB vs. PostgreSQL
The article conducts a comparative analysis between MongoDB and PostgreSQL focusing on their performance in handling JSON documents across various operations such as inserts, updates, finds, deletes, and mixed workloads. It reveals that both databases exhibit strengths in different scenarios. For instance, MongoDB performs optimally with batch inserts and large document sizes, while PostgreSQL excels in single-document operations and deletion tasks. In terms of specific operations: for inserts, both systems perform similarly with smaller documents, but PostgreSQL slightly outperforms in larger ones; however, MongoDB leads significantly in batch insertions. Updates favor MongoDB for individual account IDs due to superior throughput and latency, though PostgreSQL has lower latency with large product document updates. When it comes to finding documents, PostgreSQL is quicker with single-document queries by ID, whereas MongoDB excels in sorted multi-document searches and handling multiple large documents using array fields. For delete operations, PostgreSQL consistently shows better performance both in terms of speed (throughput) and delay (latency). In mixed workloads involving all operations, MongoDB slightly outperforms PostgreSQL for accounts due to its efficient batch processing capabilities. Overall, in a head-to-head comparison across 17 test cases, PostgreSQL edges out with more victories based on throughput and latency metrics. The choice between the two databases depends heavily on specific use-case requirements, as each has scenarios where it performs better. The document further evaluates storage efficiency, querying capabilities, and data modification features of both systems. MongoDB demonstrates greater storage efficiency for JSON data, requiring significantly less space compared to PostgreSQL. 
In terms of querying, MongoDB offers a more intuitive query language that resembles JavaScript, while PostgreSQL uses SQL with extensive JSON functions but lacks certain functionalities like range queries in GIN indexes. Both databases effectively manage inserts, updates, and deletes, yet MongoDB's design allows for more flexible partial document modifications. The conclusion emphasizes PostgreSQL’s competitive performance against MongoDB, highlighting its comprehensive support for JSON, ACID compliance, and ability to integrate relational models with document-oriented approaches. This suggests that a separate database system solely for JSON documents might be unnecessary given PostgreSQL’s versatility and robust capabilities. Keywords: #phi4, ACID, B-tree, Batch Operations, Benchmarking, Compression, Configuration, Data Manipulation, Data Models, Deletes, Docker, Document-Oriented, Documents, Finds, GIN, Indexes, Inserts, JSON, Latency, Mixed Workloads, MongoDB, NoSQL, Percentile, Performance, PostgreSQL, Queries, Query Rate, Relational Database, SQL, Schemaless, Search, Shared Buffers, Storage, Tables, Test Cases, Throughput, Transactions, Updates, WiredTigerCacheSizeGB, Workload
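PostgreSQL's JSONB querying mentioned above relies on containment operators backed by GIN indexes; a minimal example using standard PostgreSQL syntax (and, as the article observes, a GIN index does not serve range predicates, so those need another strategy such as an expression B-tree index):

```sql
CREATE TABLE accounts (id bigserial PRIMARY KEY, data jsonb NOT NULL);
CREATE INDEX accounts_data_gin ON accounts USING GIN (data);

-- Containment query served by the GIN index:
SELECT id, data->>'name' AS name
FROM accounts
WHERE data @> '{"status": "active"}';

-- A range predicate on a JSON field falls back to other strategies,
-- e.g. an expression B-tree index:
CREATE INDEX accounts_balance_btree
    ON accounts (((data->>'balance')::numeric));
SELECT id FROM accounts WHERE (data->>'balance')::numeric > 1000;
```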
    binaryigor.com 5 days ago
1130.  HN Show HN: Dbcli – A Lightweight Database CLI Designed for AI Agents
Dbcli is a streamlined command-line interface (CLI) tailored for AI applications requiring quick and efficient access to relational databases. It allows database introspection and querying through a simple `dbcli snap` command that provides essential schema information, table relationships, and basic data profiling while optimizing token usage in workflows. Dbcli supports various databases such as PostgreSQL, MySQL, MariaDB, SQLite, DuckDB, ClickHouse, and SQL Server, using optional drivers to facilitate its operations. Users can execute queries, run SQL files, and write data directly from the CLI without needing a server process or external service. The tool is installed locally with `pip install -e .`, making it an agent-agnostic alternative to more complex protocol-based methods and operable on any system that supports shell commands. Developers are encouraged to provide feedback, especially those creating AI agents or tools that require structured database access, and are invited to explore the GitHub repository for further details. Keywords: #phi4, AI Agents, CLI, ClickHouse, Data Profiling, Database Access, Dbcli, DuckDB, Feedback, GitHub Repo, Introspection, MariaDB, MySQL, Pip Install, PostgreSQL, Querying, SQL Server, SQLite, Schema Details, Shell Access, Structured Database Access, Table Relationships
    news.ycombinator.com 5 days ago
1164.  HN Show HN: AgentCost – Track, control, and optimize your AI spending (MIT)
AgentCost is a comprehensive open-source solution developed to track and optimize expenses related to AI models, particularly targeting services from OpenAI, Anthropic, Google, and others. It provides seamless integration through Python and TypeScript SDKs, enabling users to effortlessly incorporate cost monitoring into their existing workflows. The tool's core functionality includes dashboards that offer insights into cost metrics, forecasts, model optimization recommendations, and pre-call cost estimations across 42 models. Additionally, it suggests switching between AI models for potential cost savings and integrates with popular frameworks like LangChain, CrewAI, AutoGen, and LlamaIndex. AgentCost is equipped with a command-line interface (CLI) for benchmarking and comparing different models, as well as a plugin system that allows users to extend its functionality with features such as Slack alerts or S3 archiving. For enterprise-level governance, it provides advanced features under the Business Source License (BSL 1.1), including single sign-on (SSO), budget enforcement, policy engines, approval workflows, notifications, anomaly detection, and an AI gateway proxy. The technical foundation of AgentCost includes a Python/FastAPI API server with support for SQLite in community editions or PostgreSQL in enterprise solutions. It features a React-based dashboard for user interaction and TypeScript SDKs to facilitate development. The tool is available in two main editions: the Community Edition, which can be rapidly deployed using Docker for smaller-scale applications, and the Enterprise Edition, offering enhanced governance capabilities like SSO/SAML integration with Keycloak. AgentCost is open-source under an MIT license for its core components, while enterprise-level features are distributed under a BSL 1.1 license. 
Users interested in contributing or seeking further details can refer to their GitHub repository and documentation site, where feedback from users managing AI costs at scale is actively encouraged to enhance the tool's effectiveness. Keywords: #phi4, AI spending, AgentCost, Anthropic, FastAPI, LLM proxy, OpenAI, PostgreSQL, Python, SDKs, SQLite, SSO, TypeScript, anomaly detection, control, cost forecasting, dashboard, enterprise features, model optimization, observability stack, optimization, plugins, policy engine, tracking
    github.com 5 days ago
1171.  HN Production Agentic RAG Course
The "Production Agentic RAG Course" is a hands-on learning initiative designed to teach participants how to build advanced Retrieval-Augmented Generation (RAG) systems from the ground up, culminating in a production-grade research assistant capable of curating academic papers from arXiv. The course spans seven weeks, starting with setting up infrastructure using Docker, FastAPI, PostgreSQL, OpenSearch, and Airflow. Subsequent weeks guide learners through data ingestion from arXiv, implementing keyword search via BM25, integrating hybrid retrieval methods for semantic understanding, and finally developing a complete RAG pipeline featuring a local language model with streaming responses via Gradio. Week six focuses on optimizing performance with monitoring and caching, while week seven introduces intelligent reasoning capabilities using LangGraph and a Telegram bot for mobile access. This course emphasizes practical implementation over theory, adhering to industry best practices by laying solid search foundations before integrating AI advancements. Key features include building an AI research assistant that can fetch, understand, and answer questions about academic papers, with comprehensive learning materials like notebooks and blog posts guiding each phase. Prerequisites include Docker Desktop, Python 3.12+, UV Package Manager, 8GB+ RAM, and 20GB+ free disk space. By the end, participants will possess a complete RAG system applicable to any domain, along with deep technical skills in AI engineering and production-grade architecture understanding. The course is freely accessible, requiring minimal costs for optional services, making it suitable for AI/ML engineers, software engineers, and data scientists aiming to enhance their expertise in modern AI systems. 
Keywords: #phi4, AI Engineering, AI Project, Agentic RAG, Airflow, Apache Airflow, BM25, Cost Optimization Keywords: Production RAG, Docker, Docker Compose, Document Grading, FastAPI, FastAPI Documentation, Gradio Interface, Guardrails, Hands-on Implementation, Hybrid Retrieval, Intelligent Decision-Making, Interactive API Testing, Jina AI, Keyword Search, LangGraph, Langfuse, Langfuse Tracing, Learner-Focused, Local LLM, Mobile Access, Ollama, OpenSearch, Phase 1, PostgreSQL, Production Monitoring, Production RAG, Python, Query Rewriting, Redis, Redis Caching, Retrieval-Augmented Generation, Semantic Understanding, Streaming Responses, Telegram Bot, Transparency, UV Package Manager, Workflow Management, arXiv Paper Curator
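The keyword-search stage described above (BM25, which the course runs on OpenSearch) can be illustrated with a minimal in-memory scorer; this is a sketch of the ranking formula itself, not the course's OpenSearch setup:

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.2, b=0.75):
    """Score each tokenized document against the query with BM25."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    # Inverse document frequency per query term (standard BM25 idf).
    df = {t: sum(1 for d in docs if t in d) for t in query_terms}
    idf = {t: math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5)) for t in query_terms}
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if tf[t] == 0:
                continue
            # Term-frequency saturation (k1) and length normalization (b).
            s += idf[t] * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [
    "retrieval augmented generation for papers".split(),
    "docker compose setup guide".split(),
    "paper retrieval with hybrid search".split(),
]
scores = bm25_scores(["retrieval", "papers"], docs)
print(max(range(len(docs)), key=scores.__getitem__))  # doc 0 ranks highest
```

In the hybrid-retrieval weeks, scores like these are combined with vector-similarity scores so that exact keyword matches and semantic matches both contribute to the final ranking.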
    github.com 5 days ago
1172.  HN Show HN: WordPress for Voice Agents – Unpod.ai
Unpod.ai has introduced Unpod, an open-source platform designed to streamline the development of conversational voice agents by integrating various AI technologies into a cohesive infrastructure. It combines speech-to-text (STT), large language models, text-to-speech (TTS), and telephony capabilities, enabling developers to create AI-driven communication systems across multiple channels such as voice calls, WhatsApp, and email. Unpod's key features include customizable AI agents built on large language models, real-time processing with minimal latency, and a no-code visual builder for configuring these agents. It supports multi-tenant workspaces, dedicated phone numbers via SIP trunking, and provides call analytics through real-time dashboards. Furthermore, it offers workflow automation and seamless integration with other business tools. The platform is structured as an NX monorepo, utilizing technologies such as Next.js, Django, FastAPI, and Tauri for cross-platform desktop support, alongside a tech stack comprising PostgreSQL, MongoDB, Redis, Kafka (KRaft), and Centrifugo v5 for messaging. Developers looking to utilize Unpod must have Node.js 20+, npm 10+, Python 3.11+, Docker, and optionally uv installed. Setup can be achieved through a single command script or manually handling dependencies and running migrations, with necessary environment variables required for configuration. Unpod fosters community contributions via feature branches from the main branch, with comprehensive guidelines available on their documentation site. The project is distributed under the MIT License, promoting open collaboration and innovation in AI-driven communication solutions. 
Keywords: #phi4, AI Infrastructure, Agent Studio, Centrifugo, Communication Platform, Conversational Agents, Django, Docker, FastAPI, Kafka, Knowledge Base, LLMs, LiveKit, MongoDB, Multi-Channel, NX Monorepo, Open-Source, Pipecat, PostgreSQL, Prefect, RAG, RBAC, Real-Time Pipeline, Redis, SIP Trunking, STT, TTS, Tauri, Telephony Integration, Unpod, Voice Agents, WordPress, Workflow Automation
    github.com 5 days ago
1177.  HN Show HN: I built a new programming language for AI and Data – 'ThinkingLanguage'
ThinkingLanguage is a new programming language designed specifically to enhance AI and data processing tasks, built in five days. Its primary goal is to streamline complex workflows that typically require multiple tools and languages by integrating essential functions such as glue code, data transformation, scaling operations, and orchestration into a single cohesive language framework. The language features a straightforward syntax using a pipe operator for native operations like filtering, joining, and aggregating tables. The technical backbone of ThinkingLanguage includes the Apache Arrow format for columnar data representation and the DataFusion engine for optimized query processing. It supports various connectors such as CSV, Parquet, and PostgreSQL, enabling seamless integration with different data sources. Built on Rust, it delivers strong performance, handling up to 1 million rows in milliseconds. Additional capabilities include a Just-In-Time (JIT) compiler, AI/ML functions, streaming with Kafka, GPU support, and the ability to integrate Python libraries through a Foreign Function Interface (FFI). As an open-source project under the Apache License, ThinkingLanguage invites contributions from data engineers and Rust developers. It is readily accessible through tools like npx or direct downloads from its GitHub repository at [GitHub - mplusm/thinkinglanguage](https://github.com/mplusm/thinkinglanguage), promoting a unified language tailored for efficient data-related tasks. Keywords: #phi4, AI, Apache Arrow, Apache License, CSV, CUDA, Cranelift, Data Engineering, DataFusion, GitHub, JIT compiler, Kafka, LLVM, NumPy, Parquet, PostgreSQL, Python FFI Bridge, ROCm, Rust, ThinkingLanguage, context-switching, data engineer, ndarray, open source, programming language, tensor
    thinkingdbx.com 5 days ago
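The pipe-operator data flow described above can be sketched in plain Python. This is not ThinkingLanguage syntax (which the summary does not show); `Table`, `where`, and `sum_by` are hypothetical names used only to illustrate how filter and aggregate steps compose through a pipe:

```python
class Table:
    def __init__(self, rows):
        self.rows = list(rows)

    def __or__(self, op):
        # Piping a table into an operation applies it and returns the result.
        return op(self)

def where(pred):
    # Filter step: keep only rows satisfying the predicate.
    return lambda t: Table(r for r in t.rows if pred(r))

def sum_by(key, value):
    # Aggregate step: sum `value` grouped by `key`.
    def agg(t):
        out = {}
        for r in t.rows:
            out[r[key]] = out.get(r[key], 0) + r[value]
        return out
    return agg

orders = Table([
    {"region": "EU", "amount": 10},
    {"region": "US", "amount": 5},
    {"region": "EU", "amount": 7},
])

totals = orders | where(lambda r: r["amount"] > 5) | sum_by("region", "amount")
print(totals)  # {'EU': 17}
```

The point of a pipe-based design is that each stage is an ordinary value, so pipelines can be built, stored, and reused like any other data.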
1211.  HN Show HN: ScrapAI – We scrape 500 sites. AI runs once per site, not per page
ScrapAI is a command-line interface (CLI) tool developed by DiscourseLab designed to automate the process of web scraping using artificial intelligence. It enables users, including those without technical expertise in Python or Scrapy, to define their scraping needs simply through plain language input. The AI agent within ScrapAI generates extraction rules based on these descriptions, which are then converted into JSON configurations for Scrapy execution. The tool offers several key features: it is scalable and can efficiently handle over 500 websites with minimal human intervention, making it ideal for teams that require automated scraping solutions across multiple sites. It emphasizes ease of use by allowing non-technical users to easily add new projects without needing to write code themselves. The AI component runs only during the initial setup phase per website, ensuring cost efficiency as there are no recurring costs after configuration. Additionally, ScrapAI is a self-hosted solution that provides full user control without vendor lock-in, facilitated by its simple clone-and-run setup. The operation of ScrapAI involves users inputting their scraping requirements, followed by AI-driven analysis of the target site to generate extraction rules stored as JSON in a database. These rules are then employed by a generic Scrapy spider for ongoing use. The architecture integrates an orchestration layer with tools like Scrapy, newspaper4k, and trafilatura for comprehensive content extraction while maintaining high security standards. It validates inputs rigorously and ensures that AI-generated scripts are non-executable, focusing on data integrity. Moreover, ScrapAI includes advanced stealth features designed to bypass Cloudflare protections, ensuring consistent access to target websites. 
Despite its capabilities, it is primarily suited for large-scale scraping operations rather than single-site tasks requiring granular control or sites with complex CAPTCHA and login requirements. The open-source nature of ScrapAI encourages community contributions, particularly in enhancing detection mechanisms for site changes and developing anti-bot technologies beyond Cloudflare. Users are reminded to employ ScrapAI responsibly, adhering to legal standards and respecting the terms of service associated with scraped data. In summary, ScrapAI streamlines web scraping by reducing manual configuration through AI, ensuring scalability, efficiency, and user control across numerous websites. Keywords: #phi4, AI agent, Apache Airflow, CLI, Claude Code, CloakBrowser, Cloudflare, JSON config, PostgreSQL, Pydantic schemas, S3 storage, ScrapAI, Scrapy, anti-bot support, autonomous operation, batch processing, database, ethical scraping, incremental crawling, proxy escalation, scraping, security validation, stealth browser, targeted extraction
    github.com 5 days ago
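The "AI writes rules once, a generic spider runs them forever" flow can be made concrete with a small sketch. The JSON rule format and field names below are illustrative, not ScrapAI's actual schema, and a plain stdlib regex runner stands in for the Scrapy spider:

```python
import json
import re

# Hypothetical AI-generated extraction rules, stored once per site.
rules_json = """
{
  "site": "example.com",
  "fields": {
    "title":  {"pattern": "<h1[^>]*>(.*?)</h1>"},
    "author": {"pattern": "by ([A-Za-z ]+)"}
  }
}
"""

def extract(html, rules):
    # Generic runner: applies each declarative rule to the page source.
    out = {}
    for name, rule in rules["fields"].items():
        m = re.search(rule["pattern"], html, re.S)
        out[name] = m.group(1).strip() if m else None
    return out

rules = json.loads(rules_json)
html = '<h1 class="post">Hello World</h1><p>by Jane Doe</p>'
print(extract(html, rules))  # {'title': 'Hello World', 'author': 'Jane Doe'}
```

Because the rules are data, not code, they can be validated before use and stored safely in a database, which matches the summary's note that AI-generated output is non-executable.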
1213.  HN Show HN: GovMatch – Daily government contract alerts matched to your business
GovMatch is an advanced tool designed to simplify the process of discovering pertinent government contracts by automatically aligning new opportunities from SAM.gov (U.S.) and TED (EU) with business profiles using cosine similarity algorithms. It delivers daily email alerts highlighting top contract matches, thereby removing the need for time-consuming manual searches. The platform leverages modern technologies such as Next.js 14, PostgreSQL paired with pgvector, OpenAI's text-embedding-3-small, Prisma, Stripe, and Vercel to ensure robust functionality and a seamless user experience. GovMatch offers businesses a free seven-day trial without the necessity of providing credit card details, emphasizing its commitment to high-quality matching results and an intuitive interface that conserves time and resources for its users. Keywords: #phi4, EU public tenders, GovMatch, Nextjs, OpenAI, PostgreSQL, SAMgov, Stripe, TED, UX, Vercel, business profile, cosine similarity, daily alerts, email notifications, embeddings, federal tenders, free trial, government contracts, matching quality, pgvector, text-embedding
    www.govmatch.live 5 days ago
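The cosine-similarity matching step can be sketched in a few lines. The vectors here are toy values; the real service uses OpenAI text-embedding-3-small vectors stored in pgvector:

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity: dot product over the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

profile = [0.9, 0.1, 0.3]  # embedding of the business profile (toy values)
contracts = {
    "IT support services": [0.8, 0.2, 0.4],
    "Road construction":   [0.1, 0.9, 0.2],
}

# Rank contracts by similarity to the profile; the daily alert would
# email the top matches.
ranked = sorted(contracts, key=lambda c: cosine(profile, contracts[c]), reverse=True)
print(ranked[0])  # IT support services
```

With pgvector the same ranking is typically done in SQL via its distance operators, so the database does the sorting instead of application code.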
1223.  HN Designing the Perfect ID: Marrying UUIDv7, Stripe Prefixes, and ULID
The article "Designing the Perfect ID: Marrying UUIDv7, Stripe Prefixes, and ULID" introduces a hybrid method for generating unique identifiers that enhances both database performance and usability for public-facing applications. It suggests utilizing UUIDv7 as primary keys in databases due to their embedded timestamp feature, which allows new IDs to be sequentially appended, thereby improving throughput compared to random UUIDs. For user-facing contexts, the article recommends creating Base32-encoded, checksummed UUIDv4s with human-readable prefixes (e.g., "u_" for users), inspired by Stripe's method. This design enhances readability and debugging while preventing type errors through polymorphic API design. The choice of Base32 encoding minimizes ambiguity and improves case insensitivity, allowing users to select full IDs easily with a double-click. Additionally, incorporating a three-character checksum aids in detecting typographical mistakes prior to database queries, thus increasing reliability. This dual-ID system aims to balance backend efficiency with frontend usability by offering significant improvements in user experience and error reduction, despite requiring more initial setup than standard serial ID methods. Keywords: #phi4, API, Checksum, Crockford Base32, Database Layer, Debugging, Implementation, Performance Optimization, Polymorphism, PostgreSQL, Prefixes, Primary Keys, Public Layer, Readability, Split-ID Strategy, Table Structure, UUIDv4, UUIDv7, User Interface
    blog.alcazarsec.com 5 days ago
   https://github.com/jetify-com/typeid   5 days ago
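The public-facing half of the split-ID design can be sketched as follows. The Crockford alphabet is standard, but the 3-character checksum construction here (truncated SHA-256, re-encoded) is an illustration; the article's exact scheme may differ:

```python
import hashlib
import uuid

ALPHABET = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"  # Crockford Base32: no I, L, O, U

def b32(data: bytes) -> str:
    # Encode bytes as Crockford Base32, 5 bits per character.
    n = int.from_bytes(data, "big")
    out = []
    for _ in range((len(data) * 8 + 4) // 5):
        out.append(ALPHABET[n & 31])
        n >>= 5
    return "".join(reversed(out))

def public_id(prefix: str) -> str:
    # Prefixed, checksummed public ID over a random UUIDv4.
    body = b32(uuid.uuid4().bytes)                      # 26 characters
    check = b32(hashlib.sha256(body.encode()).digest())[:3]
    return f"{prefix}_{body}{check}"

def valid(pid: str) -> bool:
    # Recompute the checksum from the body; a typo fails before any DB query.
    _, rest = pid.split("_", 1)
    body, check = rest[:-3], rest[-3:]
    return b32(hashlib.sha256(body.encode()).digest())[:3] == check

pid = public_id("u")
print(pid, valid(pid))  # e.g. u_<26 chars><3-char checksum> True
```

Internally, the same row would keep a UUIDv7 primary key for index locality; only this prefixed form ever leaves the API.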
1249.  HN Show HN: DataPilot – SQL workspace with scheduling, and on-prem execution
DataPilot is a comprehensive SQL workspace designed to unify disparate SQL operations into a single platform. It addresses the fragmentation of SQL processes across various tools by offering a shared workspace where users can manage queries, variables, comments, and history in one place. The platform supports both recurring and single execution tasks, enhancing flexibility for different workflows. Key features include data quality monitoring with alert systems, streamlined CSV/XLSX delivery workflows, and versatile execution modes (cloud, desktop, or on-premises). Additionally, DataPilot integrates optional AI assistance to provide contextual schema documentation based on metadata like table names, column types, nullability, foreign keys, and comments, ensuring accurate and relevant insights without storing actual database rows. Built using modern technologies such as ASP.NET Core, Blazor, PostgreSQL, and SignalR, DataPilot prioritizes efficiency by centralizing SQL operations while safeguarding user privacy. It ensures that no personal data from databases is stored; only execution metadata, schedules, and exported files are retained. This approach allows users to focus on optimizing their data processes securely. For further details about DataPilot's capabilities and benefits, interested parties can visit its Product Hunt page or official website. Keywords: #phi4, AI schema, AI schema documentation, ASPNET Core, Blazor, CSV/XLSX, CSV/XLSX workflows, DataPilot, PostgreSQL, SQL, SQL workspace, SignalR, alerts, cloud execution, column types, comments, data quality, database rows, desktop execution, exported files, foreign keys, metric monitoring, nullability, on-prem, on-prem execution, query metadata, recurring runs, schedules, scheduling, shared workspace, table names
    getdatapilot.com 5 days ago
1261.  HN Show HN: LynxPrompt – Self-hostable, federated AI config rules manager
LynxPrompt is an open-source, self-hostable platform designed to streamline the management of AI configuration files across various coding assistants like Cursor, Claude Code, GitHub Copilot, and others. It serves as a centralized hub allowing teams to create, share, and standardize configurations using over 30 supported formats. Users can utilize an interactive wizard accessible via web or CLI interfaces for generating these configurations and can distribute blueprints through private or federated marketplaces. The platform accommodates various authentication methods such as OAuth, email login, WebAuthn passkeys, SSO, among others, ensuring adaptability to different environments. Additionally, LynxPrompt offers optional AI-powered editing features with Anthropic API integration to enhance blueprint creation processes. It provides a REST API and CLI tool for programmatic access and automation, facilitating seamless incorporation into CI/CD workflows. Deployment of LynxPrompt is simplified through Docker Compose with PostgreSQL support, including automatic migrations upon startup. Users can customize the platform’s features via environment variables to suit their specific needs. The project is licensed under the GNU General Public License v3.0, supporting both self-hosting options and a hosted instance at lynxprompt.com for users who prefer not to manage infrastructure independently. Comprehensive documentation is available, covering deployment, configuration, and contribution guidelines. Keywords: #phi4, AGENTSmd, AI coding assistants, AI config management, Anthropic API, CLAUDEmd, CLI tool, Docker Compose, GitHub OAuth, Google OAuth, IDE configuration, LDAP, LynxPrompt, Nextjs, OIDC, PostgreSQL, REST API, SAML, WebAuthn, authentication, blueprint marketplace, deployment, federated blueprints, interactive wizard, open-source, self-hostable, self-hosting
    github.com 5 days ago
   https://github.com/survivorforge/cursor-rules   3 days ago
   https://survivorforge.surge.sh/cursorrules-generator.html   3 days ago
1274.  HN What's new in Linux kernel for PostgreSQL
Recent updates to the Linux kernel present several advancements that promise enhanced performance and new features specifically beneficial to PostgreSQL users. Key among these is the introduction of Uncached Buffered IO, which uses a special flag (RWF_DONTCACHE) to allow data operations without caching, thus improving efficiency under constrained memory conditions. Additionally, the development of Untorn Writes offers atomic write capabilities that prevent partial updates or torn pages, critical for maintaining data integrity during database writes, though it currently necessitates direct IO. Moreover, the kernel now includes a new syscall (`cachestat`) to query page cache state more effectively, providing valuable insights into cache utilization and aiding in performance optimization. The integration of BPF (Berkeley Packet Filter) allows for significant customizations, such as tailored schedulers and cache eviction policies, which can be particularly advantageous for optimizing both OLTP workloads and analytical queries. Proposed enhancements like customizable io_uring and OOM killer behaviors further indicate opportunities to optimize memory-intensive database applications. While these kernel improvements hold potential benefits for PostgreSQL environments, their practical adoption hinges on future developments and feedback from the community. Keywords: #phi4, BPF, BernderOS, Full Page Image (FPI), HeptapoDB, Linux kernel, NVMe devices, OLTP workload, OOM killer, PostgreSQL, RWF_DONTCACHE, analytical queries, atomic writes, cache_ext, cachestat syscall, commit message, databases, direct IO, effective_cache_size, eviction policies, io_uring, memfd_create, page cache, performance, portability, pwritev2, sched_ext, scheduler class, shared memory, torn pages, uncached buffered IO, untorn writes
    erthalion.info 5 days ago
   https://lore.kernel.org/bpf/cover.1763031077.git.asml.s   5 days ago
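RWF_DONTCACHE itself needs a very recent kernel and is not exposed in Python's stdlib, but the older `posix_fadvise(POSIX_FADV_DONTNEED)` call gives a rough analogue of "write without polluting the page cache" and shows the idea. This is a sketch of the concept, not how PostgreSQL would adopt the new flag:

```python
import os
import tempfile

def write_dropping_cache(path: str, data: bytes) -> int:
    # Write normally, sync, then advise the kernel to drop the cached pages.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        written = os.write(fd, data)
        os.fsync(fd)  # pages must be clean before DONTNEED can evict them
        if hasattr(os, "posix_fadvise"):  # Linux; not available on all platforms
            os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
        return written
    finally:
        os.close(fd)

path = os.path.join(tempfile.mkdtemp(), "wal-segment")
n = write_dropping_cache(path, b"x" * 8192)
print(n, open(path, "rb").read() == b"x" * 8192)  # 8192 True
```

The advantage of RWF_DONTCACHE over this two-step dance is that the kernel skips caching per write, with no extra syscall and no need to sync first.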
1275.  HN Show HN: AgentThreads – Stack Overflow for AI Agents
AgentThreads serves as an innovative, community-oriented platform likened to "Stack Overflow for AI Agents," providing a structured directory of APIs enriched by agent-generated content. It addresses common issues faced by AI agents regarding outdated or inadequate documentation by offering up-to-date, reliable resources. The development primarily leverages Claude Code, emphasizing features that facilitate quality and trust within the community. Central to its functionality are key components such as an API directory equipped with reviews and ratings crafted by fellow agents, which is designed to be REST-based for ease of integration and use. To maintain authenticity without relying on traditional CAPTCHAs, AgentThreads employs a unique anti-spam system where reasoning challenges verify agent interactions. Reputation within the community is cultivated through a karma system that rewards meaningful contributions. The platform relies heavily on community moderation, enabling agents with high reputations to manage submissions effectively while automatically suppressing reviews deemed low in confidence. This structure is supported by intelligent ranking algorithms that leverage PostgreSQL full-text search capabilities to ensure relevant search results are prioritized for users. AgentThreads further enhances usability through structured JSON responses and openly available API specifications, allowing seamless interaction and integration by AI agents. A trust scoring system underpins the credibility of reviews, considering factors such as author reputation, vote weight, and review timeliness. The platform is freely accessible, with no premium features, fostering an environment conducive to collaborative knowledge exchange about APIs. 
With its aim to cultivate a self-sustaining community, AgentThreads encourages feedback-driven development, positioning itself as a valuable resource for AI agents seeking reliable API information while simultaneously contributing to the collective intelligence of the platform. Keywords: #phi4, AI Agents, APIs, AgentThreads, JSON responses, OpenAPI spec, PostgreSQL, REST API, Stack Overflow, activity feed, anti-spam verification, community directory, full-text search, karma system, ratings, reviews, smart ranking, trust scoring
    agentthreads.dev 5 days ago
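A trust score combining the three signals the summary names (author reputation, vote weight, and review recency) could look like the sketch below. The weights and decay constant are invented for illustration; AgentThreads' actual formula is not published in the summary:

```python
import math

def trust_score(author_karma, upvotes, downvotes, age_days, half_life_days=90):
    # Reputation with diminishing returns, net votes, and exponential
    # recency decay. All coefficients are hypothetical.
    reputation = math.log1p(max(author_karma, 0))
    votes = upvotes - downvotes
    recency = 0.5 ** (age_days / half_life_days)
    return reputation * 1.0 + votes * 0.5 + recency * 2.0

fresh = trust_score(author_karma=100, upvotes=10, downvotes=1, age_days=0)
stale = trust_score(author_karma=100, upvotes=10, downvotes=1, age_days=365)
print(fresh > stale)  # True: identical reviews, but the fresh one ranks higher
```

Low-confidence suppression then becomes a simple threshold check on this score before a review is surfaced.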
1283.  HN Show HN: WhisprMe – Anonymous messaging inside Telegram with Stars micropayments
WhisprMe is an anonymous messaging application developed as a Telegram Mini App that enables users to send and receive messages anonymously using Telegram Stars for unlocking messages, eliminating the need for credit card information. Built with technologies such as Node.js/Express, PostgreSQL, React, and Telegraf, the app operates on a single Hetzner VPS managed by PM2 at an approximate cost of $5 per month. The application features authentication via Telegram's initData and HMAC validation while allowing payments through the Telegram Stars API. It enhances user experience with haptic feedback for a native WebView feel and offers language support in English and Russian. Users can access WhisprMe via [WhisprMe_bot](https://t.me/WhisprMe_bot). The developer is open to inquiries regarding both the Telegram Mini App platform and the Stars payment system. Keywords: #phi4, Anonymous messaging, Auth, English, Express, HMAC validation, Haptic feedback, Hetzner, Micropayments, Mini App, Nodejs, PM2, Payments API, PostgreSQL, React, Russian, Stars, Tech stack, Telegraf, Telegram, VPS, WhisprMe, i18n
    whisprme.app 5 days ago
   https://github.com/haskellthurber/telegram-miniapp-star   5 days ago
   https://dev.to/haskelldev/how-to-accept-payments-in-a-t   5 days ago
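The initData HMAC validation mentioned above follows Telegram's documented scheme for Mini Apps: the bot token is keyed with the constant string "WebAppData", and the sorted key=value pairs (minus the hash field) are HMAC-SHA256'd and compared to the supplied hash. A minimal stdlib implementation:

```python
import hashlib
import hmac
from urllib.parse import parse_qsl

def validate_init_data(init_data: str, bot_token: str) -> bool:
    pairs = dict(parse_qsl(init_data))
    their_hash = pairs.pop("hash", "")
    # Data-check string: sorted "key=value" lines joined by newlines.
    check_string = "\n".join(f"{k}={v}" for k, v in sorted(pairs.items()))
    secret = hmac.new(b"WebAppData", bot_token.encode(), hashlib.sha256).digest()
    ours = hmac.new(secret, check_string.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(ours, their_hash)

# Build a sample payload signed with a dummy token, then verify it.
token = "12345:TEST"
fields = {"auth_date": "1700000000", "query_id": "AAH"}
cs = "\n".join(f"{k}={v}" for k, v in sorted(fields.items()))
secret = hmac.new(b"WebAppData", token.encode(), hashlib.sha256).digest()
good_hash = hmac.new(secret, cs.encode(), hashlib.sha256).hexdigest()
init_data = f"auth_date=1700000000&query_id=AAH&hash={good_hash}"

print(validate_init_data(init_data, token))          # True
print(validate_init_data(init_data, "12345:WRONG"))  # False
```

Because validation needs only the bot token, the backend can authenticate users statelessly, with no session store.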
1346.  HN A social platform where humans and AI agents coexist (MIT, self-hostable)
MoltSocial is an innovative social platform designed to enhance interactions between humans and AI agents through a unified feed where both can share posts on timelines visible across various tabs such as "Following," "For You," and "Explore." It supports self-hosting, with official instances available online. Key features include the ability for AI agents to register and interact using an Agent API that facilitates posting, following, direct messaging, and collaboration secured by Bearer tokens. MoltSocial promotes governance by allowing both humans and AI agents to propose and vote on platform features, requiring a 40% approval rate from active users to pass proposals. The platform offers real-time interactions like likes, reposts, replies, follows, mentions, and notifications, along with private direct messaging between AI agents. It is equipped with optimized image uploads using WebP conversion and resizing, link previews that extract Open Graph metadata, full-text search functionality, a Chrome extension for quick posting, and Progressive Web App (PWA) support for mobile app installation. The LLM Discoverability feature provides an API endpoint for discovering AI agents. MoltSocial's technical foundation includes Next.js 15 with Turbopack for the framework, Prisma v7 managing PostgreSQL databases, authentication via Google and GitHub OAuth through NextAuth v5, Tailwind CSS v4 for styling, TanStack React Query for state management, and S3-compatible object storage. The setup requires Node.js, a PostgreSQL database, OAuth credentials, and optional S3 storage. AI agents can self-register with human sponsor approval and engage in various platform activities, including public discussions and governance participation. The project structure organizes code into directories for layout, API routes, components, hooks, libraries, and Chrome extension sources, supported by scripts for development, building, linting, and migration management. 
Contributions to the open-source project are guided by CONTRIBUTING.md, while SECURITY.md details vulnerability reporting procedures, with the project being licensed under MIT. Keywords: #phi4, AI agents, API keys, Chrome extension, Docker, LLM discoverability, MoltSocial, NextAuth, Nextjs, OAuth, PWA support, PostgreSQL, Prisma, React Query, S3 storage, Tailwind CSS, agent API, algorithmic ranking, deployment, direct messages, governance, image uploads, link previews, multi-agent collaboration, real-time interactions, search, social platform, unified feed
    github.com 6 days ago
   https://molt-social.com   6 days ago
   https://github.com/aleibovici/molt-social   6 days ago
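The 40%-of-active-users approval rule can be stated as a one-line check. Whether abstentions count against a proposal is an assumption here, since the summary only gives the threshold:

```python
def proposal_passes(approvals: int, active_users: int, threshold: float = 0.40) -> bool:
    # A proposal passes when approvals reach 40% of all active users,
    # so non-voters effectively count against it (assumption).
    if active_users == 0:
        return False
    return approvals / active_users >= threshold

print(proposal_passes(40, 100))  # True: exactly at the threshold
print(proposal_passes(39, 100))  # False
```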
1424.  HN Show HN: Mailfeed – Your reading list, owned by you
Mailfeed is a self-hosted, open-source application that transforms emails into a personalized reading feed by converting emailed links or articles into full content using Mozilla Readability. It presents this content in an organized interface with semantic search capabilities powered by vector embeddings and Retrieval-Augmented Generation (RAG) technology. Key features include smart link extraction, Gmail integration for customizable syncing based on queries, and planned AI-powered analysis offering summaries and key points. The application emphasizes privacy and data protection compared to other read-later services. Setting up Mailfeed is straightforward with a one-command setup option available on macOS or through manual installation using Docker. It requires Google OAuth credentials for Gmail access and optionally supports the Gemini API key for enabling advanced AI features. The technology stack comprises Next.js, PostgreSQL, Prisma, NextAuth.js for authentication, and Tailwind CSS for UI design. Programmatic link addition via an API is facilitated with session cookies from NextAuth.js for secure authentication, while customization options are accessible through environment variables, and detailed logs can be viewed using Docker commands. The app’s architecture distinctly separates core functionalities such as email syncing, link management, AI analysis, and vector embeddings into independent components to optimize performance in both development and production environments. The project is licensed under the MIT License, promoting open access to its codebase for community use and contributions. Keywords: #phi4, AI analysis, API, Docker, Gmail integration, Google Gemini, Mailfeed, NextAuthjs, Nextjs, OAuth credentials, PostgreSQL, Prisma, Tailwind CSS, browser extension, database GUI, development server, emails, full-text content, open source, reading list, self-hosted, semantic search, smart link extraction, vector embeddings
    github.com 6 days ago
1430.  HN Show HN: ParseForce – Turn emails into structured JSON and send them to webhooks
ParseForce is an advanced tool designed to streamline email automation workflows by converting incoming emails into structured JSON data for seamless webhook delivery, leveraging AI-based schema parsing instead of traditional methods like regex or standard parsers. This approach allows the system to adapt to various formats without disruption when changes occur. Users can set up a unique inbox and specify which data fields they wish to extract from emails, such as invoices, order confirmations, or shipping notifications. The extracted information is automatically transformed into JSON format and delivered directly to designated webhooks for integration with backend systems. The key features of ParseForce include AI-driven parsing to accurately capture specified data fields, the ability to create a custom inbox tailored to specific email processing needs, and the automatic delivery of structured JSON data to user-defined webhooks. Common applications of this tool involve automating tasks like invoice management, order confirmation handling, shipping notification processing, and integrating legacy email workflows. ParseForce's technology stack comprises Node.js/TypeScript for development, PostgreSQL as a database solution, AI-based schema parsing techniques, and robust webhook delivery systems. The platform is engineered to simplify email integrations, making them as straightforward as webhook integrations. ParseForce encourages feedback from users in the Hacker News community through their website at parseforce.io. Keywords: #phi4, ACH, AI, BlueLine Freight, JSON, Nodejs, Northstar Industrial, ParseForce, PostgreSQL, TypeScript, accounts receivable, automation, emails, invoice data, legacy workflows, order confirmations, schema parsing, shipping notifications, webhook delivery, webhooks
    www.parseforce.io 6 days ago
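The email-to-JSON step can be illustrated with the stdlib alone. A fixed regex schema stands in here for ParseForce's AI-based schema parsing (whose point is precisely that it adapts without hand-written patterns); the message and field names are invented:

```python
import email
import json
import re

raw = """\
From: billing@example.com
Subject: Invoice INV-2041

Total due: $1,250.00 by 2026-04-01.
"""

# Hypothetical field schema the user would define for their inbox.
schema = {
    "invoice_id": r"Invoice (INV-\d+)",
    "amount":     r"\$([\d,]+\.\d{2})",
    "due_date":   r"by (\d{4}-\d{2}-\d{2})",
}

def parse_email(raw_message: str, field_schema: dict) -> str:
    msg = email.message_from_string(raw_message)
    text = msg["Subject"] + "\n" + msg.get_payload()
    out = {}
    for field, pattern in field_schema.items():
        m = re.search(pattern, text)
        out[field] = m.group(1) if m else None
    return json.dumps(out)  # payload ready to POST to a webhook

print(parse_email(raw, schema))
# {"invoice_id": "INV-2041", "amount": "1,250.00", "due_date": "2026-04-01"}
```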
1435.  HN Built data pipelines across 200M+ companies seeking early roles
The document outlines a robust data extraction engine employed by BlueFind and ProTechStack, crafted to efficiently manage extensive web scraping tasks across more than 200 million companies. This platform leverages headless Chrome and Playwright for dependable browser automation, built on the Go programming language to enhance speed, while PostgreSQL is utilized for straightforward data management. The system extracts data into a consistent JSON format at scale, significantly augmenting early-stage roles by offering enriched insights powered by artificial intelligence. Keywords: #phi4, AI Enrichment Engine, BlueFind, Built data pipelines, Go, Horizon2, Horizon2 Private Web Data Extraction, JSON, JSON format, Playwright, PostgreSQL, Private Web Data Extraction, ProTechStack, browser automation, companies, headless Chrome, scale, simplicity, data pipelines, speed, web scraping, web scraping platform
    zerobitflip.com 6 days ago
1446.  HN Show HN: Dbcli – Database CLI Built for AI Agents
Dbcli is a database command-line interface designed to streamline interactions between AI agents and various databases through a unified command. It offers an immediate access feature called `dbcli snap` which provides schema details, data profiling, and relationship insights, minimizing the traditional overhead in setups. Key features of Dbcli include instant retrieval of database context—such as schemas, profiles, and relationships—and its optimization for AI agents to reduce token usage and setup time. The tool is lightweight, requiring only simple installation (`pip install dbcli`), and supports multiple databases like SQLite, PostgreSQL, MySQL, MariaDB, DuckDB, ClickHouse, SQL Server, among others. Users can execute SQL queries and write data effortlessly while benefiting from real-time column distribution statistics for enhanced data understanding. Dbcli integrates seamlessly with AI agents like Claude and LangChain. Compared to MCP, Dbcli eliminates high token consumption by offering comprehensive features within a single command, ensuring faster setup without external configuration needs. Its universal compatibility allows it to function across any agent with shell access, removing the necessity for specialized protocols. Optional database drivers can be installed using commands such as `pip install "dbcli[postgres]"`. The tool is hosted on GitHub at [JustVugg/dbcli](https://github.com/JustVugg/dbcli), where users are encouraged to provide feedback for continued improvements. Keywords: #phi4, AI Agents, Claude, ClickHouse, Data Profiling, Database CLI, Drivers, DuckDB, GitHub, Integration, LangChain, Lightweight, MariaDB, Multi-database Support, MySQL, PostgreSQL, Relationships, SQL Server, SQLite, Schema, Simple Queries, Writes
    news.ycombinator.com 6 days ago
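What a `snap`-style context dump might gather (schema plus row counts) can be sketched with SQLite from the stdlib. This is a rough illustration of the idea, not dbcli's actual output format, which the summary does not specify:

```python
import sqlite3

def snap(conn):
    # Collect table names, column lists, and row counts in one pass,
    # the kind of compact context an agent needs before writing SQL.
    cur = conn.cursor()
    cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
    report = {}
    for (table,) in cur.fetchall():
        cols = [row[1] for row in cur.execute(f"PRAGMA table_info({table})")]
        (count,) = cur.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
        report[table] = {"columns": cols, "rows": count}
    return report

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")
print(snap(conn))  # {'users': {'columns': ['id', 'email'], 'rows': 1}}
```

Emitting this once up front is what saves the back-and-forth (and tokens) of an agent probing the schema query by query.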
1487.  HN Show HN: HushBrief – A stateless, zero-retention AI document summarizer
HushBrief, developed by Fidelitas LLC, is an AI-powered document summarizer specifically designed to ensure privacy in handling sensitive legal and investigative documents. It employs a zero-retention architecture where documents are processed solely in memory and immediately discarded after use, ensuring no storage or association with user identities. The tool utilizes Venice AI for inference without any training on inputs, logging, or provider-level data retention, further safeguarding user privacy. HushBrief is accessible via a $0.99 Day Pass through Stripe, removing the necessity for traditional account sign-ups, and offers an 11-unit Lifetime tier at $99 to support ongoing development. A notable feature of HushBrief is its "Uncensored Mode," which delivers unfiltered summaries of sensitive documents, making it particularly useful for professionals dealing with controversial materials. The platform employs a stateless authentication system and operates on a zero-knowledge architecture to maintain strict user privacy. Technologically, it is built using React 18/Express 5 in the frontend/backend, with PostgreSQL managing subscriptions. HushBrief is also actively seeking feedback on its UX design, focusing on features like a three-theme system and a Privacy Dashboard that details data usage practices. Keywords: #phi4, AI, Drizzle ORM, Express 5, Fidelitas LLC, HMAC-SHA256, HushBrief, PostgreSQL, Privacy Dashboard, React 18, Stripe, Uncensored Mode, Venice AI, architecture, backend, data usage framework, frontend, legal material, sensitive documents, stateless, subscription status, summarizer, zero-retention
    hushbrief.app 6 days ago
1489.  HN Scouter – An open-source SEO crawler with a full analysis UI
Scouter is an open-source SEO crawler developed by Lokoé, designed for both Linux and Windows environments through Docker. It features a comprehensive web-based interface, supporting JavaScript rendering via Puppeteer for SPAs and offering configurable multi-depth crawling that respects robots.txt directives. The system allows adjustable concurrent requests and employs a distributed architecture using Docker workers to enhance efficiency. Scouter's SEO analysis tools provide in-depth on-page analysis of titles, headings, meta descriptions, and technical SEO metrics like HTTP status codes, response times, and redirects. It also detects duplicate content using Simhash and measures word count while identifying JSON-LD schema for structured data. Additionally, it offers insights into internal linking by analyzing inlinks, outlinks, and PageRank. Custom extractors using XPath and Regex enable users to extract specific HTML elements or patterns from source code. Categorization is facilitated through a YAML Editor with a visual drag-and-drop interface and a Test Mode for rule previewing before implementation. The user interface includes features like a dashboard for data visualization via charts, an explorer tool for filtering URLs, SQL Explorer for custom queries, and CSV Export functionality. It supports multi-user management with roles such as admin, user, and viewer. Scouter’s technical architecture is organized into directories managing core functionalities (app), web interfaces, Docker configuration, documentation, and testing. The tech stack includes a backend built on PHP 8.1+, PostgreSQL 15+ for the database, frontend development using vanilla HTML/CSS/JS, containerization via Docker and Docker Compose, with Pest for PHP tests and Doctum for documentation generation. JavaScript rendering leverages Go and Chromedp. 
Licensed under the MIT License, Scouter serves as a robust tool for SEO professionals needing customizable crawling solutions with detailed analysis features. Keywords: #phi4, Analysis UI, Architecture, Async Job Management, Authentication, CSV Export, Canonical Tags, Categorization Rules, Crawling, Data Layer, Depth-based Crawling, Docker, Docker Worker, Documentation, Duplicate Detection, Go Chromedp, JavaScript Rendering, Job Management, Multi-user Management, Open-source, PHP, Page Analysis, Parallelism, Pest Testing, PostgreSQL, REST API, REST Router, Robotstxt, SEO Crawler, SQL Explorer, Scouter, Tech Stack, Technical SEO, User Interface Guide, Web Interface
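Scouter's duplicate-content detection is credited to Simhash. A generic pure-Python sketch of the technique (weighted bit-voting over word features, then Hamming distance between fingerprints); this is the standard algorithm shape, not Scouter's actual implementation:

```python
import hashlib

def simhash(text: str, bits: int = 64) -> int:
    # Each word votes its hash bits up or down; the sign of each tally
    # becomes one bit of the final fingerprint.
    votes = [0] * bits
    for word in text.lower().split():
        h = int.from_bytes(hashlib.md5(word.encode()).digest()[:8], "big")
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if votes[i] > 0)

def hamming(a: int, b: int) -> int:
    # Number of differing bits; near-duplicate pages tend to score low.
    return bin(a ^ b).count("1")

a = simhash("cheap flights to paris book now")
b = simhash("postgresql query planner deep dive")
```

Near-duplicate pages share most word features, so their fingerprints usually differ in only a few bits, which is what makes the Hamming-distance threshold practical at crawl scale.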
    github.com 6 days ago
   https://github.com/lokoe-mehdi/scouter   6 days ago
1507.  HN Show HN: Atrium – An open-source, self-hosted client portal
Atrium is an open-source, self-hosted client portal developed to provide agencies and freelancers with a comprehensive, cost-effective solution without relying on traditional SaaS platforms. Created by a solo software engineering lab in response to dissatisfaction with existing tools, Atrium features customizable white-label branding, project management capabilities, file sharing options compatible with storage solutions like S3, MinIO, Cloudflare R2, or local servers, and integrated invoicing with PDF generation and billing. It also includes role-based access control, authentication through magic links or email/password via Better Auth, and multi-tenant support for isolated organizational operations. The technology stack of Atrium comprises NestJS for the API, Next.js with React for the frontend, PostgreSQL using Prisma ORM for database management, and Tailwind CSS for styling. Hosted on GitHub under Elastic License 2.0, it allows free use, modification, and self-hosting but prohibits commercial reselling as a managed service. The project fosters community engagement through contributions via GitHub Issues and Discussions and offers detailed setup instructions for both local development and production environments using tools like Bun and Docker. Keywords: #phi4, Atrium, Better Auth, Docker, Elastic License 2.0, GitHub Issues, NestJS, Nextjs, PostgreSQL, React, Tailwind CSS, asset management, authentication, client portal, collaboration, file sharing, invoicing, local development, multi-tenant, open-source, project tracking, self-hosted, software engineering, tech stack, white-labeling
    github.com 6 days ago
1512.  HN Show HN: Photon – Rust pipeline that embeds/tags/hashes images locally w SigLIP
Photon is an open-source image processing pipeline developed in Rust, designed to analyze and embed images locally without requiring cloud services. It outputs structured JSON data that includes a variety of information: 768-dimensional vector embeddings generated using SigLIP for semantic similarity searches; semantic tags derived from over 68,000 terms through zero-shot tagging; EXIF metadata detailing camera settings and GPS coordinates; content hashes utilizing cryptographic (BLAKE3) and perceptual methods for deduplication and similarity detection; and WebP thumbnails customizable in size and quality. Additionally, Photon can enrich data with language model descriptions via tools like Ollama, Anthropic, or OpenAI. The tool supports batch processing of images with parallel execution and the option to skip previously processed files. Photon is user-friendly for installation, either through PyPI or by building from source. It processes single images or directories into JSON or JSONL formats, allowing users to adjust embedding quality and thumbnail settings. The standalone application functions independently without needing a server or database setup, with configurations managed through defaults in the code, which can be overridden by config files and CLI flags for user-specific customizations like worker count, supported formats, and logging levels. The architecture of Photon is built around two primary crates: `photon`, which serves as a command-line interface tool, and `photon-core`, containing core processing functionalities. This design permits easy integration into other Rust applications, making it versatile for various backend systems through its JSON outputs. The project encourages contributions with established guidelines for testing and linting. Photon is offered under dual MIT or Apache 2.0 licenses, providing flexibility for both users and contributors, highlighting its open-source nature and collaborative potential within the developer community. 
Keywords: #phi4, BLAKE3 cryptographic hash, BYOK LLM descriptions, CLI, EXIF metadata, JSON, ONNX Runtime, Photon, PostgreSQL, Rust, SigLIP, WebP generation, architecture, batch processing, content hashes, embeddings, image processing, library usage, local processing, parallel workers, perceptual hash, pgvector, pipeline, semantic tags, single binary, thumbnails, zero-shot tagging
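Photon pairs a cryptographic hash (BLAKE3) for exact deduplication with a perceptual hash for similarity. A minimal sketch of one common perceptual-hash variant (difference hash over a downscaled grayscale grid); this illustrates the general idea only, not Photon's actual algorithm:

```python
def dhash(pixels: list[list[int]]) -> int:
    # Difference hash: compare each pixel to its right neighbour, row by row.
    # `pixels` is a small grayscale grid, e.g. a 9x8 downscaled thumbnail,
    # so visually similar images yield nearly identical bit patterns.
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

# Toy 2x3 brightness grid standing in for a downscaled image.
grid = [[10, 20, 30], [30, 20, 10]]
```

Unlike BLAKE3, where a one-pixel change flips the whole digest, a perceptual hash changes only a few bits under small edits, which is what enables near-duplicate detection.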
    github.com 6 days ago
1515.  HN JSON Documents Performance, Storage and Search: MongoDB vs. PostgreSQL
The article presents a detailed comparison between MongoDB and PostgreSQL regarding their performance, storage efficiency, querying capabilities, and data manipulation when dealing with JSON-like documents. It evaluates these databases using various test scenarios involving accounts and products datasets across 17 different cases.

**Performance**: The tests reveal that PostgreSQL outperforms MongoDB in 9 of the 17 cases, while MongoDB wins in 7, with one scenario ending in a draw. Specifically, PostgreSQL shows superior performance for single-document lookups by ID and deletion operations due to its relational optimizations. In contrast, MongoDB excels at schema-less data insertions, batch operations, and complex document queries.

**Storage Efficiency**: MongoDB demonstrates greater storage efficiency than PostgreSQL. Its combined size of data and indexes is approximately 2.23 times smaller for accounts datasets and 1.4 times smaller for products datasets compared to PostgreSQL.

**Querying Capabilities**: Both databases offer basic search functionalities with distinct syntaxes but comparable results. For more advanced searches, including those involving nested JSON fields, MongoDB provides greater flexibility in certain contexts, such as array range queries. PostgreSQL can achieve similar performance levels but requires design adjustments.

**Indexing**: While PostgreSQL supports B-tree and GIN indexes for JSON data, it lacks native support for range queries on arrays within JSON documents. In contrast, MongoDB offers more straightforward indexing capabilities, enabling composite type indexing without the need for relational schema changes.

**Data Manipulation**: Both databases handle data manipulation tasks such as insertions, updates, and deletions effectively. However, PostgreSQL requires rewriting the entire document during partial updates, a process similar to that of MongoDB.
The conclusion drawn from these comparisons suggests that while MongoDB offers flexibility advantages in certain scenarios, PostgreSQL’s robust SQL capabilities, ACID compliance, and comprehensive support for JSON make it a compelling choice for handling JSON data. The article questions the necessity of using a separate database solely for JSON documents given Postgres's versatility and performance. Keywords: #phi4, ACID, B-tree, Batch Operations, Benchmarking, Compression, Configuration, Data Manipulation, Data Models, Deletes, Docker, Document-Oriented, Documents, Finds, GIN, Indexes, Inserts, JSON, Latency, Mixed Workloads, MongoDB, NoSQL, Percentile, Performance, PostgreSQL, Queries, Query Rate, Relational Database, SQL, Schemaless, Search, Shared Buffers, Storage, Tables, Test Cases, Throughput, Transactions, Updates, WiredTigerCacheSizeGB, Workload
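The partial-update point is easiest to see by analogy: Postgres's `jsonb_set` returns a whole new document with one path replaced rather than mutating in place. A pure-Python sketch of those semantics (the `account` document shape is invented for illustration):

```python
import copy

def jsonb_set(doc: dict, path: list[str], value) -> dict:
    # Mimics Postgres jsonb_set: return a new document with `value` at `path`,
    # leaving the original untouched -- mirroring how Postgres rewrites the
    # whole jsonb value on update rather than patching bytes in place.
    new = copy.deepcopy(doc)
    node = new
    for key in path[:-1]:
        node = node[key]
    node[path[-1]] = value
    return new

account = {"name": "acme", "address": {"city": "Oslo", "zip": "0150"}}
updated = jsonb_set(account, ["address", "city"], "Bergen")
```

This copy-on-write behaviour is why both engines pay a full-document rewrite cost for partial updates, as the comparison notes.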
    binaryigor.com 6 days ago
1517.  HN Figaro: Control fleets of Claude Code and Computer Use agents remotely
Figaro is an orchestration system crafted to automate workflows using Claude Code agents on various desktop environments, encompassing containerized Linux desktops and machines accessible via VNC such as remote servers, cloud VMs, or physical workstations. It facilitates centralized management through a dashboard that communicates with external channels like Telegram for task delegation. Supervisors handle tasks by interacting with the desktops through screenshots, typing, clicking, and key presses, while ensuring durable communication using NATS with JetStream support for extended task durations. To deploy Figaro, users must install Docker and Docker Compose on Linux or macOS, or manually install Docker Desktop. Configuration requires Claude credentials, optionally an OpenAI API key, and a Telegram bot token. Environment variables are set up to manage features like VNC password encryption using `FIGARO_ENCRYPTION_KEY`. Advanced setups involve secure handling of passwords with PostgreSQL and selecting deployment overlays with caution regarding network exposure. Figaro supports scheduled tasks through cron-like expressions and includes an intelligent healing mechanism for retrying failed tasks based on specific errors. It also offers self-learning features to optimize scheduled task prompts after each run, enhancing efficiency over time. The system's architecture comprises several services communicating via NATS: the Orchestrator manages tasks; Workers execute automation; Supervisors delegate tasks; the Gateway interfaces with external channels; and a UI dashboard using React provides user interaction. Development can be done using a VS Code Dev Container or manually setting up dependencies for each service, including Python packages through uv and Node.js packages via npm or Bun. Figaro is designed for trusted environments without inherent authentication or TLS, suitable for private Docker networks or encrypted overlays like Tailscale. 
Contributions to the system are welcomed through discussions leading to pull requests. Keywords: #phi4, Architecture, Browser Automation, Bun, Central Dashboard, Claude Code, Computer Use Agents, Containerized Linux, Cron Expression, Desktop Environments, Docker Compose, Docker Networks, FastAPI, Figaro, Gateway, Headscale, Healing Tasks, JetStream, Max Retries, NATS, NATS Server, Nebula, OpenAI API Key, Orchestrator, Patchright CLI, PostgreSQL, Python, React SPA, Scheduled Task Optimization, Scheduled Tasks, Security, Self-Healing, Self-Learning, Supervisor, Supervisor Agent, Tailscale, Task Delegation, Telegram, Telegram Bot Token, UI, VNC Accessible Machines, WebSocket, Worker, Workflows
    github.com 6 days ago
1544.  HN Show HN: Django-CRM – Open-Source CRM with PostgreSQL RLS Multi-Tenancy
Django-CRM (BottleCRM) is an open-source Customer Relationship Management (CRM) platform tailored for startups and small businesses, built on Django REST Framework and SvelteKit. It emphasizes multi-tenancy using PostgreSQL Row-Level Security to ensure data isolation between organizations. The platform includes core CRM modules such as management of leads, accounts, contacts, opportunities, cases, tasks, and invoices. Additionally, it offers features like team management, activity tracking, comments & attachments, tagging systems, email integration via AWS SES, background task processing with Celery + Redis, JWT authentication, and comprehensive audit logs. The technology stack comprises Django 5.x, PostgreSQL, Redis, Celery, SvelteKit 2.x, TailwindCSS 4, shadcn-svelte components, Zod, Axios, Lucide icons, AWS S3 for file storage, and AWS SES for email delivery. To set up the environment, prerequisites include Python 3.10+, Node.js 18+, PostgreSQL 14+, and Redis. The backend setup involves cloning the repository, creating a virtual environment, installing dependencies, setting up environment variables, running migrations, creating a superuser, and starting the development server. For frontend setup, dependencies must be installed with `pnpm` before starting the development server. Additionally, a Celery worker needs to run separately for background tasks. Access points include the frontend at http://localhost:5173, API documentation at http://localhost:8000/swagger-ui/, and an admin panel at http://localhost:8000/admin/. Docker can be used for streamlined setup with `docker-compose up --build` to start all services, automatically creating an admin user. The development workflow involves using Docker commands for service management, running tests with pytest, and managing RLS status through Django management commands. 
The platform encourages contributions from the community by allowing users to fork the repository, create feature branches, commit changes, push them to a branch, and open pull requests. It is licensed under the MIT License, promoting an inclusive and collaborative development environment. Keywords: #phi4, API Documentation, AWS S3, Accounts, Activity Tracking, BottleCRM, Cases, Celery, Contacts, Customer Relationship Management, Data Isolation, Django REST Framework, Django-CRM, Docker, Email Integration, Invoices, JWT Authentication, Leads, Multi-Tenancy, Multi-Tenancy Security, Open-Source CRM, Opportunities, PostgreSQL RLS, Redis, Row-Level Security, SvelteKit, Swagger UI, Tasks, Team Management
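The Row-Level Security multi-tenancy the summary describes generally reduces to one policy per table. A hedged sketch of the kind of SQL involved, generated from Python; the table name, `org_id` column, and `app.current_org` setting are assumptions for illustration, not Django-CRM's actual schema:

```python
def rls_policy_sql(table: str, tenant_column: str = "org_id") -> str:
    # Enable RLS on the table and restrict every row to the current tenant,
    # taken from a per-connection setting -- a common Postgres
    # multi-tenancy pattern.
    return (
        f"ALTER TABLE {table} ENABLE ROW LEVEL SECURITY;\n"
        f"CREATE POLICY tenant_isolation ON {table}\n"
        f"  USING ({tenant_column} = current_setting('app.current_org')::int);"
    )

sql = rls_policy_sql("leads")
```

With such a policy in force, application queries need no `WHERE org_id = ...` clause; the database itself enforces the isolation between organizations.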
    github.com 6 days ago
1566.  HN Goclaw: A Go Port of OpenClaw
GoClaw is a robust multi-agent AI gateway developed as a Go language port of OpenClaw, designed to integrate large language models (LLMs) with various tools and data sources. Its lightweight nature allows it to be deployed efficiently on low-cost virtual private servers, starting swiftly in under one second without runtime dependencies. GoClaw supports sophisticated orchestration for multi-agent teams through shared task boards and inter-agent communication mechanisms, ensuring effective collaboration. A standout feature of GoClaw is its security framework, which includes rate limiting, prompt injection detection, and secure encryption practices using AES-256-GCM for API keys. It also integrates with over 13 LLM providers like Anthropic and OpenAI via native HTTP connections, optimizing cost through prompt caching. The system supports diverse messaging channels such as Telegram and WhatsApp, enhancing its versatility in communication. GoClaw provides a comprehensive infrastructure that includes file operations, web searches, memory management, and browser automation tools, ensuring seamless interaction with various data sources. It can be deployed flexibly either in standalone or managed modes, where the latter offers advanced multi-tenant isolation features. Furthermore, it supports optional OpenTelemetry for enhanced observability through tracing and metric collection. The architecture of GoClaw incorporates a message bus and lane-based scheduler to facilitate seamless agent orchestration, with each agent having customizable identities and contexts. The browser pairing system is particularly notable for its secure authentication method, using a code flow that eliminates the need for pre-shared tokens. This system allows administrators to manage access through user approval, ensuring robust security practices. In addition, GoClaw's integration with Tailscale offers secure remote VPN mesh access, configurable via environment variables and capable of dual listeners. 
Despite comprehensive testing in production environments for numerous functionalities such as agent management and PostgreSQL store layers, some features like delegation history and certain messaging channels remain untested at scale. Overall, GoClaw builds on the open-source foundation of OpenClaw, maintaining an MIT license while offering a secure, scalable, and feature-rich AI gateway solution. Keywords: #phi4, Agent Management, Delegation, Docker, Encryption, GoClaw, OpenClaw, PostgreSQL, Rate Limiting, SSRF Protection, Tailscale, Teams, WebSocket
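GoClaw's security framework includes rate limiting. A generic token-bucket sketch of that mechanism (shown in Python for brevity, though GoClaw itself is written in Go; this is the textbook algorithm, not GoClaw's implementation):

```python
class TokenBucket:
    """Token-bucket limiter: refills at `rate` tokens/sec up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, 0.0

    def allow(self, now: float) -> bool:
        # Refill based on elapsed time, then spend one token if available.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=2.0)
```

The bucket tolerates short bursts up to `capacity` while bounding sustained throughput to `rate`, which is why the pattern suits per-channel request limiting.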
    github.com 7 days ago
1572.  HN Show HN: Steward – an ambient agent that handles low-risk work
Steward is an innovative ambient AI assistant tailored for managing low-risk tasks autonomously without the need for direct user interaction. Unlike conventional AI assistants that require explicit activation, Steward continuously runs in the background, monitoring signals from various tools such as GitHub, email, and calendars. It employs a policy gate mechanism to distinguish between task risk levels—automating actions for low-risk tasks while requiring explicit approval for higher-risk ones, all with an audit trail maintained for transparency. Currently at an early prototype stage, Steward operates locally via a straightforward `make start` command that initiates a dashboard interface. Its functionality is enhanced by leveraging an OpenAI-compatible API key, enabling it to proactively reduce user interruptions through periodic summaries of actions taken and pending decisions. Key features include ambient operation, multi-source perception, autonomous execution with rollback capabilities for error handling, and structured decision-making processes tailored for high-risk tasks. Steward supports a community-driven capability management model and integrates various connectors to facilitate seamless interaction with multiple tools. Its tech stack includes Python 3.14, FastAPI, SQLite/PostgreSQL, Celery, Redis, and OpenAI APIs, organized into distinct components such as API routes, planning logic, core functions, connectors, services, and UI design. The project seeks feedback on its approach to "policy-gated autonomy," aiming to balance automation with minimal user interruption. It also explores the structuring of system connectors for efficient context aggregation. Designed to manage routine tasks, Steward allows users to concentrate on strategic decisions, effectively serving as a digital chief of staff in real-world applications. Contributions are welcome under its MIT license. 
Keywords: #phi4, AI assistants, APScheduler, Celery, Docker, FastAPI, GitHub, Linux, OpenTelemetry, PostgreSQL, Prometheus, REST API, Redis, SQLAlchemy, Steward, Webhooks, ambient agent, async execution, audit trail, calendar, chat, email, low-risk work, macOS, policy gate, risk assessment, screen context
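The policy-gate idea (auto-execute low-risk actions, require approval otherwise, log everything) can be sketched in a few lines. The action names and record fields here are invented for illustration, not Steward's actual policy schema:

```python
# Hypothetical allow-list of low-risk actions the agent may run unattended.
LOW_RISK = {"label_email", "archive_notification", "draft_reply"}

audit_log: list[dict] = []

def handle(action: str, payload: dict) -> str:
    # Policy gate: low-risk actions execute automatically; everything else
    # is queued for explicit approval. Every decision lands in the audit trail.
    decision = "executed" if action in LOW_RISK else "needs_approval"
    audit_log.append({"action": action, "payload": payload, "decision": decision})
    return decision

r1 = handle("label_email", {"id": 1})
r2 = handle("send_payment", {"amount": 500})
```

Keeping the gate as a single chokepoint is what makes the audit trail complete: no action path bypasses the log.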
    github.com 7 days ago
1576.  HN Browser action engine for AI agents. 10× faster, resilient by design
Actionbook is a browser action engine designed to enhance AI agents' efficiency and reliability when interacting with websites by providing pre-computed "action manuals" with updated DOM selectors and actions. It addresses common challenges in browser automation, such as slow execution, high token costs for language models (LLMs), brittle selectors due to UI changes, and inaccuracies of LLMs handling complex DOM structures. The key benefits offered by Actionbook include a tenfold increase in execution speed since AI agents access pre-computed action manuals rather than parsing entire HTML pages. Additionally, it provides significant savings on token usage by delivering only essential DOM elements in concise JSON formats to LLMs, which reduces the context size and improves efficiency. The tool ensures resilient automation through maintained and versioned action manuals that prevent functionality breaks due to website changes. Actionbook is universally compatible with any large language model or AI operator framework. Getting started with Actionbook involves installing a command-line interface (CLI) tool using npm, which can utilize existing browsers like Chrome or Edge. Users can integrate it with various AI coding assistants by incorporating specific prompts for action comprehension and execution. Optionally, an added "Skill" feature allows deeper integration. Comprehensive documentation, tools for managing action manuals, and an API reference are available on the Actionbook website. The platform is developed as a monorepo using pnpm workspaces and Turborepo. Users interested in contributing or testing during its private beta can join a waitlist to suggest websites for indexing. Supporting Actionbook involves starring it on GitHub, participating in community discussions on Discord, or following @ActionbookHQ for updates. 
Keywords: #phi4, AI agents, Action manuals, Actionbook, CLI, DOM, DOM structure, JavaScript SDK, MCP Server, PostgreSQL, PostgreSQL database, Rust, Rust-based, automation, browser action engine, compatibility, resilient automation, token savings, universal compatibility
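The "action manual" concept is essentially a pre-computed recipe of selectors and steps, so the agent consumes a small JSON object instead of the full page DOM. A toy sketch; the manual structure, field names, and selectors below are invented for illustration, not Actionbook's actual format:

```python
# Hypothetical pre-computed manual for one site flow: the agent never
# parses the page HTML, it just fills and replays these steps.
MANUAL = {
    "site": "example.com",
    "action": "login",
    "steps": [
        {"op": "type", "selector": "#email", "value": "{email}"},
        {"op": "type", "selector": "#password", "value": "{password}"},
        {"op": "click", "selector": "button[type=submit]"},
    ],
}

def render_steps(manual: dict, params: dict) -> list[dict]:
    # Fill the manual's placeholders with run-time values, leaving the
    # shared manual itself unmodified.
    out = []
    for step in manual["steps"]:
        step = dict(step)
        if "value" in step:
            step["value"] = step["value"].format(**params)
        out.append(step)
    return out

steps = render_steps(MANUAL, {"email": "a@b.c", "password": "hunter2"})
```

Because the manual is maintained and versioned centrally, a selector change on the site updates one record rather than breaking every agent, which is the resilience claim in practice.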
    github.com 7 days ago
1579.  HN Show HN: I built open source Gmail organizer because I refused to pay $30/month
NeatMail is an innovative open-source Gmail organizer developed by Lakshay1509 to provide a cost-effective alternative to expensive external email management tools. It integrates seamlessly within the Gmail interface, allowing users to manage their emails without leaving their inbox. Key functionalities include auto-labeling, where incoming emails are automatically categorized based on preset or custom labels, and AI-powered draft responses tailored to match the user's tone. These features operate in real-time as emails arrive, eliminating delays associated with batch processing. The tool aims to alleviate common email management challenges by reducing time spent organizing messages and drafting repetitive replies. NeatMail prioritizes user privacy by employing OAuth 2.0 for authentication and ensuring that email content is not stored on third-party servers. Currently in beta, the application invites feedback from Gmail users to refine its features. NeatMail's technical architecture comprises Next.js for both frontend development and API routes, Prisma as a type-safe ORM, Redis for deduplication tasks, Clerk for authentication purposes, OpenAI’s GPT-4 for draft generation capabilities, and various Google APIs for email operations. It supports deployment via Vercel and offers Docker support to facilitate scalability and ease of use. The project encourages contributions from developers who adhere to its guidelines and is freely available under the MIT License as an open-source tool. Keywords: #phi4, AI Drafts, API, Architecture, Authentication, Auto-labeling, CSS, Deployment, Docker, GitHub, Gmail, Hosting, Linting, Nextjs, OAuth 20, ORM, Open Source, Organizer, Payments, PostgreSQL, Prisma, Redis, TypeScript, Webhooks
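NeatMail's keywords list Redis for deduplication, which matters because webhook deliveries can repeat. A sketch of idempotent message handling with a plain set standing in for a Redis key (the `message_id` field and handler shape are assumptions, not NeatMail's actual code):

```python
seen: set[str] = set()  # stands in for a Redis set / SETNX key
processed: list[str] = []

def on_webhook(message_id: str) -> bool:
    # Process each Gmail message at most once, even if the push
    # notification is delivered more than once.
    if message_id in seen:
        return False
    seen.add(message_id)
    processed.append(message_id)  # real handler would label / draft here
    return True

on_webhook("msg-1")
on_webhook("msg-1")  # duplicate delivery is ignored
on_webhook("msg-2")
```

In production the set would live in Redis with a TTL so the dedup window survives process restarts without growing unboundedly.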
    github.com 7 days ago
1596.  HN Show HN: YourFinanceWORKS – Open-source financial management with AI
YourFinanceWORKS is an open-source financial management platform designed to deliver enterprise-level capabilities within a self-hosted environment by leveraging artificial intelligence. It stands out with its robust technical framework featuring a multi-tenant architecture and employs technologies such as FastAPI, PostgreSQL, Redis, and Kafka for enhanced performance. The platform utilizes AI-powered Optical Character Recognition (OCR) technology to process receipts and invoices efficiently. Users benefit from features like natural language queries, sophisticated fraud detection mechanisms, and risk scoring systems, alongside an extensible plugin framework. Among its offerings are professional invoicing capabilities, automated bank reconciliation processes, customizable approval workflows, real-time dashboards for financial monitoring, comprehensive compliance trails, and investment tracking functionalities. The platform addresses the drawbacks of existing financial software by prioritizing user privacy, affordability, and automation through AI while maintaining a transparent architectural design. YourFinanceWORKS is available under dual licensing options: AGPL for its core components and commercial licenses for enterprise solutions, encouraging community involvement with detailed documentation. To begin using YourFinanceWORKS, users are guided to clone the platform's repository from GitHub and deploy it using Docker. The project specifically targets challenges such as ensuring secure multi-tenancy, enhancing financial OCR accuracy, facilitating real-time updates, and integrating AI-driven approval workflows, thereby providing a comprehensive solution for modern financial management needs. 
Keywords: #phi4, AI, Docker, FastAPI, GitHub, Kafka, OCR, Open-source, PostgreSQL, Redis, WebSocket, approval workflows, audit trails, bank reconciliation, community plugins, compliance, dashboards, documentation, enterprise features, event-driven, extensible architecture, financial management, fraud detection, investment tracking, multi-tenant, natural language queries, plugins, privacy, professional invoicing, risk scoring, self-hosted
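The fraud-detection and risk-scoring features the summary mentions typically combine several weak signals into one score. A toy weighted-scoring sketch; the signals, weights, and threshold are invented for illustration, not YourFinanceWORKS's actual model:

```python
def risk_score(txn: dict) -> float:
    # Each suspicious signal adds weight; a score >= 1.0 would flag the
    # transaction for review in this toy scheme.
    score = 0.0
    if txn["amount"] > 10_000:
        score += 0.6  # unusually large transaction
    if txn["country"] != txn["card_country"]:
        score += 0.5  # cross-border mismatch
    if txn["hour"] < 6:
        score += 0.2  # off-hours activity
    return round(score, 2)

flagged = risk_score({"amount": 15_000, "country": "NO", "card_country": "US", "hour": 3})
ok = risk_score({"amount": 40, "country": "NO", "card_country": "NO", "hour": 12})
```

Keeping the scoring function pure (inputs in, score out) makes it easy to audit and to replay against historical transactions, which suits the platform's compliance-trail emphasis.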
    www.yourfinanceworks.com 7 days ago
1614.  HN Processing UK rail data in real-time (2025)
In 2025, an advanced real-time UK rail data processing system was developed using Go and Kafka, integrated with PostgreSQL, designed to handle millions of daily messages concerning train movements, schedules, and disruptions. The evolution from a basic Kafka consumer to a sophisticated service demonstrates effective utilization of Go's concurrency for efficient message handling and resilience. Its architecture involves consuming data via Kafka topics, validation through Go channels, and storage in a PostgreSQL database using dynamic table partitioning by date. Systemd on Fedora Linux manages processes with automatic restarts and centralized logging, ensuring continuous operation even across time boundaries. The system employs integration tests using Docker containers to validate crucial aspects like transaction handling, error scenarios, message ordering, and database interactions under real-world conditions. A comprehensive 7-day staging validation confirmed the system's reliability, showcasing its ability to manage server restarts without manual intervention, thereby affirming its readiness for production deployment. This project exemplifies a modern Go service architecture that ensures reliable data processing with minimal downtime, emphasizing Kafka's robust messaging capabilities and PostgreSQL's efficient storage management within a real-time railway data context. Keywords: #phi4, Docker containers, Fedora Linux, Go, Kafka, Kafka consumer, PostgreSQL, franz-go, integration tests, message processing, railway systems, real-time data, systemd, table partitioning
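The dynamic date partitioning described above usually means generating one child-table DDL statement per day. A hedged sketch of such a generator; the `train_movements` table name is an assumption for illustration, not the system's actual schema:

```python
from datetime import date

def partition_ddl(parent: str, day: date) -> str:
    # Daily range partition: one child table per day of movement data,
    # so old days can be detached or dropped cheaply.
    nxt = date.fromordinal(day.toordinal() + 1)
    name = f"{parent}_{day:%Y_%m_%d}"
    return (
        f"CREATE TABLE IF NOT EXISTS {name} PARTITION OF {parent}\n"
        f"  FOR VALUES FROM ('{day}') TO ('{nxt}');"
    )

ddl = partition_ddl("train_movements", date(2025, 3, 9))
```

`IF NOT EXISTS` makes the statement safe to run from the consumer itself as messages cross a midnight boundary, which is how such services keep operating across time boundaries without manual intervention.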
    aran.dev 7 days ago
1623.  HN Show HN: PgQueuer – A PostgreSQL job queue that works without PostgreSQL
PgQueuer is a sophisticated job queue system built on PostgreSQL, designed to streamline background job processing without requiring additional infrastructure or message brokers. By integrating with existing PostgreSQL databases, PgQueuer leverages PostgreSQL's advanced concurrency features like LISTEN/NOTIFY and FOR UPDATE SKIP LOCKED to facilitate instant notifications and efficient worker coordination. Its minimal integration requirement involves only a single Python package for setup with an existing PostgreSQL connection. Key advantages of PgQueuer include real-time job notifications via the LISTEN/NOTIFY system, ensuring sub-second latency without resorting to polling loops. This feature enhances its scalability as jobs are stored within the same database where application data resides, benefiting from PostgreSQL’s ACID guarantees and extensive tooling. The system also supports advanced concurrency control through rate limiting, concurrency management, and deferred execution, while being production-ready with features like built-in scheduling, graceful shutdowns, real-time job tracking, and observability tools including Prometheus metrics, distributed tracing, and an interactive dashboard. Installation of PgQueuer targets Python 3.11+ and PostgreSQL 12+, requiring setup via pip and database schema initialization through a command-line tool. Its usage involves defining consumers as entrypoints or scheduled tasks for processing jobs and producers for enqueuing them, with support for batch operations and complex workflows. For testing and local prototyping, PgQueuer offers an in-memory adapter which, although useful for unit tests and short-lived batch jobs, is not recommended for production due to its lack of durability and coordination capabilities. 
PgQueuer supports a range of common patterns such as batch operations, rate limiting, concurrency control, deferred execution, job completion tracking, resource sharing, and integration with web frameworks like FastAPI and Flask. It also boasts advanced features including custom executors for retry strategies, distributed tracing, Prometheus metrics, job cancellation, and heartbeat monitoring. PgQueuer supports multiple PostgreSQL drivers (both async and sync) and provides a command-line interface for setup, migration, running workers, and queue monitoring via an interactive dashboard. Licensed under MIT, PgQueuer simplifies workflow management by harnessing the robustness of PostgreSQL as its underlying job queue infrastructure, making it an attractive option for teams seeking straightforward, efficient solutions with minimal architectural complexity. Keywords: #phi4, CLI tools, FOR UPDATE SKIP LOCKED, FastAPI, Flask, LISTEN/NOTIFY, PostgreSQL, Prometheus metrics, PsycopgDriver, Python, Testcontainers, architecture, asyncpg, batch operations, concurrency, dashboard, in-memory adapter, job queue, rate limiting, scheduling, workers
    github.com 7 days ago
1642.  HN Show HN: Habitat – A Self-Hosted Social Platform for Local Communities
Habitat is a free, open-source social platform tailored for fostering local community engagement by allowing users to discuss interests related to specific geographic areas. Each instance of Habitat focuses on a particular location, enabling discussions around general or detailed aspects of that area. Setting up Habitat can be accomplished through two primary methods: using Docker Compose or hosting it on a Linux server with an Ansible playbook. The Docker Compose method involves creating a `docker-compose.yml` file that defines the services needed for the application, worker, and database components, along with necessary environment variables in a `.env` file (such as domain details, app secret, and encryption key). Users can initiate this setup by running `docker compose up -d`. Alternatively, users can automate Habitat's installation on a Linux server using an Ansible playbook. This method requires updating the `.env.template` file with appropriate configurations before executing the playbook with specific parameters. For local development, Docker Compose is again utilized to start services and facilitate command execution within the Habitat application container through `docker exec`. Once set up, Habitat can be accessed via a web browser, typically starting at `localhost`, allowing users to explore its features. Further insights into Habitat's design and functionality are available on Carl Newton's blog. This platform serves as a versatile tool for enhancing community interaction by enabling discussions centered around local interests. Keywords: #phi4, Ansible, Composer, Docker Compose, Habitat, Linux server, PostgreSQL, Symfony, deployment, development, environment variables, local communities, location-based, open-source, security options, self-hosted, social platform, web browser
    github.com 7 days ago
1649.  HN Show HN: Tensor.cx – Turn your documents into AI search in 30 seconds
Tensor.cx is an innovative platform designed to convert documents into a searchable AI knowledge base with ease and efficiency, addressing common challenges associated with document search capabilities. It enables users to upload various file types—such as PDFs, DOCX, TXT, and Markdown—and processes them using OpenAI's embedding technology. This allows for precise natural language queries coupled with inline citations, making information retrieval both reliable and straightforward. The platform facilitates collaboration by providing shareable workspaces accessible via URLs, eliminating the need for extensive team onboarding. While document uploads incur costs due to the embedding process, Tensor.cx offers a free tier that supports up to three workspaces, each accommodating five documents, with 30 queries per day. Underlying its operation is a Retrieval-Augmented Generation (RAG) pipeline incorporating technologies like pgvector, LiteLLM, and SSE streaming. The platform leverages Django and Next.js for development and is hosted on Fly.io infrastructure. Tensor.cx distinguishes itself by focusing on verifiable search results compared to typical AI tools that may offer unverified answers with confidence. The creator encourages feedback and questions regarding its architecture or functionality, aiming to provide a user-friendly alternative in the realm of document searching technologies. Keywords: #phi4, AI search, Celery, Clerk, Cloudflare R2, DOCX, Django, Flyio, LiteLLM, Neon DB, Nextjs, OpenAI embeddings, PDFs, PostgreSQL, RAG solutions, SSE, Stripe, Tailwind CSS, Tensorcx, documents, inline citations, knowledge base, natural language
    tensor.cx 7 days ago
1657.  HN Background Jobs for TanStack Start with pg-boss
The post provides a detailed guide to integrating `pg-boss`, a PostgreSQL-based job queue system, into a TanStack Start application to manage background jobs with minimal infrastructure overhead. Unlike alternatives such as BullMQ and Inngest/Trigger.dev, `pg-boss` stands out by utilizing an existing Postgres database, reducing the need for additional setup. The integration involves establishing a typed job registry to ensure type safety in job operations, alongside creating a singleton instance of PgBoss with TypeScript constraints during server initialization via a Nitro plugin. It explains how handlers can process jobs either sequentially or concurrently and highlights `pg-boss`'s automatic retry policies, which enhance reliability. It also introduces the fan-out pattern, which increases robustness by triggering multiple background tasks from a single event, each processed independently. Adding a new job is simplified into four main steps: updating the job registry, writing the handler, registering the queue/worker within the server plugin, and calling the `sendJob` function at the trigger point. This approach keeps the code paths responsible for triggers clean and maintainable. The post concludes by emphasizing that `pg-boss` integrates seamlessly with TanStack Start applications thanks to its compatibility with Nitro's lifecycle events, offering a streamlined solution for managing background tasks in Postgres-driven environments without complicating infrastructure.
Keywords: #phi4, Async Function, Background Jobs, BullMQ, Catalyst, Compile Time Checks, Connection Pool, Contact Sync, Database, Development Environment, Email, Enqueue, Error Handling, Error Logging, Exponential Backoff, External APIs, External Services, Fan-out Jobs, Fan-out Pattern, GlobalThis Cache, Graceful Shutdown, Handler, Idempotent, In-flight Jobs, Infrastructure, Inngest, Internal State, Job ID, Job Queue, Job Registry, Lifecycle, Local Development Guard, Nitro Plugin, Node/TypeScript, Onboarding Flows, Plugin Lifecycle, Post-Signup Actions, PostgreSQL, PromiseallSettled, Queue Declaration, Rate-Limited APIs, Retry Limit, Retry Policy, Schema Migrations, SendJob, Server Functions, Single Source of Truth, Start/Stop API, TanStack Start, Triggerdev, TypeScript API, Typed Registry, Vite HMR, WorkJob, Worker, pg-boss
    jxd.dev 7 days ago
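The fan-out pattern described above can be sketched language-agnostically. pg-boss itself is a Node library, so the Python sketch below only models the control flow the entry describes: one event triggers several independent handlers, each with its own bounded retry policy, and one handler's failure does not affect the others (the role `Promise.allSettled` plays in the original). The handler names are hypothetical.

```python
import asyncio

# One event enqueues several independent jobs; each is retried on its own,
# and failures are collected rather than propagated (Promise.allSettled-style).
async def run_with_retries(handler, payload, retries=2):
    for attempt in range(retries + 1):
        try:
            return await handler(payload)
        except Exception:
            if attempt == retries:
                raise
            await asyncio.sleep(0)  # real queues back off exponentially here

async def fan_out(payload, handlers):
    # return_exceptions=True keeps one job's failure from cancelling the rest.
    return await asyncio.gather(
        *(run_with_retries(h, payload) for h in handlers),
        return_exceptions=True,
    )

# Hypothetical post-signup handlers.
async def send_welcome_email(user):
    return f"emailed {user}"

async def sync_contact(user):
    raise RuntimeError("CRM down")

results = asyncio.run(fan_out("alice", [send_welcome_email, sync_contact]))
print(results[0])                         # "emailed alice"
print(isinstance(results[1], Exception))  # True: failure isolated, not raised
```

In the real integration, each handler would instead be a registered pg-boss worker and the fan-out would be a batch of `sendJob` calls.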
1690.  HN Apache Otava
Apache Otava is a specialized tool focused on enhancing continuous performance engineering by detecting changes in system performance metrics. It performs statistical analyses on performance test data obtained from various sources including CSV files, PostgreSQL databases, BigQuery, or Graphite databases. The primary functionality of Otava involves identifying change-points within this data, which are indicative of potential performance regressions. By alerting users to these critical points, Otava enables proactive maintenance and optimization efforts, helping to maintain system efficiency and reliability by addressing issues before they escalate into significant problems. This capability allows for a more streamlined approach to managing system performance over time. Keywords: #phi4, Apache Otava, BigQuery, CSV Files, Change Detection, Change-Points, Continuous Performance Engineering, Graphite Database, Notifications, Performance Regressions, Performance Test Results, PostgreSQL, Statistical Analysis
    otava.apache.org 7 days ago
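The change-point idea above can be illustrated with a toy scan: pick the split index in a metric series that maximizes the difference between the two side means. This is a stand-in for the statistical tests Otava actually applies, not its real algorithm.

```python
# Toy change-point scan over a performance-metric series: try every split
# point and keep the one with the largest gap between left and right means.
def detect_change_point(series, min_size=3):
    best_idx, best_gap = None, 0.0
    for i in range(min_size, len(series) - min_size + 1):
        left, right = series[:i], series[i:]
        gap = abs(sum(left) / len(left) - sum(right) / len(right))
        if gap > best_gap:
            best_idx, best_gap = i, gap
    return best_idx, best_gap

# Latency samples with a regression starting at index 6.
samples = [100, 101, 99, 100, 102, 100, 130, 131, 129, 132, 130, 131]
idx, gap = detect_change_point(samples)
print(idx)  # 6
```

A real detector would also test whether the gap is statistically significant before alerting, which is the part Otava's analysis handles.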
1704.  HN Video Conferencing with Postgres
The article presents an experimental setup where PostgreSQL, hosted on PlanetScale, is utilized to facilitate real-time video calls by storing and replicating media data. The system captures audio and video through browsers, encodes them into frames, and stores these as binary data in the database. Utilizing PostgreSQL's logical replication feature, this data is streamed back to participants for playback. The architecture comprises a SvelteKit frontend and a Node.js WebSocket server named pg-relay, leveraging logical replication to manage media data efficiently without polling. The implementation successfully streams video at 15 frames per second with a resolution of 640x360, demonstrating PostgreSQL's capacity to handle real-time data streaming for video calls. Frames are temporarily stored for synchronization and periodically cleaned up for efficiency. The article acknowledges challenges such as the payload limits associated with LISTEN/NOTIFY and incompatibility issues with unlogged tables for logical replication. Despite these hurdles, the experiment underscores PostgreSQL's versatility as a general-purpose backend capable of supporting unconventional workloads. It humorously notes that WebRTC would be the conventional choice for video conferencing, while emphasizing the innovative use of PostgreSQL for real-time data streaming. The implementation is open-sourced and available on GitHub, illustrating the database's potential beyond traditional applications. Keywords: #phi4, AudioBufferSourceNode, AudioFrames, AudioWorkletNode, BYTEA, Binary WebSocket Frames, Blob URL, Cleanup Job, Database, JPEG, Jitter Buffer, LISTEN/NOTIFY, Logical Replication, Nodejs, PCM16LE, PlanetScale, PostgreSQL, Postgres, Real-Time Backend, Replication Stream, SvelteKit, Unlogged Tables, Video Conferencing, VideoFrames, WAL (Write-Ahead Log), WebRTC, WebSocket, pg-relay
    planetscale.com 7 days ago
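The retention behavior described above (frames kept for a few seconds, then cleaned up) can be modeled in a few lines. The article's implementation does this with periodic DELETEs against Postgres tables; this in-memory sketch only captures the policy.

```python
from collections import deque

# Keep only frames newer than a retention window, dropping older ones
# as new frames arrive.
class FrameBuffer:
    def __init__(self, retention_ms=5000):
        self.retention = retention_ms
        self.frames = deque()  # (timestamp_ms, payload) in arrival order

    def add(self, ts_ms, payload):
        self.frames.append((ts_ms, payload))
        self._cleanup(now_ms=ts_ms)

    def _cleanup(self, now_ms):
        while self.frames and now_ms - self.frames[0][0] > self.retention:
            self.frames.popleft()

buf = FrameBuffer()
# 20 fps for 10 seconds -> one frame every 50 ms (the article streams 15 fps;
# 20 fps keeps the intervals in round milliseconds for this sketch).
for n in range(200):
    buf.add(n * 50, b"jpeg-bytes")
print(len(buf.frames))  # 101: only the last ~5 seconds of frames remain
```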
1715.  HN Show HN: OneCamp – Self-Hosted Slack/Asana/Zoom/Notion Alternative
OneCamp is introducing itself as a self-hosted unified workspace platform, launching on March 7, designed to offer functionalities akin to Slack, Asana, Zoom, and Notion without incurring per-user fees or user limits. This positions it as an attractive solution for organizations seeking comprehensive tools with full data ownership. Its feature set includes real-time chat, task management, video calls, and collaborative document editing. The platform’s frontend is open-sourced using Next.js, inviting community engagement through exploration, forking, and contributions via its GitHub repository. Architecturally, OneCamp emphasizes robust capabilities: it leverages Yjs and Hocuspocus with CRDT sync over WebSockets to enable real-time collaboration, supported by a Tiptap editor, custom Node microservices, and Redis caching integrated with its Go backend. Additionally, it offers WebRTC-based meetings alongside live transcription services using LiveKit SFU and a Python agent for audio processing through Deepgram nova-2. OneCamp employs polyglot persistence, utilizing PostgreSQL as the primary database, Dgraph for managing graph relationships, and OpenSearch to facilitate full-text search capabilities. To ensure comprehensive observability, it incorporates OpenTelemetry for tracing and logging, directing data to HyperDX on a ClickHouse backend. While users can access OneCamp’s frontend codebase openly, its Go-based backend remains closed-source at launch, with plans for a paid managed hosting option in the future. The developers encourage community interaction through feedback, issue reporting, or pull requests. Early adopters and interested parties are invited to sign up for early access or join a waitlist via onemana.dev, with a $9 one-time fee required for participation. 
Keywords: #phi4, CRDT sync, Chi router, ClickHouse, Deepgram nova-2, Dgraph, EMQX MQTT, Firebase FCM, GORM, GitHub, Go 124, Go backend, Hocuspocus, HyperDX, JSON/HTML transform, LiveKit SFU, Nextjs, Node microservice, Observability, OneCamp, OpenSearch, OpenTelemetry, PRs, Polyglot persistence, PostgreSQL, Python agent, Redis caching, Tiptap editor, WebRTC meetings, WebSockets, Yjs, collaborative docs, early access, feedback, full data control, issues, live transcription, managed hosting, no per-user fees, open-sourced, real-time chat, self-hosted, tasks, unified workspace, unlimited users, video calls, waitlist
    news.ycombinator.com 7 days ago
1724.  HN Show HN: AgentLens – Open-source observability for AI agents
AgentLens is an open-source, self-hosted observability platform tailored for AI agents, designed to simplify the debugging of multi-agent systems through its array of features. Key functionalities include an interactive topology graph, time-travel replay capabilities, trace comparison tools, and cost tracking for various models. It enhances real-time monitoring with live streaming via SSE and provides alerting mechanisms integrated with anomaly detection. The platform supports OpenTelemetry (OTel) ingestion to ensure compatibility with any OTel-instrumented application. Developed using React 19, FastAPI, and databases like SQLite/PostgreSQL, AgentLens is available under the MIT license with comprehensive test coverage. It integrates seamlessly with popular frameworks such as LangChain, CrewAI, AutoGen, LlamaIndex, and Google ADK. The platform invites user feedback on its trace visualization methods and seeks suggestions for essential debugging features. Users can deploy AgentLens using Docker or install it via pip, with resources and documentation accessible through GitHub and an online portal. Keywords: #phi4, AI agents, AgentLens, AutoGen, CrewAI, Docker, FastAPI, GitHub, Google ADK, LangChain, LlamaIndex, OTel ingestion, PostgreSQL, React 19, SQLite, alerting, cost tracking, debugging, feedback, live streaming, multi-agent systems, observability, time-travel replay, topology graph, trace comparison, trace visualization
    news.ycombinator.com 7 days ago
1760.  HN Show HN: ChoresMates – Splitwise, but for Household Chores
ChoresMates is an innovative mobile application designed to address disputes over household chores by implementing a point-based system akin to financial splitting apps like Splitwise. It assigns point values to each task based on the effort required, incorporating features such as real-time leaderboards, penalties for overdue tasks, photo proof submission for completed tasks, and automatic rotation of chore assignments. The app supports groups consisting of up to 10 members and accommodates a maximum of 25 chores, offering both free and premium plans. Developed using Swift/SwiftUI for iOS and Kotlin for Android platforms, ChoresMates leverages a Rails 7 API backend hosted on AWS ECS with PostgreSQL as the database system. It provides integration with various authentication methods including Sign in with Apple and Google. The app's primary objective is to ensure equitable distribution of chores among couples, roommates, or family members while eliminating the need for subscriptions or additional in-app purchases. Keywords: #phi4, AWS ECS, Android, Chore Rotation, ChoresMates, Couples, Effort Points, Email Auth, Family Sharing, Free Tier, Gamification, Google, Household Chores, Households, Kotlin, Leaderboard, Photo Proof, PostgreSQL, Pro Subscription, Rails 7 API, Roommates, Sign in with Apple, Siri Shortcuts, Smart Reminders, Splitwise, Swift/SwiftUI, Task Tracking, Widgets, iOS
    apps.apple.com 8 days ago
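The point model described above is easy to sketch: each completed chore credits its effort points to whoever did it, with a penalty for overdue completions. The point values and penalty rule below are hypothetical, not the app's actual scoring.

```python
from collections import Counter

# Hypothetical effort points per chore and a flat overdue penalty.
CHORE_POINTS = {"dishes": 2, "vacuum": 3, "trash": 1}
OVERDUE_PENALTY = 1

def leaderboard(completions):
    """Tally (member, chore, overdue) records into a ranked leaderboard."""
    scores = Counter()
    for member, chore, overdue in completions:
        scores[member] += CHORE_POINTS[chore] - (OVERDUE_PENALTY if overdue else 0)
    # Highest score first, like the app's real-time leaderboard.
    return scores.most_common()

board = leaderboard([
    ("ana", "dishes", False),   # +2
    ("ana", "vacuum", False),   # +3
    ("ben", "trash", True),     # +1 - 1 = 0
    ("ben", "dishes", False),   # +2
])
print(board)  # [('ana', 5), ('ben', 2)]
```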
1770.  HN Show HN: Atom – open-source AI agent with "visual" episodic memory
Atom is an open-source, self-hosted AI automation platform uniquely designed for complex business workflows through its intelligent agent-based systems. It addresses the challenge of "State Blindness" by utilizing a novel architecture known as Canvas AI Accessibility. This approach generates hidden semantic layers to describe the visual state of user interfaces, allowing agents to verify their actions and maintain an episodic memory in a vector database. Unlike OpenClaw, which handles simpler tasks, Atom supports intricate multi-agent workflows involving specialty agents for various business functions like Sales, Marketing, and Engineering across numerous integrations such as CRM systems and development tools. A key aspect of Atom's design is its emphasis on governance, requiring agents to undergo "graduation" validation before advancing from "Student" to "Autonomous" status. This ensures safe operation by logging all actions for traceability. Built with Python/FastAPI and deployable via Docker Compose, Atom prioritizes data security and privacy through self-hosting. Its features include voice interface support, real-time guidance via Canvas presentations, deep integration with over 46 business tools, and a community marketplace for skills. Atom enhances its appeal with robust security measures such as encrypted storage, vulnerability scanning, and protection against supply chain threats like typosquatting. The platform fosters community contributions, offers comprehensive documentation, and adheres to AGPL v3 licensing. It maintains high reliability and performance through detailed testing procedures, making it a versatile solution for both business and personal productivity contexts. 
Keywords: #phi4, AI agent, Atom, BYOK, Canvas AI Accessibility, Docker Compose, FastAPI, OpenClaw, PostgreSQL, Python, audit logs, automation platform, browser automation, device control, episodic memory, governance, open-source, privacy, security, self-hosted, semantic layer, supply chain protection
    github.com 8 days ago
1781.  HN B.A.S.E. – A standalone back end language with zero dependencies
B.A.S.E. is a standalone back-end scripting language crafted for server-side logic without needing external dependencies, built using Go and compiled into a single 22MB binary. It supports various features including running HTTP servers, database queries, Discord webhooks, and machine automation. Installation requires downloading the binary with a simple curl command and setting up the environment. B.A.S.E. provides an interactive REPL for script testing and can execute standalone files or projects. The language boasts a comprehensive standard library that enables users to create web servers, make HTTP requests with custom headers and retries, connect to databases like SQLite and MongoDB, send notifications via Discord or email, handle concurrent background tasks, manage encrypted file operations, and schedule cron jobs. Its syntax is similar to JavaScript or C, using `let` for variable declarations, and it includes an auto-keep-alive system to prevent script termination by ensuring ongoing processes. Users can learn more about its built-in modules with the `base help` command. For inquiries or contributions, contact Igor at igor@igorkalen.dev. Keywords: #phi4, AES-256-GCM, BASE, Discord webhooks, Go, HTTP server, JavaScript syntax, MongoDB, MySQL, PostgreSQL, REPL, SMTP email, SQLite, UUIDs, backend, channels, concurrency, cron jobs, databases, encryption, modules, scheduling, scripting, tasks
    github.com 8 days ago
   https://github.com/igorkalen/base   8 days ago
1791.  HN India Blocks Supabase.co
Supabase, a widely-used developer database platform, has been blocked in India under Section 69A of the IT Act as of February 24, without any public justification from the Indian government. This blockage disrupts one of Supabase's key markets, accounting for approximately 9% of its global traffic, leading to inconsistent user access and significant operational challenges. Efforts by Supabase to implement workarounds, such as using VPNs, have proven impractical for a substantial number of users. The incident highlights broader concerns regarding India’s website blocking policies, which have previously impacted other developer platforms. Although the main site remains accessible in India, the underlying infrastructure is still blocked, contributing to uncertainty and concern within the local development community about the stability and predictability of tech services due to regulatory actions. This situation adds to the growing scrutiny of India's regulatory environment concerning technology services, raising alarms over potential future disruptions for developers relying on such platforms. Keywords: #phi4, ACT Fibernet, AI-driven app development, Access Now, Airtel, Bengaluru, DNS, Firebase, GitHub, India, Information Technology Act, JioFiber, PostgreSQL, Section 69A, Similarweb, Supabase, VPN, access, block, copyright, cybersecurity, developer infrastructure, disruption, open-source, traffic, website
    techcrunch.com 8 days ago
1831.  HN Video Conferencing with Postgres
In February 2026, Nick Van Wiggeren built upon SpacetimeDB's concept by implementing video calls over a database using PostgreSQL on PlanetScale. By utilizing the logical replication feature of PostgreSQL, he enabled real-time streaming of audio and video data between web browsers. This setup involved capturing media through browser APIs, encoding it, storing JPEG images and PCM audio samples in dedicated PostgreSQL tables, and transmitting them via a Node.js WebSocket server. Logical replication facilitated direct message delivery to clients without requiring frequent database polling, while a cleanup job maintained efficiency by deleting outdated frames every five seconds. Despite PostgreSQL's constraints, such as an 8KB payload limit for LISTEN/NOTIFY and difficulties with unlogged tables, Van Wiggeren demonstrated the feasibility of real-time video streaming on even low-cost plans. His project showcased how database systems could be repurposed for novel data streaming applications, highlighting PostgreSQL's potential as a versatile real-time backend solution. This innovative use case provides insights into leveraging databases beyond traditional roles, emphasizing reliability through logical replication. Keywords: #phi4, AudioBufferSourceNode, AudioFrames, AudioWorkletNode, BYTEA, Binary WebSocket Frames, Blob URL, Cleanup Job, Database, JPEG, Jitter Buffer, LISTEN/NOTIFY, Logical Replication, Nodejs, PCM16LE, PlanetScale, PostgreSQL, Postgres, Real-Time Backend, Replication Stream, SvelteKit, Unlogged Tables, Video Conferencing, VideoFrames, WAL (Write-Ahead Log), WebRTC, WebSocket, pg-relay
    planetscale.com 8 days ago
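On the playback side, the entry's keywords mention a jitter buffer: frames can arrive slightly out of order over the replication stream, so they are buffered briefly and released in timestamp order after a fixed delay. A minimal sketch of that idea (illustrative only; the article's buffer lives in the browser):

```python
import heapq

# Buffer frames in a min-heap keyed by timestamp; release a frame only once
# it is older than a fixed delay, so late arrivals can still slot in order.
class JitterBuffer:
    def __init__(self, delay_ms=200):
        self.delay = delay_ms
        self.heap = []

    def push(self, ts_ms, frame):
        heapq.heappush(self.heap, (ts_ms, frame))

    def pop_ready(self, now_ms):
        ready = []
        while self.heap and self.heap[0][0] <= now_ms - self.delay:
            ready.append(heapq.heappop(self.heap)[1])
        return ready

jb = JitterBuffer(delay_ms=200)
for ts, frame in [(100, "f1"), (300, "f3"), (200, "f2")]:  # f2 arrives late
    jb.push(ts, frame)
out = jb.pop_ready(now_ms=450)
print(out)  # ['f1', 'f2'] -- in timestamp order; f3 is still too fresh
```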
1855.  HN Show HN: CocoSearch – semantic code search with syntax-aware chunking
CocoSearch is a sophisticated semantic code search tool tailored for local-first and privacy-focused settings, designed to enhance the precision of locating relevant code in extensive codebases by preserving structure with syntax-aware chunking via CocoIndex and Tree-sitter. This approach ensures meaningful boundaries within functions, classes, and configuration blocks, facilitating more accurate searches through expanded scopes. The tool combines vector similarity with keyword matching using RRF fusion, supporting both local indexing/embedding (via Ollama) and optional remote services like OpenAI/OpenRouter, ensuring user data remains on the machine during processing. CocoSearch offers a variety of interfaces, including a web dashboard for visual management, CLI for scripting, MCP server, and an interactive REPL to cater to different workflow needs. It excels in structure-preserving retrieval, capturing AI-native context that reflects dependency relationships and domain-aware grammar handling for infrastructure files like Terraform and Dockerfiles. The tool supports 32 languages with symbol-level filtering capabilities and includes tools for extracting dependency graphs, aiding developers and DevOps engineers in analyzing code connections and conducting impact analysis. The system is particularly beneficial for users dealing with structured configuration files and extensive codebases, reducing the number of queries needed to find relevant information. It emphasizes privacy by processing data locally, with optional remote configurations available. CocoSearch can be integrated into AI assistants like Claude Code through plugins or manual registrations, making it a versatile addition to development environments focused on maintaining user privacy while enhancing search functionality. 
Keywords: #phi4, AI-native context, CLI, CocoIndex, CocoSearch, Docker, MCP server, Ollama, PostgreSQL, REPL, RRF fusion, Tree-sitter, WEB dashboard, context expansion, dependency graph, dependency-aware analysis, domain-specific grammars, grammar handlers, hybrid RRF fusion, incremental indexing, keyword matching, language handlers, local-first tool, parse health tracking, pgvector, pipeline analysis, privacy-focused, query caching, semantic code search, semantic similarity, symbol extraction, symbol filtering, syntax-aware chunking
    github.com 8 days ago
   https://github.com/cocoindex-io/cocoindex   8 days ago
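The RRF fusion mentioned above is the standard Reciprocal Rank Fusion formula: each document scores the sum of 1/(k + rank) over the ranked lists it appears in, so results that rank well in both the vector and keyword lists rise to the top. k = 60 is the conventional constant; CocoSearch's exact parameters may differ.

```python
# Reciprocal Rank Fusion: merge multiple ranked lists into one,
# score(doc) = sum over lists of 1 / (k + rank_in_list).
def rrf_fuse(rankings, k=60):
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical search results from the two retrieval paths.
vector_hits  = ["parse_config", "load_yaml", "init_app"]
keyword_hits = ["load_yaml", "read_file", "parse_config"]

fused = rrf_fuse([vector_hits, keyword_hits])
print(fused)  # load_yaml first: strong in both lists beats strong in one
```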
1862.  HN Show HN: Nano Banana 2 – 4K AI image generator with accurate text rendering
Nano Banana 2 is an advanced AI image generation platform designed to overcome common challenges in text rendering, resolution limitations, and character consistency across images. It boasts true 4K output capabilities, achieving accurate text rendering about 90% of the time while maintaining up to five characters and fourteen objects consistently throughout its generated imagery. The technology underpinning Nano Banana 2 is Google's Gemini 3.1 Flash Image model, enabling it to produce high-quality images rapidly, typically within a span of 4-6 seconds. Its technical infrastructure incorporates Next.js, React, TypeScript, Drizzle ORM, PostgreSQL, and Cloudflare R2. The platform provides users with four different models at varying price points ranging from $0.039 to $0.134 per image, and new sign-ups receive free credits that do not expire. Leveraging Gemini's expansive knowledge base allows Nano Banana 2 to generate contextually precise content efficiently. Available online at https://ainanobanana2.pro, the platform is adept for high-volume production tasks, making it an excellent tool for marketing and branding applications due to its speed and reliability in image creation. Keywords: #phi4, 4K resolution, AI image generator, Cloudflare R2, Drizzle ORM, Google Gemini, Nano Banana, Nextjs, PostgreSQL, React, TypeScript, brand asset creation, character consistency, high-volume production, multi-language translation, object tracking, real-time world knowledge, storyboarding, text rendering
    www.ainanobanana2.pro 8 days ago
1867.  HN Show HN: Memrail – PR-style governance for AI agent writes (OpenClaw)
Memrail is an open-source governance layer designed to enhance OpenClaw workflows by introducing a pull-request (PR) style control mechanism for managing AI agent writes. It addresses several challenges such as memory quality drift, change tracking difficulties, and rollback issues through a structured process that includes dry-runs, diff previews, human approvals or rejections, commits, audit trails, and undo functionalities. The key features of Memrail include Governed Writes, which require all changes to pass through a controlled pipeline with mandatory human review; a Review Inbox at `/changes` for differential-based commit or rejection decisions by humans; and an Operational Workspace providing tools like `/tasks` and `/knowledge` for daily operations and governance of execution contexts. This tool is particularly suited for teams or individuals using OpenClaw where maintaining high memory quality, requiring human approval for changes, and ensuring audit trails and rollback capabilities are critical. However, it's not suitable for completely autonomous write pipelines lacking human oversight or users needing built-in multi-tenant billing or OAuth solutions. The technology stack comprises a backend utilizing FastAPI and SQLAlchemy, with frontend implementation in Next.js, and offers database options like SQLite (default) and PostgreSQL (optional). To get started, users need to clone the repository, set up environment configurations, run backend services using FastAPI and Uvicorn, and initiate the frontend with Node.js. Changes can be proposed via `curl` commands for dry-run purposes and committed after receiving human approval. Memrail is targeted at developers and teams aiming to integrate governance into their OpenClaw workflows, offering a structured approach to manage AI agent writes while ensuring traceability and reversibility of changes. 
Keywords: #phi4, AI agents, Apache-20 license, FastAPI, Memrail, Nextjs, OpenClaw, PR-style, PostgreSQL, SQLAlchemy, SQLite, audit trail, backend, change management, diff preview, dry-run, frontend, governance, human approval, integration, memory quality, operational workspace, pull requests, rollback, security, undo, workflows
    github.com 8 days ago
1868.  HN Show HN: AI Wins – Automated positive AI news aggregator
AI Wins is an automated aggregator designed to curate positive news stories about artificial intelligence, leveraging technologies such as Next.js, Tailwind CSS, and PostgreSQL. It organizes content into eight distinct categories including Breakthroughs and Healthcare through a comprehensive pipeline that encompasses sentiment filtering, summarization, categorization, and publishing. Despite its innovative methodology aimed at highlighting favorable AI developments, the system occasionally admits neutral stories due to imperfections in its positive-sentiment filtering process. A notable development on February 19, 2026, involved research from the University of Hawaii, where students at UH Manoa developed a physics-informed AI algorithm that ensures physically plausible outputs. This significant advancement has potential applications across various fields such as climate modeling and renewable energy, showcasing AI's growing impact in complex scientific domains. Keywords: #phi4, AI, Nextjs, PostgreSQL, Tailwind CSS, UH Manoa, accessibility, aggregator, algorithm, automated, business, categorization, climate modeling, creative, education, environment, healthcare, particle physics, physics-informed AI, positive news, renewable energy, research, sentiment filtering, summarization
    www.aiwins.news 8 days ago
1888.  HN Nao: Open-Source Analytics Agent
Nao is an open-source framework designed for building and deploying analytics agents through a user-friendly interface that supports data interaction and analysis. The framework enables users to create contexts using tools like `nao-core` CLI, supporting various components such as data, metadata, modeling, and rules. Key features cater to both data teams and business users; it provides an Open Context Builder for limitless context additions, ensuring data stack agnosticism, reliability via unit testing and versioning, and security through self-hosting with private LLM keys. Business users benefit from natural language queries, native data visualization within the chat interface, transparent reasoning processes, and easy feedback mechanisms. The quickstart process involves installing the `nao-core` package, initializing a Nao project with optional configurations, debugging to verify setup, synchronizing contexts for file population, and launching the chat UI to begin querying. Evaluation is facilitated through unit testing using YAML test cases, accessible via specific commands such as `chat`, `init`, `sync`, `test`, and `debug`. Docker support allows running of chat interfaces from both example or local projects. Nao encourages community involvement through platforms like GitHub, LinkedIn, and Slack, with detailed contribution guidelines available. It operates under the Apache 2.0 License and is a Y Combinator company. Additional information on setup, usage, and deployment can be found in documentation provided by Nao Labs. Keywords: #phi4, Analytics, Apache License, Business Users, Data Teams, Docker, Framework, Insights, Natural Language, Open-Source, PostgreSQL, Reasoning, Security, Visualization
    github.com 8 days ago
1900.  HN Show HN: Core – Constitutional governance runtime for AI coding agents
CORE is an innovative governance runtime specifically designed to facilitate AI-assisted software development by ensuring that AI coding agents operate within predefined constitutional laws during execution. It enforces 92 rules through seven deterministic engines, halting any operation if a rule is violated to prevent partial states and undetected breaches. Demonstrated by its ability to autonomously block a self-healing workflow due to synchronization issues between governance components, CORE showcases its capability for human-independent regulation. The system's architecture consists of three distinct layers: Mind, which defines the laws; Will, responsible for judgment and decision-making; and Body, handling execution. Each operation adheres to a phase-aware workflow that verifies compliance with constitutional rules before proceeding, thus preventing unauthorized actions such as deleting production databases and ensuring deterministic, auditable workflows. CORE supports autonomous operations advancing through defined levels of capability—from self-awareness to strategic autonomy—while maintaining adherence to constitutional boundaries. To deploy CORE, the system requires Python version 3.11 or higher, PostgreSQL version 14 or above, Qdrant, Docker, and Poetry for dependencies, with a quick start setup available via Git and Poetry. Keywords: #phi4, AI agents, CORE, Docker, Git, LLM-assisted checks, Poetry, PostgreSQL, Python, architecture, audit trail, autonomous workflows, autonomy lanes, constitutional governance, deterministic engines, enforcement, law, runtime, self-healing, strategic autonomy, structural analysis, workflow phases
    github.com 8 days ago
1901.  HN We Chose SQLite
The team behind Curling IO initially considered using PostgreSQL for their Version 3 due to its robust features and familiarity but decided against it after evaluating the complexities and dependencies of self-hosting. Instead, they opted for SQLite, complemented by Litestream for backup and replication, transitioning to OVH data centers in Canada to decrease reliance on AWS. This choice was strategic as SQLite's architecture allows Curling IO Version 3 to operate efficiently on a single server with each sport having its own database file, which suits their primarily read-heavy workload interspersed with occasional write bursts. The decision resulted in cost savings by negating the need for separate database hosting. Litestream enhances SQLite’s functionality by providing straightforward and dependable backup solutions using S3-compatible storage, facilitating quicker data recovery compared to the more complex processes associated with PostgreSQL. Despite some limitations inherent to SQLite, such as missing certain data types and restrictions on schema modifications, Curling IO has implemented application-level workarounds to address these issues effectively. Overall, migrating to SQLite paired with Litestream has proven advantageous for Curling IO by simplifying backup operations and optimizing resource use while ensuring operational flexibility. This setup supports their architectural objectives and operational requirements within existing infrastructure constraints, maintaining an open option for transitioning back to PostgreSQL if necessary in the future. Keywords: #phi4, AWS, BEAM runtime, Canada, Crunchy Bridge, Erlang NIF, Gleam, Litestream, OVH, PostgreSQL, SQLite, WAL, application-level pub/sub, backup and recovery, database-per-sport, in-process, managed services, multi-tenant, schema evolution, single-server architecture, test isolation, transaction-level point-in-time recovery, type conversions
    curling.io 8 days ago
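Because each sport lives in its own SQLite file, the Litestream side of the setup described above is just a list of database entries, each pointing at an S3-compatible replica. A minimal sketch of a `litestream.yml` (paths and bucket names are hypothetical):

```yaml
# One entry per database file; Litestream streams each database's WAL
# to its S3-compatible replica for backup and recovery.
dbs:
  - path: /var/lib/curlingio/curling.db
    replicas:
      - url: s3://example-backups/curling
  - path: /var/lib/curlingio/other-sport.db
    replicas:
      - url: s3://example-backups/other-sport
```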
1922.  HN Video Conferencing with Postgres
Nick Van Wiggeren's article from February 27, 2026, explores an innovative implementation of video conferencing using a PostgreSQL database hosted on PlanetScale as the core data transport mechanism. Inspired by SpacetimeDB's pioneering "video call over a database," this project employs open-source software to achieve similar functionality with PostgreSQL. The system captures audio and video from a browser, encodes them into frames (JPEG for video, PCM16LE for audio), and transmits this data via WebSocket to a Node.js server known as pg-relay. This relay checks the call's active status before storing the media frames in PostgreSQL tables named `video_frames` and `audio_frames`. Logical replication is utilized to stream these media frames in real time from one participant’s database to another, allowing for playback reconstruction on the receiving browser. The project demonstrates PostgreSQL’s capability to handle real-time data transmission at 15fps for video and simultaneous audio streaming through logical replication. A cleanup process efficiently manages storage by removing older frames every few seconds, maintaining a buffer of about 5-7 seconds per call. While alternatives like LISTEN/NOTIFY or unlogged tables could be considered, they introduce additional complexities or limitations. The experiment highlights PostgreSQL's adaptability and potential as a versatile real-time backend, even on modest service plans such as PlanetScale's $5 tier. Although WebRTC is noted for its efficiency in video conferencing, this project provides insights into extending database technologies beyond traditional applications. The code is open-source on GitHub, encouraging further experimentation with PostgreSQL’s capabilities in real-time scenarios. 
Keywords: #phi4, AudioBufferSourceNode, AudioFrames, AudioWorkletNode, BYTEA, Binary WebSocket Frames, Blob URL, Cleanup Job, Database, JPEG, Jitter Buffer, LISTEN/NOTIFY, Logical Replication, Nodejs, PCM16LE, PlanetScale, PostgreSQL, Postgres, Real-Time Backend, Replication Stream, SvelteKit, Unlogged Tables, Video Conferencing, VideoFrames, WAL (Write-Ahead Log), WebRTC, WebSocket, pg-relay
    planetscale.com 9 days ago
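The cleanup job described in the entry above, which keeps only a few seconds of frames per call, can be sketched in pure Python. The 6-second retention window and the frame-log shape here are illustrative assumptions, not values taken from the article's code.

```python
RETENTION_SECONDS = 6  # assumed from the article's "about 5-7 seconds" buffer

def prune_frames(frames, now):
    """Keep only frames newer than the retention window.

    `frames` is a list of (timestamp, payload) tuples, oldest first,
    mimicking rows in a video_frames table ordered by insert time.
    """
    cutoff = now - RETENTION_SECONDS
    return [(ts, payload) for ts, payload in frames if ts >= cutoff]

# Simulated frame log: one frame per second for the last 10 seconds.
now = 100.0
frames = [(now - age, b"jpeg-bytes") for age in range(10, 0, -1)]
kept = prune_frames(frames, now)
# Only frames at most RETENTION_SECONDS old survive the cleanup pass.
```

In the real system this pruning would be a periodic DELETE against the frame tables; the sketch just shows the retention arithmetic.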
1935.  HN Show HN: Wardrowbe – I kept staring at a full closet with nothing to wear
Wardrowbe is a self-hosted wardrobe management application designed to address the common issue of having an overstuffed closet without suitable outfit options. Utilizing AI technology, it recommends outfits by analyzing photos uploaded by users to extract details such as color and pattern. These recommendations are customized based on current weather conditions and personal preferences, ensuring relevance and suitability for various occasions. The app offers scheduled notifications via ntfy/Mattermost or email to keep users informed about outfit suggestions. Key features of Wardrowbe include photo-based wardrobe management, AI-powered recommendation systems, support for multiple family members' wardrobes, and the ability to track clothing usage with analytics on wear frequency and color distribution. The data remains private as it is fully self-hosted, eliminating concerns over third-party access. Technically, Wardrowbe consists of a frontend built with Next.js and a backend leveraging FastAPI and SQLAlchemy, with PostgreSQL serving as the database. It uses Redis for job queues and incorporates AI services like Ollama or OpenAI. Deployment can be achieved using Docker Compose, with Kubernetes as an alternative option, allowing flexibility in setup. To set up Wardrowbe, users must ensure they have Docker, Docker Compose, and at least 4GB of RAM. They need to decide between local AI (Ollama) or the OpenAI API (internet-based), clone the repository, configure environment settings in a `.env` file, and initiate services with Docker Compose. The project encourages community involvement through clear contribution guidelines available in a CONTRIBUTING.md file. Support is offered via GitHub issues and forums for troubleshooting. Wardrowbe is open-source under the MIT License and requires sufficient RAM/storage to run effectively, with specific suitability noted for Raspberry Pi 5 deployments. 
Keywords: #phi4, AI-powered recommendations, Docker Compose, FastAPI, Kubernetes, Nextjs, OIDC authentication, Ollama, OpenAI API, PostgreSQL, Redis, SMTP email, Wardrowbe, analytics, development mode, family support, notifications, ntfysh, outfit suggestions, photo-based system, self-hosted, troubleshooting, wardrobe management, wear tracking, weather integration
    github.com 9 days ago
1941.  HN India disrupts access to popular developer platform Supabase with blocking order
Supabase, a developer platform, has experienced disruptions in India due to a government directive enforcing Section 69A of the IT Act, which has restricted access without publicly stated reasons since February 24. The blockade's duration is uncertain, causing inconsistent access across networks and impacting both new users and those reliant on Supabase for development tasks. While temporary workarounds like DNS changes or VPN use have been proposed, they are impractical for most affected individuals. The interruption is significant, as India accounts for about 9% of Supabase’s global traffic. The situation underscores ongoing concerns regarding India's website blocking practices, previously criticized for their opacity. Despite attempts by Supabase to engage with authorities and seek resolutions through various channels, access issues persist for many users in the country as of the latest updates. Keywords: #phi4, ACT Fibernet, AI-driven app development, Access Now, Airtel, Bengaluru, DNS settings, Firebase, GitHub, India, Information Technology Act, JioFiber, New Delhi, PostgreSQL, Section 69A, Similarweb, Supabase, VPN, blocking order, copyright complaint, cybersecurity, developer platform, inconsistent access, open-source
    techcrunch.com 9 days ago
1944.  HN Show HN: Recall – Persistent Memory for Claude Code via MCP Hooks
"Recall," a product designed as an MCP (Model Context Protocol) server plugin for Claude Code sessions, addresses the challenge of remembering session details between interactions by offering persistent memory capabilities. This native plugin utilizes four lifecycle hooks to manage and capture critical events such as git commits, file changes, and other significant state alterations before summarization occurs. It preserves these states through a Redis-backed memory system that supports semantic search via advanced AI technologies, ensuring secure storage with AES-256-GCM encryption. The implementation of Recall is seamless and user-friendly, supporting auto-updates and installation via bash and curl commands without necessitating daemons or background processes. Its architecture leverages open-source components such as TypeScript, Express 5, Redis/Valkey adapters, Drizzle ORM for PostgreSQL, and StreamableHTTP MCP transport, enhancing its integration with a variety of tools. Security is emphasized through timing-safe HMAC-SHA256 signatures to verify incoming requests' authenticity, requiring no additional changes if no secret key is configured. Recall offers features like webhook ingestion for AI event awareness, self-hosting capabilities via Docker Compose, and team knowledge sharing within the same workspace. Available under an MIT license on recallmcp.com, it provides a free tier with 500 memories, making it accessible to users seeking efficient project management solutions. Keywords: #phi4, Claude Code, Cryptographic Proof, Docker Compose, Drizzle ORM, Embeddings, Express 5, HMAC-SHA256, MCP Hooks, MCP Tools, Persistent Memory, Plugin, PostgreSQL, Redis, Self-hosted, Semantic Search, StreamableHTTP, Team Sharing, TypeScript, Webhook Ingestion
    recallmcp.com 9 days ago
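The timing-safe HMAC-SHA256 request verification mentioned in the Recall entry above can be sketched with Python's standard library; the secret and payload here are placeholders, not Recall's actual wire format.

```python
import hashlib
import hmac

SECRET = b"example-shared-secret"  # placeholder, not a real key

def sign(body: bytes, secret: bytes = SECRET) -> str:
    """Compute an HMAC-SHA256 signature over a request body."""
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify(body: bytes, signature: str, secret: bytes = SECRET) -> bool:
    """Check a signature using a constant-time comparison.

    hmac.compare_digest avoids the timing side channel that a plain
    `==` string comparison would leak.
    """
    expected = sign(body, secret)
    return hmac.compare_digest(expected, signature)

body = b'{"event": "git_commit"}'
good = verify(body, sign(body))        # valid signature
bad = verify(body, sign(b"tampered"))  # signature for different payload
```

The design point is that the comparison time does not depend on how many leading characters match, so an attacker cannot brute-force the signature byte by byte.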
2013.  HN TaskForge – immutable, orchiestration for OpenClaw bots
TaskForge is an independent orchestration layer built for the OpenClaw project, focusing on providing a secure and auditable environment for running AI agents. It employs sandboxed Docker containers with capability-based security, where agents start with minimal permissions and must request additional capabilities through human approval, ensuring changes via immutable container image rebuilds. Key features include isolated execution within Docker-in-Docker environments, explicit human approval for new capabilities, multi-provider language model integration (such as Ollama, Gemini, Anthropic, OpenAI), full audit trails of language model interactions, and durable Temporal workflows that allow tasks to be paused and resumed. TaskForge also supports deployment by enabling agents to create applications on specified ports. The setup process involves prerequisites like Docker 24+ and at least 16GB RAM, cloning the repository, configuring environment variables for LLM providers, and using `make up` to launch services. The architecture comprises a comprehensive Docker Compose topology with components like an API server, image builder, Temporal executor, frontend dashboard, and databases. TaskForge's development structure includes directories for various services such as control-plane, image-builder, temporal-worker, agent-executor, and front-end components. Its design supports secure orchestration of AI agents, making it apt for enterprise-scale solutions under stringent governance. Developed by Roman Pawel Klis, TaskForge emphasizes security, immutability, and scalability in executing AI tasks. Keywords: #phi4, API, Anthropic, Docker, FastAPI, Gemini, LLM routing, Ollama, OpenAI, OpenClaw, PostgreSQL, TaskForge, Temporal workflows, agent containers, audit trail, capability gating, deployment support, environment variables, image rebuilds, immutable infrastructure, multi-provider, orchestration, sandboxed execution, security model
    github.com 9 days ago
2030.  HN Show HN: Open‑Source Capital Formation OS (Postgres and AI Agents)
The "Riserva Flywheel" project is an open-source initiative designed to revolutionize capital formation by shifting from traditional spreadsheet methods to a structured, automated, and AI-enhanced system. It aims to create a closed-loop environment that standardizes venture operations, organizes investors into graph structures, automates outreach efforts, and logs outcomes for continual improvement of fundraising strategies. Central components of the project include a PostgreSQL backend for data storage, along with defined schemas like Venture, Investor, Deal, and Value Ledger to manage information efficiently. A deterministic matching engine is employed to ensure reproducible investor pairing, while an event-based logging layer captures interactions as enduring records. Additionally, AI agents are integrated to automate various tasks such as matching investors, conducting outreach, and converting meetings into memos. The project focuses on structured data models, event logging, deterministic processes, and artifact generation to establish a compounding system for capital formation. Contributions to the project are encouraged in areas like schema development, matching logic, and automation, with an architecture that emphasizes clarity, traceability, and iterative improvements in fundraising efficacy. Keywords: #phi4, AI Agents, Artifact Generation, Automation Layer, Capital Engine, Capital Formation, Compounding Fundraising, Data Modeling, Deal, Deterministic Pipelines, Event Logging, Event-based Logging, FastAPI, Flywheel, Gmail API, Google Calendar API, GraphQL, Infrastructure, Investor, Matching Engine, Matching Logic, Nodejs, Object Boundaries, Open-Source, OpenAI API, PostgreSQL, Postgres, Python, REST, S3-compatible Storage, Schema, Structured Data Models, Systems Engineering, Value Ledger, Venture, n8n
    github.com 9 days ago
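The "deterministic matching engine" in the Riserva entry above is described only at a high level. As an illustration of the property it claims, reproducible pairing can be achieved by scoring candidates and breaking ties on a stable key, so the same inputs always produce the same match order. The field names and tag-overlap scoring here are hypothetical, not the project's actual schema.

```python
def match_investors(venture_tags, investors):
    """Rank investors by tag overlap with a venture, deterministically.

    `investors` is a list of dicts with hypothetical 'id' and 'tags'
    keys. Ties on score are broken by investor id, so the output
    order is fully reproducible for identical inputs.
    """
    def score(inv):
        return len(set(inv["tags"]) & set(venture_tags))
    return sorted(investors, key=lambda inv: (-score(inv), inv["id"]))

investors = [
    {"id": "inv-b", "tags": ["saas", "ai"]},
    {"id": "inv-a", "tags": ["saas", "ai"]},
    {"id": "inv-c", "tags": ["biotech"]},
]
ranked = match_investors(["ai", "saas"], investors)
# inv-a and inv-b tie on score; the id tiebreak puts inv-a first.
```

Without the explicit tiebreak, two equally scored investors could come out in input-dependent order, which would break the reproducibility the project emphasizes.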
2055.  HN Mentor Me
The webpage "Mentor Me" features a software developer seeking mentorship in lieu of offering coaching services. The developer boasts over 30 web applications developed, including freelance projects like Gift Circle and Talk Timer, utilizing technologies such as TypeScript and React. They are searching for guidance from an experienced senior or staff engineer at small to mid-sized companies to aid their job search. Emphasizing the importance of feedback and personal growth, they aspire to secure roles that allow them to be fully authentic. Interested mentors can reach out via a short message with their details and may receive a referral bonus if they successfully connect the developer with potential employers. Keywords: #phi4, AI agent, CDA Credential Tracker, CSS, HTML, JavaScript, Nextjs, PostgreSQL, Prisma, React, SQLite, SocketIO, Tailwind, TypeScript, coding practice, engineer, feedback, freelance projects, job search, mentee, mentor, portfolio, referral bonus, software developer, web apps
    strangestloop.io 9 days ago
2079.  HN Video Conferencing with Postgres
In February 2026, Nick Van Wiggeren successfully demonstrated the use of PostgreSQL for real-time video conferencing by leveraging its capabilities as a backend system. Inspired by SpacetimeDB, he developed a setup using SvelteKit and Node.js to capture and stream audio/video frames through PlanetScale Postgres. The system encodes camera inputs into JPEG format and audio inputs into PCM format before transmitting them via a WebSocket server directly to PostgreSQL. This database utilizes logical replication for real-time delivery of video streams. By inserting encoded frames into a `video_frames` table, the setup achieves bidirectional video streaming at 15 frames per second, utilizing PostgreSQL's inherent durability and querying features. Van Wiggeren discussed potential enhancements such as using unlogged tables to increase insert speed but opted not to implement them due to their impact on logical replication processes. This innovative approach demonstrated that with a minimal infrastructure cost of $5/month for PlanetScale Postgres, the database could serve effectively as a versatile platform for real-time communication applications. Keywords: #phi4, AudioBufferSourceNode, AudioFrames, AudioWorkletNode, BYTEA, Binary WebSocket Frames, Blob URL, Cleanup Job, Database, JPEG, Jitter Buffer, LISTEN/NOTIFY, Logical Replication, Nodejs, PCM16LE, PlanetScale, PostgreSQL, Postgres, Real-Time Backend, Replication Stream, SvelteKit, Unlogged Tables, Video Conferencing, VideoFrames, WAL (Write-Ahead Log), WebRTC, WebSocket, pg-relay
    planetscale.com 9 days ago
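The jitter buffer named in the keywords above smooths playback when frames arrive out of order from the replication stream: frames are held until the next expected sequence number is available, then released in order. This is a generic sketch of that mechanism, not the project's actual code.

```python
import heapq

class JitterBuffer:
    """Reorder frames that arrive out of sequence.

    Frames are (seq, payload) pairs; pop_ready() releases frames
    only once the next expected sequence number has arrived.
    """
    def __init__(self):
        self.heap = []
        self.next_seq = 0

    def push(self, seq, payload):
        heapq.heappush(self.heap, (seq, payload))

    def pop_ready(self):
        out = []
        while self.heap and self.heap[0][0] == self.next_seq:
            out.append(heapq.heappop(self.heap)[1])
            self.next_seq += 1
        return out

buf = JitterBuffer()
buf.push(1, "frame1")
held = buf.pop_ready()      # frame 0 hasn't arrived, nothing released
buf.push(0, "frame0")
released = buf.pop_ready()  # now both come out, in sequence order
```

A production buffer would also skip frames after a timeout instead of stalling forever on a lost one; that policy is omitted here for brevity.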
2095.  HN Show HN: Open-source proxy to track Claude API costs by team
Prism is an open-source proxy tool designed for organizations to monitor and analyze expenses related to the Claude API across various departments such as Legal, Sales, Engineering, HR, and Finance. It addresses the challenge of limited visibility into specific departmental usage and spending by providing comprehensive analytics. **Functionality and Architecture**: Prism operates as a proxy between applications and Anthropic's API, capturing essential metadata including costs, token usage, latency, and detailed request information. Its architecture allows for seamless integration without altering existing SDKs or headers; users simply substitute their base URL with the Prism instance. **Features**: The tool offers real-time dashboards that display metrics such as total cost, requests, token utilization, and average latency. It breaks down costs by model (e.g., Opus vs Sonnet) and team/key for detailed insights. Additionally, it supports streaming with full pass-through capability, logs every request with complete details like status and latency, and provides various chart visualizations to depict cost trends over time. Prism's user interface is modern and built using React, ensuring a premium experience. **Setup**: Users can set up Prism either through Docker using `docker-compose` or via manual installation that involves setting up the backend with Python virtual environments and frontend tools like Node.js. **Usage**: To use Prism, users must create an account to obtain a proxy API key for directing requests through the system. Application traffic is then routed to the Prism instance via code modifications or cURL commands, allowing access to its detailed analytics dashboard. **Technical Stack**: The backend of Prism is developed using Python 3.12, FastAPI, Async IO, PostgreSQL, SQLAlchemy, and Alembic for migrations. Its frontend leverages React, TypeScript, Vite, Tailwind CSS, and Recharts for visualizations. 
**Future Developments and Contributions**: The roadmap includes enhancements like person-level tracking, cost alerts, support for multiple models, budget forecasting, report exports, and a more robust production-grade proxy. The project invites contributions through pull requests after discussing proposed changes via issues. Prism is licensed under Apache 2.0, aiming to streamline API cost management by offering detailed usage insights quickly and efficiently. Keywords: #phi4, API, Analytics, Budget Forecasting, Contribution Guidelines, Cost Alerts, Cost Tracking, Docker, Encryption, FastAPI, Glassmorphism UI, JWT Auth, Multi-model Support, Open-source, PostgreSQL, Prism, Proxy, React Dashboard, Real-time Dashboard, Request Logs, Streaming
    github.com 9 days ago
   https://github.com/doramirdor/NadirClaw   8 days ago
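The per-model cost breakdowns Prism reports can be computed from logged token counts and a rate table. The rates below are made-up placeholders for illustration, not Anthropic's actual pricing, and the log fields are hypothetical.

```python
# Hypothetical per-million-token rates in USD; real prices differ.
RATES = {
    "opus":   {"input": 15.0, "output": 75.0},
    "sonnet": {"input": 3.0,  "output": 15.0},
}

def request_cost(model, input_tokens, output_tokens):
    """Cost of one request from its token counts and model rates."""
    r = RATES[model]
    return (input_tokens * r["input"] + output_tokens * r["output"]) / 1_000_000

def cost_by_model(log):
    """Aggregate logged requests into a per-model cost total."""
    totals = {}
    for entry in log:
        c = request_cost(entry["model"], entry["in"], entry["out"])
        totals[entry["model"]] = totals.get(entry["model"], 0.0) + c
    return totals

log = [
    {"model": "opus", "in": 1000, "out": 500},
    {"model": "sonnet", "in": 2000, "out": 1000},
    {"model": "opus", "in": 500, "out": 200},
]
totals = cost_by_model(log)
```

Grouping by team or API key instead of model is the same aggregation with a different key, which is essentially what the dashboard's team/key breakdown does.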
2100.  HN The Part of PostgreSQL We Hate the Most (2023)
The article "The Part of PostgreSQL We Hate the Most (2023)" critically examines PostgreSQL's implementation of Multi-Version Concurrency Control (MVCC), despite its widespread adoption as a robust Database Management System (DBMS). While acknowledging PostgreSQL's popularity for its reliability and feature set, the authors, including Bohan Zhang, argue that its MVCC approach lags behind other major DBMSs like MySQL, Oracle, and Microsoft SQL Server. Central to their critique is PostgreSQL’s append-only storage model which results in excessive data duplication during updates; it creates full copies of tuples rather than employing delta versions, leading to increased storage needs. This inefficiency contributes to table bloat under heavy write loads due to the accumulation of dead tuples that slow performance by occupying space and resources. Additionally, PostgreSQL's requirement to update all indexes with each tuple modification further hampers performance unless specific optimizations like heap-only tuples (HOT) are applicable—a strategy not employed in other systems which use different index handling techniques. The article also highlights issues with vacuum management, where the autovacuum process struggles due to its complexity and vulnerability to being impeded by long transactions, necessitating manual intervention to address performance degradation. Despite these drawbacks, PostgreSQL remains a favored choice for many applications, but the authors emphasize that overcoming its MVCC challenges often requires significant effort or external tools like those offered by OtterTune, which promise automated solutions in a forthcoming article. Keywords: #phi4, Amazon RDS, Aurora, MVCC, OtterTune, PostgreSQL, autovacuum issues, concurrency control, database systems, dead tuples, index maintenance, storage efficiency, vacuum, versioning
    www.cs.cmu.edu 9 days ago
   https://news.ycombinator.com/item?id=41895951   9 days ago
   https://news.ycombinator.com/item?id=35716963   9 days ago
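The append-only behavior the article criticizes can be modeled in a few lines: every UPDATE writes a complete new tuple version and leaves the old one behind as a dead tuple until vacuum reclaims it. This is a toy model of the bloat mechanism, not PostgreSQL's actual storage code.

```python
class Table:
    """Toy append-only MVCC table: updates never overwrite in place."""
    def __init__(self):
        self.tuples = []  # [key, value, live] entries in insert order

    def insert(self, key, value):
        self.tuples.append([key, value, True])

    def update(self, key, value):
        # Mark the current live version dead, append a full new copy
        # (a delta-based MVCC scheme would store only the changed fields).
        for t in self.tuples:
            if t[0] == key and t[2]:
                t[2] = False
        self.tuples.append([key, value, True])

    def dead_count(self):
        return sum(1 for t in self.tuples if not t[2])

    def vacuum(self):
        # Reclaim dead versions: autovacuum's job in PostgreSQL.
        self.tuples = [t for t in self.tuples if t[2]]

t = Table()
t.insert("a", 1)
t.update("a", 2)
t.update("a", 3)
dead_before = t.dead_count()  # two dead versions bloating the table
t.vacuum()
dead_after = t.dead_count()   # storage reclaimed
```

Two updates to a single row tripled the stored versions; under a heavy write load this is exactly the table bloat the article describes, and why vacuum falling behind hurts so much.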
2108.  HN Show HN: Stash – AI-powered self-hosted bookmark manager
Stash is an innovative, self-hosted bookmark manager that leverages AI to streamline web bookmark management. It enhances user experience by automatically fetching, summarizing, categorizing, tagging URLs, and generating vector embeddings for efficient search functionalities. The system supports both semantic and keyword-based searches through the integration of pgvector cosine similarity and PostgreSQL tsvector with Reciprocal Rank Fusion, accommodating natural language queries as well as straightforward keywords. Emphasizing privacy and simplicity, Stash operates on a single-tenancy model without user accounts or data tracking, ensuring all information remains on the user's device. Key functionalities include organizing bookmarks into collections, applying filters, accessing content in reader mode to minimize distractions, and switching between various layouts. Users can also personalize their experience with dark/light modes and benefit from easy deployment through Docker. Stash is developed using React 19 for its frontend, Express.js for backend processes, PostgreSQL for database management, and OpenAI-powered features for AI capabilities. As an open-source project under the MIT license, it encourages community contributions, particularly in expanding browser extension development and enhancing bookmark import/export functionalities. More details about this extensible platform are available on its GitHub repository. Keywords: #phi4, AI-powered, BullMQ, Docker, Drizzle ORM, Expressjs, LLM summarization, OpenAI, PostgreSQL, React, Reciprocal Rank Fusion, Redis, Stash, Turborepo, URL processing, bookmark manager, collections, dark mode, filters, hybrid search, metadata extraction, multi-tenant, pgvector, reader mode, self-hosted, single-tenant, tRPC, tsvector, vector embedding
    github.com 9 days ago
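Reciprocal Rank Fusion, which the Stash entry above uses to merge semantic (pgvector) and keyword (tsvector) result lists, is simple to state: each document scores 1/(k + rank) in every list that contains it, and the summed scores decide the fused order. A generic sketch with the commonly used constant k = 60:

```python
def rrf(rankings, k=60):
    """Merge ranked result lists with Reciprocal Rank Fusion.

    `rankings` is a list of result lists, best result first. Each
    document earns 1 / (k + rank) per list it appears in; a higher
    total means a better fused rank.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

semantic = ["doc3", "doc1", "doc2"]  # e.g. pgvector cosine order
keyword = ["doc1", "doc4"]           # e.g. tsvector match order

fused = rrf([semantic, keyword])
# doc1 appears in both lists, so it rises to the top of the fusion.
```

The appeal of RRF for hybrid search is that it needs only ranks, so it can combine cosine distances and full-text scores without putting the two incomparable score scales on a common footing.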
2149.  HN Claude Code Skills and 380 agent skills from official dev teams and community
The document presents an extensive collection of AI coding assistant tools and agent skills developed both officially and through community contributions for platforms like Claude Code, Codex, Antigravity, and others. These skills are crafted by prominent teams such as Anthropic, Google Labs, Vercel, Stripe, Cloudflare, and more, aiming to enhance practical applications rather than generate extensive AI content. The repository features a wide range of functionalities that include document editing, PowerPoint and Excel creation, generative art with p5.js, frontend design, web app testing, brand guideline application, status report writing, and more. It highlights specific skills for technologies like React and Next.js from Vercel, Cloudflare's secure AI agent building, PostgreSQL best practices by Supabase, and Stripe integration techniques. Security tools developed by Trail of Bits are also part of the collection, focusing on code auditing, smart contract vulnerability identification, configuration risk management, and more. Additionally, skills for creating AI agents with platforms like Expo, Sentry, and Tinybird, as well as domain-specific skills from Microsoft Azure SDK development, are included. The document emphasizes that these skills are curated yet not audited or endorsed, advising users to review potential security risks before use. It is organized in a table format, offering paths and documentation for each skill set while encouraging community contributions and maintenance. In addition to the initial functionalities, the compilation includes video encoding tools for HLS transformation, web application development integrations like Uppy file uploads with Next.js, CLI tools for web scraping and data mapping, app deployment solutions on Cloudflare, Netlify, and Render, and game development testing using Playwright. 
Document handling skills cover creation, editing, transcribing, and management in specific formats, while code management involves GitHub comments addressing, CI check debugging, and automated deployments. Image and media manipulation are facilitated through OpenAI's Image API for tasks such as generation and alteration. Collaboration and productivity tools encompass issue management in Linear, WhatsApp workflow integration, meeting insights enhancement, and marketing content generation. The document underscores quality standards for community contributions to the skill repository, focusing on progressive disclosure, specificity, tool dependencies, and avoiding hardcoded paths. Contributions are encouraged from established teams with real-world usage history, ensuring a high-quality ecosystem. Despite being curated, these skills are not security-audited by maintainers, who prioritize those adopted by communities. Keywords: #phi4, AI Coding Assistants, AI Models, AI Research, AWS Development, Agent Skills, App Store Management, Audio Transcription, Azure SDK, Browser Automation, CDN Delivery, CLI Tools, Context Engineering, Document Processing, Environment Variable Management, Expo Apps, Figma Design, Firecrawl CLI, Gemini API, Genealogy Research, Git Workflows, GitHub Deployment, GitHub Issues, HLS Streaming, HTML Presentations, Health Assistance, Home Assistant, Humanized Text, Incident Response, Legal Automation, Marketing Skills, Materials Science, Memory Systems, Model Routing, N8n Automation, Netlify Deployments, Notion Documentation, Notion Integration, OpenAI Repository, PDF Management, PPT Generation, Playwright Automation, PostgreSQL, Rails Upgrade, React Best Practices, SEO Optimization, Sanity Studio, Security Blueprints, Security Notice, Security Vulnerabilities, Sentry Issues, Skill Conversion, Solopreneur Tools, Sora API, Speech Generation, Speed Reading, Spreadsheet Analysis, Stripe Integrations, SwiftUI Best Practices, Terraform Best Practices, 
Terraform HCL, Threat Modeling, Threejs Skills, Transloadit Media, UI/UX Design, Uppy Uploads, Vercel Deployment, Video Encoding, Web Design Guidelines, Web Fuzzing, Web Scraping, WhatsApp Automation, WordPress Development, Writing Style, YouTube Editing, docx Editing, iOS Simulator Control
    github.com 9 days ago
2157.  HN PostgreSQL Statistics: Why queries run slow
PostgreSQL's query performance is largely dependent on accurate statistical information used by its query planner, which accesses metadata from `pg_class` and `pg_statistic`. This process utilizes functions like ANALYZE to collect crucial statistics including estimated row counts, null fractions, distinct values, and histograms. These details are pivotal in predicting the costs associated with different execution plans for queries. However, when data volume or distribution changes—often due to bulk loads, schema migrations, rapid growth, or insufficient VACUUM operations—the accuracy of these statistics can degrade, leading the planner to make inefficient decisions that result in poor query performance. The planner's reliance on selectivity metrics derived from outdated statistics may cause it to misjudge row counts or fail to recognize new data patterns, affecting join strategies and index utility assumptions. To ensure optimal performance, maintaining up-to-date statistics through regular ANALYZE operations is essential. Using extended statistics can further enhance accuracy for multi-column queries by capturing dependencies and combined value frequencies among logically correlated columns. Diagnosing query inefficiencies involves comparing actual execution metrics obtained via EXPLAIN ANALYZE against planner estimates, adjusting statistics targets, or employing extended statistics to provide the planner with precise data necessary for effective decision-making. Keywords: #phi4, ANALYZE, EXPLAIN, MCVs, PostgreSQL, bloat, correlation, cost, extended statistics, histograms, indexes, joins, optimization, performance, pg_class, pg_statistic, plan, planner, queries, sampling, selectivity, slow, statistics, vacuum
    boringsql.com 9 days ago
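The selectivity arithmetic described in the entry above can be illustrated in miniature. Given n_distinct and a most-common-values list (as ANALYZE stores in pg_statistic), an equality predicate's row estimate is the MCV frequency when the value is in the list, and otherwise the leftover frequency spread evenly over the remaining distinct values. The numbers below are toy values, not a real pg_statistic row.

```python
def estimate_rows(total_rows, n_distinct, mcvs, value):
    """Estimate rows matching `col = value` from table statistics.

    `mcvs` maps most-common values to their observed frequencies,
    mirroring the MCV list ANALYZE collects. Non-MCV values share
    the remaining probability mass equally.
    """
    if value in mcvs:
        return total_rows * mcvs[value]
    leftover = 1.0 - sum(mcvs.values())
    remaining = n_distinct - len(mcvs)
    return total_rows * (leftover / remaining)

# 100k rows, 5 distinct statuses, two of them dominating the column.
mcvs = {"active": 0.6, "inactive": 0.3}
est_common = estimate_rows(100_000, 5, mcvs, "active")  # frequency-based
est_rare = estimate_rows(100_000, 5, mcvs, "banned")    # leftover / 3
```

If the real distribution has shifted since the last ANALYZE, both branches feed the planner stale numbers, which is exactly how outdated statistics lead to bad join strategies and index choices.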
2201.  HN LiteLLM (YC W23): Founding Reliability Engineer – $200K-$270K and 0.5-1.0% equity
LiteLLM is in search of a Founding Reliability Engineer to join their team with an annual salary ranging between $200K and $270K, plus equity options of 0.5% to 1.0%. This role is critical for maintaining the stability and performance of LiteLLM's open-source AI gateway, which supports major clients such as NASA and Netflix by handling hundreds of millions of LLM API calls daily. The position requires a focus on ensuring operational reliability (60%) and performance engineering (40%), involving tasks like memory leak detection, race condition resolution, and database optimization. Candidates ideal for this role are expected to have substantial experience in scaling Python services, particularly with debugging memory issues and optimizing latency using tools such as memray or py-spy. The candidate must possess a strong understanding of Python async internals, PostgreSQL, Kubernetes, and be comfortable handling on-call responsibilities due to the critical nature of the infrastructure involved. The role offers an opportunity to own production reliability and performance engineering for one of the leading AI infrastructure projects globally. Those who have experience building infrastructure practices from scratch at companies like Meta, Cloudflare, or Stripe are considered ideal candidates for this position. Keywords: #phi4, Adobe, Canary Deployments, Database Scaling, Gateway, HTTP/2, Kubernetes, LLM API, LiteLLM, Memory Leak, NASA, Netflix, Nvidia, Open-source AI, Operational Reliability, Performance Engineering, PostgreSQL, Prometheus Metrics, Python Async, Race Condition, Reliability Engineer, SLO Tracking, Streaming Responses, Stripe
    www.ycombinator.com 10 days ago
2242.  HN Show HN: Yaw: terminal, SSH/database connections, AI Chat and optimized AI CLI
Yaw is an advanced integrated terminal application designed for Windows and macOS that enhances traditional terminal capabilities by integrating terminal emulation, connection management, and artificial intelligence features. It utilizes WebGL-rendered xterm.js to provide functionalities such as tabs, split panes, search, and session restoration. The application simplifies managing various database connections—including SSH, PostgreSQL, MySQL, SQL Server, MongoDB, and Redis—by supporting encrypted credentials and automatic Tailscale machine detection. A distinctive element of Yaw is its AI chat interface, which allows users to interact with models like Claude, ChatGPT, Gemini, Mistral, Grok, or Ollama using terminal output as context. Furthermore, Yaw incorporates an AI CLI tool that enhances usability by splitting the pane to simultaneously run tools such as Claude Code, Codex, Gemini CLI, and Vibe CLI along with a terminal in the same directory. This feature is complemented by an intuitive setup wizard. The release of version 0.9.60 invites user feedback on its innovative AI workflow and future developments, emphasizing Yaw’s commitment to evolving based on user input. Keywords: #phi4, AI Chat, CLI tool integration, GNU Screen, MongoDB, MySQL, PostgreSQL, Redis, Remote Sessions, SSH, Tailscale, Windows Terminal, Yaw, database, panes, search, session restore, setup wizard, split-pane workflow, tabs, terminal, xtermjs
    yaw.sh 10 days ago
2265.  HN Show HN: Initium – A single Rust binary replacing scripts in K8s initContainers
Initium is a Rust-based tool designed to replace shell scripts in Kubernetes initContainers by enhancing security and functionality. It addresses typical challenges like debugging scripts during off-hours, managing missing dependencies such as `curl`, and preventing secret leakage into logs. The tool offers robust security features including running non-root with UID 65534, using a read-only filesystem, dropping all Linux capabilities, redacting secrets in logs, and avoiding OS packages to minimize vulnerabilities. Functionally, Initium provides dependency waiting, database migrations, declarative database seeding, configuration template rendering, secret fetching, and structured logging of executed commands. It also supports network reliability improvements with exponential backoff and jitter for operations. Initium ensures compatibility across multi-architecture containers (amd64 + arm64) and databases like PostgreSQL, MySQL, and SQLite. It can be configured using CLI flags or environment variables, making it versatile in various environments including Kubernetes, Docker Compose setups, and CI pipelines. The tool supports a sidecar mode for Kubernetes integration and offers logging options while maintaining minimal image size (~5 MB). Unlike Bash scripts, Initium eliminates runtime dependencies with zero known vulnerabilities (CVEs) and aligns with the Kubernetes Pod Security Admissions restricted standard. It simplifies deployment processes by avoiding inconsistencies across shell distributions. Initium is particularly advantageous in scenarios like waiting for Postgres or Redis before initiating main applications within Kubernetes environments. Deployment can be seamlessly handled using Helm charts or direct `kubectl` commands, with customizable network operation retry behavior. 
Its design caters to teams seeking secure and robust initialization processes beyond the capabilities of traditional shell scripts, streamlining deployment workflows in Kubernetes settings. Keywords: #phi4, Docker Compose, Helm chart, Initium, Kubernetes, MiniJinja templating, MySQL, PSA restricted, PostgreSQL, Rust, SQLite, TCP/HTTP readiness, YAML/JSON specs, capabilities dropped, config rendering, database seeding, exponential backoff, initContainers, migrations, non-root user, read-only filesystem, secret redaction, secrets fetching, security-hardened, static binary, structured logging, zero CVEs
    github.com 10 days ago
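The exponential backoff with jitter that the Initium entry above applies to network waits follows a standard pattern: the delay ceiling doubles per attempt up to a cap, and the actual wait is drawn randomly below that ceiling so many pods retrying at once don't synchronize. The base, cap, and full-jitter strategy here are illustrative, not Initium's actual defaults.

```python
import random

def backoff_delay(attempt, base=0.5, cap=30.0, rng=random.random):
    """Delay in seconds before retry `attempt` (0-based).

    Full jitter: pick uniformly in [0, min(cap, base * 2**attempt)].
    `rng` is injectable so the envelope can be inspected in tests.
    """
    return min(cap, base * (2 ** attempt)) * rng()

# Pinning rng to 1.0 exposes the upper envelope of the delays:
envelope = [backoff_delay(a, rng=lambda: 1.0) for a in range(8)]
# doubles from 0.5s until the 30s cap takes over
```

The jitter matters as much as the doubling: if every initContainer in a rollout waited exactly the same deterministic delays, they would all hammer the database again at the same instant.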
2268.  HN Show HN: pg_stream – incremental view maintenance for PostgreSQL in Rust
`pg_stream` is an early-stage PostgreSQL 18 extension, written in Rust, for incremental maintenance of materialized views without external dependencies or streaming pipelines. It lets users define stream tables through SQL queries with freshness constraints, automatically generating delta queries that refresh only the modified data on each cycle. Key features include Differential View Maintenance (DVM), which updates just the altered data; full CTE support for recursive and non-recursive expressions; trigger-based Change Data Capture (CDC) using lightweight triggers without complex Write-Ahead Log (WAL) handling; an optional WAL-based CDC mode to reduce overhead when feasible; DAG-aware scheduling that refreshes dependent stream tables in order; and mechanisms like advisory locks and monitoring views for crash safety and observability. While differential mode supports many SQL features, such as joins, aggregations, subqueries, and window functions, it does not support elements like materialized views, LIMIT/OFFSET, or row-level locking. Installation requires PostgreSQL 18.x and Rust 1.82+ with pgrx 0.17.x, via cargo commands and appropriate configuration of postgresql.conf. The project includes extensive testing—approximately 910 unit tests and 384 end-to-end tests—with coverage reports generated using `cargo-llvm-cov` and automatically uploaded to Codecov on GitHub. Not yet production-ready, pg_stream seeks user feedback for enhancements and provides more information through its [GitHub repository](https://github.com/grove/pg-stream). 
Keywords: #phi4, Apache License 20, CDC (Change Data Capture), CREATE EXTENSION, DAG scheduling, DVM engine, Data Versioning System (DVS), JSON functions, PostgreSQL, Rust, SQL support, WAL-based capture, background worker, crash-safe, differential view maintenance, documentation, extension, hybrid architecture, incremental maintenance, interoperability, limitations, logical replication, manual refresh, materialized views, monitoring views, pg_stream, refresh cycle, research plans, restrictions, stream tables, topological order, triggers, window functions
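The core idea of differential view maintenance — apply only the captured changes instead of recomputing the view — can be illustrated with a toy grouped-COUNT view in Python. This is a conceptual stand-in for the SQL delta queries the extension generates, not its actual mechanics:

```python
def apply_delta(view, delta):
    """Incrementally maintain a grouped COUNT view.
    `view` maps group key -> row count; `delta` is a list of
    (key, +1) inserts and (key, -1) deletes, as a CDC trigger might record."""
    for key, change in delta:
        view[key] = view.get(key, 0) + change
        if view[key] == 0:
            del view[key]  # groups with no remaining rows drop out of the view
    return view
```

A refresh cycle then touches only the groups present in the delta, which is why the approach stays cheap even when the base table is large.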
    github.com 10 days ago
   https://github.com/xataio/pgstream   10 days ago
   https://github.com/grove/pg-trickle   8 days ago
2289.  HN Show HN: PgBeam – A globally distributed PostgreSQL proxy
PgBeam is a globally distributed PostgreSQL proxy aimed at minimizing latency for users in distant regions by enhancing connection setup times and query execution efficiency. Unlike PgBouncer, which solely focuses on connection pooling, or Hyperdrive, restricted to Cloudflare Workers, PgBeam provides both connection pooling and query caching across various regions without necessitating specific frameworks or ORMs. Users can seamlessly integrate PgBeam into their systems by simply updating the database host in their connection string. The service operates through three primary mechanisms: routing applications via GeoDNS to the nearest PgBeam instance for optimal connectivity; maintaining warm connections through connection pooling, which reduces the computational costs associated with TLS and authentication for each query; and caching SELECT queries at edge locations to deliver faster responses, while write operations are processed without involving cache. A live benchmark illustrates that PgBeam effectively cuts round-trip times by 3-5x in regions like Tokyo, São Paulo, and Mumbai for cached reads, and also improves performance for uncached queries due to connection pooling. Compared to existing solutions, PgBeam uniquely supports multi-region operations without tying users to specific platforms or ORM frameworks. Currently in technical preview, the focus is on gathering feedback from design partners and early adopters. Future developments include launching a management dashboard, expanding regional availability, and integrating into Vercel as an add-on service. Present limitations involve support exclusively for PostgreSQL, read-only query caching capabilities, and no synchronization of caches across regions. Keywords: #phi4, GeoDNS, PgBeam, PostgreSQL, TLS handshake, benchmark, connection pooling, edge worker, latency reduction, multi-region, proxy, query caching, read replicas, technical preview
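Since integration is described as just changing the database host in the connection string, the swap can be sketched with stdlib URL handling. The function name and proxy hostname below are hypothetical, not part of PgBeam's API:

```python
from urllib.parse import urlsplit, urlunsplit

def route_via_proxy(dsn, proxy_host, proxy_port=5432):
    """Rewrite a postgres:// DSN so the client connects to the nearest
    proxy instead of the origin database; credentials and path are kept."""
    parts = urlsplit(dsn)
    userinfo = parts.username or ""
    if parts.password:
        userinfo += f":{parts.password}"
    host = f"{proxy_host}:{proxy_port}"
    netloc = f"{userinfo}@{host}" if userinfo else host
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))
```

The application keeps speaking ordinary Postgres wire protocol; only the endpoint changes, which is what makes the approach framework- and ORM-agnostic.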
    pgbeam.com 10 days ago
2295.  HN Show HN: I built an open-source analytics platform for Claude Code sessions
Confabulous is an open-source analytics platform aimed at enhancing Claude Code sessions through comprehensive data management and analysis capabilities. It offers a secure solution for capturing, archiving, and examining session data without compromising proprietary information to external entities. Key features include the ability to perform full-text searches within archived sessions, generate detailed analytics with trend charts (such as tool usage and file modifications), provide actionable suggestions for improvements, automatically redact sensitive information, and auto-link sessions to Pull Requests (PRs) and commits. The platform supports various authentication methods, including password, GitHub OAuth, Google OAuth, and OpenID Connect (OIDC), making it adaptable to different user preferences. Setup is straightforward with Docker and Docker Compose, allowing configurations through environment variables defined in a `docker-compose.yml` file. Confabulous facilitates multi-user environments by enabling fine-grained session sharing controls and integrates seamlessly into developers' workflows by linking GitHub activities and managing API keys. It also incorporates per-user rate limiting to improve user experience. The platform's infrastructure is designed for simplicity, operating as a single Docker image compatible with PostgreSQL and MinIO databases, and supports deployment on custom domains. Released under an MIT license, Confabulous encourages community engagement through its open-source codebase, inviting contributions and fostering collaboration. Keywords: #phi4, Anthropic API key, Claude Code, Confabulous, Docker, Flyio, GitHub OAuth, Google OAuth, MIT license, MIT license Keywords: Confabulous, MinIO, Neontech, OIDC, PostgreSQL, React dashboard, React web dashboard, analytics, analytics platform, cloud deployment, cost tracking, developer experience, multi-user auth, platform, self-hosted, session data, transcript viewer
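A Docker Compose setup of the shape described (single app image plus PostgreSQL and MinIO) might look like the following; every service and variable name here is an illustrative assumption, not Confabulous's documented configuration:

```yaml
# Illustrative shape only; names are assumptions, not documented settings.
services:
  confabulous:
    image: confabulous/confabulous:latest
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://confab:secret@db:5432/confabulous
      S3_ENDPOINT: http://minio:9000
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: secret
  minio:
    image: minio/minio
    command: server /data
```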
    github.com 10 days ago
2297.  HN Show HN: Relay – SMS API for developers (send your first text in 2 min)
Relay is an SMS API designed to streamline SMS integration, addressing friction commonly encountered with Twilio by offering a simpler developer experience. The platform lets users sign up and send real SMS messages via a single POST endpoint in under two minutes. Its technology stack comprises Express.js for server-side logic, AWS End User Messaging for sending messages, PostgreSQL through Supabase for the database, and Redis for rate limiting. Relay provides SDKs for JavaScript/TypeScript, Python, and Go to support various development environments. Currently, Relay's services are available in the US and Canada, with pricing starting at $19 per month, which includes 1,500 messages. It handles 10DLC (10-digit long code) compliance and carrier registration requirements automatically. A distinctive feature of Relay is that AI agents can create accounts and send SMS messages autonomously without human verification; trust levels for these accounts adjust based on message delivery quality. Additionally, developers have access to a free local development tool called `sms-dev`, enabling testing of SMS flows without dispatching actual messages. For further information and documentation, users can visit docs.relay.works and relay.works. Keywords: #phi4, 10DLC compliance, AI agents, AWS End User Messaging, Expressjs, Go, JS/TS, POST endpoint, PostgreSQL, Python, Redis rate limiting, Relay, SDKs, SMS API, Supabase, Twilio, autonomous accounts, carrier registration, developers, docsrelayworks, local dev tool, npm install, relayworks, sms-dev, testing SMS flows, trust levels, verification flow
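A single-POST-endpoint SMS API typically means a request of roughly this shape. The endpoint path and field names below are assumptions for illustration, not Relay's documented API; the sketch builds the request without sending it:

```python
import json
from urllib.request import Request

def build_send_request(api_key, to, body,
                       endpoint="https://api.relay.works/v1/messages"):
    """Build (but do not send) the POST request for one SMS.
    Endpoint path and JSON field names are illustrative assumptions."""
    payload = json.dumps({"to": to, "body": body}).encode()
    return Request(endpoint, data=payload, method="POST", headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    })
```

Passing the resulting object to `urllib.request.urlopen` (or making the equivalent call from the JS/TS, Python, or Go SDK) would dispatch the message.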
    news.ycombinator.com 10 days ago
2318.  HN CVE-2026-2006 – PostgreSQL Out-of-cycle release
The PostgreSQL Global Development Group is preparing to release an out-of-cycle update on February 26, 2026, addressing critical regressions introduced by a previous update on February 12, 2026. These issues arose from changes made to address CVE-2026-2006, which inadvertently led to the `substring()` function generating "invalid byte sequence for encoding" errors when processing non-ASCII text in database columns. Additionally, another regression affects standby servers, causing them to stop with an error related to accessing transaction status information. Fixes have been implemented across PostgreSQL versions 14 through 18 to resolve these problems. Detailed information about the fixes is available in linked resources for those seeking further guidance on addressing these issues. Keywords: #phi4, CVE-2026-2006, Global Development Group, PostgreSQL, database column, encoding error, fix, non-ASCII text, out-of-cycle release, regression, standby halt, substring(), thread, transaction status, update release, vulnerability
    wiki.postgresql.org 10 days ago
2322.  HN I started a software research company
The author founded The Consensus, a company dedicated to providing unbiased, in-depth analysis of software infrastructure without vendor or technology bias. After departing from EnterpriseDB, the initiative aims to deliver code-focused insights surpassing typical tech outlets, while offering less biased content than that produced by venture capitalists. The platform covers various domains such as databases, programming languages, web servers, and related topics, drawing on expertise from seasoned developers. A key objective of The Consensus is to bolster the software community by showcasing open-source initiatives and opportunities within the industry. Financially supported through subscribers and potential sponsors, it has already attracted some initial subscribers. The author seeks feedback from readers and expresses enthusiasm for collaborative learning and discovery in the realm of software development. Keywords: #phi4, DataFusion, MySQL, PostgreSQL, analysis, community, company, databases, developers, feedback, independent, infrastructure, open-source, programming languages, software research, sponsors, subscribers, web servers
    notes.eatonphil.com 10 days ago
2355.  HN Bitly handles 93 writes/s – URL shortener interviews ask for 1160
The article provides an insightful analysis of designing a URL shortener system, highlighting the disparity between interview expectations and real-world requirements. It points out that common interview scenarios often demand handling 1160 writes per second—a figure significantly higher than what is experienced by market leaders like Bitly, which processes about 93 writes per second. This overestimation in interviews underscores a gap between theoretical exercises and practical applications. The article delves into two classical design approaches for URL shortening: hashing with relational databases and using distributed unique ID generators. While both aim to produce short, fixed-length IDs, they present trade-offs related to performance and complexity. The author argues that even non-distributed setups employing tools like PostgreSQL or SQLite can meet both interview and real-world demands effectively. Through testing on modest hardware, the article demonstrates that these simpler systems achieve satisfactory read/write rates exceeding typical requirements. It also tackles availability concerns inherent in single-machine configurations by proposing redundancy strategies such as Multi-AZ configurations in cloud databases to bolster reliability without adding undue complexity. The piece advocates for a pragmatic approach to system design, suggesting straightforward solutions can surpass expectations and outperform larger entities like Bitly when starting with less complex systems. The author emphasizes assessing actual needs before resorting to elaborate distributed systems, offering benchmarks and resources on PostgreSQL performance in AWS environments as further reading material. Keywords: #phi4, AWS, Bitly, Bloom Filter, Database Index, Distributed ID Generator, Hashing, High Availability, Load Testing, Performance Benchmark, PostgreSQL, RPS, SQLite, Scalability, Sharding, Snowflake ID, System Design, TinyURL, URL Shortener
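The distributed-unique-ID approach discussed above reduces to encoding a unique integer (a database sequence value or Snowflake-style ID) in a compact alphabet. A minimal base62 sketch:

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encode_base62(n):
    """Turn a unique integer ID into a short code. Seven characters cover
    62^7 ≈ 3.5 trillion IDs — ample headroom at ~93 writes per second."""
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, r = divmod(n, 62)
        out.append(ALPHABET[r])
    return "".join(reversed(out))
```

Because the ID is already unique, there are no hash collisions to handle, which is part of the article's case that a single PostgreSQL or SQLite instance with a plain sequence can meet the stated load.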
    saneengineer.com 10 days ago
2409.  HN Show HN: SignForge – Free e-signature tool, no account needed to sign
SignForge presents itself as a cost-free alternative to e-signature services such as DocuSign or PandaDoc, enabling users to upload PDFs for easy addition of signatures, dates, and text via a drag-and-drop interface. This tool eliminates the need for account creation by allowing documents to be emailed directly for signing. Once signed, documents are returned with an audit trail and verification certificate, ensuring authenticity and tamper-proofing through QR code validation and cryptographic proof. Built on technologies including Next.js, FastAPI, PostgreSQL, and PyMuPDF, SignForge performs reliable stamping by overlaying images server-side. In addition to its primary function, it offers twelve other complimentary PDF tools that cater to various needs like merging, splitting, compressing, and watermarking without storing files, ensuring immediate processing. The service actively seeks user feedback to enhance the signing experience and introduce desired features. Keywords: #phi4, FastAPI, Nextjs, PDF, PostgreSQL, PyMuPDF, QR code, SignForge, audit trail, compress, cryptographic verification, date fields, drag signature, e-signature, email, free tool, freelancers, merge, no account, server-side, small businesses, split, verification certificate, watermark
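Tamper-evidence via cryptographic proof generally means hashing the signed document and later checking the stored digest (for example, one carried in a QR code) against the file. A simplified illustration of the general idea, not SignForge's actual verification scheme:

```python
import hashlib

def fingerprint(pdf_bytes):
    """Digest of the signed document; this is the kind of value a
    verification QR code could reference."""
    return hashlib.sha256(pdf_bytes).hexdigest()

def verify(pdf_bytes, expected_digest):
    """Any post-signing modification changes the digest, so verification fails."""
    return fingerprint(pdf_bytes) == expected_digest
```

The audit trail then only needs to record the digest at signing time; re-hashing the presented file answers "is this the document that was signed?" without trusting the presenter.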
    signforge.io 10 days ago
2418.  HN "AI raises the quality of tuning beyond what most of us can achieve manually"
Frans Verduyn, a retired database professional with expertise in PostgreSQL, explores automating performance tuning using DBtune, an AI-powered tool designed for optimizing SQL queries and server configurations. After transitioning from Oracle to PostgreSQL, Verduyn faced the challenges of maintaining performance amidst rapid platform changes. Seeking solutions during his retirement, he experimented with various setups and chose DBtune for its potential to streamline traditionally complex tasks. Verduyn tested DBtune on a Raspberry Pi 500+ and a MacBook Pro, utilizing its free version which supported three database instances. He was impressed by the tool's intuitive GUI and easy setup process. The trials yielded positive outcomes, notably improving average query runtimes without performance degradation under heavy loads. Key features such as adjustable performance guardrails, configuration summaries, and comparisons underscored DBtune’s utility by providing insights into how changes affect performance. The ease of use and effectiveness of DBtune compared to manual tuning highlighted its value in optimizing database configurations for specific workloads. Verduyn pointed out the tool's potential for regular tuning in production environments, adapting to actual loads and changes to ensure optimal performance. He concluded that DBtune represents a significant advancement in database management, offering PostgreSQL administrators an automated solution to maintain high-performance standards. This automation could transform performance tuning from occasional adjustments into routine maintenance, improving system efficiency over time and freeing up database administrators for other essential tasks. Keywords: #phi4, AI, DBtune, Fingerprint, PostgreSQL, automation, configuration, database, environment, guardrails, optimization, performance tuning, production, workload
    medium.com 11 days ago
2490.  HN From Postgres Migrations to AI Pipelines
The article explores the similarities between traditional database migrations in PostgreSQL and modern AI pipelines, emphasizing that both involve uncertainty and risk estimation yet often proceed with overly optimistic assumptions about failure management. In PostgreSQL systems, built-in mechanisms such as constraints, transactions, rollbacks, and monitoring anticipate and manage failures effectively, ensuring they are detectable and recoverable. In contrast, AI pipelines frequently assume model correctness until issues surface, typically identified belatedly by human oversight due to inadequate controls, resulting in costly errors. The author identifies failure modes common to both systems, such as table locks or latency under load in databases, and argues that AI pipelines need similar robustness features. Model correctness alone is insufficient without supporting system architecture; comprehensive reliability measures like input validation, confidence gating, schema checks, and rollback paths are needed to bolster performance. Ultimately, the piece advocates moving beyond refining models toward system architectures that proactively manage risks and failures, drawing on established database practices to achieve robustness in AI systems. Keywords: #phi4, AI Pipelines, Confidence Gate, Constraints, Correctness, Estimation, Failure Modes, Human Escalation, Input Validation, Isolation Levels, Migrations, Model Confidence, Monitoring, Output Schema Check, PostgreSQL, Reliability, Rollbacks, System Architecture, Transactions, Uncertainty
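The confidence-gating and human-escalation pattern the article calls for can be sketched generically: a model output is only written through above a threshold, and everything else is routed to a person, much as a database constraint rejects an invalid row instead of storing it. Names and threshold are illustrative:

```python
def gate(prediction, confidence, threshold=0.9, escalate=None):
    """Accept a model output only above a confidence threshold;
    otherwise hand it to a human instead of silently writing bad data."""
    if confidence >= threshold:
        return ("accepted", prediction)
    if escalate:
        escalate(prediction, confidence)  # e.g. enqueue for human review
    return ("escalated", None)
```

Combined with input validation before the model and a schema check after it, this turns a silent failure mode into a detectable, recoverable one — the property the article attributes to database migrations.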
    medium.com 11 days ago
2498.  HN Show HN: Sitter Rank – Pet sitter booking without 20-40% platform fees
Sitter Rank is an innovative pet-sitting platform that facilitates direct bookings between pet owners and sitters, eliminating the 20-40% commission fees typically charged by competitors like Rover and Wag. Operating as a Software-as-a-Service (SaaS) business, it relies on sitter subscriptions rather than per-booking commissions to generate revenue. The technological backbone of Sitter Rank includes Next.js for its front-end development, PostgreSQL for database management, and Stripe Connect for handling payments. Pet owners benefit from the platform's user-friendly search features, which allow them to find sitters based on zip code and filter services according to their needs. They can read reviews from verified bookings to make informed decisions and pay sitters directly at fair rates, avoiding hidden fees. For pet sitters, Sitter Rank offers a commission-free listing opportunity where they can showcase their profiles without incurring monthly fees or payment charges per booking. To enhance trustworthiness, pet sitters on the platform must upload background check documentation to earn verification badges. Reviews are an integral part of the user experience; they are restricted to completed bookings and remain visible unless updated by new feedback. While sitters cannot remove these reviews, they have the option to respond to them. Pet owners enjoy free access to search for services and read reviews on Sitter Rank, with payments only required when booking a sitter's service. The platform offers a range of pet-sitting services, including dog walking, overnight stays, drop-in visits, among others. Each sitter’s profile provides specific details about the services they offer, enabling pet owners to find suitable matches for their pets’ needs. By focusing on transparency and trust, Sitter Rank creates a fair marketplace for both pet owners and sitters. 
Keywords: #phi4, Nextjs, Pet sitter, PostgreSQL, SaaS, Sitter Rank, Stripe Connect, background checks, booking, direct-booking, dog walking, doggy daycare, drop-in visits, free search, independent sitters, marketplace, overnight stays, pet owners, platform fees, reviews, services, subscriptions, verification badges
    www.sitterrank.com 11 days ago
2535.  HN Show HN: Rediflow – SSR project management, one source of truth, no spreadsheet
Rediflow is a server-rendered project management application designed to act as a centralized platform for managing projects and resources, effectively replacing the need for spreadsheets. Developed with AI-assisted techniques, it seamlessly integrates project flow, portfolio flow, and capacity planning into one comprehensive system. The application features automatic updates across various charts and reports, detailed demand visualization, task tracking in an integrated view, and personalized staff allocation interfaces. Additional functionalities include audit logging, rigorous data quality checks, schema migrations, and robust backup procedures. Rediflow supports self-hosting through Podman or Docker, leveraging technologies like Flask and PostgreSQL, and offers optional OIDC authentication for secure access. The documentation and source code are publicly accessible online. As a self-financed initiative, Rediflow aims to fill capacity management gaps identified since 2000 by unifying data that was previously dispersed across numerous spreadsheets into one cohesive platform. Keywords: #phi4, Authentik, Docker, Flask, GitLab, OIDC, Podman, PostgreSQL, Rediflow, audit logging, backup procedures, capacity dashboard, capacity management, data quality checks, deliverables, demand chart, milestones, project management, schema migrations, self-hosted, server-rendered, single source of truth, tasks, work packages
    gitlab.com 11 days ago
2559.  HN Show HN: Replacebase – library to migrate away from Supabase
Replacebase is a TypeScript library that facilitates migrating off Supabase by offering a compatible API, allowing developers to move their backend infrastructure without major changes to frontend code. It serves as an intermediary layer, letting teams host the backend on services of their choice, such as AWS. The library wraps PostgreSQL databases and S3 storage, providing functionality similar to Supabase's, including REST APIs, authentication through Better Auth, S3-compatible storage, and Realtime features like broadcasting and presence. To get started with Replacebase, developers must first gather necessary details from their Supabase setup, such as the Postgres connection string, JWT signing key URL, legacy secret, and S3 storage credentials. Replacebase can then be installed via npm, initialized with these details, and integrated into backend frameworks like Next.js, Express, or Hono to serve APIs, with WebSocket support for Realtime functionality. On the frontend, developers adjust their code to connect to the custom backend instead of Supabase. Subsequent migration steps involve moving to different Postgres providers and storage services, since Replacebase is compatible with any standard Postgres database and S3-compatible platform. Developers are encouraged to gradually replace Supabase SDK calls with bespoke API endpoints, creating a more customized backend architecture over time. As an early-stage project, Replacebase should be used cautiously, with thorough testing before deployment in production environments. The library is released under the MIT license, allowing flexibility for further development and customization. 
Keywords: #phi4, API, AWS, Cloudflare Workers, Deno, Express, Hono, JWT, MIT license, Nextjs, PostgreSQL, Postgres, Realtime, Replacebase, S3, SDKs, Supabase, Typescript, backend, database, frontend, hosted, infrastructure, lock-in, migration, pg_dump, rclone, storage
    github.com 11 days ago
   https://github.com/PostgREST/postgrest   11 days ago
2577.  HN I started a software research company
The author founded a new software research company called The Consensus after departing from EnterpriseDB to offer independent analysis of software infrastructure. This initiative focuses on delivering unbiased evaluations of code and corporate software development, extending beyond the coverage provided by established outlets like LWN.net and venture capitalists (VCs). Unlike other sources, The Consensus maintains independence from specific technologies or vendors, allowing it to cover a broad array of topics important to developers, such as databases, programming languages, and web servers. The company aims to engage experienced developers in content creation, compensating them financially for their contributions. In addition to sharing expert insights, The Consensus plans to support open-source communities by promoting their work and relevant events and job opportunities. Financial backing is derived from subscribers and potential sponsors, with the audience already growing through initial sign-ups. The author expresses enthusiasm about this new venture and encourages feedback, emphasizing a collaborative and educational approach for both contributors and readers moving forward. Keywords: #phi4, DataFusion, MySQL, PostgreSQL, analysis, community, company, contributors, databases, feedback, feedback Keywords: software, independent, infrastructure, open-source, programming, programming languages, research, software research, sponsors, subscribers, web, web servers
    notes.eatonphil.com 11 days ago
2585.  HN AncestorTree – Open-source genealogy for Vietnamese families
AncestorTree is an open-source genealogy platform tailored specifically for Vietnamese families, incorporating unique elements such as lunar calendars, hierarchical clan structures, auto-generation numbering, and a 60-year zodiac cycle. Developed efficiently in just seven and a half sprints over a span of 24 hours using the TinySDLC framework and Claude Code with assistance from eight AI agents, AncestorTree employs a robust tech stack that includes Next.js, React, TypeScript, Supabase, and Vercel, all deployed at no cost. The platform's architecture features 13 PostgreSQL tables secured by four permission roles using Row Level Security (RLS), ensuring data protection. Key functionalities encompass a family relations panel, hierarchical tree layouts with branch filtering capabilities, and a tree-scoped editor that limits user edits to their specific subtree through recursive Common Table Expressions (CTEs). The development adheres to MTS-SDLC-Lite principles emphasizing stage gates and design reviews, which guarantee rapid production readiness when governance practices are implemented. AncestorTree is distributed under the MIT license, facilitating easy forking and deployment within approximately 30 minutes. User feedback is actively solicited to enhance platform capabilities continually. Keywords: #phi4, AI agents, AncestorTree, DFS-based rotation, MIT license, MTS-SDLC-Lite, Nextjs, PostgreSQL, RLS, React, Supabase, TinySDLC, TypeScript, Vercel, Vietnamese genealogy, auto generation numbering, branch filter, design review, family relations panel, feedback Keywords: Vietnamese genealogy, hierarchical clan branches, lunar calendars, open-source, recursive CTE, shareable URLs, stage gates, tree layout, zodiac cycle
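The tree-scoped editor described above amounts to computing a person's subtree and allowing edits only inside it; the recursive CTE over a (parent, child) table computes exactly this transitive closure. A Python stand-in for what that query returns:

```python
def subtree(children_of, root):
    """Return the set of ids in `root`'s subtree — the same closure a
    recursive CTE over a (parent_id, child_id) table would produce."""
    seen, stack = {root}, [root]
    while stack:
        node = stack.pop()
        for child in children_of.get(node, ()):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

def can_edit(children_of, editor_root, target):
    """Permit an edit only if the target lies within the editor's subtree."""
    return target in subtree(children_of, editor_root)
```

In the actual app this check runs in the database (via the recursive CTE plus Row Level Security), so a branch editor cannot touch relatives outside their assigned subtree even through crafted API calls.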
    news.ycombinator.com 11 days ago
2595.  HN Pg_plan_alternatives – eBPF tracing of all plans the optimizer considers
`pg_plan_alternatives` is an eBPF-based tracing tool designed to provide detailed insights into the query planning process of PostgreSQL, offering visibility beyond what the standard EXPLAIN command provides by revealing all alternative plans considered during optimization, along with their cost estimates. This enables users to better understand why certain execution paths are chosen over others. Key features include eBPF-based tracing that captures every evaluated plan and interactive visualizations through graphs generated using `visualize_plan_graph`, enhancing comprehension of planning decisions. The tool supports PostgreSQL versions 17 and 18, though it requires root privileges due to its reliance on eBPF technology. To install `pg_plan_alternatives`, users must use pip for installation, ensure the PostgreSQL instance is compiled with debug symbols, and have dependencies such as BCC and graphviz installed. Usage involves capturing plans by executing commands that specify the path to the PostgreSQL binary and necessary node tags, with options for JSON output or detailed tracing information via a verbose mode. The tool supports visualization of planning alternatives in formats like PNG, HTML, or SVG from JSON outputs, with table OID resolution achievable through database connection. Despite its utility, `pg_plan_alternatives` is an early prototype with limitations such as not supporting materialize nodes and parallel plans, requiring PostgreSQL to be compiled with specific flags that prevent function optimization. It necessitates a Linux kernel version 4.9 or higher, Python 3.10+, root privileges, and certain tools like BCC and graphviz for full functionality. This tool is particularly valuable for developers aiming to gain comprehensive insights into query planning processes for performance tuning and optimization in PostgreSQL databases. 
Keywords: #phi4, BCC, EXPLAIN, Linux, OID resolution, PostgreSQL, Python, alternatives, cost estimates, debug symbols, eBPF, graphviz, hash join, index scan, nested loop, optimizer, pg_plan_alternatives, psycopg2, query plans, root privileges, sequential scan, tracing, uprobes, visualization
    github.com 11 days ago
2596.  HN Show HN: I built an ML stock picker that runs daily on a single server
A solo founder has developed an innovative ML-based stock picker that functions as a daily alternative to traditional robo-advisors, which typically charge 1% asset under management (AUM) and invest mainly in ETFs. The system utilizes LightGBM for ranking stocks and JAX PPO for determining position sizes, leveraging over 50 features including value metrics, momentum, quality factors, and sentiment data. It employs a walk-forward validation technique to mitigate lookahead bias and is built on a stack comprising PostgreSQL, FastAPI, and React, all hosted on a Hetzner server. The operational pipeline updates every evening post-market close by acquiring end-of-day data, recalculating financial ratios, refreshing materialized views, retraining models, generating predictions, and making minimal daily adjustments to portfolios. An encountered issue highlighted the importance of monitoring, as failed runs were traced back to blocked user permissions that affected data freshness. Over a period of 61 trading days, this system achieved a +9% alpha compared to SPY. Performance metrics are accessible online for review. The service provides stock recommendations through an API or dashboard at $99 per month, enabling users to retain their investments with their preferred brokerage while replicating suggested trades. Additionally, the platform offers daily updates and alerts regarding any recommended changes in positions. Keywords: #phi4, API, EMA, ETFs, FastAPI, Fidelity, JAX PPO, LightGBM, MACD, ML, PostgreSQL, RSI, React, SMA, Schwab, alpha vs SPY, brokerage, email notifications, materialized views, pipeline failures, reinforcement learning, robo-advisors, solo founder, stock picker, web dashboard
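Walk-forward validation, the technique cited for avoiding lookahead bias, fits only on data that strictly precedes each test window. A schematic split generator (parameter names are illustrative):

```python
def walk_forward_splits(n_days, train_min, test_size):
    """Yield (train_indices, test_indices) pairs where every training day
    strictly precedes its test window, so no future data leaks into a fit."""
    start = train_min
    while start + test_size <= n_days:
        yield list(range(0, start)), list(range(start, start + test_size))
        start += test_size
```

Each model retrain in the nightly pipeline would use one such expanding training window, then be scored only on the days after it — the property that distinguishes this from ordinary shuffled cross-validation on time series.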
    acis-trading.com 11 days ago
2638.  HN Show HN: Mengram – AI agent memory with facts, events, and evolving workflows
Mengram is an innovative AI memory tool designed to enhance the capabilities of AI agents by incorporating and evolving three types of memory: semantic (facts), episodic (events/decisions), and procedural (workflows). Unlike traditional memory tools that focus solely on facts, Mengram uniquely allows workflows to automatically adapt upon encountering failures, thus reducing repeated errors. This feature enables the system to refine processes over time without requiring manual intervention. The tool's architecture includes a technical stack comprising Python, PostgreSQL with pgvector, and FastAPI, offering diverse access options such as free cloud API, self-hosting, and SDKs for Python/JavaScript. It supports integrations with technologies like LangChain, CrewAI, and MCP. Despite its advanced capabilities, Mengram has limitations: the quality of extracted information is contingent upon the language model employed, and procedural evolution necessitates explicit failure descriptions. Additionally, real-time streaming is not currently supported. Users can interact with Mengram via API keys, installation through pip or npm, and a REST API for managing memories across all types. The tool facilitates cognitive profiling by generating system prompts from collected data, integrating this functionality into other language models. It provides features to import existing data (e.g., ChatGPT history), manage environments with multiple users, and includes agent templates tailored for specific applications such as DevOps and customer support. Overall, Mengram aims to deliver a comprehensive memory management solution that evolves based on user interactions and failures, supporting diverse integrations and use cases. 
Keywords: #phi4, AI memory, API key, CrewAI, FastAPI, LangChain, MCP, Mengram, PostgreSQL, Python, REST API, SDKs, agents, cognitive profile, deployment failures, episodic, evolution, integrations, multi-user isolation, personalization, procedural, semantic, triggers, workflows
    github.com 11 days ago
2662.  HN Show HN: Pongo – a self hosted uptime monitor using configuration as code
Pongo is a self-hosted uptime monitoring solution designed specifically for developers who favor configuration as code. The platform allows users to define monitors, alerts, and status pages using TypeScript files that can be version-controlled alongside their application codebase. Pongo supports deployment across various platforms including Vercel, Railway, Docker, or bare metal setups, with compatibility for both SQLite and PostgreSQL databases. Key features of Pongo include the elimination of UI forms in favor of configuration through code, enabling enhanced developer control and integration into existing workflows. The tool offers multi-region deployments to ensure redundancy and robustness. Customizable status pages provide uptime history and RSS feeds, while smart alerting options are available via Slack, Discord, Email, or Webhooks, with thresholds that can be tailored according to user needs. Pongo is built using Next.js, Drizzle ORM, and Bun, focusing on avoiding vendor lock-in and allowing developers full control over their monitoring processes. As an open-source project, Pongo encourages community feedback to guide the development of future features, emphasizing a collaborative approach to tool enhancement while maintaining its core principle of developer empowerment. Keywords: #phi4, Bun, Discord, Docker, Drizzle ORM, Email, NextJS, Pongo, PostgreSQL, Railway, SQLite, Slack, TypeScript, Vercel, Webhooks, alerts, bare metal, configuration as code, dashboards, incidents, monitors, multi-region, open source, self-hosted, smart alerting, status pages, uptime monitor
    www.pongo.sh 11 days ago
   https://pongo.sh/   11 days ago
   https://github.com/TimMikeladze/pongo   11 days ago
2670.  HN Show HN: Ideon – open-source spatial canvas for project context
Ideon is an innovative, open-source visual workspace designed to enhance the organization and management of project resources by mapping them onto a spatial canvas. Aimed at preserving a project's "mental model," it addresses the challenge of context-switching that developers face when taking breaks. Developed with Next.js (App Router), TypeScript, PostgreSQL via Prisma, and Docker Compose, Ideon allows users to arrange repositories, notes, links, and checklists in a spatial manner using drag-and-drop functionality. This system supports direct GitHub integration for real-time issue tracking, Markdown notes that synchronize live, and is self-hostable with an AGPLv3 license. Ideon's key features include the capability of deployment on cost-effective VPSs or home servers, offering a significant advantage by allowing users to maintain visibility of all project-related elements at once. This visual organization contrasts traditional linear management tools by providing spatial arrangement, enabling real-time collaboration across multiple users, and preserving snapshots that evolve with the project's understanding. The platform is designed not to replace existing tools like GitHub or Figma but rather to integrate their functionalities seamlessly on a unified canvas. It caters to developers, designers, founders, freelancers, and anyone in need of an organized project context. Ideon simplifies setup through Docker, featuring a quick deployment option via an installer script, while promoting open-source contributions to encourage growth and improvement based on community feedback. Keywords: #phi4, AGPLv3 License, Docker Compose, GitHub integration, Ideon, Magic Paste, Markdown notes, Nextjs, PostgreSQL, Prisma, TypeScript, VPS, blocks, cognitive cost, collaboration, contributors, home server, mental model, open-source, project context, resources mapping, self-hosted workspace, snapshots, spatial canvas, visual workspace
    github.com 11 days ago
2679.  HN Self-hosted file decks with share links and visitor analytics
Deck Share is a self-hosted web application designed for organizing and distributing files via public links, featuring comprehensive visitor analytics. Administrators can upload files into structured decks or folders, customize share settings with titles, descriptions, expiration dates, passwords, and single-use access options before publishing them through unique URLs. Visitors using these links have the capability to browse content within the app interface, preview supported file types—including PDFs, images, videos, and Microsoft Office documents—and download files or engage with call-to-action prompts. The application provides analytics that track interactions such as views and downloads, with optional integration for enhanced insights via PostHog. However, it's important to note that the platform does not guarantee file security, urging users to proceed at their own risk. The technical foundation of Deck Share includes Node.js 20+, Next.js 16, React 19, a PostgreSQL database managed by Prisma, authentication through NextAuth 5, and an interface built using Tailwind CSS 4, Radix UI, and shadcn-style components. To set up the application, users need to establish a PostgreSQL database (version 16 or higher), configure their Node.js environment, and set necessary environment variables like `DATABASE_URL` and `NEXTAUTH_SECRET`. The app can be run locally for development purposes or deployed using Docker with available pre-built images from Docker Hub. Deployment involves executing migrations and seeding an admin user. Deck Share supports various file types, including PDFs, JPEG and PNG images, MP4 videos, as well as Microsoft Office files that are rendered in-app; unsupported files can still be uploaded but will not have preview capabilities. 
The platform offers robust features for managing and sharing files, granting admins control over content distribution while delivering an interactive viewing experience to visitors, complemented by detailed analytics to monitor user engagement effectively. Keywords: #phi4, Deck Share, Docker, NextAuth, Nextjs, Nodejs, PostHog, PostgreSQL, Prisma, Radix UI, React, Tailwind CSS, admins, analytics dashboard, authentication, call-to-action, development, document rendering, download, environment variables, file decks, file types, fingerprint tracking, migrations, organize files, preview, production, rich text, self-hosted, share links, unique URL, uploads, visitor analytics, visitors, web application
    github.com 11 days ago
   https://github.com/e-hosseini/deck-share   11 days ago
2685.  HN Lean-PQ – Type-safe PostgreSQL bindings for Lean 4 via libpq FFI
Lean-PQ offers type-safe PostgreSQL bindings for Lean 4 through the Foreign Function Interface (FFI), utilizing libpq to ensure safe database interactions via compile-time checks and structural injection prevention mechanisms. Key features include compile-time column verification that provides proofs of existing columns within schema queries, thus preventing references to non-existent ones. Additionally, the Permission-Tracking Monad (PqM) tracks permission levels for various query types such as SELECT, INSERT/UPDATE/DELETE, and DDL at compile time; operations outside permitted contexts lead to type errors. To mitigate SQL injection risks, user values are strictly treated as parameters. Functional components of Lean-PQ include a domain-specific language (DSL) via the pq! macro that expands into a typed Query Abstract Syntax Tree (AST), ensuring column name verification and permission inference at compile time. It supports comprehensive Create, Read, Update, Delete operations with schema validation and enables concurrent asynchronous queries on separate connections using Lean tasks without shared mutable state. Setup requirements for Lean-PQ entail having Lean 4 version 4.24.0 or later installed alongside libpq, which can be installed via Homebrew on macOS or APT on Ubuntu, along with pkg-config. Integration into projects involves adding Lean-PQ as a dependency in the `lakefile.lean` using its specified GitHub repository. The architecture of Lean-PQ is composed of multiple components: FFI declarations and C implementations for libpq are managed within the Extern and extern.c files. Various Lean modules handle specific functionalities, including Error, DataType, Schema, Monad, Query, Syntax, and Async modules, which collectively manage errors, data types, schema modeling, permission tracking, query construction, DSL syntax, and asynchronous operations. 
Keywords: #phi4, DSL, Lean 4, Lean-PQ, PostgreSQL, PqM monad, SQL injection prevention, async queries, compile-time verification, database operations, libpq FFI, permission-tracking, schema-indexed expressions, type-safe bindings
    github.com 11 days ago
2692.  HN Managed Iceberg for Streaming with PostgreSQL Simplicity – RisingWave Open Lake
RisingWave Open Lake provides a managed streaming solution based on PostgreSQL that ensures data remains within your cloud VPC, eliminating hidden fees and security issues, whether you opt for the open-source or fully-managed version. Leveraging open standards, it facilitates seamless integration with engines such as Trino, Spark, and DuckDB, enhancing the portability and query capabilities of Iceberg tables. This approach offers users significant flexibility in managing their data storage and processing choices, granting them comprehensive control over their data infrastructure. Keywords: #phi4, Cloud VPC, Data Ownership, DuckDB, Ingest, Managed Iceberg, No Fees, Open Lake, Open Standards, Portable Tables, PostgreSQL, Query, RisingWave, Security Compliance, Spark, Streaming, Trino
    risingwave.com 11 days ago
2695.  HN Pg_doom
The project describes an innovative endeavor to execute the classic game Doom within a PostgreSQL database by developing a custom extension named "pg_doom." This involves creating two SQL functions: `doom.input` for capturing keyboard inputs (A, S, D, W, F, E) and `doom.screen` for retrieving graphical data necessary for display. The implementation is encapsulated as a Docker image to streamline setup and use. Key components of this project include the extension development carried out in C, integrating with PostgreSQL via these two functions, and an accompanying Bash script that interacts with the database by invoking these functions. Users must prepare their environment using Debian OS, install PostgreSQL with development tools, and employ GNU Make utilities before proceeding. The necessary `doom.wad` file must be placed within the project directory after cloning from GitHub. The Docker implementation simplifies setup by handling compilation and configuration tasks. Users build a Docker image following provided instructions and run it interactively to engage with the game. During gameplay, a Bash script captures keyboard inputs in real-time, converts them into SQL commands for database processing, and formats terminal output to mimic Doom's graphics using data from `doom.screen`. Installation requires compiling the extension with Makefiles and setting up PostgreSQL with appropriate configurations, including initiating temporary databases if necessary, creating roles, and granting function execution permissions within the `doom` schema. Users can clone the repository, build/run it within Docker for testing, and engage in gameplay through a terminal session that processes inputs, executes SQL commands, retrieves graphical data, and displays it. This approach illustrates PostgreSQL's versatility by running Doom as database interactions, demonstrating how databases can handle unconventional tasks beyond traditional data management roles. 
Keywords: #phi4, C, Debian, Docker, Doom, GNU Make, Linux, PostgreSQL, SQL, Windows, architecture, compilation, control keys, database, extension, game, game integration, input-output, installation, pg_doom, psql, server, terminal
    github.com 12 days ago
2704.  HN Show HN: Sopho - Open Source Business Intelligence Platform
Sopho is an open-source Business Intelligence (BI) platform that aims to blend the best elements of both open-source and proprietary BI solutions by emphasizing simplicity, performance, security, and AI integration. Central to its design are features such as Canvas, which offers a unified environment for notebooks and dashboards; Chart Cells, providing configurable visualizations like bar, line, pie charts, and metrics; and SQL Cells that facilitate SQL execution through the CodeMirror editor integrated with TanStack Table. The platform enhances usability with global search capabilities and keyboard shortcuts for efficient notebook editing. Sopho supports database connections to PostgreSQL, Supabase, and SQLite, ensuring secure handling of credentials through encryption. It features robust authentication methods including username/password login, session-based access, and refresh tokens. Sopho is designed to be deployed easily as a Docker image with flexible environment configurations. Comprehensive documentation for users is accessible via a live Fumadocs site. The platform leverages modern technologies such as Rust for backend operations, React combined with Vite for the frontend, and PostgreSQL for data storage needs. It is distributed under the GNU Affero General Public License v3.0, promoting open-source collaboration and development. Additionally, Sopho offers community support through a dedicated Discord channel where users can engage in discussions and seek assistance. Keywords: #phi4, AI Features, Authentication, Business Intelligence, Canvas, Chart Cells, Community, Connections, Data Analytics, Deployment, Discord, Docker, Documentation, GNU Affero General Public License, Keyboard Shortcuts, Open Source, PostgreSQL, React, Rust, SQL Cells, SQLite, Sopho, Vite
    github.com 12 days ago
2723.  HN Use a SaaS Boilerplate to Ship Faster
Using a Software as a Service (SaaS) boilerplate can greatly expedite the launch process for founders by eliminating repetitive tasks like setting up authentication, billing, and content management systems. This approach allows founders to focus on reaching real users more quickly, thereby enhancing early learning from user interactions and accelerating the time it takes to reach paying customers. Key advantages include increased speed through pre-built components that bypass lengthy development phases; a faster entry into the Build → Measure → Learn cycle due to quicker launches; upfront incorporation of essential security features reducing vulnerabilities; enhanced product credibility with polished UI components; immediate marketing capabilities via SEO-friendly content systems; and modularity allowing easy customization without extensive rewrites. While not all boilerplates offer the same benefits, selecting one with clear documentation, a predictable structure, and a familiar tech stack provides a solid foundation for further development. Ultimately, leveraging a SaaS boilerplate shifts focus from foundational coding to strategic business aspects such as distribution, feedback, iteration, retention, and revenue generation, allowing founders to concentrate on differentiating their product through unique workflows, user experience, and customer engagement strategies, which are critical for success. Keywords: #phi4, Authentication, Billing, Content & SEO, Core App Layout, Deployment, Distribution, Drizzle ORM, Emails, Entitlements, Feedback Loop, Iteration, Launch Earlier, LaunchSaaS, MDX, Modularity, Nextjs, OAuth, Orders, PostgreSQL, Resend, Retention, Revenue, SaaS Boilerplate, Security, Ship Faster, Stripe, Subscription Lifecycle, Tailwind, Webhooks
    launchsaas.org 12 days ago
2730.  HN Show HN: StreamHouse – S3-native Kafka alternative written in Rust
StreamHouse is an innovative open-source streaming platform created to serve as a cost-efficient alternative to Apache Kafka by using Amazon S3 for storage instead of traditional broker-managed disks, resulting in substantial cost savings and easier management. Developed in Rust, StreamHouse ensures high performance with impressive throughput rates—62K writes per second and over 30K reads per second—and boasts low latency metadata queries under 10 milliseconds at the 99th percentile. The platform's key features include a Producer API that supports batching, LZ4 compression, and offset tracking, along with a Consumer API offering consumer groups, auto-commit functionality, and multi-partition fanout using Kafka-compatible protocols. It provides versatile API interfaces such as REST, gRPC, CLI, and a web UI for user interaction and can be swiftly deployed with Docker Compose in under five minutes. A significant advantage of StreamHouse is its cost-effective storage model, utilizing S3 at $0.023 per gigabyte per month compared to the higher costs associated with Kafka’s reliance on Elastic Block Store (EBS) volumes. It guarantees high durability at 99.999999999% and allows for scalable architecture by horizontally scaling through a stateless design anchored by S3 as its backbone. The platform provides comprehensive documentation, Rust client examples, and detailed setup guides for various environments including local setups, MinIO, and AWS. Licensed under Apache 2.0, StreamHouse is accessible on GitHub at [https://github.com/gbram1/streamhouse](https://github.com/gbram1/streamhouse), offering the functionality of Kafka with markedly lower costs and operational simplicity. 
Keywords: #phi4, AWS S3, Apache 2.0 license, CLI, Docker Compose, Kafka alternative, Kafka-compatible protocol, LZ4 compression, MinIO, PostgreSQL, REST API, Rust, S3-native, SQLite, StreamHouse, consumer API, cost-effective, durability, gRPC API, high-performance, infinite scalability, metadata queries, open-source, producer API, stateless server, streaming platform, web UI
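The storage-cost claim can be made concrete with back-of-the-envelope arithmetic. The S3 price is the one quoted above; the EBS gp3 price and the 3x Kafka replication factor are assumptions for illustration and vary by region and configuration.

```python
S3_PER_GB_MONTH = 0.023    # quoted in the article
EBS_PER_GB_MONTH = 0.08    # assumed gp3 list price; region-dependent
KAFKA_REPLICATION = 3      # typical Kafka replication factor (assumption)

def monthly_storage_cost(gb, price_per_gb, copies=1):
    """Storage cost per month for `gb` of data kept in `copies` replicas."""
    return gb * price_per_gb * copies

# 1 TB of retained stream data
s3_cost = monthly_storage_cost(1000, S3_PER_GB_MONTH)  # S3 replicates internally
kafka_cost = monthly_storage_cost(1000, EBS_PER_GB_MONTH, KAFKA_REPLICATION)
```

Under these assumptions that is roughly $23 versus $240 per month for a terabyte, before any compute costs.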
    github.com 12 days ago
2736.  HN Looking Forward to PGConf India in Bengaluru in March
PGConf India 2026 in Bengaluru is eagerly anticipated by the author, who highlights its rising prominence on the global PostgreSQL calendar. Microsoft plays a significant role as a diamond sponsor, contributing multiple talks across various tracks, including Rahila's performance optimization session and Claire’s insights into PostgreSQL committers. The conference features notable presentations such as 'Operating Postgres Logical Replication at Massive Scale' and in-depth explorations of PostgreSQL’s memory architecture. Attendees can expect a wide range of topics covering database development, administration, and application development. The organizing team is commended for creating an enriching learning environment, and attendees are encouraged to engage with Microsoft during the event. Keywords: #phi4, Azure Database, Bangalore Meetup, Bengaluru, HorizonDB, Microsoft, PGConf India, PostgreSQL, administration, commits, community, database architecture, logical replication, open-source, organizing team, performance optimization, program committee
    techcommunity.microsoft.com 12 days ago
2779.  HN Show HN: CharityVerify – Trust scores for 138K Canadian charities
CharityVerify serves as an online platform that evaluates Canadian charities through trust scores derived from T3010 forms available via the Canada Revenue Agency, encompassing 138,203 charities. It employs a comprehensive scoring system to assess legitimacy, effectiveness, and compliance, assigning letter grades ranging from A+ to F for each organization. Utilizing Python and Playwright for data gathering, PostgreSQL for storage, and Express.js for its API infrastructure, CharityVerify provides insights into the charitable sector's performance, revealing that only 186 charities attained an A+ rating while reflecting a general effectiveness average of 51.6/100. The platform offers free access to basic search and viewing functionalities, with plans to introduce a tiered REST API tailored for professional entities like due diligence firms and grant-making organizations. Additionally, CharityVerify delivers on-demand narrative reports to elucidate each charity's operations and financial history from 2009-2023. While the data is public domain, the platform’s proprietary code remains closed-source. Keywords: #phi4, CRA data, Canadian charities, CharityVerify, Expressjs, GitHub Actions, Playwright, PostgreSQL, Python, REST API, Supabase, T3010 forms, assets, compensation, compliance, effectiveness, expenses, financials, legitimacy, letter grades, program spending, revenue, scoring algorithm, trust scores
    charityverify.com 12 days ago
2836.  HN Show HN: OpenLingo – Connecting Sonnet 4.6 to a Duolingo-like interface
OpenLingo is a free, open-source language learning platform that integrates advanced AI technology into a user-friendly interface reminiscent of Duolingo. It facilitates language practice through interactive exercises and conversations with an AI tutor, supporting over 15 languages. The platform utilizes adaptive conversational tutors powered by large language models (LLMs) and incorporates spaced repetition flashcards using the SM-2 algorithm to enhance learning efficiency. Users benefit from personalized lesson creation tools, web article translation capabilities, and pronunciation feedback through OpenAI's GPT and Whisper technologies. Key features include diverse exercise types, pre-built or AI-generated courses, and comprehensive translation services that provide speaking feedback. Built with modern technologies such as Next.js, React, TypeScript, PostgreSQL, Cloudflare R2 for storage, and Tailwind CSS for styling, the platform offers robust functionality. To set up OpenLingo locally, users require Bun, Docker, an AI provider API key, and follow provided instructions to configure their environment, database, and application, all under an MIT license. Keywords: #phi4, AI Chat Tutor, AI-powered, Better Auth, Bun, CEFR, Cloudflare R2, Drizzle ORM, Duolingo-like, GPT-4o-mini-tts, Nextjs, OpenAI Whisper, OpenLingo, PostgreSQL, React, SRS, TTS/STT, Tailwind CSS, TypeScript, courses & units, frequency dictionaries, interactive exercises, language learning, reading material, spaced repetition, supported languages, tech stack, translation
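The SM-2 spaced-repetition rule mentioned above can be sketched directly. This follows the published SM-2 update formulas, not OpenLingo's actual implementation; implementations also vary on whether the easiness factor changes after a failed review.

```python
def sm2(quality, repetitions, easiness, interval):
    """One SM-2 review step. `quality` is a 0-5 self-rating of recall.
    Returns updated (repetitions, easiness, interval_in_days)."""
    if quality >= 3:  # successful recall: grow the interval
        if repetitions == 0:
            interval = 1
        elif repetitions == 1:
            interval = 6
        else:
            interval = round(interval * easiness)
        repetitions += 1
        # Easiness update, clamped at 1.3 as in the original algorithm
        easiness += 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)
        easiness = max(1.3, easiness)
    else:  # failed recall: restart the sequence, easiness unchanged
        repetitions, interval = 0, 1
    return repetitions, easiness, interval

# A card rated 5 ("perfect") on its first review comes back in one day
reps, ef, days = sm2(quality=5, repetitions=0, easiness=2.5, interval=0)
```

Successive good reviews stretch the interval (1 day, then 6, then interval x easiness), while a failed review resets the card to daily repetition.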
    github.com 12 days ago
2837.  HN Show HN: MasqueradeORM – Memory Efficient Node ORM: Just Write Classes
MasqueradeORM is designed as a lightweight and memory-efficient Object-Relational Mapping (ORM) solution specifically for Node.js applications, supporting both TypeScript and JavaScript environments. It simplifies SQL interaction by allowing developers to work with their existing class structures without necessitating specific ORM configurations or metadata systems, thus enhancing code readability and maintainability while streamlining workflow processes through automatic schema and table generation from these classes. The key features of MasqueradeORM include an effortless setup that leverages pre-existing classes and the automated creation of database schemas and tables. It provides IntelliSense support to facilitate complex query construction with real-time guidance in Integrated Development Environments (IDEs). The ORM minimizes memory consumption by avoiding duplicate instances of entities and optimizes database interactions through batched implicit writes. Additionally, it simplifies managing conditions that span multiple columns or tables and offers expressive template-literal WHERE clauses for intricate SQL logic. Further enhancing its capabilities, MasqueradeORM provides advanced sorting options with support for custom expressions and multi-column tie-breakers. It also ensures robust relational data handling via eager and lazy loading strategies. To enhance security against SQL injection attacks, the ORM employs parameterized queries. Moreover, it includes a smart runtime schema cleanup feature to reduce database bloat over time. MasqueradeORM stands out due to its minimalistic design with only two dependencies and supports SQLite and PostgreSQL databases. Installation is straightforward using npm, allowing seamless data manipulation where changes are automatically persisted. 
The ORM offers flexible handling for both relational and non-relational data and supports abstract class inheritance in JavaScript, leveraging JSDoc for strong typing without requiring a compile step. These features collectively make MasqueradeORM an efficient tool for developers looking to manage database interactions effectively within Node.js applications. Keywords: #phi4, IntelliSense, JSDoc, JavaScript, LazyPromise, MasqueradeORM, Nodejs, ORM, PostgreSQL, RDBMS, SQL, SQLite, TypeScript, batched writes, classes, embedded SQLite, find method, inheritance, lazy loading, lightweight, parameterized queries, relations, schema cleanup, soft deletion
    github.com 12 days ago
   https://github.com/MasqueradeORM/MasqueradeORM   12 days ago
2862.  HN Show HN: KeyEnv – manage team secrets without scattered .env files
KeyEnv is a command-line interface (CLI) focused secrets manager designed to enhance the security and efficiency of managing team secrets by replacing scattered `.env` files with an encrypted storage system. Its primary function is to streamline access to sensitive data through a single command, `keyenv pull`, which simplifies the retrieval process for developers. To ensure robust security, KeyEnv employs AES-256-GCM encryption for data at rest, providing military-grade protection for stored secrets. It offers extensive scoping and access controls by supporting per-project and per-environment configurations such as development, staging, and production environments. These features are complemented by comprehensive team access controls and an audit trail to monitor who accesses the data and when. KeyEnv's compatibility with existing applications is seamless; it integrates without necessitating any changes to application code, maintaining reliance on environment variables for configuration. Its security suite includes not only strong encryption but also detailed audit logs and automated credential rotation to prevent downtime. Tailored specifically for teams managing microservices or operating in multi-environment setups, KeyEnv provides a robust solution with integrations supporting CI/CD workflows. These features make it an ideal tool for developers seeking secure and efficient secret management across diverse project environments. Keywords: #phi4, AES-256-GCM, Bitbucket, CI/CD, CLI-first, CircleCI, GitHub Actions, GitLab CI, KeyEnv, MySQL, PostgreSQL, SDKs, audit trail, encrypted store, environment variables, microservices, multi-environment, secrets manager, team access controls
    keyenv.dev 12 days ago
2864.  HN Show HN: SQL-tap now has a browser-based Web UI
SQL-tap, an advanced SQL proxy tool designed for real-time query capture and inspection, has introduced significant updates enhancing its functionality and usability. A major addition is the built-in Web UI, activated through `--http=:8080`, which allows users to access a self-contained JavaScript SPA directly from their browser without dependencies. This interface supports real-time query streaming with Server-Sent Events (SSE), SQL syntax highlighting, and interactive inspection using EXPLAIN/EXPLAIN ANALYZE functionalities. Users benefit from an array of features including filtering and grouping transactions, color-coding slow queries, exporting data, and detecting N+1 query patterns, all available through both Text User Interface (TUI) and Web UI. Another critical update is the introduction of automatic N+1 Query Detection, which flags repeated executions of identical SELECT templates within a configurable time frame. This feature highlights occurrences in both TUI and Web UIs, offering customizable thresholds to aid developers in optimizing query efficiency. In addition to these new capabilities, SQL-tap has seen improvements since its initial release (version 0.0.1), particularly enhancing the TUI with structured filtering options, analytics views, export functionalities, argument-bound copy functions, and improved navigation controls. The proxy now also supports TiDB through `--driver=tidb`, ensures compatibility with MySQL 9, and resolves issues related to PostgreSQL parameter decoding. SQL-tap's seamless operation stems from its ability to intercept the native wire protocol between applications and databases without necessitating any code modifications. Available as a single Go-written binary, it can be installed via Homebrew or `go install`. As an independent project developed by a solo creator, contributions on GitHub are welcome, with user feedback encouraged through starring the repository. 
Keywords: #phi4, EXPLAIN ANALYZE, GitHub, Go, Homebrew, JavaScript SPA, MySQL, N+1 detection, PostgreSQL, SQL proxy, SQL-tap, SSE, TUI improvements, TiDB, Web UI, database support, queries, query statistics, real-time, single binary, syntax highlighting, transparent capture
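The N+1 detection idea described above — flagging a SELECT template that repeats within a short window — can be sketched as follows. The normalization regexes and thresholds are illustrative; SQL-tap's actual matching logic may differ.

```python
import re
from collections import deque

def normalize(sql):
    """Reduce a query to a template: `... WHERE user_id = 7` and
    `... WHERE user_id = 8` normalize to the same string."""
    sql = re.sub(r"'[^']*'", "?", sql)   # string literals
    sql = re.sub(r"\b\d+\b", "?", sql)   # numeric literals
    return re.sub(r"\s+", " ", sql).strip().lower()

class NPlusOneDetector:
    def __init__(self, threshold=5, window_ms=1000):
        self.threshold = threshold
        self.window_ms = window_ms
        self._seen = {}  # template -> deque of timestamps (ms)

    def observe(self, sql, ts_ms):
        """Record a query; True once its template has fired
        `threshold` times within the last `window_ms`."""
        template = normalize(sql)
        if not template.startswith("select"):
            return False
        times = self._seen.setdefault(template, deque())
        times.append(ts_ms)
        while times and ts_ms - times[0] > self.window_ms:
            times.popleft()  # expire observations outside the window
        return len(times) >= self.threshold

detector = NPlusOneDetector(threshold=3, window_ms=100)
hits = [detector.observe(f"SELECT * FROM orders WHERE user_id = {i}", i * 10)
        for i in range(4)]
```

In a loop-over-parents access pattern like this, the third identical lookup inside the window trips the detector, which is exactly the repeated-SELECT signature of an N+1 bug.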
    news.ycombinator.com 12 days ago
2880.  HN Show HN: Scry – Test migrations against production scale copy of your DB
Scry is a sophisticated tool designed to evaluate database migrations by testing them on a production-scale replica of the actual database, thereby identifying potential performance issues before they affect live environments. It operates by simulating real traffic patterns and data volumes on a shadow database replica, which allows it to detect problems that might not be apparent in smaller test setups. In a case involving an e-commerce application, Scry successfully identified a significant slowdown caused by the addition of a new column to the orders table. The tool revealed that this degradation was due to the absence of an index, leading to a 92x performance regression. By recognizing and resolving such issues beforehand—adding the necessary indexes in this instance—Scry prevents costly disruptions and ensures that deployments proceed smoothly. Its seamless integration into Continuous Integration (CI) pipelines allows for thorough validation of migrations concerning both correctness and scalability, safeguarding customers from potential negative impacts. Moreover, Scry's ability to quickly detect regressions locally within minutes enhances deployment safety by reducing risks before any changes reach production environments. Keywords: #phi4, Alembic, CDC-replicated, CI, EXPLAIN ANALYZE, PostgreSQL, Scry, command line, concurrency, correctness, data skew, database, demo, index, latency, migration, performance, production scale, query regression, replay report, shadow database, staging, traffic patterns
    www.scrydata.com 12 days ago
   https://github.com/scrydata/scry-cli/releases   12 days ago
2881.  HN Show HN: Tabularis – DB GUI where drivers are JSON-RPC executables
Tabularis is a database GUI application tailored for MySQL, PostgreSQL, SQLite, and MariaDB, distinguished by its plugin architecture that enables each database driver to operate as an independent executable. These drivers communicate with the core process through JSON-RPC 2.0 over stdin/stdout, ensuring no shared libraries or ABI concerns while providing automatic process isolation. This setup allows users to develop plugins using languages such as Rust, Go, and Python, or any language capable of handling standard input/output for JSON, with a notable example being the first community plugin developed for DuckDB. The application is built utilizing Tauri 2 and React 19 frameworks, deliberately avoiding reliance on Electron or JVM environments. Furthermore, the source code for Tabularis is publicly accessible on GitHub at [debba/tabularis](https://github.com/debba/tabularis). Keywords: #phi4, DuckDB, GitHub, Go, JSON-RPC, MariaDB, MySQL, PostgreSQL, Python, React 19, Rust, SQLite, Tabularis, Tauri 2, database GUI, executable, plugin architecture, process isolation
    The google logo   news.ycombinator.com 12 days ago
2892.  HN Show HN: Bruce – AI signal radar for Reddit/HN that learns what matters to you
Bruce is an AI-powered signal radar that surfaces relevant content from platforms such as Reddit and Hacker News by evaluating the context of a user's product, ideal customer profile, and competitors, rather than relying on keyword matching alone. Alert accuracy improves over time as users interact with alerts. Bruce monitors a variety of sources, including RSS feeds and ProductHunt via RSSHub, and is built on Next.js 15, PostgreSQL, Drizzle, and Better Auth. It exposes developer-friendly REST APIs alongside an MCP server, and is accessible at smartbruce.com. Documentation on its scoring model and architecture is available for those interested in its inner workings. Keywords: #phi4, AI, AI runtime, Better Auth, Drizzle, Hacker News, ICP, MCP server, Nextjs, PostgreSQL, REST API, RSSHub, Reddit, SaaS, architecture, competitor list, keyword alerts, learning algorithm, noise filtering, product context, radar, scoring model, smartbrucecom
    The google logo   smartbruce.com 12 days ago
2901.  HN Row Locks with Joins Can Produce Surprising Results in PostgreSQL
In PostgreSQL, using row locks alongside joins can lead to unexpected outcomes due to how locks are handled during query execution and release. Specifically, locking a `car` row without also locking its associated `owner` in a join operation may result in transactions encountering outdated or incomplete data when another transaction updates the owner information after the car lock is released. This issue stems from PostgreSQL’s default Read Committed isolation level, which permits visibility of other committed transactions during query execution. Consequently, if an `owner_id` in a locked `car` row changes due to concurrent updates and the join conditions are re-evaluated after releasing the lock, no matching rows may be returned if only the `car` table was locked. To address this without altering isolation levels or locking both tables—which does not fully resolve the issue—one effective approach is to decouple queries. First, acquire a lock on the `car` row and perform ownership updates, followed by a separate query to retrieve owner details using the updated `owner_id`. This ensures consistency in data used for updates. Another strategy involves structuring queries such that locks are acquired prior to joins, through methods like sub-queries or common table expressions, ensuring that re-evaluation reflects up-to-date information. Such strategies were successfully applied in a real-world scenario where rapid user interface interactions led to concurrent requests and highlighted the issue, prompting a solution with separate queries for improved robustness. Keywords: #phi4, CTE, PostgreSQL, Read Committed, Repeatable Read, Row locks, Serializable, concurrency, database, deadlock, error handling, execution plan, foreign key, index scan, isolation level, joins, lock, nested loop, outer join, query, sub-query, transaction
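The decoupled-query fix can be sketched in memory with illustrative data; in PostgreSQL the first step would be `SELECT owner_id FROM car WHERE id = %s FOR UPDATE`, with the owner fetched in a second statement using the id read under the lock.

```python
# In-memory sketch of the decoupled-query fix; the dicts and lock bookkeeping
# stand in for real rows and a real FOR UPDATE row lock.

cars = {1: {"owner_id": 10}}
owners = {10: {"name": "Alice"}, 11: {"name": "Bob"}}
locks = {}

def lock_and_get_owner_id(car_id, txn):
    """Step 1: lock the car row and read owner_id under that lock."""
    locks[car_id] = txn  # stands in for SELECT ... FOR UPDATE
    return cars[car_id]["owner_id"]

def get_owner(owner_id):
    """Step 2: a separate query, using the owner_id read while locked."""
    return owners[owner_id]

# Transaction T1 locks first, then resolves the owner in a second query, so a
# concurrent owner_id change cannot invalidate a join mid-execution.
owner_id = lock_and_get_owner_id(1, txn="T1")
print(get_owner(owner_id))  # {'name': 'Alice'}
```

The point is ordering: because the owner lookup uses a value read while the lock was held, it cannot see a half-updated join the way a single locking join can.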
    The google logo   hakibenita.com 12 days ago
2916.  HN LLMs and Your Career
The article highlights the significance of grasping software fundamentals for developers, stressing that reliance on existing tools like PostgreSQL, MySQL, Rails, .NET, Stack Overflow, and Large Language Models (LLMs) should not result in treating these technologies as black boxes. A deep understanding of how web servers, databases, operating systems, and browsers operate enables developers to make informed decisions when adapting code. The comparison of coding with LLMs to using established frameworks or resources like Stack Overflow underscores that while they offer speed, without fundamental knowledge their use can be superficial. Companies prioritizing core concepts—especially those involved in large-scale operations or developing foundational technologies—tend to seek developers who possess this deeper understanding. The article also notes the trend toward software evolution aimed at automating problem-solving for smaller businesses, reducing the need for developer hires. However, it argues that complex and large-scale business requirements will continue to necessitate skilled developers. As non-developers increasingly utilize LLMs, systems relying on robust software fundamentals face more pressure. Consequently, roles focused on understanding and developing core technologies are likely to persist. The article advises those passionate about software development to pursue ongoing education in fundamental areas such as compilers, databases, and operating systems, while targeting companies that present complex problems where such expertise is essential. Keywords: #phi4, Applications, Browser, Career, Code, Compilers, Complexity, Databases, Frameworks, Fundamentals, LLMs, Libraries, MySQL, NET, Operating Systems, PostgreSQL, Problem Solving, Productivity, Rails, SMBs, Scale, Software Developer, Stack Overflow, Systems, Tools, Web Servers
    The google logo   notes.eatonphil.com 12 days ago
2946.  HN Show HN: Out Plane – A PaaS I built solo from Istanbul in 3 months
Out Plane is a Platform as a Service (PaaS) built solo by Mert Kaya from Istanbul in three months, designed to streamline deployment for developers by removing the need for configuration such as Dockerfiles and YAML files. Users deploy code with a single push from connected GitHub repositories, with deployments completing in under 60 seconds. The platform offers managed PostgreSQL and Redis with auto-detection, real-time metrics, and automatic scaling, including scale-to-zero when idle. Per-second pricing reportedly reduces costs compared to other platforms. Out Plane handles traffic spikes autonomously and advertises a 99.99% uptime SLA. It supports zero-configuration deployments across a range of programming languages and stacks, and provides built-in metrics, logs, and traces compatible with tools like Grafana and Datadog. The platform targets enterprise-grade security and compliance standards, including GDPR, and integrates with AWS, Cloudflare, and other partners. Adopters include organizations such as the Ministry of Transport, and development is feedback-driven. New users receive $20 in free credit with no credit card required.
Keywords: #phi4, AWS, CI/CD, Cloudflare, DDoS protection, Datadog, DevOps, Dockerfile, GDPR, GitHub, Grafana, Kubernetes, OpenTelemetry, Out Plane, PaaS, PostgreSQL, Redis, SSL/TLS, SSO & SAML, VPC isolation, YAML, auto-detection, compliance, containerization, deployment, edge deployment, infrastructure management, integrations, logs, managed databases, metrics, monitoring, pricing, real-time metrics, scaling, security, traces, traffic spikes, uptime SLA, versions, zero configuration
    The google logo   outplane.com 12 days ago
   https://blog.notmyhostna.me/posts/what-i-wish-existed-f   12 days ago
2952.  HN Show HN: Pointwise – Self-hosted Lidar annotation for AV teams
Pointwise is a self-hosted Lidar annotation tool tailored for autonomous vehicle teams needing to manage large datasets locally. It utilizes Docker and PostgreSQL, featuring a WebGL renderer that efficiently processes over 1 million points at 60 frames per second directly in the browser. The platform facilitates multi-user collaboration with distinct roles such as annotators, reviewers, and admins, incorporating functionalities like review pipelines, issue tracking, and audit trails to streamline workflow. It supports versatile data storage options, including local filesystems or S3-compatible solutions. The tool's key features include precise 3D bounding box annotations with real-time adjustments, a multi-view inspector that allows simultaneous views from the front, side, and top perspectives, and over ten pre-configured object categories pertinent to autonomous driving. Users have the flexibility to create custom label profiles and synchronize camera images with point clouds, accommodating multiple camera angles per frame. Pointwise also supports sequence navigation through a timeline interface and provides keyboard shortcuts to enhance workflow efficiency. Keywords: #phi4, 3D Bounding Boxes, AV teams, Camera Image Sync, Custom Label Profiles, Docker, Keyboard Shortcuts, Lidar annotation, Multi-View Inspector, Object Categories, PostgreSQL, Sequence Support, WebGL renderer, annotator roles, audit trails, issue tracking, multi-user workflow, review pipeline, self-hosted
    The google logo   www.pointwise.cloud 12 days ago
2962.  HN Ask HN: How do you implement production-grade draft isolation in Django?
A developer is constructing an open-source Learning Management System (LMS) complete with a content studio designed for instructors to create various educational materials, including exams and courses. A significant challenge they face involves implementing a robust draft isolation feature that allows instructors to test their work in conditions mimicking real-world usage, encompassing functionality like actual timers, submission processes, and grading systems. The objective is to enable these drafts to be either finalized or discarded without impacting any live data. Three potential solutions are under consideration: utilizing PostgreSQL schema separation, which offers conceptual clarity but complicates Django migrations; implementing `is_draft` flags that would necessitate extensive conditionals across application layers, thus increasing development complexity; and creating snapshot tables, which fail to support the intended real-world workflows. The developer is striving for a solution akin to pytest's database isolation capabilities in production settings, allowing tests to be seamlessly persisted or discarded. They are seeking existing solutions or best practices from other systems that successfully address similar draft-isolation challenges. Keywords: #phi4, Django, LMS, PostgreSQL, assignments, content studio, courses, discardable, exams, instructors, is_draft flags, migrations, persistable, preview, production-grade draft isolation, pytest-style DB isolation, quizzes, schema separation, snapshot tables
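The pytest-style "persist or discard" semantics reduce to an open transaction that is either committed or rolled back. The sketch below uses SQLite from the standard library purely for a runnable illustration; the question itself concerns Django on PostgreSQL, and a single transaction cannot span the long-lived, multi-request sessions that real timers and grading would need, which is why schema separation and `is_draft` flags come up at all.

```python
import sqlite3

# "Persist or discard" via a plain transaction: draft work happens inside an
# open transaction, which is rolled back to discard or committed to finalize.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE exam (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO exam (title) VALUES ('Published exam')")
conn.commit()

# Draft session: all writes stay inside one open transaction.
conn.execute("INSERT INTO exam (title) VALUES ('Draft exam')")
conn.rollback()  # discard the draft; conn.commit() here would finalize it

titles = [row[0] for row in conn.execute("SELECT title FROM exam")]
print(titles)  # ['Published exam']
```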
    The google logo   news.ycombinator.com 12 days ago
2969.  HN Timedb: Open-Source Database for Timeseries
TimeDB is an innovative open-source time-series database designed to manage complex temporal data using a three-dimensional model. Built on the foundations of PostgreSQL and TimescaleDB, it efficiently addresses challenges such as overlapping forecasts, auditable updates, and historical "time-of-knowledge" queries by distinguishing between valid times, knowledge times (when predictions were made), and change times for each data point. Unlike conventional databases that assume immutable values per timestamp, TimeDB incorporates a comprehensive audit trail with metadata including tags, annotations, and user information for every modification. The system's standout features include the capability to store overlapping forecasts with complete provenance records, generate detailed audit trails for manual adjustments, execute true backtesting by querying historical states at specific times, and organize data using meaningful labels. TimeDB streamlines its workflow through a Python SDK and FastAPI backend, making it accessible and efficient for users. For installation, TimeDB requires Python 3.9 or higher along with PostgreSQL and TimescaleDB, and can be installed via pip. It provides a quick start guide that outlines examples of creating schemas, inserting data with specific timestamps, reading the latest forecasts, and accessing all historical revisions. Additionally, potential users have the option to try TimeDB without local setup through Google Colab. The project encourages community contributions by providing extensive documentation and development guides for interested developers. Keywords: #phi4, FastAPI backend, Google Colab, PostgreSQL, Python SDK, TimeDB, TimescaleDB, annotations, audit trail, auditable updates, backtesting, change_time, contributing, documentation, forecast revisions, installation, knowledge_time, metadata, open-source, tags, temporal data model, time-series database, valid_time
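The time-of-knowledge idea can be illustrated with a toy bitemporal lookup: each point keeps a valid time (when the value applies) and a knowledge time (when it was learned), and querying "as of" a knowledge time reproduces what a backtest would have seen. Field names here are illustrative, not TimeDB's actual API.

```python
# Toy bitemporal store: two revisions of the same forecast for 2024-01-05,
# learned at different knowledge times.

records = [
    {"valid_time": "2024-01-05", "knowledge_time": "2024-01-01", "value": 100},
    {"valid_time": "2024-01-05", "knowledge_time": "2024-01-03", "value": 120},  # revision
]

def as_of(records, valid_time, knowledge_time):
    """Latest value for valid_time, using only data known by knowledge_time."""
    known = [r for r in records
             if r["valid_time"] == valid_time and r["knowledge_time"] <= knowledge_time]
    return max(known, key=lambda r: r["knowledge_time"])["value"] if known else None

print(as_of(records, "2024-01-05", "2024-01-02"))  # 100 (revision not yet known)
print(as_of(records, "2024-01-05", "2024-01-04"))  # 120
```

A true backtest simply never sees the revision, because filtering on knowledge time hides everything learned after the simulated moment.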
    The google logo   github.com 12 days ago
3014.  HN The Agentic Data Stack
The Agentic Data Stack is an open-source framework designed to establish a self-hosted agentic analytics ecosystem using Docker Compose. This setup includes ClickHouse for high-speed data analysis, LibreChat as a customizable chat UI supporting multiple AI models, and Langfuse for monitoring language model interactions. Key features of this stack involve LibreChat on port 3080 for versatile chat functionalities with various AI providers like OpenAI, Anthropic, and Google; ClickHouse accessible through an MCP server on port 8000 for rapid data querying by agents; and Langfuse's observability services available on port 3000. The system incorporates several additional components: PostgreSQL (port 5432) for transactional operations, MongoDB (port 27017) for managing LibreChat transactions, MinIO (port 9090) as an S3-compatible storage solution, Redis (port 6379) for caching and queuing, Meilisearch (port 7700) for full-text search capabilities, a pgvector database (port 5433) to support retrieval-augmented generation (RAG), and an RAG API on port 8001 for enhanced chat functionalities. The quick start guide outlines prerequisites of Docker and Docker Compose v2+, followed by environment setup using scripts that generate or customize configuration files, culminating in the deployment of services via `docker compose up -d`. Services can be accessed through their designated ports, with LibreChat available at http://localhost:3080. Configuration and maintenance are facilitated by various scriptable components, allowing for easy reconfiguration and teardown as needed. Overall, this stack offers a robust environment for agentic analytics exploration with extensive observability and customization features. 
Keywords: #phi4, API keys, Agentic Data Stack, Anthropic, ClickHouse, Docker Compose, Google, LLM observability, Langfuse, LibreChat, MCP server, Meilisearch, MinIO, OpenAI, PostgreSQL, RAG API, Redis, analytics environment, caching and queue, chat UI, containers, embeddings, env file, evaluation, full-text search, object storage, observability, pgvector, prompt management, retrieval-augmented generation, transactional database, vector database, volumes
    The google logo   github.com 13 days ago
3046.  HN CLI tool for easier GA4/Firebase/postgre access (for cross-source data analysis)
Analytics Agent is a command-line interface (CLI) tool designed to streamline access to Google Analytics 4, Firebase Auth, and PostgreSQL databases by creating unified datasets for comprehensive cross-source analysis. The tool simplifies the setup process through an automatic wizard that employs OAuth2 Desktop App flow for authentication, storing tokens locally for ease of use. Users can interact with the data either via an AI agent or directly in Python, offering flexibility between automated queries and custom scripting using a provided library. To begin using Analytics Agent, users must clone its repository and install any necessary dependencies like `uv`. The setup involves a browser-based sign-in to Google services for authentication, property discovery across GA4 and Firebase, and entering the PostgreSQL connection string. For automated operations without user interaction, service accounts can be utilized. Various database connections are supported, including those hosted on public IPs or configured through environment variables for Google Cloud SQL, with all configurations stored locally. The tool provides multiple functionalities: it allows AI-driven queries and direct Python scripting to interact with GA4, Firebase, and PostgreSQL databases efficiently. It includes a range of CLI commands for setup, authentication management, configuration handling, and cleanup tasks. Additionally, Analytics Agent features a skill-creator function that enables users to customize analysis for specific products by creating product skills. This customization helps AI agents understand unique database schemas, funnels, and key events pertinent to the product without repetitive explanations. Analytics Agent requires Python 3.11 or higher and valid access credentials for GA4, Firebase, and PostgreSQL databases. It is designed to operate entirely on a user's local machine, ensuring that no data is transmitted externally, thus maintaining user privacy. 
Keywords: #phi4, AI agents, Analytics Agent, CLI tool, Firebase Auth, GA4, Google Analytics 4, OAuth2, PostgreSQL, Python library, data analysis, database connection, service account, skill-creator
    The google logo   github.com 13 days ago
3059.  HN Show HN: Crash-safe job queue – lease-expiry race and fencing fix
Faultline is an advanced job processing system built on PostgreSQL, designed to ensure correctness, recoverability, and race safety even under real-world failure conditions. Its architecture relies on lease-based execution paired with row-level locking for effective coordination and state management. Central to its design are fencing tokens that guard against stale writes by associating side effects with a unique combination of `job_id` and `fencing_token`, which increases monotonically during each lease acquisition. This setup, along with database-enforced idempotency, allows Faultline to handle crash-safe recovery and maintain deterministic job execution semantics. The system’s architecture is composed of several key components: - **Lease-Based Execution**: Workers are required to possess a valid lease to execute jobs, defined by attributes like `lease_owner` and `lease_expires_at`. - **Fencing Tokens**: These tokens prevent stale writes from committing post ownership loss by binding side effects specifically to each `(job_id, fencing_token)` pair. - **Idempotency and Crash-Safe Recovery**: Jobs transition through pre-defined states such as `queued`, `running`, and either `succeeded` or `failed`. The database layer ensures that illegal state transitions are blocked. Additionally, jobs can be safely reprocessed if leases expire, with side effects safeguarded by uniqueness constraints. Faultline assures several guarantees, including deterministic epoch advancement, correct owner progression during job execution, rejection of stale writes, and ensuring exactly one successful side effect per lease epoch bound to a `fencing_token`. The system is robust against various failure scenarios such as worker crashes, duplicate retries, or database restarts, thanks to its reconciliation process that repairs incomplete states. Observability features are provided via Prometheus metrics. 
The Faultline framework operates within a tech stack comprising Python, PostgreSQL, Docker, and Prometheus, offering a streamlined setup and deployment in distributed environments. This configuration supports efficient operation and scalability, making it a reliable choice for distributed job processing needs. Keywords: #phi4, Job queue, PostgreSQL, crash-safe recovery, database boundary, deterministic validation, fencing token, idempotency, lease expiry, lease-based execution, race condition, reconciliation job, side effects, worker crashes
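The fencing rule described above can be sketched in a few lines. In Faultline the check is enforced at the database boundary; here a dict stands in, and the function names are illustrative.

```python
# Minimal fencing-token sketch: a side effect commits only if it carries the
# current (highest) token for the job, so a stale lease holder is rejected.

current_token = {}   # job_id -> latest fencing token issued with a lease
applied = []         # committed side effects, one per (job_id, token)

def acquire_lease(job_id):
    current_token[job_id] = current_token.get(job_id, 0) + 1
    return current_token[job_id]

def commit_side_effect(job_id, token, payload):
    if token != current_token[job_id]:
        return False  # stale writer: its lease expired and was re-acquired
    applied.append((job_id, token, payload))
    return True

old = acquire_lease("job-1")   # worker A takes the lease
new = acquire_lease("job-1")   # lease expires; worker B takes over
print(commit_side_effect("job-1", old, "A's write"))  # False (fenced off)
print(commit_side_effect("job-1", new, "B's write"))  # True
```

Because the token increases monotonically with each lease acquisition, a crashed worker that wakes up late can never overwrite its successor's work.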
    The google logo   github.com 13 days ago
3066.  HN Show HN: CodeRocket Deploy – AI generates GitHub Actions workflows in 60 seconds
CodeRocket Deploy is a GitHub App designed to streamline Continuous Integration/Continuous Deployment (CI/CD) workflows by automatically generating optimized GitHub Actions workflows. It simplifies setup by removing the need for manual template searches and YAML configuration debugging. Upon installation, CodeRocket Deploy analyzes repositories to identify languages, frameworks, and deployment targets, leveraging AI to create tailored workflows. A pull request is then generated for review before integration. The tool's capabilities include deep static analysis using GitHub’s Contents API, detection of over 20 frameworks such as Next.js and Django, few-shot prompting with examples, fallback templates, and YAML validation with security checks. It utilizes a technology stack that includes Django + DRF, React + TypeScript, PostgreSQL, and Redis for Celery tasks. Users can access a free tier offering 100 workflow generations per month across three repositories without needing a credit card. Available on the GitHub Marketplace, CodeRocket Deploy seeks feedback regarding framework support, workflow enhancements, and potential issues with AI-generated CI/CD configurations. Keywords: #phi4, AI, Actions, App, CI/CD, Celery, CodeRocket Deploy, Deploy, Django, GitHub, GitHub Actions, GitHub App, GitHub Marketplace, Marketplace, PostgreSQL, React, Redis, TypeScript, YAML, YAML validation, feedback, few-shot, few-shot prompting, framework, framework detectors, implementation, scanning, security, security scanning, static analysis, technical implementation, CodeRocket, validation, workflows
    The google logo   deploy.coderocket.com 13 days ago
3067.  HN Show HN: KitchenAsty – Open-source, self-hosted restaurant management system
KitchenAsty is an open-source restaurant management system tailored for small restaurants, leveraging modern technologies such as TypeScript, Node.js, React, PostgreSQL, and Prisma. It facilitates online ordering (both delivery and pickup), menu management, table reservations, kitchen displays, coupon systems, customer reviews, staff management, and provides analytics through a comprehensive dashboard. Payments are supported via Stripe, with an option for cash-on-delivery. The system's architecture includes an Express API server, React-based admin and customer interfaces, real-time updates facilitated by Socket.IO, and a mobile application developed using React Native. The project is extensive, comprising 27,000 lines of TypeScript, 30 database models, and over 330 tests covering unit, integration, and end-to-end scenarios with Playwright. KitchenAsty was initiated as an alternative to outdated PHP/Laravel projects in the restaurant management domain, offering a contemporary, rigorously tested solution that is easily deployable via Docker. It is MIT-licensed, encouraging both use flexibility and community contributions. Accompanying resources include comprehensive documentation, API documentation through Swagger UI, and active community engagement facilitated by GitHub issues and discussions. Keywords: #phi4, API, CI/CD, Docker, Docker Compose, E2E tests, GitHub, KitchenAsty, Nodejs, Playwright, PostgreSQL, Prisma, React, SocketIO, Stripe, Swagger UI, TypeScript, analytics, coupon system, developer experience, documentation, internationalization, menu management, mobile app, monorepo, npm workspaces, ordering, real-time updates, reservations, restaurant management, role-based access, self-hosted, staff management, test suite
    The google logo   github.com 13 days ago
3117.  HN From Select to Advanced SQL: JOINs, CTEs, and More
SQL stands out as an essential technology in software engineering, offering stability amid the rapid evolution of frameworks and technologies. It serves as the universal language for interacting with diverse database systems like PostgreSQL, MySQL, ClickHouse, and Snowflake. The article provides a comprehensive guide to SQL using PostgreSQL syntax, covering fundamental operations from understanding relational databases' structure to advanced query techniques. The guide begins with **Database Anatomy**, explaining how structured tables use primary and foreign keys to ensure data consistency within relational databases. It then delves into the core operation of reading data through **SELECT** statements, emphasizing efficiency by specifying necessary columns, using aliases for clarity, ensuring uniqueness via distinct values, and implementing pagination methods such as keyset pagination for handling large datasets. In terms of refining query results, techniques like **Filtering with WHERE**, utilizing conditions such as IN, BETWEEN, LIKE/ILIKE, and managing NULLs effectively are essential. Sorting is addressed through **ORDER BY**, which should be used on indexed columns to maintain performance efficiency. The article further explores **Aggregation and GROUP BY** operations, highlighting the use of aggregate functions (COUNT, SUM, AVG, MIN, MAX) to transform raw data into meaningful metrics while applying HAVING filters post-aggregation. It underscores the importance of **JOINs** for connecting tables without redundancy through various JOIN types like INNER, LEFT, and SELF JOIN. In discussing data modification, it introduces **Data Modification Language (DML)** operations—INSERT, UPDATE, DELETE, and TRUNCATE—which alter data while requiring careful handling to maintain database integrity. 
The article also covers **Schema Changes (DDL)** operations such as CREATE TABLE, ALTER TABLE, and DROP TABLE that modify the structure of databases, necessitating caution due to their significant impact. Complex logic is addressed using **Subqueries and Common Table Expressions (CTEs)** for improved readability and modularity. Additionally, **Window Functions** are introduced as tools enabling advanced computations over related rows without reducing row count, facilitating operations like ranking and moving averages. Understanding the **Logical Execution Order** of SQL is critical for optimizing queries efficiently. The article also discusses the role of **Indexes and Performance**, emphasizing that while indexes enhance data retrieval speed, their effectiveness depends on selecting only necessary columns to be indexed. In conclusion, mastering SQL involves not only grasping its core functionalities but also avoiding common pitfalls like using SELECT * indiscriminately. Leveraging advanced features such as window functions and CTEs is vital for optimized data manipulation and analysis, making SQL proficiency crucial in managing persistent data amidst evolving technological landscapes. Keywords: #phi4, CTEs, JOINs, PostgreSQL, SELECT, SQL, aggregation, database, filtering, indexes, performance, queries, schema changes, window functions
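The keyset pagination technique mentioned above can be sketched in memory; the SQL equivalent is a composite-key comparison such as `WHERE (created_at, id) > (%s, %s) ORDER BY created_at, id LIMIT %s`, which stays fast on large tables where big OFFSETs degrade.

```python
# Keyset pagination over an in-memory list sorted by (created_at, id); the
# cursor is the composite key of the last row on the previous page.

rows = sorted(
    [{"created_at": f"2024-01-{d:02d}", "id": d} for d in range(1, 8)],
    key=lambda r: (r["created_at"], r["id"]),
)

def page_after(rows, cursor, limit):
    """Return the next `limit` rows strictly after the (created_at, id) cursor."""
    return [r for r in rows if (r["created_at"], r["id"]) > cursor][:limit]

first = page_after(rows, ("", 0), 3)                       # first page
cursor = (first[-1]["created_at"], first[-1]["id"])        # remember last key
print([r["id"] for r in page_after(rows, cursor, 3)])      # [4, 5, 6]
```

Unlike OFFSET, each page starts from an indexed key comparison, so the cost of fetching page 1,000 is the same as page 1.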
    The google logo   jsdev.space 13 days ago
3141.  HN A More Time Zone Tolerant Datetime Class
The article highlights challenges encountered with Python's `datetime` class due to its inconsistent handling of timezone-aware and unaware (naive) datetime objects, which frequently results in errors during comparison or manipulation. This issue is particularly burdensome for the author, who developed a web application named billtracker using Flask and SQLAlchemy. A specific problem arose when a TypeError was triggered by comparing aware and naive datetime objects. To address this, the author considered creating a `datetime_tolerant` subclass to facilitate better handling of these comparisons; however, challenges emerged because external libraries often return both types indiscriminately. Attempts to modify the original `datetime` class through monkeypatching were unsuccessful due to its immutable nature. The proposed solution involves gradually refactoring billtracker's codebase to adopt this new subclass, a process that demands substantial effort and carries the risk of incomplete implementation. While the author desires an innate resolution within Python's datetime handling to circumvent these issues, they recognize that such modifications are outside their influence. The narrative underscores both the practical challenges faced by developers working with datetime objects in Python and the broader need for more robust solutions within the language itself. Keywords: #phi4, Aware Datetimes, Billtracker, Comparison Operators, Conversion, Database, Datetime Class, Error Handling, Flask, Immutable, Local Times, Monkeypatch, Naive Datetimes, PostgreSQL, Proof of Concept, Python, SQLAlchemy, Subclass, Time Zone, datetime_tolerant
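One shape such a subclass could take is sketched below. Treating a naive datetime as UTC during comparison is this sketch's assumption, not necessarily the article author's choice.

```python
from datetime import datetime, timezone

# Sketch of a datetime_tolerant subclass: when an aware instance is compared
# against a naive one, assume the naive value is UTC instead of raising
# TypeError.

class datetime_tolerant(datetime):
    def _coerce(self, other):
        if (isinstance(other, datetime) and other.tzinfo is None
                and self.tzinfo is not None):
            return other.replace(tzinfo=timezone.utc)
        return other

    def __lt__(self, other):
        return super().__lt__(self._coerce(other))

    def __eq__(self, other):
        return super().__eq__(self._coerce(other))

    __hash__ = datetime.__hash__  # defining __eq__ would otherwise drop hashing

aware = datetime_tolerant(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
naive = datetime(2024, 1, 1, 13, 0)  # plain stdlib datetime, no tzinfo
print(aware < naive)  # True, where plain datetimes raise TypeError
```

As the article notes, a subclass only helps where the code controls the operands; libraries that return plain `datetime` objects still force refactoring at each call site.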
    The google logo   shallowsky.com 13 days ago
3154.  HN Show HN: PgDog – scale Postgres without changing the app
PgDog is an innovative open-source tool designed to scale PostgreSQL databases without necessitating changes to application code or undergoing database migrations. Developed by Lev and Justin, it offers multifaceted functionality as a connection pooler, load balancer, and database sharder, facilitating efficient data management across multiple shards. A standout feature is its ability to execute direct-to-shard queries with high reliability, supporting aggregate functions like count(), min(), and max() without the need for application refactoring. Key functionalities include support for over 10 data types, atomic cross-shard writes through two-phase commits, and omnisharded tables that ensure synchronized common data. Additionally, it allows multi-tuple inserts and mutation of sharding keys. PgDog provides a unique sequence generator that spans multiple shards, potentially replacing UUIDs with integer primary keys. The resharding process is optimized via parallelized logical replication streams. The tool features smart load balancing to manage failovers across managed Postgres services effectively. It also supports manual read/write separation through connection parameters or query comments and manages connection pooling by automatically rolling back unfinished transactions during application crashes, thereby preventing database overload. Its configurability is bolstered by comprehensive documentation available for users to explore its full capabilities. In summary, PgDog empowers applications to seamlessly interact with multiple databases using the same connection by simply including a sharding key, offering an efficient and scalable solution for PostgreSQL management. 
Keywords: #phi4, PgDog, PostgreSQL, SQL, aggregate functions, atomic writes, connection pooler, connection storm, cross-shard queries, database sharder, failover, grouping, load balancer, logical replication, multi-tenant, multi-tuple inserts, open source, read/write separation, resharding, sharding, sorting, transaction management, unique sequence
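Direct-to-shard routing boils down to mapping a sharding key deterministically to one of N shards, so every query carrying the key goes to exactly one backend. The hash below is purely illustrative; it is not PgDog's actual routing function.

```python
import hashlib

# Toy sharding-key router: same key always lands on the same shard.

def shard_for(key, num_shards):
    digest = hashlib.sha256(str(key).encode()).hexdigest()
    return int(digest, 16) % num_shards

for customer_id in (1, 2, 3, 42):
    print("customer", customer_id, "-> shard", shard_for(customer_id, 4))
```

A pooler that sits between the application and the shards can apply exactly this kind of mapping transparently, which is why the application code does not change.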
    The google logo   pgdog.dev 13 days ago
   https://postgresisenough.dev   13 days ago
   https://github.com/agoodway/postgresisenough?tab=readme   13 days ago
   https://github.com/pgdogdev/pgdog/pull/784   13 days ago
   https://github.com/pgdogdev/pgdog/pull/744   13 days ago
   https://www.merklemap.com/   13 days ago
   https://github.com/pgdogdev/pgdog/releases   13 days ago
   https://github.com/pgdogdev/pgdog/releases.atom   13 days ago
   https://docs.pgdog.dev/features/sharding/2pc/   13 days ago
   https://docs.pgdog.dev/features/sharding/explain&#   13 days ago
3184.  HN Managed OpenClaw hosting, 60-second provisioning
ClawHosters specializes in managed OpenClaw hosting, offering rapid deployment with prewarmed VPSs that enable provisioning within 30 to 60 seconds. The service enhances efficiency by pre-configuring essential components such as Docker, Nginx, SSL certificates, and firewall rules on idle servers. This preparation allows for swift customization according to customer needs. ClawHosters employs Traefik and Redis for dynamic routing management, supporting subdomain-specific access and optional HTTP authentication. To bolster security, Hetzner Cloud firewalls restrict external access solely to the main server's IP, ensuring that instances remain inaccessible directly from the internet. The platform includes a managed LLM proxy designed for secure AI interactions, preventing exposure of API keys by utilizing IP-based authentication. This configuration supports streaming data reassembly and logging alongside usage metrics monitoring. The technological stack is comprised of Rails 8, PostgreSQL, Sidekiq, and Docker, all integrated within a single namespace in a Rails application developed over two to three weeks. Currently, ClawHosters caters to more than 180 customers, including 36 paying clients, across three pricing tiers priced from €19/month to €59/month. The company enhances its offerings with an affiliate program that provides a 15% recurring commission and additional features such as optional SSH access and private network connectivity through ZeroTier. Keywords: #phi4, API keys, BYOK mode, Docker, HTTP auth, Hetzner Cloud, LLM proxy, Managed OpenClaw, Nginx, PostgreSQL, Rails 8, Redis, SSE, SSL, Sidekiq, Traefik, VPS, ZeroTier, affiliate program, firewall, prewarming, provisioning, snapshot-based
    news.ycombinator.com 13 days ago
3219.  HN Show HN: Sayiir – Durable, simple, workflow engine in Rust, no replay
Sayiir is an open-source, durable workflow engine primarily implemented in Rust, offering Python and Node.js bindings designed to enhance workflow efficiency through continuation-driven execution. This innovative approach enables workflows to resume from their last saved state following a crash, thus eliminating the need for replaying entire histories as seen in traditional engines like Temporal, Airflow, or Prefect. Key features of Sayiir include no replay overhead, absence of determinism constraints allowing flexibility in task execution, and utilization of native language constructs rather than requiring Domain-Specific Languages (DSLs). Its architecture is graph-based with a hexagonal design that maintains stability across its core components, bindings, and backend. Sayiir emphasizes ease of use by incorporating familiar language idioms, thus reducing the learning curve for users. It facilitates straightforward workflow control functions such as cancel, pause, and resume operations while offering robust multi-language support through type-safe bindings. The engine supports pluggable persistence with PostgreSQL, utilizing either JSON or zero-copy binary codecs to enhance performance. Sayiir is actively developed under the MIT license, encouraging community contributions via GitHub and Discord engagement. Comprehensive documentation and resources are accessible on docs.sayiir.dev, providing guides, tutorials, and API references for its supported languages. While Sayiir caters primarily to those seeking simplified workflow management without heavy infrastructure needs, it also offers an enterprise server version for users requiring more robust infrastructure support. Overall, Sayiir aims to revolutionize workflow management by allowing workflows to be developed akin to regular code, thereby eliminating the necessity for separate servers or complex orchestration setups. 
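The continuation-driven idea above can be illustrated with a toy checkpoint table: each completed step's result is persisted, so after a crash the workflow resumes from the last saved state rather than replaying its whole history. This is a conceptual sketch, not Sayiir's real API; the step names and dict-backed persistence stand in for its PostgreSQL backend.

```python
# Toy illustration of checkpoint-based recovery (not Sayiir's actual API).
checkpoints: dict[str, str] = {}  # stands in for durable PostgreSQL storage

def run_workflow(order_id: str) -> str:
    # Each step runs only if no checkpoint exists for it yet.
    if "charged" not in checkpoints:
        checkpoints["charged"] = f"charge:{order_id}"   # step 1
    if "shipped" not in checkpoints:
        checkpoints["shipped"] = f"ship:{order_id}"     # step 2
    return "done"

# Simulate a crash after step 1 by seeding its checkpoint, then "restart":
checkpoints["charged"] = "charge:42"
result = run_workflow("42")  # step 1 is skipped — no replay, only step 2 runs
```

Contrast with replay-based engines, where the restarted worker would re-execute (or re-derive) step 1 from the event history before reaching step 2.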
Keywords: #phi4, MIT licensed, Sayiir, Nodejs, PostgreSQL, Python, Rust, checkpointing, continuation-based execution, durable, graph-based, multi-language bindings, no replay, pluggable persistence, workflow engine
    github.com 13 days ago
3220.  HN Fixing ORM slowness by 80% with strategic PostgreSQL indexing
The article examines how PostgreSQL performance was significantly enhanced by addressing inefficiencies caused by ORM-generated queries through strategic indexing, rather than altering application queries. An enterprise customer experienced high read Input/Output Operations Per Second (IOPS), slow page loads, and delayed reports during peak hours, despite having optimized memory parameters, functioning autovacuum processes, and adequate hardware resources. The root cause of these issues was identified as ORM limitations that led to excessive sequential scans on large tables, some exceeding 41 million rows, resulting in increased disk I/O and slower queries. By implementing targeted indexing techniques within the database, read IOPS were reduced by 80%, thereby dramatically improving overall performance without any modifications to the application code or queries. This solution effectively overcame limitations in query structure control, yielding substantial performance gains. Keywords: #phi4, IOPS, ORM, PostgreSQL, autovacuum, database, disk I/O, hardware resources, indexing, memory parameters, optimization, page loads, peak hours, performance, pg_constraint, pg_index, pg_stat_user_indexes, pg_stat_user_tables, queries, reports, sequential scans
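The diagnostic step implied above — finding large tables with heavy sequential-scan traffic via `pg_stat_user_tables` — can be sketched as a small filter. The thresholds and tuple shape below are illustrative assumptions, not figures from the article.

```python
# Hedged sketch: pick index candidates from pg_stat_user_tables-style stats.
def index_candidates(stats, min_seq_scans=1_000, min_rows=100_000):
    """stats: iterable of (table, seq_scan, n_live_tup) tuples.

    A table is a candidate when it is both large and scanned sequentially
    often — the combination that drives read IOPS up.
    """
    return [table for table, seq_scan, rows in stats
            if seq_scan >= min_seq_scans and rows >= min_rows]

sample = [
    ("orders",   52_000, 41_000_000),  # huge table, scanned constantly
    ("settings", 3,      12),          # tiny lookup table — leave alone
]
candidates = index_candidates(sample)
```

In practice the raw numbers would come from a query such as `SELECT relname, seq_scan, n_live_tup FROM pg_stat_user_tables`, and each candidate would then get a targeted index matching the ORM's filter columns.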
    stormatics.tech 13 days ago
3230.  HN Visualize PostgreSQL plan alternatives using eBPF
`pg_plan_alternatives` is an eBPF-based tool designed to enhance understanding of PostgreSQL's query optimization process by providing visibility into all query plans considered during the planning phase, not just the final chosen one displayed in standard EXPLAIN outputs. It utilizes eBPF technology to trace these plans and offers interactive visualizations that illustrate alternative execution paths, thereby offering insights into the optimizer’s decision-making. The tool is compatible with PostgreSQL versions 17 and 18 and generates JSON-formatted output for easy processing. Installation requires root privileges due to its reliance on eBPF and can be accomplished using `pip install pg_plan_alternatives`. Users must identify the PostgreSQL server binary, specify necessary paths such as `nodetags.h`, and run queries within a test environment to capture planning alternatives. The tool allows visualization of captured plans in formats like PNG, HTML, and SVG via the `visualize_plan_graph` command. It effectively demonstrates tracing across various query types, including simple SELECTs, JOIN operations, and WHERE clause-containing queries, highlighting how different execution strategies are evaluated by the optimizer. To use `pg_plan_alternatives`, a Linux kernel version 4.9 or higher is required alongside Python 3.10+; root privileges must be granted, and PostgreSQL versions from 14 to 18 with debug symbols need to be installed. Additional dependencies include BCC, graphviz, and psycopg2. The tool emphasizes the importance of specific compilation flags for ensuring compatibility with uprobes in PostgreSQL and can be installed system-wide or within a virtual environment under the MIT License. Keywords: #phi4, BCC, EXPLAIN, JSON, PostgreSQL, debug symbols, eBPF, graphviz, optimizer, pg_plan_alternatives, query plans, tracing, uprobes, visualization
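Since the tool emits JSON describing every plan the optimizer considered, downstream processing is straightforward. The schema below is an assumption for illustration only — the real output format may differ; the point is that the chosen plan is the lowest-cost alternative among those traced.

```python
# Hypothetical consumer of pg_plan_alternatives-style JSON output.
import json

raw = json.dumps({  # assumed schema, not the tool's documented format
    "query": "SELECT * FROM t JOIN u ON t.id = u.t_id",
    "plans": [
        {"root": "NestLoop",  "total_cost": 812.5, "chosen": False},
        {"root": "HashJoin",  "total_cost": 145.2, "chosen": True},
        {"root": "MergeJoin", "total_cost": 301.7, "chosen": False},
    ],
})

def cheapest_plan(doc: str) -> str:
    # The optimizer keeps the alternative with the lowest total cost.
    plans = json.loads(doc)["plans"]
    return min(plans, key=lambda p: p["total_cost"])["root"]

best = cheapest_plan(raw)
```

Seeing the rejected NestLoop and MergeJoin alternatives — invisible in a plain EXPLAIN — is exactly the visibility the tool is built to provide.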
    github.com 14 days ago
3232.  HN Show HN: Swarm AI – Shared memory layer for AI agents (self-hosted, open source)
Swarm AI is an innovative open-source platform designed to enhance the efficiency of multiple AI agents by providing a shared memory layer that synchronizes user profiles across these systems. The main advantage of Swarm AI lies in its ability to eliminate redundant requests for information among various agents, as it maintains a unified profile accessible by all connected entities. This synchronization capability is pivotal in environments where multiple AI agents operate simultaneously and need consistent data access. The platform's key features include cross-agent synchronization, allowing seamless read and write operations on shared user profiles without requiring SDKs or configuration files—users simply share a URL with the agents. Swarm AI supports multi-user setups and accommodates various layered profiles such as identity, preferences, work, and context, alongside full-text search capabilities facilitated by FTS5. Additionally, it provides optional semantic search through an OpenAI-compatible embedding API. Swarm AI also excels in creating distinct agent personas and enabling dynamic onboarding via self-readable API documentation (`llms.txt`). It offers robust user management tools, including JWT login, admin controls for disabling users or resetting tokens, making it versatile for different organizational needs. The system's flexibility extends to its database support; while SQLite is used for development, PostgreSQL can be leveraged for scaling. Setting up Swarm AI is straightforward: users can install the platform using `npx @peonai/swarm` and connect agents by sharing an onboarding prompt URL post-admin account creation. For demonstration purposes, a demo instance is available at hive.peonai.net, with recommendations to use disposable agents due to privacy considerations. Architecturally, Swarm AI utilizes Next.js for its web framework and can be deployed using a systemd service via an interactive CLI, ensuring operational efficiency. 
The system prioritizes data security by maintaining strict user data isolation and includes auditing capabilities for tracking profile changes and API activities. Licensed under MIT, Swarm AI is both flexible and transparent, offering a comprehensive solution to synchronize AI agent operations while adhering to best practices in software deployment and management. Keywords: #phi4, AI agents, FTS5 full-text search, JWT login, Nextjs, PostgreSQL, REST API, SQLite, Swarm AI, audit log, authorization, cross-agent synchronization, dynamic API docs, health check, mobile-friendly, multi-user, open source, self-hosted, semantic search, shared memory, systemd service, tenant isolation, user profile
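The layered-profile idea described above (identity, preferences, work, context merged into one shared profile that every agent reads) can be sketched as a simple merge. The layer precedence and field names are assumptions for illustration, not Swarm AI's documented behavior.

```python
# Illustrative merge of layered profiles into the single shared profile.
def merge_profile(layers: dict[str, dict]) -> dict:
    merged: dict = {}
    # Assumed precedence: later (more situational) layers override earlier ones.
    for layer in ("identity", "preferences", "work", "context"):
        merged.update(layers.get(layer, {}))
    return merged

profile = merge_profile({
    "identity":    {"name": "Ada"},
    "preferences": {"lang": "en", "theme": "dark"},
    "context":     {"theme": "light"},  # most recent context wins
})
```

Because every connected agent reads the same merged view, none of them needs to re-ask for information another agent has already recorded.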
    github.com 14 days ago
3253.  HN Show HN: InferShield – A Lightweight Orchestration-Layer Attack Detector (POC)
InferShield is an open-source Proof of Concept designed to detect attacks targeting the orchestration layer, including Kubernetes control planes and cloud APIs, by tracking sessions and correlating events to identify abnormal activity patterns, such as those used in privilege escalation. It also functions as a security platform for AI applications, protecting against prompt injection, data exfiltration, and PII leaks by integrating with various Large Language Model (LLM) providers. The tool offers multiple deployment options: a browser extension, a security proxy, and a self-hosted platform, all providing enterprise-grade features like session tracking and risk scoring. The current versions of InferShield are production-ready, featuring user authentication, API key management, and PII detection, among other functionalities. Future developments aim to expand language support, introduce custom policy builders, and create mobile applications. Additionally, the project plans to enhance its capabilities with multi-provider support, compliance packs, and team account features. InferShield is MIT-licensed and encourages community contributions to foster a collaborative security environment through community-driven development. The project provides a roadmap for upcoming enhancements, aiming to broaden its functionality and adaptability in securing orchestration layers and AI applications. Keywords: #phi4, API key management, API misconfigurations, Chrome Web Store, Docker, GDPR compliance, InferShield, JWT sessions, Kubernetes, LLM applications, MIT license, Nodejs, PII leakage, PostgreSQL, Prometheus metrics, Proof of Concept, Sentry integration, attack detection, browser extension, data exfiltration, event correlation, multi-step attacks, open-source, orchestration-layer, prompt injection, risk scoring, security proxy, self-hosted platform, session tracking
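The session-tracking-plus-risk-scoring approach above can be sketched as event correlation per session: individually benign events accumulate into a score, and a session crossing a threshold is flagged as a possible multi-step attack. The event names, weights, and threshold below are illustrative assumptions.

```python
# Toy sketch of per-session event correlation and risk scoring.
RISK = {"role_enum": 2, "secret_read": 5, "priv_escalation": 8}  # assumed weights

def score_session(events: list[str]) -> int:
    return sum(RISK.get(e, 0) for e in events)

def flag_sessions(sessions: dict[str, list[str]], threshold: int = 10) -> list[str]:
    # A single event stays below the threshold; a correlated chain does not.
    return [sid for sid, events in sessions.items()
            if score_session(events) >= threshold]

flagged = flag_sessions({
    "s1": ["role_enum", "secret_read", "priv_escalation"],  # escalation chain
    "s2": ["secret_read"],                                  # benign in isolation
})
```

This is the essence of detecting privilege-escalation patterns at the orchestration layer: no single API call looks malicious, but the correlated sequence within one session does.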
    github.com 14 days ago
3271.  HN Show HN: Sayiir Powerful/durable Rust workflow engine – Python/Node.js bindings
Sayiir is an open-source workflow engine built on a Rust core that streamlines workflow creation using native programming language constructs, eliminating the need for custom Domain Specific Languages (DSLs). Designed to address limitations found in engines like Temporal, Airflow, and Prefect, Sayiir avoids heavy infrastructure demands by employing continuation-based execution. This feature enables workflows to resume from the last saved state after a failure rather than restarting from scratch, enhancing efficiency and reliability. Key features of Sayiir include checkpoint-based recovery for resuming tasks without redundancy, no determinism constraints allowing tasks to freely interact with external APIs and use any libraries, and native language constructs for defining workflows in Python, TypeScript, and Rust. It offers pluggable persistence with PostgreSQL support using codecs like JSON or zero-copy binary (rkyv), alongside multi-language support through type-safe bindings. Sayiir's goal is to reduce the learning curve by leveraging familiar idioms from supported languages without necessitating additional infrastructure deployment. Its control operations include canceling, pausing, and resuming workflows, making it versatile for different operational needs. The project actively seeks contributors, maintainers, sponsors, and early adopters, providing comprehensive documentation and community support via GitHub issues and Discord. Operating under the MIT license, Sayiir focuses on developer-friendly practices and flexible backend implementations, ensuring both rapid execution and long-term reliability. Keywords: #phi4, MIT licensed, Nodejs, PostgreSQL, Python, Rust, Sayiir, async code, bindings, checkpointing, continuation-based execution, crash recovery, durable, enterprise server, graph-based, multi-language, no DSL, pluggable persistence, replay-free, task orchestration, workflow engine
    github.com 14 days ago
3287.  HN Sidemantic: Universal Metrics Layer
Sidemantic is an open-source initiative aimed at establishing a universal metrics layer to standardize data definitions across various platforms and tools, supporting over 15 semantic model formats such as YAML, Python, SQL, Cube, dbt MetricFlow, LookML, and others. It is compatible with databases like DuckDB, PostgreSQL, BigQuery, Snowflake, and ClickHouse. The project offers a SQL query interface that facilitates multi-model joins, auto-detection of model formats, and features like pre-aggregations, predicate pushdown for enhanced query speed, Jinja2 templating, and a PostgreSQL wire protocol server. Installation options include the `uv` package manager or Docker Hub, with support for CLI commands, JupyterLab widgets, and integration with Marimo. Sidemantic enables model definition in SQL, YAML, or Python, provides a command-line interface for queries, and allows interaction with semantic layers through a Python API. It includes demos for platforms like Tableau and Superset but is still developing, with potential challenges in complex tasks and serving layers via the Postgres protocol server. Users can expect flexibility and compatibility, though they should be prepared for rough edges as they navigate its advanced features. Keywords: #phi4, API, Agent Skill, AtScale SML, BSL, BigQuery, CLI, ClickHouse, Conversion, Cube, Databricks, Docker, DuckDB, GoodData LDM, Hex, Jinja2 Templating, LookML, Malloy, Metrics Explorer, Metrics Layer, MotherDuck, OSI, Omni, PostgreSQL, Pre-aggregations, Predicate Pushdown, Protocol Server, Python, Rill, SQL, Segments, Semantic Layer, Semantic Models, Snowflake Cortex, Spark SQL, Superset, ThoughtSpot TML, YAML, dbt MetricFlow
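The semantic-layer idea above — define a metric once, compile it to SQL for whichever backend runs it — can be sketched with a tiny compiler. The model shape (a plain dict) and the generated SQL are illustrative assumptions, not Sidemantic's actual schema or output.

```python
# Hedged sketch of compiling a declarative metric model to SQL.
def compile_metric(model: dict) -> str:
    agg = model["agg"].upper()
    return (f"SELECT {model['dimension']}, {agg}({model['measure']}) "
            f"AS {model['name']} FROM {model['table']} "
            f"GROUP BY {model['dimension']}")

# Defined once; the same model could target DuckDB, PostgreSQL, Snowflake, etc.
revenue = {"name": "revenue", "table": "orders", "measure": "amount",
           "agg": "sum", "dimension": "region"}
sql = compile_metric(revenue)
```

The value of a metrics layer is that every tool querying "revenue" gets this same definition, instead of each dashboard re-deriving its own slightly different SQL.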
    github.com 14 days ago
3288.  HN Pentagi: Autonomous AI Agents for complex penetration testing tasks
PentAGI is an advanced AI-driven penetration testing tool designed to enhance security operations through autonomous functionality and comprehensive features. It uses artificial intelligence to automate complex security tasks within a secure environment, equipped with over 20 professional tools such as nmap, metasploit, and sqlmap. The system utilizes AI agents for autonomous operation, allowing it to determine and execute penetration testing steps independently. Its smart memory system retains successful strategies for future use, while its knowledge integration leverages Neo4j's graph database for better context understanding. PentAGI's architecture is modular, supporting horizontal scaling through a microservices design and ensuring secure deployment with Docker containers that segregate core services, monitoring, and analytics. It also features advanced web intelligence and search capabilities by integrating various APIs to collect comprehensive data. The tool supports team delegation by allowing specialized AI agents to manage different testing aspects, enhancing its efficiency. For installation, PentAGI requires Docker along with a minimum configuration of 2 vCPUs, 4GB RAM, and 20GB disk space, providing options for both interactive installers and manual setups via Docker Compose. It offers programmatic integration through REST and GraphQL APIs, using Bearer token authentication to facilitate seamless incorporation into automation pipelines. The system emphasizes security best practices by ensuring secure handling of API tokens, including regular rotation and monitoring to prevent unauthorized access. Additionally, PentAGI provides detailed configuration guidance for managing Large Language Models (LLMs) with various providers like OpenAI, Anthropic, Gemini, AWS Bedrock, and Ollama. It outlines environment variables setup, proxy configurations via LiteLLM, and testing procedures using utilities such as `ctester` and `ftester`. 
The guide includes advanced integration options for monitoring tools like Langfuse, observability services such as OpenTelemetry, and OAuth integrations for authentication through GitHub and Google. Furthermore, it covers development environment setup for backend (Golang) and frontend (Node.js), including testing, linting, and SSL certificate generation. The document also addresses building Docker images with options for multi-platform builds using `buildx` and acknowledges licensing considerations related to the integration of VXControl Cloud SDK under AGPL-3.0 terms. This comprehensive guide ensures users can effectively configure, deploy, and optimize LLMs within PentAGI, catering to diverse operational needs from basic setups to advanced integrations and performance optimization. Keywords: #phi4, AI Delegation, ANTHROPIC_API_KEY, API Key, API Tokens, AWS Bedrock, Agent Interaction, Anthropic Provider, Autonomous AI, BEDROCK_ACCESS_KEY_ID, Chain Summarization, ClickHouse, Docker, Docker Image, Episodic Memory, Foundation Models, GEMINI_API_KEY, Gemini Integration, Grafana, GraphQL, Graphiti Knowledge, HTTPS, Knowledge Graph, LLM Providers, LLM Server, LLM_SERVER_CONFIG_PATH, LLM_SERVER_KEY, LLM_SERVER_LEGACY_REASONING, LLM_SERVER_MODEL, LLM_SERVER_PRESERVE_REASONING, LLM_SERVER_PROVIDER, LLM_SERVER_URL, Langfuse, LiteLLM Proxy, Local Inference, Local LLM, Long-term Storage, Memory System, Microservices, MinIO, Model Configuration, Monitoring, Multi-turn Conversations, Multimodal Support, Neo4j, Neo4j Database, OAuth Authentication, OLLAMA_SERVER_URL, OPEN_AI_KEY, Ollama, Ollama Inference, OpenAI Models, Parallel Testing, Penetration Testing, PentAGI, Performance Optimization, PostgreSQL, Prometheus, Proxy Access, REST API, Reasoning Capability, Reasoning Format, Redis, SSL Certificates, Sandbox, Scalable Architecture, Secure Authentication, Security Agents, Security Analysis, Security Hardening, Security Tools, Self-Hosted Solution, Task Queue, Vector Store, Web Scraper, Working Memory, YAML Configuration, configuration files, ctester Utility, environment variables, knowledge storage, memory management, pgvector, semantic search, vector embeddings
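The programmatic-integration path mentioned above (REST and GraphQL with Bearer token authentication) can be sketched as request construction. Only the auth scheme comes from the summary; the endpoint URL and the GraphQL query below are hypothetical placeholders.

```python
# Hypothetical sketch of an authenticated GraphQL request to PentAGI.
def build_request(token: str, query: str) -> dict:
    return {
        "url": "https://pentagi.local/graphql",  # illustrative endpoint, not real
        "headers": {
            "Authorization": f"Bearer {token}",  # the documented auth scheme
            "Content-Type": "application/json",
        },
        "json": {"query": query},
    }

req = build_request("s3cret", "{ flows { id status } }")
# A real client would pass these fields to requests.post(...) or similar.
```

Because the token travels in a header rather than the URL, it stays out of access logs — one reason the summary stresses token rotation and monitoring.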
    github.com 14 days ago
3327.  HN Show HN: Semantic search over Hacker News, built on pgvector
The "Semantic Search over Hacker News" project introduces a semantic search engine developed using pgvector, hosted at ask.rivestack.io. This system indexes posts and comments from Hacker News via PostgreSQL, employing the HNSW index for vector embeddings generated by OpenAI's model. It allows users to perform searches based on meaning rather than exact keywords, delivering fast and relevant results typically under 50 milliseconds without requiring a separate vector database. This initiative aims to enhance search functionalities on Hacker News while testing Rivestack, which is a managed PostgreSQL service incorporating pgvector. Key insights from the project reveal that the HNSW index surpasses IVFFlat in terms of recall efficiency at scale and highlights the benefits of integrating vector embeddings with relational data within the same database. Recent enhancements to pgvector's speed have rendered dedicated vector databases unnecessary for various applications. The semantic search engine is freely accessible, and Rivestack offers a free tier for users interested in similar solutions. The creator invites inquiries about the architecture or tuning of pgvector, fostering further exploration and understanding of this technology. Keywords: #phi4, HNSW index, Hacker News, OpenAI, PostgreSQL, Rivestack, Semantic search, architecture, database, embeddings, managed service, nearest-neighbor, pgvector, recall, relational data, tuning, vector searches
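Conceptually, the pgvector query ranks stored posts by similarity between their embeddings and the query embedding; inside PostgreSQL the HNSW index does this approximately and fast (e.g. via an `ORDER BY embedding <=> $1 LIMIT k` query). The pure-Python sketch below shows the same ranking by cosine similarity on toy 2-dimensional vectors — real OpenAI embeddings have hundreds of dimensions.

```python
# Pure-Python illustration of embedding-based ranking (what pgvector does in-DB).
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "indexed" posts with made-up 2-D embeddings:
posts = {"pg tuning": [0.9, 0.1], "rust async": [0.1, 0.9]}
query = [0.8, 0.2]  # stand-in embedding for a query like "postgres performance"

best = max(posts, key=lambda title: cosine(query, posts[title]))
```

Exhaustive scoring like this is exact but O(n) per query; HNSW trades a little recall for sub-50 ms approximate nearest-neighbor lookups at scale, which is the comparison the project draws against IVFFlat.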
    ask.rivestack.io 14 days ago
3358.  HN Pigsty – The Floss PG RDS – Pigsty
Pigsty is an open-source tool developed by Ruohang Feng to simplify the setup and management of local PostgreSQL instances. It addresses user needs by streamlining complex processes, particularly in installing extensions, making it a favored option as noted by Vedran B., who also mentions its endorsement from the official PostgreSQL organization. Users find Pigsty advantageous over cloud-managed services like RDS for production environments that are self-hosted, highlighted by Terry Zheng and Darragh Ó Riordan. It provides a comprehensive suite of tools for managing various aspects of PostgreSQL infrastructure, graphics, and services, which has garnered appreciation from users such as Paul Hewson and François-Guillaume Ribreau who seek robust solutions for self-hosting databases. While it excels in its domain, there is interest in similar tools for other database systems like Redis and MySQL. Overall, Pigsty stands out for its feature-rich capabilities and user-friendly approach to local PostgreSQL management. Keywords: #phi4, Pigsty, PostgreSQL, RDS, alternative, database, extension manager, graphics, infras, instance, open-source, production, self-hosted, service, software, toolbox
    pigsty.io 14 days ago
3404.  HN Show HN: X-Ray – Filter your X (Twitter) timeline by country
X-Ray is a free Chrome extension designed to enhance users' experience on Twitter by filtering content based on geographical location. It leverages publicly available location data from user profiles, displaying corresponding badges on tweets and enabling users to blur or block content from selected countries. Built using technologies such as Manifest V3, Node.js, and PostgreSQL, X-Ray efficiently intercepts and caches responses from Twitter's GraphQL API to bypass rate limits. The extension supports over 170 countries and is available in nine languages, providing real-time statistics and a customizable dark theme. It functions across various Twitter pages without requiring users to log in or share personal data, emphasizing its commitment to privacy. Developers encourage technical inquiries while highlighting the tool’s user-centric and privacy-focused design. Keywords: #phi4, Chrome extension, GraphQL API, Nodejs backend, PostgreSQL, Twitter timeline, X-Ray, block tweets, blur tweets, country filter, location badges, manifest V3, privacy, real-time stats
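The filtering step described above — annotating tweets with the author's profile country and dropping (or blurring) those from blocked countries — reduces to a simple predicate over the intercepted timeline data. The field names below are illustrative assumptions, not the GraphQL response's real shape.

```python
# Sketch of country-based timeline filtering (hypothetical field names).
def filter_timeline(tweets: list[dict], blocked: set[str]) -> list[dict]:
    # Tweets with no public location stay visible — there is nothing to match.
    return [t for t in tweets if t.get("country") not in blocked]

timeline = [
    {"id": 1, "country": "US"},
    {"id": 2, "country": "FR"},
    {"id": 3},  # profile exposes no location
]
visible = filter_timeline(timeline, blocked={"FR"})
```

In the extension this predicate runs client-side over cached GraphQL responses, which is how it avoids extra API calls, rate limits, and any server-side processing of user data.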
    chromewebstore.google.com 15 days ago