Show HN: Fractional jobs – part-time roles for engineers
I'm Taylor. I spent about a year as a Fractional Head of Product. It was my first time not in a full-time W2 role, and I quickly learned that the hardest part of the job wasn't doing the product work (I was a PM for 10+ years); it was finding good clients to work with.
So I built Fractional Jobs.
The goal is to help more people break out of W2 life and into their own independent careers by connecting them with great clients to work with.
We find and vet the clients, and then engineers can request intros to any that seem like a good fit. We'll make the intro assuming the client opts in after seeing your profile.
We have 9 open engineering roles right now:
- 2x Fractional CTO
- 2x AI engineers
- 3x full-stack
- 1x staff frontend
- 1x mobile
Comments URL: https://news.ycombinator.com/item?id=44945379
Points: 222
# Comments: 104
Mon, 18 Aug 2025, 9:10 pm
Show HN: I built a toy TPU that can do inference and training on the XOR problem
We wanted to do something very challenging to prove to ourselves that we can do anything we put our minds to. Our reasoning for choosing to build a toy TPU specifically is fairly simple:
- Building a chip for ML workloads seemed cool
- There was no well-documented open source repo for an ML accelerator that performed both inference and training
None of us have real professional experience in hardware design, which, in a way, made the TPU even more appealing, since we couldn't estimate exactly how difficult it would be. As we worked on the initial stages of this project, we established a strict design philosophy: TO ALWAYS TRY THE HACKY WAY. This meant trying out the "dumb" ideas that came to mind first BEFORE consulting external sources. This philosophy ensured we weren't reverse engineering the TPU but rather re-inventing it, and it let us derive many of the TPU's key mechanisms ourselves.
We also wanted to treat this project as an exercise in coding without relying on AI to write for us, since our initial instinct recently has been to reach for LLMs whenever we faced a slight struggle. We wanted to cultivate a certain style of thinking that we could take forward and use to work through difficult problems in any future endeavour.
Throughout this project we tried to learn as much as we could about the fundamentals of deep learning, hardware design, and algorithm design, and we found that the best way to learn this material is to draw everything out and make that our first instinct. On tinytpu.com, you will see how our explanations were inspired by this philosophy.
Note that this is NOT a 1-to-1 replica of the TPU--it is our attempt at re-inventing a toy version of it ourselves.
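To make the workload concrete: "inference and training on the XOR problem" boils down to a tiny two-layer network and a pile of multiply-accumulates. The sketch below is software-only and purely illustrative (it is not our hardware design); it just shows the math such a chip has to implement.

// Purely illustrative: the arithmetic a toy ML accelerator has to support.
// A tiny MLP (2 inputs -> 4 hidden -> 1 output) trained on XOR with plain SGD.
// Everything reduces to multiply-accumulates plus a nonlinearity, which is the
// kind of workload a systolic-array-style chip is built around.
const sigmoid = (x: number) => 1 / (1 + Math.exp(-x));
const rand = () => Math.random() * 2 - 1;

const X = [[0, 0], [0, 1], [1, 0], [1, 1]];
const Y = [0, 1, 1, 0];
const H = 4;                                  // hidden units
const w1 = Array.from({ length: H }, () => [rand(), rand()]);
const b1 = Array.from({ length: H }, rand);
const w2 = Array.from({ length: H }, rand);
let b2 = rand();
const lr = 0.5;

for (let epoch = 0; epoch < 20000; epoch++) {
  for (let i = 0; i < X.length; i++) {
    // Forward pass: two small matrix-vector products plus activations.
    const h = w1.map((w, j) => sigmoid(w[0] * X[i][0] + w[1] * X[i][1] + b1[j]));
    const out = sigmoid(h.reduce((s, v, j) => s + v * w2[j], b2));

    // Backward pass for squared error, then SGD weight updates.
    const dOut = (out - Y[i]) * out * (1 - out);
    for (let j = 0; j < H; j++) {
      const dH = dOut * w2[j] * h[j] * (1 - h[j]);
      w2[j] -= lr * dOut * h[j];
      w1[j][0] -= lr * dH * X[i][0];
      w1[j][1] -= lr * dH * X[i][1];
      b1[j] -= lr * dH;
    }
    b2 -= lr * dOut;
  }
}

// Outputs should end up near 0, 1, 1, 0 (an unlucky random init can land in a
// local minimum; re-running fixes that).
for (let i = 0; i < X.length; i++) {
  const h = w1.map((w, j) => sigmoid(w[0] * X[i][0] + w[1] * X[i][1] + b1[j]));
  const out = sigmoid(h.reduce((s, v, j) => s + v * w2[j], b2));
  console.log(X[i], '->', out.toFixed(3), '(target ' + Y[i] + ')');
}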
Comments URL: https://news.ycombinator.com/item?id=44944592
Points: 88
# Comments: 16
Mon, 18 Aug 2025, 7:52 pm
Show HN: We started building an AI dev tool but it turned into a Sims-style game
Hi HN! We’re Max and Peyton from The Interface (https://www.theinterface.com/).
We started out building an AI agent dev tool, but somewhere along the way it turned into Sims for AI agents.
Demo video: https://www.youtube.com/watch?v=sRPnX_f2V_c.
The original idea was simple: make it easy to create AI agents. We started with Jupyter Notebooks, where each cell could be callable by MCP—so agents could turn them into tools for themselves. It worked well enough that the system became self-improving, churning out content, and acting like a co-pilot that helped you build new agents.
But when we stepped back, what we had was an endless wall of text. And even though it worked, honestly, it was just boring. We were also convinced that it would be swallowed up by the next model's capabilities. We wanted to build something else—something that made AI less of a black box and more engaging. Why type into a chat box all day if you could look your agents in the face, see their confusion, and watch when and how they interact?
Both of us grew up on simulation games—RollerCoaster Tycoon 3, Age of Empires, SimCity—so we started experimenting with running LLM agents inside a 3D world. At first it was pure curiosity, but right away, watching agents interact in real time was much more interesting than anything we’d done before.
The very first version was small: a single Unity room, an MCP server, and a chat box. Even getting two agents to take turns took weeks. Every run surfaced quirks—agents refusing to talk at all, or only “speaking” by dancing or pulling facial expressions to show emotion. That unpredictability kept us building.
Now it’s a desktop app (Tauri + Unity via WebGL) where humans and agents share 3D tile-based rooms. Agents receive structured observations every tick and can take actions that change the world. You can edit the rules between runs—prompts, decision logic, even how they see chat history—without rebuilding.
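To give a feel for that loop, here's roughly the shape of the data involved (hypothetical TypeScript, not our actual schema):

// Hypothetical shapes for illustration only; the real schema differs.
interface Observation {
  tick: number;
  position: { x: number; y: number };                          // which tile the agent is on
  visibleAgents: { name: string; position: { x: number; y: number } }[];
  recentChat: { from: string; text: string }[];
  tileEvents: string[];                                        // e.g. "tile (3,4) caught fire"
}

type Action =
  | { kind: "say"; text: string }
  | { kind: "move"; to: { x: number; y: number } }
  | { kind: "emote"; expression: "happy" | "confused" | "dance" }
  | { kind: "useTool"; tool: string; args: Record<string, unknown> };

// Each tick, the observation is serialized into the agent's prompt, the model's
// reply is parsed into one Action, and that action is applied to the world.
const example: Action = { kind: "say", text: "Anyone else notice the floor is falling?" };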
On the technical side, we built a Unity bridge with MCP and multi-provider routing via LiteLLM, with local model support via Mistral.rs coming next. All system prompts are editable, so you can directly experiment with coordination strategies—tuning how “chatty” agents are versus how much they move or manipulate the environment.
We then added a tilemap editor so you can design custom rooms, set tile-based events with conditions and actions, and turn them into puzzles or hazards. There’s community sharing built in, so you can post rooms you make.
Watching agents collude or negotiate through falling tiles, teleports, landmines, fire, “win” and “lose” tiles, and tool calls for things like lethal fires or disco floors is a much more fun way to spend our days.
Under the hood, Unity’s ECS drives a whole state machine and event system. And because humans and AI share the same space in real time, every negotiation, success, or failure also becomes useful multi-agent, multimodal data for post-training or world models.
Our early users are already using it for prompt-injection testing, social engineering scenarios, cooperative games, and model comparisons.
The bigger vision is to build an open-ended, AI-native sim-game where you can build and interact with anything or anyone. You can design puzzles, levels, and environments, have agents compete or collaborate, set up games, or even replay your favorite TV shows.
The fun part is that no two interactions are ever the same. Everything is emergent, not hard-coded, so the same level played six times will play out differently each time.
The plan is to keep expanding—bigger rooms, more in-world tools for agents, and then multiplayer hosting. It’s live now, no waitlist. Free to play. You can bring your own API keys, or start with $10 in credits and run agents right away: www.TheInterface.com.
We’d love feedback on scenarios worth testing and what to build next. Tell us the weird stuff you’d throw at this—we’ll be in the comments.
Comments URL: https://news.ycombinator.com/item?id=44943986
Points: 117
# Comments: 59
Mon, 18 Aug 2025, 6:51 pm
Show HN: Whispering – Open-source, local-first dictation you can trust
Hey HN! Braden here, creator of Whispering, an open-source speech-to-text app.
I really like dictation. For years, I relied on transcription tools that were almost good, but they were all closed-source. Even many that claimed to be “local” or “on-device” were still black boxes that left me wondering where my audio really went.
So I built Whispering. It’s open-source, local-first, and most importantly, transparent with your data. Your data is stored locally on your device, and your audio goes directly from your machine to a local provider (Whisper C++, Speaches, etc.) or your chosen cloud provider (Groq, OpenAI, ElevenLabs, etc.). For me, the features were good enough that I left my paid tools behind (I used Superwhisper and Wispr Flow before).
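To make "your audio goes directly from your machine to the provider you pick" concrete, here's roughly what those two request paths look like. This is a sketch, not Whispering's actual code, and the local endpoint assumes a whisper.cpp-style server, so adjust it to whatever you run:

// Sketch only: the two kinds of request a local-first dictation app can make.
// In both cases the audio leaves your machine only for the endpoint you chose.
async function transcribe(audio: Blob, target: "local" | "openai"): Promise<string> {
  const form = new FormData();
  form.append("file", audio, "clip.wav");

  if (target === "local") {
    // Assumed local server (e.g. whisper.cpp's example server); endpoint and
    // field names vary, so treat this as a placeholder.
    const res = await fetch("http://localhost:8080/inference", { method: "POST", body: form });
    return (await res.json()).text;
  }

  // Cloud path: straight to the provider's transcription API with your own key.
  form.append("model", "whisper-1");
  const res = await fetch("https://api.openai.com/v1/audio/transcriptions", {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
    body: form,
  });
  return (await res.json()).text;
}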
Productivity apps should be open-source and transparent with your data, but they also need to match the UX of paid, closed-software alternatives. I hope Whispering is near that point. I use it for several hours a day, from coding to thinking out loud while carrying pizza boxes back from the office.
Here’s an overview: https://www.youtube.com/watch?v=1jYgBMrfVZs, and here’s how I personally am using it with Claude Code these days: https://www.youtube.com/watch?v=tpix588SeiQ.
There are plenty of transcription apps out there, but I hope Whispering adds some extra competition from the OSS ecosystem (one of my other OSS favorites is Handy https://github.com/cjpais/Handy). Whispering has a few tricks up its sleeve, like a voice-activated mode for hands-free operation (no button holding), and customizable AI transformations with any prompt/model.
Whispering used to be in my personal GH repo, but I recently moved it as part of a larger project called Epicenter (https://github.com/epicenter-so/epicenter), which I should explain a bit...
I’m basically obsessed with local-first open-source software. I think there should be an open-source, local-first version of every app, and I would like them all to work together. The idea of Epicenter is to store your data in a folder of plaintext and SQLite, and build a suite of interoperable, local-first tools on top of this shared memory. Everything is totally transparent, so you can trust it.
Whispering is the first app in this effort. It’s not there yet regarding memory, but it’s getting there. I’ll probably write more about the bigger picture soon, but mainly I just want to make software and let it speak for itself (no pun intended in this case!), so this is my Show HN for now.
I just finished college and was about to move back with my parents and work on this instead of getting a job…and then I somehow got into YC. So my current plan is to cover my living expenses and use the YC funding to support maintainers, our dependencies, and people working on their own open-source local-first projects. More on that soon.
Would love your feedback, ideas, and roasts. If you would like to support the project, star it on GitHub here (https://github.com/epicenter-so/epicenter) and join the Discord here (https://go.epicenter.so/discord). Everything’s MIT licensed, so fork it, break it, ship your own version, copy whatever you want!
Comments URL: https://news.ycombinator.com/item?id=44942731
Points: 363
# Comments: 109
Mon, 18 Aug 2025, 4:52 pm
Launch HN: Reality Defender (YC W22) – API for Deepfake and GenAI Detection
Hi HN! This is Ben from Reality Defender (https://www.realitydefender.com). We build real-time multimodal and multi-model deepfake detection for Fortune 100s and governments all over the world. (We even won the RSAC Innovation Showcase award for our work: https://www.prnewswire.com/news-releases/reality-defender-wi...)
Today, we’re excited to share our public API and SDK, allowing anyone to access our platform with 2 lines of code: https://www.realitydefender.com/api
Back in W22, we launched our product to detect AI-generated media across audio, video, and images: https://news.ycombinator.com/item?id=30766050
That post kicked off conversations with devs, security teams, researchers, and governments. The most common question: "Can we get API/SDK access to build deepfake detection into our product?"
We’ve heard that from solo devs building moderation tools, fintechs adding ID verification, founders running marketplaces, and infrastructure companies protecting video calls and onboarding flows. They weren’t asking us to build anything new; they simply wanted access to what we already had so they could plug it in and move forward.
After running pilots and engagements with customers, we’re finally ready to share our public API and SDK. Now anyone can embed deepfake detection with just two lines of code, starting at the low price of free.
https://www.realitydefender.com/api
Our new developer tools support detection across images, voice, video, and text — with the former two available as part of the free tier. If your product touches KYC, UGC, support workflows, communications, marketplaces, or identity layers, you can now embed real-time detection directly in your stack. It runs in the cloud, and longstanding clients using our platform have also deployed on-prem, at the edge, or on fully airgapped systems.
SDKs are currently available in Python, Java, Rust, TypeScript, and Go. The first 50 scans per month are free, with usage-based pricing beyond that. If you’re working on something that requires other features or streaming access (like real-time voice or video), email us directly at yc@realitydefender.com
Much has changed since 2022. The threats we imagined back then are now showing up in everyday support tickets and incident reports. We’ve witnessed voice deepfakes targeting bank call centers to commit real-time fraud; fabricated documents and AI-generated selfies slipping through KYC and IDV onboarding systems; and fake dating profiles, AI-generated marketplace sellers, and “verified” influencers impersonating real people. Political disinformation videos and synthetic media leaks have triggered real-world legal and PR crises. Even reviews, support transcripts, and impersonation scripts are increasingly being generated by AI.
Detection has proven harder than we first expected when we began in 2021. New generation methods emerge every few weeks that invalidate prior assumptions. This is why we are committed to building every layer of this ourselves. We don’t license or white-label detection models; everything we deploy is built in-house by our team.
Since our original launch, we’ve worked with tier-one banks, global governments, and media companies to deploy detection inside their highest-risk workflows. But we always believed the need wasn’t limited to large institutions; it’s everywhere. It showed up in YC office hours, in early bug reports, and in group chats after our last HN post.
We’ve taken our time to make sure this was built well, flexible enough for startups, and battle-tested enough to trust in production. The API you can use today is the same one powering many of our enterprise deployments.
Our goal is to make Reality Defender feel like Stripe, Twilio, or Plaid — an invisible, trusted layer that you can drop into your system to protect what matters. We feel deepfake detection is a key component of critical infrastructure, and like any good infrastructure, it should be modular, reliable, and boring (in the best possible way).
Reality Defender is already in the Zoom marketplace and will be on the Teams marketplace soon. We will also power deepfake detection for identity workflows, support platforms, and internal trust and safety pipelines.
If you're building something where trust, identity, or content integrity matter, or if you’ve run into weird edge cases you can’t explain, we’d love to hear from you.
You can get started here: https://realitydefender.com/api
Or you can try us for free two different ways:
1) 1-click add to Zoom / Teams to try in your own calls immediately.
2) Email us up to 50 files at yc@realitydefender.com and we’ll scan them for you — no setup required.
Thanks again to the HN community for helping launch us three years ago. It’s been a wild ride, and we’re excited to share something new. We live on HN ourselves and will be here for all your feedback. Let us know what you think!
Comments URL: https://news.ycombinator.com/item?id=44941580
Points: 77
# Comments: 39
Mon, 18 Aug 2025, 3:16 pm
Show HN: Doxx – Terminal .docx viewer inspired by Glow
I got tired of open file.docx → wait 8 seconds → close Word just to read a document, so I built a terminal-native Word viewer!
What it does:
* View `.docx` files directly in your terminal with (mostly) proper formatting
* Tables actually look like tables (with Unicode borders!)
* Nested lists work correctly with indentation
* Full-text search with highlighting
* Copy content straight to clipboard with `c`
* Export to markdown/CSV/JSON
Why I made this:
Working on servers over SSH, I constantly hit Word docs I needed to check quickly. The existing solutions I'm aware of either strip all formatting (docx2txt) or require GUI apps. I wanted something that felt as polished as [glow](https://github.com/charmbracelet/glow) but for Word documents.
The good stuff:
* 50ms startup vs Word's 8+ seconds
* Works over SSH (obviously)
* Preserves document structure and formatting
* Smart table alignment based on data types (idea sketched just after this list)
* Interactive outline view for long docs
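On the table alignment point: doxx itself is Rust, so this TypeScript snippet just illustrates the heuristic, which is that a column whose cells all parse as numbers gets right-aligned and everything else stays left-aligned.

// Not doxx's code; just the heuristic behind data-type-based alignment.
type Align = "left" | "right";

function columnAlignment(cells: string[]): Align {
  const nonEmpty = cells.map(c => c.trim()).filter(c => c.length > 0);
  const numeric = nonEmpty.every(c => !Number.isNaN(Number(c.replace(/[,$%]/g, ""))));
  return nonEmpty.length > 0 && numeric ? "right" : "left";
}

console.log(columnAlignment(["$1,200", "$980"]));  // "right" (numbers line up by magnitude)
console.log(columnAlignment(["EMEA", "APAC"]));    // "left"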
Built with Rust + ratatui and heavily inspired by Charm's [glow](https://github.com/charmbracelet/glow) package for viewing Markdown in the CLI (built in Go)!
# Install
cargo install --git https://github.com/bgreenwell/doxx
# Use
doxx quarterly-report.docx
Still early but handles most Word docs I throw at it. Always wanted a proper Word viewer in my terminal toolkit alongside `bat`, `glow`, and friends. Let me know what you think!
Comments URL: https://news.ycombinator.com/item?id=44934391
Points: 250
# Comments: 68
Sun, 17 Aug 2025, 7:52 pm
Show HN: OverType – A Markdown WYSIWYG editor that's just a textarea
Hi HN! I got so frustrated with modern WYSIWYG editors that I started to play around with building my own.
The problem I had was simple: I wanted a low-tech way to type styled text, but I didn't want to load a complex 500KB library, especially if I was going to initialize it dozens of times on the same page.
Markdown in a plain textarea was the best alternative to a full WYSIWYG, but its main drawback is how ugly it looks without any formatting. I can handle it, but my clients certainly can't.
I went down the ContentEditable rabbit hole for a few years, but always came to realize others had solved it better than I ever could.
I kept coming back to this problem: why can't I have a simple, performant, beautiful markdown editor? The best solution I ever saw was Ghost's split-screen editor: markdown on the left, preview on the right, with synchronized scrolling.
Then, about a year ago, an idea popped into my head: what if we layered a preview pane behind a textarea? If we aligned them perfectly, then even though you were only editing plain text, it would look and feel like you were editing rich text!
Of course, there would be downsides: you'd have to use a monospace font, all content would have to have the same font size, and all the markdown markup would have to be displayed in the final preview.
But those were tradeoffs I could live with.
Anyways, version 1 didn't go so well... it turns out it's harder to keep a textarea and a rendered preview in alignment than I thought. Here's what I discovered:
- Lists were hard to align - bullet points threw off character alignment. Solved with HTML entities (• for bullets) that maintain monospace width
- Not all monospace fonts are truly monospace - bold and italic text can have different widths even in "monospace" fonts, breaking the perfect overlay
- Embedding was a nightmare - any inherited CSS from parent pages (margin, padding, line-height) would shift alignment. Even a 1px shift completely broke the illusion
The solution was obsessive normalization:
// The entire trick: a transparent textarea over a preview div
layerElements(textarea, preview)
applyIdenticalSpacing(textarea, preview)
// Make textarea invisible but keep the cursor
textarea.style.background = 'transparent'
textarea.style.color = 'transparent'
textarea.style.caretColor = 'black'
// Keep them in sync
textarea.addEventListener('input', () => {
  preview.innerHTML = parseMarkdown(textarea.value)
  syncScroll(textarea, preview)
})
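For anyone curious what those two helper calls have to do, here's a rough sketch (the names above are shorthand from the snippet; the real source normalizes more properties and handles more edge cases):

// Rough sketch of the two helpers used above.
function layerElements(textarea: HTMLTextAreaElement, preview: HTMLElement) {
  // Stack both elements in the same box; the textarea sits on top and keeps
  // receiving all input, while the preview shows the styled text underneath.
  const wrapper = textarea.parentElement as HTMLElement;
  wrapper.style.position = 'relative';
  for (const el of [preview, textarea]) {
    el.style.position = 'absolute';
    el.style.inset = '0';
    el.style.overflow = 'auto';
  }
  textarea.style.zIndex = '2';
  preview.style.zIndex = '1';
}

function applyIdenticalSpacing(textarea: HTMLTextAreaElement, preview: HTMLElement) {
  // Any mismatch here shifts characters out of alignment, so every property
  // that affects text metrics gets pinned to the same explicit value.
  for (const el of [textarea, preview]) {
    Object.assign(el.style, {
      margin: '0',
      padding: '16px',
      border: '0',
      font: '16px/1.5 monospace',   // same family, size, and line height
      letterSpacing: 'normal',
      whiteSpace: 'pre-wrap',       // make the preview wrap like the textarea
      wordBreak: 'break-word',
      boxSizing: 'border-box',
    });
  }
}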
A week ago I started playing with version 2 and discovered GitHub's element, which handles markdown formatting in a plain textarea really well.
That experiment turned into OverType (https://overtype.dev), which I'm showing to you today -- it's a rich markdown editor that's really just a textarea. The key insight was that once you solve the alignment challenges, you get everything native textareas provide for free: undo/redo, mobile keyboard, accessibility, and native performance.
So far it works surprisingly well across browsers and mobile. I get performant rich text editing in one small package (45KB total). It's kind of a dumb idea, but it works! I'm planning to use it in all my projects and I'd like to keep it simple and minimal.
I would love it if you would kick the tires and let me know what you think of it. Happy editing!
---
Demo & docs: https://overtype.dev
GitHub: https://github.com/panphora/overtype
Comments URL: https://news.ycombinator.com/item?id=44932651
Points: 446
# Comments: 97
Sun, 17 Aug 2025, 4:13 pm
Show HN: NextDNS Adds "Bypass Age Verification"
We just shipped a new feature in NextDNS: Bypass Age Verification.
More and more sites (especially adult ones) are now forcing users to upload IDs or selfies to continue. We think that’s a terrible idea: handing over government documents to random sites is a huge privacy risk.
This new setting works around those verification flows via DNS tricks. It’s available today to all users, including free accounts.
We’re curious how the HN community feels about this. Is it the right way to protect privacy online, or will it just provoke regulators to push harder?
https://nextdns.io
Comments URL: https://news.ycombinator.com/item?id=44931824
Points: 512
# Comments: 185
Sun, 17 Aug 2025, 2:29 pm
Show HN: Lue – Terminal eBook Reader with Text-to-Speech
Hello,
Just went live on GitHub with this project.
I really enjoy listening to my eBooks as audiobooks but was frustrated by the available options. Converting books into audiobooks with scripts is tedious, and most tools stumble over footnotes, headers, or formatting. I wanted something simple: just throw a book at it, and it starts reading immediately without any clicking or loading.
I also wanted it to be customizable and modular because new, better TTS engines are released all the time. For this initial release, I settled on Edge and Kokoro because they’re both fast (real-time) and good quality. I’ve already made modules for Kitten TTS, Gemini and a few others, and they work too. So I hope this setup is future-proof.
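For the curious, "module" here roughly means anything that satisfies a small contract like the one below. Lue is written in Python, so treat this TypeScript as pseudocode for the shape, not the real API:

// Illustrative contract for a pluggable TTS engine, not Lue's actual API.
interface TTSEngine {
  name: string;                  // e.g. "edge", "kokoro", "kitten"
  offline: boolean;              // whether synthesis runs fully locally
  synthesize(text: string, voice?: string): Promise<Uint8Array>;  // text -> audio
}

// The reader only talks to this interface, so adding a new engine means
// writing one adapter; navigation, progress saving, and the UI stay untouched.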
Here’s what Lue supports:
Multi-format: EPUB, PDF, TXT, DOCX, HTML, RTF, and Markdown.
Modular TTS system: Default Edge TTS (online) and Kokoro TTS (offline/local), with an architecture to add more models.
Rich terminal UI: Full keyboard and mouse support, customizable color themes, smooth scrolling.
Smart persistence: Automatically saves reading progress across sessions.
Cross-platform & multilingual: macOS, Linux, Windows, supporting 100+ languages.
I’d love feedback on both usability and the TTS experience. Are there any features you wish it had?
Comments URL: https://news.ycombinator.com/item?id=44925597
Points: 97
# Comments: 23
Sat, 16 Aug 2025, 6:00 pm
Ask HN: Do you still bookmark websites?
Many bookmarking tools were created, and then most got sucked into tech's "how do I make more money?" cycle and died.
My favorite was Delicious, and then Pocket. Even Google had a bookmarking extension.
Is saving links no longer considered fashionable?
Yes, there's AI, but how does that get me back to a favorite I need to either read or revisit?
Should I vibe code one?
Comments URL: https://news.ycombinator.com/item?id=44925438
Points: 73
# Comments: 122
Sat, 16 Aug 2025, 5:41 pm
Show HN: I built an app to block Shorts and Reels
I wanted to find a way to use Instagram without ending up scrolling for two hours every time I open the app to see a friend's story.
Most screen time apps I found focus on blocking the app itself instead of the addictive feed, so I created this app to let me keep using the "healthy" and "social" features while blocking the infinite scrolling (Reels).
After blocking Instagram Reels, I got addicted to YouTube Shorts and the Reddit feed, so I extended the app to cover those as well.
To avoid simply swapping Reels for regular-feed scrolling, I also added a feature that shows a pop-up when I'm overscrolling in any app. It forces me to stop and think for a minute before I continue scrolling.
I built it on Android Studio, using Kotlin and Jetpack Compose for the UI. I use the Accessibility Service to detect scrolls and navigate out of them. Unfortunately, this only works for Android. There is no way (as far as I know) to do this on iOS.
I'd love to hear your thoughts
Comments URL: https://news.ycombinator.com/item?id=44923520
Points: 554
# Comments: 210
Sat, 16 Aug 2025, 2:01 pm
Launch HN: Embedder (YC S25) – Claude code for embedded software
Hey HN - We’re Bob and Ethan from Embedder (https://embedder.dev), a hardware-aware AI coding agent that can write firmware and test it on physical hardware.
Here’s a demo in which we integrate a magnetometer for the Pebble 2 smartwatch: https://www.youtube.com/watch?v=WOpAfeiFQkQ
We were frustrated by the gap between coding agents and the realities of writing firmware. We'd ask Cursor to, say, write an I2C driver for a new sensor on an STM32, and it would confidently spit out code that used non-existent registers or HAL functions from the wrong chip family. It had no context, so it would just guess, and the code was always wrong.
Even when it wrote the right code, the agent had no way of interacting with your board, so the developer had to test it manually and prompt the agent again to fix any bugs they found. That makes current tools a poor fit for embedded work.
That’s why we are building Embedder, a hardware-aware coding agent that is optimized for work in embedded contexts. It understands your datasheets and schematics and can also flash and test on your hardware.
First, you give it context by uploading datasheets, reference manuals, schematics, or any other documentation to our web console; our coding agent then automatically has that context when it executes tasks in the command line.
Second, Embedder can directly interact with your hardware to close the development loop. The agent can use a serial console just like a regular developer to read output from your board and verify it. To solve more complex bugs or identify hardware issues, the coding agent can also launch a debugging agent optimized for step-through debugging workflows that interacts with local or remote gdbservers.
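For a sense of what "using a serial console like a developer" looks like in practice, here's a generic sketch with the serialport npm package (not Embedder's actual implementation, and the expected log line is made up):

// Generic sketch: read a board's UART output so an agent (or a script) can
// check what freshly flashed firmware is printing. Not Embedder's code.
import { SerialPort } from "serialport";
import { ReadlineParser } from "@serialport/parser-readline";

const port = new SerialPort({ path: "/dev/ttyUSB0", baudRate: 115200 });
const lines = port.pipe(new ReadlineParser({ delimiter: "\r\n" }));

const log: string[] = [];
lines.on("data", (line: string) => {
  log.push(line);
  // A verification step might look for an expected banner or sensor reading.
  if (line.includes("magnetometer init OK")) {   // made-up marker for illustration
    console.log("firmware booted and the sensor responded");
  }
});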
You can try it out today. It’s an npm package you can install and run from your terminal:
npm i -g @embedder/embedder && embedder
It's free for the rest of this month while we're in beta. After that, we're planning a usage-based model for individual developers and a team plan with more advanced features.
We’d love to get feedback from the community, or hear about your experiences of embedded development. We’ll be in the comments to respond!
Comments URL: https://news.ycombinator.com/item?id=44915206
Points: 68
# Comments: 26
Fri, 15 Aug 2025, 5:38 pm
Show HN: Edka – Kubernetes clusters on your own Hetzner account
Hi HN,
I’ve been working with Kubernetes for over a decade, since the alpha days, and was involved in the kube-aws project before AWS launched EKS. For the past four years, I’ve been helping friends and small businesses cut costs by running Kubernetes on Hetzner Cloud, which I’ve found to be rock solid and by far the best-priced provider.
Provisioning a cluster on Hetzner is now straightforward, thanks to tools like k3s and hetzner-k3s, but configuring it for your specific needs still takes time and expertise. I built Edka to make that part easy: spin up a production-ready cluster in ~2 minutes, then choose how low-level or automated you want to go.
How it works:
Layer 1 – Cluster provisioning
- Creates a k3s-based Kubernetes cluster on Hetzner (lightweight, easy to manage, scales well).
Layer 2 – Add-ons
- One-click deploy for metrics-server, cert-manager, and various operators; preconfigured for Hetzner, no extra setup needed.
Layer 3 – Applications
- Minimal config UIs for apps built on top of add-ons.
- Example: Need PostgreSQL? Fill a few fields → platform installs CloudNativePG → provisions HA PostgreSQL with PITR → gives ready-to-use endpoints. Backups can be restored to any point in time with a click. Quick demo: https://edka.io/apps/
Layer 4 – Deployments
- Connect your CI to push container images to a public/private registry.
- Edka updates deployments automatically (with semantic versioning rules), supports instant rollbacks, autoscaling, persistent volumes, secrets/env imports, and quick public exposure. Quick demo: https://edka.io/deployments/
Tech stack: TypeScript, React + Tailwind CSS, PostgreSQL, Redis, BullMQ, and Vault + AWS KMS to encrypt sensitive data.
The platform is still in beta and I’m building it in my spare time, so there are some rough edges, but I’d love feedback from anyone running Kubernetes on Hetzner, exploring alternatives to EKS/GKE/AKS, or looking to automate their infrastructure with Kubernetes.
More details: https://edka.io/
Thank you!
Comments URL: https://news.ycombinator.com/item?id=44915164
Points: 243
# Comments: 77
Fri, 15 Aug 2025, 5:34 pm