
· 2 min read
Paul Frazee

With Spork v1.3.1 we've added the beam command: an encrypted, networked pipe using Mafintosh's hyperbeam module.

$ npm install -g @atek-cloud/spork

Beam is very simple to use. On the first device, you'd run something like:

$ echo "SPORK POWERS!" | spork beam

This will output instructions for completing the pipe:

▶ Run the following command to connect:
▶   spork beam upxtbzbcqvzlleo3vn4r3swfepk4nr7qerfksotlmssuttllxwhq
▶ To restart this side of the pipe with the same key add -r to the above
▶ Joined the DHT - remote address is XXX

On the second device, you'll enter:

$ spork beam upxtbzbcqvzlleo3vn4r3swfepk4nr7qerfksotlmssuttllxwhq

And soon you'll see:

▶ Connecting pipe...
▶ Joined the DHT - remote address is XXX
▶ Success! Encrypted tunnel established to remote peer
SPORK POWERS!

Tada! Spork powers, activated.

Sending files#

Beam is quite good for sending files around:

# device 1
$ cat secrets.txt | spork beam

# device 2
$ spork beam $THE_KEY > secrets.txt

All the instructions are written to stderr, so there's no worry that they'll pollute your file.
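As a purely illustrative sketch (this is not spork's source code, just the stderr/stdout split the paragraph describes), the separation looks something like:

```javascript
// Illustrative only: status messages go to stderr, payload bytes go to stdout,
// so `spork beam $THE_KEY > secrets.txt` captures just the payload.
function emit (proc, statusMsg, payload) {
  proc.stderr.write('▶ ' + statusMsg + '\n') // never lands in a redirected file
  if (payload) proc.stdout.write(payload)    // exactly what the pipe carries
}

emit(process, 'Joined the DHT - remote address is XXX', '')
```

Anything that redirects stdout (`>`, `|`) therefore sees only the data.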

Bidirectional streams#

The pipe is bidirectional, so you can send data from either device or both at the same time.

# device 1
$ echo "Hi there" | spork beam

# device 2
$ echo "And hello to you sir" | spork beam $THE_KEY

In fact, you can get a mini chat program going by running cat with no parameters so it reads from stdin:

# device 1
$ cat | spork beam

# device 2
$ cat | spork beam $THE_KEY

# chat away!

Credit to the Hypercore protocol#

Spork's magicks come from the incredible Hypercore Protocol team. Spork is really a small wrapper around Hyperswarm, their networking layer.

Hop on the Hypercore Protocol Discord if you want to dig more into the work they're doing.



· 6 min read
Paul Frazee

Look out, Mr. Elephant!

Alternative title: What's the best way to eat an elephant?

Suppose you set up a server on your home network. Unless you configure DDNS or a static IP, your server can only host apps for your LAN. It's isolated, and the isolation means it can't do the kind of collaborative and social tasks we expect on the Internet.

Now what if your friend also had a home server, and we could get them to connect and send messages, share files, sync databases, etc? Now we're not isolated anymore. Now we can run social and collaborative applications which typically run on commercial clouds.

This is the idea for the Atek Cloud project. It will be something like a personal PaaS that’s consumer-friendly. It will use p2p and web3 tech to make the apps socially connective with each other.

Web 3 network

There's only one problem: how does it work? Not in a broad sense, but, like, in the actual execution. How do we implement this thing?

Does Atek run Docker images, or some new JS serverless runtime? Do we go nuts on a VM-based sandbox, perhaps with Firecracker? Does Atek implement a users API with SSO, or leave authentication and perms to the apps? Should it provide a novel peer-to-peer database that all apps share, or should data be isolated to each application? Is service discovery between the apps important? What about device clusters? The more we've talked to folks, the more questions we've uncovered about the implementation.

All this means that the Atek platform is an elephant-sized task, and you don't eat an elephant all at once. You eat an elephant one bite at a time.

A complex system that works is invariably found to have evolved from a simple system that works. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work.

― John Gall, The Systems Bible

Sporking the elephant#

Spork is a command-line tool that implements the parts of Atek that we're sure about. We called it Spork because it's a fun tool that does multiple things. (The metaphorical relevance to elephant-eating is coincidental.)

$ npm install -g @atek-cloud/spork

At the moment what Spork does is peer-to-peer tunneling. You can, for instance, expose port 8080 as a p2p socket with the following command:

$ spork bind -p 8080
Created temporary keypair, public key: whattzzuu5drxwdwi6xbijjf7yt56l5adzht7j7kjvfped7amova
======================
Spork powers ACTIVATED
  - Mode: Reverse proxy
  - Listening on whattzzuu5drxwdwi6xbijjf7yt56l5adzht7j7kjvfped7amova
  - Proxying all traffic to localhost:8080
======================
If you're running an HTTP server, you can share the following link:
This public gateway will tunnel directly to your spork!

Now somebody else can spork bind {pubkey} to create the other end of that tunnel on their local device. If they do so, they get an end-to-end encrypted tunnel which handles hole-punching through a distributed network. There's an elephant bite handled.

There are some pretty obvious use-cases for spork bind. Sharing web apps is an obvious one, like when you're doing dev work and need to share a link. We managed to spork an SSH session between Texas and Virginia (through Starlink!). Some folks on the Discord sporked a Minecraft game. It currently exposes a single TCP socket only, but anything that goes over one TCP connection should work.

The public gateway#

You'll notice a link gets dropped in the spork bind output that looks like https://{pubkey}. That's a public gateway, similar to the ones used by IPFS, but for the p2p tunnel. Using it you can share a link to your app without the recipient having to install spork themselves. You lose the end-to-end privacy and it's HTTP only, but you gain some convenience.

We felt like a gateway was important to help with the asymmetry of adoption and we'll keep it running until we go broke. It's pretty easy to run your own though with spork gateway.

You could probably CNAME a domain name to your {pubkey} address. I haven't tried this and wouldn't suggest it, but if you did I wouldn't be mad.

Next bites of elephant#

The next things we'll do with Spork will be convenience commands. Perhaps a quick "serve this folder as a website" command. Maybe an SSH tunnel command. SFTP, perhaps? Anything that makes life easier.

After that, I suspect we'll try to simplify the key-sharing flows. This might include magic-wormhole-style tools and local address-books, which gets us into identity systems and service-discovery.

The Hypercore Protocol is stabilizing some new distributed data structures. I can easily see Spork adopting Hypercore's key-value database (Hyperbee) into its toolset and then building upon that. Maybe to implement registries? Maybe to distribute software builds? Maybe for static websites? Some folks are even toying with smart-contracts using the append-only log.

From there, perhaps a chat command. Perhaps mail. Perhaps a simple way to run nodejs scripts as p2p services and automatically broadcast them to friends' service registries.

The point being, we have ideas but no concrete plans — and that's what Spork is for! We're going to keep iterating on the Atek stack by putting toys into Spork. Each toy gets us closer to understanding how Atek's platform should be designed. Next thing you know, there won't be any elephant left.

Credit to the Hypercore protocol#

I have to be clear that most of Spork's magicks come from the incredible Hypercore Protocol team. Spork is really a small wrapper around Hyperswarm, their networking layer.

It's important to me to give credit to Mathias, Andrew, David, and the rest of that team because they're the ones who deserve it, and also because otherwise people will start asking me questions about Spork's cryptography that I can't begin to answer.

Hop on the Hypercore Protocol Discord if you want to dig more into the work they're doing.



We at Atek do not condone the eating of elephants except in the metaphorical sense.

· 4 min read
Paul Frazee

We're still heads down on work today, so instead of a release overview I'm going to talk about the work in-progress with Atek DB.

Self-host your life!

Atek is a convenient platform for running NodeJS applications at home or in the cloud using peer-to-peer technology. Learn more.

Problems discovered while dogfooding#

While building an RSS Reader example app last week, I ran into complexity (!) using Atek DB's schema system.

The context: I needed to tweak the RSS Reader's data schemas during development. As I worked, I needed to add and modify the records' fields — a very typical workflow for an app in development.

The problem: Atek DB "installs" the schemas into its databases. It will only update a schema if its revision number has been incremented. That meant I needed to bump the revision number with every change.
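The revision check itself is simple. Here's a hypothetical sketch of the install-or-skip behavior (the function name and record shape are mine, not Atek's actual internals):

```javascript
// Hypothetical sketch of ADB's "install schema on revision bump" behavior.
// Names and shapes are illustrative, not Atek's actual code.
function shouldInstallSchema (installed, incoming) {
  if (!installed) return true // first install always proceeds
  return incoming.revision > installed.revision // otherwise only on a bump
}

// Every schema tweak during development requires bumping `revision`,
// or the change is silently ignored:
shouldInstallSchema({revision: 1}, {revision: 1}) // false: change skipped
shouldInstallSchema({revision: 1}, {revision: 2}) // true: schema replaced
```

That extra bookkeeping on every little field change is exactly the friction described above.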

Is this so bad? With traditional strict-schema databases, you'd use database commands to create and destroy schemas/tables. Your code would probably include table migrations, with ways to go "up" and "down" the schema versions. If I were totally sold on Atek's strict schema model, I'd have solved the issue by adding something similar. The problem is, I wasn't sold on strict schemas.

Schema madness#

Prior to putting up Atek's site, there were multiple major refactors around how RPC and Database schemas work. The initial concept was to be extremely strict: schemas would be published at URLs. Those schemas would be canonical. Any API or Table using them would have to conform.

My expectation was that strict, well-coordinated schemas would help people write compatible software, but after working even a few moments with the first iteration, I realized the system was hard to learn and hard to use. The strictness wasn't helping me; it was paternalistically getting in my way. I hate that and I know other coders hate that. We want tools, not rules.

Each refactor would loosen the rules and reduce the complexity. By the time Atek's site went up, the API schemas had no baked-in enforcement, and Atek DB did it minimally: table schemas were installed in a database and then enforced, but not by fetching from a canonical URL.

Still, I was a little troubled by this final model. The stateful schema installation without the canonical schemas meant it would be really easy for developers to clobber each other's work. We really had the worst of both worlds: stateful strictness without tooling to help with coordination.

A better pattern emerges#

As it turns out, ADB table schemas ended up being distributed as NPM modules. It's actually kind of neat: you import the module and then it "operates" on your table.

import adb from '@atek-cloud/adb-api'
import {subscriptions, feedItems} from 'rss-example-tables'

const maindb = adb.db('maindb')

const subRecord = await subscriptions(maindb).create({
  feedUrl: '...',
  title: '...',
  description: '...',
  link: '...',
})
const {records} = await feedItems(maindb).list()

With this model, I started to realize that installing the schemas to the database was totally unnecessary. The schema modules can do the validation themselves, outside of the Atek DB process, if they so choose. Therefore I chose to strip out Atek DB's entire concept of tables and schemas, and make it a much more direct passthrough to Hyperbee.

This gets a lot closer to tools, not rules. If you need schema validation, great! The adb-api module has some pretty good helpers for doing that. If not, skip it and use your own solutions to validation.
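Under the looser model, validation becomes an opt-in concern of the schema module itself. A minimal sketch of what a table module might do before writing (the validator functions and record shape here are hypothetical, not the actual adb-api helpers):

```javascript
// Hypothetical client-side validation a schema module could ship.
// None of this runs inside Atek DB; the database stores whatever it's given.
function validateSubscription (record) {
  const errors = []
  if (typeof record.feedUrl !== 'string' || !record.feedUrl) errors.push('feedUrl is required')
  if (typeof record.title !== 'string') errors.push('title must be a string')
  return errors
}

function assertValidSubscription (record) {
  const errors = validateSubscription(record)
  if (errors.length) throw new Error('Invalid subscription: ' + errors.join(', '))
  return record // validated; safe to hand to the table's create()
}

assertValidSubscription({feedUrl: 'https://example.com/feed.xml', title: 'Example'})
```

The point is that enforcement lives in the module you chose to import, not in the database process.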

Loosening the schema model has been this week's project, and should be published and documented next week.

Today's livestream#

I do a weekly livestream every Friday at 2PM CST (time zone converter). I'll talk about everything in this post and a little more there, so please join us!

👉  Here's the link to this Friday's Livestream 👈

Hope to see you there!


· 2 min read
Paul Frazee

It's been one week since Atek was announced, and it's been a fun first week. We're up to 50 folks in the Discord and our first two contributors are onboarding now. Exciting times!

So what all happened this week?



Atek is now at version 0.0.16. Here's what's new:

  • Atek now auto-updates, making it easier to stay on the latest version!
  • Users and authentication have been implemented. Multiple users can share an Atek instance, and each of them runs their own apps and stores their own data.

Technical updates#

A lot of this week's work occurred behind the scenes:

  • Socket files. Applications now use unix (file) sockets to communicate with Atek. This reduces the amount of port usage and makes it harder for untrusted applications to connect to your Atek apps. The environment variable passed to applications has accordingly changed from ATEK_ASSIGNED_PORT to ATEK_ASSIGNED_SOCKET_FILE.
  • Pinned core. Atek now pins its default core services to a specific version. They will update when Atek pushes a new release.
  • Auth headers. Authentication headers Atek-Auth-User and Atek-Auth-Service have been added to requests sent to applications. As all requests are routed through the Atek host server, these headers are trusted.
  • Authed APIs. Atek's APIs now enforce permissions.
  • Authed ADB APIs. Atek DB APIs now enforce some permissions and assign ownership of databases to individual users.
  • App owners. All applications now have an "owning user," which is the user who installed them. If an application is installed for all users (such as the core services) they use the special system user.
  • Per-user home apps. The "main service" is now installed per user rather than acting as a core service.
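As a sketch of what the socket-file change means for app code (the helper function is mine; only the environment-variable names come from the list above), an app might pick its listen target like this:

```javascript
// Hypothetical helper: prefer the new socket-file variable, fall back to the
// old port variable for apps written against earlier Atek releases.
function getListenTarget (env) {
  if (env.ATEK_ASSIGNED_SOCKET_FILE) return {path: env.ATEK_ASSIGNED_SOCKET_FILE}
  if (env.ATEK_ASSIGNED_PORT) return {port: Number(env.ATEK_ASSIGNED_PORT)}
  throw new Error('Not running under Atek: no assigned socket file or port')
}

// e.g. httpServer.listen(getListenTarget(process.env))
```

Node's `server.listen()` accepts either a `{path}` (unix socket) or `{port}` options object, so the same call site works for both.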

Today's livestream#

I do a weekly livestream every Friday at 2PM CST (time zone converter). We'll do an overview of what's happened in the last week, I'll answer questions, and then we'll do some live coding.

👉  Here's the link to this Friday's Livestream 👈

Hope to see you there!


· One min read
Paul Frazee

Weekly livestreams

In my last project, CTZN, I live-streamed almost every minute of development. It was a lot, but it was a really great way to keep people updated and build a connection with the community.

For Atek, I'm going to do a weekly livestream every Friday at 2PM CST (time zone converter). We'll do an overview of what's happened in the last week, I'll answer questions, and then (depending on what's going on) we'll do some live coding.

👉  Here's the link to this Friday's Livestream 👈

Hope to see you there!


· 6 min read
Paul Frazee

Hello, world

Today I'm really happy to announce the Atek project, an open source peer-to-peer Home Cloud. (Pronounced "eɪ-Tek," rhymes with "hey tech.")

Home PI

Atek is a personal cloud for small home servers like Raspberry Pis. It uses peer-to-peer tech to connect your devices so you can share posts, photos, chats, and applications with the privacy and control you want.

Atek uses Hypercore Protocol as its main networking and data layer, but is designed to flexibly add services so that other technologies (IPFS, SSB, Ethereum, etc) can be added.

We're aiming for Raspberry Pis as a target device, but Atek can run on most laptops or desktops.

Developer preview#

Atek is available as a developer preview. I'm following the "release early and often" philosophy so that other developers can get involved. Links:

The Architecture Document is a very good overview of how Atek works.

General ideas#

Phoning home with Hyper#

Most home servers are traditional Web servers, which means a connection from outside the network requires VPNs, proxies, Dynamic DNS, and so on. This is more work than most people can do or care to do.

To replace the commercial cloud with home servers, we need to be able to accept connections from anywhere. We can accomplish this using Hypercore Protocol's networking stack, Hyperswarm. Custom desktop and mobile apps will render your server's Web apps using this stack, essentially proxying HTTP calls over Hyperswarm to reach your home device from anywhere.

Phoning home

Networking home servers together#

Web services don't often federate, and when they do they struggle to federate with servers on home networks. This is a problem because Atek's goal is to run social/collaborative applications which connect between multiple home clouds.

New tech such as Hypercore Protocol, IPFS, and SSB make decentralized p2p databases possible. Atek uses Hypercore to create the Atek Database, a decentralized document-store with strict schemas. Using these kinds of protocols (arguably everything under the "Web 3.0" umbrella) we can share and sync data between home servers, creating social connectivity between them.

Internet of homes

Small core, open ecosystem#

Atek uses a small core server that runs programs (services) and routes API-calls between them. All other functionality, including the protocols, primary data store, default frontend, and actual apps are user programs. A set of "core services" are set in the config file to bootstrap the server, and then the rest are loaded from records in the data store.

This "everything in userland" approach is designed to maximize flexibility for users to choose protocols and applications. Atek will ship with an opinionated core, but because that core is established by the config file, it's trivial to create alternative distros. It's also possible to install all kinds of new services in the default Atek, so if you prefer IPFS or SSB to Hypercore, write an Atek app for those protocol daemons and have at it! (You can see what Hypercore's daemon app looks like here.)


Learning from Sandstorm#

Sandstorm is a personal cloud started by Kenton Varda. It does a lot of what Atek wants to do: simple app installation, easy maintenance, good privacy.

Atek differs in a few ways. The first is that Atek should run on home hardware. Hypercore is designed to punch through hostile NATs and locate peers. This gives us a solid foundation for any two devices to connect without involving a proxy. It also means we can build applications which are still socially connective without using central services.

Atek also tries to minimize novel ideas so that developers can easily get started. Sandstorm uses a "grains" model for sandboxing data which offers a lot of benefits, but diverges from how most applications are built. Advancing a new idea like that requires a lot of resources which Atek won't have, so when we do introduce novelty — i.e. the whole p2p/web3 stack — we're going to try to stick with tools that have their own momentum, and focus on familiarity when introducing something new.

Learning from Beaker browser#

Beaker browser was a p2p Web browser which I started. It was a cool idea: it used Hypercore as a drop-in replacement for HTTP. Brave and Opera are experimenting with this now with IPFS. Peer-to-peer sites are an engaging premise for where the Web could go.

The challenge was that Beaker apps had no backend. If you want to build 100% client-side SPAs, you need something akin to a Firebase: a toolkit of databases, users/identity, and networking. We took a lot of shots at building that, but struggled to create APIs which matched the browser's security and page-based runtime model. Having to create a single monolithic stack for everybody to use was difficult, and it ran against many people's expectations of what a browser is supposed to do. A home cloud is a more natural fit for this.

The next challenge was resource constraints. Beaker was originally meant to accept any-and-all web3 tech as a plugin, but many of these protocols require a sizable CPU, RAM, and disk budget, and the client-side applications added more overhead onto that. Consequently, I've come to believe it's better to use dedicated devices (home/personal clouds) for web3 apps and have the user devices connect to those devices in a client/server model. This is the basic model I've adopted for Atek.

What is the app runtime?#

Atek currently runs NodeJS applications.

I believe the next step will be to add Docker as a runtime. Again, simple and familiar tools are a benefit, and given Atek's goal to run a variety of protocols, it seems necessary to use containers. That said, I'm locked in debate with a friend who sees Docker as unnecessary overhead (yo dawg, I heard you like entire OS distros) so I'm open to better ideas.

Another near-term concern is security sandboxing. I'm still doing my research on the best solution for this, but am leaning toward Firecracker.

Where does the name come from?#

Atek stands for "Austin Texas," where I live. Yeehaw y'all.

Getting involved#

Atek is MIT-licensed FOSS and very open to community contributions. Links:

As I said before, the Architecture Document is a very good overview of how Atek works. You can also reach me on Twitter.