A little while back, I spent a week in a remote area. It had no Internet and no cell phone coverage. Sometimes, I would drive into town where there was a signal to get messages, upload photos, and so forth. I had to take several devices with me: my phone, my wife’s, maybe a laptop or a tablet too. It seemed there should have been a better way. And there is.
I’ll use this example to talk about a mesh network, but it could just as well apply to people wanting to communicate on a 12-hour flight with no in-flight wifi, to spacecraft with intermittent connections, or simply to a person traveling.
Syncthing makes a wonderful solution for things like these. Here are some interesting things about Syncthing:
- You can think of Syncthing as a serverless, peer-to-peer, open source alternative to Dropbox. Machines sync directly with each other without a server, though you can add a server if you want.
- It can operate completely without Internet access or any central server, though if Internet access is available, it can readily be used.
- Syncthing devices connected to the same LAN or Wifi will detect each other’s presence and automatically communicate.
- Syncthing is capable of handling a constantly-changing topology. It can also, for instance, handle two disconnected clusters of nodes with one node that “travels” between them — perhaps just a phone.
- Syncthing scales from a single phone to thousands of nodes.
- Syncthing normally performs syncs in every direction, but can also do single-direction syncs.
- An individual Syncthing node can register its interest or disinterest in certain files or directories based on filename patterns.
Syncthing works by having you define devices and folders. You can choose which devices to share folders with. A shared folder has an ID that is unique across Syncthing. You can share a folder from device A to device B, and then device B can share it with device C, even if A and C don’t know about each other or have no way to communicate. More commonly, though, all the devices would know about each other and will opportunistically communicate the best way they can.
Syncthing uses something akin to the BitTorrent protocol. Say you’re syncing videos from your phone, and they’re going to 3 machines. That doesn’t mean Syncthing has to send the data three times from the phone. Syncthing will most likely send each block just once; the other nodes in the swarm will register the block’s availability from the first node to receive it and will exchange blocks among themselves.
Syncthing will typically look for devices on the local LAN. Failing that, it will use an introduction server to see if it can reach them directly using P2P. Failing that, perhaps due to restrictive firewalls or NAT, communication can be relayed through volunteer-run Syncthing servers on the Internet. All Syncthing communications are encrypted and cryptographically authenticated. You can also configure Syncthing arbitrarily; for instance, to run over ssh or Tor tunnels.
So, let’s look at how Syncthing might help with the example I laid out up front.
All the devices at the remote location could communicate with each other. The Android app is quite capable of syncing photos and videos using Syncthing, for instance. Then one device could be taken to the Internet location and it would transmit data on behalf of all the others – perhaps back to a computer at your home, or to a server somewhere. Perhaps a script running on the remote server would then move files out of the Syncthing synced folder into permanent storage elsewhere, triggering a deletion to be sent to the phone to free up storage. When the phone gets back to the other devices, the deletion can be propagated to them to free up storage there too.
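A minimal sketch of that server-side sweep might look like this (the directory names and the function name are my own invention for illustration):

```shell
#!/bin/sh
# Hypothetical ingest step for the remote server: sweep arrivals out of
# the Syncthing-synced folder into permanent storage. The unlink inside
# the synced folder is what Syncthing propagates back to the phone as a
# deletion, freeing its space.

archive_synced() {
    sync_dir="$1"; archive_dir="$2"
    mkdir -p "$archive_dir"
    for f in "$sync_dir"/*; do
        [ -f "$f" ] || continue
        # mv out of sync_dir; Syncthing sees a deletion there.
        mv "$f" "$archive_dir/"
    done
}

# Example invocation (run from cron every few minutes):
# archive_synced "$HOME/Sync/photos-inbox" /srv/archive/photos
```

In practice you’d want to be sure Syncthing has finished transferring a file before moving it (for instance, by having the sender write to a temporary name and rename when done).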
Or maybe you have a computer out in a shed or somewhere without Internet access that you go to periodically, and need to get files to it. Again, your phone could be a carrier.
Taking it a step further
If you envision a file as a packet, you could, conceivably, do something like tunnel TCP/IP over Syncthing, assuming generous-enough timeouts. It can truly serve as a general-purpose communication layer.
But you don’t need TCP/IP for this. Consider some other things you could do:
- Drop a script in a special directory that gets picked up by a remote server and run
- Drop emails in a special directory that get transmitted and then deleted by a remote system when they’re seen
- Drop files (eg, photos or videos) in a directory that a remote system will copy or move out of there
- Drop messages (perhaps gpg-encrypted) — which could be text files — for someone to see and process.
- Drop NNTP bundles for group communication
You can start to see how there are a lot of possibilities here that extend beyond just file synchronization, though they are built upon a file synchronization tool.
Enter NNCP
Let’s look at a tool that’s especially suited for this: NNCP, which I’ve been writing about a lot lately.
NNCP is designed to handle file exchange and remote execution with remote computers in an asynchronous, store-and-forward manner. NNCP packets are themselves encrypted and authenticated. NNCP traditionally is source-routed (that is, you configure it so that machine A reaches machine D by relaying through B and C), and the packets are onion-routed. NNCP packets can be exchanged by a TCP call, a tar-like stream, copying files to something like a USB stick and physically transporting it to the remote, etc.
This works really well and I’ve been using it myself. But it gets complicated if the network topology isn’t fixed; it is difficult to reroute packets due to the onion routing, for instance. There are various workarounds that could be used — but why not just use Syncthing as a transport in those cases?
nncp-xfer is the command that exchanges packets by writing them to, and reading them from, a directory. It is what you’d use to exchange packets on a USB stick. And what you’d use to exchange packets via Syncthing. It writes packets in a RECIPIENT/SENDER/PACKET directory structure, so it is perfectly fine to have multiple systems exchanging packets in a single Syncthing synced folder tree. This structure also allows leaf nodes to only carry the particular packets they’re interested in. The packets are all encrypted, so they can be freely synced wherever.
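To make the layout concrete, here is a mocked-up tree; the node IDs and packet name are fabricated, and the real nncp-xfer invocations are shown only in comments (check the flags against your version’s manual):

```shell
#!/bin/sh
# Mock up the RECIPIENT/SENDER/PACKET layout that nncp-xfer uses, to
# show why many nodes can safely share one Syncthing-synced tree.
# Node IDs and the packet name below are fabricated for illustration.
demo=$(mktemp -d)
mkdir -p "$demo/NODEID_D/NODEID_A"
: > "$demo/NODEID_D/NODEID_A/01F8EXAMPLEPKT"   # a packet from A, addressed to D
find "$demo" -type f

# In real use, each participant points nncp-xfer at the shared folder
# (flag names per my reading of the manual; verify for your version):
#   nncp-xfer -tx "$HOME/Sync/nncp"   # queue our outbound packets
#   nncp-xfer -rx "$HOME/Sync/nncp"   # ingest (and remove) packets addressed to us
```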
Since Syncthing opportunistically syncs a shared folder with any device the folder is shared with, a phone could very easily be the NNCP transport, even if it has no idea what NNCP is. It could carry NNCP packets back and forth between sites, or to the Internet, or whatever.
NNCP supports file transmission, file request, and remote execution, all subject to controls, of course. It is easy to integrate with Exim or Postfix to use as a mail transport, Git transport, and so forth. I use it for backups. It would be quite easy to have it send those backups (encrypted zfs send) via nncp-xfer to Syncthing instead of the usual method, and then if I’ve shared the Syncthing folder with my phone, all I need to do is bring the phone into Internet range and they get sent. nncp-xfer will normally remove the packets from the xfer directory as it ingests them, so the space will only be consumed on the phone (and laptop) until we know the packets made it to their destination.
Pretty slick, eh?
Have you looked at CouchDB? It serves some of the same use cases, but with a different approach. It’s very popular for offline-first applications. Two real-world examples I’ve heard:
1. forestry applications, where a mobile device is frequently in the middle of a national forest, with no network connection, but it can easily sync back to a centralized data store when back within range.
2. fighting the Ebola outbreak in parts of Africa where network connectivity was sporadic at best. Medics in the field could track local data on tablets, then easily sync it with other sites when the network became available, enabling global analytics.
It’s sometimes called “A replication protocol, with a database tacked on.”
I’ve been working on a project that allows bi-directional sync between CouchDB and a filesystem directory. Makes me think it could easily be coupled with something like syncthing…
It is definitely on my list to look into. https://verbnetworks.com/research/store-forward-networks/ planted that idea in my head and it looks quite interesting.
@elb @ajroach42 2/ I also looked into Dat and IPFS, but they are neither as capable nor as useful as Syncthing for personal synchronization.
@elb @ajroach42 3/ For, eg, downloading websites, archivebox.io could be very nice. Archive Team also has some wiki pages on how to do it with wget and httrack. There are also plugins like webrecorder that could help.
@elb @ajroach42 4/ What you are really after is more general asynchronous communication. I have a whole blog series about this, including #NNCP and other tools: https://changelog.complete.org/archives/tag/asynchronous will give you all the posts in the series. Many of them are somewhat focused on backups, but should give you some good ideas for other things also. NNCP can use things like USB sticks, serial links, regular Internet connections, Syncthing, etc. as transport.
@elb @ajroach42 5/ The NNCP page on use cases may give you some ideas (whether or not you use NNCP) http://www.nncpgo.org/Use-cases.html Their integration page http://www.nncpgo.org/Integration.html also is useful. http://www.nncpgo.org/WARCs.html describes downloading webpages.
@elb @ajroach42 6/ You talked about accessing web pages offline. I’ve tried that but mostly don’t really bother. It is fairly painful (you frequently want to click on a link you don’t have). In some cases, for things like larger articles, it can make good sense. But you might want to look into something more like rss2email. Email is already asynchronous and there are lots of ways to get asynchronous email across. NNCP is one and documents this workflow at http://www.nncpgo.org/Feeds.html
@elb @ajroach42 7/ Offline email is two separate problems: sending and receiving. Sending can go across BSMTP (delivered “somehow” via NNCP, UUCP, Syncthing, etc). I talk about Exim with NNCP as part of my series here https://changelog.complete.org/archives/10165-asynchronous-email-exim-over-nncp-or-uucp and the NNCP docs go over the Postfix setup. For incoming, you can use OfflineIMAP or an offline-capable mail reader. Alternatively, forwarding to an account you can receive via NNCP/etc to a local mailstore.
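Purely as a hedged sketch, the Exim side boils down to a pipe transport handing BSMTP batches to nncp-exec. This is my own approximation, not the actual config from the linked post; take the real recipe from there and from the Exim spec:

```
# Hypothetical Exim transport. Option names are per the Exim spec, but
# the handle name ("rmail") and command path are assumptions.
nncp:
  driver = pipe
  use_bsmtp
  command = /usr/bin/nncp-exec -quiet $host rmail
  batch_max = 50
```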
@elb @ajroach42 8/ So if you have a VPS or a machine “in town” or whatever, you can do some pretty nice things; take the photos you copied into the “to upload” Syncthing folder and upload them, then delete them out of there. Or a laptop can run those commands directly “in town”
@elb @ajroach42 9/ Finally the two best ways to improve your 4G signal are: 1) height, and 2) antenna. I got one of these https://smile.amazon.com/gp/product/B01NBSLNJ6 with a Nighthawk M1 a while back. Tremendous difference. A booster can only boost what it can receive. A good antenna, mounted high, hardwired into the access point will almost certainly be better. That antenna has “gain”, meaning it’s directional, so figure out where your best towers are and point it at those.
@elb @ajroach42 10/ Also point-to-point wireless may help; if there’s a good place you can get Internet and you have line-of-sight from your house, you may be able to work something out, even something surprisingly fast. For more challenging conditions, LoRA or XBee could work… but at 100Kbps or less. Not suitable for browsing but could work for email.
@elb @ajroach42 11/ Finally, don’t underestimate the utility of sshing to a VPS somewhere and reading email in text. My qualifications to answer: have lived in Internet-challenged areas for 20 years, frequently travel into no-Internet areas, have experience with modern communication over extremely low bandwidth links (1 to 100Kbps) including LoRA/XBee radio, AX.25 packet radio, and satellite.
@elb @ajroach42 Also, @joeyh lives off-grid and may have lots of ideas to contribute here.
@jgoerzen @djsundog mentioned rss2email, which seems like a good choice to me (although, frankly, RSS to html files in syncthing is probably fine too.) I think what I’m wanting in terms of capturing web pages is an interface where I provide, for example, a URL or a search term, and the next time a connected node has an internet connection, that web page and every page it links to directly, and every page each of those links to directly is captured for LAN transfer at a later date.
@jgoerzen @elb Thick canopy and mountains, so there’s only so much improving to do. We’ll stick an antenna up pretty high, and run that in to our little booster. It’ll get the job done (I’m posting now over that cellular connection. It works, when you’re in the right location.)
@jgoerzen @elb I did some experimenting with LoRA and tried to do some work with XBee for keeping nodes of a distributed BBS in sync over multiple km, but ultimately we just didn’t have the mesh density, and I ended up building a solution that used a device I carried with me to rsync each location over wifi. Using syncthing and a cellphone (and NNCP) seems like a more viable long-term solution.
@ajroach42 @jgoerzen @elb I still say we should try the trebuchet of usb flash drives thing at some point
@djsundog @ajroach42 @jgoerzen @elb question: what would be the bandwidth of 90 kg of flash drives launched 300 m? How fast would that be travelling?
@nev @djsundog @jgoerzen @elb https://what-if.xkcd.com/31/
@ajroach42 @djsundog @jgoerzen @elb this doesn’t answer my question!!
@ajroach42 @djsundog @jgoerzen @elb I googled it myself. Top speed 70 m/s, apparently…someone do the math
@nev A USB drive weighs about 30 grams on average, according to the two sources I found that weighed USB drives. Microcenter sells 256 GB flash drives. You could fit 3000 of them in a 90 kg payload, so that’s 768 terabytes. A full-sized trebuchet can launch a 90 kg payload at roughly 70 m/s, or 156.586 MPH. So you’re sending 768 TB across 300 meters in 4.2 seconds, for a transfer speed of 182.857143 TBps.
@ajroach42 @nev the bottleneck is going to be reading from the interface
@nev Or, in more conventional terms, 1462857144 Mbps, or 1462857144000 Kbps or 1.462857144e+15 bits per second.
@ajroach42 @djsundog You could pretty easily do that with httrack on some remote system. nncp-exec sending a URL to that, it returns a tarball to you, or something similar, I’d think.
@jgoerzen @ajroach42 with a possible pit stop on a shared lan server that untars it long enough to index into elasticsearch or whatever along with a pointer to the tarball on lan storage for caching etc. yeah, this is more than doable.
@ajroach42 thank you for this incredible contribution to modern science.
@djsundog @ajroach42 Interestingly, Sergey just (literally within the past few days) added “multicast” support to #NNCP http://www.nncpgo.org/Multicast.html . The Internet-connected machine could send the downloaded site to the LAN “area”. Upon arriving at the LAN NNCP gateway, it would be copied both to the PC and to the indexer. Wouldn’t require any extra scarce bandwidth, and also wouldn’t require the indexer to be up in order for the data to reach the PC.
@djsundog @ajroach42 NNCP 7.x is still a bit buggy for me (I make pretty heavy use of it, sending thousands of packets, some of them hundreds of GB in size, across it daily) but I’m sure it will shake out shortly. In the meantime, 6.6.0 is rock solid.
@ajroach42 @nev this works as a value for peak throughput, but i think in order to give a full picture of the capabilities, this calculation should also consider loading time rather than just launching time
@jgoerzen @ajroach42 @elb Regarding offline: do you know Scuttlebutt, the sneakernet-compatible social network?
I think it’s a pretty good idea: it meshes direct connections to hubs/proxies with p2p connections to update offline data.
@ajroach42 @nev I thought 30g sounded like a /really/ heavy flashdrive, so I weighed the small jar of flashdrives on my desk. their weights: 10g, 9g, 4g, 8g, 16g. which would increase data throughput :) I can weigh some SD cards too I guess… nvm. microSD card doesn’t make it up to 1g on my scales. 0.5g would be best guess. regular sized SD card is about 2 or 2.5g.
@epoch @ajroach42 okay so by my calculation, assuming a micro SD card is 0.5g and holds 1 TB, a 90kg payload contains 180 PB…
@nev @epoch and it’ll travel 300 meters in 4.2 seconds. So 180PB/4.2 seconds is 342 Petabits per second.
@jtr @debian @kensanata So #NNCP at its core is different because it doesn’t require the source and destination to be online simultaneously. With TCP, the origin of a packet is responsible for ensuring it gets delivered, retransmitted if dropped, etc. With NNCP, you send data to the next hop and then that hop takes over responsibility. It may be days before that hop is able to deliver it. That hop may be a computer, or it may be a USB stick or a radio. 2/
@jtr @debian @kensanata #NNCP can be used to send files and execute things remotely. Think of it like ssh/scp and authorized_keys. I can say: tar -cpf - /usr | nncp-exec dest untar -C /backups, where I defined untar on the dest node as something that runs “tar -xpf -”, and then we add a -C telling it where to unpack. You could run a very similar command with ssh. The difference: dest doesn’t have to be online with NNCP. 3/
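For reference, the receiving node defines what “untar” means in its nncp.hjson. This fragment is a sketch from my reading of the NNCP docs; the node name and ID are placeholders, and the key names are worth double-checking against your version:

```hjson
# On dest's nncp.hjson: under the entry for the *sending* neighbor, map
# the "untar" handle to a real command line. Arguments the sender passes
# (like -C /backups) are appended.
neigh: {
  desktop: {
    id: DESKTOP_NODE_ID_HERE   # placeholder
    # ...public keys elided...
    exec: {
      untar: ["/bin/tar", "-xpf", "-"]
    }
  }
}
```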
@jtr @debian @kensanata In that nncp-exec example, the data piped to nncp-exec is saved, along with the command line to run, in a data packet that is encrypted using the public key of the destination. Now it can be transported, directly or indirectly (via other nodes), over the network, USB drives, CD-ROMs, tapes, radios, laptops, phones, a combination of these, whatever. It can traverse multiple NNCP hops along the way, and is onion-routed like tor 4/
@jtr @debian @kensanata So let’s make this practical. Say you have slow Internet at home but fast Internet at a coffee shop. You have 10GB to send out. On your desktop at home, you queue it up to go like this: desktop->laptop->remote_server. Your desktop encrypts the 10GB to remote_server, wrapping that in a packet encrypted to the laptop. The laptop decrypts the outer encryption when it gets the data, but still can’t see the actual data. Laptop sends it out when you get to coffee shop. 5/
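On the desktop, that desktop->laptop->remote_server route is expressed with NNCP’s “via” setting. A sketch per my reading of the NNCP docs (IDs are placeholders, keys elided):

```hjson
neigh: {
  laptop: {
    id: LAPTOP_ID_HERE   # placeholder
    # ...public keys elided...
  }
  remote_server: {
    id: SERVER_ID_HERE   # placeholder
    # ...public keys elided...
    # Packets for remote_server are wrapped for, and relayed by, laptop:
    via: ["laptop"]
  }
}
```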
@jtr @debian @kensanata I use #NNCP for backups. I take hourly snapshots with #ZFS. I have a backup system with no Internet. My backups are sent to a staging box, and then I can get those packets over to the backup box with the various methods I’ve mentioned. On my laptops, NNCP is configured to only automatically send packets to staging when they’re on the home LAN, because I may be tethered to 4G otherwise. But I can manually send them off when I’m on fast Internet away from home. 6/
@jtr @debian @kensanata #NNCP also has an #asynchronous #multicast feature, which I use to sync my #orgmode #git repo as described here https://changelog.complete.org/archives/10274-distributed-asynchronous-git-syncing-with-nncp I also have more on NNCP here https://www.complete.org/nncp/ 7/
@jtr @debian @kensanata So #NNCP can be used to transport #Usenet news and #email because fundamentally those things can be transported by piping data into the rnews and rmail commands, which fits perfectly with nncp-exec. In fact, a predecessor to NNCP, #UUCP, was the way email and news often flowed in the early days, and this support is mostly still there even in modern servers. It takes just a bit of tweaking to make it use NNCP instead. 8/
@jtr @debian @kensanata So I hope this helps, feel free to ask followup questions! end/
@jtr @debian @kensanata So this is all just scratching the surface of what you can do with #NNCP. It’s sort of like explaining what you can do with #ssh or find. Hope that helps! /end
thank you very much!
So I’m getting here late; it’s 2024 and I’m curious where it’s all led. What I’m seeing you describe is sort of a BBS experience, and it’s what led me back to setting up and using a BBS.