I tagged it last week, but it took me a while to build the images. See the tag commit for a list of the bugfixes. The big thing for me is the fix for amrd and the virtual machine performance fix. Either update via git, or download an image.
You may have trouble switching back to a vty if you’re running a recent Intel video chipset and using KMS. It’s a side effect of the new KMS support, but it is being worked on.
All the machines in dragonflybsd.org should now be available over IPv6.
Also, Matthew Dillon did something weird to the DragonFly IPv6 network stack.
Almost done with this year’s GSoC. It’s been astonishingly… easy? The students are working and the problems are difficult, but there’s been very little in the way of crisis.
- Daniel Flores: HAMMER2 compression feature (includes performance graphs)
- Larisa Grigore: System V IPC in userspace
- Pawel Dziepak: Make vkernels checkpointable
- Joris GIOVANNANGELI: Capsicum (no actual report; student is traveling)
- Mihai Carabas: hardware nested page table support for vkernels
Sascha Wildner has ported rum(4), run(4), and urtwn(4) from FreeBSD to DragonFly, to work within the not-yet-default new USB framework. This happened some days ago, but I’m just now catching up.
avalon.dragonflybsd.org, also known as mirror-master, is the final dragonflybsd.org system to be moved into the new colocated blade server. Your downloads of binary packages or DragonFly images should be speedier.
Remember my recent disk issues? As a side effect of protecting myself, I have a good example of deduplication results.
I have a second disk in my server, with slave Hammer PFSs to match what’s on my main disk. I hadn’t put them in fstab, so they weren’t getting mounted and updated. I got them re-created, but the volume was nearly full. Here’s an abbreviated df, from which you should be able to tell which drives I have:
    Size  Used  Avail  Capacity  Mounted on
    929G  729G  200G   78%       /slave/slavehome
    929G  729G  200G   78%       /slave/slavevar
    929G  729G  200G   78%       /slave/slaveusr
    929G  729G  200G   78%       /slave/slaveslash
That 78% is how full the Hammer volume was. I turned on Hammer deduplication, since it’s off by default. The very next day:
    Size  Used  Avail  Capacity  Mounted on
    929G  612G  318G   66%       /slave/slavehome
    929G  612G  318G   66%       /slave/slavevar
    929G  612G  318G   66%       /slave/slaveusr
    929G  612G  318G   66%       /slave/slaveslash
It’s a 1-terabyte disk, and I gained more than 10% back – that’s over 100G of disk space recovered overnight. There might be more tomorrow, given that it was all of 5 minutes of dedup work.
This won’t surprise you if you’ve seen previous deduplication links here, like my previous results or some real-world tests. It’s still great. I’d suggest turning it on if you haven’t – hammer viconfig the appropriate PFS and uncomment the dedup line.
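If you want to try it, the whole process is just a few commands. Here’s a sketch using one of my slave PFS paths from the df output above; the exact interval values on the dedup line may differ in your config, so check hammer(8) for the details:

    # Open the per-PFS HAMMER config in $EDITOR:
    hammer viconfig /slave/slaveusr

    # In that config, uncomment the dedup line so the nightly
    # 'hammer cleanup' run includes a dedup pass, e.g.:
    #   before:  #dedup 1d 5m
    #   after:    dedup 1d 5m

    # Or estimate the savings first, then run a one-off pass by hand:
    hammer dedup-simulate /slave/slaveusr
    hammer dedup /slave/slaveusr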
There are several debates exclusive to the Unix-like world: Vi vs. Emacs, System V vs. BSD, and so on. A more recent one that people tend to fragment over is XML in config files vs. anything else. Read through this recent thread, starting here, about SMF (which became about XML) on users@.
Only 3 more Mondays left in the student work part of Summer of Code! Unsurprisingly, it seems the students are mostly in the cleanup phase – as it should be.
- Daniel Flores: HAMMER2 compression feature
- Larisa Grigore: System V IPC in userspace
- Pawel Dziepak: Make vkernels checkpointable
- Joris GIOVANNANGELI: Capsicum (updated)
- Mihai Carabas: hardware nested page table support for vkernels
I’ll be working on the 3.4.3 release of DragonFly within the next 24 hours, and it should be available this week. I’ll have a list of the bugfixes it contains…
It’s really neat to suddenly encounter something done just for DragonFly that you didn’t know was coming: A port of Go to DragonFly. I think these changes are going into the next Go release, or at least I hope so. (More on Go if you haven’t encountered it before.)
If you’re curious about the hardware being used for the colocated dragonflybsd.org servers (this includes the website, the repository, the mailing lists, dports build machines, etc.), here’s the ‘MicroCloud’ product page. DragonFly’s model was purchased from iXsystems. Apparently those Haswell processors have a fantastic performance-per-watt ratio. (via)
I’m running a bit behind because I’ve been on the road, but here they are:
- Daniel Flores: HAMMER2 compression feature
- Larisa Grigore: System V IPC in userspace
- Pawel Dziepak: Make vkernels checkpointable
- Joris GIOVANNANGELI: Capsicum
- Mihai Carabas: hardware nested page table support for vkernels
One of the most-requested items for the DragonFly mailing list archives is reverse sorting by date. Mailman, which is what’s being used now for archiving, doesn’t have a ‘native’ way to do that. Has anyone seen a trick/patch to get that to happen? I already patch Mailman to get the message date to show in listings.
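In the meantime, post-processing the generated pages might do it. Here’s a minimal sketch – not a Mailman feature, and the date.html filename and markup layout are assumptions about pipermail’s output – that reverses the per-message entries in an index page, assuming each entry sits on its own line inside a <ul>…</ul> block:

    # Reverse the <li> message entries so the newest come first.
    awk 'tolower($0) ~ /<ul>/   { print; inlist = 1; next }
         tolower($0) ~ /<\/ul>/ { for (i = n; i >= 1; i--) print buf[i];
                                  inlist = 0; n = 0; print; next }
         inlist                 { buf[++n] = $0; next }
                                { print }' date.html > date-reversed.html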
Sepherosa Ziehau suggests this relatively easy task: adding a TSC cputimer to vkernels. Apparently most of the framework to do this is already in place.
I’d be really surprised to find this affects anyone, but it’s possible: some kernel options specific to Cyrix processors have been removed, by Sascha Wildner.
If you look at the reports from students this week, they are mostly “I had bugs and I fixed them and there’s not much to do other than test”, which is the sign of well-planned projects. Here are the status reports:
- Daniel Flores: HAMMER2 compression feature
- Larisa Grigore: System V IPC in userspace
- Pawel Dziepak: Make vkernels checkpointable
- Joris GIOVANNANGELI: Capsicum
- Mihai Carabas: hardware nested page table support for vkernels
The mailing list archives for DragonFly (lists.dragonflybsd.org) have been moved to new hardware. (Yay!) The patch that actually shows the date in the listings needs to be reapplied, because Mailman is somewhat stale. (Boo!) I applied the patch and I’m regenerating all the archives now. (Yay!) There are some garbled messages in the archives that cause a bunch of “no subject” partial messages to be dumped at the end. (Boo!) I’ll manually fix them if I can, someday. (Yay?)
Several parts of dragonflybsd.org are moving to a new blade server, so there may be some service interruptions during the transition.