I got the old mailing list archives converted to Mailman. As I wrote in a post to users@, please let me know about problems. There are some garbled messages from the old archive that were placed into the September 2012 section of each list's archive; I'll be cleaning those up manually.
The old mailing list software for @dragonflybsd.org mailing lists, bestserv, apparently allowed people not subscribed to a list to post to it, after answering a confirmation message for each message posted.
The closest way to duplicate that with Mailman is to sign up for the list you want, and then turn off mail delivery for your email address on that list's configuration page. This won't affect a lot of people, since most people want list output in their mailbox, but there are at least a few people I've fixed up that way.
The combination of Mihai Carabas's successful Summer of Code work on the scheduler and the recent Postgres benchmarking got Matthew Dillon thinking about several things: making UNIX domain sockets work better, a shortcut around the buffer cache, scheduler improvements (and then a new default scheduler), and a change in idle CPU behavior. The best place to understand all the changes is his long post to users@.
We should have benchmarks soon to show the performance improvements from all this.
Smartmontools will catch impending disk failures about two-thirds of the time, so it's worth running it and interpreting the results. The output can be somewhat complex, though, so it helps to read what other people have said about their own output and glean what you can from the context.
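If you just want a quick first pass over the output, here's a minimal C sketch of my own (not something from smartmontools itself); the device name /dev/ad4 and the short attribute watchlist are only examples, and the program simply shells out to smartctl -A and echoes the lines that most often signal trouble.

    /*
     * Sketch: run smartctl and print the SMART attributes that most often
     * signal a dying disk.  /dev/ad4 and the watchlist are examples only.
     * Build with: cc -o smartcheck smartcheck.c
     */
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        const char *watch[] = { "Reallocated_Sector_Ct",
            "Current_Pending_Sector", "Offline_Uncorrectable", NULL };
        char line[512];
        FILE *p = popen("smartctl -A /dev/ad4", "r"); /* -A dumps the attribute table */

        if (p == NULL)
            return (1);
        while (fgets(line, sizeof(line), p) != NULL) {
            for (int i = 0; watch[i] != NULL; i++) {
                if (strstr(line, watch[i]) != NULL)
                    fputs(line, stdout); /* nonzero raw values here are bad news */
            }
        }
        pclose(p);
        return (0);
    }

A raw value that keeps climbing on any of those attributes is a good hint that it's time to order a replacement disk.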
A question about why root automatically lists dotfiles with ls while all other users do not led to a long thread that includes some UNIX history. There are some useful and some not-so-useful parts in the thread, but it did indirectly produce a way to reverse the listing effect itself.
Francois Tigeot benchmarked the recent Postgres 9.3 release. Postgres apparently switched to using mmap instead of SYSV shared memory, and Francois ran the benchmarks to show the performance difference. (View the PDF in his post.) Of course, work has continued since this was posted, so there should be new numbers soon, and new changes I'll document in a future post.
I haven’t found a reference to the exact decision Postgres made on how to handle memory; please post a link in comments if you know a good source.
NYCBUG, the New York City BSD user group, has an RSS feed for their speaker events, found via Dru Lavigne's always useful BSD Events Twitter feed. The next event, at the start of October, is a talk about SMPng in FreeBSD. Given that SMPng was the project that in part led to the creation of DragonFly, I'd like to hear about it. (And even better, have someone more qualified than me compare and contrast that approach with what's in DragonFly.)
If you keep Hammer snapshots somewhere outside the usual locations, they don’t get cleaned up during the normal ‘hammer cleanup’ nightly routine. Chris Turner has added a way to manually specify them as a cleanup target.
I’m pretty sure in this case ‘offline’ means ‘nothing streaming to it from a master disk’. I think.
Matthew Dillon has created an experiment: shared page table mappings. It’s controlled by a sysctl, since it’s still experimental. The real-world effect is reducing the number of memory faults as a process uses more memory, and decreasing overall memory usage. The obvious benchmark is Postgres speed; this makes the initial expansion of memory usage much less of a drag on speed, since that expansion otherwise comes with a high memory fault rate.
If all this mention of faulting sounds like a problem, remember that memory faults on BSD are normal; that’s how a program indicates it needs more memory space: by causing a fault. This is in contrast to Linux, where memory is allocated a different way. Or at least, that’s my understanding. (If you know better, please comment.)
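To make that concrete, here's a small C illustration of my own (nothing DragonFly-specific, just the general idea): the mmap() call only reserves address space, and the kernel hands out physical pages lazily, one page fault at a time, as the loop touches them.

    /*
     * Illustration only: reserve 64 MB of anonymous memory, then touch it.
     * The mmap() itself is cheap; physical pages are only assigned as the
     * loop writes to each page, and each first write triggers a page fault
     * that the kernel resolves by handing out a real page.
     */
    #include <sys/mman.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        size_t len = 64 * 1024 * 1024;
        long pagesize = sysconf(_SC_PAGESIZE);
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE, -1, 0);

        if (p == MAP_FAILED) {
            perror("mmap");
            return (1);
        }
        /* Every first touch of a page below shows up as a fault. */
        for (size_t off = 0; off < len; off += (size_t)pagesize)
            p[off] = 1;

        printf("touched %zu pages\n", len / (size_t)pagesize);
        munmap(p, len);
        return (0);
    }

Comparing getrusage(2)'s minor fault counter before and after the touch loop should show roughly one fault per page.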
If you are using an Intel 10G Ethernet card with an 82598GB chipset, you’re using ixgbe(4). You may want to set the net.inet.tcp.sosend_agglim sysctl to a value over 12 in certain circumstances, as described by Francois Tigeot.
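Normally you'd just run 'sysctl net.inet.tcp.sosend_agglim=16' as root, but if you'd rather poke it from a program, here's a minimal sketch using sysctlbyname(3). It assumes the sysctl is a plain int, which I haven't verified; treat it as an illustration rather than anything official.

    /*
     * Sketch: bump net.inet.tcp.sosend_agglim via sysctlbyname(3).
     * Assumes the sysctl is an int; run as root to actually change it.
     */
    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
        int oldval, newval = 16;    /* "a value over 12" */
        size_t oldlen = sizeof(oldval);

        if (sysctlbyname("net.inet.tcp.sosend_agglim",
            &oldval, &oldlen, &newval, sizeof(newval)) == -1) {
            perror("sysctlbyname");
            return (1);
        }
        printf("net.inet.tcp.sosend_agglim: %d -> %d\n", oldval, newval);
        return (0);
    }

Passing NULL and 0 for the new-value arguments instead would just read the current setting.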
These are small changes, but they make life easier: Hammer now has a scoreboard file, for keeping an eye on mirror-streams running in the background. There’s also an ssh-remote directive, so you can use ssh without enabling an interactive shell, and a HAMMER_RSH environment variable so different remote shells can be used. These are all for Hammer 1.
If you ever wanted to read an extensive discussion about the scheduler, today’s your day. Mihai Carabas posted the details of a long discussion he had with Matthew Dillon about how the scheduler works. You may recall Mihai’s name from the very successful GSoC scheduler project that recently finished.
(look, a link to the new Mailman archive!)
All the mailing lists at @dragonflybsd.org have been converted over to Mailman. The old archives are still functioning, and will continue to update until I can find enough old material to retroactively complete the Mailman archives.
If you’re on any of the dragonflybsd.org mailing lists, I’m converting them over from bestserv to Mailman. I’ve done bugs@, commits@, hammer@, and test@ so far, and I’ll move the old archives over to the same format as soon as I find an actual mbox file with the old messages in it. The remaining lists should be converted tomorrow.
(If you got a note tonight from a list you were sure you were unsubscribed from, that was my fault; sorry! I didn’t understand the format of the bestserv user lists.)
DragonFly user varialus has created a page on the DragonFly website (it’s a wiki, after all) with all the notes taken while trying the installation and so on. There are far more notes there than I expected, so it’s worth a read.
Much of this new document has been around in other forms for a while, but now there’s a brief guide on porting drivers to DragonFly in the source tree.
If you’ve been wondering whether a solid-state disk is worth it: here are some recommendations on good models, and here’s a way to check SSD health. Seriously, they’re great.
I’ve uploaded DragonFly 3.0.3 disk images, both ISO and IMG. They should start appearing on a mirror site near you in the next 24 hours. This took a while after the tagging, I know, but I wanted to make sure every one of the images booted. I didn’t do that for a previous release, and regretted it.
If you have an LSI RAID card, meaning you are using the mfi(4) driver, Sascha Wildner has added /proc/devices to linprocfs, so that LSI’s MegaCLI configuration utility will run.