Linux General Questions

*Legion* wrote:

Well, you're not really treating it like a server if you're running X and screen savers.

The idea of "boot off the CD and upgrade OS with an installer GUI" is kind of a Windows-ism and not really how one would be expected to upgrade a Linux server.

Ultimately, you did find the process for upgrading if you have a GUI: shut it off and upgrade from the command line. Which is still much less of a disruption than rebooting the system into a CD boot environment and then rebooting again after the upgrade.

It's a media server as well, so the GUI proves handy when ripping DVDs or administering programs like calibre.

To clarify, the entire upgrade went through the GUI; I just switched to the console to kill the screensaver process. I was hoping to do it off the CD, since that would have ensured very few files were open during the upgrade, especially since all my PCs back up to this machine over the network.

pneuman wrote:

Why didn't you just SSH in and run do-release-upgrade?

Why SSH when you can KVM in?

Anyway, the upgrade went very smoothly after that little glitch. Couple of questionable changes in the default XFCE settings, but apart from that everything came back up just as it was before the upgrade. I'm yet again reminded that Debian package management is bloody awesome, especially compared to how often I've had yum sh*t the bed on upgrades.

trueheart78 wrote:

Care to highlight some of the difficulties you faced with Arch (that you also overcame)?

Also, what were the items that made you finally decide to go back to Ubuntu?

*Legion* wrote:

My second biggest complaint with Arch was the level of maintenance required. It's not nearly as bad as its reputation might lead you to believe (the distro, in spite of what some of its fans would like to believe, is not "hardcore"), but my daily use required more manual intervention than I cared for. Particularly when dealing with what I call "the boundaries" - when there's a change in something that a lot of other things rely on, and you run your update when certain packages have been updated to accommodate but other ones haven't.

Mostly, I found the returns of running a rolling distro to be a lot smaller than in the past. These days, I can add a PPA in Ubuntu for packages that I really care about getting new releases of, and for the rest, the 6 month cycle is plenty fast enough.
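For anyone who hasn't done the PPA dance, it's only a couple of commands. The PPA and package names below are made up, just to show the shape of it:

```shell
# Hypothetical PPA and package, purely for illustration:
#   sudo add-apt-repository ppa:someteam/someapp   # registers the repo and its signing key
#   sudo apt-get update                            # refresh the package lists
#   sudo apt-get install someapp                   # now tracks whatever the PPA publishes
# From then on, regular apt upgrades pull new releases from the PPA.
```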

Pretty much this.

Here's a rambling, anecdotal example. This is just what I went through setting up my desktop manager:

I decided I was going to run Arch with OpenBox. I install Arch, and then I get the OpenBox package. Then I go to the wiki to get OpenBox running. Great. How do I manage my network? Oh, well, I can run all of that from the CLI, or I can find some widget package. I need a dock/panel. I need a file manager. Screw this, I'm going to run Cinnamon. Okay, Cinnamon on Arch. Awesome. But I want different fonts, and icons, and a color scheme. Maybe I'll just get the ambiance theme from Ubuntu. Sweet, this looks nice. Wait, I just made Ubuntu, but I spent 20 hours doing it, and I still have to deal with maintenance. Screw it, I'm switching to Ubuntu.

I love Arch. I think it's really awesome. I really didn't understand Linux until I ran Arch for six months, but I don't want to devote 10% of my time to just keeping it running and tinkering anymore. Also, I'm fine with Unity.

That cohesion you're looking for is what I find a little difficult to do with Slackware. Granted: I could just run one of the DEs rather than bare Openbox, so most of the difficulty arises from that choice. I don't care for XFCE for some reason and KDE is nice but taxes my "legacy" system too much.

So I run tint2 and lxpanel and that's it. I launch almost everything with custom key maps in rc.xml, and everything else with bashrun or xterm. I don't like to use my desktop as a workspace so I don't need KDE's widgets, but I sometimes miss the "start" menu as a central registry of what's installed. Then, of course, there are the general aesthetic tuning options that are far more manual in naked Openbox than in KDE or its ilk.

I guess I could try a full LXDE setup, but it still doesn't seem like it's the sleek get-out-of-my-way-until-I-need-you-but-look-good-doing-it solution I'm looking for.

Scratched wrote:

Most of the distros I've looked at recently recommend a backup - clean install - restore cycle, as upgrading is possible but inefficient and slow.

That says to me, "don't use this distro".

Debian is legendary for release upgrades - server admins talk about how, "I've upgraded this box since Sarge!".

Ubuntu, I've found to upgrade in place pretty well.

And then of course there's the rolling distros like Arch.

You mentioned Fedora, and now that you've said that, I do remember upgrading CentOS/RHEL to be a "fresh install" situation. Which is a big reason I dropped CentOS from the one server I was running it on.

Citizen86 wrote:

SSH is scary.

Once you get the hang of it, SSH is incredibly powerful. Try these links for a start:

http://shebang.brandonmintern.com/ti...
http://blogs.perl.org/users/smylers/...

I'm a klutz at this stuff, but I love ssh. With my personal computer out of commission, it's like trying to scratch a missing limb. I'm limping along on my Inspiron D531 running XP SP3, my work system, and it's...oy.

EDIT: With 1GB RAM.

SSH is scary.

It's one of the best Unix tools ever invented. Seriously. If you have any real interest in the command line (which is the largest reason to use a Unix in the first place), then SSH is likely to be one of the most important programs you've got, after your shell of choice. (typically bash, for most folks.)

It's not just a remote communication tool; its ability to tunnel ports makes it a form of VPN, and since it accepts and returns input just like any other Unix program, it allows you to tie remote machines into your local command flow. Say you've got access to a machine with tons of CPU power:

cat mylocaldatafile | ssh username@supercomputer "/path/to/supercomputer/processing/app" | cat >myprocesseddatafile

(the cats there are superfluous -- you can redirect from and to the files directly with the < and > bash tokens -- but this method is easier to read for non-command-line jocks.)

Or, if you need to do the same thing on a bunch of machines at once, you can fairly easily write tons of automation stuff using ssh as your backbone, letting you copy scripts to remote machines, and then connect and run them, saving the results locally, so you can check for errors.
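A minimal sketch of that fan-out pattern. The hostnames, the user, and the script name are all invented here, and the loop only prints the commands (a dry run); drop the echos to actually execute them:

```shell
hosts="web1 web2 db1"   # hypothetical machines
plan=$(
    for h in $hosts; do
        # copy the script out, run it remotely, keep the results locally
        echo "scp healthcheck.sh admin@$h:/tmp/healthcheck.sh"
        echo "ssh admin@$h sh /tmp/healthcheck.sh >results-$h.log"
    done
)
echo "$plan"
```

Because ssh behaves like any other command in a pipeline, the same loop scales from three machines to three hundred with no extra machinery.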

There aren't many programs more useful in that environment.

Debian is legendary for release upgrades - server admins talk about how, "I've upgraded this box since Sarge!".

Debian's the best version of Linux for upgrades, but it is far from perfect. It is quite easy to hose a box very thoroughly upgrading versions. You have to be very careful, especially if anything about GRUB is getting touched. The grub -> grub2 upgrade was an unbelievable fustercluck... it broke several boxes for me until I figured out to completely remove grub and then install grub2 manually. The upgrade process was not in shippable condition, but they shipped it anyway. And the squeeze -> wheezy upgrade of dovecot blew up badly enough that it was fastest to do a new install, and then translate my old configuration files by hand. Not graceful.

You'd think Ubuntu would be at least okay, since Debian is, but I've had very bad luck with them, even in simple VMs. I find it's better to install a new VM, and then copy my files over.

Malor wrote:

but I've had very bad luck with them, even in simple VMs. I find it's better to install a new VM, and then copy my files over.

It really does feel a bit random at times. I have successfully upgraded Debian OpenVZ VMs / physical machines with very few issues, but that doesn't mean I don't get nervous every single time. I'm slowly working on creating local VMs that replicate my live servers down to the IPs they use so that I can test upgrades on them first, but it feels like a lot of work for not much reward.

avggeek wrote:
pneuman wrote:

Why didn't you just SSH in and run do-release-upgrade?

Why SSH when you can KVM in?

Regardless of how you get access to the server, I'm not sure why you didn't just use the online upgrade process instead of a CD. I didn't even know you could upgrade by booting a CD -- I can't imagine that upgrade path is anywhere near as thoroughly tested as the more traditional online upgrade. I've upgraded a tonne of Ubuntu and Debian systems in-place over the years, and very rarely does anything catastrophic go wrong; even on those occasions, it's always been fixable.
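For reference, the online path is only a couple of commands (a sketch; this assumes an Ubuntu server with update-manager-core installed and sudo access):

```shell
# Over SSH or at the console:
#   sudo apt-get update && sudo apt-get dist-upgrade   # be fully patched first
#   sudo do-release-upgrade                            # interactive release upgrade
# Useful variation:
#   do-release-upgrade -c    # only check whether a new release is available
# When run over SSH, the upgrader also offers to start a spare sshd on port
# 1022 as a lifeline in case the main daemon dies mid-upgrade.
```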

Malor wrote:
Debian is legendary for release upgrades - server admins talk about how, "I've upgraded this box since Sarge!".

Debian's the best version of Linux for upgrades, but it is far from perfect. It is quite easy to hose a box very thoroughly upgrading versions. You have to be very careful, especially if anything about GRUB is getting touched. The grub -> grub2 upgrade was an unbelievable fustercluck... it broke several boxes for me until I figured out to completely remove grub and then install grub2 manually. The upgrade process was not in shippable condition, but they shipped it anyway. And the squeeze -> wheezy upgrade of dovecot blew up badly enough that it was fastest to do a new install, and then translate my old configuration files by hand. Not graceful.

I didn't think Debian automatically upgraded to GRUB 2 at all -- the lenny release notes talk about giving you the option to set up GRUB 2 as an optional chainload so that you can test it without breaking your existing GRUB 1 setup.

As far as Dovecot goes, it's sometimes really hard to avoid problems with specific packages when those packages have been upgraded to new major versions that can't read older configs unmodified. Debian can upgrade the package for you, and if you've configured it just through package options rather than editing the config files then it can usually update the configs to keep you running, but if you've had to edit the configs by hand, and those configs are no longer compatible with the new version, then there's nothing the OS can really do.

If that wasn't the case with your Dovecot setup, and you really did mean it when you said you upgraded from squeeze to wheezy, then I think it's still a little unfair to criticise, since wheezy hasn't been released yet. You should file a bug, though, so the devs can look at that before the release.

trueheart78 wrote:

Running a laptop: Lenovo W520 with an i7, an Nvidia Optimus setup, and a Crucial m4 SSD.

In regards to Nvidia's Optimus, I've been pointed to the Bumblebee Project, which just rolled out v3.0.

Doesn't matter. It still has sh*t power management that will eat through your battery. It's just not really ready for prime time, UNLESS you can disable the hybrid sh*t in the BIOS.

I can disable it just fine.

Malor wrote:

after your shell of choice. (typically bash, for most folks.)

Tangential question - what do most folks here prefer as their shell? I started off with bash, and with the bash-completion package installed I've found it works reasonably well. But is there a better shell for folks like me who don't develop a lot but just administer a bunch of services on a Linux machine?

pneuman wrote:

Regardless of how you get access to the server, I'm not sure why you didn't just use the online upgrade process instead of a CD. I didn't even know you could upgrade by booting a CD -- I can't imagine that upgrade path is anywhere near as thoroughly tested as the more traditional online upgrade. I've upgraded a tonne of Ubuntu and Debian systems in-place over the years, and very rarely does anything catastrophic go wrong; even on those occasions, it's always been fixable.

You are probably right - I guess I was a bit nervous since this machine had a configuration that I didn't have much experience with (software RAID, LVM) and I felt that doing the upgrade in a mode where the only thing running was the actual upgrade process would reduce the chance of some other process kicking off that interfered with/broke the upgrade process.

Anyway, I had initially installed 11.10 because I wanted more updated packages for mt-daapd, but since then I've switched to Subsonic, which isn't quite so bleeding-edge in its requirements. Now that I'm on an LTS release, I probably won't upgrade until the next LTS release comes out sometime in 2014.

Edwin wrote:
trueheart78 wrote:

Running a laptop: Lenovo W520 with an i7, an Nvidia Optimus setup, and a Crucial m4 SSD.

In regards to Nvidia's Optimus, I've been pointed to the Bumblebee Project, which just rolled out v3.0.

Doesn't matter. It still has sh*t power management that will eat through your battery. It's just not really ready for prime time, UNLESS you can disable the hybrid sh*t in the BIOS.

FWIW, I've run Bumblebee under Ubuntu and Arch to great effect. My BIOS doesn't allow me to disable the discrete graphics.

If it works for you, go for it. But Bumblebee and Optimus was a total sh*t experience for me on my laptop.

If you use SSH a lot, check out Mosh.

Mosh looks nice!

Tangential question - what do most folks here prefer as their shell?

I think most people, these days, use bash. It's as battle-tested as code ever gets, and has tons of features.

There's another major school of thinking in shell land; C Shell (csh). C Shell itself has, I believe, mostly fallen out of use, but tcsh carries the torch; the whole point of that shell is that the scripting language is kind of C-ish. (bash isn't really like anything else, and its syntax is very strange, at times.)

There's also the Korn Shell, which I've never used. A quick description when I looked it up on Wikipedia suggests that it's a bash-alike, with some ideas taken from csh.

Oh, one suggestion: when writing bash shell scripts, it's usually best to make the first line #!/bin/bash instead of #!/bin/sh. Regular old /bin/sh is not guaranteed to be bash; it may just be a POSIX-compatible shell (like, for instance, dash on Debian*), and it's very, very easy to use bash-isms by mistake. This can cause you grief if you try to share scripts with others, or move them between machines.

So just always say 'bash' instead of 'sh', and there's a bunch of weird bugs you can neatly avoid.

*Debian does this because dash is very tiny, and /bin/sh is executed many many times on system boot, so a small, lightning-quick /bin/sh improves boot and/or resume speed quite noticeably.
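A tiny demonstration of the trap. This writes a throwaway script using a bash-only construct, then runs it under both shells; on Debian, where /bin/sh is dash, the sh run errors out:

```shell
# [[ ]] pattern matching is a bashism; POSIX sh only has [ ].
cat > /tmp/bashism-demo <<'EOF'
if [[ "$1" == h* ]]; then
    echo "hello there"
fi
EOF
sh /tmp/bashism-demo hi     # under dash: "[[: not found", prints nothing
bash /tmp/bashism-demo hi   # prints "hello there"
```

The script looks perfectly reasonable, which is exactly why these bugs slip through until the script lands on a box with a stricter /bin/sh.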

Malor wrote:

I think most people, these days, use bash.

Old people use bash! The kids these days use zsh.

Malor wrote:

Oh, one suggestion: when writing bash shell scripts, it's usually best to default the first line to be #!/bin/bash instead of #!/bin/sh. Regular old /bin/sh is not guaranteed to be bash, it may just be a POSIX-compatible shell (like, for instance, dash on Debian*), and it's very very easy to use bash-isms by mistake. This can cause you grief if you try to share scripts with others, or move them between machines.

So just always say 'bash' instead of 'sh', and there's a bunch of weird bugs you can neatly avoid.

*Debian does this because dash is very tiny, and /bin/sh is executed many many times on system boot, so a small, lightning-quick /bin/sh improves boot and/or resume speed quite noticeably.

Yeah, I actually got caught by this problem when porting init scripts from CentOS to Debian. That said, I mostly use bash for debugging, running scripts with bash -x, and then revert to sh once they're working. Given that I mostly write init or cron scripts, I think sticking with sh still makes sense for me.
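For anyone who hasn't tried it, bash -x prints each command to stderr (prefixed with +, with variables expanded) as the script runs. A quick sketch with a throwaway script:

```shell
cat > /tmp/trace-demo.sh <<'EOF'
#!/bin/sh
count=3
echo "count is $count"
EOF
# -x sends an execution trace to stderr while the script runs normally:
bash -x /tmp/trace-demo.sh 2>/tmp/trace.log
cat /tmp/trace.log   # lines like: + count=3  and  + echo 'count is 3'
```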

*Legion* wrote:

Old people use bash! The kids these days use zsh.

I've never tried zsh, gave fish a try but eventually came back to bash. If I did want to switch environments, I think right now I'm more interested in trying out mosh as a replacement for ssh.

*Legion* wrote:
Malor wrote:

I think most people, these days, use bash.

Old people use bash! The kids these days use zsh.

How come? Are you being serious in a jokey way, or just being jokey?

Malor wrote:
*Legion* wrote:
Malor wrote:

I think most people, these days, use bash.

Old people use bash! The kids these days use zsh.

How come? Are you being serious in a jokey way, or just being jokey?

It's a trend that I've noticed as well. In the same way that I'm not interested in spending a month being less productive to learn vim by heart, I don't really care enough to figure out why people love zsh. People do seem to love it, but bash works just fine for me.

Yeah, I appreciate the different/better tools, but I'm still in the process of learning the standard stuff. Trying to keep myself from getting overwhelmed as it is.

Oh, I love shells, and I'd drop bash in a heartbeat if I thought zsh was really better, but I'd like to make sure that it's really better before I invest the effort.

(a sign of getting old: at 18, I'd already have it installed.)

edit: well, I guess I'm only medium-old, because I'm reading the zsh web pages, and it looks pretty good!

Malor wrote:
*Legion* wrote:
Malor wrote:

I think most people, these days, use bash.

Old people use bash! The kids these days use zsh.

How come? Are you being serious in a jokey way, or just being jokey?

Serious in a jokey way.

Take a look at this and this and this for a few bits on why zsh is cool.

Check out oh-my-zsh, the popular zsh framework. (I don't use oh-my-zsh directly, but I do peek into it on occasion to go "shopping" for things I may want to pull into my zsh setup)

In the dev communities I'm closest to (Ruby, etc), there's a lot of love for a setup of zsh + tmux + vim.

On the subject of shells, you might amuse yourself for a bit by reading about/playing with Rush. (if you haven't seen it before) Unfortunately it looks like it has been mostly abandoned at this point.

I know awhile back I was interested in learning about the history of Linux, and only got pointed to a paperback, since no digital edition was available.

Fast-forward a bit, and after some Wikipedia jumping, I ended up being pointed to the Free Software Foundation's book section, specifically Free as in Freedom (2.0): Richard Stallman and the Free Software Revolution. I downloaded the free pdf earlier this week, and I've got no regrets.

I'm into chapter 7 now, and man, has this been an interesting read.

This is a biographical book on Stallman (and yes, he's made corrections and footnotes himself), and I've gotta say, I'm enjoying it more than I expected to. I honestly had no idea what to expect, and short of quips about him (and the occasional XKCD mention), I hadn't studied him myself (thanks, college degree!).

It has, however, helped me see what his stance on free software is, and why he sees things the way he does, and it also pulls back the covers on the origin of certain software I'm in the process of either learning or looking to learn (GNU/Linux, Emacs, etc).

Definitely recommend it for anyone looking for a more in-depth look at Richard Stallman, or even just a history lesson of sorts.

What you don't see from that book, most likely, is what a towering asshole Stallman was, in the early days of Linux, insisting that everyone, everywhere, call it GNU/Linux. He made permanent enemies out of the kernel devs, doing that. "His" kernel still isn't done, and the piece that made his software useful as a free system was Linux, but he was insisting on naming rights to what a completely unaffiliated team had done.

He would literally pop into discussions about something or other in Linux, scattered all over Usenet, and start insisting that all participants immediately comply with his preferred naming convention. Even if he had nothing to do with the conversation to begin with, he'd just show up out of nowhere and start telling people what term they should be using. He even did it in a thread I was in, once.

There's some justification for the stance, but it's quite possible to make Linux distros that don't include any GNU software at all. This wasn't that common, back then, but even if it did include GNU, why did it have priority of naming? Why not Linux/XFree86/GNU/BSD? Or, by lines of code in use, I think XFree86/Linux/GNU/BSD would have made more sense.

Linux was not a GNU project, and it wasn't inspired by the GPL -- Linus just happened to find the GPL, I think about six months into the project, and figured it sounded pretty okay. He didn't buy into any of the other stuff Stallman was trying to do, very much disliked the agenda that Stallman was pushing, and *hated* the attempt at name hijacking. It was a crappy thing to do, and much of the reason the kernel devs have refused to switch to GPLv3 is because Stallman made himself an enemy, when he didn't need to be.

It's funny, because when people criticize Stallman, I'll often chime in with just what an enormous impact he's had on the world; his act of ju-jitsu with copyright completely turned the computer world on its ear. There aren't many people who can directly claim to have changed the world in a way that almost everyone can see, and Stallman is one of them. But, at the same time, when I see posts idolizing the guy, I'm prompted to point out what an amazing asshole he can be, minimizing the work of others, and trying to jam his way into the limelight.

I suppose that's the nature of people that really shake things up; they have to be stubborn beyond imagination. But Stallman is very much a mixed bag, so don't take his thoughts uncritically.

Malor wrote:

What you don't see from that book, most likely, is what a towering asshole Stallman was, in the early days of Linux, insisting that everyone, everywhere, call it GNU/Linux. He made permanent enemies out of the kernel devs, doing that. "His" kernel still isn't done, and the piece that made his software useful as a free system was Linux, but he was insisting on naming rights to what a completely unaffiliated team had done.

Not sure, but it's safe to say he does come off as diligent/stubborn about what he believes.

Malor wrote:

Linux was not a GNU project, and it wasn't inspired by the GPL -- Linus just happened to find the GPL, I think about six months into the project, and figured it sounded pretty okay. He didn't buy into any of the other stuff Stallman was trying to do, very much disliked the agenda that Stallman was pushing, and *hated* the attempt at name hijacking. It was a crappy thing to do, and much of the reason the kernel devs have refused to switch to GPLv3 is because Stallman made himself an enemy, when he didn't need to be.

Oh, I plan on reading up on Torvalds as well, as I find it fascinating looking at the different reasons people have for their choices.

Malor wrote:

But, at the same time, when I see posts idolizing the guy, I'm prompted to point out what an amazing asshole he can be, minimizing the work of others, and trying to jam his way into the limelight.

I suppose that's the nature of people that really shake things up; they have to be stubborn beyond imagination. But Stallman is very much a mixed bag, so don't take his thoughts uncritically.

Oh, definitely forming my own opinion. Just glad to have more insight into it than I have previously.

I already posted this, but I'll just leave it here, http://www.jupiterbroadcasting.com/1..., and you can listen to the man himself. It's interesting if nothing else.

Where I think Stallman has problems is understanding where to pick his battles. He's always that guy, the one that wants to fight every battle.

I don't mean he should compromise. He's uncompromising in his vision and beliefs, and that's good. There's plenty of other people to play the role of the pragmatist. I love that he keeps his personal computing as strictly Free as humanly possible, because there should be somebody doing that, and he's the guy willing to sacrifice convenience to do it.

But not turning every tiny point into a battle isn't compromising, it's just picking and prioritizing targets.

He makes himself sound worse because every interview inevitably involves the interviewer trying to get him to "accept" compromises. "What about X? What about Y?". And of course he says no to X and Y, because he himself won't accept those compromises, and he sounds like an ideologue that's telling everyone else that they're bad for doing X and Y. He just doesn't know how to finesse those things to still say no but without coming across like he's judging your ignorant computing.

Also, his names for devices with DRM are just dumb. Calling the Kindle and iPad the "Swindle" and "iBad" is like the Micro$oft of the '10s (mercifully, nobody else seems to be repeating them).