Raspberry Pi Catch-All

Not for what I'm doing with it. One's fine. The only reason it's CPU-loaded is because I'm testing the RNG on it, and that's because the driver is weird.

If I bought more, and tested each one on a different piece of the test suite, I'm not sure the result would be as meaningful.

And I have no pressing need to test it anyway, so spending a bunch of money for a one-time thing that's merely curiosity would be kind of silly.

trueheart78 wrote:
Citizen86 wrote:

The Pi is probably one of the slowest processors you can find. As well it should be for $35, but it's still just so slow. I've run a web server off it, and it would work fine for completely static sites, but as soon as you put up anything dynamic, that means work for the processor, which means it's going to be seconds before anything loads. Just how it goes.

I wouldn't consider the Pi viable for any type of number crunching. I think trueheart78 said it took him an entire weekend to compile something simple on the Pi from source.

Which webserver did you try? I'm serving up PHP on nginx alright (no benchmarks).

As for compiling... Yeah. I wanted to get node.js and mongo support, for potential Meteor use. MongoDB took around a day to compile.

I used Apache, and it was on the 256mb RAM model. I didn't run top, so maybe it was a RAM issue (very possible). Nginx very well might be faster.

I set up a new Wordpress site on it and it was taking seconds to load the dashboard, so I basically said screw it and loaded the server locally. Nginx would probably have done better.

I used to run actual websites on processors that were no faster than that. Now, this was when everyone was on dialup, and a 1.5Mbit connection was considered a bit luxurious for a tech office of 150 people, but it was still perfectly possible to run something decent from a P2-300. It still should be, I would think. You just can't assume unlimited CPU resources when you design your site.

Malor wrote:

I used to run actual websites on processors that were no faster than that. Now, this was when everyone was on dialup, and a 1.5Mbit connection was considered a bit luxurious for a tech office of 150 people, but it was still perfectly possible to run something decent from a P2-300. It still should be, I would think. You just can't assume unlimited CPU resources when you design your site.

Of course you can, and I didn't say you couldn't, but I was saying that for running a database and a CMS, it runs rather slowly. As I mentioned, in my experience with Apache and Wordpress, trying to do anything in the dashboard was slow, with multiple seconds of waiting before each page loaded.

I also said that perhaps Nginx could help fix that. But for serving static html files where there is very little processor usage, I'm sure the Raspberry could do that fine, and at a fine speed as well.

You know, in rereading, the tone on my last post came out snippier than I'd intended; I'd originally written more, and then deleted a bunch of stuff, and didn't realize that it sounds short and dismissive without the additional explanation.

What I'd gone on to say was that the web panels on Wordpress might suck, or that Wordpress itself might just be assuming unlimited CPU when it shouldn't, but that the actual sites might run okay, even if the panel wasn't so hot. And I was also going to mention that DNS lookups can sometimes really screw up a website, and if you'd just installed the Pi, it might be failing there, rather than because of CPU grunt.

Even if it was totally a CPU problem, as you observe, that can be worked around. You can do a lot with a processor of that caliber, but it does take a different mindset.

As an aside, I still find it deeply amusing that, on the Pi, the CPU is a peripheral of the video card. It's just this little thing glued on there, almost an afterthought, and yet it's the total focus for the entire Pi ecosystem.

Malor wrote:

You know, in rereading, the tone on my last post came out snippier than I'd intended; I'd originally written more, and then deleted a bunch of stuff, and didn't realize that it sounds short and dismissive without the additional explanation.

What I'd gone on to say was that the web panels on Wordpress might suck, or that Wordpress itself might just be assuming unlimited CPU when it shouldn't, but that the actual sites might run okay, even if the panel wasn't so hot. And I was also going to mention that DNS lookups can sometimes really screw up a website, and if you'd just installed the Pi, it might be failing there, rather than because of CPU grunt.

Even if it was totally a CPU problem, as you observe, that can be worked around. You can do a lot with a processor of that caliber, but it does take a different mindset.

Probably true. I didn't get far enough to test an actual site on it.

And like I said, I had the 256mb RAM version, so it could have been Apache hitting a wall there as well. Wordpress doesn't require a super-server to work well, but I think that even any cheap VPS you could get would still probably have more CPU grunt than the Pi.

Even MySQL has issues with it, though. I had to assign 512 MB-1 GB of swap space just to get MySQL to START when I had Arch installed on the Pi. Some of that may have been me, since it was my first time setting up Arch on the Pi, but it's something to keep in mind.

Boy, things have sure changed since I was a kid. I was about to observe that, yeah, 256 megs is pretty skinny, and that a system with that much RAM would probably be better as a microcontroller or something -- that's tons of room to write code to control the GPIO pins.

And then I realized, wait, 256 megs is skinny? 256 million bytes isn't enough anymore.

There was a time when 256 thousand was pretty good, and one human generation later, my current machine has 32 billion bytes of main memory.

So, while I'm waiting for the Pi's dieharder run to finish (probably at least another week), I decided to play with the built-in RNG in the cheapo Haswell proc I just picked up.

The way it works for this RNG is that you actually execute a new CPU instruction, which directly returns a random value out of the hardware RNG. But, here's the thing: it's not really a true RNG. Rather, it's a PRNG, like /dev/urandom, that's constantly being reseeded by a "real" RNG, like /dev/random.

So, here's my problem: I don't trust it. And the reason I don't trust it is that the design doesn't make sense. Intel says, in its documentation, that the true RNG can generate three gigabits of data per second, internally. Three gigabits. And then they run it through a predictable algorithm? That makes zero sense.

Further, the machine instructions are designed to be able to fail... that is, to return a status that says that there were no random bits available. But if they're polling a PRNG, there would never, ever be a reason to do that. The chip will NEVER run out of "random" bits when a PRNG is running.

So, this is what I think happened. Intel made a great, true RNG on their chip, and the NSA leaned on them to butcher it, forcing them to run the generator through a predictable sequence. The design makes no sense otherwise. I think the weird design is the engineers trying to warn us that their RNG is not reliable, that the PRNG part was a retrofit, a hack to screw it up. And I don't think you should trust it.

Further, I hacked up a little C program to call the Intel-provided library (a teeny little static library to do the machine calls for you, basically), generate 1024 32-bit ints at a time, and dump them to stdout. I did two runs of that through dieharder, and it didn't fail ANY tests. That smells real bad to me. There are more than a hundred tests, and the threshold for failure is at 2.5%, so on average, at least a couple tests should fail on each run. The fact that none of them did tends to suggest that the results aren't random enough.

I don't trust rdrand, and I don't plan to use it.

Malor wrote:
I don't know what it does, but I have one in my backpack.

That's really weird. I can't imagine what you'd use such a thing for, unless it has a built-in Ethernet adapter. Maybe it's for USB range extension? Maybe?

The MiFi does not have an Ethernet port, but it does support Ethernet out its mini- or micro-USB port. Given that I have a cable of this breed (at least in terms of connectors), albeit with a different kind of USB connector, I presumed it was possible.

Ah, okay, I think what's going on there is that the MiFi looks like a USB-attached Ethernet device. So, you plug it into a PC, and the PC thinks it's a NIC on a stick, so to speak. I don't think any router out there will let you plug in new devices on USB, at least unless you're running a Linux derivative with the ability to configure things at the command line.

Boy, I just don't know what that cable is for. Very interested to find out.

A few things.

1. Cable is for an APC unit, USB to ethernet. I anticipate it's useless to me, though perhaps if I were a hacker I could write a driver for it. I am not a hacker.
2. DIR-655 used to support wireless bridge mode (prior to firmware 1.05) but no longer does. They maintain they changed it in keeping with the design of routers, that the feature unnecessarily complicated the 655. Many people feel it is a business/marketing decision.

I have an older Belkin Wireless G that I think supports wireless bridging. It's g speed but may end up being better overall. The MiFi gets flaky when I connect it to my PC and pull ethernet over the MiFi's USB port. I am considering picking up the WRT54GL in the trade thread that's flashed with DD-WRT already, but want to figure out if what I have will do the job first. I may be able to flash my Belkin with DD-WRT even.

Thanks again for the tips.

1. Cable is for an APC unit, USB to ethernet.

Maybe it has a tiny little USB Ethernet adapter in there, then. Is the end near the USB side sort of like a little rectangular box, much thicker than the actual cable?

They maintain they changed it in keeping with the design of routers, that the feature unnecessarily complicated the 655. Many people feel it is a business/marketing decision.

I would side with "many people" on that one. What a stupid thing to remove.

You already said DD-WRT doesn't support it, but check OpenWRT, too, just in case.

So they're selling at about half the yearly rate of the Commodore 64.

That's really not bad at all. It's a different market these days, of course, but that's still quite an accomplishment for such a hobbyist-oriented machine.

Malor wrote:
1. Cable is for an APC unit, USB to ethernet.

Maybe it has a tiny little USB Ethernet adapter in there, then. Is the end near the USB side sort of like a little rectangular box, much thicker than the actual cable?

There is what I'd expect to be a ferrite core, but perhaps that's where the magic is. Otherwise, it's a standard USB to RJ45 cable.

They maintain they changed it in keeping with the design of routers, that the feature unnecessarily complicated the 655. Many people feel it is a business/marketing decision.

I would side with "many people" on that one. What a stupid thing to remove.

Thing is, unless they're just shills, lots of people in the forums I've sifted through agree with the approach. I dunno, seems it's a pretty straightforward feature given its availability in other consumer-grade routers, like my Belkin G.

You already said DD-WRT doesn't support it, but check OpenWRT, too, just in case.

Hmm. Yeah, I'll check. Thanks.

EDIT: No dice. 620? Yes. 825? Yes. 655? Nope.

Wow, that bites.

That 54GL will definitely work in client bridging mode, but remember that the hardware is very old, and quite slow. They've improved the hardware some over the multiple generations, but you shouldn't plan on more than probably about 40 megabits, and if it's a real early one, it may not even go that fast.

A 5 megabit connection was pretty quick, when the 54G series was launched.

edit: plus, 20 megabits is about the best you can get through G networking, anyway. Even in wired mode, though, it won't get a ton faster than that.

Malor wrote:

Wow, that bites.

That 54GL will definitely work in client bridging mode, but remember that the hardware is very old, and quite slow. They've improved the hardware some over the multiple generations, but you shouldn't plan on more than probably about 40 megabits, and if it's a real early one, it may not even go that fast.

A 5 megabit connection was pretty quick, when the 54G series was launched.

edit: plus, 20 megabits is about the best you can get through G networking, anyway. Even in wired mode, though, it won't get a ton faster than that.

Yeah, that's why I was trying to get the DIR-655 working. However, roughly half of my devices are g anyway (PS3, this laptop, two iPhone 3GSes, iMac) so we're in at best a mixed environment. I tried a few years ago running a mixed g/n environment, with one router serving each protocol, and that was abominable. I am not savvy, and was less so then, so I may have, for instance, had them on dueling channels.

However, I'll take g if it means I can offload the routing function from the Jetpack. It supports up to 10 devices, and implies it will support them in parallel rather than serially, but I'm not sure this is actually within the hardware's capability. I regularly have a device or three that won't connect to it, with only 4-5 connected at the time. Also, I live in a single-story slab-foundation ranch house now, and its signal barely reaches the back bedrooms all of 20-30 feet away, through a couple walls of drywall. Even my old Belkin reached further through the plaster of my 104-year-old three-story.

I've been poking at languages lately. I want to get my hands dirty in the Python statistical stack for fun and potentially profit. I also recall using Mathematica and thinking it was kinda okay, albeit many years ago, and wondering if current versions (what with their reported/marketed internet-enabled publication/interaction features) wouldn't be worth considering. I'm thinking--quite beyond the scope of my abilities--about building a data science team at the small company I've joined, and I could see wrapping many of our analysis and reporting functions into Mathematica.

Which is almost entirely beyond the specific point of this, but (a) I own a Pi and (b) have kids so (c) this is a no-brainer.

It would be so nice if I could get my LAN situated so I can reliably play with Mathematica on the Pi. Even better if country innernet [sic] wasn't LTE, but, hey, I gots innernet.

Wow, that is super cool, I've always wanted to play with Mathematica. I have no actual use for it, I just wanted to play, and, well, voila!

Even my old Belkin reached further through the plaster of my 104-year-old three-story.

Wired networking always works. Wireless is inherently unreliable, but run a good cable, and the problem is solved permanently.

Just a thought.

Malor wrote:

Wired networking always works. Wireless is inherently unreliable, but run a good cable, and the problem is solved permanently.

Except when you have mysterious line drops, where a device that was auto-negotiating to 1Gbps drops down to 100Mbps. This after swapping out the Ethernet jacks. Can't afford to rip out the entire line now, either.

Well, even 100Mb is still a lot better than most wireless.

Malor wrote:

Well, even 100Mb is still a lot better than most wireless.

It's a definite #firstworldproblem, but when I have an internet connection that is faster than 100 Mbps (and climbing), "Fast Ethernet" feels like I'm being short-changed!

I'm still using a 54GL with DD-WRT. No one in the house has any wireless N or newer devices, so I haven't really been in a rush to buy a new router. Especially considering the newer routers I would want to buy are well over 150 bucks! That's just way too much. I bought this router ages ago for 50.

ASUS has a real good one for $95, the RT-N56U. N-class wireless, and it'll route about 800 megabits either up or down, 1200 if you load both directions at once.

Malor wrote:

Wow, that is super cool, I've always wanted to play with Mathematica. I have no actual use for it, I just wanted to play, and, well, voila!

Even my old Belkin reached further through the plaster of my 104-year-old three-story.

Wired networking always works. Wireless is inherently unreliable, but run a good cable, and the problem is solved permanently.

Just a thought.

I would love it if this were a solution to the root issue. However, the Verizon Jetpack 5510L has no RJ45 port. It has a mini-USB port that can be made to carry the network signal, not unlike tethering a phone. Actually, for some reason, I didn't really consider it that way until now, though it's obvious and not unprecedented after a quick DDG search. I'ma look into that further. This may still be a non-starter if it requires flashing with *-WRT.

The main difficulty in setting up the router was that the Jetpack doesn't have an RJ45 Ethernet port to connect the router to directly, so there's no easy way to cable the rear of the house to the source of the internet.

Well, once you have the wireless bridge running, you can haul wires anywhere you want. In a 104-year-old house with thick walls, they're pretty much guaranteed to work. Even if your house was a Faraday cage, they would work, although getting a signal to the MiFi would need an outside antenna.

IMAGE(http://i.imgur.com/E3QuHW6.jpg)

Netflix on raspbmc??!!

Much glad tidings!

So, I've been working with some ruby code and thought, "Man, it'd be great if I could get my Raspberry Pi to support this."

Fast-forward a few days, and after setting up RVM with Ruby 2.1 on my Pi, installing passenger and re-installing Nginx (passenger requires it to be compiled with support, so... bleh), I've got it up and running now (using Sinatra): http://ruby.th78.me/

trueheart78 wrote:

So, I've been working with some ruby code and thought, "Man, it'd be great if I could get my Raspberry Pi to support this."

Fast-forward a few days, and after setting up RVM with Ruby 2.1 on my Pi, installing passenger and re-installing Nginx (passenger requires it to be compiled with support, so... bleh), I've got it up and running now (using Sinatra): http://ruby.th78.me/

Nice! But...a table? +)

I need to settle on a project for mine. The issue is, all the projects I want to do are gargantuan, and the more-realistic ones are more mundane. For instance, somehow combining it with my Baofeng UV-5R to create a numbers station would be dope, but while it sounds feasible I'm still fighting for time and attention and energy to get deeper into Python and the scientific computing stack.

Great problems to have, of course.

muraii wrote:
trueheart78 wrote:

So, I've been working with some ruby code and thought, "Man, it'd be great if I could get my Raspberry Pi to support this."

Fast-forward a few days, and after setting up RVM with Ruby 2.1 on my Pi, installing passenger and re-installing Nginx (passenger requires it to be compiled with support, so... bleh), I've got it up and running now (using Sinatra): http://ruby.th78.me/

Nice! But...a table? +)

Darnit, muraii, I'm a developer, not an HTML guru.

muraii wrote:

Great problems to have, of course.

I have 3 Pi's: my hacky web server, an XBMC setup, and another that I've not settled on a project for just yet, but I'm thinking things up. Tempted to order some extra hardware for it (LCD screen, etc).

In the process of re-tooling my Ruby / Pi setup, as the passenger gem was great until I tried to post data to it. Then it bugged out and failed miserably.

So I've been working on getting Unicorn running, and I'm almost there. Got a few things to iron out, but I do have a single app running at the moment, and POST-ing works, so it's a win thus far.