Random Tech Questions you want answered.

Good point...

next question.... how hard is it to turn a visible-light camera into an electron "light" camera so you can see things at 125nm?

I feel like this would be a good Vsauce video...

Cameras are designed to work with photons, not electrons. What you would end up with would be an electron microscope, not a modified camera. Your "lens" would be an electric or magnetic field. You’d need a source of electrons in a beam, a receiver to catch them, and a device to turn that data into a picture.

Likewise, cameras are not microscopes. For those, you need stabilized bases, long focal lengths (which limit the area you can resolve) and good lighting. So a device that looks for skin cells in air flows would be some kind of microscope, while one that looks for viruses would be an electron microscope.

Cameras as we know them just can’t do these tasks.

Electron microscopes are still, I think, the size of a good-sized office desk. They have to accelerate a beam of electrons with pinpoint precision, and then a sensor has to detect where the electrons are bounced to, which gives you the outline of the object you're looking at. The thing being imaged has to be inside a chamber, and I think the scanning is quite destructive to whatever's being focused on, particularly if it's biological. (edit: note, however, that this is vague knowledge on my part, and is quite old, so modern electron microscopes may look quite different. They should still require a chamber, however, because I think they need sensors in a sphere to know where the electrons went.)

In effect, you're asking how to turn a telescope into an x-ray machine; the answer is to throw the telescope away and build an x-ray machine. The machines are so different that they have nothing in common beyond giving you a visual representation of a thing.

edit: I went and looked it up, and it appears that electron microscopes have sensors behind the object, not all around. The reason for the chamber is because the thing to be imaged needs to be in a vacuum so that the electrons don't scatter off air molecules before hitting the target. And it only works on extremely thin slices of materials, so biological material needs to be dried and embedded in resin before they can be sliced thinly enough to be imaged.

I love that you two just know these things.... thanks

I've never bothered to look at the specifics of an electron microscope. I was under the impression that they were quite hard to come by, but then a doc in one of my lectures seemed to hint at using one to diagnose hepatitis. I imagine the process would have to be somewhat cheap if they were willing to do this for a patient. But yes, still way too complicated if you're trying to catch ricocheting electrons

Robear, still not sure I understand the focal length thing.... What's the difference between Malor's people-in-windows example and cells?

edit: sure enough, hep A can be diagnosed via electron microscope according to some fancy medical best practice (journal?) my school subscribes to

I am sure that electron microscopes are still as hideously expensive as they were a few years back ($3 million+), when I contracted for a company that builds the high-res cameras for them. They were also one of the few companies that could afford an array of ~40 SSDs some 5-6 years ago. Each picture that the special camera took was 5-20 GB or more!

Is there anyone here who can recommend a plugin or app to easily download youtube videos?
Someone I know is starting a business where the content will be private PowerPoint presentations. They are going to convert them to videos with accompanying sound, but don't want to take up a ton of space on Google Drive. So I suggested the easiest way to access the videos would be private YouTube videos. At some point, the business will want to let non-technical people who may not have the greatest net access view the videos offline.

So they basically want them to be able to download the video, but not share the link with those who aren't supposed to have access. (They know they can't stop them from copying the video once it is downloaded, but that is much more involved than sharing a video link with 100 people who haven't paid for the program.)

Long story short: if anyone has recommendations for an easy-to-use plugin that downloads YouTube videos, I'd appreciate it.
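Not a plugin, but the usual command-line route (assuming the videos are your own, or you have permission) is youtube-dl or its more actively maintained fork, yt-dlp. A minimal sketch, with the URL as a placeholder:

```shell
# Download a video by URL, naming the file after the video title:
yt-dlp -o "%(title)s.%(ext)s" "https://www.youtube.com/watch?v=VIDEO_ID"

# Private/unlisted videos generally require your logged-in browser cookies:
yt-dlp --cookies cookies.txt "https://www.youtube.com/watch?v=VIDEO_ID"
```

It may be easier to hand non-technical users a script wrapping one of these commands than to point them at a browser plugin.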

I think you're in a bind, because people will also be able to share the downloads you give them.

I think there is a way that you can change the link required to access it; maybe that would enable them to keep it from getting shared too much.

Robear, still not sure I understand the focal length thing.... What's the difference between Malor's people-in-windows example and cells?

Well, it's simple. The focal length is how far the light travels after passing through a lens before it converges onto a point. The longer the focal length, the "bigger" the image on your end will be, assuming the object's distance does not change. So longer focal length equals more magnification. (Until the object is *inside* the near point of the observer - about 25cm for a human eye - when the opposite effect holds.)

However, the longer the focal length, the smaller the field of view is. Imagine cutting a two cm length of pipe and holding it up to your eye. Now, imagine doing the same thing with a 50cm piece of the same pipe. Intuitively, you know that the latter will show you a smaller circle of the world. That's field of view.

So as magnification increases via focal length (for a camera), the field of view decreases. That means that the more you magnify, the less you can look at. So you have to be looking right at the object to find it at great magnification. You might remember this from high school biology labs. And you can see it happen on your phone camera by zooming in on an object and watching the stuff around the object fall out of the picture.
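That trade-off falls out of the thin-lens field-of-view formula, FOV = 2·atan(sensor width / 2f). A quick sketch (the 36 mm sensor width is just an illustrative full-frame assumption):

```python
import math

def fov_degrees(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Horizontal field of view for a simple thin-lens camera model."""
    return 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_length_mm)))

# On a 36 mm-wide sensor, each doubling of focal length roughly halves the FOV:
for f in (25, 50, 100, 200):
    print(f"{f:3d} mm -> {fov_degrees(36, f):5.1f} deg")
```

More magnification (longer f) always means a narrower slice of the scene.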

Take physics, it's great for this stuff.

EvilDead wrote:

Hmm, well that is very annoying, as I enjoyed being able to search the history. We don't use Google Messages because of its reliance on cell signal, and SMS/MMS just suck. WhatsApp is out of the question now. Maybe Discord or Signal?

I would say Discord for personal stuff, probably Slack for professional stuff.

It's a fundamental principle of optics (and related fields like radio astronomy) that the best possible resolution (in radians) is equal to the wavelength of the light (or radio etc) divided by the diameter of the instrument (camera lens). A human skin cell is about 30µm (Wikipedia), so if you're taking a selfie close-up from about 30cm, you need a resolution of 1e-4 radian to make out detail the size of a skin cell. Visible light has a wavelength around 500nm, so the camera lens must be at least 5mm in diameter. That's plausible for a phone. Of course you'd need a larger lens if you want to make out details within the cell.
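As a sanity check on those numbers, here's the same back-of-the-envelope calculation in Python (the λ/D form is the order-of-magnitude diffraction limit; the exact Rayleigh criterion adds a factor of 1.22):

```python
def min_aperture_m(wavelength_m: float, distance_m: float, feature_m: float) -> float:
    """Smallest lens diameter that can resolve a feature of the given size
    at the given distance, using resolution ~ wavelength / aperture."""
    needed_resolution_rad = feature_m / distance_m   # angle the feature subtends
    return wavelength_m / needed_resolution_rad

# 30 µm skin cell, 30 cm selfie distance, 500 nm visible light:
d = min_aperture_m(500e-9, 0.30, 30e-6)
print(f"minimum aperture: {d * 1000:.1f} mm")   # -> minimum aperture: 5.0 mm
```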

That would also take a really good lens. Those things can be horribly expensive.

They might be able to do something with interferometry, using two cameras, as widely spaced as possible, as a virtual larger lens, but that would increase the exposure time. Plus, I think there might be focus problems, as doing that at selfie length, and doing it at something a hundred yards away, or a thousand, would each need the lenses aimed slightly differently. It wouldn't just be extend/retract, they'd have to rotate a little, and they'd have to be incredibly precise.

Probably not coming to cellphones anytime soon.

The idea of moving away from Google services came up in the Stadia thread recently. This is something I've thought about doing as well, but moving email has become somewhat of a stumbling block. I don't want to move to another giant corporation like MS, and it's hard to find consistently good reviews of any of the smaller services (fastmail, protonmail, etc).

Anyone using something that they strongly recommend?

Personally I use a custom domain and forwarders, to avoid being tied to a single email provider.

That is, my "public" email is [email protected], where bar.com is a domain I registered. That's the address I give to people, and use for logging into twitter and whatnot. At my registrar, I have it set up to forward mails from that domain to [email protected], and also to a protonmail as a backup, and I do my actual day-to-day emailing from the gmail account.

Obviously the main account could be at outlook/yahoo/fastmail/wherever. The main point is that my signups and business cards and whatnot aren't tied to the provider, so if gmail banned me or whatever I wouldn't lose all my other accounts.

I've been using ProtonMail for 2 years now, as a paid service. Moving from Gmail was indeed a huge hassle, but I just did it account by account over a month or two. Gmail is free and keeps on trucking alongside any other email service you would choose, so no rush there. As a LastPass user I just searched for "@gmail" to find all remaining user accounts tied to Google. I took the opportunity to simply delete a lot of unused accounts as well.

Calendar at ProtonMail is still in beta, and Contacts are not as sturdy, so I still use Google for both. Especially as it's easier to integrate Google Contacts into my iPhone.

I find the ProtonMail interface to feel a bit outdated, but because of that it's also more information-dense. I'm not a huge fan of all the whitespace in modern web design, but YMMV.

I never succeeded in fully transferring all email data to ProtonMail, so I used Google TakeOut to export my mails to at least have my own backup.

tl;dr version: yes it's a hassle, but there's no need to rush and ProtonMail works just fine.

I have been using Fastmail for years. It has been utterly reliable for me and I recommend it. I originally went with it because it is one of the few decent providers that lets you set up custom sending identities for your own custom domains, at least when I last researched it.

I moved my wife over to it a few years ago and she has never had problems either.

Is anyone good with Docker / Linux ? I’m struggling with mounting persistent storage or even knowing what that means...

billt721 wrote:

The idea of moving away from Google services came up in the Stadia thread recently. This is something I've thought about doing as well, but moving email has become somewhat of a stumbling block. I don't want to move to another giant corporation like MS, and it's hard to find consistently good reviews of any of the smaller services (fastmail, protonmail, etc).

Anyone using something that they strongly recommend?

What I do is to go fenomas one better; I host my own domain and mail server. I use Cloudflare for DNS because it's free and lookups themselves aren't very security-sensitive, and then use Postfix (to send and receive mail) and Dovecot (an IMAP server) on a VM out in the cloud.

It's safest to host your email on a server sitting in your home; Cloudflare supports dynamic DNS registration, so it's pretty easy to keep a server running if you have a firewall that either supports it directly, or can let you install and run scripts to register the DNS name whenever the IP changes. (I use an actual Linux box as a firewall, which makes this trivial.) But most ISPs will block outbound Port 25, so you usually need to bounce outbound mail through your provider's servers.
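For reference, the dynamic-DNS piece can be a small script run from cron. This is a sketch against Cloudflare's v4 API; the zone ID, record ID, token, and hostname are all placeholders you'd substitute:

```shell
#!/bin/sh
# Placeholders -- substitute your own zone ID, DNS record ID, and API token.
ZONE_ID="your-zone-id"
RECORD_ID="your-record-id"
API_TOKEN="your-api-token"

IP=$(curl -s https://ifconfig.me)   # current public IP, via any lookup service

# Update the A record to point at the current IP:
curl -s -X PUT \
  "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"mail.example.com\",\"content\":\"$IP\",\"ttl\":120}"
```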

Each of these scenarios (fenomas registering a domain and forwarding, hosting a VM mailserver in the cloud, and hosting your own mailserver at home) gets more complex, with the last being safest, but requiring a lot of fiddling to get dialed in. If you've never done any of that before, hosting at home will take substantial learning. Mail forwards take only a little, and VM-hosting takes a moderate amount.

Redherring wrote:

Is anyone good with Docker / Linux ? I’m struggling with mounting persistent storage or even knowing what that means...

I haven't used Docker, but many VM systems will boot on non-persistent media. That is, it starts from an image, and looks and works normally, but any changes to the image are thrown away at the end of the session. All changes to the base filesystem(s) are tracked in RAM and just discarded when the machine shuts down, so that each new startup is exactly the same.

On AWS, for instance, you generate a root image, and can boot that up, but changes you make never stick. If you want an updated image with current software, you more or less have to boot it up locally, update it, shut down, generate a new image, and upload it to AWS. You have to do this each time OS patches come out.

Persistent storage is exactly that; a filesystem, typically, that the OS mounts, and which does get permanently reflected to disk. Any data that you want to survive from boot to boot needs to go there.

How that translates to Docker-speak, I dunno.

Malor wrote:

What I do is to go fenomas one better; I host my own domain and mail server. I use Cloudflare for DNS because it's free and lookups themselves aren't very security-sensitive, and then use Postfix (to send and receive mail) and Dovecot (an IMAP server) on a VM out in the cloud.

Yeah, I think of myself as IT-competent, but running my own mail server is way too scary for me to consider. I don't grok the details, but I've seen lots of horror stories about having your server wind up on a blocklist for whatever reason and suddenly being cut off. (OTOH I don't know if forwarding from a custom domain carries any similar risks...)

Yeah, that can happen. I can't email msn.com from my current VM, for instance, it refuses to talk to me. So far, that's the only one.

Once upon a time, it was usually not that difficult to get yourself unblocked, but all the big tech companies ignore peons these days. Finding a clean, cheap IP to send mail from can be awkward.

Malor wrote:
Redherring wrote:

Is anyone good with Docker / Linux ? I’m struggling with mounting persistent storage or even knowing what that means...

I haven't used Docker, but many VM systems will boot on non-persistent media. That is, it starts from an image, and looks and works normally, but any changes to the image are thrown away at the end of the session. All changes to the base filesystem(s) are tracked in RAM and just discarded when the machine shuts down, so that each new startup is exactly the same.

On AWS, for instance, you generate a root image, and can boot that up, but changes you make never stick. If you want an updated image with current software, you more or less have to boot it up locally, update it, shut down, generate a new image, and upload it to AWS. You have to do this each time OS patches come out.

Persistent storage is exactly that; a filesystem, typically, that the OS mounts, and which does get permanently reflected to disk. Any data that you want to survive from boot to boot needs to go there.

How that translates to Docker-speak, I dunno.

In docker-speak I think it’s trying to tell me:

1. If I start a docker container the storage will be inside it, so if I delete the container the storage will be deleted too. I’m not sure why I would want to delete the container but maybe that’s because I don’t know enough about docker yet.
2. I can mount a directory outside the container for persistent storage that will survive destroying the container, and this is the correct way to do it.

I have tested and the storage does persist if I stop the container or reboot the host device.

The example for how to mount the persistent storage doesn’t apply to the file system I’m working with, and I don’t know how to translate it. I understand Windows file systems well and Linux file systems not very much at all. And this is running on a NAS with, I guess, some proprietary Linux version. I’m doing this over SSH with root access.

The example line is “-v /mnt/data:/data”, which I think means use /mnt/data as the storage location. In Windows the equivalent would be “-v d:\data:/data”.

My data volume is called /dev/md127/data (I think)

I tried “-v /dev/md127/data:/data” but it throws up all sorts of errors. Maybe docker just isn’t allowed to write there or I am reading the syntax wrong.

Apologies to anyone who didn’t want the tech questions to get this techy

Edit ...
ls /dev/md127/data says “cannot access; not a directory”, so yeah, I am not getting it.

Redherring wrote:
Malor wrote:
Redherring wrote:

Is anyone good with Docker / Linux ? I’m struggling with mounting persistent storage or even knowing what that means...

I haven't used Docker, but many VM systems will boot on non-persistent media. That is, it starts from an image, and looks and works normally, but any changes to the image are thrown away at the end of the session. All changes to the base filesystem(s) are tracked in RAM and just discarded when the machine shuts down, so that each new startup is exactly the same.

On AWS, for instance, you generate a root image, and can boot that up, but changes you make never stick. If you want an updated image with current software, you more or less have to boot it up locally, update it, shut down, generate a new image, and upload it to AWS. You have to do this each time OS patches come out.

Persistent storage is exactly that; a filesystem, typically, that the OS mounts, and which does get permanently reflected to disk. Any data that you want to survive from boot to boot needs to go there.

How that translates to Docker-speak, I dunno.

In docker-speak I think it’s trying to tell me:

1. If I start a docker container the storage will be inside it, so if I delete the container the storage will be deleted too. I’m not sure why I would want to delete the container but maybe that’s because I don’t know enough about docker yet.
2. I can mount a directory outside the container for persistent storage that will survive destroying the container, and this is the correct way to do it.

I have tested and the storage does persist if I stop the container or reboot the host device.

The example for how to mount the persistent storage doesn’t apply to the file system I’m working with, and I don’t know how to translate it. I understand Windows file systems well and Linux file systems not very much at all. And this is running on a NAS with, I guess, some proprietary Linux version. I’m doing this over SSH with root access.

The example line is “-v /mnt/data:/data”, which I think means use /mnt/data as the storage location. In Windows the equivalent would be “-v d:\data:/data”.

My data volume is called /dev/md127/data (I think)

I tried “-v /dev/md127/data:/data” but it throws up all sorts of errors. Maybe docker just isn’t allowed to write there or I am reading the syntax wrong.

Apologies to anyone who didn’t want the tech questions to get this techy

Edit ...
ls /dev/md127/data says “cannot access; not a directory”, so yeah, I am not getting it.

A complete guess here, as I've never tried it, but it's possible it doesn't like you trying to mount the raw device into the container. Mount that device in the local filesystem first (or see if it already is -- mount | grep md127) and then pass that mount point in.
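In other words, something along these lines (paths and the image name are illustrative):

```shell
# Check whether the md device is already mounted, and where:
mount | grep md127
# e.g. "/dev/md127 on /data type btrfs (...)"

# If it weren't mounted, you'd mount it somewhere first:
#   mkdir -p /mnt/data && mount /dev/md127 /mnt/data

# Then bind-mount the mounted *directory*, not the block device:
docker run -v /data/mydir:/data some/image
```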

Redherring wrote:

Is anyone good with Docker / Linux ? I’m struggling with mounting persistent storage or even knowing what that means...

With Docker you can use something like docker-compose to define a persistent volume and attach it to your container. When you bring up your compose stack, the same persistent volume will be attached each time and mounted at the defined path in the container. Depending on what you are running in the container, you can find an example compose file to work from. Keep in mind that in Docker you should not be persistently modifying the core files of the container after you stand it up; you build a new container image if you want to change it. The reason persistent volumes exist is for containers of things like databases, where the db software itself does not change while running but its persistent data does.
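As a concrete sketch, a compose file with a named volume looks something like this; the image and mount path are just examples:

```yaml
version: "3"
services:
  db:
    image: postgres:13            # example image -- substitute your own
    volumes:
      - dbdata:/var/lib/postgresql/data   # named volume, mounted in the container
volumes:
  dbdata:    # Docker manages this volume; it survives container removal
```

After the first `docker-compose up`, `docker volume ls` will show the volume, and removing the container leaves it intact.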

What Docker image are you trying to run, and for what purpose? If you just want virtual Linux as a guest OS, it will likely be a lot easier for you to use whatever native virtualization layer your OS has. Or VirtualBox.

Malor wrote:
billt721 wrote:

The idea of moving away from Google services came up in the Stadia thread recently. This is something I've thought about doing as well, but moving email has become somewhat of a stumbling block. I don't want to move to another giant corporation like MS, and it's hard to find consistently good reviews of any of the smaller services (fastmail, protonmail, etc).

Anyone using something that they strongly recommend?

What I do is to go fenomas one better; I host my own domain and mail server. I use Cloudflare for DNS because it's free and lookups themselves aren't very security-sensitive, and then use Postfix (to send and receive mail) and Dovecot (an IMAP server) on a VM out in the cloud.

It's safest to host your email on a server sitting in your home; Cloudflare supports dynamic DNS registration, so it's pretty easy to keep a server running if you have a firewall that either supports it directly, or can let you install and run scripts to register the DNS name whenever the IP changes. (I use an actual Linux box as a firewall, which makes this trivial.) But most ISPs will block outbound Port 25, so you usually need to bounce outbound mail through your provider's servers.

Each of these scenarios (fenomas registering a domain and forwarding, hosting a VM mailserver in the cloud, and hosting your own mailserver at home) gets more complex, with the last being safest, but requiring a lot of fiddling to get dialed in. If you've never done any of that before, hosting at home will take substantial learning. Mail forwards take only a little, and VM-hosting takes a moderate amount.

I get stuck doing enough sysadmin tasks at work to keep me from wanting to do it at home. Interesting idea, though. A buddy of mine ran his own mail server in college, and until now he was literally the only person I've ever known who has done that.

For the moment, I'm doing the 14-day free trial at hey.com (from the Basecamp people).

Hmm.

If I just type “mount” it says a whole lot of stuff, then
/dev/md127 on /data type btrfs (lots of things), which would suggest it’s already mounted... that, and I can access it other ways, like mapping a Windows share to it.

But

It also says the “subvol” is called / and there’s also a subvol called /home. Just /subfolder doesn’t seem to work, but if I use /home/subfolder:/data then the container data appears there. Thank you! Will need to do some more testing, but I think you have pushed me in the right direction.

Pandasuit - I’m trying to build some S3-compatible storage with MinIO.

billt721 wrote:
Malor wrote:
billt721 wrote:

The idea of moving away from Google services came up in the Stadia thread recently. This is something I've thought about doing as well, but moving email has become somewhat of a stumbling block. I don't want to move to another giant corporation like MS, and it's hard to find consistently good reviews of any of the smaller services (fastmail, protonmail, etc).

Anyone using something that they strongly recommend?

What I do is to go fenomas one better; I host my own domain and mail server. I use Cloudflare for DNS because it's free and lookups themselves aren't very security-sensitive, and then use Postfix (to send and receive mail) and Dovecot (an IMAP server) on a VM out in the cloud.

I get stuck doing enough sysadmin tasks at work to keep me from wanting to do it at home. Interesting idea, though. A buddy of mine ran his own mail server in college, and until now he was literally the only person I've ever known who has done that.

I also used to host my own domain's mail server; I did that for probably 5 or 10 years. Eventually some of the issues mentioned already wore me down: finagling with my ISP's outgoing mail; dealing with destination domains that just refused to receive mail from my domain; and even just spam (SpamAssassin wasn't doing a great job for me, and I suspected that Gmail, with a much larger pool of spam to feed into its filters, would work better). So I ported my domain over to being hosted at Google, and haven't looked back.

Redherring wrote:

...
The example for how to mount the persistent storage doesn’t apply to the file system I’m working with, and I don’t know how to translate it. I understand Windows file systems well and Linux file systems not very much at all. And this is running on a NAS with I guess some proprietary Linux version. I’m doing this in ssh with root access.
...

Is this on a Synology NAS, by chance?

-BEP

My strategy for this is to diversify. I have my own domain through Hover that provides email. That email gets handed out only to a very select few, essentially long-term relationships that I highly trust - whether I actually trust them or am forced to through circumstance. So my doctor, insurance people (don't really trust them, but kind of have to; if they turn bad they have everything anyway, so :shrug:), some members of my family - basically long-term relationships that are highly trusted not to sell my email address or other information and that (hopefully) have stronger security in place so they don't get hacked (not a given, but what can you do?).

Next level is other people that need my email, that I kind of trust, and that I have to do business with locally. The plumber. The vet. Folks like that who will "probably" not sell my PII, but on the other hand might. For them I use my ISP's email.

Then there is the rest of the world. For them I will use a number of GMail accounts, MSN, Yahoo, whatever is convenient. Those are for folks I do not trust at all: Steam, Ubi, Amazon, random stuff I buy online, folks like that. I think of these as burner accounts and, if one of them goes away, oh well. Would be kind of a pain but not crippling.

Keeping everything straight takes a little setup, but even MSMail can handle aggregating all of this nonsense without much trouble. Once set up, everything is fine.

I am not advocating that this is something everyone should do, it is just something I have come to do over the years. I think the point is to diversify what you use online and duplicate the important stuff in a number of places (pictures I want to keep are in a few places, important docs the same, etc.) so the loss of one service does not cause you to lose everything. Keeping everything synced is non-trivial (shoot, sometimes deciding what is important is non-trivial) but IMO worth the effort at the end of the day.

billt721 wrote:

I get stuck doing enough sysadmin tasks at work to keep me from wanting to do it at home. Interesting idea, though. A buddy of mine ran his own mail server in college, and until now he was literally the only person I've ever known who has done that.

A mail forwarder is pretty easy. Actually running a domain has a lot of little fiddly steps. For instance, I found that a catch-all domain alias was a *bad* idea. You want the catchall to go to a spam account, where you can fish things out if you need to. All mail to my domain is aimed at me, so the 'to' becomes an identifier, instead of a target. I typically use [email protected], where Xs represent random digits. If I'm making it in the field for someone I'm talking to, I omit the digits, because I'll forget them.

If you create them as you give them out, then each new person you exchange email with will cost you some time listing the address in a text file, at least assuming that you use Postfix. If you also want to reply as that address when you get mail there, then you will probably need to list it in your email client as well. I use Thunderbird, for instance, and it will reply with the right address if I've created a matching identity. If something comes in to "[email protected]" and I have a matching identity in Thunderbird, it will reply as "My Chosen Name ([email protected])" when answering.

I only have to do this, however, if I want to reply as that address. If I just want to receive mail (the normal default with most online companies), then it's just one line in a Postfix file.
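For the curious, that "one line" lives in Postfix's virtual alias map (assuming main.cf has virtual_alias_maps pointing at it); the addresses below are placeholders:

```
# /etc/postfix/virtual -- one public alias per line, all landing in the real mailbox
somestore-1234@example.com    realuser@example.com
plumber@example.com           realuser@example.com
```

After editing, run `postmap /etc/postfix/virtual` and reload Postfix so it picks up the change.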

Early on, this took a lot of work. At this point, it's about a minute per new account, three or four minutes if I set up a Thunderbird identity.

In exchange for that effort, and combined with graylisting on the mail server (I can explain that if you care), I get maybe three or four spam messages per year. As soon as I realize a given alias has fallen into the hands of spammers, I can deactivate it. Wham, fixed, clean inbox. If I really want to keep doing business with that entity, I can issue a new address for them. I rarely want to do this: if their care of their database is that poor, I probably don't want to be involved with them anymore. And you will always know the source of spam. Multiple times, I have figured out that a company was hacked before the company knew about it.

On the mail server, you will create an actual username, the real account that receives the mail. Never give this out to anyone. If that address falls into the hands of spammers, you can't easily block it. Changing it is kind of a pain, so it's a lot easier to just never give it out. Always hand out aliases. You can prep a couple ahead of time if you want, or you can just remember what you told someone, add it when you get home, and then go fishing in the spam account to see if they sent you anything before you activated their alias.

You can also make catchalls for various types of companies, but the lack of granularity will probably mean that you can't easily deactivate it if spammers get it. You have to figure out everyone that's using it, and update them to a new address, before you can turn it off, and you can't know for sure which company leaked it. Single, per-entity addresses mean that you can deactivate one person's alias without touching the others, and you can be certain who to blame.

It's not as intense as it used to be, but spammers still constantly try to phish for valid accounts they can bombard, and they always succeed at least some of the time. If you're using a single email address, then if anyone who knows it has bad data hygiene for even one click, it may end up being heavily spammed.

Aliases are an ongoing, small time investment to keep your inbox almost 100% pristine.