Post a news story, entertain me!

but we don't get office pizza.

Edit: moving this to a place where it's more on topic.

Paleocon wrote:

but we don't get office pizza.

Just throw a $10,000 pizza party for yourself.

iaintgotnopants wrote:
Paleocon wrote:

but we don't get office pizza.

Just throw a $10,000 pizza party for yourself.

IMAGE(https://i.insider.com/5e5cfc6ca9f40c0a300aa633?width=1000&format=jpeg&auto=webp)

There was a time when I would have found this earlier...

Oyster outrage: Woman's date sneaks out after she eats 48 oysters in viral TikTok video.

Video is NSFW.

A 2000-year-old practice by Chinese herbalists – examining the human tongue for signs of disease – is now being combined with machine learning and AI. It is possible to diagnose more than 10 diseases with 80% accuracy based on tongue colour; a new study achieved 94% accuracy with three diseases.

A 2000-year-old practice by Chinese herbalists – examining the human tongue for signs of disease – is now being embraced by computer scientists using machine learning and artificial intelligence.

Tongue diagnostic systems are fast gaining traction due to an increase in remote health monitoring worldwide, and a study by Iraqi and Australian researchers provides more evidence of the increasing accuracy of this technology to detect disease.

Engineers from Middle Technical University (MTU) in Baghdad and the University of South Australia (UniSA) used a USB web camera and computer to capture tongue images from 50 patients with diabetes, renal failure and anaemia, comparing colours with a database of 9000 tongue images.

Using image processing techniques, they correctly diagnosed the diseases in 94 per cent of cases, compared to laboratory results. A voice message specifying the tongue colour and disease was also sent via text message to the patient or nominated health provider.

MTU and UniSA Adjunct Associate Professor Ali Al-Naji and his colleagues have reviewed the worldwide advances in computer-aided disease diagnosis, based on tongue colour, in a new paper in AIP Conference Proceedings.

“Thousands of years ago, Chinese medicine pioneered the practice of examining the tongue to detect illness,” Assoc Prof Al-Naji says.

“Conventional medicine has long endorsed this method, demonstrating that the colour, shape, and thickness of the tongue can reveal signs of diabetes, liver issues, circulatory and digestive problems, as well as blood and heart diseases.

“Taking this a step further, new methods for diagnosing disease from the tongue’s appearance are now being done remotely using artificial intelligence and a camera – even a smartphone.

“Computerised tongue analysis is highly accurate and could help diagnose diseases remotely in a safe, effective, easy, painless, and cost-effective way. This is especially relevant in the wake of a global pandemic like COVID, where access to health centres can be compromised.”

Diabetes patients typically have a yellow tongue, cancer patients a purple tongue with a thick greasy coating, and acute stroke patients present with a red tongue that is often crooked.

A 2022 study in Ukraine analysing tongue images of 135 COVID patients via a smartphone showed that 64% of patients with a mild infection had a pale pink tongue, 62% of patients with a moderate infection had a red tongue, and 99% of patients with a severe COVID infection had a dark red tongue.

Previous studies using tongue diagnostic systems have accurately diagnosed appendicitis, diabetes, and thyroid disease.

“It is possible to diagnose with 80% accuracy more than 10 diseases that cause a visible change in tongue colour. In our study we achieved a 94% accuracy with three diseases, so the potential is there to fine-tune this research even further,” Assoc Prof Al-Naji says.

What disease do Akitas have then?

I don’t trust AI to write a sentence without lying, to do simple math, or to generate a reasonable image from a description on the first try.

I do NOT want AI anywhere near diagnostic decisions. I’m sure it could provide an entertaining or perhaps even a ballpark set of potential diagnoses for further examination.

Anything beyond that is pure hype designed to attract investment money, not to improve the lives of patients with better diagnoses.

Researchers likely aren't relying solely on automated analysis at this point, but it seems clear to me that that is the goal.

Machine learning is a powerful tool, but it is not remotely sufficient for reliable end-to-end problem solving without careful, time-consuming, expert vetting of intermediate and overall results.

Not only that, but the lede invokes a perniciously common logical fallacy. I leave the specific fallacy as an exercise for the reader.

Medical AI diagnosis has a much longer history than generative AI, and can be very useful in remote rural areas where you don't have doctors around, but a travelling health care worker could screen patients with this and other tools. It's way better than nothing. And of course you need to follow up a potential positive with an actual specialist or other on-site diagnostic means (like a blood sugar test).

Yeah, that read to me as if Ken was conflating ChatGPT with the kind of machine-learning diagnostic tools that have been in use for years already and have a proven track record.

I agree that no doctor should be using ChatGPT to diagnose patients, but that's not what's happening.

There are many different types of AI and some of them, like expert systems, are well advanced.

Yeah, I am not concerned with ML tools and models used by competent professionals. So many fields have been accelerated by the leverage that AI component technologies can provide. But the way AI is being upsold now as some kind of potential replacement for experienced humans is premature and dangerous.

A big part of the problem is the way reporters and MBAs lump together the pieces of tech that comprise the field of AI. Specialized ML tools are useful. LLMs are useful for specific research problems. Image recognition and categorization are useful components of powerful tools that greatly extend the abilities of scientists and engineers.

But standalone diagnosis machines are science fiction. And if the dozens of AI startups get their way, that fiction will come true à la Idiocracy rather than Star Trek.

No Expanse autodoc... Yet

I know that the regulatory environment for medical devices is, to say the least, extensive.

Anyone know if that extends to diagnostic tools? Do they have to have demonstrated efficacy before they're allowed to be deployed?

I suspect this covers at least part of that market, but you can probably find more.

I am flying in two days and I am glad there is no super-scary story to up my anxiety...

Oh no...

He had a mental breakdown, and from the description, he dissociated and thought he was in a dream, so he was probably reacting to events that were not even happening. I feel sorry for the guy. By all accounts it's way out of character for him.

I mean, up to the point that his break could have killed 80+ people.

So, there's a difference between deliberately acting to try to kill 83 people for ideological or criminal reasons, and having a mental illness that causes an unintended, unanticipated break with reality, isn't there? Are we allowed to sympathize with the latter? I mean, dude has ruined his life and was not even there for it when it happened, really...

What if it had been the copilot, and simply a mistaken throttle setting or something? Can we sympathize with that, even if everyone had died? Or maybe a mechanic who failed to spot a fraying control wire? At what point are we not allowed to show sympathy for someone's bad luck in causing a terrible situation?

In all cases, we need to remember the 83 people on board. That's not in question. But so many of us have some kind of mental issues, I shy away from condemning someone who had a really bad break with reality. It's not like it's their fault (hallucinogenic drug taking excepted). Right? Or are we still in the days where mental illness and evil are one and the same?

I’m sympathetic to mental health challenges, but there is still personal responsibility as well. Especially considering issues of potential mass casualties. This guy’s lucky he wasn’t successful.

I understand depression and I even understand why people commit suicide but I will NEVER understand trying to take innocent people with me.

For this case specifically, I don’t think anyone not directly operating a plane should EVER be inside the cockpit.

He may have been on mushrooms, but incidents like these are exceedingly rare.

Also, something that actually terrified me before my last flight because it affects a lot of planes:

Ghost in the Machine: How Fake Parts Infiltrated Airline Fleets

To date, Safran and GE have uncovered more than 90 other certificates that had similarly been falsified. Bogus parts have been found on 126 engines, and all are linked to the same parts distributor in London: AOG Technics Ltd., a little-known outfit started eight years ago by a young entrepreneur named Jose Alejandro Zamora Yrala.

The discovery by an alert crew in Lisbon blew the cover off a massive aviation fraud that has left engine makers and their customers in a frantic race to stem the fallout. As a result of the fabrications, thousands of parts with improper documentation have wound up at airlines, distributors and workshops around the globe. From there, they’ve ended up inside jet engines, effectively contaminating a growing portion of the world’s most widely flown airliner fleet.

So the off-duty pilot, who has a duty to remain mentally sound, took shrooms in his downtime, 2 days before a flight? Screw him. That's completely irresponsible. I stand corrected.

No Expanse autodoc... Yet

Peaches?

No mention of any previous research into identifying diabetics from their voice.

The scientists then pinpointed 14 acoustic differences between those with and without type 2 diabetes. Four of the differences assisted the AI in diagnosing type 2 diabetes more accurately.

This paper is in desperate need of peer review.

Mayo Clinic Proceedings: Digital Health states that its editorial board conducts peer review for the journal.

It's official: Jesus is a Swiftie

IMAGE(https://pbs.twimg.com/media/F_GFPX2XAAAlCxu?format=jpg&name=large)

[quote="Chairman_Mao"]It's official: Jesus is a Swiftie
[/qoute]

IMAGE(https://i1.sndcdn.com/artworks-000125019212-ub72uw-t500x500.jpg)