News updates on the development and ramifications of AI. Obvious header joke is obvious.
PSA: If you're still paying TurboTax to file your taxes for you, either go to a proper professional if you need the help, or use a free direct filing service like https://directfile.irs.gov/ or https://www.freetaxusa.com/ (which I have used for the past 2 years and it works great)
AI prompts can boost writers’ creativity but result in similar stories.
Does it, though? Because it looks to me like the second part invalidates the first.
I won't post the link, but a Steam game/demo managed to get into New & Trending with what appears to be a bunch of ChatGPT reviews. It's super obvious, to the point that people started posting funny reviews about it. Interesting, although I don't know how Valve would fight this.
OpenAI to make its own search engine, which I'm sure will be great.
The search engine starts with a large textbox that asks the user “What are you looking for?”
Pizza recipes.
I mean, that was Google, but the idea still sticks.
OpenAI to make its own search engine, which I'm sure will be great.
This isn't NFTs, but this feels very much like NFTs in that the vibe is that they are trying SO HARD to get us all to buy into this wholesale, because otherwise this very quickly becomes unprofitable.
Yeah, OpenAI is burning through insane amounts of money. The longer they stay unprofitable, the more compelling the thing they're promising needs to be to keep investor money flowing - but mostly what they can deliver is "what if we replaced this person and were worse at their job"?
It feels like this bubble is going to pop pretty soon.
What really disappoints me is that there are areas where AI could definitely be useful. I am thinking of things like medical imaging analysis, for one example. Instead of taking a narrow set of scans, because the person looking at them is limited by time, I can see a time when you take a shitload of scans, and AI goes through them looking for a specific set of markers or patterns, and then flags the scans that are useful for the human who has to make the final determination.
But because these AI TechBros are doing **waves hands wildly**, this is not getting the attention (and $$$) it deserves because all people see is this.
That tech, as far as I know, is being developed by entirely different groups with much more sensible goals, and it already exists anyway. I wouldn't worry about any of this dragging that down, because it actually does something useful and thus can show actual results.
Even the generative stuff is potentially useful for some things... but like every other application of AI so far, it requires human review and editing by someone familiar with the subject area to end up with usable results. The corporate insistence on trying to use it standalone, with little to no human involvement, is what is driving all their efforts into the ditch, though I suppose there's a certain justice in that.
If they could be satisfied by the modest gains available by treating it as another tool they'd probably have achieved some positive results, but being satisfied by "modest gains" has never been part of the script for these folks.
Machine learning has been used in medical analysis for decades. You just don't hear much about those systems because they existed long before the current LLM hype train -- and obviously the existing machine learning systems for medical analysis don't use Large Language Models, because that would be like using a jackhammer to drive a screw.
My company has been using machine learning for a long time now to predict perinatal complications.
Also used extensively in manufacturing and automotive for a long time... Generative AI and LLMs are just the glitzy "AIs" of the moment.
You need to be careful of using AI for medical image analysis.
It's more likely to pick up a malignant skin cancer if there's a ruler in the photo.
https://www.bdo.com/insights/digital...
Imagine this: A predictive AI program is integrated into a dermatology lab to aid doctors in diagnosing malignant skin lesions. The AI was built using thousands of diagnostic photos of skin lesions already known to be cancerous, training it to know with great exactness what a malignant lesion looks like. The potential benefits are obvious — an AI might be able to notice details that human doctors could miss and could potentially accelerate treatment for patients with cancer.

But there’s a problem. Patient outcomes do not improve as expected, and some even tragically get worse. Upon reviewing the AI’s training materials, programmers discover that the AI wasn’t making its diagnostic decisions based on the appearance of the lesions it was shown. Instead, it was checking whether a ruler was present in the picture. Because diagnostic photos, like those shown to the AI during its training, contain a ruler for scale, the AI identified rulers as a defining characteristic of malignant skin lesions. The AI saw a pattern that its designers hadn’t accounted for and was consequently rendered useless for its intended purpose.
Which is not to say it won't be useful, but it's necessary to be so careful what data is used for training.
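For anyone who wants to see what that failure mode looks like in practice, here's a toy sketch (entirely made-up features and data, nothing from the article) of a classifier learning the "ruler" shortcut instead of the lesion:

```python
# Toy illustration of shortcut learning: the "ruler_present" column perfectly
# tracks the label in the (biased) training data, so the model leans on it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

malignant = rng.integers(0, 2, size=n)                  # ground-truth labels
lesion_signal = malignant + rng.normal(0, 2.0, size=n)  # weak but genuinely predictive feature
ruler_present = malignant.copy()                        # confound: rulers only appear in malignant photos

X_train = np.column_stack([lesion_signal, ruler_present])
model = LogisticRegression(max_iter=1000).fit(X_train, malignant)

# Evaluated on data collected the same biased way, it looks nearly perfect...
print("biased-data accuracy:", model.score(X_train, malignant))

# ...but take the rulers away (as in real deployment) and accuracy falls apart.
X_deployed = np.column_stack([lesion_signal, np.zeros(n)])
print("no-ruler accuracy:", model.score(X_deployed, malignant))
```

The first number comes out near 100% and the second one craters, which is exactly the gap between "passed validation" and "useless in the clinic."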
Prompt: "Ensure there is peace and prosperity on Earth, where there is no thirst or starvation or homelessness or suffering for any human."
Primary Objective: Eliminate thirst, starvation, homelessness, suffering for humans.
Assessment: Thirst, starvation, homelessness, suffering for humans all require humans to occur.
Conclusion: Eliminate all humans.
I'm not sure 'AI' will have revenue/profits that can be directly attributed to it in the short term. It might help tech start eating itself, which wouldn't be a bad thing.
Google search is just terrible compared to when I first found GWJ.
If we get back to actually useful search engines that don't tell us to do ridiculous things scraped from Reddit, that's a win. Not sure how that gets monetized besides eating Google's lunch + that pool of money growing. The masses are not signing up to pay $5-10 a month for better search. So it will come from ads and our eyeballs.
MS will bring back Clippy just so it can be voiced by Awkwafina.
Don't make me hate Awkwafina.
You need to be careful of using AI for medical image analysis.
It's more likely to pick up a malignant skin cancer if there's a ruler in the photo.
Yeah, that's an old problem in machine learning. Back when I was an undergraduate comp-sci student (near the dawn of the millennium, in 2001*), a professor told us about a neural network trained to spot camouflaged tanks in forested images (or something along those lines). It passed testing, then proved to be entirely useless in practice. Why? Because in the training data all the images with tanks were taken at one time of day, and all the images without were taken at another, so all the neural network did was track the position of the sun.
* And I'm pretty sure this story was old even then.
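These days there are at least cheap ways to notice that kind of confound before anything ships, e.g. by checking what the model actually leans on. A rough sketch using scikit-learn's permutation importance, with toy data and made-up feature names standing in for the tank photos:

```python
# Toy illustration: permutation importance shows the model depends almost
# entirely on "brightness" (the time-of-day confound), not on any tank feature.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 500

has_tank = rng.integers(0, 2, size=n)
brightness = has_tank + rng.normal(0, 0.1, size=n)  # confound: tank photos all shot in morning light
texture = rng.normal(0, 1.0, size=n)                # carries no real tank signal in this toy setup

X = np.column_stack([brightness, texture])
clf = RandomForestClassifier(random_state=0).fit(X, has_tank)

result = permutation_importance(clf, X, has_tank, n_repeats=10, random_state=0)
for name, importance in zip(["brightness", "texture"], result.importances_mean):
    print(f"{name}: {importance:.3f}")
```

If the only feature with meaningful importance is the lighting, someone who knows the domain has a fighting chance of asking "wait, why?" before it gets deployed.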