The Joys Of Programming

deleted

Sounds like you don't know how to code yet? I like this course for learning. You can shoot through it in a few dedicated days, or take it a week at a time for 10-20 weeks (part 2 is also recommended) https://www.coursera.org/learn/inter...

Are there any boring ass coding jobs no one wants to do that I could do or learn to do for money under the table? Is that a thing in this industry?

polypusher wrote:

Sounds like you don't know how to code yet? I like this course for learning. You can shoot through it in a few dedicated days, or take it a week at a time for 10-20 weeks (part 2 is also recommended) https://www.coursera.org/learn/inter...

Thanks!

Strangeblades wrote:

Are there any boring ass coding jobs no one wants to do that I could do or learn to do for money under the table? Is that a thing in this industry?

There are so many boring ass coding jobs no one wants to do. I don’t think I’ve seen too many under the table coding jobs, other than friends and relatives asking me to build sh*t for them for free.

Don’t do that.

Strangeblades wrote:

Are there any boring ass coding jobs no one wants to do that I could do or learn to do for money under the table? Is that a thing in this industry?

It's absolutely a thing you can do, and there are a lot of boring coding jobs. Hell, most of my days my goal is to kill whatever monotony I (and others) are dealing with, and that's where my software brain takes over.

As for finding these jobs, it's been ages. Lots of small companies will definitely hire programmers with enough experience to get themselves in trouble, so maybe look there. I remember one of my jobs was through a temp company where I had to take a programming test of theirs to prove I could do the basics, and then they helped me get a temp-to-hire job that I stayed at for the better part of a decade.

So there are definitely lots of different ways to do this sh*t.

IMAGE(https://media.giphy.com/media/LP5FLyAK0pr43qMSuH/giphy.gif)

trueheart78 wrote:
Strangeblades wrote:

Are there any boring ass coding jobs no one wants to do that I could do or learn to do for money under the table? Is that a thing in this industry?

It's absolutely a thing you can do, and there are a lot of boring coding jobs. Hell, most of my days my goal is to kill whatever monotony I (and others) are dealing with, and that's where my software brain takes over.

As for finding these jobs, it's been ages. Lots of small companies will definitely hire programmers with enough experience to get themselves in trouble, so maybe look there. I remember one of my jobs was through a temp company where I had to take a programming test of theirs to prove I could do the basics, and then they helped me get a temp-to-hire job that I stayed at for the better part of a decade.

So there are definitely lots of different ways to do this sh*t.

IMAGE(https://media.giphy.com/media/LP5FLyAK0pr43qMSuH/giphy.gif)

Excellent

I've been writing this python program that does a serious transformation of an STL. Got it working and it's great but performance is a big issue once my STL gets over 100,000 facets. So I'm researching ways of optimizing my algorithm and stumbled on pyvista. Thank you Gemini!

So converting to use pyvista is going well and whoever wrote it is much (MUCH!) better than I am! My program is flying!!

Moggy wrote:

I've been writing this python program that does a serious transformation of an STL. Got it working and it's great but performance is a big issue once my STL gets over 100,000 facets. So I'm researching ways of optimizing my algorithm and stumbled on pyvista. Thank you Gemini!

So converting to use pyvista is going well and whoever wrote it is much (MUCH!) better than I am! My program is flying!!

What does it do to STL files?

DeThroned wrote:

What does it do to STL files?

Performs a reflection in a cylindrical mirror. Sort of.
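The post doesn't include any code, but since pyvista exposes a mesh's vertices as an Nx3 NumPy array (`mesh.points`), a radial "reflection in a cylindrical mirror" could be sketched roughly like this. The mirror radius `radius` and the z-axis alignment are assumptions for illustration, not details from the post:

```python
import numpy as np

def reflect_in_cylinder(points: np.ndarray, radius: float) -> np.ndarray:
    """Reflect points radially about a cylinder of the given radius
    centered on the z-axis (r -> 2*R - r in cylindrical coordinates)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)            # radial distance from the axis
    theta = np.arctan2(y, x)      # angle around the axis
    r_new = 2.0 * radius - r      # mirror the radial coordinate
    return np.column_stack((r_new * np.cos(theta), r_new * np.sin(theta), z))
```

With pyvista you'd assign the result back with `mesh.points = reflect_in_cylinder(mesh.points, radius)`. The whole array is transformed in one vectorized pass, which is the kind of thing that makes a 100,000-facet mesh fast compared to a per-facet Python loop.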

I was distracting myself this weekend by staying busy and learning/building things.

I intended to spend time learning C# but got bored.

Instead I upgraded my home AI assistant setup.

I now have more than 20 SLMs (Small Language Models) running on my home computers (including a multimodal one that understands images) and a better copilot UI I can access from any web browser. I can now easily add/update/refine/remove SLMs.

It’s still a completely private system with all processing done locally. None of my data leaves my network. It can fetch information from the internet when needed but it mostly uses my local files as context.

The main reason I have so many SLMs is that I want to figure out which ones are the most effective for my use and be able to retest as new ones are released.

So I also started writing a LM testing system where each LM is asked to perform a set of tasks and I measure how well they do after repeated runs. Right now all my tasks are designed to test how well the LMs can use tools since that’s one of the most critical features I want support for. SLMs are nowhere near as good as GPT-4 at tool use so I’ve been building up some code that generates better prompts and tries to fix/coerce imperfect responses into the expected formats.
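None of the harness code is shown, but the scoring loop described (each LM asked to perform tasks, measured over repeated runs) might look something like this sketch. `ask_model`, the task format, and the pass criterion are stand-ins, not the actual implementation:

```python
import json
from typing import Callable

def score_tool_use(ask_model: Callable[[str], str],
                   tasks: list[dict], runs: int = 5) -> float:
    """Run each task several times and return the fraction of replies
    that parse as JSON and name the expected tool."""
    passed = total = 0
    for task in tasks:
        for _ in range(runs):
            total += 1
            reply = ask_model(task["prompt"])
            try:
                call = json.loads(reply)
            except json.JSONDecodeError:
                continue  # unparseable output counts as a failure
            if call.get("tool") == task["expected_tool"]:
                passed += 1
    return passed / total
```

In practice `ask_model` would call each local SLM's API in turn; the repeated runs smooth out sampling noise so models can be compared and re-tested as new ones are released.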

SLMs have come a long way in the past year or so. Much more powerful and much easier to set up.

Sounds awesome panda. What is the tech stack (llama.cpp etc)? And are you using something to constrain the output to a grammar like json?

fenomas wrote:

Sounds awesome panda. What is the tech stack (llama.cpp etc)? And are you using something to constrain the output to a grammar like json?

I switched to Ollama containers for hosting the models and providing APIs. I'm using Open WebUI for the new UI for now. I like the multi-user support and ability to manage personas easily.

I was experimenting with a few structured prompting libraries like Instructor for the automated testing with JSON LM output. I'd previously been using TypeChat for structured prompting but wanted to try using my Pydantic types directly and Instructor does that. I ended up adding some features to my Pydantic base class to fix broken LM responses such as invalid JSON and do some coercion. I think I'm going to experiment with Guidance next.
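The actual Pydantic base-class features aren't shown, but the kind of fix-ups described (repairing invalid JSON from an LM before validation) can be sketched with the stdlib alone. The specific repairs chosen here, stripping markdown fences and trailing commas, are illustrative guesses at common failure modes, not the post's exact logic:

```python
import json
import re

def repair_json(raw: str):
    """Best-effort cleanup of a model reply that should have been JSON:
    strip markdown code fences and trailing commas, then parse."""
    text = raw.strip()
    # Remove the ```json ... ``` fences models often wrap output in
    text = re.sub(r"^```(?:json)?\s*|\s*```$", "", text)
    # Drop trailing commas before a closing } or ]
    text = re.sub(r",\s*([}\]])", r"\1", text)
    return json.loads(text)
```

A repaired payload can then be handed to a Pydantic model's normal validation, so the coercion layer stays separate from the schema itself.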

Ollama doesn't directly support tool calling (yet) so I'll be layering it on top myself. My current tool orchestration code is written in Python Semantic Kernel. I will update or replace that when I settle on a structured prompting approach. I might implement my new tool orchestrator as a proxy layer in front of Ollama for now. That way calls coming from the UI or other services think they are talking to Ollama APIs directly but the orchestrator will leverage tools automatically as needed using multiple turns against Ollama before returning the assistant response. Or I may switch to an assistant frontend that orchestrates tools itself and just implement the OpenAI tool calling formats in my proxy.
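The "multiple turns against Ollama before returning the assistant response" idea can be sketched as a small loop. Everything here is hypothetical — the message shapes, the JSON tool-call convention, and the `chat` callable are stand-ins, not Ollama's or OpenAI's actual formats:

```python
import json
from typing import Callable

def run_with_tools(chat: Callable[[list[dict]], str], tools: dict,
                   user_msg: str, max_turns: int = 5) -> str:
    """Ask the model; if it emits a JSON tool call, run the tool, feed the
    result back, and repeat until it answers in plain text."""
    messages = [{"role": "user", "content": user_msg}]
    reply = ""
    for _ in range(max_turns):
        reply = chat(messages)
        try:
            call = json.loads(reply)
        except json.JSONDecodeError:
            return reply  # plain text: the final assistant answer
        result = tools[call["tool"]](**call.get("args", {}))
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "tool", "content": json.dumps(result)})
    return reply
```

A proxy built this way looks like a plain chat endpoint to the UI, while quietly burning extra turns on tool calls behind the scenes.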

There are now some pretty effective open source coding assistant SLMs you can use completely locally that were trained with open data (you can confirm). There are VSCode plug-ins (and other UIs) for using them, much like how GitHub Copilot works. I'd be very careful using them for anything but personal projects unless you double check the license of all the training data to make sure you aren't integrating copy-left licensed code into your projects.

I currently pay for GitHub Copilot and have the mode enabled where it makes sure I’m not using copy-left code. But if people want a free option there definitely are some effective ones now. Sometimes they are just really useful to explain someone else’s code. I’m tempted to try using one for personal tasks that require a massive context window to do well. Some of these SLMs support 100k+ tokens of context now.

Anyone else here working with LMs much?

pandasuit wrote:

Anyone else here working with LMs much?

"Much" would be an overstatement, but I do use them to help with coding. I stopped searching the web (i.e. Stack Overflow) quite a while ago and started using ChatGPT to get help with coding questions. Our company recently added an internal ChatGPT maybe 3 months ago(?) so now I don't have to anonymize anything I ask it questions about, which is nice.

We also now have CoPilot integrated into VSCode and IntelliJ. All my work is in IntelliJ, and we don't have access to the Chat portion yet. It will be nice when we do. I think the Chat function is approved in VSCode for those that use it.

I'm a believer. It saves me time with code suggestions every day because it is able to use open files/context to guess at what I want. It nails it a surprising amount of the time.

Hopefully, I'll get to retire before it takes over my job.

-BEP

I have Copilot through work and I like it a lot. More than anything it just saves typing - mostly I'm not using it to make any decisions; I'm just writing the beginning of a straightforward chunk of code and letting it correctly guess the rest. I also have access to the chat feature, but haven't really found any uses for it.

What surprised me most is, I initially figured that I'd often need to prime it by writing a comment like: "// next we convert the X to a Y", and then put the cursor on the next line to see what it suggests. But in practice I find that it can usually guess what logic will come next without any hints.

OTOH I've had no luck using it for any kind of debugging. I've experimented with writing a comment like: "// the previous line throws an error on Safari because" and then seeing what it suggests, but I don't think I've ever fixed a bug that way. I mean it'll make reasonably sane suggestions - but they're clearly based only on the surrounding code, rather than on some kind of understanding of Safari behavior. So I don't think the AI has enough "world model" (as it were) to be used that way.

fenomas wrote:

I have Copilot through work and I like it a lot. More than anything it just saves typing - mostly I'm not using it to make any decisions; I'm just writing the beginning of a straightforward chunk of code and letting it correctly guess the rest. I also have access to the chat feature, but haven't really found any uses for it.

What surprised me most is, I initially figured that I'd often need to prime it by writing a comment like: "// next we convert the X to a Y", and then put the cursor on the next line to see what it suggests. But in practice I find that it can usually guess what logic will come next without any hints.

OTOH I've had no luck using it for any kind of debugging. I've experimented with writing a comment like: "// the previous line throws an error on Safari because" and then seeing what it suggests, but I don't think I've ever fixed a bug that way. I mean it'll make reasonably sane suggestions - but they're clearly based only on the surrounding code, rather than on some kind of understanding of Safari behavior. So I don't think the AI has enough "world model" (as it were) to be used that way.

Debugging is one thing the chat mode can help with. Open the relevant code files, paste the error into chat mode and type some context, it often has useful answers or at least points out something to investigate.

I’ve also used the chat mode to convert code between programming languages. I had a bunch of TypeScript types I wanted to convert to Pydantic and it saved me a lot of typing.

A lot of the time I use chat mode to ask questions about how to do something instead of searching online for answers.

I use Copilot sparingly. Sometimes it is a great help writing boilerplate. Sometimes it pisses me off with inane, useless, and constant suggestions and I rage disable it. I turn it back on a few days later when I have to write more boilerplate code. Copilot seems like a sometimes-useful intern that also goes off the rails very easily but without having to explain how to ask for PTO or that some behavior is inappropriate for the workplace.

The chat mode seems cool but I also have not found a good use case for it.

We are also exploring how to use LLMs to augment our data analytics efforts but we are still in the initial baby steps phase.

panda, it is interesting to read about your explorations into using home AI assistants. However (and I mean this with all respect), I always question if the effort of doing stuff like that is worth the payoff. I am old, set in my ways, and not very creative; home automation mostly bemuses me at this point.

tboon wrote:

I use Copilot sparingly. Sometimes it is a great help writing boilerplate. Sometimes it pisses me off with inane, useless, and constant suggestions and I rage disable it. I turn it back on a few days later when I have to write more boilerplate code. Copilot seems like a sometimes-useful intern that also goes off the rails very easily but without having to explain how to ask for PTO or that some behavior is inappropriate for the workplace.

Same here. I tried using it but 95% of the time I just want it to f*ck off and get out of my way.

All Copilot has really achieved with me when it comes to boilerplate is making me think, “I should have an editor snippet for that”.

The more of those snippets I make, the less useful Copilot is for me.
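For anyone who hasn't built these before, VS Code user snippets live in a JSON file (via "Configure User Snippets"). This example snippet is made up for illustration; it expands the prefix `log` into a console.log line:

```json
{
  "Log value": {
    "prefix": "log",
    "body": ["console.log('$1:', $1);$0"],
    "description": "Quick console.log for a variable"
  }
}
```

The `$1`/`$0` placeholders are tab stops, which is most of what Copilot's boilerplate completion buys you, minus the guessing.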

tboon wrote:

panda, it is interesting to read about your explorations into using home AI assistants. However (and I mean this with all respect), I always question if the effort of doing stuff like that is worth the payoff. I am old, set in my ways, and not very creative; home automation mostly bemuses me at this point.

It is mostly a toy project that is fun while also forcing me to learn new things. I started the first version early last year when my mentor at work suggested I learn more about the strengths and weaknesses of LMs as part of a software system and how to improve their effectiveness (fine-tuning, etc).

What I usually do in this kind of situation is come up with a project that has value to me but that also forces me to learn the things I want to learn. That's what my various home AI assistant projects have been.

I've now completed multiple LM projects at work. Digging into this stuff early had a large positive impact on the success of those projects and contributed to my promotion late last year.

Are my home AI assistants actually very useful tho? I've made some useful features that I think are really cool but still very much in the "fun toy to play with" category.

Sorry for the tardiness in responding, but thanks for the response panda.

Very interesting and a great perspective for me to ponder (I'm great at pondering but less good at getting off my ass and actually doing anything).

pandasuit wrote:

I've now completed multiple LM projects at work. Digging into this stuff early had a large positive impact on the success of those projects and contributed to my promotion late last year.

GDC 2024: Keeping Online Communities Healthy (Presented by Microsoft)
https://schedule.gdconf.com/session/keeping-online-communities-healthy-presented-by-microsoft/903871

Xbox’s new GPT-powered Content Moderation functionality offers innovative solutions and unprecedented community insights, paving the way for a safer and more engaging digital environment

The joy of not writing code?

I’ve written quite a few LM based agents that automate some of my dev tasks in very tightly controlled ways with quite a bit of success. The trick is to use them for things that are difficult (or impossible) to do with code but where you can easily verify (unit test) the outputs so you can have high confidence in their reliability. GenAI is just another way that I implement testable functions these days.
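That "testable function" framing can be sketched generically: wrap the model call in a function whose output is checked before it is returned. `ask` and `validate` here are placeholders, not anyone's actual agent code:

```python
from typing import Callable

def lm_function(ask: Callable[[str], str], validate: Callable[[str], bool],
                prompt: str, retries: int = 3) -> str:
    """Call the model, verify the output, and retry on failure, so the
    wrapper behaves like any other unit-testable function."""
    last = ""
    for _ in range(retries):
        last = ask(prompt)
        if validate(last):
            return last
    raise ValueError(f"no valid output after {retries} attempts: {last!r}")
```

In a unit test `ask` is a stub; in production it hits the model. The validator is what turns a stochastic generator into something you can have high confidence in.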

I watched that yesterday and thought it was pretty cool. I still haven’t tried any AI stuff for writing/testing code. A coworker has started using it to write docstrings for modules and functions, and the output is actually pretty thorough.

Huh, that video says almost exactly what I was telling some younger devs at work like two days ago. The analogy I used was, nailguns didn't put carpenters out of work because nobody ever hired a carpenter to hammer nails. Likewise programmers aren't paid to write code, we're paid to have scrum ceremonies deliver business value and whatnot, and writing code is an implementation detail.

It's also tempting to think of AI as another in a long series of abstractions for old devs to complain that new devs can't work without. Like N years ago it's Java kids that can't work near the metal, then later it's docker/k8s kids who can't sysadmin their own box, then soon it'll be AI kiddies building a mobile app without knowing any of the frameworks it's built on. We're not there yet, but it seems likely enough that we will be that I've been thinking about future-proofing my feelings about devs who rely too much on AI.

And those complaints will continue to be just as valid for appropriate contexts, and just as invalid for other contexts.

I don't view it as "ehh you just have to get over that kind of thinking." There will always be situations where it's really important that the dev understands how it works all the way down for some level of "all the way down."

pandasuit wrote:

The trick is to use them for things that are difficult (or impossible) to do with code but where you can easily verify (unit test) the outputs so you can have high confidence in their reliability.

I'm surprised the Test-Driven Development crowd hasn't jumped all over that idea yet.

"Just write the tests, let the AI write the code!"

I have a question for anyone with experience making an app for iOS or Android. I've been asked about making a webpage into an app. Looks like there are multiple tools you can use to make your webpage into an app.

However, from what I have read, Apple will reject your app if there's no reason for it to be an app instead of a webpage. So that's one issue. The other issue is that the desired webpage is actually a Microsoft webpage. That page is https://myapps.microsoft.com. Obviously I haven't been asked by Microsoft to do this.

So leaving the programming aspect out, I'm relatively sure that neither Apple nor Google will allow us to make an app that points to a Microsoft web page.

It looks like there used to be an app for this but it's been removed.

Anyway I figured I'd check in here in case I'm wrong.