The Joys Of Programming

I liked days 16 and 19 - they build on each other, and they both have that vibe where there's no single flash of insight that trivializes the problem. Rather, there's just a big space to search, and it's too big to do naively, and there are a number of possible ways to pare it down. Neat problems.

Day 20 today was a dud. Reasonably straightforward to solve, but a lot of typing and finagling with off-by-one errors until you get there. At least for me.

That said:

Moggy wrote:

Day 13 - I saw all them nested brackets and just noped out. Need to man-up

Here's one way to do that bit in JS:

var parsed = eval(inputStr)

fenomas wrote:
Moggy wrote:

Day 13 - I saw all them nested brackets and just noped out. Need to man-up

Here's one way to do that bit in JS:

var parsed = eval(inputStr)

:D

Yeah. You can do it in Python as well. I'm assuming that gets you a nested list variable and you then put all those into one final list and then .sort() ?

Oh, I had no idea Python had an eval. But yes, that's what I did - eval'ed and then sorted with a compare function. (But then after submitting I wrote a parser so I could feel like I'd done a day's work.)
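For the curious, here's a minimal Python sketch of that eval-then-sort step. The compare rules are just how I remember the puzzle describing them, and the file name and line format are assumptions, so treat it as illustrative rather than gospel:

from functools import cmp_to_key

def compare(a, b):
    # Negative if a sorts before b, positive if after, zero if equal.
    if isinstance(a, int) and isinstance(b, int):
        return a - b
    if isinstance(a, int):
        a = [a]
    if isinstance(b, int):
        b = [b]
    for x, y in zip(a, b):
        c = compare(x, y)
        if c != 0:
            return c
    return len(a) - len(b)

# Assumes one bracketed packet per non-blank line in "input.txt".
packets = [eval(line) for line in open("input.txt") if line.strip()]
packets.sort(key=cmp_to_key(compare))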

The Codingame AI Bot Fall Challenge has started, and there's still 12 days left. If you're curious what one of these looks like, you can see someone working through the first league here:

Seems like the new VSCode Server is out for everyone now and you enable it by using the “code tunnel” command.
https://code.visualstudio.com/docs/r...

I tried that briefly on my dev server and got an error so I haven’t had a chance to use it yet. Gotta find some time to debug the issue. Anyone else here try it yet?

Lately I've been using a Chromebook as my dev client machine, which is really convenient when I want to sit anywhere in my house and get some stuff done. I installed ChromeOS's Debian-based Linux environment and was able to easily get VSCode working in that using the official deb file. Since I'm doing remote development on my dev server, the Chromebook does little more than run the VSCode frontend. I'd still rather install nothing at all on the laptop, so I'll be trying to get that new VSCode server working.

TBH Linux on a Chromebook is surprisingly good. I tried using Remmina to RDP into my dev server as an alternative and it didn't perform as well as the local VSCode. The Microsoft RDP client Android app wasn't working for me on the Chromebook for some reason.

It’s kinda entertaining being able to develop on a potato but I’m probably going to move to a full laptop at some point here just for the bigger screen and full keyboard.

Mixolyde wrote:

The Codingame AI Bot Fall Challenge has started, and there's still 12 days left.

I submitted my first code to the arena and it kicked ass! I fine tuned my strategy until it could beat Boss 1 in the IDE before submitting.

It's now battling in the Bronze league, but seems to be tapping out in the 300s. It's also getting spanked by Boss 2 in the IDE.

Not sure if we should talk about strategy here or not. What do you think?

A week later, I finally finished an Elixir version of Day 15, part 2, that runs in milliseconds, and it's correct on the real input, but not the example, so I got my star, and I don't care. My long national nightmare is over.

I finally got time to start today. I'm almost done with Day 3. I sometimes wonder if I should be writing good-quality code, or just whatever it takes to get the answer. I can see Day 12 coming back and asking me to do something with the Day 2 code, and I'm kicking myself for using a lot of nested for loops and if statements.

Moggy wrote:
Mixolyde wrote:

The Codingame AI Bot Fall Challenge has started, and there's still 12 days left.

I submitted my first code to the arena and it kicked ass! I fine tuned my strategy until it could beat Boss 1 in the IDE before submitting.

It's now battling in the Bronze league, but seems to be tapping out in the 300s. It's also getting spanked by Boss 2 in the IDE.

Not sure if we should talk about strategy here or not. What do you think?

Congrats! That's a huge first step. I used a version of the movement and single-spawn bot from the end of the video above and it jumped me straight from the Wood league to Silver, no recyclers needed. I plan to tweak this one to see if I can improve my rank a little before adding recycler logic. It looks like the Boss at my current level drops one well-placed recycler every turn if it can. Hard to tell what its movement strategy is right now.

I'm fine with discussing strategy here. Other players talk general strategy on the Discord and forums, so I think it's fine. It's not like I have time to write some crazy simulation AI like I normally would.

I have three strategy threads I'm iterating on. One each for spawn, build, and move.

Build:

v1 - Do not build.
v2 - Max recyclers = 1, max builds per turn = 1. Build on the tile with the most potential that isn't in range of a recycler.
v3 - Same as v2, but only build if the enemy has more units.
v4 - Same as v3, but also only build if I lost units last turn (to stop those situations where the map had split and I ate away my territory for scrap I didn't need). (Rough sketch below.)

v1 caused me to be overwhelmed by enemy units pretty quickly after the enemy built a recycler
v2 solved that but caused me to be cut off from parts of the map
v3 seems to be better but I was experiencing cases where the map split and I dissolved my lead by spawning builders in my territory. Hence v4
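Before moving on to spawning, here's that rough sketch of the v4 build rule, as a Python sketch. The tile objects, the unit/recycler counts, and the "potential" score are hypothetical stand-ins for whatever your own parsing produces:

def pick_build(candidates, my_recyclers, my_units, enemy_units, lost_units_last_turn):
    # Max recyclers = 1, max builds per turn = 1 (we return at most one tile).
    if my_recyclers >= 1:
        return None
    # Only build if the enemy has more units than me...
    if enemy_units <= my_units:
        return None
    # ...and only if I actually lost units last turn.
    if not lost_units_last_turn:
        return None
    # Build on the highest-potential tile not already in range of a recycler.
    safe = [t for t in candidates if not t.in_range_of_recycler]
    if not safe:
        return None
    return max(safe, key=lambda t: t.potential)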

Spawn:

v1 - spawn as many robots as possible on the first tile
v2 - spawn as many robots as possible on a random tile

v1 was a bust
v2 works better

Move:

v1 - pick a random direction. Move all units.
v2 - pick a random enemy tile. Move all units there.
v3 - If an adjacent tile is owned by the enemy, move there;
else if an adjacent tile is neutral, move there;
else move to a random enemy tile.
If a tile has more than 1 unit, leave one behind. (Rough sketch at the end of this post.)

v1 was really random
v2 worked OK (got me to bronze), but didn't cut it in the bronze league
v3 seems to be working OK

so my v3/v3/v3 code had me with a score of 15.07 (rank 217) on the ladder.

changing to v4/v3/v3 got me a score of 17.99 (rank 11).

So I'm close. More intelligent spawning location next, I think
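And here's the promised sketch of the v3 move rule, in Python. The helpers (my_unit_tiles, adjacent, is_enemy, is_neutral, random_enemy_tile) are hypothetical, and picking randomly among qualifying neighbors is my own arbitrary choice:

import random

def plan_moves_v3(my_unit_tiles, adjacent, is_enemy, is_neutral, random_enemy_tile):
    moves = []  # (unit_count, from_tile, to_tile)
    for tile in my_unit_tiles:
        units = tile.units
        if units > 1:
            units -= 1  # more than 1 unit: leave one behind
        neighbors = adjacent(tile)
        enemy = [n for n in neighbors if is_enemy(n)]
        neutral = [n for n in neighbors if is_neutral(n)]
        if enemy:
            target = random.choice(enemy)      # adjacent enemy tile first
        elif neutral:
            target = random.choice(neutral)    # then adjacent neutral tile
        else:
            target = random_enemy_tile()       # else a random enemy tile
        if units > 0:
            moves.append((units, tile, target))
    return moves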

My current move strategy is:
1. Build a target list of enemy tiles without recyclers, and neutral tiles with at least one scrap (i.e. - not grass)
2. For each tile of mine with units, sort the target list by Manhattan distance ( abs(x1 - x2) + abs(y1 - y2) ), and move all units on that tile to the closest target.

Right now I am not handling ties for closest tile, or duplicate moves at all. I am thinking of some simple strategies for these cases, like removing a target tile from the list if it has been picked, or randomizing ties, etc.
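In code, that move step boils down to something like this (Python sketch; the tile objects are assumed to expose x, y, scrap, owner and recycler fields, plus a units count on my tiles):

def manhattan(a, b):
    return abs(a.x - b.x) + abs(a.y - b.y)

def plan_moves(all_tiles, my_unit_tiles):
    # Enemy tiles without recyclers, plus neutral tiles with scrap (not grass).
    targets = [t for t in all_tiles
               if (t.owner == "enemy" and not t.recycler)
               or (t.owner is None and t.scrap > 0)]
    moves = []
    if not targets:
        return moves
    for mine in my_unit_tiles:
        # min() breaks ties arbitrarily, which matches the "not handling ties" note above.
        closest = min(targets, key=lambda t: manhattan(mine, t))
        moves.append((mine.units, mine, closest))
    return moves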

My current spawn strategy is:
Sort the list of tiles I can spawn by lowest total Manhattan distance to all tiles with enemies. This tends to find the shortest average distance to enemies.
Spawn 1 unit on the first tile in the list.

This was enough to get me to Silver.
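The spawn scoring is roughly this, with the same caveats as the move sketch above:

def manhattan(a, b):
    return abs(a.x - b.x) + abs(a.y - b.y)

def pick_spawn_tile(spawnable_tiles, enemy_unit_tiles):
    # Lowest total Manhattan distance to all enemy-occupied tiles wins;
    # spawn 1 unit there.
    if not spawnable_tiles or not enemy_unit_tiles:
        return None
    return min(spawnable_tiles,
               key=lambda s: sum(manhattan(s, e) for e in enemy_unit_tiles))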

Just checking in to report that Advent day 22 part 2 is... kind of a bear. I flailed at it for a couple hours making no progress, then I took a walk in the park and thought of a way to flatten it down. But even then it was challenging to get the details and implementation right. Very satisfying once I finally got there!

(OTOH part 1 is just a setup for part 2, and will be trivial for anyone who's reached this point.)

I quite enjoyed day 22, although my solution is not robust enough to work on anyone else's input. I basically hardcoded the configuration of the cube faces. And I had to make a prop to help out:

IMAGE(https://media.mstdn.social/media_attachments/files/109/567/920/226/552/845/original/da18b16382cac00f.jpg)

That seems to be Reddit's 'prop du jour' for day 22. Along with disdain for those who hard-coded the 14 edge cases.

I've been too busy to keep up for the last few days, but hope to knock off a couple of part 1s today.

All finished with Advent! Really nice of the author to furnish all the challenges.

As a pointless flex, I added a button to my test page that runs all 25 solutions in sequence, and they finish up in under two seconds on my potato laptop. Which seems good enough.

Day 22 part 2 was definitely the one that took the most thinking. I don't know if there's a better crazy twist way to do it, but my way was...

Spoiler:

...from staring at the diagrams for a while, it eventually occurred to me that if you walk the perimeter of the cube layout, everywhere you find an inner (concave) corner you know that the two nearest edges will match up when the cube is made. I then thought I'd seen the answer and I started typing, but later on I realized you can't just walk the perimeter from any concave corner, as some later edges won't match.

Then after more staring, I realized (?) that if you start from any concave corner and trace both edges, those edges stitch together until you reach a point where both traces have reached an outer (convex) corner. If you do that process starting from all interior corners you will have traced the whole perimeter, and you now know all the edge connections and directions.

Then I coded all that up and it worked, and then I realized that the algorithm isn't actually universal - on layouts like the following it wouldn't find the outside edges:

..X...
.XXXX.
..X...

But those layouts happen to be exactly the ones where Advent part 2 would be trivial, as the outside edges would work just like in part 1. So I guess it's safe to ignore them here.

Then the final thing I realized was, I apparently don't know WTF the words "convex" and "concave" mean, as I used them wrong throughout my code comments (and also while typing this very answer).
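If the stitching part is hard to picture, here's a very stripped-down Python sketch of the idea on an abstract perimeter. The edge segments and corner classification are assumed to come from walking the layout, and I'm saying "inner"/"outer" to sidestep my concave/convex confusion. This isn't my actual solution, just the shape of it:

def stitch(segments, corners):
    # segments: cyclic list of unit edges along the perimeter.
    # corners[i]: the corner between segments[i] and segments[i+1],
    #             classified as "inner" or "outer".
    n = len(segments)
    pairs = []
    for k in range(n):
        if corners[k] != "inner":
            continue
        a, b = k, (k + 1) % n            # trace backward from a, forward from b
        for _ in range(n):               # bounded walk, just in case
            pairs.append((segments[a], segments[b]))
            # Stop once both traces have just reached an outer corner.
            if corners[(a - 1) % n] == "outer" and corners[b] == "outer":
                break
            a, b = (a - 1) % n, (b + 1) % n
    return pairs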

A lot of fun stuff in this year's Advent. I have working solutions to all the problems, but some still take a while to run (almost a minute in some cases), so I'd like to go back and figure out how to do them more efficiently. I should also redo day 22 part 2 properly... I feel vaguely ashamed for not doing it more rigorously. fenomas, your solution is pretty smart, I will definitely steal your ideas!

Anyone else here using LLMs (Copilot, ChatGPT) for programming?

I’m trying to see what they can do to make me more efficient. Working out some prompts. Personal project stuff for now until I’m approved to use them for work.

I’ve had some funny experiences where they hallucinate features that don’t exist and try to use them in generated code. Funny when I know enough to notice it right away and less funny when I miss it until I’m debugging later.

I’ve found the most success having LLMs generate code in languages/libs I already know well so I can validate quick or where I can automate the testing to debug faster.

In this video it’s the mistakes ChatGPT makes that are the most interesting.

I have been playing with OpenAI in my NeoVim environment.

I find it... interesting, but too often it seems like someone trying to lob bullsh*t into my editor.

If I wanted to spend my time playing the "is this going to save me time, or just give me garbage code" game, I would assign more tasks to junior devs.

I've only used ChatGPT a little, but found it very effective for "which APIs can I call on library X in order to accomplish task Y?" style tasks.

I'm not sure I'd feel comfortable using it as a "save me some typing" kind of tool - I'd need to inspect whatever it generates, and I might retype a lot of it for style/comments/etc. But for figuring out how to do an unfamiliar task it's been great.

It’s a real mixed bag right now. I’m cautiously optimistic it will improve.

Much like Stack Overflow I can sometimes get a useful answer from ChatGPT that I only marginally trust and have to validate myself. Unlike Stack Overflow I don’t see a bunch of comments from other people who’ve already validated and helped refine the answer and I don’t get links back to library docs and other references. The hallucinations I’ve seen ChatGPT generate really lower my confidence in its current state. Often ChatGPT seems to have read just enough docs that it can mash something together that looks convincing but may be total gibberish.

That’s all “public” knowledge too. What I’m really waiting to see is how well LLMs do when they are introduced to private repos and patterns. Can I teach it how my teams do things? Maybe? One day? I’m open to trying it out to see how it does but I’m not expecting anything amazing any time soon.

Crosspost from the AIs thread: I've recently worked my way through Andrej Karpathy's much-lauded series of AI videos, and man they're good. He has a great style of starting from nothing, bashing out code and solving problems as they arrive, and then later on revealing that this or that function he wrote is actually a widely-used ML construction whose name you've heard of. It's a refreshing change from the usual pattern of introducing an API and then explaining what problem it solves.

The code is all jupyter/pytorch, but I've never touched Python and I had no trouble following along.

*Legion* wrote:

Just writing shell functions boss, being very productive today...

cd () {
    if [ $# -eq 0 ]; then
        echo "cdeez nuts"
        builtin cd ~
    else
        builtin cd "$@"
    fi
}

I forgot that I did this. Finally did a bare "cd" and was reminded today.

Reminds me of college. *sheds single tear for "Yes, master?>" prompt.

I've been using a Jupyter Notebook app to develop personal projects from my phone for a while now. I sync the files to my other computers so I can work on stuff wherever I am.

Lately I wanted to make use of more flexible dev tools so I took the plunge and started tunneling from my phone to VSCode server running on my personal dev server. I've already been doing that from my Chromebook and other desktops for a while but never really tried from my phone before.

Now I can use many of the VSCode plugins and things I'm used to and have all the processing power of my dev server instead of my puny phone. Also, now I have GitHub Copilot from my phone which is kinda cool since it makes quick edits easier without a lot of typing. I hope Copilot X really improves the experience (I'm on the waitlist so haven't tried it yet). Slap a voice interface on that and I'll be that guy quickly writing broken code by talking to my phone.

I can now use Polyglot Notebooks which is nice since I've started to use C# some lately. I'm writing a project with Semantic Kernel and the C# version has lots of features the Python one doesn't have yet. Totally switching to the Python version when it catches up tho.

The big gap is that I can't use dev containers this way. VSCode web interface does not support them even when tunneling to the remote server that is running them. Really sucks as I rely on those heavily. I've had to port some of my projects to virtual envs for now. I may consider GitHub Codespaces just for this reason.

It's pretty cool having all the power of a full dev server from a client device that fits in my pocket. Except when I don't have a good connection and I have to go back to using the Jupyter Notebook app again. And the onscreen keyboard of my phone is missing some buttons I'd really like to have. Arrow keys for instance.

The most annoying thing I’ve run into using VSCode on my phone is the onscreen keyboard on iPhone is missing some buttons that would make me more productive. Cursor up for instance.

I looked around for custom iPhone keyboards and didn’t find one with buttons I want. There is one with left/right arrows but not up.

I’m using a shortcut plugin for VSCode that lets me trigger actions from on screen buttons more easily. Opening the terminal or launching the command palette for instance. There doesn’t seem to be a VSCode action for cursorUp that works in the console tho. Only in the editor. I’m surprised how annoying that one thing is for how I work.

So if I can’t find or make a custom iPhone keyboard with more buttons or make/mod a VSCode plugin to implement more actions I may be stuck using a Bluetooth keyboard for some stuff. That basically defeats the purpose of being able to get stuff done with only my phone.

I need a little button bar like the Jupyter Notebook app has.

I am an Emacs survivor

I had a friend in high school who was 100% unironically exactly like that.

The real disease is vim.

Hrdina wrote:

The real disease is using something other than vim.

I now have access to GitHub Copilot Chat. Starting to use it from my phone to see how well it works. Maybe my new code editor is a chat interface. If I can get my phone's speech-to-text interface connected, I'll get to try my fantasy of writing code by talking to a computer.

Edit:
Lack of a tab key on the iPhone keyboard is frustrating since that's the completion key for GitHub Copilot. Argh. I really don't want to carry any physical keyboard with me, but I haven't figured out how to make my VSCode plugin trigger key presses instead of commands.

BUT I did manage to whip up a quick experiment using only my phone and GitHub copilot where I’m running a generative AI model locally on my dev server to see how well it does.

Unfortunately the i7 47XX and 16GB RAM (and lack of a dedicated GPU) in my dev server are not enough to run more powerful local models well. The last one I tried took 10 to 171 seconds to answer the question "name 3 colors" (quite a range). I need to experiment with different models and try some of my other hardware.

Edit 2:
Enabling the dictation feature on my iPhone means I can now talk into any place I can type. It works well for Copilot chat, other than it doesn't know programming terms, so it autocorrects away from them. Thankfully Copilot still does a good job of understanding my intent. I haven't yet tried using it to write code directly into a file, but I'll try that too. I can pop up Copilot inside a document and ask it to make edits and things for me. Really curious to see how much I can do that way.

I'm not going to use this while outside my house, so I'm still looking for a way to add a few useful keyboard buttons to my coding session through an external keyboard or, preferably, onscreen buttons. I need tab and up arrow at least. Probably more, but I can do most other things using the command palette button I've already added. It's just single key presses I can't figure out.