Category Archives: Development/RTsoft

Random stuff I’m working on.

Adding a cool custom C++ class template to Visual Studio 2022

Ok, this is one of those posts that are more just to document something so the next time I install VS on a computer I remember how to do this and can find the file.

If you don’t use Visual Studio C++ then you should run. Or, prepare to dive into the incredibly boring world of the lengths programmers will go to just to avoid monotonous typing.

I’m all about classes. When you think of classy, think of me. Any time I do anything, I need a .cpp and .h file for it. And I want ’em set up pretty with the current date and stuff. Automagically!

See that? If I specify “GameProfileManager”, these two files get made with these contents. That’s all I’m trying to do, as I make a LOT OF CLASSES.

For years I used custom macros I’d made for Visual Assist, but at $279 + $119 yearly it’s a bit hard to continue to justify using VA when VS 2022 has most of its features built in these days.

Yes, you can pay less but rules like the payment “Cannot be reimbursed by a company or organization” and the world’s most annoying licensing system makes it not worth it.

So what’s a lad to do? Well, I finally got off my bum (well, on it, I guess) and learned how to get the same macros using the built-in features VS has had forever. They work a tiny bit differently, but close enough.

Hint: Don’t even try to use the “Snippets” feature to do this, it won’t work.

How to add

Download this file and place it in your %USERPROFILE%\Documents\Visual Studio 2022\Templates dir.

(or %USERPROFILE%\Documents\Visual Studio 2019\Templates if you’re using that)

DON’T UNZIP! VS reads it as a zip.

Close and restart Visual Studio.

Then add new classes using the “Add item” menu. Like this shittily produced unlisted video shows: (can you guess the music I used? Why did this even need music? Are you wondering what youtube video I was watching?)

How to change it

Great! Ok, but you probably wanted to customize it with your own name and not give me credit for everything you make. Well, fine, if you must.

To modify it with your own name/email/texts, just unzip the file you downloaded, you’ll see four files inside. Delete the original .zip. (you’ll make your own in a sec)

Edit the .cpp and .h as you like, then zip the four files up again. (I don’t think the name matters, it just has to be in the same directory as before) Restart VS and you should see the changes.

When editing, there are many special words you can use that get replaced automatically (full list here).

Some useful ones:

  • $safeitemname$ – (current filename without the .cpp or .h part)
  • $projectname$ – The name of the project this is in
  • $year$ – Current year (ie, 2022)
  • $time$ – The current time in the format DD/MM/YYYY 00:00:00
  • $username$ – The current user name
  • $registeredorganization$ – The registry key value from HKLM\Software\Microsoft\Windows NT\CurrentVersion\RegisteredOrganization

Hint: Edit the MyTemplate.vstemplate file to change the name and some other things. Useful if you want multiple templates, one for component classes, etc.
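For reference, here’s roughly what the .h file inside the zip looks like once it’s using those replacement parameters. (The header-guard style, comment text, and class layout here are just my taste, edit however you like):

```cpp
//  $safeitemname$.h
//  Created by $username$ on $time$
//  Copyright $year$ $registeredorganization$

#pragma once

class $safeitemname$
{
public:

    $safeitemname$();
    virtual ~$safeitemname$();
};
```

When you add an item named GameProfileManager, every `$safeitemname$` becomes `GameProfileManager` and the date/name fields fill themselves in.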

And that’s it! Suck it, Visual Assist. But also I love you so please get cheaper and less annoying to buy & license.

PlayStep Mini – Making a $10 thing to replace the $700 version

The PlayStep Mini is a tiny device that allows a parent to limit screen time from any HDMI device based on “charging up” energy using an exercise stepper.

Github sources here

This explains what it is

But why tho?

According to the CDC,  kids aged 8 to 10 spend an average of 6 hours a day in front of screens.

While rationing time is good, in the real world sometimes parents need additional tools to help keep kids fit.

In 2019, I made the PlayStep, an open source program that runs on a Raspberry Pi that allows a kid to power the screen with exercise.  Years later, we still use it for one of my kids!

But I guess I’m the only person on earth using it, even though it’s open source.  I don’t know why, I mean it only costs around $700 in hardware plus some low level knowledge of linux command line stuff…

Oh, you want something more reasonably priced and easier to use?

Behold: The Playstep Mini!

The differences are this:

  • No cool video mixing, just on and off
  • Zero latency
  • Extremely power efficient (uses 30 mA directly from the input HDMI’s 5 V power)
  • Standalone and easy to use
  • Can be made for under $10 (plus exercise stepper that you might already have)
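The core logic is dead simple: steps charge up “time energy,” and the energy drains while the screen is allowed on. Here’s a toy sketch of that idea in Python (this is not the actual firmware, and the numbers are made up — tune them like the FAQ below says):

```python
# Toy model of the PlayStep energy system: stepper steps add screen time,
# and time drains while the HDMI output is enabled. All values are made up.

SECONDS_PER_STEP = 2.0   # 'time energy' earned per detected step
MAX_ENERGY = 30 * 60     # cap so a kid can't bank a whole day of gaming

class PlayStep:
    def __init__(self):
        self.energy = 0.0

    def on_step(self):
        # Each detected step adds energy, clamped to the max
        self.energy = min(self.energy + SECONDS_PER_STEP, MAX_ENERGY)

    def tick(self, elapsed_seconds):
        # Drain energy while the screen is on, never going below zero
        if self.energy > 0:
            self.energy = max(self.energy - elapsed_seconds, 0.0)
        return self.hdmi_enabled()

    def hdmi_enabled(self):
        return self.energy > 0
```

On the real board, `hdmi_enabled` would drive the HDMI switch instead of returning a bool, but the charge/drain loop is the whole trick.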

If this is something you’d be interested in buying as a kit, well, leave a comment and I can buzz you if I or someone else ever offers it for sale.

If you’re someone who knows if it’s legal to sell something like this as a kit without certification of any kind, uh, also please let me know.


Won’t a kid just disconnect it and play without exercising?

If mine did, he knows he’d probably lose gaming altogether for a week at least so it’s never been an issue here.

Is it safe? Will my kid get a leg injury from too much exercise?

Please use common sense and adjust the ‘time energy’ per step, step resistance on the machine, and “max energy level” to something that is safe. Uh, if your kid is limping around you should definitely disconnect it.

Consult your doctor or whatever so I don’t get sued when your kid is skinny but with two herculean calves.

Can I get some made from JLCPCB and give them away or sell them?

Sure, I don’t care, do anything the GPL license allows. I could switch it if there is a reason.

You freaking used pcb auto-routing with HDMI? My Eyes! My God man, I would love to help you do it right

Hey, I’m a beginner! But that is very nice of you to offer to help improve it, you glorious human.

This version works great for 1080p, but the board layout is too stupid and noisy for 4k to work.  I guess I should stop using auto routing and get rid of all the HDMI vias by using more layers or whatever?

If you can help me work on this and you’re ok with its open and free hardware license, please check out the github project.

HoloVCS – Play Atari 2600 Pitfall! in 3D on a Looking Glass Portrait

Understanding how it works

Have a Looking Glass screen and want to try the above thing? Well, you can!

Download for Windows (~120 MB)

Github source

How to set it up and play it

To run this, you need:

  • A Looking Glass holographic display device connected to a Windows computer via HDMI (designed for the Portrait, but in theory it should run on all of them…)
  • The Holoplay driver installed
  • A beefy ass graphics card
  • A game rom (.a26 or .nes file) which needs to be put in the /atari2600 dir or /nes dir, depending on the system it’s for

If you don’t have a holographic device the screen will look like blurry garbage. If you want a build that works in 2D for some reason, I guess I could do one though…

Supported games (you must supply the rom, they are not included in the zip):

Note: Supported means they are playable, it doesn’t mean they work perfectly or don’t have issues…!

  • Pitfall! (Atari 2600)
  • Super Mario Bros (NES)
  • Castlevania (NES)
  • Jack Bros (VB)
  • Panic Bomber (VB)
  • Teleroboxer (VB, flickers between rounds)
  • Vertical Force (VB)
  • Wario Land (VB)

The actual filenames don’t matter, it detects supported games by checksum.

Note: There are two versions of Castlevania NES roms out there, if it’s not detected and not playing in 3D you might have the wrong one.

It will run unsupported games, but they won’t have any depth/3d effects.
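Detection by checksum just means hashing the whole rom file and looking the result up in a table, so the filename is irrelevant. Conceptually it’s something like this (the hash algorithm and the value shown are my placeholders, not what HoloVCS actually uses):

```python
import hashlib

# Hypothetical table mapping rom checksums to supported games.
# (This hash value is a placeholder, not a real one.)
SUPPORTED_GAMES = {
    "0123456789abcdef0123456789abcdef": "Pitfall! (Atari 2600)",
}

def checksum(rom_bytes):
    # Hash the entire rom so the filename doesn't matter
    return hashlib.md5(rom_bytes).hexdigest()

def detect_game(rom_bytes):
    # Returns the game name, or None for unsupported roms
    return SUPPORTED_GAMES.get(checksum(rom_bytes))
```
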

I’ve only tested with the Looking Glass Portrait and the OG 8.9. They both seem to run around the same speed, with the OG 8.9 being too zoomed in initially. (try the – key to zoom out?)

When it starts, it will show a menu with the hotkeys, but here they are:

Arrow keys: Move
Ctrl, Space, Enter: A and B and Start buttons (gamepad supported too)
Return: Reset game
Num 0 through 5: Set frameskip (higher makes the game run faster by not showing every frame)
Num 6 through 8: Toggle texture smoothing, shadows, lighting
P: Pause
A: Adjust audio to match game speed (experimental but can help with audio problems)
-/+: Zoom in/out. Hoping this will help with other Looking Glass sizes.
S: Save state
L: Load state
< and > : Cycle through detected games (any roms you’ve placed in the atari2600 and nes directories)


I can get 60 fps with everything enabled on a 5 GHz CPU with an Nvidia 3090. The exception is Virtual Boy, which I’ve limited to 50 fps to match the real device.

If a game is too slow, press 1 or 2 for frame skipping modes. If audio is weird, press A to cause audio to sync with the recent framerate. (gets rid of pops and scratches usually, but pitch/speed will be wrong making it… interesting…)

If you have problems, check the log.txt file for clues. (created in the root dir where HoloVCS.exe is)


Q. Does it support other games besides these?

A. It will play unsupported games without 3d plane effects, so not really

Q. I noticed you’re using emulators via a libretro dll interface, does this mean I can pop in more emulator types?

A. Yes! Well, no. I mean, each requires customizations to work properly and do 3d stuff.

Q. The snake in Pitfall! is mostly invisible!

A. This is a known bug, sorry. I mean, it’s a ghost snake now

Q. Why do some levels look weird or broken?

A. Sorry, I only made it so the first level works, didn’t worry about later stuff. It is possible to detect the current level/environment via PPU memory and adjust rendering to match the situation though.

Q. Why is it called HoloVCS?

A. It originally only supported Atari VCS emulation. Too lazy to change it


1.0 Initial release, supports Pitfall! for the Atari VCS

1.1 Added support for a few NES games too

1.2 Added some Virtual Boy support. The neat thing about this is there’s nearly no game-specific code happening; it’s reading the 3d position directly by hooking into the emulator at a low level. It’s not able to capture everything, but it’s enough to be a neat gimmick. The only game-specific thing I’m doing with VB is setting the zoom level to look slightly better for certain games.

Subfish: Search youtube channels for spoken words, export clips to Premiere/DaVinci Resolve timeline


  • Download all subtitles from a channel or playlist
  • Search subtitles for keywords (regex supported)
  • Export video timeline of all found clips as .EDL for Premiere or DaVinci Resolve
  • Will notify you on startup if a newer version is available
  • Full source code on github so feel free to submit patches, report bugs or make feature requests.
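The search step itself is conceptually simple: scan each downloaded subtitle file for a pattern and remember the timestamp where it was spoken. A rough Python sketch, assuming SRT-style subtitle text (this isn’t Subfish’s actual code, just the idea):

```python
import re

def search_subtitles(srt_text, pattern):
    """Return (start_timecode, line) for every subtitle line matching pattern."""
    hits = []
    regex = re.compile(pattern, re.IGNORECASE)
    # SRT blocks: an index line, a "start --> end" line, then text lines
    for block in srt_text.strip().split("\n\n"):
        lines = block.splitlines()
        if len(lines) < 3:
            continue
        start = lines[1].split(" --> ")[0]
        for line in lines[2:]:
            if regex.search(line):
                hits.append((start, line))
    return hits
```

Each hit’s start timecode is exactly what you need to jump the preview player to the right spot, or to turn the clip into a timeline entry.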

Exporting found clips to a Premiere/Resolve timeline

Here’s an example of exporting a lot of clips at once based on finding a single word and using Resolve’s scripting to automatically add the count, date, and youtube video title each clip is from. (the only “work” I had to do was hand nudge the clips forward and back a little so only the correct word was heard instead of a few seconds before and after as well)
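For the curious, a CMX3600-style EDL event is just a fixed-column text line per clip; generating one looks roughly like this (simplified — a real exporter also has to deal with frame rates, reel names, and source file mapping, and the column spacing here is illustrative):

```python
def frames_to_tc(total_frames, fps=30):
    # Convert a frame count to HH:MM:SS:FF timecode
    ff = total_frames % fps
    ss = (total_frames // fps) % 60
    mm = (total_frames // (fps * 60)) % 60
    hh = total_frames // (fps * 3600)
    return "{:02d}:{:02d}:{:02d}:{:02d}".format(hh, mm, ss, ff)

def edl_event(num, src_in, src_out, rec_in, rec_out):
    # One CMX3600-style event: event number, reel, track (V = video),
    # transition (C = cut), source in/out, then record timeline in/out
    return "{:03d}  AX       V     C        {} {} {} {}".format(
        num,
        frames_to_tc(src_in), frames_to_tc(src_out),
        frames_to_tc(rec_in), frames_to_tc(rec_out))
```

Premiere and Resolve both read this style of event line, which is why one boring old text format covers both targets.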

Download/install instructions (Windows)

To run Subfish, you need to download some Windows libraries from Microsoft first because I’m too lazy to make an installer for now. Requires Windows 7 or newer.

1. Install the .NET 5.0 Desktop Runtime. (Windows version is here)

2. Install the WebView2 runtime (Try here, look for x64 version probably).

3. Download the latest version of Subfish (in a zip) for Windows.

Inside the zip there is a folder called “Subfish”. Drag that folder onto your desktop (or somewhere) to extract it. Then enter it and run Subfish.exe. (The binaries are signed by RTsoft so Windows shouldn’t give you any trouble running it)

An exciting screenshot

Why I made this

Earlier I was doing some youtube research and needed to look through thousands of videos for spoken words. While I did figure out a way to do it using youtube-dl and text scanning utilities, it was a clunky process and I couldn’t instantly jump to the exact spot in videos to preview video without some shenanigans.

“This is stupid, someone must have made a slick front end for this…” and well, I couldn’t find one, so here we are. As for the name, well, check out my other free utility Toolfish!

The timeline export options were actually added for a friend, but that’s pretty handy too.

Info & Issues

Audio sync problem after importing the EDL timeline into DaVinci Resolve? I think this is a Resolve bug when importing something that has clips with multiple internal timings. No problem – I created a script to fix it, check the ScriptsForDaVinciResolve sub directory. The readme.txt there explains how to copy into %PROGRAMDATA%\Blackmagic Design\DaVinci Resolve\Fusion\Scripts\Comp so you can run Workspace->Scripts->Comp->FixTimelineSync in resolve.

Is there a way to automatically add date, video counter, video name etc on top of the videos in the exported timeline? Yes, I’ve done it with DaVinci Resolve scripts, hoping to do a tutorial on that later as it’s kind of tricky. The .json metadata we export with each video is useful for this.

It seems to stop after downloading around 4500 subtitles? This seems to be a youtube limitation. One trick is to download again in reverse order, so 9,000+ from a single channel is possible. I think with some changes to optimize youtube-dl I could have it “continue” pulling data in a much smarter way but I haven’t been bothered enough to try yet. (youtube-dl’s current date restriction options just don’t work right for subtitles, it still checks every video in order)

I’m getting “This browser or app may not be secure.” when I try to login to my Google/Youtube account in the preview window?! Yeah, I started getting that recently too. Luckily it has nothing to do with the actual text/video extraction process, but clip previewing tends to show google ads if not logged on. I think you can fix this by enabling “Less secure apps” but I didn’t actually try it.

OSX/Linux support? Cross-compiling is a problem due to using WebView2 for now, so I guess that’s out. On a side note, in theory this does support Windows-10 ARM based devices too but I don’t have one to test with.

Privacy – On startup, Subfish visits <version #> in its little web-browser thingie which will give a download link if a new version is out. That’s the only communication done with our servers.

Legal – Only use this product if it’s legal to do what you’re doing where you’re doing it. That’s probably good advice for life in general.

To report a bug or feature request – Post here, twitter, or drop me an email

Using computer vision to enforce sleeping pose with the Jetson Nano and OpenCV

(special thanks to Eon-kun for helping demonstrate what it looks like)

Imagine you HAVE to sleep on your back for some reason and possibly restrict neck movement during the night. Here are some options:

  • Tennis balls strapped to sides
  • Placing an iphone on chest/pocket and using an app (SomnoPose) that monitors position with the accelerometer and beeps when it detects angle changes. (it works ok but the app is old and has some quirks like not running in the background)

The above methods are missing something though – they don’t detect head rotation. If you look at the wall instead of the ceiling while not moving your body, they don’t know.

The tiny $99 Jetson Nano computer combined with a low light USB camera can solve this problem in under 100 lines of Python code! (A Raspberry Pi would work too)

The open source software OpenCV is used to process the camera images. When the program can’t detect a face, it plays an annoying sound until it does, forcing you to wake up and move back into the correct position so you can enjoy sweet silence.

If you’re interested in playing with stuff like this, I recommend Paul McWhorter’s “AI on the Jetson Nano” tutorial series, the code below can be used with that.

I’m really excited about the potential of DIY electronics projects like this to help with real life solutions.

The Pi and Nano have GPIO pins so instead of playing a sound, we could just as easily activate a motor, turn a power switch on, whatever.
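For example, swapping the warning sound for a GPIO pin is only a few lines. This sketch assumes a library with the RPi.GPIO-style API (Jetson.GPIO mirrors it); the pin number and wiring are hypothetical, and the gpio module is passed in so you can use whichever board’s library you have:

```python
# Sketch: drive a GPIO pin (motor, relay, whatever) instead of playing a sound.
# 'gpio' is an RPi.GPIO-compatible module; pin 7 is a made-up choice.
def make_warning(gpio, pin=7):
    gpio.setup(pin, gpio.OUT)

    def warn(on):
        # Set the pin high while the warning is active, low otherwise
        gpio.output(pin, gpio.HIGH if on else gpio.LOW)

    return warn
```

In the main loop below, you’d call `warn(True)` where the sound currently plays.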

Of course instead of just tracking faces, it’s also possible to look for eyes, colors, shapes or cars, anything really.

The Python code listing for Forcing you to sleep on your back

import cv2
import time
import os

timeAllowedWithNoFaceDetectBeforeWarning = 22
timeBetweenWarningsSeconds = 10

timeOfLastFaceDetect = time.time()
timeSinceLastDetect = time.time()
timeOfLastWarning = time.time()
warningCount = 0

def PlayWarningIfNeeded():
    global timeOfLastWarning
    global warningCount

    if time.time() - timeOfLastWarning > timeBetweenWarningsSeconds:
        print("WARNING!")
        warningCount = warningCount + 1
        # Play the warning .wav through gstreamer (works out of the box on the Nano)
        os.system("gst-launch-1.0 -v filesrc location=/home/nano/win.wav ! wavparse ! audioconvert ! audioresample ! pulsesink")
        timeOfLastWarning = time.time()

bCloseProgram = False

cam = cv2.VideoCapture("/dev/video0")
cam.set(cv2.CAP_PROP_FPS, 10)
dispH = int(cam.get(cv2.CAP_PROP_FRAME_HEIGHT))
fnt = cv2.FONT_HERSHEY_SIMPLEX

cv2.namedWindow('nanoCam')
cv2.moveWindow('nanoCam', 0, 0)

face_cascade = cv2.CascadeClassifier('/home/nano/Desktop/Python/haarcascades/haarcascade_frontalface_default.xml')

while True:

    ret, frame = cam.read()
    frame = cv2.flip(frame, 0)  # vertical flip

    # rotate 90 degrees if your camera is mounted sideways
    # frame = cv2.rotate(frame, cv2.ROTATE_90_CLOCKWISE)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)

    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 4)
        timeOfLastFaceDetect = time.time()

    timeSinceLastDetect = time.time() - timeOfLastFaceDetect
    if timeSinceLastDetect > timeAllowedWithNoFaceDetectBeforeWarning:
        PlayWarningIfNeeded()

    text = "Seconds since face: {:.1f} ".format(timeSinceLastDetect)
    frame = cv2.putText(frame, text, (10, dispH - 65), fnt, 1.5, (0, 0, 255), 2)

    text = "Warnings: {} ".format(warningCount)
    frame = cv2.putText(frame, text, (10, dispH - 120), fnt, 1.5, (0, 255, 255), 2)

    cv2.imshow('nanoCam', frame)

    if cv2.waitKey(10) == ord('q') or cv2.getWindowProperty('nanoCam', 1) < 1:
        bCloseProgram = True
    if bCloseProgram:
        break

cam.release()
cv2.destroyAllWindows()