The data it creates is in JSON format and looks like this:
"title": "Borderlands: The Handsome Collection",
}, ... and so on
If I cross-referenced it with trophy data it would be more accurate. If anyone knows a better way to get at this data (one that doesn’t break if the user isn’t using English, for example…) please let me know.
No plans to add anything else to this but wanted to throw up a post about it so anybody else working on something similar could find the source if needed.
“Oppai, Oppai, Oppai!” My younger kid was chanting this word over and over into his headset mic while playing Fortnite on the Switch.
Why is he saying this? I flip the monitor speaker on to find out. Some laughing dumbasses were telling him that if he said the word (which means “breasts” in Japanese) enough times, they would buy him a skin. (Fortnite skins are giftable right now… this gives griefers a lot of ammo to screw with naive children)
Note: When I say “monitor speaker” I just mean a tiny speaker to monitor the conversation, nothing to do with a computer monitor or TV.
How I deal with this (and worse)
Instead of freaking out and forbidding voice chat completely, I prefer to use it as a teachable moment to prepare him for his inevitable technological future.
So how do you know when someone is lying to you? How do you know what’s true and false?
If I can teach him to be skeptical, maybe he won’t get suckered later in life. <Looks out window, sees homeopathy, magical wallets, magnetic sports bracelets… sigh>
What behavior crosses the line? How does muting and banning work? Friend only chat? What about strangers in a friend’s squad?
In the all too near future he’ll be flying solo, but for now I can ride copilot when issues come up and monitor the conversation with the right cables and a tiny speaker. Works better than my old way: grabbing the headphones and interrupting the game.
Want this? Here’s what to buy
Here’s what I used:
Note: I linked to the items on amazon/ebay as examples, this isn’t an endorsement of any particular seller or product, just to show what the correct cables look like… until the links break, anyway.
3.5mm TRRS Male to Dual TRRS Female Stereo 4-Pole Splitter Cable (Amazon) <– This is the trickiest part to buy because you can’t really see from a picture if the female ends are wired correctly (you want a full TRRS jack inside and not a splitter to a TRS or something)
3.5mm TRRS to 2 TRS Audio Headset Mic Y Splitter Cable Adapter (Ebay)
Mini 3.5mm Hamburger Speaker USB Rechargeable (Ebay)
More details with crappy diagram photoshop
A normal headphone cable (TRS) carries three conductors: left channel, right channel, and an audio ground. A TRRS cable adds one more wire so it can hold four – the extra conductor is used for the mic. (The mic shares the same audio ground)
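For reference, here’s how the conductors map out, assuming the common CTIA arrangement that the Switch and most modern headsets use:

TRS (3 poles): tip = left, ring = right, sleeve = ground
TRRS (4 poles): tip = left, ring 1 = right, ring 2 = ground, sleeve = mic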
If you plugged a TRS cable directly into a TRRS jack, you’d accidentally short the mic and ground together, which would disable the mic input. That’s why we need the TRRS to headphone/mic splitter, so we can just plug the speaker into the headphone part.
The mic end of the splitter is unused. See the pic at the top of this post to see how to plug stuff in, it’s pretty simple.
If the mic is picking up sounds from the speaker, move it farther away or turn the speaker volume down.
Having written a system-wide live content scanner for an MMO I can tell you that there is no way you could overestimate how shitty (some) people can be online, so please keep an eye on your kids and don’t trust any system to do it for you.
Is there any better way to do this? Maybe there is a tiny speaker out there that already has a TRRS passthrough built in? That’d be cleaner.
If using a dock, the TV speaker might be an option but the Switch has no option to “Play audio via headphones and HDMI simultaneously”, so it always mutes HDMI when headphones are plugged in.
Despite the risks and warnings above, I’m ok with the current reality of online gaming as an important school-wide social activity. Earning “screen time” provides motivation for my son to finish his homework and hey, at least he’s getting exercise.
UPDATE October 12th, 2021
For practice, I designed a simple PCB adapter that does the same thing as above. If electronics is your bag and you wanted to make a bundle of them to sell or hand out, full design is here.
In a hurry? Download link for the program is here. Edit config_template.txt for directions on setup, it’s kind of tricky. Only runs on Windows right now. (Github source)
Why I wanted a “translate anything on the screen” button
I’m a retro gaming nut. I love consuming books, blogs, and podcasts about gaming history. The cherry on top is being able to experience the identical game, bit for bit, on original hardware. It’s like time traveling to the 80s.
Living in Japan means it’s quite hard to get my hands on certain things (good luck finding a local Speccy or Apple IIe for sale) but easy and cheap to score retro Japanese games.
Yahoo Auction is kind of the ebay of Japan. There are great deals around if you know how to search for ’em. I get a kick out of going through old random games, I have boxes and boxes of them. It’s a horrible hobby for someone living in a tiny apartment.
Example haul – I got everything in this picture for $25 US! Well, plus another $11 for shipping.
There is one obvious problem, however
It’s all in Japanese. Despite living here over fifteen years, my Japanese reading skills are not great. (don’t judge me!) I messed around with using Google Translate on my phone to help out, but that’s annoying and slow to try to use for games.
Why isn’t there a Google Translate for the PC?!
I tried a couple utilities out there that might have worked for at least emulator content on the desktop, but they all had problems. Font issues, weak OCR, and nothing built to work on an agnostic HDMI signal so I could do live translation while playing on real game consoles.
So I wrote something to do the job called UGT (Universal Game Translator) – you can download it near the bottom of this post if you want to try it.
Snaps a picture from the HDMI signal and sends it to Google to be analyzed for text in any language (a rough sketch of this request follows the list)
Studies the layout and decides which text is dialog and which bits should be translated “line by line”
Overlays the frozen frame and translations over the gameplay HDMI signal
Allows copy/pasting the original language or looking up a kanji by clicking on it
Can translate any language to any language without needing any local data as Google is doing all the work, can handle rendering Japanese, Chinese, Korean, etc (The font I used is this one)
Controlled by hotkeys (desktop mode) or a control pad (capture mode, this is where I’m playing on a real console but have a second PC controller to control the translation stuff)
(Added in later versions) Can read the dialog out loud in either the original or translated language
Can drag a rectangle to only translate a small area (Ctrl-F10 by default)
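For the curious, the OCR step boils down to a single HTTPS POST to Google’s Cloud Vision endpoint. Here’s a rough sketch using libCURL (which UGT also links against) – this isn’t UGT’s actual code, and the Base64Encode helper is assumed, not shown:

#include <curl/curl.h>
#include <string>

std::string Base64Encode(const std::string &raw); // assumed helper, not shown

static size_t OnData(char *p, size_t size, size_t n, void *pOut)
{
    ((std::string*)pOut)->append(p, size * n);
    return size * n;
}

// POSTs one captured frame to Cloud Vision and returns the raw JSON reply,
// which includes every detected string plus its bounding box
std::string DetectText(const std::string &imageBytes, const std::string &apiKey)
{
    std::string url = "https://vision.googleapis.com/v1/images:annotate?key=" + apiKey;
    std::string body = "{\"requests\":[{\"image\":{\"content\":\"" + Base64Encode(imageBytes) +
        "\"},\"features\":[{\"type\":\"TEXT_DETECTION\"}]}]}";

    std::string response;
    CURL *pCurl = curl_easy_init();
    curl_slist *pHeaders = curl_slist_append(NULL, "Content-Type: application/json");
    curl_easy_setopt(pCurl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(pCurl, CURLOPT_HTTPHEADER, pHeaders);
    curl_easy_setopt(pCurl, CURLOPT_POSTFIELDS, body.c_str());
    curl_easy_setopt(pCurl, CURLOPT_WRITEFUNCTION, OnData);
    curl_easy_setopt(pCurl, CURLOPT_WRITEDATA, &response);
    curl_easy_perform(pCurl);
    curl_slist_free_all(pHeaders);
    curl_easy_cleanup(pCurl);
    return response;
}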
In the video above, you’ll notice some translated text is white and some is green. Green text means it’s being treated as “dialog” by the weighting system that decides what is/isn’t dialog.
If a section isn’t determined to be dialog, “Line by line” is used. For example, options on a menu shouldn’t be translated all together (Run Attack Use Item), but little pieces separately like “Run”, “Attack”,“Use item” and overlaid exactly over the original positions. If translated as dialog, it would look and read very badly.
Here’s how my physical cables/boxes are set up for “camera mode”. (Not required, desktop mode doesn’t need any of this, but I’ll talk about that later)
Happy with how merging two video signals worked with a Roland V-02HD on the PlayStep project, I used a similar method here too. I’m doing luma keying instead of chroma as I can’t really avoid green here. I modify the captured image slightly so the luma is high enough to not be transparent in the overlay. (of course the non-modified version is sent to Google)
This setup uses the Windows camera interface to pull HDMI video (using Escapi by Jari Komppa) to create screenshots that it sends to Google. I’m using an Elgato Cam Link for the HDMI input.
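Escapi’s API is pleasantly tiny. From memory, grabbing a frame looks roughly like this – treat it as a sketch of the idea, not UGT’s exact capture code:

#include "escapi.h"

// Grab one 1080p ARGB frame from the first capture device (the Cam Link)
SimpleCapParams params;
params.mWidth = 1920;
params.mHeight = 1080;
params.mTargetBuf = new int[1920 * 1080];

if (setupESCAPI() > 0 && initCapture(0, &params))
{
    doCapture(0);                 // request a frame
    while (!isCaptureDone(0)) { } // spin until it lands in mTargetBuf
    // ... encode params.mTargetBuf as PNG/JPG and send it off to Google ...
    deinitCapture(0);
}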
Anyway, for 99.99999999% of people this setup is overkill, as they are probably just using an emulator on the same computer, so I threw in a “desktop mode” that just lets you use hotkeys (default is Ctrl-F12) to translate the active window. It’s just like having Google Translate on your PC.
Here’s desktop mode in action, translating a JRPG being played on a PC Engine/TurboGrafx-16 via emulation. It shows how you can copy/paste the recognized text if you want as well, useful for kanji study or getting text read to you. You can click a kanji in the game to look it up as well. (Update: It can now internally handle getting text read as of V0.60, just click on the text. Shift-Click to alternate between the src/dest language)
Try it yourself
Before you download:
All machine translation is HORRIBLE – this in no way replaces the work of real translators, it’s just (slightly) better than nothing and can stop you from choosing “erase all data” instead of “continue game” or whatever
You need to rename config_template.txt to config.txt and edit it
Specifically, you need to enter your Google Vision API key. This is a hassle but it’s how Google stops people from abusing their service
Google charges money for using their services after you hit a certain limit. I’ve never actually had to pay anything, but be careful.
This is not polished software and should be considered experimental meant for computer savvy users
Privacy warning: Every time you translate, you’re sending the image to Google to analyze. This could also mean a lot of bandwidth is used, depending on how many times you click the translate button. Ctrl-F12 sends the active window only, Ctrl-F11 translates your entire desktop.
I got bad results with older consoles (NES, Sega Master System, SNES, Genesis), especially games that are only hiragana and no kanji. PC Engine, Saturn, Dreamcast, Neo-Geo, Playstation, etc worked better as they have sharper fonts with full kanji usually.
Some game fonts work better than others
The config.txt has a lot of options; each one is documented inside that file (a sample excerpt is sketched after these notes)
I’m hopeful that the OCR and translations will improve on Google’s end over time. The nice thing about this setup is that the app doesn’t need to be updated to take advantage of those improvements, or even additional languages that are later supported
After a translation is being displayed, you can hit ? to show additional options. Also, this is outdated, use the real app to see the latest.
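To give you an idea before you dive in, a trimmed-down config.txt might look something like this. The google_api_key name here is illustrative – config_template.txt documents the real option names – while the other options appear in the version notes below:

google_api_key|PASTE_YOUR_VISION_KEY_HERE
translation_engine|google
input_camera_device_id|0
check_for_update_on_startup|true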
5/8/2019 – V0.50 Beta
* First public release, experimental

5/13/2019 – V0.51 Beta
* Added S to screenshot, better error checking/reporting if the translation API isn’t enabled for the Google API key, minor changes that should offer improved translations

5/30/2019 – V0.53 Beta
* Added input_camera_device_id setting to config.txt for systems with multiple cameras. Moves mouse offscreen for “camera” mode captures

9/5/2019 – V0.54 Beta
* Fixes crash-on-startup problem some people had, adds “audio|none” config.txt command to optionally disable all sound. Added “minimum_brightness_for_lumakey” setting to config.txt in case the default isn’t right

9/15/2019 – V0.60 Beta
* New feature, text to speech! You’ll need to enable Google’s Text To Speech API. Fixed a crash bug, added some in-app persistent settings, gamepad can now move around the cursor and click things. Controls changed a bit. Added automatic reading of detected dialog, can choose to read src or dest langs, can hide text overlays if you want now. A few new options in the config.txt. Switched to FMOD audio, SDL_Mixer has buggy mp3 playback which was causing me some grief. Changed the translate button sound to something more soothing.

11/22/2019 – V0.61 Beta
* Replaced audio system with Audiere to prepare for putting it on Git, added more logging and error checking with libCURL. I’ve put the complete source on Github, feel free to bugfix or add some features if you’re a programmer!
4/12/2020 – V0.62 Beta
* FEATURE: Added draggable window option (changed hotkey to Ctrl-F10 in the default config.txt; if upgrading, it will be Shift-Ctrl-F11 though)
* Removed an include for wiringpi (it isn’t used)
* Added “FMOD Release” MSVC configuration profile; this enables FMOD as well. It will be the default now as I noticed some clicks/pops from Audiere sometimes when playing text to speech generated by Google
* Added status at the bottom that shows what is happening with uploading/downloading; in situations where “nothing is happening” these status updates will let you know what it’s doing, useful for slow internet or whatever
* Can now cancel spoken audio by clicking it again
* Added a font so rendering Hindi is supported (hotkey is 0)
* Initiating a translation when a translate dialog is already on the screen now just toggles it off instead of doing weird things
* Added “audio_device” option to config.txt; if the text matches an audio device, that will be used instead of the default
* Joystick deadzone increased from 0.15 to 0.20, needed because my 360 sticks are just bad
* BUGFIX: Word wrap no longer sometimes causes spaces to be missing between words
4/23/2020 – V0.63 Beta
* Added support for rendering Punjabi (note: the only open source font I could find doesn’t have English letters in it… hope to find something better later)
* Added support for setting a source language hint in the config.txt. Required to read some non-Latin languages; for example, setting it to “pa” for Punjabi allows that language to be read. The hint language is shown on the startup screen, “auto” means no hint
* Now shows the exact Google error text onscreen (like bad API key or whatever) instead of saying “open error.txt” (error.txt is still written also though)
* Shows “<language code> language not supported for audio” if Google can’t do text to speech on it (Punjabi for example)
* Punjabi as a translation target is now one of the included languages you can cycle through using [ and ] or L and R on a control pad. Note: these languages can be changed/added via the config.txt; the first one set will be the default on startup
* “Press space to continue or ? for help – rtsoft.com” changed to “<Space or ?>” and doesn’t show at all for extremely small drag rects, so it doesn’t overlay the translation
* Shows “Nothing found” if there is zero text to translate, better than looking like it crashed or something
12/2/2020 – V0.64 Beta
* Added support for a gamepad-based hotkey (by default you need to push in/click the right joystick) to initiate a scan/translate. This works in computer mode. By default it will scan the active window, but if you’ve done a custom drag rect size previously, it will use that size instead. (useful for grabbing just the dialog area of a game)
Released Universal Game Translator V0.64: now by default the right joystick button will trigger translations of the active window while nicely sharing the gamepad, don't have to touch the keyboard at all. Here's me playing a PS1 game via Retroarch. https://t.co/0nIdLVEqT9 pic.twitter.com/dCoolo8AoP
Note: This is useful because you can use just a controller to translate an Elgato game capture or twitch stream of a real console, so you can sort of play via two controllers.
* Now using XInput for input, which allows UGT to access the gamepad even if it’s being used by a game. (with a PC game/emulator you can play and do translations without touching the keyboard using a single controller!)
BTW, it’s doing some weird things under the hood to make the overlay work; if you have any problems, set up the game/app in a windowed mode and it should be ok.
Note: You can change which button does the translation by changing the gamepad_button_to_scan_active_window setting in the config.txt. The options are listed right above where it is. (default is “right joystick button”)
Note: An XInput compatible controller is required for the global button controls to work. An Xbox 360, Xbox One, or Xbox Series X/S controller is recommended.
2/5/2021 – V0.65 Beta
* Gamepad hotkey that initiates a translation can now also dismiss it (by default, right stick button)
* New feature: Pressing E will export the current translation to html and open it with your default browser. Example:
Added a "Push E to export to HTML" feature in UGT 0.65, helps me with Japanese study because now I can use the excellent chrome extension rikaichan/rikaikun on game dialog. Here's a crappily made video using FFXIII (PS3) as an example pic.twitter.com/e4gMNcTMef
* Tiny quality of life change for people using the keyboard to initiate translations: The capture active window hotkey (Ctrl-F12 is default) now works like the gamepad button capture, if a custom screen capture area was set previously (Ctrl-F10) it will default to using that instead of the entire active window.
3/25/2021 – V0.67 Beta
Fixed to properly handle Windows scaling, so if you’re using 150% DPI or whatever it shouldn’t cut the window off or act weird anymore. A side effect of this fix (at least how I implemented it) is that it probably requires Windows 10 (or later) to run UGT now
4/8/2021 – V0.68 Beta
Fixed problem where unnecessary spaces were inserted into the original OCR’ed Asian text, very noticeable when using cut & paste text features (Thanks coltonoscopy)
Fixed issue where it failed to cut and paste text if no translation took place (for example, English to English)
5/20/2021 – V0.69 Beta
Locked to max 100 fps, applied Meerkov’s FPS limiting bugfix (thanks!)
Changed startup sound to be shorter, added GUI checkbox to “Disable capture sound” (Meerkov)
Added “google_text_detection_command” config setting to config.txt (Meerkov)
Fixed bug in text layout engine that would cause freezes/delays in some situations. Could also create duplicate text boxes (Meerkov)
Added more replacement tags for html export: END_X, END_Y, WIDTH, HEIGHT, TRANSLATED_TEXT. Added htmlexport/readme.txt with the full list and descriptions of what they do
Internally some work was done to better support customized settings for a specific language (like font, sizing hints), it will be easy to move to an external fonts.txt file or something later to allow people to add their own fonts/etc
FEATURE: Added support for using the DeepL translation API instead of Google’s. config.txt (well, config_template.txt) modified to add new options deepl_api_key and translation_engine. The .txt file explains how to set it up. It seems to offer better translations; similar to Google, you have to sign up on their website (using a credit card) to get an API key, which allows 500k characters a month for free. (that doesn’t seem like much!) Note: It seems a little slower than Google’s stuff.
You can press T to toggle translation engines on the fly, useful for comparing output
7/27/2021 – V0.70 Beta
Big changes internally with how linefeeds are handled, “dialog mode” translations should work better (Meerkov)
FEATURE: Can flexibly log translations to translation_log.txt (see config.txt’s log_capture_text_to_file option to enable)
FEATURE: Can flexibly add translations to clipboard (see config.txt’s place_capture_text_on_clipboard option to enable)
FEATURE: Checks for a newer version on startup (see config.txt’s check_for_update_on_startup option if you’d like to disable this)
Bugfix: Fixed issue where repeating text could appear (Meerkov)
Guessing the source language of a translated block might be more accurate now (Meerkov)
FEATURE: Fonts are now configurable, can set a custom font for default or any specific language (edit fonts.txt file for more info)
8/4/2021 – V0.71 Beta
D and L hotkeys now toggle force dialog and force line-by-line mode until the app is quit, instead of only for that one translation (useful for RPG dialog!)
Exposed “auto-glue” settings in config.txt, helpful to make sure all the dialog is a single block. (auto_glue_vertical_tolerance and auto_glue_horizontal_tolerance)
Sidenote: I know this page is just a giant mess now, sorry!
– NOTE FOR UPGRADING: It’s recommended to start with the config_template.txt again; just copy over your Google API key and rename it config.txt again.
My first tests used Tesseract to do the OCR locally, but without additional dataset training it appeared to not work so hot out of the box compared to results from Google’s Cloud Vision. (They use a modified Tesseract? Not sure) It might be a nice option for those who want to cut down on bandwidth usage or reliance on Google. Although the translations themselves would still be an issue…
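If someone wanted to experiment with going local anyway, Tesseract’s C++ API makes the OCR half pretty simple. A minimal sketch (you’d need jpn.traineddata installed, and the translation half remains unsolved):

#include <tesseract/baseapi.h>
#include <leptonica/allheaders.h>
#include <string>

// OCR a screenshot locally with Tesseract instead of Cloud Vision
std::string OcrLocally(const char *pImageFile)
{
    tesseract::TessBaseAPI api;
    if (api.Init(NULL, "jpn") != 0) return ""; // requires jpn.traineddata
    Pix *pImage = pixRead(pImageFile);
    api.SetImage(pImage);
    char *pText = api.GetUTF8Text();
    std::string result = pText ? pText : "";
    delete [] pText;
    pixDestroy(&pImage);
    api.End();
    return result;
}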
I like the idea of old untranslated games being playable in any language, in fact, I went looking for famous non-Japanese games that have never had an English translation and really had a hard time finding any, especially on console. If anyone knows of any I could test with, please let me know.
Also, even though my needs focus on Japanese->English, keep in mind this also works to translate English (or 36 other languages that Google supports OCR with) to over 100 target languages.
Test showing English being translated to many other languages in an awesome game called Growtopia
Sure, there are ways to get exercise while gaming. Virtual reality and music games like Dance Dance Revolution come to mind.
But that’s all worthless when your kid just wants to play Fortnite.
Behold, the PlayStep!
This thing forces him to work up a sweat. This post details what methods I used and issues I had making it. (Github source code for the program that runs on the Pi here for anybody who wants to make one)
Building a screen blanker connected to exercise isn’t a new idea (see the end of this post for related links I found) but my version does have some novel features:
Dynamically modifies the video and audio of the game’s HDMI signal to do things like partially obscure the screen in random ways
Uses an energy bank so you can save up game time. This means you can madly pedal in the lobby and still sit in a chair during the critical parts of Fortnite
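The energy-bank logic is simple enough to sketch in a few hypothetical lines (names invented, not the actual PlayStep source; SetHdmiEnabled shows up again later in the post):

#include <algorithm>

void SetHdmiEnabled(bool bOn); // hypothetical helper, sketched later

float g_energySeconds = 0; // the "bank"
const float SECONDS_EARNED_PER_STROKE = 2.0f; // tune to your kid
const float MAX_BANK_SECONDS = 600.0f;        // cap hoarding at ten minutes

void OnFullPedalStroke() // fired when both sensors trip in sequence
{
    g_energySeconds = std::min(g_energySeconds + SECONDS_EARNED_PER_STROKE, MAX_BANK_SECONDS);
}

void Update(float deltaSeconds)
{
    g_energySeconds -= deltaSeconds;      // sitting still drains the bank
    SetHdmiEnabled(g_energySeconds > 0);  // blank or restore the game signal
}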
I first built a cheap version (~$120 in parts). It just blanks the screen when you’re out of energy, and uses an LCD screen to show energy left.
I then did a better but more expensive way (~$700 in parts) but it’s a lot cooler.
The expensive version with HDMI in/out, the “enclosure” is a plastic basket thing from the dollar store
Things both ways have in common:
Use a Raspberry Pi 3B+ (a $40 computer with hardware GLES acceleration) with the RetroPie distro – I start with it because its mouse/keyboard/GLES/SDL works out of the box with Proton SDK, where normal Raspbian requires tweaking/compiling some things
Use Proton SDK for the app base (allows me to design/test on Windows, handles abstraction for many platforms so I can write once but run everywhere)
Use hall effect sensors to detect the pedal-down position on each pedal via the Pi’s GPIO (see the sketch after this list) – this way a kid can’t cheat, he’s forced to move the full range of the stepper
The sensors are placed on a stepper exerciser. I used a USB connector for the wiring so I could unplug/replace it later if I wanted to setup a different exercise machine, like if I ever got a stationary bike.
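Reading the sensors is just polling two GPIO pins with WiringPi (which the real build also uses). Pin numbers here are made up; treat this as a sketch of the idea, not the shipped code:

#include <wiringPi.h>

void OnFullPedalStroke(); // from the energy-bank sketch above

const int PIN_LEFT = 17, PIN_RIGHT = 27; // hypothetical BCM pin numbers

void SetupSensors()
{
    wiringPiSetupGpio();
    pinMode(PIN_LEFT, INPUT);
    pinMode(PIN_RIGHT, INPUT);
    pullUpDnControl(PIN_LEFT, PUD_UP);  // hall sensor pulls the pin low
    pullUpDnControl(PIN_RIGHT, PUD_UP); // when the pedal magnet arrives
}

void PollSensors()
{
    static int lastPedal = -1; // which pedal fired last
    if (digitalRead(PIN_LEFT) == LOW && lastPedal != PIN_LEFT)
    {
        if (lastPedal == PIN_RIGHT) OnFullPedalStroke(); // full range completed
        lastPedal = PIN_LEFT;
    }
    else if (digitalRead(PIN_RIGHT) == LOW && lastPedal != PIN_RIGHT)
    {
        if (lastPedal == PIN_LEFT) OnFullPedalStroke();
        lastPedal = PIN_RIGHT;
    }
}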
Yes, I’m about to duct tape an electrical taped sensor to a pencil that has been zip-tied in place. What? I never said I was pro
A note on using USB cables for wires and my idiocy
Each hall effect sensor requires three wires. We have two sensors. So we need to run six wires from the Pi GPIO pins? WRONG! We only need four because the power and ground can be shared between them.
So I thought hey, I’ll use USB cables and connectors laying around as they have four wires in them. (until we get to USB 3+ cables, but ignore that)
Then I thought, if I could find a simple USB Y splitter, it would be easier to share the power/ground with the two sensors. (I’m not actually using this as a USB connection, it’s just so I can use the wire and handy plugs)
Wow, I found this for cheap on Amazon:
Perfect! A lowly USB splitter that I’m sure just has no fancy electronics hidden inside
So I partially wired it up, but when testing I found that the middle pins had no continuity. Can you guess why?
WHAT THE HELL IS THIS INSIDE THE CABLE?!
It’s got a hub or something hidden in the connector. I never plugged it into an actual PC or I might have noticed. No wonder it didn’t work. I removed the electronics part (it was a horror, I shouldn’t be allowed near soldering irons) and it worked as expected. Moral of the story is, I’m dumb, and don’t trust USB splitters to just split the wires.
The cheap way (just screen blanking with LCD panel)
My “cheap” way ignores rendering anything graphical (It doesn’t output any HDMI itself) and just shows a single “energy count” number on an LCD screen. When it gets low, the game’s HDMI signal will be completely shut off until it goes positive again. In the video above I’m using little buttons to test with instead of the stepper.
To help the user notice the screen is about to shut off it makes a beeping noise as the counter nears zero.
I suggest never testing this at an airport, can’t stress that enough really.
So how can a Raspberry Pi turn on/off the game’s HDMI signal?
A splitter with no USB power = a dead signal
This is hacky but it works – I took an old 1X2 HDMI splitter and powered it from one of the Pi’s USB ports. (lots of electronics these days use a USB plug for power)
I only use one of the outputs on the splitter as I don’t really need any splitting done.
It’s possible to kill the power on a specific Pi USB port using a utility called uhubctl.
So when the player is out of “energy”, I kill the USB port powering the HDMI splitter by having my C++ code run a system command of:
./uhubctl -a off -p 2
And because the HDMI splitter is now unpowered, the signal dies killing the game screen.
After turning the USB port back on (replacing “off” with “on”) it will power up and start processing the HDMI signal again. Originally I was using the Pi to turn on/off an entire AC outlet but that seemed like overkill – I was thinking maybe turning off an entire TV or something, but meh.
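Wrapped up, the whole on/off mechanism is one system call. This is the idea, simplified – the port number depends on which USB port feeds your splitter:

#include <cstdlib>

// Cut or restore power to USB port 2, which feeds the HDMI splitter
void SetHdmiEnabled(bool bOn)
{
    if (bOn)
        system("./uhubctl -a on -p 2");
    else
        system("./uhubctl -a off -p 2");
}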
So the big downside of this method is it takes up to 5 seconds for the HDMI splitter to turn back on and your TV to recognize the signal again. It works but… not optimal. Also, in my case I don’t really have a good place to put the LCD screen or speaker for the beeping. (might make more sense on a stationary bike instead of a stepper)
Alternate way to disable the HDMI signal : Instead of this no-wiring hack, maybe instead run it through an HDMI cable but put one of the pins into a relay to turn that pin on/off? Might be the same effect but cheaper and simpler.. although, which pin?!
The expensive but better way (offers more options with images and audio)
There isn’t enough drama in simply turning the HDMI signal on/off – wouldn’t it be better if holes started spawning randomly over your actual gameplay and you had to pedal to remove them as your screen became increasingly obscured?! There are a million options, really.
The Raspberry Pi can generate the graphics (thanks GLES) and audio, but we need a way to overlay its HDMI output over the game’s HDMI signal with no noticeable latency at 60 fps.
This is known as a chroma key effect. (Side note: I once bought a $5,000 video mixer in the 90s so I could do live-effects like this, a WJ-MX 50. Just saw one on ebay for $100, damn it’s big)
The mixer I went with is the Roland V-02HD (the same one mentioned back in the UGT section). It’s pricey, but it works perfectly. It has the following features of interest:
Remembers all settings when powered on, including chroma key mode and color/sensitivity
Can disable auto-detection, so inputs 1 and 2 stay assigned even if an input signal drops
Can disable all buttons/levers on it so accidental changes won’t happen (we don’t need them active, it’s just a black box to us)
It’s pretty small for a video switcher
Mixes audio into the HDMI signal from both inputs
No noticeable latency
Although I didn’t need or use it, it’s worth noting that it can show up as a USB MIDI device and be controlled via MIDI signals. That’s pretty cool – assuming the Pi could work with it, you could do transitions between inputs or enable/disable effects.
With no color keying, this is what the raw Pi video out looks like
The software to control things uses Proton SDK with its SDL2 backend and WiringPi for the GPIO to read from the sensors. It’s modified from the RTBareBones example.
It uses a config.txt file to adjust a couple of things.
To allow the Pi to correctly output 1080p HDMI even if the switcher hasn’t booted up yet, I edited /boot/config.txt to force the HDMI mode. To remove the unnecessary border, I also disabled overscan.
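If you’re recreating this, these are the standard /boot/config.txt options that accomplish both:

hdmi_force_hotplug=1   # output HDMI even if no display is detected yet
hdmi_group=1           # CEA mode table
hdmi_mode=16           # 1080p @ 60Hz
disable_overscan=1     # removes the black border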
Might be fun to simply design Pi powered pedal games that use the stepper as a controller. You could then output straight to a TV or TFT screen without worrying about the spendy chroma-keying solution.
I mean, sure, my kid would refuse to play it, but it could be a funny thing to show at a meet-up or something.
Related things to check out
Cycflix: Exercise Powered Entertainment – Uses a laptop to pause netflix if you don’t pedal fast enough. He connected an arduino directly to the existing stationary bike electronics to measure pedaling, smart.
No TV unless you exercise! – Arduino mounted on a stationary bike cuts RCA signal via a relay if you don’t pedal enough. Uses a black/white detector for movement rather than hall effect sensors.
TV Pedaler – A commercial product that blanks screen if you don’t pedal enough that is still being sold? The website and product seem really old (no HDMI support) but they accept Paypal and the creator posted here a few years ago about his 1999 patent and warned about “copying”. Hrmph. His patent covers a bunch of random ideas that his machine doesn’t use at all. Patents like this are dumb, good thing it says “Application status is Expired – Fee Related” I guess.
The EnterTRAINER – This defunct commercial device is basically a TV remote control with a heart monitor you strap to your chest. Controls volume and TV power if your heart rate goes too low. Its hilarious infomercial was posted in one of the reviews.
The 123GoTV KidExerciser – Ancient commercial product that lets you use your own bike in the house to blank the TV if not pedalled fast enough. Company seems gone now.
It was the summer of 1983 at Jeff Mccall’s slumber party when I saw my first game console.
Crowded around the small TV we gawked at the thing – an Atari VCS.
The seven of us took turns. Passing the joystick around like a sacred relic we navigated Pitfall Harry over hazardous lakes, crocodiles and scorpions.
One by one the other kids fell asleep. Having no need of such mortal frivolity, I played Pitfall all night!
I fainted in the street the next day due to sleep deprivation. Worth it.
It’s kind of mind-blowing that games that originally sold for over $30 ($70+ in 2018 money) can now be completely stored in a QR code on a small piece of paper.
As a poignant visual metaphor for showing my kids how much technology has changed, I decided to create a Raspberry Pi based Atari that accepts “paper carts” of actual Atari 2600 games.
The requirements for my “PaperCart” Atari VCS:
Must use the real QR code format, no cheating by tweaking the format into something a standard QR reader couldn’t read
100% of the game data must actually be read from the QR code, no game roms can be stored in the console, no cheating by just doing a look-up or something
Runs on a Raspberry Pi + Picamera with all open source software (well, except the game roms…)
Can convert rom files to .html QR codes on the command line – we sort of need this, or we’ll have nothing to print and read later
Easy enough to use that a kid can insert and remove the “paper carts” and the games will start and stop like you would expect a console to do
Standard HDMI out for the video and audio, USB controller to play
All about QR codes
The QR in QR Code stands for Quick Response. It’s a kind of 2D barcode that was invented by a Japanese company named Denso Wave in 1994. They put it into the public domain right from the get-go, so it’s used in a lot of places in a lot of ways.
QR codes have a secret power – they use something called Reed-Solomon error correction. It has the amazing ability to fill in missing parts using extra parity data. The more parity data, the more missing data can be reconstructed. Not certain parts, ANY OF THE PARTS. I know, right?
Reed-Solomon is also used in CDs, DVDs, and Blu-rays – that’s why a scratched disc can still work.
Remember those .par files on Usenet you’d use when you were downloading a bunch of stuff in chunks? Yep, parchives were based on Reed-Solomon.
I hid a fun Atari fact in this code.
I’ve encoded some text in the above QR code with error correction set to Level H (High), which means up to 30% can be missing and you can STILL read it!
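To put actual numbers on that: the smallest QR code (version 1) holds 26 codewords total, and at Level H only 9 of them are data – the other 17 are parity, enough for Reed-Solomon to repair any 8 damaged codewords. 8 out of 26 is right around that 30% figure, and it doesn’t matter which 8.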
Go ahead, block some of it with your fingers, put your phone in camera mode and point it at the QR code above. Does it work? That’s the Reed-Solomon stuff kicking in.
QR codes automatically jump to larger sizes to encode more data – from version 1 to version 40.
Can you find your way out of this maze? Does your brain hurt yet? Hope no one took that seriously and actually tried.
Above is a version 40, the most dense version. My iPhone is able to read this one right off the screen too. If you have problems, you can try zooming into the page a bit maybe.
This is the first 2900 characters of Alice In Wonderland. We can store a max of 2,953 full bytes. A byte is 8 “yes or no” bits. With that, you can store a number between 0 and 255.
Because text doesn’t need a whole byte, there are smarter ways to store it which would allow us to pack in much more than we did here – but let’s ignore that as we’re only interested in binary data.
If I show the QR code too clearly, I might be enabling rom piracy and get in trouble. Weird, right?
This game (Stampede) has 2,048 bytes (2K) of rom data so it easily fits inside a single QR code.
Other Activision classics like Fishing Derby and Freeway are also 2K games, but Pitfall! is a 4K game. Using gzip compression saves us nearly 20%, but it’s still a bit too big to fit in a single QR code. To work around this I’ve added a “Side B” to the other side of the Pitfall! card. Cart. Whatever it is.
My paper cart format stores some metadata so the reader can know how many QR codes are needed for the complete game, as well as if the data is for the same game or not by storing a rom hash in each piece.
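Something like this little header is all it takes – a hypothetical illustration on my part, the real format lives in the RTPaperCart source:

#include <stdint.h>

struct PaperCartPieceHeader
{
    uint8_t  pieceIndex; // which QR code this is (side B of Pitfall! = 2)
    uint8_t  pieceCount; // how many QR codes hold the complete game
    uint32_t romHash;    // identical in every piece, so side A of one game
                         // can't be mixed with side B of another
};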
Emulating a 2600 on a Raspberry Pi 3
I started with the latest RetroPie image and put that on a micro SD card. RetroPie has an Atari 2600 emulator out of the box that can be run directly from the command line like this:
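stella ./pitfall.a26

(That command is my best guess rather than gospel: Stella is the 2600 emulator RetroPie bundles and it takes the ROM path as an argument, but the exact binary path may vary by install.)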
So now I just needed to write some software that will monitor what the Pi camera sees, read QR codes, notice when no QR code is present or it has changed and start/stop the emulator as appropriate.
Writing the software – PaperCart
Naturally I chose Proton SDK as the base as it handles most of what I need already. For the QR reading, I use zbar, and for the webcam reading I use OpenCV and optionally raspicam instead. (no need for OpenCV on the Raspberry Pi linux build) I put it on github here.
The PaperCart binary can also be used from the command line to create QR codes from rom files. (It uses QR-Code-generator)
RTPaperCart.exe -convert myrom.a26
or on the raspberry:
RTPaperCart -convert myrom.a26
It will generate myrom_1_of_1.html, or if multiple QR codes are needed, myrom_1_of_2.html and myrom_2_of_2.html and so on. I opened them in the web browser, cut and pasted them into Photoshop, scaled them down (disable antialiasing!) to the correct size, and printed them.
A quick note about zbar and decoding binary data in a QR code
If you want the binary data to come out exactly as it went in (and who wouldn’t?!), you need to do a little post-processing with iconv on what zbar returns. Here is that magical function for any future googlers:
string FixQRBinaryDataEncoding(string input)
{
    // zbar hands binary data back re-encoded as UTF-8; convert it back to
    // ISO-8859-1 so every byte matches what went into the QR code
    iconv_t cd = iconv_open("ISO-8859-1", "UTF-8");
    if (cd == (iconv_t)-1) return input; // iconv unavailable, give up

    int buffSize = (int)input.length() * 2;
    char *pOutputBuf = new char[buffSize]; //plenty of space
    size_t outbytes = buffSize;
    size_t inbytes = input.length();
    char *pOutPtr = pOutputBuf;
    char *pSrcPtr = &input.at(0);

    do
    {
        // RT_ICONV_CAST smooths over platforms where iconv wants const char**
        if (iconv(cd, RT_ICONV_CAST &pSrcPtr, &inbytes, &pOutPtr, &outbytes) == (size_t)-1)
            break; // conversion error, keep whatever we got
    } while (inbytes > 0 && outbytes > 0);

    int finalOutputByteSize = (int)buffSize - (int)outbytes;
    string temp;
    temp.resize(finalOutputByteSize);
    memcpy((void*)temp.c_str(), pOutputBuf, finalOutputByteSize);
    delete [] pOutputBuf;
    iconv_close(cd);
    return temp;
}
Want to make your own?
It’s pretty straightforward if you’re comfortable with linux and Raspberry Pi stuff. Here are instructions to set it up and download/compile the necessary software.
(If you really wanted, it’s also possible to do this on Windows, more help on setting up Proton on Windows here; you’d also need the OpenCV libs and Visual Studio in that case)
Now we’ll install and compile raspicam, a lib to control the camera with.
Note: It acts a little weird, possibly because it’s using outdated MMAL stuff? In any case, it works “enough” but some fancier modes like binning didn’t seem to do anything.
git clone https://github.com/cedricve/raspicam
cd raspicam; mkdir build; cd build
cmake ..
make
sudo make install
Before we can build RTPaperCart, we’ll need Proton SDK:
git clone https://github.com/SethRobinson/proton.git
Build Proton’s RTPack tool:
Download and build RTPaperCart:
git clone https://github.com/SethRobinson/RTPaperCart.git
Build the media for it. It converts the images to .rttex format.
.rttex is a Proton wrapper for many kinds of images.
cd ~/proton/RTPaperCart/media
sh update_media.sh
Now you’re ready to run the software (note: pkill Emulation Station first if that’s running):
You might see errors if your camera isn’t available. To enable your camera, plug in a USB one or install a Picamera and use “sudo raspi-config” to enable it under “Interfacing options“. (don’t forget to reboot)
If things work, you’ll see what your camera is seeing on your screen and if a QR code is read, the screen should go blank as it shells to run the atari emulator.
You can point your camera at a QR code on the screen and it will probably work, or go the extra mile and print paper versions because they are fun. You don’t have to laminate them like I did, but that does help them feel more sturdy.
I set up mine to automatically run when the Pi boots (and not Emulation Station) so it works very much like a console. (To do that, edit /etc/profile.d/10-retropie.sh)
Running RTPaperCart /? will give a list of optional command line options like this:
-w <capture width> -h <capture height> -fps <capture fps> -backgroundfps <background capture fps> -convert <filename to be converted to a QR code. rtpack and html will be created>
3D Printing the stand
I sort of imagined designing a stylish 2600 themed case with a slot for the paper cart and fully enclosing the Pi, but that would take skill and also require some kind of light inside so the QR could be read.
So instead I did the minimum – a thing to hold the Pi, camera, and easel where you insert the QR code paper.
I used Fusion 360 and designed the stand parametrically so you can fiddle with values to change sizes pretty easily. The modules are designed to snap together, no screws needed.
You can download the Fusion 360 project here, the download button allows you to choose additional formats too.
You need to kind of use common sense and print with supports where it looks like you need them.
So that’s great, but you’d like to store Grand Theft Auto 5 as QR codes because they are so convenient?
Let’s see, 70 gigabytes. No problem. To convert that you’ll just need about 25 million QR codes. You might want to order some extra ink now.
At one code per paper, the stack would reach a mile and a half into the sky.
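For the curious, the back-of-the-envelope math: 70 GB is about 75 billion bytes, and at 2,953 bytes per code that works out to roughly 25 million QR codes. Stack 25 million sheets at ~0.1 mm of paper each and you get around 2.5 km – right about a mile and a half.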
If you got this far, you must also be a connoisseur of gaming history and old hardware. Check these out too then:
They Create Worlds (Podcast on gaming history, no fluff)
Matt Chat (Interviews and info about old games in visual form)
The Retro Hour (Podcast with retro gaming interviews and news)
Atari 5200 Multi-ROM Cartridge Using Raspberry Pi (Cool, something like this might make it possible to mod a real 2600 to read “paper cartridges”. Small world, Dr. Scott M Baker wrote BBS stuff too, including Land Of Devastation as well as Door Driver, a utility that allowed a dumb kid like me to write BBS games)