The data it creates is in JSON format and looks like this:

{
  "title": "Borderlands: The Handsome Collection",
  ...
}, ... and so on
If I cross-referenced it with trophy data it would be more accurate. If anyone knows a better way to get at this data (one that doesn’t break if the user isn’t using English, for example…) please let me know.
No plans to add anything else to this, but I wanted to throw up a post about it so anybody else working on something similar could find the source if needed.
In a hurry? The download link for the program is here. See config_template.txt for directions on setup – it’s kind of tricky. Only runs on Windows right now. (Github source)
Why I wanted a “translate anything on the screen” button
I’m a retro gaming nut. I love consuming books, blogs, and podcasts about gaming history. The cherry on top is being able to experience the exact same game, bit for bit, on original hardware. It’s like time traveling to the 80s.
Living in Japan means it’s quite hard to get my hands on certain things (good luck finding a local Speccy or Apple IIe for sale) but easy and cheap to score retro Japanese games.
Yahoo Auction is kind of the eBay of Japan. There are great deals around if you know how to search for ’em. I get a kick out of going through random old games – I have boxes and boxes of them. It’s a horrible hobby for someone living in a tiny apartment.
Example haul – I got everything in this picture for $25 US! Well, plus another $11 for shipping.
There is one obvious problem, however
It’s all in Japanese. Despite living here for over fifteen years, my Japanese reading skills are not great. (don’t judge me!) I messed around with using Google Translate on my phone to help out, but that’s annoying and slow to use for games.
Why isn’t there a Google Translate for the PC?!
I tried a couple of utilities out there that might have worked for at least emulator content on the desktop, but they all had problems: font issues, weak OCR, and nothing built to work on an agnostic HDMI signal so I could do live translation while playing on real game consoles.
So I wrote something to do the job called UGT (Universal Game Translator) – you can download it near the bottom of this post if you want to try it.
* Snaps a picture from the HDMI signal and sends it to Google to be analyzed for text in any language
* Studies the layout and decides which text is dialog and which bits should be translated “line by line”
* Overlays the frozen frame and translations over the gameplay HDMI signal
* Allows copy/pasting the original language or looking up a kanji by clicking on it
* Can translate any language to any language without needing any local data, as Google is doing all the work; can handle rendering Japanese, Chinese, Korean, etc. (The font I used is this one)
* Controlled by hotkeys (desktop mode) or a control pad (capture mode – this is where I’m playing on a real console but have a second PC controller to control the translation stuff)
* (Added in later versions) Can read the dialog out loud in either the original or translated language
* Can drag a rectangle to only translate a small area (Ctrl-F10 by default)
In the video above, you’ll notice some translated text is white and some is green. The green text means it’s being treated as “dialog” by the weighting system that decides what is/isn’t dialog.
If a section isn’t determined to be dialog, “Line by line” is used. For example, options on a menu shouldn’t be translated all together (Run Attack Use Item), but little pieces separately like “Run”, “Attack”,“Use item” and overlaid exactly over the original positions. If translated as dialog, it would look and read very badly.
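To give a flavor of the idea, the decision can be as simple as scoring each OCR text block by its shape and content. This is a made-up sketch, not UGT’s actual weighting code – the field names, weights, and thresholds are all illustrative:

```cpp
#include <string>

// Hypothetical OCR text block; UGT's real structures will differ.
struct TextBlock
{
    std::string text;          // UTF-8 text from the OCR
    int x, y, width, height;   // bounding box in screen pixels
};

bool LooksLikeDialog(const TextBlock& b, int screenWidth)
{
    float score = 0;
    if (b.width > screenWidth / 2) score += 1;   // wide boxes read like sentences
    if (b.text.size() > 40) score += 1;          // menu entries are usually short
    if (b.text.find("。") != std::string::npos) score += 1; // sentence punctuation
    return score >= 2;  // dialog: translate as one unit; otherwise line by line
}
```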
Here are how my physical cables/boxes are setup for “camera mode”. (Not required, desktop mode doesn’t need any of this, but I’ll talk about that later)
Happy with how merging two video signals worked with a Roland V-02HD on the PlayStep project, I used a similar method here too. I’m doing luma keying instead of chroma as I can’t really avoid green here. I modify the captured image slightly so the luma is high enough to not be transparent in the overlay. (of course the non-modified version is sent to Google)
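The brightness tweak is conceptually tiny – something like this sketch (names are mine; the real knob is the minimum_brightness_for_lumakey config setting mentioned in the changelog below):

```cpp
#include <algorithm>
#include <cstdint>

// Lift every channel so no pixel falls below the luma-key threshold;
// otherwise dark areas of the frozen frame would punch "transparent" holes
// in the overlay when the mixer keys on luma.
void LiftAboveLumaKey(uint8_t* rgb, int pixelCount, uint8_t minBrightness)
{
    for (int i = 0; i < pixelCount * 3; i++)
        rgb[i] = std::max(rgb[i], minBrightness);
}
```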
This setup uses the Windows camera interface to pull HDMI video (using Escapi by Jari Komppa) to create screenshots that it sends to Google. I’m using an Elgato Cam Link for the HDMI input.
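Grabbing a frame through Escapi is pleasantly simple – roughly like this sketch based on Escapi’s public API (the device index and busy-wait polling are simplifications):

```cpp
#include "escapi.h"   // Jari Komppa's Escapi
#include <vector>

// Capture one frame of 32-bit pixels from the given capture device.
std::vector<int> GrabFrame(unsigned int device, int width, int height)
{
    std::vector<int> pixels(width * height);
    SimpleCapParams params = { pixels.data(), width, height };

    if ((unsigned int)setupESCAPI() > device && initCapture(device, &params))
    {
        doCapture(device);                 // request a frame
        while (!isCaptureDone(device)) {}  // real code would sleep here
        deinitCapture(device);
    }
    return pixels;
}
```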
Anyway, for 99.99999999% of people this setup is overkill, as they are probably just using an emulator on the same computer, so I threw in a “desktop mode” that just lets you use hotkeys (default is Ctrl-F12) to translate the active window. It’s just like having Google Translate on your PC.
Here’s desktop mode in action, translating a JRPG being played on a PC Engine/TurboGrafx 16 via emulation. It shows how you can copy/paste the recognized text if you want as well, useful for kanji study or getting text read to you. You can click a kanji in the game to look it up as well. (Update: It can now handle getting text read internally as of V0.60 – just click on the text. Shift-click to alternate between the src/dest language)
Try it yourself
Before you download:
* All machine translation is HORRIBLE – this in no way replaces the work of real translators, it’s just (slightly) better than nothing and can stop you from choosing “erase all data” instead of “continue game” or whatever
* You need to rename config_template.txt to config.txt and edit it
* Specifically, you need to enter your Google Vision API key. This is a hassle, but it’s how Google stops people from abusing their service (see the config sketch after these notes)
* Google charges money for using their services after you hit a certain limit. I’ve never actually had to pay anything, but be careful.
* This is not polished software and should be considered experimental, meant for computer-savvy users
* Privacy warning: Every time you translate, you’re sending the image to Google to analyze. This also could mean a lot of bandwidth is used, depending on how many times you click the translate button. Ctrl-F12 sends the active window only; Ctrl-F11 translates your entire desktop.
* I got bad results with older consoles (NES, Sega Master System, SNES, Genesis), especially games that use only hiragana and no kanji. PC Engine, Saturn, Dreamcast, Neo-Geo, PlayStation, etc. worked better, as they usually have sharper fonts with full kanji.
* Some game fonts work better than others
* The config.txt has a lot of options; each one is documented inside that file
* I’m hopeful that the OCR and translations will improve on Google’s end over time. The nice thing about this setup is the app doesn’t need to be updated to take advantage of those improvements, or even additional languages that are supported later.
While a translation is displayed, you can hit ? to show additional options. (Also, this is outdated – use the real app to see the latest.)
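For reference, config.txt entries are simple key|value lines (the changelog below mentions ones like audio|none). The exact key name for the API key is whatever config_template.txt documents – hypothetically, something along these lines:

```
google_api_key|YOUR_GOOGLE_VISION_KEY_HERE
```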
5/8/2019 – V0.50 Beta – First public release, experimental
5/13/2019 – V0.51 Beta – Added S to screenshot, better error checking/reporting if the translation API isn’t enabled for the Google API key, minor changes that should offer improved translations
5/30/2019 – V0.53 Beta – Added input_camera_device_id setting to config.txt for systems with multiple cameras. Moves the mouse offscreen for “camera” mode captures
9/5/2019 – V0.54 Beta – Fixes a crash-on-startup problem some people had, adds “audio|none” config.txt command to optionally disable all sound. Added “minimum_brightness_for_lumakey” setting to config.txt in case the default isn’t right
9/15/2019 – V0.60 Beta – New feature: text to speech! You’ll need to enable Google’s Text To Speech API. Fixed a crash bug, added some in-app persistent settings, the gamepad can now move the cursor around and click things. Controls changed a bit. Added automatic reading of detected dialog, can choose to read the src or dest language, can now hide text overlays if you want. A few new options in the config.txt. Switched to FMOD audio; SDL_Mixer has buggy mp3 playback which was causing me some grief. Changed the translate button sound to something more soothing.
11/22/2019 – V0.61 Beta – Replaced the audio system with Audiere to prepare for putting it on Git, added more logging and error checking with libCURL. I’ve put the complete source on Github – feel free to bugfix or add some features if you’re a programmer!
4/12/2020 – V0.62 Beta:
* FEATURE: Added draggable window option (changed hotkey to Ctrl-F10 in the default config.txt; if upgrading, it will be Shift-Ctrl-F11 though)
* Removed an include for wiringpi (it isn’t used)
* Added “FMOD Release” MSVC configuration profile; this enables FMOD as well. It will be the default now, as I noticed some clicks/pops from Audiere sometimes when playing text to speech generated by Google
* Added a status line at the bottom that shows what is happening with uploading/downloading; in situations where “nothing is happening” these status updates will let you know what it’s doing, useful for slow internet or whatever
* Can now cancel spoken audio by clicking it again
* Added a font so rendering Hindi is supported (hotkey is 0)
* Initiating a translation when a translate dialog is already on the screen now just toggles it off instead of doing weird things
* Added “audio_device” option to config.txt; if the text matches an audio device, that device will be used instead of the default
* Joystick deadzone increased from 0.15 to 0.20, needed because my 360 sticks are just bad
* BUGFIX: Word wrap no longer sometimes causes spaces to be missing between words
4/23/2020 – V0.63 Beta
* Added support for rendering Punjabi (note: the only open source font I could find doesn’t have English letters in it… hope to find something better later)
* Added support for setting a source language hint in the config.txt. Required to read some non-Latin languages; for example, setting it to “pa” for Punjabi allows that language to be read. The hint language is shown on the startup screen; “auto” means no hint
* Now shows the exact Google error text onscreen (like a bad API key or whatever) instead of saying “open error.txt” (error.txt is still written as well)
* Shows “<language code> language not supported for audio” if Google can’t do text to speech on it (Punjabi, for example)
* Punjabi as a translation target is now one of the included languages you can cycle through using [ and ] or L and R on a control pad. Note: these languages can be changed/added via the config.txt; the first one set will be the default on startup
* “Press space to continue or ? for help – rtsoft.com” changed to “<Space or ?>” and doesn’t show at all for extremely small drag rects, so it doesn’t overlay the translation
* Shows “Nothing found” if there is zero text to translate, better than looking like it crashed or something
12/2/2020 – V0.64 Beta
* Added support for a gamepad-based hotkey (by default, pushing in the right joystick to click it) to initiate a scan/translate. This works in computer mode. By default it will scan the active window, but if you’ve previously set a custom drag rect, it will use that size instead. (useful for grabbing just the dialog area of a game)
Note: This is useful because you can use just a controller to translate an Elgato game capture or Twitch stream of a real console, so you can sort of play via two controllers.
* Now using XInput for input, which allows UGT to access the gamepad even if it’s being used by a game. (with a PC game/emulator you can play and do translations without touching the keyboard, using a single controller!) A polling sketch follows these notes.
BTW, it’s doing some weird things under the hood to make the overlay work; if you have any problems, set up the game/app in windowed mode and it should be ok.
Note: You can change which button does the translation by changing the gamepad_button_to_scan_active_window setting in the config.txt. The options are listed right above where it is. (default is “right joystick button”)
Note: An XInput-compatible controller is required for the global button controls to work. An Xbox 360, Xbox One, or Xbox Series X/S controller is recommended.
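For the curious, polling a gamepad button globally via XInput boils down to something like this – a minimal sketch, not UGT’s actual code:

```cpp
#include <windows.h>
#include <xinput.h>
#pragma comment(lib, "xinput9_1_0.lib")

// Returns true while the right stick is clicked in on the first controller.
// XInput lets us read this even when a game has focus and owns the pad.
bool RightStickClicked()
{
    XINPUT_STATE state = {};
    if (XInputGetState(0, &state) == ERROR_SUCCESS)
        return (state.Gamepad.wButtons & XINPUT_GAMEPAD_RIGHT_THUMB) != 0;
    return false;
}
```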
2/5/2021 – V0.65 Beta
* Gamepad hotkey that initiates a translation can now also dismiss it (by default, the right stick button)
* New feature: Pressing E will export the current translation to HTML and open it with your default browser. Example:
Added a "Push E to export to HTML" feature in UGT 0.65, helps me with Japanese study because now I can use the excellent chrome extension rikaichan/rikaikun on game dialog. Here's a crappily made video using FFXIII (PS3) as an example pic.twitter.com/e4gMNcTMef
* Tiny quality of life change for people using the keyboard to initiate translations: The capture active window hotkey (Ctrl-F12 is default) now works like the gamepad button capture, if a custom screen capture area was set previously (Ctrl-F10) it will default to using that instead of the entire active window.
3/25/2021 – V0.67 Beta
Fixed to properly handle Windows display scaling, so if you’re using 150% DPI or whatever, it shouldn’t cut the window off or act weird anymore. A side effect of this fix (at least how I implemented it) is that it probably requires Windows 10 (or later) to run UGT now, though.
4/8/2021 – V0.68 Beta
Fixed problem where unnecessary spaces were inserted into the original OCR’ed Asian text, very noticeable when using cut & paste text features (Thanks coltonoscopy)
Fixed issue where it failed to cut and paste text if no translation took place (for example, English to English)
5/20/2021 – V0.69 Beta
Locked to max 100 fps, applied Meerkov’s FPS limiting bugfix (thanks!)
Changed startup sound to be shorter, added GUI checkbox to “Disable capture sound” (Meerkov)
Added “google_text_detection_command” config setting to config.txt (Meerkov)
Fixed bug in text layout engine that would cause freezes/delays in some situations. Could also create duplicate text boxes (Meerkov)
Added more replacement tags for html export: END_X, END_Y, WIDTH, HEIGHT, TRANSLATED_TEXT. Added htmlexport/readme.txt with the full list and descriptions of what they do
Internally, some work was done to better support customized settings for a specific language (like font, sizing hints); it will be easy to move to an external fonts.txt file or something later to allow people to add their own fonts/etc
FEATURE: Added support for using the DeepL translation API instead of Google’s. config.txt (well, config_template.txt) modified to add new options deepl_api_key and translation_engine. The .txt file explains how to set it up. It seems to offer better translations; similar to Google, you have to sign up on their website (using a credit card) to get an API key, which allows 500k characters a month for free. (that doesn’t seem like much!) Note: It seems a little slower than Google’s stuff. (A sketch of what the request looks like follows this entry.)
You can press T to toggle translation engines on the fly, useful for comparing output
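The request itself is small. A hedged sketch of what one DeepL call looks like via libCURL (which UGT already links against) – not the app’s actual code:

```cpp
#include <curl/curl.h>
#include <string>

static size_t WriteCB(char* data, size_t size, size_t nmemb, std::string* out)
{
    out->append(data, size * nmemb);
    return size * nmemb;
}

// Sends one translate request to DeepL's free-tier endpoint; the reply is JSON
// like {"translations":[{"detected_source_language":"JA","text":"..."}]}.
std::string DeepLTranslate(const std::string& apiKey, const std::string& text)
{
    CURL* curl = curl_easy_init();
    if (!curl) return "";

    std::string response;
    char* escaped = curl_easy_escape(curl, text.c_str(), (int)text.size());
    std::string fields = "auth_key=" + apiKey + "&target_lang=EN&text=" + escaped;
    curl_free(escaped);

    curl_easy_setopt(curl, CURLOPT_URL, "https://api-free.deepl.com/v2/translate");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, fields.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCB);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return response;
}
```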
7/27/2021 – V0.70 Beta
Big changes internally with how linefeeds are handled, “dialog mode” translations should work better (Meerkov)
FEATURE: Can flexibly log translations to translation_log.txt (see config.txt’s log_capture_text_to_file option to enable)
FEATURE: Can flexibly add translations to clipboard (see config.txt’s place_capture_text_on_clipboard option to enable)
FEATURE: Checks for a newer version on startup (see config.txt’s check_for_update_on_startup option if you’d like to disable this)
Bugfix: Fixed issue where repeating text could appear (Meerkov)
Guessing the source language of a translated block might be more accurate now (Meerkov)
FEATURE: Fonts are now configurable, can set a custom font for default or any specific language (edit fonts.txt file for more info)
Sidenote: I know this page is just a giant mess now, sorry!
– NOTE FOR UPGRADING: It’s recommended to start with the config_template.txt again; just copy over your Google API key and rename it config.txt again.
My first tests used Tesseract to do the OCR locally, but without additional dataset training it didn’t work so hot out of the box compared to results from Google’s Cloud Vision. (Do they use a modified Tesseract? Not sure.) It might be a nice option for those who want to cut down on bandwidth usage or reliance on Google, although the translations themselves would still be an issue…
I like the idea of old untranslated games being playable in any language, in fact, I went looking for famous non-Japanese games that have never had an English translation and really had a hard time finding any, especially on console. If anyone knows of any I could test with, please let me know.
Also, even though my needs focus on Japanese->English, keep in mind this also works to translate English (or 36 other languages that Google supports OCR with) to over 100 target languages.
Test showing English being translated to many other languages in an awesome game called Growtopia
Sure, there are ways to get exercise while gaming. Virtual reality and music games like Dance Dance Revolution come to mind.
But that’s all worthless when your kid just wants to play Fortnite.
Behold, the PlayStep!
This thing forces him to work up a sweat. This post details what methods I used and issues I had making it. (Github source code for the program that runs on the Pi here for anybody who wants to make one)
Building a screen blanker connected to exercise isn’t a new idea (see the end of this post for related links I found) but my version does have some novel features:
Dynamically modifies the video and audio of the game’s HDMI signal to do things like partially obscure the screen in random ways
Uses an energy bank so you can save up game time. This means you can madly pedal in the lobby and still sit in a chair during the critical parts of Fortnite
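The energy bank is the whole trick, and it’s almost nothing code-wise – a sketch of the idea (the numbers and names are made up for illustration):

```cpp
#include <algorithm>

// Pedaling deposits seconds of game time; every frame drains the bank.
struct EnergyBank
{
    float seconds = 0;

    void OnFullPedalStroke() { seconds += 2.0f; }  // each full stroke banks time

    // Called every frame; returns true if the screen should stay visible.
    bool Drain(float deltaSeconds)
    {
        seconds = std::max(0.0f, seconds - deltaSeconds);
        return seconds > 0;
    }
};
```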
I first built a cheap version (~$120 in parts). It just blanks the screen when you’re out of energy, and uses an LCD screen to show energy left.
I then built a better but more expensive version (~$700 in parts), but it’s a lot cooler.
The expensive version with HDMI in/out, the “enclosure” is a plastic basket thing from the dollar store
Things both versions have in common:
* Use a Raspberry Pi 3B+ (a $40 computer with hardware GLES acceleration) with the Retropie distro – I start with it because its mouse/keyboard/GLES/SDL works out of the box with Proton SDK, where normal Raspbian requires tweaking/compiling some things
* Use Proton SDK for the app base (allows me to design/test on Windows; it handles abstraction for many platforms so I can write once but run everywhere)
* Use hall effect sensors to detect the pedal-down position on each pedal via the Pi’s GPIO – this way a kid can’t cheat; he’s forced to move the full range of the stepper (a reading sketch follows this list)
* The sensors are placed on a stepper exerciser. I used a USB connector for the wiring so I could unplug/replace it later if I wanted to set up a different exercise machine, like if I ever got a stationary bike.
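Reading the sensors with WiringPi (which the PlayStep code uses) is only a few calls. A sketch – the pin numbers and active-low behavior are my assumptions, not the actual wiring:

```cpp
#include <wiringPi.h>

const int kLeftPedalPin  = 0;   // wiringPi pin numbering
const int kRightPedalPin = 1;

void SetupPedalSensors()
{
    wiringPiSetup();
    pinMode(kLeftPedalPin, INPUT);
    pinMode(kRightPedalPin, INPUT);
    pullUpDnControl(kLeftPedalPin, PUD_UP);   // sensor pulls low when the magnet passes
    pullUpDnControl(kRightPedalPin, PUD_UP);
}

// Count a stroke only when the pedals alternate hitting their "down" sensors,
// so bouncing one pedal halfway can't farm energy.
bool LeftPedalDown()  { return digitalRead(kLeftPedalPin)  == LOW; }
bool RightPedalDown() { return digitalRead(kRightPedalPin) == LOW; }
```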
Yes, I’m about to duct tape an electrical taped sensor to a pencil that has been zip-tied in place. What? I never said I was pro
A note on using USB cables for wires and my idiocy
Each hall effect sensor requires three wires. We have two sensors. So we need to run six wires from the Pi GPIO pins? WRONG! We only need four, because the power and ground can be shared between them.
So I thought hey, I’ll use USB cables and connectors lying around, as they have four wires in them. (until we get to USB 3+ cables, but ignore that)
Then I thought, if I could find a simple USB Y splitter, it would be easier to share the power/ground with the two sensors. (I’m not actually using this as a USB connection, it’s just so I can use the wire and handy plugs)
Wow, I found this for cheap on Amazon:
Perfect! A lowly USB splitter that I’m sure just has no fancy electronics hidden inside
So I partially wired it up but when testing found that the middle pins had no continuity. Can you guess why?
WHAT THE HELL IS THIS INSIDE THE CABLE?!
It’s got a hub or something hidden in the connector. I never plugged it into an actual PC or I might have noticed. No wonder it didn’t work. I removed the electronics part (it was a horror, I shouldn’t be allowed near soldering irons) and it worked as expected. Moral of the story is, I’m dumb, and don’t trust USB splitters to just split the wires.
The cheap way (just screen blanking with LCD panel)
My “cheap” way ignores rendering anything graphical (it doesn’t output any HDMI itself) and just shows a single “energy count” number on an LCD screen. When it hits zero, the game’s HDMI signal is completely shut off until it goes positive again. In the video above I’m using little buttons to test with instead of the stepper.
To help the user notice the screen is about to shut off it makes a beeping noise as the counter nears zero.
I suggest never testing this at an airport, can’t stress that enough really.
So how can a Raspberry Pi turn on/off the game’s HDMI signal?
A splitter with no USB power = a dead signal
This is hacky but it works – I took an old 1X2 HDMI splitter and powered it from one of the Pi’s USB ports. (lots of electronics these days use a USB plug for power)
I only use one of the outputs on the splitter as I don’t really need any splitting done.
It’s possible to kill the power on a specific Pi USB port using a utility called uhubctl.
So when the player is out of “energy”, I kill the USB port powering the HDMI splitter by having my C++ code run a system command of:
./uhubctl -a off -p 2
And because the HDMI splitter is now unpowered, the signal dies killing the game screen.
After turning the USB port back on (replacing “off” with “on”) it will power up and start processing the HDMI signal again. Originally I was using the Pi to turn on/off an entire AC outlet but that seemed like overkill – I was thinking maybe turning off an entire TV or something, but meh.
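Wrapped up, the whole on/off mechanism is a couple of lines – a minimal sketch of shelling out as described above:

```cpp
#include <cstdlib>
#include <string>

// Powers the HDMI splitter up or down by toggling the USB port feeding it.
void SetSplitterPower(bool on)
{
    std::string cmd = std::string("./uhubctl -a ") + (on ? "on" : "off") + " -p 2";
    system(cmd.c_str());
}
```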
So the big downside of this method is it takes up to 5 seconds for the HDMI splitter to turn back on and your TV to recognize the signal again. It works but… it’s not optimal. Also, in my case I don’t really have a good place to put the LCD screen or speaker for the beeping. (might make more sense on a stationary bike instead of a stepper)
Alternate way to disable the HDMI signal: instead of this no-wiring hack, maybe run it through an HDMI cable but put one of the pins through a relay to turn that pin on/off? Might be the same effect but cheaper and simpler… although, which pin?!
The expensive but better way (offers more options with images and audio)
There isn’t enough drama in simply turning the HDMI signal on/off – wouldn’t it be better if holes started spawning randomly over your actual gameplay and you had to pedal to remove them as your screen became increasingly obscured?! There are a million options, really.
The Raspberry Pi can generate the graphics (thanks, GLES) and audio, but we need a way to overlay its HDMI output over the game’s HDMI signal with no noticeable latency cost at 60 fps.
This is known as a chroma key effect. (Side note: In the 90s I once bought a $5,000 video mixer, a WJ-MX 50, so I could do live effects like this. Just saw one on eBay for $100 – damn, it’s big.)
The Roland V-02HD is pricey, but it works perfectly. It has the following features of interest:
* Remembers all settings when powered on, including chroma key mode and color/sensitivity
* Can disable auto-detection so inputs 1 and 2 are always the same even if an input is turned off
* Can disable all buttons/levers on it so accidental changes won’t happen (we don’t need them active, it’s just a black box to us)
* It’s pretty small for a video switcher
* Mixes audio into the HDMI signal from both inputs
* No noticeable latency
Although I didn’t need or use it, it’s worth noting that it can show up as a USB MIDI device and be controlled via MIDI signals. That’s pretty cool – assuming the Pi could work with it, you could do transitions between inputs or enable/disable effects.
With no color keying, this is what the raw Pi video out looks like
The software to control things uses Proton SDK with its SDL2 backend and WiringPi for the GPIO to read from the sensors. It’s modified from the RTBareBones example.
It uses a config.txt file to adjust a couple of things.
To allow the Pi to correctly output 1080P HDMI even if the switcher hasn’t booted up yet, I edited the /boot/config.txt and set:
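Something like the standard force-hotplug settings – the exact values here are my assumption for 1080p60:

```
hdmi_force_hotplug=1   # output HDMI even if nothing is detected at boot
hdmi_group=1           # CEA
hdmi_mode=16           # 1080p @ 60Hz
```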
To remove the unnecessary border I also set:
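Presumably the standard overscan setting:

```
disable_overscan=1
```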
Might be fun to simply design Pi powered pedal games that use the stepper as a controller. You could then output straight to a TV or TFT screen without worrying about the spendy chroma-keying solution.
I mean, sure, my kid would refuse to play it, but it could be a funny thing to show at a meet-up or something.
Related things to check out
Cycflix: Exercise Powered Entertainment – Uses a laptop to pause Netflix if you don’t pedal fast enough. He connected an Arduino directly to the existing stationary bike electronics to measure pedaling – smart.
No TV unless you exercise! – An Arduino mounted on a stationary bike cuts the RCA signal via a relay if you don’t pedal enough. Uses a black/white detector for movement rather than hall effect sensors.
TV Pedaler – A commercial product that blanks the screen if you don’t pedal enough – and it’s still being sold? The website and product seem really old (no HDMI support) but they accept Paypal, and the creator posted here a few years ago about his 1999 patent and warned about “copying”. Hrmph. His patent covers a bunch of random ideas that his machine doesn’t use at all. Patents like this are dumb; good thing it says “Application status is Expired – Fee Related”, I guess.
The EnterTRAINER – This defunct commercial device is basically a TV remote control with a heart monitor you strap to your chest. Controls volume and TV power if your heart rate goes too low. Its hilarious infomercial was posted in one of the reviews.
The 123GoTV KidExerciser – Ancient commercial product that lets you use your own bike in the house to blank the TV if it’s not pedaled fast enough. The company seems gone now.
“I can hear you breathing… sorry kid, but you’ve got to stop or leave the room” is something Cosmo has heard many a time while using the computer next to me.
As my long suffering family can attest, I’ve had a “thing with sound” for a long time. It’s one of the reasons I love my job – I’ve worked alone in a room for most of the last thirty years.
Introducing… my white noise app! I wrote it because I couldn’t find another app that would automatically “mix” a small amount of noise into the podcasts I like, but switch to a different setting when it detects no other audio is playing, so I can sleep through the night.
It’s easy to use, has the perfect custom mixable sounds, and is totally free. If you use white noise at all, check it out.