Tuesday, December 31, 2013

3D in the Browser- using Three.js (Part 1)



I've been using Three.js for the past couple of years now. Running your creations directly in the browser using JavaScript makes things very easy, and Three.js abstracts away a lot of the mundane chores that usually make 3D development time consuming.

You might wonder whether JavaScript would slow things down too much to be usable. In reality, the Graphics Processing Unit (GPU) on board most computers today has incredible power and ends up doing 99% of the work. You can verify this by running any of the Three.js examples and checking your CPU usage. For instance, in the WebGL example below, I'm only using 12% of the CPU. It's randomly drawing 2000 pieces of pasta and barely stressing the processor. (I also had a number of applications running and 5-6 browser windows open at the time, any of which could have been taking up processor cycles.)
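To give a sense of how little code it takes to get something on screen, here's a minimal sketch of a spinning cube (written from memory against the three.js API of that era, so treat it as illustrative rather than copy-paste ready):

```javascript
// Minimal three.js scene: camera, renderer, one cube, animation loop.
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(
    75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 3;

var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

var cube = new THREE.Mesh(
    new THREE.BoxGeometry(1, 1, 1),
    new THREE.MeshNormalMaterial());
scene.add(cube);

(function animate() {
  requestAnimationFrame(animate);
  cube.rotation.x += 0.01;
  cube.rotation.y += 0.01;
  renderer.render(scene, camera); // the GPU does the heavy lifting here
})();
```

That's the whole program; everything below `renderer.render` (shaders, matrices, buffers) is handled for you.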






Here's the screencast: part one of an ongoing series I'm producing which demonstrates the incredible capabilities of Three.js. I hope you enjoy the series and find the time to try Three.js yourself. Note: Visit Threejs.org to learn more.





http://threejs.org/examples




http://www.chromeexperiments.com/





Friday, November 29, 2013

Simple Data Binding (jQuery versus AngularJS)

I've been using AngularJS this past year and I'm impressed with its capabilities. Here are some of the reasons why.


Data Binding:

I'm finding that the two-way data binding capabilities of AngularJS are saving me considerable amounts of coding time.  By eliminating lots of mundane "boilerplate" coding, I can focus on the specific application logic of the feature I'm developing.


Templates and Directives:

These extend what you can do in HTML, so you're essentially describing how the model should be projected into the view. When Angular compiles your markup, it uses these rendering instructions, called directives, to set up the data binding in your application. And you can create your own, which lets you easily build custom components that are described simply within the HTML markup.
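As a sketch of what a custom directive looks like (the names here are my own invention, not from any real app), you can define a new element and use it like any other tag:

```javascript
// Markup usage: <hello-badge name="World"></hello-badge>
angular.module('app', []).directive('helloBadge', function () {
  return {
    restrict: 'E',             // match an element named <hello-badge>
    scope: { name: '@' },      // bind the name attribute into an isolate scope
    template: '<span>Hello, {{name}}!</span>'
  };
});
```

From then on, anyone on the team can drop `<hello-badge>` into a page without knowing how it's implemented.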


MVC:

AngularJS basically uses the MVC design pattern, which promotes separation of concerns. Actually, it's closer to the Model-View-ViewModel (MVVM) pattern popularized by Microsoft.

Dependency Injection:

AngularJS has a dependency injection subsystem which reduces the chances that refactoring will cause code breakage. It allows your code to ask for its dependencies instead of constructing them itself. Because those dependencies are injected into the underlying code, they're easier to swap out or change.
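Here's a sketch of what that looks like in practice (the controller name and URL are made up for illustration): the controller just names the services it needs and the injector supplies them.

```javascript
// AnimalCtrl never constructs $http itself; Angular's injector passes it in.
// The string array keeps the injection working even after minification.
angular.module('app', []).controller('AnimalCtrl', ['$scope', '$http',
  function ($scope, $http) {
    $http.get('/api/animals').then(function (res) {
      $scope.animals = res.data;
    });
  }
]);
```

In a unit test you could hand that same controller function a mock `$http`, which is exactly the kind of refactoring-friendly seam DI is meant to provide.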

Data Binding Example:

Here's an extremely simple example of data binding in AngularJS. First, though, we'll look at how this same data binding would be done in plain old JavaScript and jQuery, and then we'll see how much simpler it is in AngularJS.
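To make the comparison concrete, here's a DOM-free sketch of the wiring that two-way binding replaces. The `view` object below is a hypothetical stand-in for an input element (real jQuery code would use `.val()` and an `input` event handler); with AngularJS, this entire function collapses into a single `ng-model="name"` attribute in the markup.

```javascript
// Hand-rolled two-way binding between a model property and a "view".
// This is the boilerplate that ng-model generates for you.
function bind(model, key, view) {
  var value = model[key];
  // model -> view: intercept writes to the model property
  Object.defineProperty(model, key, {
    get: function () { return value; },
    set: function (v) { value = v; view.value = v; }
  });
  // view -> model: the view calls this when the user types
  view.oninput = function (v) { value = v; };
  view.value = value; // initial render
}

// Usage with a fake view object standing in for an <input> element
var model = { name: "Chris" };
var view = { value: null, oninput: null };
bind(model, "name", view);

model.name = "Pac-Man";   // a model change shows up in the view
view.oninput("Blinky");   // simulated typing updates the model
```

Multiply that by every bound field on a form and it's easy to see where the time savings come from.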



Homemade Mini-Arcade


This was an interesting project which required a number of different skills. I started with a Jakks Namco 5-in-1 Arcade Classics joystick which I found used on eBay for $13.

These battery-operated video game boxes were popular years ago. You plugged them directly into your TV via RCA jacks. They typically offered a few of the most popular video games from the '80s and '90s, Pac-Man being the most often featured game.

This provided me with the heart of the arcade machine: the buttons, joystick and video game capabilities. I couldn't find any templates for small video game cabinets, so I ended up sketching out my own cabinet design on poster paper and building a paper mock-up. This allowed me to confirm the sizing prior to starting the cabinet build; it was important to make sure the monitor and controls would fit comfortably.

I used a small 3.5 watt component audio amplifier ($8) which I found online. This turned out to be way too much power; a 1/2 watt amplifier would have been fine. For the display I used a 4 inch video screen I found new on Amazon for $19. I found lots of video cabinet art on Google Images and printed it out on sticker paper. I also opted to replace the buttons that came with the joystick, buying coin-op cabinet buttons for $3 on eBay.

All in all this project was lots of fun, and most people who see it want one. Unfortunately it takes more than 10 hours to build one, and due to copyright issues, trying to sell something like this commercially would be problematic. I do think we'll see more of these types of mashups in the future. If I build another one I'll probably use a Raspberry Pi and MAME, which would allow me to run any video game through emulation, even the old Apple II arcade games I wrote years ago.


Here's a video of the finished project.

Thursday, November 28, 2013

Animals, Avatars and ASR


Here's a prototype I created last year which combines three technologies I'm interested in: Automatic Speech Recognition (ASR), avatars and computer intelligence. To recognize speech, I used SRI International's EduSpeak speech recognition technology. This version ran locally within Windows and was scriptable in the browser using JavaScript. I've since worked with a team at GlobalEnglish which adapted this speech recognizer to run on the server, so it no longer requires a client install. We use Adobe Flash to record and stream the audio to the server, where the recognition takes place before the recognized text is returned to the browser.

I used avatar technology from SitePal. It's interesting to see people's reactions to it. Some people think it's very cool, while others find it "creepy", a reaction described by the "uncanny valley" hypothesis. The uncanny valley hypothesis, in the field of human aesthetics, holds that when human features look and move almost, but not exactly, like those of natural human beings, it causes a response of revulsion among human observers.

I suspect that the human race, over time and with exposure to more and more avatars, robots, etc., will eventually lose this aversion, especially as human enhancement technologies become more widely adopted and the line between human and robot becomes blurred.

As for the logic of guessing which animal you are thinking about, I "borrowed" it from the website http://www.animalgame.com/. I didn't hack into the site or copy their code or anything like that; I simply set up a little bit of server code to submit requests and "screen scrape" the data I needed from their website. The site didn't offer an API which would have let me more formally leverage their technology, and I certainly wouldn't have done this for any production feature. However, since this was simply a demonstration project, it seemed like a viable alternative to spending the time creating the game logic from scratch. I figured that if this prototype ever moved into production, it would then be time to buy or develop a proprietary version of Animals.

I originally came across this "guess the animal" game in the 1980s, when it was widely available for the early personal computers and typically written in BASIC. It especially intrigued me because it was my first encounter with artificial intelligence running on a personal computer. And provided the information it was given was accurate, it was capable of learning: over time the program would progress to the point where it could guess any animal you were thinking about.
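For the curious, the learning trick is simple: the knowledge base is a binary tree of yes/no questions with animal guesses at the leaves, and every wrong guess grafts in one new question. Here's a minimal sketch of that logic (my own reconstruction of the classic BASIC version, not animalgame.com's code):

```javascript
// One round of the game. `answer(text)` returns true for "yes";
// `learn(wrongGuess)` asks the player for the real animal plus a
// question that distinguishes it from the wrong guess.
function play(node, answer, learn) {
  // Walk the question nodes until we reach a leaf (an animal guess).
  while (!node.animal) {
    node = answer(node.question) ? node.yes : node.no;
  }
  if (answer("Is it a " + node.animal + "?")) return true;
  // Wrong guess: mutate the leaf in place into a new question node.
  var learned = learn(node.animal);
  node.yes = { animal: learned.animal };
  node.no = { animal: node.animal };
  node.question = learned.question;
  delete node.animal;
  return false;
}

// Start knowing only one animal.
var tree = { animal: "cat" };

// Game 1: the player is thinking of a dog, so the guess fails and we learn.
var script1 = [false]; // scripted answer to "Is it a cat?"
play(tree,
     function () { return script1.shift(); },
     function () { return { animal: "dog", question: "Does it bark?" }; });

// Game 2: "Does it bark?" yes -> "Is it a dog?" yes -> a correct guess.
var script2 = [true, true];
var won = play(tree, function () { return script2.shift(); }, null);
```

After enough rounds the tree encodes every distinction players have taught it, which is all the "intelligence" those early programs had.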

Unfortunately this prototype didn't support continuous speech recognition, so I had to click a button whenever I wanted to speak. Of course, being able to simply speak to the avatar as you would to any human would have been the most natural way to interact with it.

One of the things I've learned while working with speech recognition over the years is the importance of a good speech recognition UI. You've got to constrain the context so that the user knows the range of responses that are expected. Too many "I'm sorry, I didn't get that" responses can kill any great ASR project.

I look forward to creating more prototypes utilizing these technologies. Actually I have more prototypes to show and will write about them in future posts.

If you're interested in collaborating on a future prototype drop me a line.

Check out this new Text To Sing avatar page at SitePal. Wild!




Wednesday, November 27, 2013

Running Windows 7 inside Windows 7


Often these days, installing shareware is a gamble: you're gambling that a piece of malware won't find its way onto your system in the process. One reason I've found for this is that shareware authors are increasingly turning to these "piggyback" installs as a way to cover their development costs. The sad fact is that so few of us pay for their software.

I'm not without fault myself, as I don't typically pay for shareware unless it's something I end up using regularly and my built-in "guilt trip" mechanism takes over. If I use something once or am just testing it out, I'm not usually in a rush to click the donate button.

Lately, in order to mitigate the risk inherent in testing out shareware, I've started using virtualization to test software beforehand. If it passes the test, I'll then consider installing it on my main laptop. Before installing the application, I'll do a quick Google search to see if there's any discussion about that particular software being a malware threat. Assuming it passes that test, I'll open up a virtual machine (VM), take a file and registry snapshot, install the software, and then take a quick look at what's been installed. Later, after I've taken it around the block a few times and am convinced it's safe, I'll add it to my main development machine. Because I installed it into a virtual machine, I can simply roll back (undo) the changes made to the VM.

There are lots of options when it comes to virtual machines. On the Windows side, Microsoft makes Virtual PC available for free. Also, Windows 7 ships with a free copy of Windows XP (for backwards compatibility), so you don't have to pay to activate another copy of the OS.

Here's a link to instructions for installing Windows 7 into an existing Windows 7 laptop.

So I'm effectively running Windows 7 inside Windows 7. Due to the ease of backing up a virtual machine (you basically copy a few files), I've often considered building out a laptop with no applications on it other than Virtual PC. You can have Virtual PC run at startup and take over the full screen, so in this scenario you don't even realize you're running in a virtual environment. And with the hardware-assisted virtualization (HAV) built into most new PCs today, there's very little speed penalty. Should the OS ever get infected or become a victim of malware, it's easy to simply undo the changes that affected it in the first place.

Happy Virtualization!

Chris