Thursday Night Football on Amazon

I rarely talk about work here, and corporate work mostly doesn’t see the public eye like this, but this is an exception, because it’s publicly visible now. I contributed heavily to the automated system that produced the team-vs-team and background images (and only the images, not the streaming content or anything else) that you now see on Amazon for Thursday Night Football, some of whose games will be broadcast soon. Search for Thursday Night Football on Amazon to see more of the images.

Go here now for the details.

Areas of Technology that I Find Interesting

New areas of technology emerge all the time, and they change the educational and workforce-related landscape along with them. Here’s a compilation of some technologies I’ve been hearing about that speak to me and that I’d love to be able to contribute more toward.
1. Inexpensive space travel: something along the lines of SpaceX’s mission, so that we can become an interplanetary civilization, enabling our posterity to better handle unforeseen calamities and issues that might arise on Earth in the future owing to population growth, fossil fuels running out, etc.
2. Doing big data analysis and data science on data from the environment, and being able to use that analysis for the betterment of the environment.
3. Controlling robots or avatars with our brains, in order to navigate areas impossible for humans to tread, such as radiation-heavy areas or deep space. Dr. Michio Kaku’s book The Future of the Mind provides an excellent explanation of this use case.
4. Alzheimer’s research. The book mentioned above also discusses a lot of active research into how the human brain grows and decays, and what we can do about it.
5. Doing big data analysis and data science on data from the fields of astronomy and cosmology, and being able to use that to explore new territories or provide answers to long-standing open questions in astronomy and cosmology.

What’s next?

Technical Diversification vs Focus

I believe there has to be a delicate balance between the breadth of technologies one is familiar with and the few that one goes really deep into (the so-called T-shaped career).

Technology is growing at an exponential rate, but you cannot work all the time. You want to have time for family, friends, life and living, and for having some fun. At the same time, as the demands at work change, we have to be able to learn and adapt to new technologies and sometimes entire new technology stacks and ways of thinking. Learning new things also expands our horizons and introduces us to new ways of thinking that we didn’t think possible before.

I propose being very selective within each kind of technology while covering many different kinds. Let me explain: learn a single representative from each of many different categories. For instance, when it comes to text/coding editors, there are many out there. Instead of mastering the Vim + Tmux + Zsh stack and also mastering Emacs (both stacks will get you to the same place), take your pick, and that then becomes the old-school programmer’s editor you can use uniformly in any lean, remote environment to manipulate plain text mercilessly fast. After you’ve done that, move on to an advanced IDE, such as JetBrains CLion or IntelliJ.

Likewise, learn one todo app really well – be it OmniFocus, Wunderlist, Things, or whatever – but don’t bother learning all of them well. Learn one object-oriented language, one mid-level language, one concurrent language, one functional language, etc. really well, but don’t read five books each on both Java and C#.

There are areas where you should diversify (different programming paradigms), and there are areas where you should accumulate your several thousand hours (your text editor) so that you can burn the keyboard shortcuts into your fingers and build that muscle memory. Time and energy are limited resources, after all, and as our lives become busier and busier, this kind of prioritization remains the only option if we want to keep learning the right kinds of new things and keep sailing forward smoothly.

Easy Ways to Help the Environment

I recently read a book on Elon Musk, in which one of the motivations behind SpaceX is revealed: given that Earth might someday become uninhabitable for humans for any of several reasons, we must make it easy and inexpensive to travel to other planets, with a view to eventually colonizing them. That motivation appeals to me a lot, although unfortunately, for the foreseeable future, I’m not going to be directly involved in any such endeavor.

Continuing this train of thought, one of the factors making our home planet less suitable for sustaining life is, of course, climate change. I also read Unstoppable, in which Bill Nye (the Science Guy) does a marvelous job of explaining how to use technology for a cleaner environment, while debunking detractors and elucidating why the topic is so important. Finally, Arnold Schwarzenegger recently wrote a vehement piece urging people to take serious action toward ‘terminating’ climate change.

All these factors made me start looking for ways in which I can contribute every day toward the betterment of our Pale Blue Dot. It wasn’t difficult. I came upon 50 Ways to Help, a beautiful compilation of no-brainers that people can incorporate into their everyday lives in order to make a difference. Some of the suggestions aren’t very practical for my particular profession: if I shut my computers down every night instead of putting them to sleep or hibernation, I’d spend a ton of time every morning restoring the applications and programs I had running the previous night; and while I walk to work, everything else is too far or too inconvenient to bike, and I can’t use a bike for groceries. Most of the suggestions, though, are very easy to implement.

As it stands, for now I’m resolved to regularly do the following, as my way of saying ‘thank you’ to our home in the Cosmos:

1. Use CFLs
2. Don’t rinse dishes before putting them into the dishwasher
3. Recycle as much as possible (was already doing this)
4. Eat only vegetarian some days
5. Only launder full loads in the machine
6. Launder on cold or warm, not hot
7. Use fewer paper napkins
8. Use both sides of paper
9. Use reusable water and coffee containers
10. Take shorter showers
11. Take fewer baths
12. Brush teeth without running water
13. Use cruise control
14. Occasionally buy secondhand mechanical and electrical equipment
15. Buy local (to reduce the fuel and pollution needed to get the stuff to you)
16. Keep vehicles maintained
17. De-clutter and donate
18. Use e-tickets
19. Prefer downloads over compact discs (who uses optical discs anymore anyway?)
20. Go paperless

Earth is our home. And for the foreseeable future, given the current state of technology, it is our only home in the Cosmos. For better or for worse, Isaac Asimov’s Foundation-level civilizations spreading across galaxies, where space travel is the norm rather than the exception, do not exist, and aren’t likely to exist for a very, very, very long time to come. As Bill Nye would say, let’s treat the planet as a house we own, not as a rental apartment. Let’s take good care of it. Only good things can come out of a pledge to do something about climate change right now, and we can all contribute without changing much in our everyday lives.

New technologies at UNT and around the world [Schweeb]

Hi everybody!
So, the UNT Rec Center is apparently doing something really cool – they are converting the kinetic energy generated when you run on the treadmills and ellipticals into electrical energy! I don’t know if it’s fully operational yet, but look here.

Really motivating for environmentalists who want to generate clean, carbon-free energy, and also for students who want to contribute toward producing such energy!

Also, a new project from Google lets you ride like this.

It has been implemented at a park in New Zealand, and we are just waiting for it to go public for everybody to use in big cities as a means of public transport =D Won’t that be cool?

Multicore Programming – negatives and positives

[Adapted from David Patterson’s article The Trouble with Multicore, IEEE Spectrum magazine, July 2010]

For some years now, the semiconductor industry has been focusing on putting several microprocessors on a single chip. But this has been done with no clear notion of how such devices will, in general, be programmed.

Why, then, has the industry taken such a gamble, just hoping that someone, someday, will figure out how to program multiple cores? Well, it turns out there was no alternative.

For decades, the dominant trend in the industry was to squeeze as many transistors as possible onto a chip. The advent of microprocessors that could do several things at once pushed processing power up even further. The continually shrinking size of transistors and the steady increase in clock rates worked out quite well for a considerable amount of time.

However, around 2003 the whole process stagnated. Why? Because the operating voltage could not be reduced any further. Adding more transistors therefore caused the amount of heat dissipated per square millimeter of silicon to go up – hitting the power wall. Try to add more transistors to a standard chip now, and keeping it cool becomes a problem. After all, as David Patterson puts it in the article, nobody wants a laptop that burns your lap.

So heat problems, and the failure to keep increasing the performance of a single processor, have led designers to shift their focus to assembling multiple cores on a chip. Potentially, with several low-end microprocessors working together in parallel, you can have much more computing power.

Welcome multicore microprocessors, or many-core microprocessors.

So the major change in focus has been from packing more transistors onto a chip [using efficient circuit techniques] to packing more cores onto a chip. The core has become the new transistor, so to speak.

So why does all this make programming these chips difficult? Or why is it that we are not able to fully utilize the computing power provided to us by the standard chips shipping from Intel?

For starters, not all problems can be broken into several smaller problems that are capable of running in parallel, independently of each other. Complications arise if one of these parts cannot be completed until another is finished. All the parts also have to be timed such that they finish together – otherwise the segments that finish early will sit waiting for any segments that are still running.

The technical terms for these problems are load balancing, sequential dependency, and synchronization. And it is the job of the programmer to handle them. Hence the challenge.
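
To make those three terms concrete, here is a minimal sketch of my own (it is not from Patterson’s article) in Java: a large array is summed across several cores. Splitting the array into equal-sized chunks is the load balancing, the final combine step cannot start until every worker is done (a sequential dependency), and the Future.get() calls are the synchronization.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum {
    public static void main(String[] args) throws Exception {
        long[] data = new long[10_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i % 100;

        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        int chunk = (data.length + cores - 1) / cores;    // equal-sized chunks: load balancing

        List<Future<Long>> partials = new ArrayList<>();
        for (int c = 0; c < cores; c++) {
            final int start = c * chunk;
            final int end = Math.min(start + chunk, data.length);
            partials.add(pool.submit(() -> {              // each part runs independently on a core
                long sum = 0;
                for (int i = start; i < end; i++) sum += data[i];
                return sum;
            }));
        }

        // Sequential dependency: the combine step cannot start until every part is done.
        long total = 0;
        for (Future<Long> f : partials) total += f.get(); // synchronization: block until each part finishes
        pool.shutdown();

        System.out.println("total = " + total);
    }
}
```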

One hope was that the right parallel programming language would make parallel programming straightforward. APL, Id, Linda, Occam, SISAL – languages have come and gone, and some have even made parallel programming easier, but they haven’t succeeded in making it as fast, efficient, and flexible as traditional sequential programming. Hence they haven’t become very popular either.

At the other end of the spectrum, certain visionaries believed that if they just designed the proper hardware, things would become smooth sailing. That idea hasn’t worked out so far either.

Automatic parallelization of programs by software hasn’t been much of a success either. While it has been shown to be effective for up to 8 cores, its usefulness for any larger number of cores is looked upon with skepticism. Research in this area is ongoing.

Having talked about the negatives, let us now look at the bright side. One area in which parallelism does work is when a bunch of smart programmers can divide a problem into several parts that do not depend much on each other. ATM transactions, airline ticketing, and Internet search are some examples – essentially, it is easier to parallelize a problem where a lot of users are doing the same thing than one where a single user is doing something complicated.
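
As a small illustration of that “many users doing the same thing” pattern (the request IDs and the handleRequest() stand-in below are made up for the sketch), each request can be handled independently, so the work spreads across the cores with almost no coordination:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ManyUsers {
    // Stand-in for per-user work, such as looking up a balance or serving a search query.
    static String handleRequest(int requestId) {
        return "request " + requestId + " -> done";
    }

    public static void main(String[] args) {
        List<String> responses = IntStream.range(0, 1_000)
                .parallel()                         // the runtime spreads the requests across the cores
                .mapToObj(ManyUsers::handleRequest) // each request is independent of the others
                .collect(Collectors.toList());

        System.out.println(responses.size() + " independent requests handled");
    }
}
```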

Another success story is computer graphics, where several unrelated scenes can be generated in parallel. At a much more sophisticated level, algorithms have been discovered that parallelize the computation of single images too. High-end GPUs (graphics processing units) may contain hundreds of processors.

Scientific computing and weather prediction are more such examples.

To summarize so far, data-parallel or embarrassingly parallel problems lend themselves to being solved easily with parallelism. Another important point is that it usually takes hordes of doctorates and expert programmers to fully utilize the computing power provided by multicore processors – and desktop-level applications simply lack that kind of intellectual horsepower behind them.

As more and more people start working on the problem of parallelization, there is increasing hope. Programmers are mostly focusing on dual- and quad-core processors for now. Besides, while programmers in the past depended on chip makers to keep giving them faster and faster chips to handle bigger and bigger problems, they can no longer count on single cores getting any better, so they have to put some effort into inventing the right way to program multicore chips.

Nevertheless, instead of finding an all-encompassing way to convert every piece of software to run on many parallel processors, the trend, rather naturally, is to develop a few new applications that can take advantage of many-core processors. One such application is speech recognition.

One problem researchers face is that many-core processors are not yet being built, and simulating, say, a 128-core processor in software is also complicated. A way around this is to emulate such chips using field-programmable gate arrays (FPGAs).

To conclude, there are several possible ways in which the industry and programmers can move now, and it is going to be very interesting to watch how things develop over the next decade.

Easy comparison between basic computer technologies

Ever wondered what the difference is between POP3 and IMAP? Ever tried to figure out whether you should use VNC or NX to access your work desktop remotely, and which one would be faster or better? Ever wished someone would concisely tell you the differences between MySQL and PostgreSQL so you could decide in a jiffy which one to use?

Now, apparently, we have an answer. This website right here does just that, and I’ve personally found it useful in certain cases.

That’s all for now – peace!

Bookmark-plugin for Adobe Reader

Ever tried reading a long e-book in Adobe Reader, say one that is more than 100 pages long? Ever wanted to take a break in between? [Duh!] Ever wished there was a way to store the current page as a bookmark in Reader itself and not have to write it down somewhere else? Funny that Adobe Reader doesn’t have this basic functionality.

Now, using the JavaScript file offered at this site, you can! On Windows 7, I had to store it under Program Files\Adobe\Reader 9.0\Reader\Javascripts.

Now, when you open Reader, go to the page you would like to bookmark and select Tools – Bookmark – Bookmark this page. Then you can happily close the program. The next day, when you get the urge to read further, open the file in Reader, go to Tools – Bookmark – Go to bookmark, and voilà – you’re there!

Heat your homes using waste from data centers

This is the neatest proposition I’ve seen or read about in quite some time. IBM’s Zürich Research Lab demonstrated at Supercomputing 2008 how, within 5 years, they plan to have data centers that are cooled by water pumped through microchannels inside the computers rather than by traditional air conditioning or fans. That alone could save billions of dollars in annual energy costs! Additionally, the water that absorbs the heat from these huge data centers would then be used to heat nearby homes, saving even more energy dollars. If this comes true, we are talking about a very efficient and effective step towards a greener Earth, in my opinion. Even as I hold my breath, you can read this interesting article here.