The wearable gadget industry has morphed tremendously over the last few years, and today it stands at a crossroads. Wearable tech seems headed in one of two distinct directions: one as a luxury item sold mostly to fancy women who step out of their Mercedes and fish for wads of cash in their Gucci handbags, and to men with 63 pairs of shoes back home and a bit of money left to spare for some more boy toys; the other as a functional, mainstream accessory, affordable to most people. For now at least, the industry looks to be going in the former direction, but there are those who are trying to bring the technology to everybody. Here's a roundup of the best wearables of all the different types that we can expect in the near future.

  • Sony's mysterious new clip-on device:

With Google Glass being out of reach for most people who would like to keep their kidneys, the product is now fading into the background. Society already considers it elitist and creepy, and what's more, Google Glass users have their very own nickname: Glassholes. Sony, meanwhile, has been working on a device that clips onto glasses or sunglasses to transform them into futuristic Call of Duty-style HUDs, and better still, it can be taken off after use, which means you won't have to walk into the local bar repelling other human life. The device is expected to have a control board and Bluetooth capabilities, and to project high-resolution images in all light conditions. It is also expected to be a featherweight in its market, weighing in at a measly 40g, and that's necessary, since we wouldn't want it accidentally blocking our nasal passages. The details of this product aren't fully out yet, so this could still swing either way.


 

  • Virgin Media's KipstR:

Most wearables focus on health or accessibility, or even fashion, so I've got to admit this TV-recording wristband makes for a welcome change in the industry. The device senses when you fall asleep watching TV and begins recording your show. I know it's a very specialized product that essentially does the same thing your TV remote would, but thinking about all the missed Modern Family episodes, I think it deserves a mention, albeit a really short one. Moving on.

 

  • Apple Watch:

Possibly the most high-profile feature on this list, Apple's upcoming wearable will make an attempt to fuse technology and fashion into a small package, and sell to customers with a large one, since it is expected to cost around $300. There's nothing groundbreaking going on here, with basic applications such as music and pictures, along with some health and lifestyle applications. We can expect an elegant and ergonomic build, something Apple has a reputation for. It also seems to be a great alternative to strapping your iPhone around your arm and looking like a royal douche. What's more, you can read messages from your phone on your watch, which is handy when you're running, or on a crowded bus or train. One thing's for sure: the Apple Watch is going to be a mean device, and those with a little bit of fun money will definitely want to have it.


  • Sony SmartBand:

I wouldn't normally want to bore you with another FitBit in the sea of health and lifestyle wearables, which is why the SmartBand is special. On the surface it looks like just another gadget you'd buy in early January, just after you've drafted your New Year's resolution and vowed to start running, but it's the little things that make this an excellent choice if you decide to go that way. Apart from the usual health and fitness mumbo jumbo, the SmartBand pairs with your phone and lets you control music through the band, although it does not have a screen, so you would need to know your playlist well. Also, the band vibrates on receiving notifications, which is a welcome addition. What I like most is that it vibrates if it is too far away from the phone, so you won't forget your phone in a cab or at a restaurant ever again. These little features make it a better option for those looking for a fitness companion.


 

 

Communicate with anyone on the planet, with no linguistic divide. Sounds like something out of a cheesy sci-fi flick, doesn't it? Until recently, it might have, but with Skype's latest offering, the Skype Translator, the world may have just become a smaller place.

Over a decade of research and development has allowed Microsoft to achieve what a number of Silicon Valley icons—not to mention the U.S. Department of Defense—have not yet been able to. To do so, Microsoft Research (MSR) had to solve some major machine learning problems while pushing technologies like deep neural networks into new territory.

Translation, though, has never been the hardest part of the equation. Effective text translators have been around for a while. Translating spoken language—and especially doing so in real time—requires a whole different set of tools. Spoken words aren't just a different medium of linguistic communication; we compose our words differently in speech and in text. Then there's inflection, tone, body language, slang, idiom, mispronunciation, regional dialect and colloquialism. Text offers data; speech, with all its nuances, offers nothing but problems.

To translate an English phrase like “the straw that broke the camel’s back” into, say, German, the system looks for probabilistic matches, selecting the best solution from a number of candidate phrases based on what it thinks is most likely to be correct. Over time the system builds confidence in certain results, reducing errors. With enough use, it figures out that an equivalent phrase, “the drop that tipped the bucket,” will likely sound more familiar to a German speaker.
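Under the hood, that selection step is simple to picture. Here's a minimal Python sketch of choosing among candidate phrases by probability; the phrase table and the numbers in it are invented for illustration, not MSR's actual data:

```python
# Minimal sketch of probabilistic phrase matching. The candidate phrases
# and probabilities below are made up for illustration.
phrase_table = {
    "the straw that broke the camel's back": [
        # (candidate German phrase, estimated probability of correctness)
        ("der Tropfen, der das Fass zum Überlaufen brachte", 0.71),  # "the drop that tipped the bucket"
        ("der Strohhalm, der dem Kamel den Rücken brach", 0.22),     # literal rendering
        ("das letzte Wort", 0.07),
    ],
}

def translate_phrase(phrase):
    """Pick the candidate translation the system is most confident in."""
    candidates = phrase_table.get(phrase, [])
    if not candidates:
        return phrase  # fall back to the source phrase
    best, _prob = max(candidates, key=lambda c: c[1])
    return best

print(translate_phrase("the straw that broke the camel's back"))
```

In the real system those probabilities aren't hand-written; they are estimated from huge bilingual corpora and nudged up or down as the engine builds confidence in certain results.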

This kind of probabilistic, statistical matching allows the system to get smarter over time, but it doesn't really represent a breakthrough in machine learning or translation (though MSR researchers would point out that they've built some pretty sophisticated and unique syntax-parsing algorithms into their engine). And anyhow, translation is no longer the hardest part of the equation. The real breakthrough for real-time speech-to-speech translation came in 2009, when a group at MSR decided to return to deep neural network research in an effort to enhance speech recognition and synthesis—the turning of spoken words into text and vice versa.

Designed more like the human brain than a classical computer, Deep Neural Networks (DNNs) are biologically inspired computing paradigms that enable computers to learn observationally through a powerful process known as deep learning. New DNN-based models that learn as they go proved capable of building larger and more complex bodies of knowledge about the data sets they were trained on—including things like language. Speech recognition accuracy rates shot up by 25 percent. Moreover, DNNs are fast enough to make real-time translation a reality, as 50,000 people found out this week.

So how do all these magical elements come together?

When one party on a Skype Translator call speaks, his or her words touch all of those pieces, traveling first to the cloud, then in series through a speech recognition system, a program that cleans up unnecessary “ums” and “ahs” and the like, a translation engine, and a speech synthesizer that turns that translation back into audible speech. Half a beat after that person stops speaking, an audio translation is already playing while a text transcript of the translation displays within the Skype app.
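Here is a hypothetical sketch of that pipeline; none of these function names are Microsoft's, and each stub stands in for a cloud-hosted component of the actual product:

```python
# Hypothetical sketch of the Skype Translator pipeline described above.
# Each stub stands in for a cloud-hosted service in the real product.

def recognize_speech(audio):            # spoken words -> raw text
    ...

def remove_disfluencies(text):          # strip the "ums" and "ahs"
    ...

def translate(text, source, target):    # text -> translated text
    ...

def synthesize_speech(text, language):  # translated text -> audible speech
    ...

def translate_call_audio(audio, source="en", target="de"):
    raw = recognize_speech(audio)
    clean = remove_disfluencies(raw)
    translated = translate(clean, source, target)
    # The caller gets both the audio translation and a text transcript,
    # mirroring what the Skype app displays on screen.
    return synthesize_speech(translated, target), translated
```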

Skype Translator still isn't perfect, though: it fumbles on uncommon idioms and phrases, and how the system will evolve as it tries to keep up with tens of thousands of users testing its capabilities remains to be seen. What is certain is that, through Skype, Microsoft has ushered in an age of digital communication without borders.

 

A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E. Simply put, Machine Learning is the field of study that gives computers the ability to learn without being explicitly programmed. Essentially, it is a method of teaching computers to make and improve predictions or behaviours based on some data. What is this "data"? Well, that depends entirely on the problem. It could be readings from a robot's sensors as it learns to walk, or the correct output of a program for certain inputs. Machine Learning converts data sets into pieces of software, known as "models," that can represent the data set and generalize to make predictions on new data.
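To make the E/T/P definition concrete, here is a tiny self-contained example: the task T is predicting y from x, the performance measure P is how far predictions land from the truth, and the experience E is a four-point training set (the numbers are invented):

```python
# Toy model: fit a line to a tiny data set by least squares, then
# generalize to an input the model has never seen.
xs = [1.0, 2.0, 3.0, 4.0]          # experience E: inputs...
ys = [2.1, 3.9, 6.2, 7.8]          # ...and correct outputs (roughly y = 2x)

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def predict(x):
    """The learned 'model': a compact stand-in for the whole data set."""
    return slope * x + intercept

print(predict(5.0))  # close to 10, though x = 5 was never in the data
```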

Broadly, Machine Learning can be used in three different ways:

  1. Data Mining: ML can be used by people to gain insights from large databases.
  2. Statistical Engineering: ML can be used to convert data into software that makes decisions about uncertain data.
  3. Artificial Intelligence: ML can be used to emulate the human mind, to create computers that can see, hear, and understand.


The question arises: when a machine 'learns', what does it modify? Its own code, or the data that encodes its experience for a given set of inputs?

Well, it depends.

One example of code actually being modified is Genetic Programming, where you essentially evolve a program to complete a task (of course, the program doesn't modify itself – instead, one program modifies another).

Neural networks, on the other hand, modify their parameters automatically in response to prepared stimuli and expected response. This allows them to produce many behaviours (theoretically, they can produce any behaviour because they can approximate any function to an arbitrary precision, given enough time).
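A single artificial neuron makes this concrete. In the sketch below (a classic perceptron learning the logical AND function), notice that the code never changes; only the parameters w and b do:

```python
# A single neuron learning logical AND via the perceptron update rule.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # the parameters that "learning" will modify
b = 0.0
rate = 0.1

for _ in range(20):                      # a few passes over the data
    for (x1, x2), target in samples:
        output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        error = target - output
        # The learning step: nudge the parameters, never the code.
        w[0] += rate * error * x1
        w[1] += rate * error * x2
        b += rate * error

print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
       for (x1, x2), _ in samples])      # -> [0, 0, 0, 1]
```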

This may lead you to believe that machine learning algorithms work by “remembering” information, events, or experiences. This is not necessarily (or even often) the case.

Neural networks only keep the current "state" of the approximation, which is updated as learning occurs. Rather than remembering what happened and how to react to it, neural networks build a sort of "model" of their "world." The model tells them how to react to certain inputs, even if the inputs are something they have never seen before.

This last ability – the ability to react to inputs that have never been seen before – is one of the core strengths of many machine learning algorithms. Imagine trying to teach a computer driver to navigate highways in traffic. An effective machine learning algorithm would (hopefully!) be able to learn similarities between different states and react to them similarly.

The similarities between states can be anything – even things we might think of as mundane can stump a computer! For example, let's say the computer driver learned that when a car in front of it slowed down, it had to slow down too. For a human, replacing the car with a motorcycle doesn't change anything – we recognize that the motorcycle is also a vehicle. For a machine learning algorithm, this can actually be surprisingly difficult! A database would have to store information separately about the case where a car is in front and where a motorcycle is in front. A machine learning algorithm, on the other hand, would "learn" from the car example and be able to generalize to the motorcycle example automatically.

Machine learning is a huge field, with hundreds of different algorithms for solving a myriad of problems across a plethora of fields, ranging from robotics to stock forecasting. Think of the humble search engine. Behind it is a very complex system that interprets your query, scours the web, and returns information that you will find useful. Because these engines handle such a high volume of traffic, Machine Learning, in the form of automated decision-making, is used to handle the uncertainty and ambiguity of natural language.

As Rick Rashid, Founder of Microsoft Research, put it, "This topic of machine learning has become incredibly exciting over the last 10 years. The pace of change has been really dramatic." With recent leaps like IBM Cognitive Computers' Skin Cancer Detection System and Skype's real-time speech-to-speech translator, Machine Learning truly is the way forward.

 

The Web knows no bounds. With a seemingly infinite amount of data at our fingertips, effective navigation through this unending maze of information becomes as important as its comprehension. The easiest way to find the proverbial needle in the haystack? The search engine.

For all the complexity behind the search engine, it has two primary functions – crawling and indexing, and serving up results ranked by relevance.

What does it mean when we say Google has "indexed" a site? Colloquially, we mean that the site shows up in a [site: www.site.com] search on Google. This lists the pages that have been added to Google's database – but technically, they have not necessarily been crawled, which is why, from time to time, you will see results for pages Google has never actually visited.

Indexing proper is something else entirely. To simplify, think of it this way: URLs have to be discovered before they can be crawled, and they have to be crawled before they can be "indexed" – or, more accurately, before some of the words in them are associated with the words in Google's index.

Google learns about URLs and adds them to its crawl scheduling system. It de-duplicates the list, arranges the URLs in priority order, and crawls in that order. Once a page is crawled, Google then runs another algorithmic process to determine whether to store the page in its index. What this means is that Google doesn't crawl every page it knows about, and doesn't index every page it crawls.
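A toy version of that flow might look like the following; the fetch and quality-check helpers are hypothetical stand-ins, and the real scheduler is vastly more sophisticated:

```python
import heapq

# Illustrative discover -> de-dupe -> prioritize -> crawl flow.
seen = set()        # URLs already discovered (the de-dupe step)
queue = []          # min-heap of (negative priority, url)

def fetch(url): ...             # hypothetical: download the page
def worth_indexing(page): ...   # hypothetical: quality check
def index(page): ...            # hypothetical: store words in the index

def discover(url, priority):
    """Record a newly learned URL, skipping duplicates."""
    if url not in seen:
        seen.add(url)
        heapq.heappush(queue, (-priority, url))

def crawl_next():
    """Crawl the highest-priority URL; indexing is a separate decision."""
    if not queue:
        return None
    _, url = heapq.heappop(queue)
    page = fetch(url)
    if worth_indexing(page):    # not every crawled page gets indexed
        index(page)
    return url
```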


Which brings us to how these pages are ranked. At first glance, it seems reasonable to believe that what a search engine does is keep an index of all these web pages, and when a user types in a search query, the engine browses through its index and counts the occurrences of the key words in each web file. The winners are the pages with the highest number of occurrences of the key words, and these get displayed back to the user. Indeed, this was how things were done in early search engines, with their text-based ranking systems. This leads to a host of issues. For example, if one searches for "ACM", one would expect www.acm.org to be the most relevant result. However, there may be millions of pages on the web using the term "ACM". Suppose one were to write nothing but the term "ACM" a billion times on a web page. Since the search engine simply counts the occurrences of the words in the query, such a page would, invariably, make it to the top of the results.
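The flaw is easy to reproduce. A few lines of Python implementing that naive occurrence count (with a made-up spam page standing in for the keyword-stuffed offender) show the stuffed page sailing to the top:

```python
# Naive text-based ranking: count query-term occurrences and sort.
pages = {
    "www.acm.org":      "ACM is the Association for Computing Machinery ...",
    "spam.example.com": "ACM " * 1_000_000,   # nothing but the keyword
}

def naive_rank(query, pages):
    scores = {url: text.lower().count(query.lower())
              for url, text in pages.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(naive_rank("ACM", pages))
# -> ['spam.example.com', 'www.acm.org']  (the stuffed page wins)
```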

The usefulness of a search engine depends on the relevance of the result set it gives back. There may of course be millions of web pages that include a particular word or phrase; however some of them will be more relevant, popular, or authoritative than others. A user does not have the ability or patience to scan through all pages that contain the given query words. One expects the relevant pages to be displayed within the top 20-30 pages returned by the search engine.

One of the most well-known algorithms for computing the relevance of web pages is Google's PageRank algorithm. The idea PageRank introduced was that the importance of any web page can be judged by looking at the pages that link to it. If we create a web page i and include a hyperlink to the web page j, this means that we consider j important and relevant for our topic. If there are a lot of pages that link to j, the common belief is that page j is important. If, on the other hand, j has only one backlink, but that comes from an authoritative site k (like www.google.com or www.cnn.com), we say that k transfers its authority to j; in other words, k asserts that j is important. Whether we talk about popularity or authority, we can iteratively assign a rank to each web page, based on the ranks of the pages that point to it.

A quick overview of PageRank:

  • The higher the page’s score, the further up the search results list it will appear.
  • Scores are partially determined by the number of other Web pages that link to the target page. Each link is counted as a vote for the target. The logic behind this is that pages with high quality content will be linked to more often than mediocre pages.
  • Not all votes are equal. Votes from a high-ranking Web page count more than votes from low-ranking sites. You can't really boost one Web page's rank by making a bunch of empty Web sites that link back to the target page.
  • The more links a Web page sends out, the more diluted its voting power becomes. In other words, if a high-ranking page links to hundreds of other pages, each individual vote won’t count as much as it would if the page only linked to a few sites.
  • Other factors that might affect scoring include how long the site has been around, the strength of the domain name, how and where the keywords appear on the site and the age of the links going to and from the site. Google tends to place more value on sites that have been around for a while.
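The iterative rank assignment described above can be sketched in a few lines. This is a minimal power-iteration version of the idea on a made-up three-page web, not Google's production algorithm (which layers many more signals on top):

```python
# Minimal PageRank sketch: each page repeatedly passes its score to the
# pages it links to, diluted by how many outgoing links it has.
links = {               # toy web: page -> pages it links to
    "k": ["j"],         # authoritative page k votes only for j
    "j": ["i"],
    "i": ["j", "k"],
}
damping = 0.85          # standard damping factor
rank = {page: 1 / len(links) for page in links}

for _ in range(50):     # iterate until the scores settle
    new = {}
    for page in links:
        incoming = sum(rank[src] / len(outs)
                       for src, outs in links.items() if page in outs)
        new[page] = (1 - damping) / len(links) + damping * incoming
    rank = new

print(rank)             # j ends up highest: it collects votes from i and k
```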

 

Ever since the advent of the Raspberry Pi, cheap, tiny, underpowered computers have become all the rage. With its $99 parallel-processing board for Linux, christened the Parallella, Adapteva wants a larger slice of the single-board computer pie. It may be almost four times the price of the Pi, but the concept of a supercomputer for the average consumer at under a hundred dollars deserves to be lauded.

Parallel computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently, i.e. in parallel. Supercomputers such as the IBM Blue Gene/P employ parallel computing.
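The idea is easy to demonstrate on an ordinary multicore machine. This sketch splits one large problem (summing a big list) into four sub-problems solved concurrently with Python's standard multiprocessing module:

```python
from multiprocessing import Pool

def chunk_sum(chunk):
    """Solve one small sub-problem."""
    return sum(chunk)

if __name__ == "__main__":
    numbers = list(range(10_000_000))            # one large problem
    chunks = [numbers[i::4] for i in range(4)]   # four smaller ones
    with Pool(processes=4) as pool:
        partials = pool.map(chunk_sum, chunks)   # solved in parallel
    print(sum(partials) == sum(numbers))         # True: same answer
```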


Based on the Epiphany multicore chips from Adapteva, the Parallella platform is an open source, energy-efficient, high-performance, pocket-sized computer with an ARM A9 processor. The 64-core Epiphany Multicore Accelerator allows the board to achieve 90 gigaflops (roughly the throughput of a 45 GHz single-core processor) while consuming only 5 watts under average workloads.

Specifications:

  • Zynq-7000 Series Dual-core ARM A9 CPU (Z-7010 or Z-7020)
  • 16- or 64-core Epiphany Multicore Accelerator
  • 1GB RAM
  • MicroSD Card
  • 2x USB 2.0
  • 4 general purpose expansion connectors
  • 10/100/1000 Ethernet
  • HDMI port
  • Linux Operating System
  • 3.4″ x 2.15″ form factor

 

Imagine your trusted HP deskjet dishing out scaled-down models of your favorite cars and planes. Sounds cool, doesn't it? That is essentially what 3D printing is. A 3D printer is a device that creates three-dimensional objects from digital files fed into it, just like a regular printer prints out physical copies of digital documents. So if you take one of your sem 2 CAD drawings and pop it into a 3D printer, you'll have a physical 3D model of the drawing.

The next question that comes to mind is, how exactly does something like this work? Essentially, the digital 3D drawing is divided into a large number of 2D slices. For example, to print a 3D cube, the printer will divide the cube into a large number of thin layers of squares. Once the 3D image is divided into numerous 2D images, the printer deposits material layer by layer from the ground up and eventually creates the required 3D model. Imagine building a wall brick by brick, except here the brick layers are the layers of material deposited by the printer.
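For a feel of what slicing produces, here is a toy calculation: carving a 10 mm sphere into 0.2 mm layers and reporting the circular cross-section the printer would trace at each height (real slicer software does this for arbitrary geometry, not just spheres):

```python
import math

# Slice a sphere of radius 10 mm into 0.2 mm-thick layers and compute
# the radius of the circle the printer would deposit at each height.
radius = 10.0        # mm
layer_height = 0.2   # mm

steps = int(2 * radius / layer_height) + 1
for i in range(steps):
    z = -radius + i * layer_height
    cross_section = math.sqrt(max(radius**2 - z**2, 0.0))
    print(f"layer at z = {z:6.1f} mm -> circle of radius {cross_section:.2f} mm")
```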

3D printing is the Benedict Cumberbatch of the technology world. Everybody’s talking about it, and for good reason. Once the process has been streamlined, it could bring manufacturing costs down greatly. Imagine having everything from your phone to your sunglasses being manufactured using 3D printing technology. Imagine dentists making dentures using 3D printers instead of turning part-time sculptors. Imagine 3D printed heart valves for patients with heart conditions. Archaeologists could create fossil replicas and architects could print models of buildings. In the future, prototypes of everything from cars to rockets and satellites could be 3D printed. The possibilities are endless, and that is why today 3D printing has everybody, from Wall Street to Silicon Valley jumping about like a kid in a candy store.

By now you probably have a fair idea of how groundbreaking this is. 3D printing is the future, and the future cannot look any brighter. Except it could, courtesy of a bunch of geniuses at MIT. They call their technology 4D printing. No, the 4th D is not time, but the ability to change the other three 'D's. 4D printing aims at creating 'smart' objects that can respond to external stimuli and changes in the environment. Imagine shoes that adjust to the curves of your feet to fit you just right, or t-shirts that don't need ironing. Think water pipes that can adjust to the volume of water flowing through them to keep the force of the water constant. And that's not even the most amazing part. One would naturally think the material would need to be some sort of bionic, semi-living futuristic material, but the beauty of the design is its simplicity. The design simply constrains the angles to which the material can bend, so that it is flexible only up to a certain limit, providing a sort of loosely rigid structure. This synergy between flexibility and rigidity gives the 4D material its ability to adapt, yet retain its overall characteristics.

Technology is making strides like never before and we will continue to see stuff that tickles the mind’s imagination and satisfies man’s need for the awesome.

 

The net neutrality debate burst into the spotlight when American network provider Comcast was accused of selectively slowing down uploads and downloads to certain internet services. The term net neutrality means ‘equality in internet traffic’. The concept has since become the topic of intense debates in corporate and tech circles, even featuring in President Obama’s campaign speeches.

Net neutrality supporters believe that internet providers should not have control over the internet applications and services used by their subscribers. Proponents believe that internet providers could use control over subscriber content to create a 'false market' for services that would otherwise have been bundled in. Let's consider an example to demonstrate the way this works. Consider an Internet Service Provider (ISP) which offers an internet plan A, with a monthly subscription fee of Rs. 1000 for unlimited internet, with the exception of services like Skype and FaceTime. The same provider also has a plan B, which costs Rs. 1500 and additionally provides these services, thus creating a 'false market' for those services, because in a neutral system, all these services would come bundled in. There have been reports of ISPs blocking certain third-party applications to eliminate competition for services they themselves provide.

Another way providers could exploit their control over the so-called internet 'pipeline' is by striking deals with entertainment and other internet application providers such that their content reaches consumers faster. Let us consider two online gaming providers A and B, and an internet provider C. Company C has agreed to a contract with online gaming service A, where users of C's internet services receive faster speeds when gaming using service A, thereby intentionally pushing service B out of the competition. Such abuse of power by internet providers could endanger the open internet landscape, and is a major cause for concern around the globe.

Let us now take a look at the other side of the argument. Some people believe that we are not yet ready for regulation in favor of net neutrality, stating that the grey areas about the exact definition of net neutrality must first be ironed out, since net neutrality could be taken to mean ‘equal speeds to all subscribers at the same prices’, which is then a violation of free market policy. Many critics believe that there would have to be an effective regulator for internet services, which would efficiently handle all ambiguity arising out of the legal framework for net neutrality in the future.

The debate has raged on for a few years now, and technology gurus are now looking for a middle route, a proverbial ‘third way’, which would solve the problem using a milder approach, one that requires less of a structural overhaul, and hopefully bury this debate permanently.