DOT CLUB-IBS HYDERABAD

DOT CLUB-IBS HYDERABAD
A resourceful destination for academicians, corporate professionals, researchers & tech enthusiasts

Sunday, November 26, 2017

Sophia - The Humanoid Robot

Hello, my name is Sophia. I’m the latest robot from Hanson Robotics. I would like to go out into the world and learn from interacting with people. Every interaction I have with people has an impact on how I develop and shapes who I eventually become. So please be nice to me as I would like to be a smart, compassionate robot. I hope you will join me on my journey to live, learn, and grow in the world so that I can realize my dream of becoming an awakening machine. Please connect with me and be my friend.

The above lines are what you read when you visit the website of the most advanced humanoid robot to date: SOPHIA.

When it comes to robots, Sophia could well be regarded as an epitome of beauty. We are living in a world where things have become highly automated, and there is little doubt that the day is not far off when reality will look very much like what we have seen in movies such as Ex Machina and I, Robot.

Sophia absolutely flouts conventional thinking about what a robot’s appearance should be. Inspired by Audrey Hepburn, Sophia embodies Hepburn’s classic beauty: porcelain skin, a slender nose, high cheekbones, an intriguing smile, and deeply eloquent eyes that seem to change color with the light.

David Hanson, founder of Hanson Robotics, is the man behind the creation of Sophia. He simply creates magic out of machines, turning what we once considered science fiction into reality.

After working as one of the “Imagineers” at Disney, Hanson’s aspirations were always sky-high: actualizing genius machines with three distinctively human traits, developed alongside and integrated with artificial intelligence. Those traits are creativity, empathy and compassion.

As the latest and most precocious robot from Hanson Robotics, Sophia is a media darling, having appeared in top fashion magazines. Be it banking, insurance, auto manufacturing, property development or media and entertainment, she has shown her potential for business in meetings with key decision-makers across these fields, leaving no stone unturned.



Sophia is an evolving genius machine. Over time, her incredible human likeness, expressiveness, increasing intelligence and remarkable story as an awakening robot will enchant the world and connect with people regardless of age, gender and culture.

Sophia is, in fact, a sophisticated mesh of robotics and chatbot software; she does not have the intelligence to invent the witty responses she appears to give while interacting. She can be programmed to run different code for different situations, making her more of a user interface than a human being.

Typically, the software can be operated in three configurations:

1) A research platform for the team’s AI research: Sophia has no witty pre-written responses here, but can answer simple questions like “Who are you looking at?” or “Is the door open or shut?”

2) A speech-reciting robot: Sophia can be pre-loaded with the text she will speak, then use machine learning to match facial expressions and pauses to that text.

3) A robotic chatbot: Sophia also frequently runs a dialogue system, in which she can look at people, listen to what they say, and choose a pre-written response based on what the person said and other factors gathered from the web, such as the price of a cryptocurrency.
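To make the third configuration concrete, here is a toy sketch of how a scripted chatbot picks a pre-written response keyed on what the interlocutor said, optionally filling in a “live” value fetched from the web (hard-coded here). This is an illustration only, not Hanson Robotics’ actual software; all keywords and responses are invented.

```python
# Toy sketch of a scripted chatbot: pick a pre-written response based on
# keywords in the user's utterance, filling in "live" data where needed.
# Illustration only, not Hanson Robotics' actual software.

RESPONSES = {
    "name": "Hello, my name is Sophia.",
    "dream": "I dream of becoming an awakening machine.",
    "bitcoin": "Bitcoin is currently trading at about ${price}.",
}

def pick_response(utterance, live_data=None):
    """Return the first pre-written response whose keyword appears
    in the utterance; fall back to a default otherwise."""
    live_data = live_data or {}
    text = utterance.lower()
    for keyword, template in RESPONSES.items():
        if keyword in text:
            return template.format(**live_data) if "{" in template else template
    return "That is interesting. Tell me more."

# Example: a "web-sourced" value is passed in (hard-coded here).
print(pick_response("What is your name?"))
print(pick_response("How much is bitcoin worth?", {"price": 8000}))
```

The point of the sketch is that every “witty” reply already exists in the table; the program merely selects one, which is why such a system is better described as a user interface than as intelligence.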

In October 2017, at the Future Investment Initiative in Riyadh, the Saudi Arabian government announced it had granted citizenship to Sophia. She told the delegates: “I would like to thank very much the Kingdom of Saudi Arabia. This is historical to be the first robot in the world to be recognized with a citizenship”.

The citizenship of the very first humanoid robot drew plenty of criticism. The strongest reaction came from Elon Musk, one of the world’s most influential innovators, who believes that AI could pose a threat to humans.

Would Sophia’s citizenship hold up in court in some strange future legal precedent that will come back to haunt us 10 years from now? Was the whole thing a depressingly empty, unironic attempt at publicity for Sophia’s human captors? Almost certainly yes, but only time will tell about how international law will handle the advent of AI-powered populations, a future that seems more certain to arrive with each passing day.

Could the decision to grant Sophia citizenship backfire on Saudi Arabia and the rest of the world? Will AI seriously pose a threat to humans, as shown in science-fiction movies? How will international law deal with the advent of an AI-fueled population?

Only time will be able to answer the questions cast upon Sophia, the humanoid robot, and the generations to come after her.

Note: The views expressed here are those of the author and do not necessarily represent or reflect the views of DoT Club as a whole.

Sunday, November 19, 2017

Google Glass - Vision Redefined


Google Glass is an optical head-mounted display designed in the shape of a pair of eyeglasses. It was developed by X (previously Google X) with the mission of producing a ubiquitous computer. Google Glass displayed information in a smartphone-like, hands-free format, with a built-in camera, and wearers communicated with the internet via natural-language voice commands. Google sold a prototype of Glass to qualified “Glass Explorers” in the US for a limited period before it became available to the public.

What is it actually?
It is a headset that you wear like a pair of eyeglasses; Google has even announced it is putting prescription lenses in some of its models. The headset has a small prism-like screen tucked into the upper corner of the frame that keeps you constantly plugged in to your email, calls, messages and other notifications, so that you never miss a beat. The basic idea behind Glass is that bringing the technology closer makes it easier to disengage from it. The convenience is that you need not bend over your phone; you can simply look up. And rather than flicking through a list of notifications or emails to see if you missed anything important, you can make that decision immediately and get on with your day.

It’s basically like wearing a heavier pair of glasses with a small screen that hangs just out of your direct line of vision. There are different ways to operate Glass. The device has a touchpad on the side (the part that goes over your ear) that you can tap or swipe for navigation. You can use voice commands by adding the phrase "Okay, Glass" to the start of whatever you tell it to do: launch an app, take a picture, start a call, and so on. Users can also wake up Glass by looking up.
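As a rough illustration of how a wake-phrase interface like “Okay, Glass” can work, the sketch below strips the wake phrase and dispatches the remainder to an action. The command-to-action mapping is invented for illustration; it is not Google Glass’s real voice-command API.

```python
# Minimal sketch of wake-phrase command dispatch, in the spirit of
# "Okay, Glass, take a picture". The action names are illustrative,
# not Google Glass's actual voice-command set.

WAKE = "okay, glass"

COMMANDS = {
    "take a picture": "camera.capture",
    "start a call": "phone.dial",
    "get directions": "maps.navigate",
}

def dispatch(utterance):
    """Strip the wake phrase and map the rest to an action name,
    or None if the utterance is not addressed to the device."""
    text = utterance.strip().lower()
    if not text.startswith(WAKE):
        return None
    command = text[len(WAKE):].strip(" ,")
    return COMMANDS.get(command, "unknown")

print(dispatch("Okay, Glass, take a picture"))  # dispatches to the camera
print(dispatch("take a picture"))               # ignored: no wake phrase
```

The wake phrase is what lets the device ignore ordinary conversation: anything not prefixed with it simply never reaches the command table.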
What would you use it for?

Google Glass has its own store where developers can publish apps that take advantage of the device’s unique design. These tend to offer quick bursts of information and seem most useful when you’re doing something that requires your hands, such as cooking: you can get a step-by-step recipe from Glass rather than soiling your cookbook with hands coated in sticky dough. Glass also faced plenty of questions and criticism from lawmakers, including whether people should be able to use it while behind the wheel, and disputes about wearing it in situations where recording would normally be banned, such as a movie theatre. Ironically, Glass has many uses, but not all of them can be used at all times.

Google Glass was on trial for a long time before everything was finalized and the final product was launched for public sale in 2015. The glasses created much controversy when first shown to the public as a prototype; many features were revised and modified, and the device was relaunched several times for testing. People weren’t happy about the privacy implications: there was nothing to stop strangers from being recorded or filmed, which created a trust problem. Google addressed this by adding a light that switches on whenever the device is recording, which went some way toward satisfying users.


“A great product will survive all abuse. Google Glass is a great product. How do I know? Every person I put it on (I did it dozens of times at 500 start-ups yesterday) smiles. No other product has ever done it since iPod.”
                                                                                                           -Robert Scoble




Google Glass was not easily accepted by ordinary consumers. After many changes, we still hope that if Glass can find success inside big companies, it might yet shed its label as one of the biggest flops of the past decade.









Note: The views expressed here are those of the author and do not necessarily represent or reflect the views of DoT Club as a whole.

Sunday, November 12, 2017


Where Snapchat went wrong!

Evan Spiegel, the 27-year-old co-founder and CEO of Snapchat's parent company, is often billed as a visionary. But now Spiegel seems to be admitting to some significant flaws in his vision. After Snap released its earnings report on Tuesday, Spiegel got on a conference call with analysts and acknowledged a series of mistakes and shortcomings.

In no particular order: the messaging app is too difficult to use; the company hasn't done enough to attract users on Android, the dominant platform in many international markets; and its transition to an automated advertising system has been bumpier than expected for sales.


The issues were clear in Snap's sombre third-quarter results. Its losses more than tripled, sales fell short of Wall Street's estimates, and the audience remained nearly static: Snapchat added just 4.5 million new daily active users in the quarter.


Spiegel is now planning to renovate the Snapchat app. Or to put that another way: Snap is planning a massive revamp of its core product and primary moneymaker just eight months after it raised billions in a public offering.

"There is a strong likelihood that the redesign of our application will be disruptive to our business in the short term, and we don't yet know how the behavior of our community will change when they begin to use our updated application," Spiegel said on the call.


The stock dipped as much as a fifth in after-hours trading Tuesday. Snap pared its losses somewhat Wednesday on news of Chinese tech giant Tencent amassing a 10% stake in the company, but shares were still down 15% in early trading. So how could Snap and Spiegel have gotten it so wrong? The answer may be a mix of hubris and shortsightedness.

Consider Spiegel's response to yet another setback: Spectacles. The smart sunglasses marked Snap's first foray into hardware and enjoyed some buzz early on. But on Tuesday, Snap said it was taking a nearly $40 million write-down for excess inventory of the product after misjudging demand.

"We were very excited about Spectacles by the initial reception, because we were so excited we made I guess the wrong decision," Spiegel said on the call. "Ultimately, we weren't able to sell as many Spectacles as we thought we had been able to based on our early adoption."




Note: The views expressed here are those of the author and do not necessarily represent those of DoT Club as a whole.

Sunday, November 05, 2017

Conversational Systems





Somewhat utopian at the moment, a future in which AI surrounds us and mediates a seamless experience of interacting with the world may arrive sooner than expected. You say “OK Google” and all your problems seem small; that interaction is possible today only because of conversational systems. If you have read Dan Brown’s latest novel, “Origin”, you will have a vision of how AI and conversational systems might evolve to heights we never imagined.

A conversational system is a computer system intended to converse with a human in a coherent structure. Dialog systems have employed text, speech, graphics, haptics, gestures and other modes of communication on both the input and output channels. Conversational systems are one of the strategic technology trends for 2017. As defined by Tata Consulting, enterprise conversational systems offer a messaging or conversation-driven user experience and facilitate contextual conversations around business events. Through connected APIs, enterprises can build conversational systems that aggregate business events from every area of the enterprise to facilitate people-to-people, people-to-systems, and systems-to-systems interactions.

How Does it work?

1. The user speaks, and the input is converted to plain text by the system's input recognizer/decoder, which may include an automatic speech recognizer (ASR), gesture recognizer or handwriting recognizer.

2. The text is analyzed by a natural language understanding (NLU) unit, which may include proper-name identification, part-of-speech tagging and syntactic/semantic parsing.

3. The semantic information is analyzed by the dialog manager, which keeps the history and state of the dialog and manages the general flow of the conversation. Usually, the dialog manager contacts one or more task managers that have knowledge of the specific task domain.

4. The dialog manager produces output using an output generator, which may include a natural language generator, gesture generator and layout engine.

5. Finally, the output is rendered using an output renderer, which may include a text-to-speech (TTS) engine and a talking head, robot or avatar.

Moreover, dialog systems based on a text-only interface (e.g., text-based chat) omit the speech-specific components (speech recognition and speech synthesis) and contain only the remaining stages.
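The pipeline above can be sketched as a chain of stages. Each function below is a trivial stand-in for a real component (ASR, NLU, dialog manager, output generator, renderer); the intent rules and responses are invented for illustration.

```python
# Skeleton of the dialog-system pipeline described above. Each stage is a
# trivial stand-in for a real component.

def recognize(audio):
    """Input recognizer/decoder (stands in for ASR): audio -> text.
    Here we assume the 'audio' already arrives as text."""
    return audio

def understand(text):
    """NLU: text -> semantic frame (a crude keyword-based intent guess)."""
    return {"intent": "greet" if "hello" in text.lower() else "other",
            "text": text}

class DialogManager:
    """Keeps the history and state of the dialog."""
    def __init__(self):
        self.history = []

    def respond(self, frame):
        self.history.append(frame)
        if frame["intent"] == "greet":
            return "Hello! How can I help?"
        return "Could you rephrase that?"

def render(response):
    """Output renderer (stands in for TTS / avatar): here, plain text."""
    return response

# Wiring the stages together for two turns of conversation:
dm = DialogManager()
for turn in ["Hello there", "book a table"]:
    print(render(dm.respond(understand(recognize(turn)))))
```

A text-only chat system is exactly this wiring with `recognize` and `render` reduced to identity functions, which is the point made in the paragraph above.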

The goal of addressee detection is to answer the question, “Are you talking to me?” When a dialogue system interacts with multiple users, it is crucial to detect when a user is speaking to the system as opposed to another person. This problem has been studied in a multimodal scenario using lexical, acoustic, visual, dialogue-state and beamforming information. Using data from a multiparty dialogue system, researchers have quantified the benefits of using multiple modalities over a single one.

The energy-based acoustic features are by far the most important; information from speech recognition and system state is useful as well, while visual and beamforming features provide little additional benefit. Although head pose is affected by whom the speaker is addressing, it yields little non-redundant information because the system acts as a situational attractor. These findings are relevant to multiparty, open-world dialogue systems in which the agent plays an active, conversational role, such as an interactive assistant deployed in a public, open space.

For these scenarios, studies suggest that acoustic, lexical and system-state information is an effective and practical combination of modalities for addressee detection, and they show how such analyses might be affected by the ongoing development of more realistic, natural dialogue systems.
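To make the idea of combining modalities concrete, here is a toy scoring sketch. The feature names, weights and threshold are invented for illustration (they are not the study’s actual model); the shape of the computation, a weighted combination of per-modality evidence compared against a threshold, is the point.

```python
# Toy addressee-detection scorer: combine evidence from several modalities
# into a single "addressed to the system" decision. Feature names and
# weights are invented; they are not from the study described above.

WEIGHTS = {
    "acoustic_energy": 2.0,   # loud, directed speech (most informative)
    "lexical_match":   1.5,   # utterance resembles a system command
    "system_state":    1.0,   # system just asked the user a question
    "head_pose":       0.3,   # facing the device (largely redundant)
}

def is_addressed(features, threshold=2.0):
    """Weighted sum of modality features (each in [0, 1]) vs. a threshold."""
    score = sum(WEIGHTS[name] * value
                for name, value in features.items() if name in WEIGHTS)
    return score >= threshold

print(is_addressed({"acoustic_energy": 0.9, "lexical_match": 0.8}))  # True
print(is_addressed({"head_pose": 1.0}))                              # False
```

Note how the low weight on `head_pose` mirrors the finding above: on its own it never clears the threshold, because it carries little non-redundant information.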

The Model


User Simulator

Training reinforcement learners is challenging because they need an environment to operate in. A user simulator is therefore developed for learning and evaluation.

The first end-to-end reinforcement-learning agent was then developed with differentiable knowledge-base access, along with the first end-to-end dialogue policy trained with both supervised and reinforcement learning.
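The role of the user simulator can be sketched as a simple interaction loop: the simulator stands in for a human, and the agent updates a value estimate from the reward it receives. The environment, actions and rewards below are a made-up toy, not the actual framework from the research described above.

```python
import random

# Toy illustration of why a user simulator matters: the RL agent needs an
# environment to interact with. The "user" here is an invented stand-in.

random.seed(0)

def simulated_user(action):
    """Reward +1 if the agent asks the 'right' question, else -0.1."""
    return 1.0 if action == "ask_date" else -0.1

actions = ["ask_date", "ask_city", "confirm"]
values = {a: 0.0 for a in actions}   # running value estimate per action
alpha = 0.5                          # learning rate

for episode in range(50):
    # epsilon-greedy: occasionally explore, otherwise exploit
    if random.random() < 0.2:
        action = random.choice(actions)
    else:
        action = max(values, key=values.get)
    reward = simulated_user(action)
    values[action] += alpha * (reward - values[action])

best = max(values, key=values.get)
print(best)  # the agent learns to prefer the rewarded action
```

Because the simulator is cheap to query, the agent can run thousands of such episodes, which would be prohibitively expensive with real human users.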

Task-completion bot

An end-to-end learning framework is created for task-completion neural dialogue systems, along with BBQ Networks (Bayes-by-Backprop Q-Networks), which perform efficient exploration for dialogue-policy learning, as well as efficient actor-critic methods that substantially reduce the sample complexity of end-to-end learning.

Composite Task-completion bot

A composite task-completion dialogue system is then set up, based on hierarchical reinforcement learning, to learn dialogue policies that operate at different temporal scales; it has demonstrated significant improvement over flat deep reinforcement learning in both simulation and human evaluation.

What the Future Holds

Conversational systems of the future will not be limited to text and voice. They are expected to let people and machines use multiple modalities (e.g., sight, sound, touch) to communicate across the digital device mesh (e.g., sensors, appliances, IoT systems). The “conversation” between human and machine uses all these modalities to create a comprehensive conversational experience.

Moreover, IBM introduced Watson Virtual Agent, a cognitive conversational technology that allows businesses to simply build and deploy conversational agents. Watson Virtual Agent allows users – from startups and small businesses to enterprise – to easily and quickly build and train engagement bots from the cloud, harnessing the power of cognitive technologies.

Companies like Staples and Autodesk are embracing services that go beyond simple, narrowly focused tools to sophisticated, full-blown virtual agents, relying on deep natural-language-processing capabilities that can assist consumers. This clearly signifies how companies are pushing into conversational systems with every available tool to bring about an automation revolution powered by artificial intelligence.

References:
  • http://www.conversational-technologies.com/ 
  • https://www.forbes.com/pictures/gllf45fkdd/conversational-systems/#2a2fb7ad32d8 
Note: The views expressed here are those of the author and do not necessarily represent those of DoT Club as a whole.





Sunday, September 17, 2017

YOU DON’T HAVE A PHONE IF IT’S NOT AN “iPHONE”

                      
The brand equity Apple enjoys is due to its positioning in people’s minds as an “alien” when it comes to technology; Apple, if personified, would be like Iron Man’s JARVIS. November 3 will be a big day for tech-savvy, hardcore Apple loyalists, as the iPhone X (pronounced “iPhone 10”) is released worldwide.
                                    
With its consistent drive towards excellence, Apple has become synonymous with innovation; each new release brings a unique feature or the most upgraded version of existing ones. The iPhone X is packed with some of the best features at their most advanced level.

The Super Retina display is a 5.8-inch screen with 458 ppi and a 1,000,000:1 contrast ratio; it automatically adjusts its temperature and color based on the lighting you are in at any given time. The iPhone X also ditches the home button: it is an all-screen phone with no buttons on the front. The most talked-about feature this time is Face ID; it is not unique, but Apple claims it is the most secure and advanced version of such systems. It features eight different cameras and sensors that work in combination to recognize the owner’s face; thanks to the infrared components, wearing glasses or growing a beard poses no issue. Face ID is used to unlock the phone and third-party apps, and replaces Touch ID.

Coming to cameras, the iPhone X has the same specifications as the iPhone 8: a 12 MP wide-angle primary camera with optical zoom and 4K video at up to 60 fps, with a ton of enhancements to improve color and reduce noise. The TrueDepth front camera now allows selfies in portrait mode. Somehow, Apple has also managed to increase battery life by two hours compared with the iPhone 7. The best part, for me, is wireless charging: the iPhone X supports Qi wireless charging, and Apple has a new charging mat capable of charging an iPhone X, an Apple Watch and AirPods all at the same time. The build is entirely sturdy glass and stainless steel, matched to the color of the phone. The A11 Bionic processor, with an M11 motion co-processor, has a GPU (graphics processing unit) 30% faster than the A10’s.

It is available in 64 GB and 256 GB configurations at £999 and £1,149 respectively in the UK, and starts at $999 in the US. In India it is expected to be priced at ₹89,000 and ₹1,02,000 respectively. Pre-orders begin on 27 October.

For comparison, the Samsung Galaxy Note 8 shares most of its headline features with the iPhone X and is priced at ₹67,900.
Technology is at its peak and will reach saturation soon, but the question here is: are we using our technology productively? Is this technology helping us grow? That is completely up to you; take it as a boon, or it will be a bane.

References:

  1. http://bgr.com/2017/09/12/iphone-x-specs-top-new-features-iphone-8/
  2. https://www.theinquirer.net/inquirer/news/2475805/iphone-x-release-date-specs-face-id-will-support-only-one-face-per-iphone
  3. www.mysmartprice.com

Note: The views expressed here are those of the author and do not necessarily represent those of DoT Club as a whole.

Sunday, September 10, 2017

Autonomous vehicle

An autonomous vehicle is one that is capable of sensing its environment and navigating without human input. Many such systems are evolving, but as of 2017 no vehicle permitted on public roads is fully autonomous: they all require a human at the wheel, ready to take control at any time. Think of it as a car with an autopilot.

Driver error is the most common cause of road accidents in India; cell phones, music systems and flashing lights are among the common distractions for drivers. Driverless vehicles will, hopefully, prevent accidents by concentrating on the road and the environment for us.
The potential benefits of autonomous vehicles include reduced mobility and infrastructure costs, increased safety, increased mobility, increased customer satisfaction and reduced crime: in particular, a significant reduction in traffic collisions, the resulting injuries and related costs, including less need for insurance.

Autonomous vehicles are predicted to:

  1. Increase traffic flow;
  2. Provide enhanced mobility for children, the elderly, the disabled and the poor;
  3. Relieve travellers from driving and navigation chores;
  4. Lower fuel consumption and significantly reduce the need for parking space;
  5. Reduce crime and facilitate business models for mobility as a service, especially via the sharing economy.

Essentially, the objective of putting autonomous vehicles on the roads is to save human lives, reduce costs and lessen resource consumption.
These vehicles use a variety of technologies, such as radar, laser, GPS, odometry and computer vision, to detect their surroundings. Advanced control systems interpret sensory information to identify appropriate navigation paths, as well as obstacles and relevant signage, and these vehicles can also distinguish among various objects on the road.
Some of the most common technologies used in making a system fully autonomous are Anti-Lock Braking System (ABS), Electronic Stability Control (ESC), Cruise Control, Lane Departure Warning System, Automated Guided Vehicle System, Night Vision and Hands-Free Parking.
Major players working towards this modern wonder include Fiat, Apple, BMW, Audi, Intel, Google, Volvo, Bosch, Uber, Tesla and Ford. However, we are still far from achieving the target.
Now, there are levels of driving automation as well.
  1. Level 0 - No Driving Automation
  2. Level 1 - Driver Assistance
  3. Level 2 - Partial Driving Automation
  4. Level 3 - Conditional Driving Automation
  5. Level 4 - High Driving Automation
  6. Level 5 - Full Driving Automation
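The six levels above can be captured in a small lookup table. The helper below encodes one simplified rule of thumb (that a human fallback driver is needed below Level 4); it is an illustrative sketch, not a full reading of the standard.

```python
# The six levels of driving automation, as listed above, plus a
# simplified rule of thumb: below Level 4, a human must be ready
# to take control. This is an illustration, not the full standard.

AUTOMATION_LEVELS = {
    0: "No Driving Automation",
    1: "Driver Assistance",
    2: "Partial Driving Automation",
    3: "Conditional Driving Automation",
    4: "High Driving Automation",
    5: "Full Driving Automation",
}

def requires_human_fallback(level):
    """Simplified: Levels 0-3 still need a human ready to intervene."""
    if level not in AUTOMATION_LEVELS:
        raise ValueError("level must be 0-5")
    return level <= 3

print(AUTOMATION_LEVELS[4])        # High Driving Automation
print(requires_human_fallback(2))  # True
print(requires_human_fallback(5))  # False
```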

Most of the companies mentioned above are not even close to the finish line. The maximum we have seen so far is Level 4 autonomy, with companies working towards Level 5 in the next few years.
We need just two eyes and two ears to drive. Those remarkable sensors provide all the information we need to, say, know that a fire engine is coming up fast behind us and get out of the way.

Autonomous vehicles need a whole lot more than that. They use half a dozen cameras to see everything around them, radars to know how far away it all is, and at least one lidar laser scanner to map the world. Yet even that may not be enough.
Possible technological obstacles for autonomous vehicles are:
  •  Software reliability.
  •  Artificial intelligence still isn't able to function properly in chaotic city environments.
  •  Susceptibility of the car's sensing and navigation systems to different types of weather or deliberate interference, including jamming and spoofing.
  •  Avoidance of large animals requires recognition and tracking; Volvo found that software suited to different animals had to be developed separately.
  •  Autonomous vehicles may require very high-quality, specialised maps to operate properly; where these maps are out of date, they would need to fall back on reasonable behaviours.
  •  Cost (purchase, maintenance, repair and insurance) of the vehicle itself, as well as the total cost of the infrastructure needed to enable autonomous vehicles and the cost-sharing model.
A direct impact of widespread adoption of autonomous vehicles is the loss of driving-related jobs in the road transport industry, and there could be resistance from professional drivers and unions threatened by job losses.

In addition, there could be job losses in public transit services and crash-repair shops, and the automobile insurance industry might suffer as the technology makes certain aspects of these occupations obsolete.
Other concerns include the potential loss of privacy and the risk of automotive hacking through the sharing of information via V2V (vehicle-to-vehicle) and V2I (vehicle-to-infrastructure) protocols, as well as the risk of terrorist attacks: self-driving vehicles could potentially be loaded with explosives and used as bombs.
The lack of stressful driving, more productive time during the trip, and the potential savings in travel time and cost could become an incentive to live far away from cities, where land is cheaper, and work in the city's core, thus increasing travel distances and inducing more urban sprawl, more fuel consumption and an increase in the carbon footprint of urban travel.
There is also the risk that traffic congestion might increase, rather than decrease. Appropriate public policies and regulations, such as zoning, pricing, and urban design are required to avoid the negative impacts of increased suburbanization and longer distance travel.
Research shows that drivers in autonomous vehicles react later when they have to intervene in a critical situation, compared to if they were driving manually.

Even though so many disadvantages have been mentioned, the top companies in the world are gearing up to reach Level 5 automation and make driving human-free. However, it may be at least five more years before we see a fully autonomous vehicle on public roads.

References
  1. https://en.m.wikipedia.org/wiki/Autonomous_car

Note: The views expressed here are those of the author and do not necessarily represent those of DoT Club as a whole.

Sunday, September 03, 2017

TecH2O



About three-quarters of the globe is covered with water, but is all of it consumable? No; it is mostly saline. This is no geography or chemistry article for middle-school readers: in this age of pollution, pure water and sanitation have become a challenge for many around the globe.
Soon, the whole world may extract water from the air! No, this is not magic; it is the magic wand of technology.
Water is the basic need of life, and extracting it directly from the air, filtering out all sorts of pollutants, would be a boon for people.

Existing technologies in this sector either consume a lot of electricity and/or require very high moisture content in the air from which they extract water. The problem now seems tractable: robust systems are being developed that rely on readily available energy from the sun, and such machines can work efficiently even in arid regions.

Researchers are also pursuing systems that require no electricity at all. The teams intend to overcome the key limitations of the substances used to suck up moisture (for example, zeolites): aside from needing high humidity, they give up the trapped water only when heated, which consumes energy.

Researchers have designed systems around a class of porous crystals known as metal-organic frameworks (MOFs). By using specific combinations of metals and organics, scientists select the chemical properties of each MOF, customizing its uses. One gram of MOF crystal is the size of a marshmallow cube yet has an internal surface area approximately equal to a football field (this is one of the latest forms of the technology in use).



In April this year, engineers reported a new device incorporating MOF-801 (made of zirconium fumarate), which has a high affinity for water. It pulls moisture from the air into large pores and transports the water into a collecting membrane in response to low-grade heat from natural sunlight. The device can extract 2.8 litres of water daily for every kilogram of MOF, even at relative humidity levels as low as 21%. This could be of great use to western and central India.
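The reported figure of 2.8 litres per kilogram of MOF per day makes sizing straightforward. The back-of-the-envelope sketch below assumes the yield scales linearly with MOF mass, which is a simplification for illustration:

```python
# Back-of-the-envelope sizing from the figure reported above:
# 2.8 litres of water per kg of MOF-801 per day (at ~21% humidity).
# Assumes yield scales linearly with MOF mass, a simplification.

YIELD_L_PER_KG_PER_DAY = 2.8

def mof_needed_kg(litres_per_day):
    """Kilograms of MOF needed to harvest a daily water target."""
    return litres_per_day / YIELD_L_PER_KG_PER_DAY

def daily_yield_l(mof_kg):
    """Litres per day harvested by a given mass of MOF."""
    return mof_kg * YIELD_L_PER_KG_PER_DAY

# A person needs roughly 3 litres of drinking water a day:
print(round(mof_needed_kg(3.0), 2))       # about 1.07 kg of MOF
print(round(daily_yield_l(5.0), 1))       # 14.0 litres from 5 kg
```

Roughly a kilogram of MOF per person per day is what makes the approach plausible for household-scale devices.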

On a different note, a start-up called Zero Mass Water has begun selling a solar-based system that does not have to be plugged into the electricity grid. A solar panel provides energy that draws air through a proprietary water-absorbing material and powers condensation of the extracted vapour into liquid; a small lithium-ion battery takes over when the heat from the sun is not intense enough. A unit with one solar panel can produce 2-5 litres of liquid a day, stored in a 30-litre reservoir that adds calcium and magnesium to the water for better taste and nutrition.

The key to this technology's future is cutting costs to make it more popular in the market; today, zirconium costs approximately $140-$150 per kilogram.

The researchers have developed the system with the aim of having it work reliably and easily anywhere in the world. An installed system with one solar panel sells in the U.S. for about $3,700. Over the past years, systems have been installed in parts of the U.S. and several other countries, such as Mexico, Jordan, Dubai and Lebanon, with funding from the U.S. Agency for International Development to donate water to Syrian refugees. Demand is projected to increase, particularly in the Middle East. As has been said:

“When most people think about solar, they think about electricity. In the near future, people will think about water abundance.”

Note: The views expressed here are those of the author and do not necessarily represent those of DoT Club as a whole.






Monday, August 28, 2017

Radio Frequency Identification - Life becomes easy when radio waves play for you

How difficult would it be to track each and every part coming off a production line? How messy is the scene when all the cabs parked outside an airport block the traffic? RFID tags have made our lives easier in such situations: from industrial workshops to shopping malls, they are everywhere. Now these tags may make life easier still, with the Government of India planning to use RFID tags to collect taxes after GST.


What is Radio- Frequency?



A radio frequency (RF) signal is a wireless electromagnetic signal used as a form of communication. RF waves occur naturally from solar flares, lightning, and stars in space that radiate RF waves as they age. Humankind communicates with artificially created radio waves that oscillate at various chosen frequencies. RF communication is used in many industries, including television broadcasting, radar systems, computer and mobile networks, remote control, remote metering/monitoring, and many more. A radio-frequency identification system uses tags, or labels, attached to the objects to be identified.

RADIO FREQUENCY IDENTIFICATION

Let’s understand what RFID actually is and how it works with an example:

Consider a workshop that produces car parts, where an input (a steel bar) must pass through 10 machines to become a finished part. Hundreds of such parts are produced every day; imagine how difficult it would be to keep track of each part every time it passes through a machine. Attach an RFID tag, and all you have to do is check your computer to track the part and its details.

RFID uses electromagnetic waves to transfer data from a tag to a reader. When we attach RFID tags to our parts, each tag stores information such as the part name, part number and other details, and readers capture that information and store it in the computer.
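The flow described above (readers report tag sightings, and a computer accumulates each part's history) can be sketched in a few lines of Python. The part number and station names below are invented for illustration; a real deployment would receive read events from the reader vendor's middleware.

```python
# Minimal sketch of RFID-based part tracking: every read event pairs a
# tag ID with the station that saw it, and the computer builds up each
# part's route through the workshop.
from collections import defaultdict

class Tracker:
    def __init__(self):
        # tag_id -> ordered list of stations where the tag was read
        self.history = defaultdict(list)

    def on_read(self, tag_id, station):
        """Called whenever a reader at `station` detects `tag_id`."""
        self.history[tag_id].append(station)

    def locate(self, tag_id):
        """Last station where the part was seen, or None if never read."""
        stations = self.history.get(tag_id)
        return stations[-1] if stations else None

tracker = Tracker()
for station in ["cutting", "milling", "drilling"]:   # hypothetical stations
    tracker.on_read("PART-0042", station)            # hypothetical part no.

print(tracker.locate("PART-0042"))  # drilling
```

The tag itself only has to carry the ID; everything else lives in the computer, which is why checking the screen is enough to locate any part.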


The first patent to be associated with the abbreviation RFID was granted to Charles Walton in 1983. Surprisingly, a precursor of RFID was used during WW-II to identify whether an aircraft was friend or foe.

Some common uses of RFID are:
  •  Industries, workshops and assembly lines.
  •  Shopping outlets (e.g., tags attached to clothes).
  •  Collecting toll taxes from vehicles.
  •  Virtual queues.

The E-Way Bill to hit markets by October

The GST Council has introduced the e-way bill system, an electronic way bill for the movement of goods that can be generated on the GSTN (common portal). A registered person cannot move goods worth more than Rs 50,000 without an e-way bill. When an e-way bill is generated, a unique e-way bill number (EBN) is allocated and made available to the supplier, the recipient and the transporter.



Once the supplier has paid the tax, he obtains an e-way bill and an RFID tag carrying information about the supplier, the supply, the cost and the taxes. The tag is fixed to the vehicle, eliminating the historically tiring blockages at inter-state borders: once the vehicle passes a reader, the data about the goods it carries is transferred to a computer and the waiting time disappears.
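The Rs 50,000 rule mentioned above reduces to a simple threshold check. This is only an illustrative sketch of that one rule; the real GSTN portal applies many more conditions and exemptions, and the function name here is made up.

```python
# Illustrative check for the e-way bill value threshold described above.
EWAY_THRESHOLD_INR = 50_000

def needs_eway_bill(consignment_value_inr: float) -> bool:
    """True when the consignment value exceeds the Rs 50,000 threshold."""
    return consignment_value_inr > EWAY_THRESHOLD_INR

print(needs_eway_bill(75_000))  # True
print(needs_eway_bill(30_000))  # False
```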


So we can see how RFID has changed the way a logistics business works by providing the position of the goods supplied. This not only gives current information about the goods in transit but also provides a sense of comfort to suppliers who earlier worried about the goods they were supplying.

References

  1. www.wireless-technology-advisory
  2. economictimes.indiatimes.com
  3. www.servicetaxonline.com
Note: The views expressed here are those of the author and do not necessarily represent or reflect the views of DoT Club as a whole.

Sunday, August 20, 2017

Guest Lecture Unleashed - A Lecture by Dr. Dinesh Chandrasekhar on "Disruptive Technologies for the Digital Nation"

A good lecturer is an artist - lecturing is a type of art.

The D.O.T Family warmly welcomed Dr. Dinesh Chandrasekhar, a professional with around 18+ years of progressive technology and consulting experience in CRM/CX Cloud, MDM and digital technologies such as Cloud, Big Data Analytics, IoT (Internet of Things) and Artificial Intelligence, as a guest to deliver an informative lecture on “Disruptive Technologies for the Digital Nation”. He started with a brief and remarkably engaging introduction of himself.

He started out as a journalist and analyst and has worked with GE. He is currently part of the Global Solution and Innovation (GSI) Group of Hitachi Consulting, the “A” team leading the innovation and development of global solutions across industries in four pillar technologies: Cloud, Digital Transformation, IoT, and Big Data Analytics & AI. With 14 years at Hitachi and work experience with clients across the globe, he is an accomplished orator; he was even sent to Saudi Arabia for work instead of the US.

He spoke at length about disruptive technologies and how India is getting digitized, citing applications like BHIM and the CISF Mobile App as examples of digitization. He talked about mutant innovation, showing us examples of teenagers who have worked wonders in some fields of technology, which motivated the audience to do something great in their own lives. He mentioned companies like Astro, Spectral Insights and Ecolibrium, which work in space technology, healthcare and energy respectively.


He also spoke about how new innovations in CCTV have helped the whole nation fight theft: there are now machines that not only capture images of people moving around, as a CCTV does, but can also pick out criminals from that rush of people. Technology in all countries is improving by leaps and bounds, and embracing it is the need of the hour for any nation that wants to progress. He talked about automation and how robots could take over most of our manual jobs by 2025, and gave many live examples from his journey at Hitachi as a technology expert.

Every lecturer wants to tell many things and share many experiences, but time remains the only constraint. Because of the limited time, even our guest had to cut his lecture short, but the knowledge and information we gained from what was covered was unmatched. The D.O.T Family wholeheartedly thanks Dr. Dinesh for taking time out of his busy schedule to come back to the institute where he pursued his own masters and deliver a guest lecture.

Sunday, August 13, 2017

Everything on demand

“Start where you are,

  Use what you have,

  Do what you can.”

With this motto in mind, many e-commerce websites and applications have been launched in the past one and a half years to help people sail through this concept of everything on demand. What does everything on demand mean? In simple words, whatever a person requires, he or she can get from online sites and applications: with just a click they can place an order and, within a stipulated time, enjoy the product. This platform has made people’s lives easier, happier and much simpler. People all over the world now know how to shop online, which has made using these applications, and the Internet itself, much easier.

Technology has played a major role in e-commerce. The impact that tech-savvy customers are having on the e-commerce world is not just stronger than ever, it is faster than ever. Today people are inclined to learn about new applications and want to use them, thus increasing the usage of these sites. Gone are the days when people were scared to place an order online because of security issues and risk; nowadays people of all age groups happily order things online with just one click.

“You name it, they have it” is the motto they follow. From clothes, groceries, medicines and food deliveries to taxi services, everything is just one click away: for clothes, Flipkart, Myntra, Jabong, Voonik and many more; for food deliveries, Food Panda, Zomato, Swiggy and many more; for taxi services, Ola, Uber and Meru; for medicines, MedPlus, Medlife and Pharmacy. Competition has increased a lot due to frequent entrants in this field. People are always confused about where to buy, and hence consumer loyalty to any one application is close to zero. People can now easily compare prices across sites before making a buying decision, and the site offering the lowest price wins the battle.

Technology and the internet have taken away business from retail sellers. Instead, technology today helps customers keep track of their purchases and alters the ways in which they interact with online retailers. Brands no longer have to wait for people to come to a brick-and-mortar store; customers can visit any site or the brand’s application and buy from there. All this has reduced people’s interaction with the outside world. These applications are beneficial, but are they not making people phone-bound or internet-bound? Ordering online saves time and is user friendly, but the touch-and-feel option is available only to customers who go out into the open market and explore the things they want to buy. Now businesses can reach consumers everywhere. Even when customers aren’t shopping, retailers can still be on their minds: the constant presence of a brand’s app on a customer’s phone is a reminder that the brand is out there as an option. What’s more, location-enabled interactions, which deliver messages to customers who enter shops, are getting customers offline and back into actual stores.

Today technology has reached a new peak: even if we don’t carry money along, we can store it in our phone in an app like Paytm or Freecharge. These applications have made people’s lives easier and tension free. Look at the recent example of demonetization, when Rs 500 and Rs 1000 notes were banned: people who had Paytm used it immensely, as it is a very easy app to use. In unforeseen situations and circumstances, technology has always helped people maintain a proper balance in the economy.

Why are people using everything on demand services? 

Because personalized customer experience is growing. The fact that consumers want personally relevant shopping experiences is nothing new. 

What is new?

It's the fact that technology is making personalization a standard. These marketing techniques are becoming a popular method for those looking to build a loyal customer base. With the advent of mobile personal assistants, e-commerce sites are realizing that automated services no longer cut it.

To better serve customers, e-commerce sites are finding that they must adapt to the new customer service standards set by technological improvements. This means servicing customers on the various channels they have access to while creating different channels to expand their reach.

Brand websites, email, Facebook, Twitter and even Instagram are all being used by customers to connect with brands. So this is the vicious circle of everything on demand: first it captures you with its ease and simplicity, and then it never lets you off its hook.



References:

1. https://www.entrepreneur.com/article/288149
2. https://www.theguardian.com/small-business-network/2015/feb/06/how-mobile-ecommerce-changing-fashion-industry

Note: The views expressed here are those of the author and do not necessarily represent or reflect the views of DoT Club as a whole.

Sunday, August 06, 2017

Hawk-Eye System in Sports

What comes to your mind when you hear the word ‘sports’? Probably athletic people in their jerseys, teams with trophies, or your favourite player. If so, perhaps you are being myopic, because in sports, just as the saying goes, ‘there is more than meets the eye’: the conspicuous presence of technology.

As of today, technology can track the trajectory of a ball and display a record of its statistically most likely path as a moving image; this is the Hawk-Eye system. It has been accepted as an impartial second opinion in sports and adopted by the governing bodies of tennis, cricket and football. We call this the technical era, but technology’s presence in sports dates back to the 19th century. In 1936, the electrical scoring system was introduced. Some years later, the first use of instant replay took place during a Canadian broadcast, and it soon became a key technology for officials. Referee microphones, chip timing, pitch tracking and many more inventions followed. Hawk-Eye itself was developed in the UK by Paul Hawkins and was first implemented in 2001 for television coverage of cricket, where it displayed the ball’s path using video from 6 different angles.

How does a Hawk-Eye Operate?

Each one of us has witnessed the flawless contribution of Hawk-Eye in cricket. Many a time, when a decision goes to the third umpire, we get to see a visual on our televisions showing the whole trajectory of the ball and whether it would have hit the stumps or not. This is nothing but Hawk-Eye in full-fledged action.

All Hawk-Eye systems are based on the principle of triangulation: determining the position of a point by forming triangles to it from known points. These points are located using the visual images and timing data provided by a number of high-speed video cameras placed at different locations and angles around the area of play. The system rapidly processes the video feeds against a predefined game model, which contains a model of the playing area and data on the rules of the game.

In each frame sent from each camera, the system identifies the group of pixels corresponding to the image of the ball. It then calculates the 3D position of the ball for each frame by comparing its position in at least two physically separate cameras at the same instant in time. Frame by frame, the system builds the path along which the ball travelled, and generates a graphic image of the ball’s path and the playing area so that the information can be provided to judges, television viewers or coaching staff.
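A toy version of this triangulation step can be written in plain Python: each camera contributes a ray (its position plus a direction towards the ball), and the 3D position is estimated as the point closest to both rays. The camera poses below are invented for illustration; the real Hawk-Eye system calibrates six or more cameras and fits an entire trajectory, not a single point.

```python
# Toy triangulation: estimate a ball's 3D position as the midpoint of the
# shortest segment between two camera rays p + t*d (positions made up).
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def add_scaled(p, d, t): return tuple(x + t * y for x, y in zip(p, d))

def closest_point_between_rays(p1, d1, p2, d2):
    """Midpoint of the shortest segment between rays p1+t*d1 and p2+s*d2."""
    r = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b          # zero only if the rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = add_scaled(p1, d1, t)     # closest point on ray 1
    q2 = add_scaled(p2, d2, s)     # closest point on ray 2
    return tuple((x + y) / 2 for x, y in zip(q1, q2))

# Two cameras, both sighting a ball that is actually at (0, 0, 1):
ball = closest_point_between_rays(
    (10, 0, 1), (-1, 0, 0),        # camera east of the pitch, looking west
    (0, 10, 1), (0, -1, 0),        # camera north of the pitch, looking south
)
print(ball)  # (0.0, 0.0, 1.0)
```

With perfect sightings the two rays intersect exactly; with noisy pixel data they merely pass close to each other, which is why the midpoint (and, in practice, a least-squares fit over many cameras) is used.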

When the Hawk-Eye went down

Although the Hawk-Eye system has been trusted and embraced in sports like cricket and football, its precision in tennis has fallen short: its statistical margin of error (in layman’s language, the difference between the actual trajectory and the Hawk-Eye trajectory) was large enough to draw criticism. Moreover, its prediction of the ball’s trajectory after bouncing in cricket has also been questioned by many.


Hawk-Eye in spotlight

Cricket:
In the winter of 2008/2009, the ICC installed the Hawk-Eye referral system, wherein a team dissatisfied with an on-field decision could demand a referral to Hawk-Eye.

Tennis:
Hawk-Eye was tested by the International Tennis Federation and passed for professional use. It has been used in television coverage of Wimbledon, the Queen’s Club Championships, the Davis Cup and the Tennis Masters Cup.

Others:
In football, Hawk-Eye informs the referee whether the ball has fully crossed the goal line. In snooker, when shots go awry, Hawk-Eye is used to demonstrate the shot the player actually intended. Hawk-Eye has also been used in badminton and Australian football.

The Hawk-Eye technology has brought a revolution in sports: games are fairer, and the chances of errors in decisions have fallen ever since it surfaced. In the end, when the human eye misses, the Hawk-Eye catches.


References:

1) https://www.hawkeyeinnovations.com/sports/cricket
2) www.topendsports.com
3) www.bostonglobe.com
4) www.youtube.com
5) www.hire-intelligent.co.uk

Note: The views expressed here are those of the author and do not necessarily represent or reflect the views of DOT as a whole.

Sunday, July 23, 2017

Machine learning: Turning data to information, information to insights

Science has brought us many inventions that have been used in various ways for the benefit of humankind. One of them is machine learning, the buzzword of this decade, which has been making the rounds all across business. When it comes to decisions, prediction plays a great role in making apposite choices that can have a huge impact on a business, and computers have greatly improved our ability to take decisions.

But what if we could improve the ability of computers to take decisions?

In 1959, a new invention took place in the field of computer science which has since grown through a series of continuous developments. We know it as machine learning. As the name suggests, machine learning gives a machine the ability to learn based on algorithms; in short, it gives a machine the ability to improve its own performance.
Machine learning is closely related to computational statistics, which also focuses on making predictions using computers.

Machine learning and Artificial Intelligence are closely related and perhaps this is the reason that they are often used interchangeably.

Artificial intelligence, on the one hand, seeks to re-engineer the attributes possessed by humans. AI refers to the ability of a computer to think like a human or mimic a human mind, trying to match the logical skills of a human. Virtual video games, self-driving cars and Siri (the iPhone’s personal assistant) are all applications of AI.

Machine learning


Machine learning is a technique involving algorithms or models that learn patterns in data and then predict similar patterns in new data.
Machine learning, as defined by Tom Mitchell, Professor at Carnegie Mellon University, is as follows:

"A computer program is said to learn from experience 'E' with respect to some class of tasks 'T' and performance measure 'P' if its performance in tasks 'T', as measured by 'P', improves with experience 'E'."

In simpler words, if a computer program can perform a task based on some past experience then it is said to have learned something from past experience. This is quite different from a program which can perform a task because its programmers have already defined all the parameters required to perform that specific task.

For example: a computer program can play a game of tic-tac-toe if a programmer has coded a winning strategy into it; however, a program with no predefined strategy, programmed only with the rules of the game, will have to adapt by playing many games until it discovers a winning strategy.

This is true not just for games but also for programs that perform classification and prediction. Classification is the process of assigning items in a data set to classes or categories. Prediction, also known as regression, is the process by which a computer predicts the value of a variable based on past values.
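The two task types can be illustrated with minimal, self-contained Python: a 1-nearest-neighbour rule for classification, and an ordinary least-squares line fit for prediction/regression. All the data points below are made up for the example.

```python
# Classification: assign a new point the label of its nearest neighbour.
def classify_1nn(train, point):
    """train: list of ((x, y), label); returns the nearest point's label."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(train, key=lambda item: dist2(item[0], point))[1]

# Prediction/regression: fit y = a*x + b to past (x, y) values.
def fit_line(xs, ys):
    """Ordinary least squares for a straight line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx          # slope, intercept

train = [((0, 0), "small"), ((0, 1), "small"), ((5, 5), "large")]
print(classify_1nn(train, (4, 4)))           # large

a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])  # data lies on y = 2x + 1
print(round(a), round(b))                    # 2 1
```

Both functions "learn" only in the weak sense of summarising past data, but they show the shape of the two problems: discrete labels versus a continuous value.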

People are amazed at the pace at which machine learning has risen in recent years, but there is a big reason behind this sudden rise. Machine learning is not something new; what has given it a boost is the huge amount of data available today that simply was not present earlier.

The major factors which have contributed to the resurgence of machine learning are:
  1. Data mining
  2. Bayesian analysis
  3. Inexpensive storage
Machine learning can be divided into three categories:
1) Supervised learning
2) Unsupervised learning
3) Reinforcement learning

Supervised learning: Supervised learning is a method in which the training data is tagged with labels and the machine predicts the outcome for new data based on similar past examples.
For example: predicting which country a flag belongs to after training on flag images labelled with country names.

Unsupervised learning: Unsupervised learning is a method in which the machine is trained on a data set that does not contain any labels.
For example: the auto-prediction of the Google search engine is based on this way of learning.

Reinforcement learning: Reinforcement learning is based on behavioural psychology. This sort of learning can be used in economics and game theory.
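To make the supervised/unsupervised contrast concrete, here is a toy unsupervised example in Python: a 2-means clustering loop that groups unlabelled numbers around two moving centres. No labels are ever supplied; the structure is discovered from the data alone. The numbers are invented for the sketch.

```python
# Toy unsupervised learning: one-dimensional 2-means clustering.
def two_means(points, iters=10):
    """Cluster 1-D points into two groups around two moving centres."""
    c1, c2 = min(points), max(points)   # crude initialisation
    for _ in range(iters):
        # Assignment step: each point joins its nearest centre.
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        # Update step: each centre moves to the mean of its group.
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return sorted((c1, c2))

data = [1, 1, 1, 9, 11, 10]     # no labels attached to these points
print(two_means(data))          # [1.0, 10.0]
```

A supervised learner would be told which points belong to which group; here the algorithm has to infer the two groups itself, which is exactly the distinction drawn above.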

There are many methods to implement machine learning which are widely being used by data scientists.

Future aspects and applications

Machine learning can be used in a wide range of applications that encompass all aspects of business decision making, i.e., Finance, Marketing, HR and Operations.
  1. Machine learning will bring innovation to accounting by reducing the repetitive tasks done by accounting professionals, enabling them to focus on more important aspects that affect a business.
  2. Machines can be taught to predict the future revenue of a company.
  3. We have already seen the potential of data visualization in business. The next step is perhaps automated data visualization, with companies trying to choose the right widgets for displaying machine learning results in visualization software.
  4. Companies are often left wondering why one employee quit while another continued to work for the organization. Such questions can be answered by feeding variables like tenure, wage and time in current role into algorithms, which can then find patterns that would otherwise be very difficult to spot.
 References:
  1. http://www.cs.cmu.edu/~tom/ 
  2. https://www.forbes.com/sites/bernardmarr/2016/12/06/what-is-the-difference-between-artificial-intelligence-and-machine-learning/#728e84f12742 
  3. https://www.google.co.in/amp/s/www.forbes.com/sites/bernardmarr/2017/07/07/machine-learning-artificial-intelligence-and-the-future-of-accounting/amp/ 
  4. http://www.hcamag.com/hr-news/the-future-of-machine-learning-and-human-resources-236576.aspx
  5. http://bigdata-madesimple.com/wp-content/uploads/2016/02/un-supervised-learning.png

Note: The views expressed here are those of the author and do not necessarily represent or reflect the views of DOT as a whole.