- Shrey Pandey, Manas Uniyal, Ayush Jaiswal, Sahaj Bamba
Let us change our traditional attitude to the construction of programs: Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do.
— Donald E. Knuth
As we are all well aware, a computer consists of three major components, viz. an input device, an output device, and a processing unit.
Generally, input is fed to a computer through a touch screen, a conventional keyboard, or a mouse. But there are a few downsides to such methods of input.
- We need to physically interact with computers.
- These devices have a fixed way of taking input from the user, and thus lack personalization for each user.
To fetch output from computers, we use devices such as a display screen, speakers, etc. The most general form of output, i.e., the conventional display screen, has its demerits too.
- We need to face the screen to interact with it.
- We can’t see 3D objects on a display screen, but in reality, all objects are 3D. Conventional screens are unable to mimic the physical world.
- Physical-world objects can’t interact with computer screens, which makes simulations very difficult.
To overcome these problems, a smarter, more intuitive user interface is needed, one which can replace the current manner of input and output with something more convenient and easier to interact with.
For taking output from a computer, a technique is needed such that virtual output from display screens can be seen in conjunction with the physical world. This would be more natural, since we all grow up interacting with the physical world.
Similarly, for giving input to a computer, a technique is needed which can comprehend our physical actions and gestures in the same way as other humans can.
- Disabled People