History of Data Input
Date | Details | Source
0 | NEED Date for switches | http://iopscience.iop.org/1741-2552/10/4/046003/article
c. 10–70 AD | The Greek mathematician Hero of Alexandria built a mechanical theater which performed a play lasting ten minutes, operated by a complex system of ropes and drums that might be considered a means of deciding which parts of the mechanism performed which actions and when.[12] This is the essence of programmability.
1837 | Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer, his Analytical Engine. | http://en.wikipedia.org/wiki/Computer#cite_note-22
1866 | Early computer keyboards were based either on teletype machines or keypunches, and the many electromechanical steps in transmitting data between the keyboard and the computer slowed things down. With VDT technology and electric keyboards, a key press could send electronic impulses directly to the computer and save time. By the late 1970s and early 1980s, all computers used electronic keyboards and VDTs. Nevertheless, the layout of the computer keyboard still owes its origin to Christopher Latham Sholes, inventor of the first typewriter and of the QWERTY layout, though the computer keyboard does add a few extra function keys.
1866 | Christopher Sholes invented the practical modern typewriter. Sholes used a type-bar system, and the machine's novelty was its universal keyboard, which evolved into the QWERTY keyboard.
1874 | The beginning: Alexander Graham Bell proves that frequency harmonics from an electrical signal can be divided. This eventually leads to the digitization of speech.
1880s | In the late 1880s, Herman Hollerith invented the recording of data on a machine-readable medium. Earlier uses of machine-readable media had been for control, not data. "After some initial trials with paper tape, he settled on punched cards..."[25] To process these punched cards he invented the tabulator and the keypunch machine.
1924 | The history of brain–computer interfaces (BCIs) starts with Hans Berger's discovery of the electrical activity of the human brain and the development of electroencephalography (EEG). In 1924 Berger was the first to record human brain activity by means of EEG. By analyzing EEG traces, Berger was able to identify oscillatory activity in the brain, such as the alpha wave (8–12 Hz), also known as Berger's wave.
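Berger read the alpha rhythm directly off paper traces; in digital EEG analysis the same 8–12 Hz band is typically isolated with a band-pass filter. A minimal sketch of that idea (the synthetic signal, 256 Hz sampling rate, and fourth-order Butterworth filter are illustrative assumptions, not Berger's method):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def alpha_band(eeg: np.ndarray, fs: float) -> np.ndarray:
    """Isolate the 8-12 Hz alpha band with a fourth-order Butterworth band-pass."""
    b, a = butter(4, [8.0, 12.0], btype="bandpass", fs=fs)
    return filtfilt(b, a, eeg)  # zero-phase filtering, so peaks are not shifted

# Hypothetical one-second trace sampled at 256 Hz: a 10 Hz "alpha" sine buried in noise.
fs = 256.0
t = np.arange(0, 1, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).standard_normal(t.size)
alpha = alpha_band(eeg, fs)  # the recovered oscillation, analogous to Berger's wave
```

Plotting `alpha` against `eeg` makes the buried 10 Hz oscillation visible, much as Berger's pen traces did.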
1930 | The teletype machine, introduced in the 1930s, combined the technology of the typewriter (used as an input and a printing device) with the telegraph. Elsewhere, punched-card systems were combined with typewriters to create what were called keypunches. Keypunches were the basis of early adding machines, and IBM was selling over one million dollars' worth of adding machines in 1931.
1936 | The first electronic speech synthesizer, the Voder: AT&T's Bell Labs produced the first electronic speech synthesizer, called the Voder (Dudley, Riesz and Watkins). The machine was demonstrated at the 1939 World's Fair by experts who used a keyboard and foot pedals to play it and emit speech.
1946 | Early computer keyboards were first adapted from punch-card and teletype technologies. In 1946, the ENIAC computer used a punched-card reader as its input and output device. In 1948, the BINAC computer used an electromechanically controlled typewriter both to input data directly onto magnetic tape (for feeding the computer data) and to print results. The emerging electric typewriter further improved the technological marriage between the typewriter and the computer.
1952 | First effective speech recognizer: Bell Labs develops the first effective speech recognizer (97% accurate), using a simple frequency-splitter technology similar to the one Alexander Graham Bell developed 78 years earlier.
1964 | MIT, Bell Laboratories and General Electric collaborated to create a computer system called Multics, a time-sharing, multi-user system. Multics encouraged the development of a new user interface, the video display terminal (VDT), which combined the cathode-ray tube used in televisions with the electric typewriter. Computer users could now see the text they were typing on their display screens, making text easier to create, edit and delete, and computers easier to program and use.
1969 | Artificial intelligence needed: John Pierce of Bell Labs says automatic speech recognition will not be a reality for several decades because it requires artificial intelligence.
1969 | The operant conditioning studies of Fetz and colleagues, at the Regional Primate Research Center and Department of Physiology and Biophysics, University of Washington School of Medicine in Seattle, showed for the first time that monkeys could learn to control the deflection of a biofeedback meter arm with neural activity.[7] Similar work in the 1970s established that monkeys could quickly learn to voluntarily control the firing rates of individual and multiple neurons in the primary motor cortex if they were rewarded for generating appropriate patterns of neural activity.[8]
1970 | HMM approach to speech recognition is invented: The Hidden Markov Modeling (HMM) approach to speech recognition was invented by Lenny Baum of Princeton University and shared with several ARPA (Advanced Research Projects Agency) contractors, including IBM. HMM is a mathematical pattern-matching strategy that was eventually adopted by all the leading speech recognition companies, including Dragon Systems, IBM, Philips, AT&T and others.
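The mathematical heart of HMM-based recognition is scoring how likely an observed sequence of acoustic symbols is under each candidate word's model of hidden states, classically via the forward algorithm. A minimal sketch (the two-state, three-symbol model and every probability in it are made-up illustrative assumptions, not any vendor's recognizer):

```python
import numpy as np

def forward_log_likelihood(pi, A, B, obs):
    """Forward algorithm: log P(obs | HMM), summed over all hidden-state paths.

    pi  -- initial state distribution, shape (S,)
    A   -- transition matrix, A[i, j] = P(next state j | state i), shape (S, S)
    B   -- emission matrix, B[i, k] = P(symbol k | state i), shape (S, K)
    obs -- sequence of observed symbol indices
    """
    alpha = pi * B[:, obs[0]]          # joint P(first symbol, each state)
    log_lik = 0.0
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate one step, then emit
        scale = alpha.sum()            # rescale to avoid numerical underflow
        log_lik += np.log(scale)
        alpha /= scale
    return log_lik + np.log(alpha.sum())

# Toy two-state, three-symbol model; all numbers are invented for illustration.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],
              [0.1, 0.3, 0.6]])
print(forward_log_likelihood(pi, A, B, obs=[0, 1, 2, 1]))
```

A recognizer built this way scores the same observation sequence against one HMM per vocabulary word and picks the word whose model yields the highest log-likelihood.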
1970 | Punch cards.
1971 | DARPA establishes SUR program: DARPA (Defense Advanced Research Projects Agency) established the Speech Understanding Research (SUR) program to develop a computer system that could understand continuous speech. Lawrence Roberts, who initiated the program, spent $3 million per year of government funds for five years. Major SUR project groups were established at CMU, SRI, MIT's Lincoln Laboratory, Systems Development Corporation (SDC), and Bolt, Beranek and Newman (BBN). It was the largest speech recognition project to date.
1978 | "Speak & Spell" by Texas Instruments: The popular toy Speak & Spell was introduced. It used a speech chip, which led to huge strides in the development of more human-like digital speech synthesis. | http://www.timetoast.com/timelines/the-history-of-voice-recognition
1978–2000 | Dobelle's first prototype was implanted into "Jerry", a man blinded in adulthood, in 1978. A single-array BCI containing 68 electrodes was implanted onto Jerry's visual cortex and succeeded in producing phosphenes, the sensation of seeing light.
1982 | Dragon Systems founded: Dragon Systems was founded by speech industry pioneers Drs. Jim and Janet Baker. It is well known for its long history of speech and language technology innovations and its large patent portfolio.
1993–1997 | There has been rapid development in BCIs since the mid-1990s.[10] Several groups have been able to capture complex brain motor cortex signals by recording from neural ensembles (groups of neurons) and using these to control external devices. Notable research groups have been led by Richard Andersen, John Donoghue, Phillip Kennedy, Miguel Nicolelis and Andrew Schwartz.[citation needed]
1995 | Dragon releases dictation speech recognition technology to consumers: Dragon released discrete-word, dictation-level speech recognition software. It was the first time dictation speech recognition technology was available to consumers. IBM and Kurzweil followed a few months later.
1996 | BellSouth releases the world's first voice portal: BellSouth launches the world's first voice portal, called Val and later Info By Voice.
1996 | Charles Schwab becomes an industry leader: Charles Schwab is the first company to devote resources toward developing a speech recognition IVR system with Nuance. The program, Voice Broker, allows up to 360 simultaneous customers to call in and get quotes on stocks and options, handling up to 50,000 requests each day. The system was found to be 95% accurate and set the stage for other companies, such as Sears, Roebuck and Co., United Parcel Service of America Inc. and E*Trade Securities, to follow in its footsteps.
1997 | Dragon releases "Naturally Speaking" (continuous speech dictation software): Dragon introduced "Naturally Speaking", the first "continuous speech" dictation software.
1998 | Microsoft begins investing in speech recognition technology: Lernout & Hauspie bought Kurzweil. Microsoft invested $45 million in Lernout & Hauspie to form a partnership that would eventually allow Microsoft to use their speech recognition technology in its systems.
1998–2002 | Phillip Kennedy and colleagues' patient, Johnny Ray (1944–2002), suffered from locked-in syndrome after a brain-stem stroke in 1997. Ray's implant was installed in 1998 and he lived long enough to start working with it, eventually learning to control a computer cursor; he died in 2002 of a brain aneurysm.[25]*
1999 | In 1999, researchers led by Yang Dan at the University of California, Berkeley decoded neuronal firings to reproduce images seen by cats. The team used an array of electrodes embedded in the thalamus (which integrates all of the brain's sensory input) of sharp-eyed cats. Researchers targeted 177 brain cells in the thalamus's lateral geniculate nucleus, which decodes signals from the retina. The cats were shown eight short movies, and their neuron firings were recorded. Using mathematical filters, the researchers decoded the signals to generate movies of what the cats saw and were able to reconstruct recognizable scenes and moving objects.[11] Similar results in humans have since been achieved by researchers in Japan (see below).
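The "mathematical filters" amount to a linear decoder: fit a mapping from recorded firing rates back to stimulus pixels on training footage, then apply it to new responses. A minimal sketch using ridge regression as a stand-in decoder (the toy 16x16 frames, synthetic firing rates, and penalty value are illustrative assumptions, not the paper's exact filters):

```python
import numpy as np

def fit_linear_decoder(rates, frames, lam=1e-2):
    """Ridge-regression map from firing rates back to stimulus pixels.

    rates  -- (T, N) spike counts per time bin for N recorded cells
    frames -- (T, P) training-movie frames flattened to P pixels
    lam    -- small ridge penalty so the inverse stays well conditioned
    """
    return np.linalg.solve(rates.T @ rates + lam * np.eye(rates.shape[1]),
                           rates.T @ frames)  # (N, P): one spatial filter per cell

# Synthetic stand-in data: 177 cells, as in the study, and toy 16x16 frames.
rng = np.random.default_rng(0)
T, N, P = 2000, 177, 16 * 16
frames = rng.standard_normal((T, P))
encoding = rng.standard_normal((N, P))              # made-up cell receptive fields
rates = frames @ encoding.T + 0.1 * rng.standard_normal((T, N))

W = fit_linear_decoder(rates, frames)
reconstruction = rates @ W                          # decoded movie, one frame per bin
```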
2000–2001 | After conducting initial studies in rats during the 1990s, Nicolelis and his colleagues developed BCIs that decoded brain activity in owl monkeys and used the devices to reproduce monkey movements in robotic arms. Monkeys have advanced reaching and grasping abilities and good hand manipulation skills, making them ideal test subjects for this kind of work.
2000 | By 2000 the group had succeeded in building a BCI that reproduced owl monkey movements while the monkey operated a joystick or reached for food.[12] The BCI operated in real time and could also control a separate robot remotely over Internet protocol. But the monkeys could not see the arm moving and did not receive any feedback, a so-called open-loop BCI.
2000 | First worldwide voice portal: TellMe introduces the first worldwide voice portal.
2000 | The world's first voice enabler launches, including an online ordering application with real-time Internet integration for Office Depot.
2000–2001 | Miguel Nicolelis, a professor at Duke University in Durham, North Carolina, has been a prominent proponent of using multiple electrodes spread over a greater area of the brain to obtain neuronal signals to drive a BCI. Such neural ensembles are said to reduce the variability in output produced by single electrodes, which could make it difficult to operate a BCI.
2001 | ScanSoft closes acquisition of Lernout & Hauspie speech and language assets.
2002 | In 2002, Jens Naumann, also blinded in adulthood, became the first in a series of 16 paying patients to receive Dobelle's second-generation implant, marking one of the earliest commercial uses of BCIs. The second-generation device used a more sophisticated implant, enabling better mapping of phosphenes into coherent vision.
2003 | Healthcare is radically impacted by highly accurate speech recognition.
2003 | ScanSoft closes acquisition of SpeechWorks International, Inc.
2003 | ScanSoft closes a deal to distribute and support IBM ViaVoice desktop products.
2005 | Tetraplegic Matt Nagle became the first person to control an artificial hand using a BCI, in 2005, as part of the first nine-month human trial of Cyberkinetics's BrainGate chip implant. Implanted in Nagle's right precentral gyrus (the area of the motor cortex governing arm movement), the 96-electrode BrainGate implant allowed Nagle to control a robotic arm by thinking about moving his hand, as well as a computer cursor, lights and a TV.
2006 | One year later, Professor Jonathan Wolpaw received the prize of the Altran Foundation for Innovation for developing a brain–computer interface with electrodes located on the surface of the skull, instead of directly in the brain.
2008 | In 2008, research developed at the Advanced Telecommunications Research (ATR) Computational Neuroscience Laboratories in Kyoto, Japan allowed scientists to reconstruct images directly from the brain and display them on a computer. The article announcing these achievements was the cover story of the journal Neuron of 10 December 2008.
2010 | Apple Siri: Apple acquires Siri Inc.
2011 | Researchers from UC Berkeley published[62] a study reporting second-by-second reconstruction of videos watched by the study's subjects from fMRI data. This was achieved by creating a statistical model relating the visual patterns in videos shown to the subjects to the brain activity caused by watching them. The model was then used to look up the 100 one-second video segments, in a database of 18 million seconds of random YouTube videos, whose visual patterns most closely matched the brain activity recorded when subjects watched a new video.
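Mechanically, the lookup step predicts the brain response each database clip would evoke under the fitted encoding model, then ranks clips by how well that prediction matches the observed response. A minimal sketch of the ranking (the correlation score, array sizes, and synthetic data are illustrative assumptions, not the study's exact model):

```python
import numpy as np

def rank_clips(observed, predicted):
    """Rank database clips by Pearson correlation between the encoding model's
    predicted voxel response for each clip and the observed response.

    observed  -- (V,) measured voxel pattern for one second of the new video
    predicted -- (C, V) model-predicted patterns for C database clips
    Returns clip indices, best match first.
    """
    obs = (observed - observed.mean()) / observed.std()
    pred = ((predicted - predicted.mean(axis=1, keepdims=True))
            / predicted.std(axis=1, keepdims=True))
    scores = pred @ obs / obs.size   # per-clip correlation with the observation
    return np.argsort(scores)[::-1]

# Synthetic stand-in data: 10,000 candidate clips, far fewer than 18 million seconds.
rng = np.random.default_rng(1)
V, C = 500, 10_000
predicted = rng.standard_normal((C, V))
observed = predicted[42] + 0.5 * rng.standard_normal(V)  # clip 42 should rank first
top_100 = rank_clips(observed, predicted)[:100]          # segments averaged into the reconstruction
```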
2012 |
2013 | Mind-controlled helicopter performs stunts.
End of BCI and Start of Voice | A brain–computer interface (BCI), often called a mind-machine interface (MMI), or sometimes called a direct neural interface or a brain–machine interface (BMI), is a direct communication pathway between the brain and an external device. BCIs are often directed at assisting, augmenting, or repairing human cognitive or sensory-motor functions.
End of Voice, Beginning of Keyboard |
*Neuroprosthetics is an area of neuroscience concerned with neural prostheses: using artificial devices to replace the function of impaired nervous systems, brain-related functions, or sensory organs. The most widely used neuroprosthetic device is the cochlear implant which, as of December 2010, had been implanted in approximately 220,000 people worldwide.[4] There are also several neuroprosthetic devices that aim to restore vision, including retinal implants.*
Limited finances and Babbage's inability to resist tinkering with the design meant that the device was never completed; nevertheless his son, Henry Babbage, completed a simplified version of the Analytical Engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. This machine was given to the Science Museum in South Kensington in 1910. Note: it used punched cards.
Phillip Kennedy (who later founded Neural Signals in 1987) and colleagues built the first intracortical brain–computer interface by implanting neurotrophic-cone electrodes into monkeys.[citation needed]