|PATENT ANALYTICS - "Technology Scouting Analysis"|
Disclaimer: The information in this report is provided solely for assisting in the independent evaluation of the patent portfolio and intellectual property (IP).
In no event shall TransactionsIP be liable for any incidental, consequential, or special damages of any kind, or any damages whatsoever associated with this report.
|TABLE OF CONTENTS|
|Master Patent Dataset|
|Critical Patent Reference Findings|
|Top 5 Competitor Intelligence|
|Patent List of Top 5 Competitors|
|Top Manufacturers and Vendors|
|Interesting Research Articles|
|List of Universities|
|To perform a Technology Scouting Analysis|
|The present technology relates to "Wearable Audio Device with Embedded Technology".|
The objective is to find wearable audio devices in which humidity, temperature, and pressure sensors provide sensing for healthcare and entertainment. The wearable device is not only for entertainment and monitoring our physical wellbeing; it blends seamlessly into our lives, providing a link to the Internet of Things (IoT) and beyond. Wearable devices allow us to manipulate our surroundings, as well as enable our surroundings to adapt to our immediate needs.
These days, wearables come in various forms such as smart watches, smart shoes, smart glasses, armbands, and waist accessories. Similarly, a user can keep a mobile phone in a front pocket, back pocket, or shirt pocket, in hand, or on a table.
Wireless wearable sensor systems, such as the mobile force plate system, have also been developed to implement quantitative human kinematic and kinetic analysis, which may be applied to rehabilitation, clinical diagnosis, and healthcare monitoring in the future.
The above-mentioned sensors (humidity, temperature, and pressure sensors) are mainly used to solve the problems defined above, with the functionality and expandability of sophisticated engineering development platforms.
|This assignment relates to the scouting and analysis of patents describing “Wearable Audio Devices with Embedded Technology” using three different types of sensors: humidity, temperature, and pressure sensors. |
Wearables carry various types of sensors, but humidity, temperature, and pressure sensors in particular have the potential to change the world; together with Bluetooth Low Energy (BLE), they have empowered devices with the power of sensing and communication to take complex decisions. These sensors measure the main parameters monitored in clinical practice and daily life, and the framework and main modules utilized in the device, which constitute the basis of wearable sensor systems for users, are summarized here.
Monitoring methods and techniques in the wearable sensor system, such as single-parameter monitoring, multi-parameter monitoring, and textile electrode technology, were reviewed according to recent research and applications of the technology.
These sensors are supported by applications for Android and iOS, allowing developers to connect the device to the cloud out of the box, without any additional software development.
Wearable sensor systems built on these sensors can be used for special cases in healthcare and patient monitoring.
The wearable device includes a wearable sensor system; such systems are becoming smaller and more intelligent, and many have been commercialized, benefiting numerous users around the world. Various monitoring methods and techniques, such as direct monitoring, indirect monitoring, multi-parameter monitoring, single-parameter monitoring, textile technology, integration, wireless sensing, and power supply, have been applied in these systems.
The future of wearable technology will see unprecedented growth and evolution in the next few years, and we’re all invited along for the ride.
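As a concrete illustration of how embedded humidity, temperature, and pressure sensors might feed a simple on-device decision, here is a minimal, hypothetical sketch. The thresholds, field names, and labels are illustrative assumptions, not drawn from any patent in this report:

```python
# Hypothetical sketch: fusing humidity, temperature, and pressure readings
# from a wearable audio device into a coarse environment/wellbeing flag.
# All thresholds and names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class SensorReading:
    humidity_pct: float      # relative humidity, %
    temperature_c: float     # degrees Celsius
    pressure_hpa: float      # hectopascals

def classify_environment(r: SensorReading) -> str:
    """Map raw readings to a coarse environment label."""
    if r.temperature_c > 35.0 and r.humidity_pct > 70.0:
        return "heat-stress-risk"   # hot and humid: warn the wearer
    if r.pressure_hpa < 950.0:
        return "high-altitude"      # unusually low ambient pressure
    return "normal"
```

A BLE-connected companion app could poll such readings periodically and surface the label to the user or the cloud.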
|Search Strategy |
|The following steps were undertaken, not necessarily in sequence, to perform the search. |
• Various keywords and classifications were used independently and/or in combination with each other to perform multiple searches in various databases.
• Various assignee names were used in combination with keywords and classifications to perform multiple searches in various databases.
• To supplement the search analytics, assignee standardization was performed to ensure accuracy in the patent count.
• Researchers adopted a progressively evolving search strategy to identify the most relevant results within the project budget.
|Taxonomy relates to “Wearable Audio Devices with Embedded Technology” having different Mode of connections, different types of sensors, processing units and additional features, etc.|
|Technical Features||Mode of Connection||Wireless|
|Types of Sensors||Gyroscopic Sensor | Optical / Light Sensor | Piezoelectric / Capacitive Sensor | Physiology / Biometric Sensor|
|Processing Unit (Location)||Inside Earphone|
|Application Area||Internal (Physiological) | External (Environmental)|
|Additional Features||Noise Cancellation|
|The sheet shows all the patent/publication numbers with corresponding categorization list.|
|LANDSCAPE ANALYSIS - "AUDIO DEVICES WITH EMBEDDED TECHNOLOGY"|
|S.No||Publication Number||Title||Abstract||First Claim||Technical Features||Application Date||Publication Date||Earliest Priority Date||US Classification||IPC Classification||CPC Classification||Inventors||Assignee / Applicant||Assignee - Standardized||Count of Citing Patents||INPADOC Family Members|
|Mode of Connection||Types of Sensors||Processing Unit (Location)||Application Area||Additional Features|
|Wireless||Wired||Gyroscopic Sensor||Position Sensor||Pressure Sensor||Accelerometer||Magnetometer||Optical / Light Sensor||Acoustic Sensor||Piezoelectric / Capacitive Sensor||Image Sensor||Ultrasonic Sensor||Microphone||Physiology / Biometric Sensor||Temperature Sensor||Proximity Sensor||Vibration sensor||Humidity Sensor||Inside Earphone||Outside Earphone||Internal (Physiological)||External (Environmental)||Feedback||Video display||Gesture||Touch control|
|1||US9226090B1||Sound Localization For An Electronic Call||During an electronic call between two individuals, a sound localization point simulates a location in empty space from where an origin of a voice of one individual occurs for the other individual.||1. A method, comprising: |
capturing, with an electronic earphone located at a head of a talking person, binaural sound that will be provided to a listening person during a telephone call;
designating, with a computer system, a sound localization point in empty space that is away from and proximate to the listening person such that the sound localization point simulates an origin of the binaural sound at the empty space that the listening person hears during the telephone call;
adjusting, with the computer system, the binaural sound captured at the earphone of the talking person so the binaural sound originates during the telephone call from the sound localization point in empty space that is away from and proximate to the listening person; and
providing, with an electronic earphone located at a head of the listening person, the binaural sound to the listening person during the telephone call such that the origin of the binaural sound for the listening person occurs at the sound localization point in empty space that is away from and proximate to the listening person.
|-||Yes||Yes||Yes||-||Yes||Yes||-||-||-||-||-||-||-||-||-||-||-||-||Yes||Yes||Yes||-||-||Yes||Yes||-||Facial Motion Capture||-||-||-||-||2014-06-23||2015-12-29||2014-06-23||-||H04S000700||H04S0007303||Norris, Glen A. | Lyren, Philip Scott||Norris Glen A | Lyren Philip Scott||-||0||US9226090B1 | US20150373477A1|
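The binaural placement described in US9226090B1 can be illustrated with a standard psychoacoustic calculation. The sketch below is not the patent's method; it uses the classic Woodworth spherical-head model (the head radius and speed of sound are assumed constants) to estimate the interaural time difference that would make a voice appear to originate at a given azimuth:

```python
import math

# Illustrative sketch (not the patent's actual method): estimating the
# interaural time difference (ITD) for a far-field source at a given azimuth,
# using the Woodworth spherical-head model. Constants are assumptions.

HEAD_RADIUS_M = 0.0875   # assumed average adult head radius, metres
SPEED_OF_SOUND = 343.0   # m/s at room temperature

def itd_seconds(azimuth_deg: float) -> float:
    """ITD for a source at the given azimuth (0 deg = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))
```

Delaying one earphone channel by this amount (together with a level difference) is one conventional way to simulate a sound localization point away from the listener.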
|2||US9224382B2||Noise Cancellation||A noise cancellation signal is generated by generating an ambient noise signal, representing ambient noise, and generating a noise cancellation signal, by applying the ambient noise signal to an feedforward filter, where the feedforward filter comprises a high-pass filter having an adjustable cut-off frequency, and by applying a controllable gain. The noise cancellation signal is then applied to a loudspeaker, to generate a sound to at least partially cancel the ambient noise. An error signal is generated, representing unwanted sound in the region of the loudspeaker. The phase of the ambient noise signal is compared to a phase of the error signal, and the gain is controlled on the basis of a result of the comparison, taking account of a phase shift introduced by the high-pass filter when performing the comparison.||1. A method of generating a noise cancellation signal, the method comprising: |
generating an ambient noise signal, representing ambient noise;
generating a noise cancellation signal, by applying the ambient noise signal to a feedforward filter, wherein the feedforward filter comprises a high-pass filter having an adjustable cut-off frequency, and by applying a controllable gain;
applying the noise cancellation signal to a loudspeaker, to generate a sound to at least partially cancel the ambient noise; and
generating an error signal, representing unwanted sound in the region of the loudspeaker, wherein the method further comprises:
comparing a phase of the ambient noise signal to a phase of the error signal, and controlling said gain on the basis of a result of said comparison, and taking account of a phase shift introduced by said high-pass filter when performing said comparison.
|-||Yes||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||Yes||-||-||Yes||-||-||-||-||-||-||-||-||-||-||2013-10-15||2015-12-29||2012-10-12||-||G10K0011175 | G10K0011178||G10K0011175 | G10K0011178 | G10K22103027 | G10K22103028||Clemow, Richard||Cirrus Logic Internat Uk Ltd | Cirrus Logic Internat Semiconductor Ltd||Cirrus Logic Inc||0||US9224382B2 | GB201218346D0 | GB2506908A | GB2506908B | US20140105413A1|
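The feedforward scheme claimed in US9224382B2 (ambient noise passed through a high-pass filter with adjustable cut-off, scaled by a controllable gain, then inverted at the loudspeaker) can be sketched in a few lines. This is a minimal illustration under an assumed first-order filter, not Cirrus Logic's implementation:

```python
# Minimal sketch of feedforward noise cancellation: high-pass filter the
# ambient noise signal, apply a controllable gain, and invert the result to
# drive the loudspeaker. First-order RC filter and parameters are assumptions.

def high_pass(samples, alpha):
    """First-order high-pass filter; alpha sets the cut-off frequency."""
    out = []
    prev_x = prev_y = 0.0
    for x in samples:
        y = alpha * (prev_y + x - prev_x)
        out.append(y)
        prev_x, prev_y = x, y
    return out

def cancellation_signal(ambient, alpha=0.9, gain=1.0):
    """Inverted, gain-scaled, high-pass-filtered copy of the ambient noise."""
    return [-gain * y for y in high_pass(ambient, alpha)]
```

In the patent, the gain is adjusted by comparing the phase of the ambient signal against an error microphone's signal, accounting for the phase shift the high-pass filter introduces.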
|3||US9224311B2||Combining Data Sources To Provide Accurate Effort Monitoring||By combining data from different sensors (on fitness device, mobile smartphone, smart clothing, other devices or people in same location), an intelligent system provides a better indicator of an individual's physical effort, using rich data sources to enhance quantified metrics such as distance/pace/altitude gain, to provide a clearer picture of an individual's exercise and activity.||1. A device comprising: |
at least one computer readable storage medium bearing instructions executable by a processor;
at least one processor configured for accessing the computer readable storage medium to execute the instructions to configure the processor for:
receiving signals from a position sensor from which the processor can calculate a speed and a distance over an interval of time ΔT;
receiving at least one signal representing at least one biometric condition of a user of the device;
adjusting a baseline value associated with the speed and/or distance based at least in part on the biometric condition to render an adjusted baseline; and
outputting an indicia of exercise effort based at least in part on the adjusted baseline.
|Yes||-||-||-||-||-||Yes||Yes||-||-||-||-||-||Yes||Yes||-||-||Yes||-||Yes||Yes||-||Yes||Yes||-||-||-||-||-||-||Yes||-||2014-04-17||2015-12-29||2013-09-17||-||A63B007100 | A61B000500 | A61B00050205 | A61B0005021 | A61B0005024 | A63B007106 | G01C002100 | G01C002120 | G01S001919 | G06F000301 | G06F00030481 | G06F00030484 | G06F000316 | G06F001730 | G06F001900 | G06Q001006 | G08B002501 | G09B001900 | G10L001500 | H04B000500 | H04L002906 | H04W000400 | H04W001208 | A61B000511 | A61B0005117 | A61B0005145 | H04M0001725||G09B00190038 | A61B000502055 | A61B0005021 | A61B000502438 | A61B00054815 | A63B007106 | G01C002100 | G01C002120 | G01S001919 | G06F0003017 | G06F00030481 | G06F00030484 | G06F0003165 | G06F00173074 | G06F00193481 | G06Q00100639 | G08B0025016 | G10L001500 | H04B00050025 | H04L00630853 | H04W0004008 | H04W001208 | A61B000511 | A61B00051172 | A61B00051176 | A61B000514532 | A61B000514542 | H04M00017253 | H04M225002 | H04M225004 | H04M225012||Yeh, Sabrina Tai-Chen | Fredriksson, Jenny Therese||Sony Corp||Sony Corp||0||US9224311B2 | CN104436615A | CN104460980A | CN104460981A | CN104460982A | CN104469585A | JP2015058362A | JP2015058363A | JP2015058364A | JP2015059935A | JP2015061318A | KR2015032169A | KR2015032170A | KR2015032182A | KR2015032183A | KR2015032184A | US20150079562A1 | US20150079563A1 | US20150081056A1 | US20150081066A1 | US20150081067A1 | US20150081209A1 | US20150081210A1 | US20150082167A1 | US20150082408A1 | US8795138B1 | US9142141B2 | WO2015041970A1 | WO2015041971A1|
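The core idea of the US9224311B2 claim, adjusting a speed/distance baseline by a biometric condition to render an effort indicator, can be sketched as follows. The heart-rate scaling rule and all names are illustrative assumptions, not Sony's actual algorithm:

```python
# Hypothetical sketch: the same pace counts as more effort when heart rate is
# elevated, by lowering the baseline against which the pace is measured.
# The linear HR scaling and default values are assumptions for illustration.

def effort_index(speed_mps: float, heart_rate_bpm: float,
                 resting_hr: float = 60.0, baseline_speed: float = 2.0) -> float:
    """Exercise-effort indicator: pace relative to an HR-adjusted baseline."""
    hr_factor = heart_rate_bpm / resting_hr          # > 1 when HR is elevated
    adjusted_baseline = baseline_speed / hr_factor   # elevated HR lowers the bar
    return speed_mps / adjusted_baseline
```

A richer system would fuse additional sources (position sensor, smart clothing, nearby devices) before adjusting the baseline, as the abstract describes.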
|4||US9223540B2||Electronic Device And Method For Voice Recognition Upon Charge Transfer||An electronic device and a method for recognizing a voice are provided. An operating method of the electronic device includes detecting, at least one of two or more first sensors disposed in a preset region, detecting an amount of charge transfer over a preset value, when detecting the amount of the charge transfer over the preset value, detecting, at one of two or more second sensors disposed in a preset distance from two or more microphones, an object in a preset distance; and collecting, at one of the two or more microphones, the one disposed in a preset distance from the second sensor detecting the object in the preset distance, a voice.||1. An operating method of an electronic device, the method comprising: |
detecting, by at least one first sensor a charge transfer;
when an amount of the charge transfer is greater than a preset value, detecting, by one of two or more second sensors disposed at a position adjacent to each of two or more microphones, an object in a preset distance from the electronic device; and
receiving a voice by a microphone disposed in a position adjacent to the one of the two or more second sensors detecting the object.
|Yes||-||-||-||-||Yes||-||Yes||-||-||-||-||Yes||-||-||Yes||-||-||Yes||-||-||Yes||-||-||-||-||-||-||-||-||-||-||2013-08-08||2015-12-29||2012-10-30||-||G10L002100 | G06F000316 | G10L001500 | G10L002500 | H04M0001725||G06F0003167 | H04M000172522 | H04M225012 | H04M225074||Park, Hyung-Jin||Samsung Electronics Co Ltd | Samsung Electronics Co Ltd||Samsung Electronics Co Ltd||0||US9223540B2 | AU2013213762A1 | CN103795850A | EP2728840A2 | KR2014054960A | US20140122090A1|
|5||US9219967B2||Multiuser Audiovisual Control||Various audiovisual presentation arrangements are described. In some embodiments, a headset is configured to output audio to a user. A television receiver may be configured to output a plurality of video feeds for simultaneous presentation by a display device. Each video feed of the plurality of video feeds may be displayed in a different display region of the display device. The television receiver may receive a command indicative of a video feed of the plurality of videos feeds that the user is viewing on the display device. Based on the command, the television receiver may output, to the headset, an audio feed that corresponds to the video feed the user is viewing.||1. An audiovisual control system, the audiovisual control system comprising: |
a receiving device configured to:
receive a first command selecting a first video feed of a plurality of video feeds that a first user is viewing on a display device;
based on the first command, output a first audio feed that corresponds to the first video feed the first user is viewing, wherein the first audio feed is output to a first headphone device;
receive a second command indicative of a change command from the first user corresponding to the first video feed;
determine whether a second user is viewing the first video feed;
in response to determining whether the second user is viewing the first video feed, process the change command;
receive a third command indicative of a second video feed of the plurality of video feeds that the second user is viewing on the display device; and
based on the third command, output a second audio feed that corresponds to the second video feed the second user is viewing, wherein the second audio feed is output to a second headphone device.
|Yes||Yes||Yes||Yes||-||Yes||-||-||-||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||Yes||-||-||-||-||-||-||-||2013-11-25||2015-12-22||2013-11-25||-||H04R000110 | H04R002700||H04R002700 | H04R00011041 | H04R249915||Nguyen, Phuc H. | Bruhn, Christopher William||Echostar Technologies Llc||Echostar Technologies Llc||0||US9219967B2 | US20150146879A1|
|6||US9219965B2||Body-Worn Control Apparatus For Hearing Devices||A control apparatus comprises a housing and is adapted to control a hearing device by recognizing predefined gestures made by the device wearer by moving one arm and/or hand relative to the housing when the housing is in an operating position at or on the wearer's body. The housing comprises a reference electrode coupled capacitively to the wearer when the housing is in the operating position and a first sensor electrode. The control apparatus further comprises: a first signal generator to provide a first electric probe signal between the first sensor electrode and the reference electrode; a first measurement circuit to determine first signal values in dependence on the impedance between the first sensor electrode and the reference electrode; a detector to recognize gestures in dependence on the first signal values; and a control unit to provide control commands to the hearing device in dependence on recognized gestures.||1. A control apparatus comprising |
a housing and adapted to control a hearing device in dependence on recognising predefined gestures made by a wearer of the hearing device by moving one of his or her arms and/or the hand of said arm relative to the housing when the housing is in an operating position at or on the wearer's body, the housing comprising
a reference electrode arranged to couple capacitively to a body area of the wearer when the housing is in the operating position and
a first sensor electrode, the control apparatus further comprising:
a first signal generator adapted to provide a first electric probe signal between the first sensor electrode and the reference electrode;
a first measurement circuit adapted to determine first signal values in dependence on the impedance between the first sensor electrode and the reference electrode;
a detector adapted to recognise said gestures in dependence on the first signal values; and
a control unit adapted to provide control commands to the hearing device in dependence on recognised gestures, wherein
the first signal generator is adapted to provide the electric probe signal at multiple signal frequencies;
the first measurement circuit is adapted to determine the first signal values at multiple signal frequencies; and
the detector is adapted to recognise said gestures in dependence on changes in ratios between the first signal values determined at different signal frequencies.
|Yes||-||-||-||-||-||-||-||-||Yes||-||-||-||-||Yes||-||-||-||-||Yes||-||-||-||-||-||Yes||-||-||-||-||Yes||-||2013-11-06||2015-12-22||2012-11-07||-||H04R002500 | G08C001910 | H04B000500||H04R002555 | H04R0025453 | H04R0025558||Rasmussen, Karsten Bo | Hauschultz, Lars Ivar||Oticon As||Oticon As||0||US9219965B2 | CN103813250A | EP2731356A1 | US20140126759A1|
|7||US9219961B2||Information Processing System, Computer-Readable Non-Transitory Storage Medium Having Stored Therein Information Processing Program, Information Processing Control Method, And Information Processing Apparatus||In an exemplary information processing system including a plurality of sound output sections, the positional relationship among the plurality of sound output sections is recognized. In addition, a sound corresponding to a sound source object present in a virtual space is generated. The output volume of the sound for the sound source object is determined, for each sound output section, in accordance with the positional relationship among the plurality of sound output sections, and the generated sound is outputted in accordance with the output volume.||1. An information processing system including a processor system including at least one processor and a plurality of sound output sections, the processor system being configured to at least: |
recognize the positional relationship among the plurality of sound output sections;
generate a sound corresponding to a sound source object present in a virtual space, based on predetermined information processing; and
cause each of the plurality of sound output sections to output the generated sound therefrom, and determine, for each of the plurality of sound output sections, the output volume of the sound corresponding to the sound source object in accordance with the positional relationship among the plurality of sound output sections.
|Yes||Yes||-||-||-||Yes||-||-||-||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||Yes||-||-||-||-||-||-||-||2013-04-22||2015-12-22||2012-10-23||-||H04R000502 | H04S000700||H04R000502 | H04S0007303 | H04S0007304 | H04S240011 | H04S240013 | H04S240015||Osada, Junya||Nintendo Co Ltd||Nintendo Co Ltd||0||US9219961B2 | JP2014083205A | US20140112505A1|
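The positional volume idea in US9219961B2, determining each output section's volume from its relationship to a virtual sound-source object, can be illustrated with a simple inverse-distance rule. The falloff law and normalization are assumptions for illustration, not Nintendo's actual processing:

```python
import math

# Illustrative sketch: assign each sound output section a volume that falls
# off with its distance from the virtual sound-source object, so the nearest
# speaker plays loudest. The inverse-distance law is an assumption.

def output_volumes(source_xy, speakers_xy, min_dist=0.1):
    """Return one volume in [0, 1] per speaker; nearest speaker gets 1.0."""
    dists = [max(math.dist(source_xy, s), min_dist) for s in speakers_xy]
    gains = [1.0 / d for d in dists]
    peak = max(gains)
    return [g / peak for g in gains]
```

For example, with a source at the origin and speakers 1 m and 2 m away, the farther speaker plays at half the volume of the nearer one.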
|8||US9219957B2||Sound Pressure Level Limiting||Limiting the sound pressure level presented to the listener's ears by one or more headphones, using processing capabilities of a personal media device. Headphones, coupled to audio signals from a personal media device, include a sensor to measure the sound pressure level presented to the listener's ears, and provide that measure to the personal media device. The personal media device, optionally aided by one or more analog circuits, adjusts the audio signal so that the sound pressure level is maintained within a recommended range.||1. A method, including: |
measuring a sound pressure level next to a listener's ear;
comparing said sound pressure level with a pre-selected value; and
adjusting an audio signal emitted into said listener's ear, in response to a result of said comparing;
wherein said measuring includes obtaining a first sound pressure level next to said listener's first ear, and separately obtaining a second sound pressure level next to said listener's second ear; and
combining said first sound pressure level and said second sound pressure level;
wherein said adjusting is performed at one or more of said listener's ears, in response to a result of said combining.
|Yes||Yes||-||-||-||-||-||-||Yes||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||-||Yes||-||-||-||-||-||2013-03-12||2015-12-22||2012-03-30||-||H03G000320 | H03G000332 | H04R000110 | H04R000300||H04R0003002 | H03G000332 | H04R00011041 | H04R0003007 | H04R242007||Schul, Eran | Hogue, Douglas K. | Olson, Alan | Bruss, John||Imation Corp||Imation Corp||0||US9219957B2 | US20130259241A1|
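The method claimed in US9219957B2, measuring the sound pressure level at each ear, combining the two, and adjusting the audio signal against a recommended range, can be sketched as follows. The max-combination rule and the 85 dB limit are assumptions, not values from the patent:

```python
# Hypothetical sketch of SPL limiting: combine per-ear sound pressure levels
# and compute the (non-positive) gain adjustment needed to stay within a
# recommended limit. Combination rule and limit value are assumptions.

def limit_gain(spl_left_db: float, spl_right_db: float,
               limit_db: float = 85.0) -> float:
    """Gain in dB to apply (<= 0) so the louder ear stays within the limit."""
    combined = max(spl_left_db, spl_right_db)   # conservative combination
    return min(0.0, limit_db - combined)
```

The personal media device would apply this gain to the outgoing audio each time the headphone-mounted sensors report new SPL measurements.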
|9||US9213861B2||Mobile Communication System||The mobile communication device is for use as a cell phone, as a wireless identity authentication device with other electronic devices (with cell phones, computers, and ATMs), and as a headset in the form of an earphone, an eye-covering, or a head covering for audio communication with a central processor, another mobile terminal, a cell phone, or a PDA. The mobile communication device is hands-free, being worn on or near the face, and only requires a finger touch for bimodal identity authentication. An audio receiver is compatible with the ear of the user and a microphone transmits words spoken by the user, electronically therethrough. A fingerprint sensor is mounted and positioned within the device. When user authentication is required, the user is prompted to touch the fingerprint sensor, and said fingerprint data is compared with fingerprint images of authorized users. In another aspect of the invention, the mobile communication device is an eye-covering, a head covering, or an identification badge including a fingerprint sensor and a processor and is used for wireless authentication of the user.||1. A method for accessing a central processor by means of a wearable computer for gaining physical access, financial access, and data access as approved by an issuing authority, said method comprising: |
a. receiving a user request at a processor remote from said wearable computer for physical access into a secure area or for access or entry of secure data or for financial access to purchase goods or services at a terminal;
b. determining at a processing computer remote from said wearable computer if said wearable computer has been authorized for purpose of said user request by said issuing authority;
c. prompting said wearable computer from a prompting processor remote from said wearable computer to submit fingerprint data to gain said physical access or said data access or said financial access;
d. receiving user sensed fingerprint data submitted from said wearable computer, said receiving occurring in a processing computer remote from said wearable computer, said wearable computer enabling said user to have both hands free for said physical, financial and data access request except when submitting said fingerprint data, reference fingerprint data having been previously registered to authenticate user identity;
e. comparing said sensed fingerprint data submitted through said wearable computer with said reference fingerprint in a comparing processor, said comparing processor being remote from said wearable computer;
f. approving said user request to said physical access to said secure area and said data access if said user is authorized by said issuing authority, authentication of user identity being made at least in part based upon a comparison of said sensed fingerprint data with reference fingerprint data by an authorizing processor remote from said wearable computer; and
g. approving said user request for said financial access if said user is authorized by said issuing authority and an account balance has not been exceeded, authentication of user identity being made at least in part based upon a comparison of said sensed fingerprint data with reference fingerprint data by an authorizing processor remote from said wearable computer.
|Yes||-||-||Yes||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||Yes||-||Yes||Yes||-||-||-||Yes||-||-||-||-||-||-||2013-05-30||2015-12-15||2006-03-20||-||H04M000166 | G06F002162 | H04M000105 | H04M000160||G06F00216245 | H04M000105 | H04M000166 | H04M00016066 | H04M225012 | H04M225074||Black, Gerald R. | Black, Alyssa S.||Black Gerald R | Black Alyssa S||-||0||US9213861B2 | CA2647194A1 | US20100075631A1 | US20100311390A9 | US20130263284A1 | WO2008008101A2 | WO2008008101A3|
|10||US9211069B2||Personal Protective Equipment With Integrated Physiological Monitoring||Embodiments may comprise personal protective equipment with integrated physiological monitoring. Some embodiments may relate specifically to in-ear devices (such as hearing protection and/or communication devices) having one or more physiological sensors for early monitoring for heat related illnesses. Several embodiments may incorporate a temperature sensor and a speaker into such in-ear device.||1. A device comprising: |
an earpiece for use in a user's ear having a sealing ear tip;
at least one temperature sensor;
a speaker having a face;
one or more waveguides;
the earpiece has sufficient length and flexibility so that when in place in the user's ear it comfortably extends forward past at least a first bend of the user's ear canal;
the sealing tip is sufficiently pliable to form a good seal in the user's ear canal;
the temperature sensor comprises an IR sensor having a face, and the one or more waveguides comprise an IR waveguide;
the IR waveguide comprises an elongate hollow tube having an inner surface that is substantially reflective of IR which extends from the face of the IR sensor forward so that, when in place in the user's ear, the IR waveguide allows the IR sensor to detect temperature in the ear canal;
the one or more waveguides further comprise a sound waveguide;
the sound waveguide comprises an elongate hollow tube extending from the speaker face forward so that, when in place in the user's ear, the sound waveguide directs sound produced by the speaker into the user's ear canal at a point past the sealing ear tip;
the sound waveguide comprises an inner surface that is substantially sound reflective;
the sound waveguide and the IR waveguide are separate and apart waveguides offset side-by-side;
the earpiece further comprises a main body, for housing the speaker and the temperature sensor, and a stem; wherein:
the stem is elongate and has a front and a rear;
the rear of the stem is securely attached to the main body; and
the one or more waveguides span the length of the stem;
the speaker is laterally offset from the stem, with the speaker face angled with respect to a centerline of the stem so that the speaker face is not directly pointed towards the stem along a line parallel to the centerline of the stem;
the temperature sensor is laterally offset from the stem, with the temperature sensor face angled with respect to a centerline of the stem so that the temperature sensor face is not directly pointed towards the stem along a line parallel to the centerline of the stem; and
the IR waveguide and the sound waveguide extend essentially parallel to each other for most of their lengths, with only a rear portion of the sound waveguide curving to orient with the angled, offset face of the speaker and only a rear portion of the IR waveguide curving to orient with the angled, offset face of the temperature sensor.
|Yes||-||-||-||-||-||-||Yes||-||-||-||-||-||Yes||Yes||-||-||-||-||Yes||Yes||Yes||-||-||-||-||-||-||-||-||-||-||2012-02-17||2015-12-15||2012-02-17||-||A61B000500 | A61B000501||A61B000501 | A61B00056817||Larsen, Christopher Scott | Padmanabhan, Aravind | Humphrey, Christopher | Muggleton, Neal||Larsen Christopher Scott | Padmanabhan Aravind | Humphrey Christopher | Muggleton Neal | Honeywell Int Inc||Honeywell Int Inc||0||US9211069B2 | US20130218022A1|
|11||US9208773B2||Headset Noise-Based Pulsed Attenuation||A headset having talk-through microphones incorporates an audio circuit that compresses a signal representing sounds detected by the talk-through microphones in response to the audio circuit detecting the onset of a peak (positive and/or negative) in the signal that exceeds a predetermined voltage level (positive and/or negative voltage level, perhaps a predetermined magnitude of voltage from a zero voltage level), and that does so with a rate of change in voltage level that exceeds a predetermined rate of change in voltage level, the degree of compression possibly being a compression to or near a zero amplitude (perhaps to or near a zero voltage level) and the duration of the compression possibly being controlled by a timing circuit set to a predetermined period of time that may be retriggerable while amidst the predetermined period of time.||1. A method of controlling sounds acoustically output by an acoustic driver disposed within a casing of an earpiece of a headset, the method comprising: |
compressing a signal representing sounds detected by a microphone of the headset that is acoustically coupled to the environment external to the casing in response to detecting an onset of a peak in the signal that exceeds a predetermined voltage level and that has a rate of change in voltage level that exceeds a predetermined rate of change, and
reducing a gain of the signal in response to detecting speech sounds of a user of the headset detected by a noise-canceling communications microphone that is disposed on the headset towards the vicinity of the user's mouth.
|Yes||-||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||Yes||-||-||Yes||-||-||-||-||-||-||-||-||-||-||2012-03-01||2015-12-08||2011-12-23||-||G10K0011178 | H04R000110 | H04R0005033||H04R0003002 | G10K00111782 | H04R000110 | H04R00011083 | H04R00011041 | H04R0005033 | H04R2201107 | H04R242007 | H04R246001||Yamkovoy, Paul G.||Yamkovoy Paul G | Bose Corp||Bose Corp||0||US9208773B2 | CN104012110A | CN104221397A | EP2795921A1 | EP2795921B1 | EP2820860A1 | JP2015513855A | US20130163775A1 | US20130163776A1 | US20150245136A1 | US9208772B2 | WO2013095839A1 | WO2013130463A1|
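To illustrate the pulsed-attenuation behavior described in the US9208773B2 abstract, here is a minimal sketch. All parameter values (thresholds, hold length) are assumptions for illustration, not values from the patent: a sample is treated as a peak onset when its magnitude exceeds a voltage threshold and its sample-to-sample slope exceeds a rate-of-change threshold, after which the signal is compressed to zero for a retriggerable hold period.

```python
PEAK_THRESHOLD = 0.8      # assumed magnitude threshold (normalized volts)
RATE_THRESHOLD = 0.5      # assumed per-sample rate-of-change threshold
HOLD_SAMPLES = 4          # assumed retriggerable hold period (timing circuit)

def attenuate_peaks(samples):
    out = []
    hold = 0
    prev = 0.0
    for s in samples:
        # onset: level AND rate-of-change both exceed their thresholds
        onset = abs(s) > PEAK_THRESHOLD and abs(s - prev) > RATE_THRESHOLD
        if onset:
            hold = HOLD_SAMPLES          # retrigger the timing circuit
        if hold > 0:
            out.append(0.0)              # compress to (near) zero amplitude
            hold -= 1
        else:
            out.append(s)
        prev = s
    return out

print(attenuate_peaks([0.1, 0.2, 0.1]))                          # unchanged
print(attenuate_peaks([0.1, 1.5, 1.4, 0.2, 0.1, 0.05, 0.1]))    # peak gated
```

Note how the slow-rising quiet signal passes through untouched, while the sharp impulse is gated for the hold period.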
|12||US9208769B2||Hybrid Adaptive Headphone||An adaptive noise-cancelling headphone including an earcup housing having a driver for outputting sound to a user positioned therein. The headphone further including an active noise control assembly. The active noise control assembly may include an ambient microphone capable of detecting an ambient noise outside of the housing and an error microphone capable of detecting an earcup noise inside of the housing. Based on the detected noise, active noise cancellation within the headphone is either enabled or disabled. The headphone may further include a passive noise control assembly. The passive noise control assembly may include an acoustic valve associated with an acoustic vent formed within the earcup housing. The acoustic valve is capable of being modified between an open configuration to decrease sound attenuation and a closed configuration to increase sound attenuation in response to the detected ambient noise so as to improve an acoustic performance of the earcup.||1. An adaptive noise-cancelling headphone comprising: |
an earcup comprising an earcup housing having a front portion defining an inner chamber dimensioned to encircle a user's ear, a back portion defining an outer chamber and a mid wall separating the inner chamber from the outer chamber;
a driver positioned within the mid wall for outputting sound to the inner chamber and in a direction of a user's ear;
an active noise control assembly integrated with the earcup housing, the active noise control assembly having an ambient microphone operable to detect an ambient sound outside of the earcup housing and an error microphone operable to detect an earcup sound inside of the earcup housing; and
a passive noise control assembly integrated with the earcup housing, the passive noise control assembly having an acoustic valve associated with an acoustic vent that opens to the outer chamber, the acoustic valve operable to be modified between an open configuration to decrease ambient sound attenuation within the earcup housing and a closed configuration to increase ambient sound attenuation within the earcup housing in response to the detected ambient sound.
|Yes||Yes||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||Yes||-||-||Yes||-||-||-||-||-||-||-||-||-||-||2012-12-18||2015-12-08||2012-12-18||-||G10K001100 | G10K001116 | G10K0011178||G10K001116 | G10K0011178 | G10K22101081 | G10K22103026||Azmi, Yacine||Apple Inc||Apple Inc||0||US9208769B2 | US20140169579A1|
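The hybrid active/passive control in US9208769B2 can be sketched as a simple state decision; the switch-over level below is an assumed placeholder, not the patented control law: loud ambient sound enables active cancellation and closes the acoustic valve for maximum attenuation, while quiet surroundings disable ANC and open the vent.

```python
AMBIENT_THRESHOLD_DB = 65.0   # assumed switch-over level, for illustration only

def control_state(ambient_db):
    """Return (anc_enabled, valve_position) for a measured ambient level."""
    if ambient_db >= AMBIENT_THRESHOLD_DB:
        return True, "closed"    # active cancellation + passive attenuation
    return False, "open"         # vent open: less attenuation, lower power

print(control_state(80.0))   # (True, 'closed')
print(control_state(40.0))   # (False, 'open')
```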
|13||US9198585B2||Mobile Terminal And Method Of Measuring Bioelectric Signals Thereof||A mobile terminal and a method of measuring a bioelectric signal thereof are provided. When the mobile terminal enters a call mode, a user's pulse wave data are acquired using a plurality of electrodes provided in a body of the mobile terminal or a body of an earphone.||1. A mobile terminal comprising: |
a proximity sensor disposed at a surface of the mobile terminal and configured to detect an object approaching the surface of the mobile terminal;
a plurality of electrodes disposed at the surface of the mobile terminal;
a pulse wave sensing unit configured to obtain a pulse wave signal through the plurality of electrodes; and
a controller configured to:
provide a control signal, for activating the pulse wave sensing unit, to the pulse wave sensing unit when the mobile terminal is in a call mode and the object is detected through the proximity sensor;
control the pulse wave sensing unit to obtain the pulse wave signal when the pulse wave sensing unit receives the control signal;
acquire at least one of a pulse wave data, a heart rate, and a heartbeat cycle based on the pulse wave signal;
determine whether a user's health state is abnormal considering that the acquired at least one of the pulse wave data, the heart rate, and the heartbeat cycle is deviated from a first preset reference,
output a notification for warning abnormality of the user's health state through the mobile terminal and transmit the notification to a call party, when the user's health state is abnormal;
transmit the user's health state to a preset another party using a phone number and an e-mail address of the preset another party when a preset cycle arrives;
store position information of the mobile terminal and the user's health state coupled to the position information;
recommend or provide contents of a specific kind of help for the user's stability or relaxation, when the user's health state is abnormal and the call mode is terminated;
recommend specific restaurant position information related to a stored good health state of the user when the call mode is terminated and an application for searching restaurant position information is executed;
determine a user's excitement based on the acquired pulse wave data;
output an alarm for warning that the user is in an excited state, when the determined excitement is equal to or greater than a preset level;
and terminate the call mode when the determined excitement is equal to or greater than the preset level;
wherein the proximity sensor is activated when the mobile terminal is in the call mode.
|Yes||Yes||-||-||-||-||-||-||-||-||-||-||Yes||Yes||-||Yes||-||-||-||-||-||Yes||-||-||-||-||-||-||-||-||-||-||2012-02-08||2015-12-01||2011-06-29||-||A61B0005024 | A61B000500 | A61B00050245 | A61B000516 | H04M0001725 | H04M000160||A61B000502438 | A61B00050245 | A61B0005165 | A61B00056898 | H04M000172569 | H04M00016058||Lim, Gukchan | Park, Sangmo | Kim, Seonghyok | Lee, Seehyung||Lim Gukchan | Park Sangmo | Kim Seonghyok | Lee Seehyung | Lg Electronics Inc||Lg Electronics Inc||0||US9198585B2 | CN102846314A | CN102866843A | EP2540220A1 | EP2540221A1 | KR2013007117A | KR2013028570A | KR2013055729A | US20130005303A1 | US20130005310A1 | US20150312669A1 | US9089270B2|
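The abnormality check recited in the US9198585B2 claim (deviation of the acquired heart rate from a preset reference, triggering a warning) can be sketched as follows. The 50–110 bpm reference band and the warning text are assumptions for illustration:

```python
REFERENCE_BPM = (50, 110)    # assumed preset reference range, not from patent

def assess_health(heart_rate_bpm):
    """Compare acquired heart rate to the preset reference; warn on deviation."""
    low, high = REFERENCE_BPM
    abnormal = heart_rate_bpm < low or heart_rate_bpm > high
    notification = "warning: abnormal health state" if abnormal else None
    return abnormal, notification

print(assess_health(72))    # within the reference band: no notification
print(assess_health(140))   # deviates from the reference: warning issued
```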
|14||US9196261B2||Voice Activity Detector (Vad)—Based Multiple-Microphone Acoustic Noise Suppression||Acoustic noise suppression is provided in multiple-microphone systems using Voice Activity Detectors (VAD). A host system receives acoustic signals via multiple microphones. The system also receives information on the vibration of human tissue associated with human voicing activity via the VAD. In response, the system generates a transfer function representative of the received acoustic signals upon determining that voicing information is absent from the received acoustic signals during at least one specified period of time. The system removes noise from the received acoustic signals using the transfer function, thereby producing a denoised acoustic data stream.||1. A method for removing noise from acoustic signals, comprising: |
receiving from a plurality of microphones, a plurality of acoustic signals;
receiving information on a vibration of human tissue associated with human voicing activity from a tissue vibration detector in physical contact with the human tissue, the tissue vibration detector comprises a skin surface microphone (SSM) of a voice activity detector (VAD) device included in a wireless earpiece or a wireless headset, the SSM including a covering operative to change an impedance of a microphone of the SSM;
generating at least one first transfer function representative of the plurality of acoustic signals upon determining that voicing information is absent from the plurality of acoustic signals for at least one specified period of time; and
removing noise from the plurality of acoustic signals using the at least one first transfer function to produce at least one denoised acoustic data stream.
|Yes||-||-||-||-||Yes||-||-||Yes||-||-||-||Yes||-||-||-||-||-||Yes||-||Yes||Yes||-||-||Yes||-||-||-||-||-||-||-||2011-02-28||2015-11-24||2000-07-19||-||G10K001116 | G10L001520 | G10L002102 | G10L00210208 | G10L001102 | G10L001902 | G10L00210216 | G10L002578||G10L002102 | G10L00210208 | G10L00190204 | G10L002578 | G10L202102082 | G10L202102161 | G10L202102165 | G10L202102168||Burnett, Gregory C. | Breitfeller, Eric F.||Burnett Gregory C | Breitfeller Eric F | Aliphcom||Aliphcom Inc||0||US9196261B2 | AU200176955A | AU2002359445A1 | AU2003223359A1 | AU2003263733A1 | AU2003263733A8 | AU2009308442A1 | AU2011248283A1 | AU2011248297A1 | AU2011279009A1 | AU2012229071A1 | CA2416926A1 | CA2448669A1 | CA2465552A1 | CA2477767A1 | CA2479758A1 | CA2741652A1 | CA2798282A1 | CA2798512A1 | CA2804638A1 | CA2830410A1 | CN101779476A | CN101779476B | CN102282865A | CN1443349A | CN1513278A | CN1589127A | CN1643571A | CN203086710U | CN203242334U | CN203351200U | CN203435060U | CN203811527U | EP1301923A2 | EP1415505A1 | EP1480589A1 | EP1483591A2 | EP1497823A1 | EP2165564A1 | EP2165564A4 | EP2353302A1 | EP2567377A1 | EP2567553A1 | EP2594059A1 | EP2686971A2 | EP2686971A4 | JP2004509362A | JP2005503579A | JP2005520211A | JP2005522078A | JP2005529379A | JP2011203755A | JP2013178570A | KR1402551B1 | KR1434071B1 | KR2003076560A | KR2004030638A | KR2004077661A | KR2004096662A | KR2004101373A | KR2011008333A | KR2011025853A | KR2012081639A | KR2012091454A | KR936093B1 | KR992656B1 | TW200304119A | TW200305854A | TW200425763A | TWI281354B | US20020039425A1 | US20020099541A1 | US20020198705A1 | US20030128848A1 | US20030179888A1 | US20030228023A1 | US20040133421A1 | US20040249633A1 | US20070233479A1 | US20090003623A1 | US20090003624A1 | US20090003625A1 | US20090003626A1 | US20090003640A1 | US20090010449A1 | US20090010450A1 | US20090010451A1 | US20090022350A1 | US20100128881A1 | US20100128894A1 | US20100278352A1 | US20100280824A1 | US20110026722A1 | US20110051950A1 | US20110051951A1 | US20120059648A1 
| US20120184337A1 | US20120207322A1 | US20120230511A1 | US20120230699A1 | US20120288079A1 | US20130211830A1 | US20140140524A1 | US20140140527A1 | US20140177860A1 | US20140185824A1 | US20140185825A1 | US20140188467A1 | US20140286519A1 | US20140294208A1 | US20140328496A1 | US20140328497A1 | US20140372113A1 | US20150288823A1 | US20150319527A1 | US7246058B2 | US7433484B2 | US8019091B2 | US8130984B2 | US8254617B2 | US8280072B2 | US8321213B2 | US8326611B2 | US8452023B2 | US8467543B2 | US8477961B2 | US8488803B2 | US8494177B2 | US8503686B2 | US8503691B2 | US8503692B2 | US8682018B2 | US8699721B2 | US8731211B2 | US8837746B2 | US8838184B2 | US8942383B2 | US9066186B2 | US9099094B2 | WO2002007151A2 | WO2002007151A3 | WO2002098169A1 | WO2003083828A1 | WO2003096031A2 | WO2003096031A3 | WO2003096031A9 | WO2004056298A1 | WO2004068464A2 | WO2004068464A3 | WO2005029468A1 | WO2008157421A1 | WO2009003180A1 | WO2010048635A1 | WO2011002823A1 | WO2011140096A1 | WO2011140110A1 | WO2012009689A1 | WO2012125873A2 | WO2012125873A3|
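A much-simplified sketch of the VAD-gated denoising idea in US9196261B2 follows. This is plain per-bin noise-floor subtraction, not the patented transfer-function (Pathfinder) method: frames the VAD flags as unvoiced update a running noise profile, and voiced frames are denoised by subtracting that profile, floored at zero.

```python
def denoise_stream(frames, vad_flags):
    """frames: equal-length magnitude spectra; vad_flags: True means voiced."""
    n_bins = len(frames[0])
    noise = [0.0] * n_bins
    seen = 0
    out = []
    for frame, voiced in zip(frames, vad_flags):
        if not voiced:
            seen += 1
            # running average of unvoiced frames -> noise profile
            noise = [n + (f - n) / seen for n, f in zip(noise, frame)]
            out.append([0.0] * n_bins)          # pure noise: emit silence
        else:
            # subtract the learned profile from the voiced frame, floor at 0
            out.append([max(f - n, 0.0) for f, n in zip(frame, noise)])
    return out

frames = [[0.2, 0.1], [0.2, 0.1], [1.0, 0.6]]
flags = [False, False, True]
print(denoise_stream(frames, flags)[-1])   # noise profile removed from voice
```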
|15||US9191744B2||Intelligent Ambient Sound Monitoring System||A system and method for interjecting ambient background sounds into a set of headphones is provided. The system monitors an ambient sound environment and compares the ambient sound environment to a preset set of sound characteristics (e.g., frequency signatures, amplitudes and durations) in order to detect important or critical background sounds (e.g., alarm, horn, directed vocal communications, crying baby, doorbell, telephone, etc.). When a critical background sound is detected, the system interjects either a notification signal or a portion of the ambient background into the audio stream, thus alerting a user of a potentially important sound or event occurring within their immediate vicinity.||1. An ambient sound monitoring system, comprising: |
a microphone, said microphone monitoring an ambient sound environment;
a set of headphones; and
a processor, said processor receiving a microphone output from said microphone, wherein said processor compares said microphone output to a preset set of sound characteristics and identifies critical background sounds within said ambient sound environment, said critical background sounds corresponding to a match between said microphone output and said preset set of sound characteristics, wherein said processor outputs an audio notification to said set of headphones only when said critical background sounds are identified, wherein said preset set of sound characteristics comprises at least one frequency signature, and wherein said audio notification is selected from the group consisting of an alarm signal and at least a portion of said ambient sound environment.
|-||Yes||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||Yes||-||-||Yes||-||-||-||-||-||-||-||-||-||-||2012-08-09||2015-11-17||2012-08-09||-||H03G000320 | H04R000110 | H04R000504||H04R000504 | H04R00011083 | H04R242001 | H04R246001||Anderson, Jeffrey Steven||Anderson Jeffrey Steven | Logitech Europ Sa||Logitech International S.A.||1||US9191744B2 | CN103581803A | DE102013211056A1 | US20140044269A1|
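The monitoring loop of US9191744B2 (comparing the ambient microphone output against preset frequency signatures and interjecting a notification on a match) can be sketched with a naive single-bin DFT. The signature set, threshold, and sample rate below are assumed illustration values:

```python
import math

SAMPLE_RATE = 8000
SIGNATURES = {"alarm": 1000.0, "doorbell": 600.0}   # assumed critical tones
DETECT_THRESHOLD = 0.3                               # assumed magnitude floor

def bin_magnitude(samples, freq_hz):
    """Normalized magnitude of a single DFT bin at freq_hz."""
    re = sum(s * math.cos(2 * math.pi * freq_hz * i / SAMPLE_RATE)
             for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
             for i, s in enumerate(samples))
    return 2 * math.hypot(re, im) / len(samples)

def detect_critical(samples):
    """Return the first matching signature name, else None."""
    for name, freq in SIGNATURES.items():
        if bin_magnitude(samples, freq) > DETECT_THRESHOLD:
            return name        # would interject a notification into the stream
    return None

alarm = [math.sin(2 * math.pi * 1000 * i / SAMPLE_RATE) for i in range(800)]
print(detect_critical(alarm))    # matches the 'alarm' signature
```

A real system would also check amplitude and duration, as the abstract notes; this sketch keeps only the frequency-signature match.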
|16||US9191733B2||Headphone Apparatus And Sound Reproduction Method For The Same||A headphone apparatus includes sound reproduction units which respectively reproduce sound signals and are arranged so as to be separated from ear auricles of a headphone user, wherein each of the sound reproduction units is configured by a speaker array including a plurality of speakers.||1. A headphone apparatus comprising: |
sound reproduction units which respectively reproduce sound signals and are arranged so as to be separated from ear auricles of a headphone user; and
a head motion detecting unit which detects a state of a head of the headphone user,
wherein each of the sound reproduction units is configured by a speaker array including a plurality of speakers, and
wherein an orientation of a sound image formed by the reproduced sound signals is controlled, based on the detected state of the head of the headphone user in relation to a location of an object or a visual content that is associated with the reproduced sound signals and that is being viewed by the headphone user.
|-||Yes||-||-||-||Yes||Yes||-||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||Yes||-||-||-||-||-||-||-||2012-02-16||2015-11-17||2011-02-25||-||H04S000700 | H04R000110 | H04R000140 | H04R000312||H04R00011091 | H04R000140 | H04R000312 | H04R243020 | H04S000730 | H04S240011||Yamada, Yuuji | Kon, Homare||Yamada Yuuji | Kon Homare | Sony Corp||Sony Corp||0||US9191733B2 | CN102651831A | EP2493211A2 | EP2493211A3 | EP2493211B1 | JP05716451B2 | JP2012178748A | KR2012098429A | US20120219165A1|
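The head-tracked image control claimed in US9191733B2 amounts to compensating the rendered sound-image azimuth for the detected head rotation, so the image stays anchored to the viewed object. A minimal sketch (the speaker-array renderer itself is out of scope; angles in degrees):

```python
def image_azimuth(object_azimuth_deg, head_yaw_deg):
    """Azimuth to render, compensated for head yaw, wrapped to [-180, 180)."""
    return (object_azimuth_deg - head_yaw_deg + 180.0) % 360.0 - 180.0

print(image_azimuth(30.0, 0.0))    # 30.0: head straight, image at the object
print(image_azimuth(30.0, 30.0))   # 0.0: head turned to object, image ahead
```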
|17||US9190071B2||Noise Suppression Device, System, And Method||A noise-suppression assembly of a mechanical drive system having a rotational frequency includes an audio filter unit configured to receive a first audio signal and a timing signal of the mechanical drive system. The audio filter unit generates a noise-cancellation signal based on a frequency of the timing signal to suppress a noise generated by the mechanical drive system and to apply the noise-cancellation signal to the first audio signal to produce a filtered first audio signal. The frequency of the timing signal is based on the rotational frequency of the mechanical drive system.||1. A noise-suppression assembly of a mechanical drive system having a rotational frequency, the mechanical drive system including a rotor of a helicopter, the assembly comprising: |
an audio filter unit configured to receive a first audio signal and a timing signal of the mechanical drive system, the audio filter unit configured to generate a noise-cancellation signal based on a frequency of the timing signal, said frequency based on the rotational frequency of the rotor, to suppress a noise generated by the mechanical drive system and to apply the noise-cancellation signal to the first audio signal to produce a filtered first audio signal, the frequency based on a signal obtained from at least one sensor located on the rotor, wherein the sensor is a proximity sensor configured to detect the position of the rotor relative to a fixed position.
|Yes||Yes||-||-||-||-||-||Yes||-||-||-||-||Yes||-||-||Yes||-||-||Yes||-||-||Yes||-||-||-||-||-||-||-||-||-||-||2012-09-14||2015-11-17||2012-09-14||-||G10K0011178 | G10L00210208 | G10L00210216||G10L00210208 | G10K22101081 | G10K2210121 | G10K2210128 | G10K22101281 | G10L00210216 | G10L202102085||Butts, Donald J. | Welsh, William A. | Millott, Thomas A. | Drost, Stuart K.||Butts Donald J | Welsh William A | Millott Thomas A | Drost Stuart K | Sikorsky Aircraft Corp||Sikorsky Aircraft Corp||0||US9190071B2 | US20140079234A1|
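The timing-signal-driven suppression in US9190071B2 can be sketched with a single-tone model (an assumption for illustration, not the patented filter design): the rotor timing from the proximity sensor yields a blade-pass frequency, and an inverted tone at that frequency is added to the audio to suppress the rotor line.

```python
import math

SAMPLE_RATE = 8000

def cancel_rotor_tone(audio, rotor_hz, blades, amplitude):
    """Subtract an assumed-known rotor tone at the blade-pass frequency."""
    bpf = rotor_hz * blades                     # blade-pass frequency (Hz)
    anti = [-amplitude * math.sin(2 * math.pi * bpf * i / SAMPLE_RATE)
            for i in range(len(audio))]
    return [a + n for a, n in zip(audio, anti)]

rotor_hz, blades, amp = 5.0, 4, 0.7             # 20 Hz blade-pass tone
noise = [amp * math.sin(2 * math.pi * 20 * i / SAMPLE_RATE) for i in range(400)]
speech = [0.1] * 400
filtered = cancel_rotor_tone([s + n for s, n in zip(speech, noise)],
                             rotor_hz, blades, amp)
print(max(abs(f - 0.1) for f in filtered))   # residual is near zero
```

In practice the tone's amplitude and phase must be estimated adaptively; here they are assumed known to keep the sketch short.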
|18||US9190043B2||Assisting Conversation In Noisy Environments||A portable system for enhancing communication between at least two users in proximity to each other includes first and second noise-reducing headsets, each headset including an electroacoustic transducer for providing sound to a respective user's ear and a voice microphone for detecting sound of the respective user's voice and providing a microphone input signal. A first electronic device integral to the first headset and in communication with the second headset generates a first side-tone signal based on the microphone input signal from the first headset, generates a first voice output signal based on the microphone input signal from the first headset, combines the first side-tone signal with a first far-end voice signal associated with the second headset to generate a first combined output signal, and provides the first combined output signal to the first headset for output by the first headset's electroacoustic transducer.||1. A portable system for enhancing communication between at least two users in proximity to each other, comprising: |
first and second noise-reducing headsets, each headset comprising:
an electroacoustic transducer for providing sound to a respective user's ear, and
a voice microphone for detecting sound of the respective user's voice and providing a microphone input signal; and
a first electronic device integral to the first headset and in communication with the second headset, configured to:
generate a first side-tone signal based on the microphone input signal from the first headset,
generate a first voice output signal based on the microphone input signal from the first headset,
receive a first far-end voice signal from the second headset,
combine the first side-tone signal with the first far-end voice signal to generate a first combined output signal, and
provide the first combined output signal to the first headset for output by the first headset's electroacoustic transducer,
wherein the first and second headsets each include a noise cancellation circuit including a noise cancellation microphone for providing anti-noise signals to the respective electroacoustic transducer based on the noise cancellation microphone's output, and
the first electronic device is configured to provide the first combined output signal to the first headset for output by the first headset's electroacoustic transducer in combination with the anti-noise signals provided by the first headset's noise cancellation circuit.
|Yes||Yes||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||Yes||-||-||Yes||-||-||-||-||-||-||-||-||-||-||2013-08-27||2015-11-17||2013-08-27||-||H04R000110 | G10K001100 | H04R000300||G10K0011002 | H04R00011083 | H04R00011091 | H04R0003005||Krisch, Kathleen S. | Isabelle, Steven H.||Bose Corp||Bose Corp||0||US9190043B2 | US20150063584A1 | WO2015031004A1|
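The mixing step in US9190043B2 (combining a side-tone derived from the user's own microphone with the far-end voice from the other headset) reduces to a weighted sum; the gain values below are assumed illustration values:

```python
SIDETONE_GAIN = 0.3    # assumed: quiet self-monitoring level
FAR_END_GAIN = 1.0     # assumed: full-level far-end voice

def combined_output(own_mic, far_end_voice):
    """First combined output signal: side-tone mixed with far-end voice."""
    return [SIDETONE_GAIN * s + FAR_END_GAIN * f
            for s, f in zip(own_mic, far_end_voice)]

print(combined_output([1.0, 0.5], [0.2, -0.4]))
```

Per the claim, this combined signal is then output together with the anti-noise produced by the headset's noise cancellation circuit.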
|19||US9186277B2||External Ear Canal Pressure Regulation System||An external ear canal pressure regulation device including a fluid flow generator and an earpiece having a first axial earpiece conduit fluidicly coupled to the fluid flow generator, whereby the earpiece has a compliant earpiece external surface configured to sealably engage an external ear canal as a barrier between an external ear canal pressure and an ambient pressure.||1. An external ear canal pressure regulation device comprising: |
a first fluid flow generator capable of generating a first fluid flow;
a first earpiece having a first earpiece axial conduit which communicates between first earpiece first and second ends, said first earpiece axial conduit fluidicly coupled to said first fluid flow generator, said first earpiece having a first earpiece compliant external surface configured to sealably engage a first external ear canal of a first ear as a first barrier between a first external ear canal pressure and an ambient pressure;
said first fluid flow generator capable of generating a first pressure differential between said first external ear canal pressure and said ambient pressure, said first pressure differential comprising a first pressure differential amplitude;
a first pressure sensor which generates a first pressure sensor signal which varies based upon change in said first pressure differential; and
a first pressure sensor signal analyzer comprising:
a first pressure differential amplitude comparator which compares a pre-selected first pressure differential amplitude to said first pressure differential amplitude, said first pressure sensor signal analyzer generating a first pressure differential amplitude compensation signal to which a first fluid flow generator controller is responsive to control said first fluid flow generator to achieve said pre-selected first pressure differential amplitude.
|-||Yes||-||-||Yes||-||-||-||-||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||-||Yes||-||-||-||-||-||2015-05-01||2015-11-17||2013-06-28||-||A61F001100 | A61F001112 | H04R000142||A61F001112 | H04R000142||George, David | Buckler, George | Sullivan, David Brice||Gbs Ventures Llc||Gbs Ventures Partner Ltd||0||US9186277B2 | AU2014302187A1 | CA2894410A1 | CA2915821A1 | TW201517884A | US20150000678A1 | US20150003644A1 | US20150230989A1 | US9039639B2 | WO2014210457A1 | WO2014210457A4 | WO2015009421A1|
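The comparator loop in the US9186277B2 claim (compare the sensed pressure differential to a pre-selected amplitude, and drive the fluid flow generator with a compensation signal) can be sketched as a proportional controller. The setpoint and gain are assumed values, not from the patent:

```python
TARGET_DIFFERENTIAL = 10.0   # assumed pre-selected amplitude (arbitrary units)
GAIN = 0.5                   # assumed proportional gain

def regulate(initial_differential, steps):
    """Step the comparator/generator loop toward the pre-selected amplitude."""
    p = initial_differential
    for _ in range(steps):
        compensation = GAIN * (TARGET_DIFFERENTIAL - p)   # comparator output
        p += compensation        # generator responds to compensation signal
    return p

print(round(regulate(0.0, 10), 3))   # converges toward 10.0
```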
|20||US9186071B2||Unlocking A Body Area Network||Disclosed is an apparatus, system, and method to unlock a body area network (BAN) of a patient and to transmit medical data about the patient. The BAN, under the control of a body area controller (BAC), may be unlocked based upon a pre-defined patient action performed by the patient and the BAN may then be connected to a wireless device. The BAN medical data of the patient may then be transmitted by the wireless device.||1. A method of unlocking a body area network (BAN) of a patient to transmit medical data comprising: |
unlocking the BAN based upon a pre-defined patient action performed by the patient, wherein the pre-defined patient action to unlock the BAN includes pressing against a pre-designated part of the body;
connecting the BAN to a wireless device; and
transmitting BAN medical data of the patient by the wireless device.
|-||Yes||-||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||Yes||Yes||-||-||Yes||Yes||Yes||-||-||-||-||-||-||2012-01-27||2015-11-17||2012-01-27||-||G08B000108 | A61B00050205 | A61B000500 | G06F001900||A61B00050024 | A61B00050022 | A61B000502055 | A61B0005747 | A61B00057475 | G06F00193418 | G06F0019345 | G06F00216245 | H04W001208 | A61B25600266||Moriarty, Anthony | Flanagan, Jessica M. | Mcdonald, Cameron A.||Moriarty Anthony | Flanagan Jessica M | Mcdonald Cameron A | Qualcomm Inc||Qualcomm Inc||0||US9186071B2 | CN104039217A | EP2806783A1 | JP2015510183A | KR2014128348A | US20130194092A1 | US20160015270A1 | WO2013112978A1|
|21||US9185488B2||Control Parameter Dependent Audio Signal Processing||Detection from sensors may be used to configure or modify the configuration of audio directional processing to improve user safety and/or communication by processing at least one control parameter dependent on at least one sensor input parameter, processing at least one audio signal dependent on the processed at least one control parameter, and outputting the processed at least one audio signal. |
The invention concerns an apparatus comprising at least one processor and at least one memory containing computer program code. The at least one memory and the program code are configured, with the at least one processor, to cause the apparatus to process at least one control parameter dependent on at least one sensor input parameter, to process at least one audio signal dependent on the processed at least one control parameter, and to output the processed at least one audio signal.
|1. A method comprising: |
generating at least two sensor input parameters from a plurality of sensors, where the at least two sensor input parameters are different types of sensor input parameters;
generating by a control processor at least one control parameter dependent on the at least two sensor input parameters;
selecting a control parameter modifying mode by a context processor from a plurality of control parameter modifying modes, where at least one of the modes is configured to have the at least one control parameter from the control processor modified, and where the selecting of the control parameter modifying mode by the context processor is based, at least partially, upon an input from at least one of the plurality of sensors;
processing at least one audio signal dependent on the generated at least one control parameter and the selected control parameter modifying mode, wherein processing the at least one audio signal comprises beamforming the at least one audio signal; and
outputting the processed at least one audio signal associated with the selected control parameter modifying mode.
|Yes||-||-||Yes||Yes||-||-||-||-||-||-||-||Yes||-||Yes||-||-||-||Yes||-||-||Yes||-||Yes||Yes||Yes||-||Data Processing||-||-||-||Yes||2012-05-23||2015-11-10||2009-11-30||-||H04R000300||H04R000504 | H04R0001406 | H04R0003005 | H04R0005033 | H04S0001007 | H04R2201403 | H04R220312 | H04R246001 | H04S240013 | H04S240015||Karkkainen, Asta Maria | Virolainen, Jussi||Karkkainen Asta Maria | Virolainen Jussi | Nokia Technologies Oy||Nokia Corp||0||US9185488B2 | CA2781702A1 | CN102687529A | EP2508010A1 | US20120288126A1 | US20160014517A1 | WO2011063857A1|
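The US9185488B2 pipeline can be sketched end to end; mode names, thresholds, and the mixing weights below are assumptions, and a trivial two-microphone weighted mix stands in for the beamforming the claim recites: two different sensor inputs feed a control processor, a context processor selects a modifying mode (e.g. a safety mode while the user is moving), and the audio processing depends on both.

```python
def control_processor(gaze_deg, motion_speed):
    """Control parameters from two different sensor input types."""
    return {"steer_deg": gaze_deg, "moving": motion_speed > 1.0}

def context_processor(params):
    # safety mode overrides narrow steering while the user is in motion
    return "safety" if params["moving"] else "focused"

def process_audio(mic_a, mic_b, params, mode):
    # steer_deg would drive array delays in a real beamformer; elided here
    if mode == "safety":
        weight = 0.5          # wide, even mix: stay aware of surroundings
    else:
        weight = 0.9          # bias toward the steered/front microphone
    return [weight * a + (1.0 - weight) * b for a, b in zip(mic_a, mic_b)]

params = control_processor(gaze_deg=10.0, motion_speed=2.5)
mode = context_processor(params)
print(mode, process_audio([1.0, 1.0], [0.0, 0.0], params, mode))
```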
|22||US9179237B2||Virtual Audio System Tuning||A method of virtually tuning an audio system that incorporates an acoustic compensation system, where the audio system is adapted to play audio signals in a listening environment over one or more sound transducers. The acoustic compensation system has an audio sensor located at a sensor location in the listening environment. The transfer functions from each sound transducer to the audio sensor location are inherent. The method contemplates recording noise at the sensor location, and creating virtual transfer functions from each sound transducer to the sensor location based on the inherent transfer functions from each sound transducer to the sensor location. Audio signals are processed through the virtual sound transducer to sensor location transfer functions. A virtual sensor signal is created by combining the audio signals processed through the virtual sound transducer to sensor location transfer functions with the noise recorded at the sensor location.||1. A method of virtually tuning an audio system that incorporates an acoustic compensation system, where the audio system is adapted to play audio signals in a listening environment using one or more sound transducers, the acoustic compensation system comprising an audio sensor located at a sensor location in the listening environment, wherein transfer functions from each sound transducer to the audio sensor location are inherent, and wherein there are a pair of sound evaluation locations in the listening environment at the approximate location of where the ears of a listener would be, where the sound evaluation locations are different than the sensor location, the method comprising: |
recording noise at the sensor location;
recording noise at both of the sound evaluation locations simultaneously with recording noise at the sensor location;
creating virtual transfer functions for each sound transducer to the sensor location, based on the inherent transfer functions from each sound transducer to the sensor location;
processing audio signals through the virtual sound transducer to sensor location transfer functions; and
creating a virtual sensor signal by combining the audio signals processed through the virtual sound transducer to sensor location transfer functions with the noise recorded at the sensor location.
|-||Yes||-||-||-||Yes||-||-||Yes||-||-||-||Yes||-||-||-||-||-||-||Yes||-||Yes||-||-||-||-||-||-||-||-||-||-||2011-12-16||2015-11-03||2011-12-16||-||G10K001116 | A61F001106 | G10K0011178 | H03B002900 | H04R002900 | H04S000700||H04S000700 | G10K00111788 | H04R002900 | G10K22101082 | G10K22101282 | G10K22103046 | G10K22103048 | G10K22103055 | H04R242001 | H04R249913||Pan, Davis Y. | Rabinowitz, William M. | Kim, Wontak | Greenberger, Hal||Pan Davis Y | Rabinowitz William M | Kim Wontak | Greenberger Hal | Bose Corp||Bose Corp||0||US9179237B2 | CN103988525A | EP2792167A1 | HK1198495A1 | JP2015506155A | US20130156213A1 | WO2013090007A1|
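The virtual-sensor construction in US9179237B2 can be sketched as follows: each transducer's audio is passed through its virtual transducer-to-sensor transfer function (modeled here as a short FIR impulse response, an assumption for illustration) and the results are summed with the noise recorded at the sensor location.

```python
def fir(signal, impulse_response):
    """Direct-form FIR convolution, truncated to the signal length."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, h in enumerate(impulse_response):
            if n - k >= 0:
                acc += h * signal[n - k]
        out.append(acc)
    return out

def virtual_sensor_signal(per_transducer_audio, virtual_tfs, recorded_noise):
    """Sum each transducer's audio through its virtual TF, plus recorded noise."""
    mixed = [0.0] * len(recorded_noise)
    for audio, tf in zip(per_transducer_audio, virtual_tfs):
        for i, v in enumerate(fir(audio, tf)):
            mixed[i] += v
    return [m + n for m, n in zip(mixed, recorded_noise)]

audio = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
tfs = [[0.5], [0.0, 0.25]]           # assumed simple virtual impulse responses
noise = [0.1, 0.1, 0.1]
print(virtual_sensor_signal(audio, tfs, noise))
```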
|23||US9173596B1||Movement Assessment Apparatus And A Method For Providing Biofeedback Using The Same||A movement assessment apparatus configured to provide biofeedback to a user regarding one or more bodily movements executed by the user is disclosed herein. The movement assessment apparatus generally includes a sensing device comprising one or more sensors, a data processing device operatively coupled to the sensing device, and a sensory output device operatively coupled to the data processing device. The data processing device is configured to determine a movement path and/or velocity profile of the body portion of the user using one or more signals from the one or more sensors, to compare the movement path and/or the velocity profile determined for the body portion of the user to a respective baseline movement path and/or velocity profile, and to determine how closely the movement path and/or the velocity profile determined for the body portion of the user conforms to the respective baseline movement path and/or baseline velocity profile.||1. A movement assessment apparatus configured to provide biofeedback to a user regarding one or more bodily movements executed by the user, the movement assessment apparatus comprising: |
at least one sensing device, the at least one sensing device comprising one or more sensors for detecting the motion of a body portion of a user and outputting one or more signals that are generated based upon the motion of the body portion of the user, the at least one sensing device further comprising attachment means for attaching the at least one sensing device to the body portion of the user;
a data processing device operatively coupled to the at least one sensing device, the data processing device configured to receive the one or more signals that are output by the one or more sensors of the at least one sensing device, and to determine executed motion data for an executed motion of the body portion of the user using the one or more signals, the data processing device configured to automatically select a reference motion by comparing the executed motion of the body portion of the user to each of a plurality of reference motions representing a plurality of different activities, the data processing device further configured to: (i) execute an agreement operation by converting the executed motion data to a feedback-agreeing form that agrees with at least one of the dimensions, reference frames, and units of baseline motion data of the reference motion, (ii) execute a comparison operation by comparing the feedback-agreeing form of the executed motion data to the baseline motion data of the reference motion, and (iii) determine how closely the feedback-agreeing form of the executed motion data conforms to the baseline motion data of the reference motion, the data processing device additionally configured to generate an abstract feedback signal based upon the execution of the comparison operation; and
a sensory output device operatively coupled to the data processing device, the sensory output device configured to generate a formed feedback signal for delivery to the user that is based upon the abstract feedback signal, the formed feedback signal comprising at least one of a visual indicator, an audible indicator, and a tactile indicator, and the sensory output device further configured to output the at least one of the visual indicator, the audible indicator, and the tactile indicator to the user in order to provide biofeedback as to conformity of the executed motion data to the baseline motion data of the reference motion.
|Yes||-||Yes||-||-||Yes||Yes||Yes||-||-||-||-||-||Yes||Yes||-||-||Yes||Yes||-||Yes||-||Yes||Yes||Yes||Yes||-||-||-||-||Yes||Yes||2014-06-28||2015-11-03||2014-06-28||-||A61B000500 | A61B000511 | G06F001900||A61B000511 | A61B00050024 | A61B00051122 | A61B0005486 | A61B00056823 | A61B00056824 | A61B00056828 | A61B00056829 | A61B00056895 | A61B00057405 | G06F001900 | A61B00051116 | A61B00057246 | G06F001934 | A61B00051112 | A61B25600214 | A61B25620219 | A61B25620223 | A61B2562029 | A61B256206 | A61B00051126 | A61B00056803 | A61B0005742 | A61B00057455 | A61B25600242||Berme, Necip | Ober, Jan Jakub||Bertec Ltd||Stryker Corporation||0||US9173596B1|
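The comparison operation in the claim above — automatically matching an executed motion against a set of baseline reference motions and scoring conformity — can be sketched roughly as follows. This is an illustrative sketch only: `conformity`, `select_reference`, and the 2-D path representation are assumptions for demonstration, not taken from the patent.

```python
import math

def conformity(executed_path, baseline_path):
    """Mean point-wise distance between an executed movement path and a
    baseline path (lower means closer conformity)."""
    pairs = list(zip(executed_path, baseline_path))
    return sum(math.dist(a, b) for a, b in pairs) / len(pairs)

def select_reference(executed_path, references):
    """Pick the reference motion the executed motion most closely matches,
    analogous to the claim's automatic reference-motion selection."""
    return min(references, key=lambda name: conformity(executed_path, references[name]))

# Toy 2-D movement paths for two candidate activities.
references = {
    "squat": [(0.0, 0.0), (0.0, 1.0)],
    "lunge": [(0.0, 0.0), (1.0, 0.0)],
}
best = select_reference([(0.0, 0.0), (0.0, 0.9)], references)
```

A real implementation would also handle the claim's agreement operation (converting units and reference frames before comparing), which is omitted here.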
|24||US9173190B2||System And Method For Controlling Paging Delay||The disclosure relates to systems and methods for controlling a delay probability distribution associated with receiving a response to a page. The method entails performing a series of page operations, wherein each page operation entails transmitting a page and scanning for a page response. The method further entails adjusting at least one timing parameter associated with performing the series of page operations based on a characteristic of one or more scans for the page performed by the at least one remote device. The characteristic may be the period of periodic page scans performed by the at least one remote device.||1. A method of controlling a delay distribution associated with receiving a response to a page, comprising: |
performing a series of page operations, wherein each page operation comprises transmitting a page and scanning for a page response; and
adjusting at least one timing parameter associated with performing the series of page operations based on a characteristic of occurrences of separate scans for the page performed by at least one remote device, wherein prior to the adjusting, a timing of the page operations is based on another characteristic of occurrences of separate page scans performed by another remote device.
|Yes||-||-||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||Yes||-||Yes||-||-||-||-||-||-||-||-||-||-||-||2012-06-29||2015-10-27||2012-06-29||1001||H04B000700 | H04W006802||H04W006802||Teague, Edward Harrison | Tian, Qingjiang | Julian, David Jonathan | Jia, Zhanfeng||Teague Edward Harrison | Tian Qingjiang | Julian David Jonathan | Jia Zhanfeng | Qualcomm Inc||Qualcomm Inc||0||US9173190B2 | CN104396324A | EP2868149A1 | JP2015523809A | KR2015032566A | US20140004899A1 | WO2014005057A1|
|25||US9173074B2||Personal Hub Presence And Response||Methods, devices, and systems for transmitting convenient messages to a recipient for rendering based on the recipient's device availabilities. A recipient's mobile device may be connected to a personal hub and/or earpiece devices configured to render various incoming communications, such as audio messages and visual messages. The incoming messages may be delivered to the recipient's mobile device and other connected devices that may render the contents of the incoming messages. A delivery confirmation message that describes the receipt and use of incoming messages may be generated and returned to a sender's computing device. In an embodiment, the recipient's devices may generate status information for describing the status of devices to a sender's computing device. In an embodiment, the sender's computing device may generate and transmit outgoing messages formatted based on the received status information and including metadata that instructs the recipient's devices to render message content in particular manners.||1. A method for communicating delivery confirmation information related to received messages by a recipient's mobile device, the method comprising: |
receiving a message in the recipient's mobile device identifying a device coupled to the recipient's mobile device via a short-range wireless communication technology;
obtaining from the received message instructions for rendering the received message on at least one of the recipient's mobile device or the device coupled to the recipient's mobile device via the short-range wireless communication technology, wherein obtaining from the received message instructions for rendering the received message on at least one of the recipient's mobile device or the device coupled to the recipient's mobile device via the short-range wireless communication technology includes decoding the received message to obtain metadata indicating the device on which the sender desires the received message to be rendered and at least one of sound or visual message contents;
determining whether the device indicated in the metadata is coupled to the recipient's mobile device via the short-range wireless communication technology;
providing the at least one of sound or visual message contents to the device indicated in the metadata in response to determining that the device is coupled to the recipient's mobile device;
generating a delivery confirmation message reporting whether the received message was delivered and, if the received message was delivered, a manner in which the received message was delivered; and
transmitting the delivery confirmation message to a sender of the received message.
|Yes||Yes||Yes||-||-||Yes||-||-||-||Yes||-||-||-||-||Yes||-||-||-||-||Yes||-||-||-||Yes||Yes||Yes||-||-||-||-||Yes||Yes||2012-11-27||2015-10-27||2012-05-27||1001||H04B000138 | H04L001258 | H04W000402 | H04W000412 | H04W000420||H04W000412 | H04L00125875 | H04L005130 | H04L005136 | H04W000402 | H04W000420||Miller, Brian F. | Menendez, Jose | Sauhta, Rohit||Qualcomm Inc||Qualcomm Inc||0||US9173074B2 | CN104335612A | EP2856782A2 | KR2015022897A | US20130316746A1 | WO2013180873A2 | WO2013180873A3|
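The routing and confirmation logic in this claim — decode the metadata, deliver the content to the indicated device if it is coupled over the short-range link, otherwise fall back to the handset, and report how delivery happened — might be sketched as below. The dictionary shapes and field names are illustrative assumptions, not the patent's message format.

```python
def route_message(message, coupled_devices):
    """Deliver message content to the device named in the metadata when that
    device is coupled; otherwise render on the mobile device itself. Returns
    a delivery-confirmation record describing the manner of delivery."""
    target = message["metadata"]["device"]
    via = target if target in coupled_devices else "mobile_device"
    return {"delivered": True, "via": via, "content": message["content"]}

# Metadata indicates the earpiece; it is coupled, so delivery goes there.
confirmation = route_message(
    {"metadata": {"device": "earpiece"}, "content": "hello"},
    coupled_devices={"earpiece", "personal_hub"},
)
# No devices coupled: content falls back to the mobile device.
fallback = route_message(
    {"metadata": {"device": "earpiece"}, "content": "hello"},
    coupled_devices=set(),
)
```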
|26||US9173045B2||Headphone Response Optimization||Optimized sound waves presented to the listener by headphones, notwithstanding differences in ear geometry and headphone positioning. A test signal causes an acoustic sensor to receive sound waves actually formed in the listener's ear cavity. A response from the sensor is compared with an expected ear cavity transfer function, from which desired adjustments to the audio signal are determined. The audio signal might be received from an application program, calibrated by an interface software element, and adjusted thereby, before forwarding to the headphones. Calibration might be performed from when the headphones are positioned, or dynamically in response to changes in the transfer function.||1. A method, including the steps of: |
emitting a test sound wave from a headphone into an ear of a listener;
receiving, by a sensor, a response to said test sound wave;
comparing said response to an expected response to said test sound wave, wherein the expected response is associated with a standard ear geometry;
determining differences between said response and said expected response; and
adjusting an input audio signal to the headphone in response to said differences, wherein the input audio signal is corrected to account for a result of comparing said response to the expected response associated with the standard ear geometry.
|-||Yes||-||-||-||-||-||-||Yes||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||-||-||Acoustic Sensor For Providing Appropriate Sound Level According To Ear Geometry||-||-||-||-||2013-02-21||2015-10-27||2012-02-21||1001||H04R002900 | H04R000110 | H04R000502 | H04R0005033||H04R0029002 | H04R00011091 | H04R000502 | H04R0029001 | H04R0005033||Bruss, John | Hogue, Douglas K. | Olson, Alan||Imation Corp||Imation Corp||0||US9173045B2 | US20130216052A1|
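The calibration loop this entry describes — emit a test signal, capture the in-ear response, compare it with the expected standard-ear transfer function, and correct the input audio accordingly — could be approximated per frequency band as below. The per-band division is a simplifying assumption for illustration, not the patent's actual correction method.

```python
import numpy as np

def derive_correction(measured_response, expected_response, eps=1e-9):
    """Per-frequency-band gain correction: boost bands where the measured
    in-ear response falls short of the expected (standard-ear) response,
    attenuate bands where it overshoots."""
    measured = np.asarray(measured_response, dtype=float)
    expected = np.asarray(expected_response, dtype=float)
    return expected / (measured + eps)

def calibrate(input_spectrum, measured_response, expected_response):
    """Apply the derived per-band correction to the input audio spectrum."""
    correction = derive_correction(measured_response, expected_response)
    return np.asarray(input_spectrum, dtype=float) * correction

# Toy example: a 4-band spectrum where this listener's ear cavity
# attenuates band 2 to half the expected level.
expected = np.array([1.0, 1.0, 1.0, 1.0])
measured = np.array([1.0, 1.0, 0.5, 1.0])
corrected = calibrate(np.ones(4), measured, expected)
```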
|27||US9173032B2||Methods Of Using Head Related Transfer Function (Hrtf) Enhancement For Improved Vertical-Polar Localization In Spatial Audio Systems||A method of enhancing vertical polar localization of a head related transfer function (HRTF). The method includes splitting an audio signal and generating left and right output signals by determining a log lateral component of the respective frequency-dependent audio gain that is equal to a median log frequency-dependent audio gain for all audio signals of that channel having a desired perceived source location. A vertical magnitude of the respective audio signal is enhanced by determining a log vertical component of the respective frequency-dependent audio gain that is equal to a product of a first enhancement factor and a difference between the respective frequency-dependent audio gain at the desired perceived source location and the lateral magnitude of the respective audio signal. The output signals are time delayed according to an interaural time delay.||1. A method of enhancing vertical polar localization of a head related transfer function defining a left frequency-dependent audio gain, a right-frequency-dependent audio gain, and an interaural time delay for a plurality of perceived source locations, the method comprising: |
splitting an audio signal into a left audio signal and a right audio signal;
generating a left output signal by:
determining a log lateral component of the left frequency-dependent audio gain that is equal to a median log left frequency-dependent audio gain for all left audio signals having a desired one of the plurality of perceived source locations and applying the log lateral component of the left frequency-dependent audio gain to the left lateral magnitude of the left audio signal; and
determining a log vertical component of the left frequency-dependent audio gain that is equal to a product of a first enhancement factor and a difference between the left frequency-dependent audio gain at the desired one of the plurality of perceived source locations and the left lateral magnitude of the left audio signal and applying the log vertical component of the left frequency-dependent audio gain to the left vertical magnitude of the left audio signal;
generating a right output signal by:
determining a log lateral component of the right frequency-dependent audio gain that is equal to a median log right frequency-dependent audio gain for all right audio signals having the desired one of the plurality of perceived source locations and applying the log lateral component of the right frequency-dependent audio gain to the right lateral magnitude of the right audio signal; and
determining a log vertical component of the right frequency-dependent audio gain that is equal to a product of a second enhancement factor and a difference between the right frequency-dependent audio gain at the desired one of the plurality of perceived source locations and the right lateral magnitude of the right audio signal and applying the log vertical component of the right-frequency-dependent audio gain to the right vertical magnitude of the right audio signal;
time delaying the right output signal with respect to the left output signal in accordance with the interaural time delay; and
delivering the left and right output signals to left and right ears, respectively, of a listener.
|-||Yes||Yes||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||-||-||2013-03-15||2015-10-27||2009-05-20||1001||H04R000504 | H04S000700 | H04S000500||H04R000504 | H04S0007304 | H04R243003 | H04S000500 | H04S242001 | H04S242011||Brungart, Douglas S. | Romigh, Griffin D.||Us Air Force||Us Air Force||0||US9173032B2 | US20130202117A1 | US8428269B1|
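The arithmetic in the claim above decomposes each log HRTF gain into a lateral part (the median log gain across candidate source locations) and a vertical part (the deviation from that median), then scales the vertical part by an enhancement factor before recombining. A single-band, single-channel sketch, with all names invented for illustration:

```python
import numpy as np

def enhanced_gain(log_gains, location_index, enhancement=2.0):
    """Split a set of per-location log HRTF gains (one frequency band, one
    ear) into a lateral component (median across locations) and a vertical
    component (deviation from that median), scale the vertical component by
    the enhancement factor, and recombine."""
    log_gains = np.asarray(log_gains, dtype=float)
    lateral = np.median(log_gains)                  # log lateral component
    vertical = log_gains[location_index] - lateral  # log vertical component
    return lateral + enhancement * vertical         # enhanced log gain

# Toy example: log gains (dB) at five elevations in one frequency band;
# enhancing the highest elevation doubles its deviation from the median.
gains_db = [0.0, 1.0, 2.0, 3.0, 4.0]
out = enhanced_gain(gains_db, location_index=4, enhancement=2.0)
```

In the claim this is done independently for the left and right channels (with possibly different enhancement factors) before the interaural time delay is applied.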
|28||US9172345B2||Personalized Adjustment Of An Audio Device||Described herein are apparatuses, systems and methods that facilitate user adjustment of an audio effect of an audio device to match the hearing sensitivity of the user. The user can tune the audio device with a minimum perceptible level unique to the user. The audio device can adjust the audio effect in accordance with the minimum perceptible level. For example, a loudness level can adjust automatically to ensure that the user maintains a perceptible loudness, adjusting according to environmental noise and according to the minimum perceptible level. Also described herein are apparatuses, systems and methods related to an audio device equipped with embedded audio sensors that can maximize a voice quality while minimizing the effects of noise.||1. A device, comprising: |
a memory configured to store tuning data associated with a tuning process for a user identity in which the device is trained with the tuning data according to a defined hearing level based on an audio frequency control mechanism and an audio level control mechanism associated with the device, and other tuning data generated based on at least one predetermined tuning value that is not associated with the user identity; and
a processor configured to select an audio signal from a plurality of audio signals based on speech data, to repeatedly monitor a noise level associated with environmental noise, and to adjust, in response to a determination that the noise level is above a threshold level, the audio signal selected from the plurality of audio signals according to a plurality of filter bands associated with a digital transformation and based on the tuning data and the other tuning data.
|Yes||-||-||-||-||-||-||-||Yes||-||-||-||-||-||-||-||-||-||Yes||-||-||Yes||-||-||-||-||-||-||-||-||-||-||2011-07-26||2015-10-27||2010-07-27||1001||H04R002900 | H03G000332 | H03G000502||H03G000332 | H03G0005025 | H04R000122 | H04R000304 | H04R243001||Kok, Hui Siew | Sui, Tan Eng||Kok Hui Siew | Sui Tan Eng | Bitwave Pte Ltd||Bitwave Pte Ltd||0||US9172345B2 | US20120027227A1 | US20160020744A1|
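The noise-tracking adjustment described here — repeatedly monitor environmental noise and, once it exceeds a threshold, raise the output so it stays perceptible for the user's tuned hearing level — might look roughly like this. The decibel arithmetic, threshold, and margin are illustrative assumptions, not values from the patent.

```python
def adjusted_output_db(signal_db, noise_db, perceptible_margin_db=10.0,
                       noise_threshold_db=55.0):
    """Keep the output at least perceptible_margin_db above environmental
    noise once the noise rises past the threshold; otherwise leave the
    signal level unchanged."""
    if noise_db > noise_threshold_db:
        return max(signal_db, noise_db + perceptible_margin_db)
    return signal_db

loud_room = adjusted_output_db(60.0, noise_db=70.0)   # raised to stay audible
quiet_room = adjusted_output_db(60.0, noise_db=40.0)  # unchanged
```

A per-user minimum perceptible level, as in the patent, would replace the fixed `perceptible_margin_db` with a value produced by the tuning process.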
|29||US9168171B2||Combination Treatments||A method of treating a subject in need thereof, is carried out by (a) administering said subject a therapeutic intervention (e.g., an active agent) in a treatment effective amount; and concurrently (b) administering said subject caloric vestibular stimulation in a treatment effective amount, said caloric vestibular stimulation administered so as to enhance the efficacy of said active agent. In some embodiments, the caloric vestibular stimulation is administered as an actively controlled time varying waveform. |
|1. A method of treating a subject in need thereof, comprising: |
(a) administering said subject an active agent in a treatment effective amount; and concurrently
(b) administering said subject caloric vestibular stimulation in a treatment effective amount, said caloric vestibular stimulation administered so as to enhance the efficacy of said active agent, wherein said caloric vestibular stimulation is administered as an actively controlled time varying waveform and the subject is afflicted with premenstrual dysphoric disorder, and said active agent comprises an active agent for treating premenstrual dysphoric disorder.
|Yes||-||-||-||-||-||-||-||-||-||-||-||-||Yes||Yes||-||-||-||-||Yes||Yes||-||-||Yes||Yes||-||-||-||-||-||-||-||2013-07-24||2015-10-27||2009-12-18||1001||A61B001818 | A61F000700 | A61F000712||A61F000712 | A61F0007007 | A61F20070005 | A61F20070075 | A61F20070093 | A61F20070096||Rogers, Lesco L.||Rogers Lesco L | Scion Neurostim Llc||Scion Neurostim Llc||0||US9168171B2 | AU2011343564A1 | AU2011343589A1 | CA2821260A1 | CA2821262A1 | EP2651352A1 | EP2651364A1 | JP2014507964A | JP2014508553A | US20110313498A1 | US20110313499A1 | US20120316624A1 | US20120316625A1 | US20130296987A1 | US20130304165A1 | US20130310907A1 | US20130317576A1 | US20140088671A1 | US20140243941A1 | US20140309718A1 | US20150374538A1 | US8460356B2 | US8603152B2 | WO2011075573A1 | WO2011075574A1 | WO2012083098A1 | WO2012083102A1 | WO2012083106A1 | WO2012083126A1 | WO2012083151A1|
|30||US9167363B2||Adjustable Securing Mechanism For A Space Access Device||A securing mechanism comprising a plurality of outwardly projecting members having a plurality of contact points that are configured to contact a surface of an opening when disposed on a space access device that is inserted in the opening, the securing mechanism being configured to apply a pressure to a contact surface within the opening less than approximately 10000 kPa.||1. A securing mechanism for a space access device, comprising: |
a base comprising a longitudinal axis and an outer surface, said securing mechanism further comprising a plurality of projecting members disposed circumferentially around said base, said plurality of elongated members comprising at least 10 of said plurality of elongated members, each of said plurality of elongated members having a maximum length-thickness ratio in the range of 2:1-3:1,
each of said plurality of projecting members further comprising a proximal and distal end, said proximal ends of said plurality of projecting members being connected to said outer surface of said base and projecting outwardly therefrom at an angle relative to said base longitudinal axis in the range of 45°-65°,
said distal ends of said plurality of projecting members defining a plurality of contact points that are configured to contact a surface of an opening when disposed on an outer surface of a space access device that is inserted in said opening, each of said plurality of projecting members being further configured to apply a pressure to said opening surface when said space access device is disposed in said opening less than approximately 10000 kPa.
|-||Yes||-||-||Yes||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||Yes||-||2014-04-11||2015-10-20||2010-07-21||1001||H04R002500 | H04R000110||H04R0025652 | H04R00011016 | H04R2225023 | H04R2225025||Michel, Florent | Michel, Raphael | Shen, Daniel | Perry, Michael||Aria Innovations Inc | Eargo Inc||Aria Innovations Inc||0||US9167363B2 | CN102498730A | CN102498730B | EP2457387A2 | EP2457387A4 | JP05765786B2 | JP2013500626A | KR2012068828A | SG179552A1 | US20110019851A1 | US20130266168A1 | US20140219488A1 | US20140294213A1 | US20150086054A1 | US8457337B2 | US8577067B2 | US9060230B2 | WO2011011555A2 | WO2011011555A3|
|31||US9167333B2||Headset Dictation Mode||Methods and apparatuses for headsets are disclosed. In one example, a headset includes a processor, a communications interface, a user interface, and a speaker. The headset includes a microphone array including two or more microphones arranged to detect sound and output two or more microphone output signals. The headset further includes a memory storing an application executable by the processor configured to operate the headset in a first mode utilizing a first set of signal processing parameters to process the two or more microphone output signals and operate the headset in a second mode utilizing a second set of signal processing parameters to process the two or more microphone output signals.||1. A headset comprising: |
a communications interface;
a user interface;
a speaker arranged to output audible sound to a headset wearer ear;
a microphone arranged to detect sound and output a microphone output signal; and
a memory storing an application executable by the processor configured to operate the headset in a first mode comprising a dictation mode utilizing a first set of signal processing parameters to process the microphone output signal and operate the headset in a second mode utilizing a second set of signal processing parameters to process the microphone output signal, wherein the first set of signal processing parameters are configured to optimize dictation speech and different from the second set of signal processing parameters.
|-||Yes||-||-||-||-||-||-||-||-||-||Yes||Yes||-||-||-||-||-||Yes||-||-||Yes||-||-||-||-||-||-||-||-||-||-||2013-11-15||2015-10-20||2013-10-18||1001||H04M000100 | H04M000900 | H04R000110 | G10L001526 | H04M000160 | H04R002700||H04R00011041 | G10L001526 | H04M00016058 | H04R002700 | H04R2227003 | H04R242007||Johnston, Timothy P | Loewenthal, Jr., William J | Sarkar, Shantanu||Plantronics||Plantronics||0||US9167333B2 | US20150110263A1 | US20150112671A1|
|32||US9167331B2||Bendable Cord For Controlling An Electronic Device||Described is a technique for controlling an electronic device by manipulating a headphone cord. This may be accomplished by sensing various bends and/or bend patterns to the cord. The cord may include a resistive member such as a rod or hollow member for providing tactile feedback to a user. The resistive member may provide a bending resistance or a collapse that provides a tactile sense of when the bend produces an effect for controlling the electronic device. A degree of bend may be determined by the sensors and a controller may provide a control input to the electronic device based on the determined bend. In one instance, the volume of the electronic device may be decreased based on the degree of bend.||1. A cord configured to connect a headset to an electronic device and to provide input for controlling the electronic device, comprising: |
a resistive member configured to provide a tactile bending resistance;
a sensor configured to determine a bend of the resistive member,
wherein the determined bend includes a degree of bend; and
a signaling component configured to provide a signal to the electronic device based on the determined bend and a bending threshold.
|-||Yes||-||-||-||-||-||Yes||-||Yes||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||Detects The Bending Of Cord||-||-||-||-||2013-03-26||2015-10-20||2013-03-26||1001||H04R000110 | G06F000116 | H04M000160||H04R00011041 | G06F0001163 | H04M00016058||Haynes, Thomas E.||Google Inc||Google Inc||0||US9167331B2 | EP2784626A2 | US20140294192A1|
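The bend-to-control mapping in this entry — read a degree of bend from the cord sensor and, past a threshold, translate it into a control input such as a volume decrease — can be sketched as follows. The threshold and scaling values are invented for illustration.

```python
def volume_after_bend(volume, bend_degrees, threshold_degrees=30.0,
                      step_per_degree=0.5):
    """Decrease volume in proportion to how far the bend exceeds the
    threshold; bends under the threshold produce no control input."""
    if bend_degrees <= threshold_degrees:
        return volume
    decrease = (bend_degrees - threshold_degrees) * step_per_degree
    return max(0.0, volume - decrease)

light_bend = volume_after_bend(50.0, bend_degrees=10.0)  # below threshold
sharp_bend = volume_after_bend(50.0, bend_degrees=50.0)  # 20 deg over -> -10
```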
|33||US9167329B2||Magnetic Earphones Holder||An earphones holder is used to affix a headset to clothing and/or other items. The earphones holder comprises a magnet which removably couples with a magnetically attractable portion of a set of earphones. In some embodiments, the earphones holder further comprises an electronic device controller which controls the operation of an electronic device. The controller is configured to send a signal to an electronic device activation circuit which operates the electronic device based upon a coupling status of the earbuds with the one or more magnetically attractable surfaces of the earphones holder body. In some embodiments, the electronic device controller controls the operation of an electronic device. The controller is configured to send a signal to an electronic device activation circuit which operates the electronic in a manner dependent upon a signal from the holder body.||1. A system for holding a set of earphones comprising: |
a. a holder body comprising one or more magnets;
b. a set of earphones comprising a magnetically attractable surface for removably coupling with the one or more magnets; and
c. an electronic device controller coupled to receive an activation signal when one or more of the set of earphones are decoupled from one of the one or more magnets, wherein the electronic device controller receives a deactivation signal when one or more of the set of earphones are coupled to one of the one or more magnets.
|-||Yes||-||-||-||-||-||-||-||Yes||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||-||-||-||-||-||-||-||-||2013-01-04||2015-10-20||2012-02-22||1001||H04R002500 | H04R000102 | H04R000110||H04R00011033 | H04R000102 | H04R0001028 | H04R000110 | H04R00011016 | H04R00011041 | H04R2201023||Honeycutt, Rob||Snik Llc||Snik Llc||0||US9167329B2 | CA2878907A1 | EP2817728A1 | EP2817728A4 | EP2872722A2 | JP2015513387A | US20130216085A1 | US20140198929A1 | US20150198029A1 | US20160007111A1 | WO2013126681A1 | WO2014012038A2 | WO2014012038A3|
|34||US9165549B2||Audio Noise Cancelling||A noise canceling system comprises a microphone (103) for generating a captured signal representing sound in an audio environment and a sound transducer (101) for radiating a sound canceling audio signal in the audio environment. A feedback path (105, 107, 109, 111, 113) exists from the microphone (103) to the sound transducer (101) and comprises a feedback filter (109). A tone processor (119) determines a tone component characteristic for a tone component of a feedback signal of the feedback path (105, 107, 109, 111, 113) and an adaptation processor (121) adapts the feedback path in response to the tone component characteristic. The invention allows detection of the onset of instability and dynamic compensation to mitigate or prevent such instability. Accordingly increased design freedom for the feedback filter is achieved resulting in improved noise cancellation. |
|1. A noise canceling system comprising: |
a microphone configured to capture an audio signal representing sound in an audio environment;
a sound transducer configured to radiate a sound canceling audio signal in the audio environment;
a first feedback path from the microphone to the sound transducer, the first feedback path comprising
a feedback filter having a loop filter configured to receive the captured audio signal and
a variable gain circuit configured to generate a drive signal for the sound transducer from the filtered captured audio signal; and
a second feedback path from the microphone to the sound transducer, the second feedback path comprising
a tone processor configured to determine a tone component characteristic for a tone component of the captured audio signal, and
an adaptation circuit configured to adapt a gain of the variable gain circuit in response to the determined tone component characteristic.
|-||Yes||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||Yes||-||-||Yes||-||-||-||-||-||-||-||-||-||-||2011-11-09||2015-10-20||2009-05-11||1001||G10K001116 | G10K0011178 | H04R000110 | H04R000302||G10K00111782 | G10K22101081 | G10K22103026 | G10K22103028 | G10K2210503 | G10K221051 | H04R00011083 | H04R000302||Van Leest, Adriaan Johan||Van Leest Adriaan Johan | Koninkl Philips Nv||Koninklijke Philips Nv||0||US9165549B2 | CN102422346A | CN102422346B | EP2430632A1 | EP2430632B1 | JP05572698B2 | JP2012527148A | KR2012026530A | US20120057720A1 | WO2010131154A1|
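The adaptation idea in this entry — detect a dominant tone component in the captured signal (the onset of feedback instability) and back off the loop gain in response — can be sketched as below. The spectral-peak ratio, threshold, and back-off factor are illustrative assumptions, not the patent's detector.

```python
import numpy as np

def tone_strength(signal):
    """Ratio of the strongest spectral bin to total spectral energy: a crude
    indicator that a narrow tone (incipient feedback howl) is dominating
    the captured signal."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    total = spectrum.sum()
    return spectrum.max() / total if total > 0 else 0.0

def adapt_gain(gain, signal, threshold=0.5, backoff=0.5):
    """Reduce the variable loop gain when a dominant tone is detected;
    otherwise leave the gain unchanged."""
    return gain * backoff if tone_strength(signal) > threshold else gain

# Toy example: a pure tone triggers the back-off, broadband noise does not.
t = np.arange(256) / 256.0
tone = np.sin(2 * np.pi * 16 * t)                      # dominant single tone
noise = np.random.default_rng(0).standard_normal(256)  # broadband noise
gain_after_tone = adapt_gain(1.0, tone)
gain_after_noise = adapt_gain(1.0, noise)
```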
|35||US9161303B2||Dual Mode Wireless Communications Device||A wireless communications device includes a battery, a processing section coupled to the battery, and an RF interface. The battery is configured to provide power to operate the wireless communications device in a first mode of operation. The processing section is configured to operate on battery power in the first mode of operation. The RF interface is configured to receive an RF signal and generate operating power for the wireless communication device from the RF signal in a second mode of operation. The wireless communications device is configured to detect available RF power and enter the second mode of operation from the first mode of operation.||1. A wireless communications device comprising: |
a battery configured to provide power to operate the wireless communications device in a first mode of operation;
an RF interface configured to:
receive an RF signal; and
generate operating power for direct use by the wireless communications device from the RF signal in a second mode of operation; and
a processing section comprising one or more processors;
wherein the wireless communications device is configured to:
operate on battery power in the first mode of operation;
detect available RF power;
enter the second mode of operation from the first mode of operation, wherein, during the second mode of operation, the generated operating power is used for operation of the device; and
detect a fill state of a memory and, based on the fill state, change the processing section from the second mode to the first mode and conduct data transfer with the memory.
|Yes||-||-||-||-||-||-||-||-||Yes||Yes||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||-||-||Provide Image Data||-||-||-||-||2013-07-17||2015-10-13||2011-05-31||1001||H04M000100 | H02J001700 | H04B000138 | H04M0001725 | H04W005202 | H02J000702||H04W00520212 | H02J001700 | H04M00017253 | H02J0007025 | H04M225004||Maguire, Yael||Facebook Inc||Facebook Inc||0||US9161303B2 | AU2012262301A1 | AU2012262301B2 | AU2014235446A1 | AU2015242982A1 | AU2015242982B2 | CA2836588A1 | CA2836588C | CA2901056A1 | CA2904217A1 | CN103959660A | CN105210310A | EP2715944A2 | EP2715944A4 | EP2974076A1 | IL241179D0 | JP05782183B2 | JP2014526154A | KR2014037123A | KR2015131274A | MX2013014038A | US20120309295A1 | US20120309453A1 | US20130303225A1 | US20140113561A1 | US8644892B2 | US8929806B2 | WO2012166774A2 | WO2012166774A3 | WO2014150999A1|
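The mode selection in this claim — operate from the battery in the first mode, and enter the second mode when enough RF power is detected to run the device directly — reduces to a small state decision. The threshold value and mode names below are illustrative assumptions.

```python
def select_mode(battery_available, rf_power_mw, rf_threshold_mw=1.0):
    """Enter the RF-powered second mode when detected RF power is
    sufficient; otherwise operate (or stay) on battery power."""
    if rf_power_mw >= rf_threshold_mw:
        return "rf_powered"
    return "battery" if battery_available else "inactive"

harvesting = select_mode(battery_available=True, rf_power_mw=2.5)
on_battery = select_mode(battery_available=True, rf_power_mw=0.2)
```

The claim's memory fill-state trigger (switching back to the first mode to transfer buffered data) would add a further condition on top of this decision.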
|36||US9160829B2||Dynamic Audio Parameter Adjustment Using Touch Sensing||An audio communications device has a handset in which a touch sensing ear piece region is coupled to an acoustic leakage analyzer. The acoustic leakage analyzer is to analyze signals from the touch sensing ear piece region and on that basis adjust an audio processing parameter. The latter configures an audio processor which generates an audio receiver input signal for the device. Other embodiments are also described and claimed.||1. An audio communications device, comprising: |
a handset having a touch sensitive screen in which a touch sensing earpiece region is formed at an upper end thereof and a user input and display region is formed below the earpiece region;
an audio signal processor to generate an audio receiver input signal in accordance with an audio processing parameter; and
an acoustic leakage analyzer coupled to the touch sensing earpiece region to analyze signals from the region to detect one or more touch-activated regions in the touch sensing earpiece region and compare the touch-activated regions to a previously stored pattern and on that basis adjust the audio processing parameter of the audio signal processor.
|-||Yes||-||-||Yes||-||-||Yes||Yes||-||Yes||-||-||-||-||-||-||-||Yes||-||-||Yes||-||-||-||-||Yes||-||-||-||-||Yes||2012-03-19||2015-10-13||2009-03-31||1001||H03G000300 | H04M000160||H04M00016016 | H04M225012 | H04M225022||Chen, Shaohai||Chen Shaohai | Apple Inc||Apple Inc||0||US9160829B2 | US20100246855A1 | US20120177222A1 | US8155330B2|
|37||US9155460B2||Activity Regulation Based On Biometric Data||Disclosed are devices, systems, apparatus, methods, products, and other implementations, including a method that includes obtaining biometric data of a user, and generating instruction data, presentable on a user interface, based on data relating to one or more activities to be completed by the user and based on the biometric data of the user. In some embodiments, obtaining the biometric data may include measuring one or more of, for example, heart rate, blood pressure, blood oxygen level, temperature, speech-related attributes, breath, and/or eye behavior.||1. A method comprising: |
obtaining, by at least one processor-based device, biometric data of a user, at least some of the biometric data obtained via at least one biometric sensor housed in a user audio interface attachable to the user's ear;
obtaining location data for the user; and
generating, by the at least one processor-based device, based on the biometric data of the user and the location data for the user, instruction data, comprising audible instruction data presentable on the user audio interface, to regulate a pace to perform, by the user, a pre-determined schedule of one or more physical activities to be completed by the user at multiple locations, the one or more physical activities of the pre-determined schedule for the user selected from a global list of physical activities performable by multiple users;
wherein the pre-determined schedule for the user is adjusted based on subsequent biometric data for the user such that at least one physical activity is removed from the pre-determined schedule for the user and is added to another pre-determined schedule for another user from the multiple users when the subsequent biometric data for the user indicates that physical condition of the user is worsening, and another at least one physical activity previously assigned to at least another user is added to the pre-determined schedule for the user when the subsequent biometric data for the user indicates that the physical condition of the user is normal and another physical condition for the at least other user is worsening.
|Yes||-||Yes||-||Yes||Yes||-||-||-||-||-||-||-||Yes||-||-||-||-||Yes||-||Yes||-||-||Yes||Yes||Yes||-||-||-||-||-||-||2012-07-27||2015-10-13||2012-07-27||1001||A63B006900 | A61B000300 | A61B000500 | A61B00050205 | A61B000511 | A61B000518 | G09B000500 | A61B0005021 | A61B0005024 | A61B000508 | A61B0005145||A61B0005486 | A61B000300 | A61B000310 | A61B0005002 | A61B00050022 | A61B00050205 | A61B000502055 | A61B00051112 | A61B00051113 | A61B00051118 | A61B000514542 | A61B000518 | A61B00056803 | A61B00057246 | A61B0005741 | A61B000700 | G09B000500 | A61B000501 | A61B0005021 | A61B0005024 | A61B000502438 | A61B000508 | A61B0005082 | A61B0005145 | A61B250507||Steinmetz, Jay||Steinmetz Jay | Barcoding Inc||Barcoding Inc||0||US9155460B2 | US20140030684A1 | US20160000371A1|
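The schedule-adjustment logic in claim 1 of US9155460B2 — removing an activity from a worsening user's schedule and handing it to another user, or taking one back once the user's condition is normal — can be sketched as follows. This is a hypothetical illustration only; the data shapes and function name are ours, not from the patent.

```python
def rebalance_schedules(user, other_user):
    """Reassign one activity based on subsequent biometric data.

    `user` and `other_user` are dicts with a "schedule" list of
    activities and a "condition" string ("worsening" or "normal")
    derived from biometric readings. All names are illustrative.
    """
    if user["condition"] == "worsening" and user["schedule"]:
        # Remove one activity from the worsening user's schedule
        # and add it to the other user's pre-determined schedule.
        activity = user["schedule"].pop()
        other_user["schedule"].append(activity)
    elif (user["condition"] == "normal"
          and other_user["condition"] == "worsening"
          and other_user["schedule"]):
        # The recovered user takes on an activity previously
        # assigned to a user whose condition is worsening.
        activity = other_user["schedule"].pop()
        user["schedule"].append(activity)
    return user, other_user
```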
|38||US9154868B2||Noise Cancellation System||An earphone comprises an earphone body, containing a speaker, and a projection, extending from a first surface of the earphone body, for location in the entrance to the user's ear canal. The earphone body comprises a sound outlet in the first surface, for allowing sounds generated by the speaker to leave the earphone body. The projection extends from the first surface of the earphone body, adjacent to the sound outlet, and contains a sound inlet port, connected to a microphone for detecting sounds entering the ear canal. A noise cancellation system includes noise cancellation circuitry, for applying a frequency dependent filter characteristic and applying a gain to an input signal representing ambient noise, at least one of the frequency dependent filter characteristic and the gain being adaptive. The earphone then has an ambient noise microphone, and an error microphone connected to the sound inlet port.||1. An earphone, for location in use in the concha of a user, wherein the earphone comprises: |
an earphone body, containing a speaker for generating sounds, wherein the earphone body comprises a sound outlet in a first surface thereof, for allowing sounds generated by the speaker to leave the earphone body; and
a projection, extending from the first surface of the earphone body, adjacent to the sound outlet, for location in or at the entrance to the ear canal of the user,
wherein the projection contains a sound inlet port, connected to a microphone for detecting sounds entering the ear canal; and
wherein the microphone for detecting sounds entering the ear canal is located within said projection.
|-||Yes||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||Yes||-||-||Yes||-||-||-||-||-||-||-||-||-||-||2013-02-21||2015-10-06||2012-02-21||1001||A61F001106 | G10K001116 | H03B002900 | H04R000110 | H04R000300 | H04R000504||H04R00011083 | H04R0003005 | H04R00011016 | H04R0003007 | H04R000504 | H04R241005 | H04R243001||Narayan, Renjish Kodappully | Llewellyn, Steven||Wolfson Microelectronics Plc | Cirrus Logic Internat Semiconductor Ltd||Cirrus Logic Inc||0||US9154868B2 | CN103260101A | CN203482364U | GB201202974D0 | GB2499607A | US20130216060A1|
|39||US9153195B2||Providing Contextual Personal Information By A Mixed Reality Device||The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person in a location of the user who satisfy the person selection criteria to a cloud based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view. An identifier and a position indicator of the person in the location is output if not. Directional sensors on the display device may also be used for determining a position of the person. Cloud based executing software can identify and track the positions of people based on image and non-image data from display devices in the location.||1. One or more processor-readable storage devices having instructions encoded thereon for causing one or more software controlled processors to execute a method for providing location-relevant contextual personal information by a mixed reality display device system, the method comprising: |
receiving and storing person selection criteria having been provided by a user wearing a mixed reality display device of the system, the person selection criteria being for identifying another person who satisfies the person selection criteria;
sending a request including a location of the user and the person selection criteria to a personal information service engine executing on one or more remote computer systems for a personal identification data set for each person sharing the location and satisfying the person selection criteria, the location being one shared by the user and one or more other persons such that face to face meetings can occur at the location between the user and the one or more other persons, the location including a scene in a field of view of the mixed reality display device;
receiving at least one personal identification data set from the personal information service engine for a person sharing the location;
determining whether the person associated with the at least one personal identification data set is in the field of view of the mixed reality display device;
responsive to the person associated with the at least one personal identification data set not being currently within the field of view of the mixed reality display device, determining a position of the person within the location, and outputting data which indicates the out-of-field-of-view position of the person within the location; and
responsive to the person associated with the at least one personal identification data set being in the field of view, outputting data which identifies the in-field-of-view position and identity of the person in the field of view.
|-||Yes||-||Yes||-||Yes||Yes||Yes||-||-||-||-||-||-||Yes||-||-||-||-||Yes||-||-||-||-||-||-||-||-||-||-||Yes||-||2012-01-30||2015-10-06||2011-08-17||1001||G09G000500 | G06F000300 | G06Q005000||G09G000500 | G06F0003002 | G06F0003011 | G06F0003012 | G06F0003013 | G06Q005001||Geisner, Kevin A. | Bennett, Darren | Markovic, Relja | Latta, Stephen G. | Mcculloch, Daniel J. | Scott, Jason | Hastings, Ryan L. | Kipman, Alex Aben-Athar | Fuller, Andrew John | Margolis, Jeffrey Neil | Perez, Kathryn Stone | Small, Sheridan Martin||Geisner Kevin A | Bennett Darren | Markovic Relja | Latta Stephen G | Mcculloch Daniel J | Scott Jason | Hastings Ryan L | Kipman Alex Aben-Athar | Fuller Andrew John | Margolis Jeffrey Neil | Perez Kathryn Stone | Small Sheridan Martin | Microsoft Technology Licensing Llc||Microsoft Corp||0||US9153195B2 | US20130044128A1 | US20130044130A1|
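The in-view/out-of-view branch in the US9153195B2 claim — output identity and position when the matched person is in the display's field of view, otherwise output an identifier plus a position indicator within the location — can be sketched as below. The data shapes are our assumption, not the patent's.

```python
def report_person(person, field_of_view):
    """Output data for a person matching the selection criteria.

    `person` is a dict with "id", "name" and "location_position";
    `field_of_view` maps person ids currently visible in the mixed
    reality display to their in-view positions. Illustrative only.
    """
    if person["id"] in field_of_view:
        # Person is in the field of view: identify them and give
        # their in-field-of-view position.
        return {"identity": person["name"],
                "in_view_position": field_of_view[person["id"]]}
    # Not currently in view: give an identifier and a position
    # indicator within the shared location instead.
    return {"identifier": person["id"],
            "location_position": person["location_position"]}
```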
|40||US9153074B2||Wearable Augmented Reality Eyeglass Communication Device Including Mobile Phone And Mobile Computing Via Virtual Touch Screen Gesture Control And Neuron Command||Provided are an augmented reality eyeglass communication device and a method for facilitating shopping using an augmented reality eyeglass communication device. The augmented reality eyeglass communication device may comprise a frame, and a right earpiece and a left earpiece connected to the frame. Furthermore, the eyeglass communication device may comprise a processor configured to receive one or more commands of a user, perform operations associated with the commands of the user, receive product information, and process the product information. The eyeglass communication device may comprise a display connected to the frame and configured to display data received from the processor. In addition to that, the eyeglass communication device may comprise a transceiver electrically connected to the processor and configured to receive and transmit data over a wireless network. The eyeglass communication device may comprise a Subscriber Identification Module card slot, a camera, an earphone, a microphone, and a charging unit.||1. An augmented reality eyeglass communication device comprising: |
an eyeglass frame having a first end and a second end;
a right earpiece and a left earpiece, wherein the right earpiece is connected to the first end of the frame and the left earpiece is connected to the second end of the frame;
a camera disposed on the frame, the right earpiece or the left earpiece, the camera being configured to:
track a hand gesture command of a user,
capture a sequence of images containing a finger of the user and virtual objects of a virtual keypad displayed by the eyeglass communication device and operable to provide input to the eyeglass communication device by the user, finger motions in relation to virtual objects being detected based on the sequence, wherein one or more gestures are recognized based on the finger motions, wherein the one or more gestures define user commands input to the eyeglass communication device,
capture a skeletal representation of a body of the user, a virtual skeleton being computed based on the skeletal representation, and body parts being mapped to segments of the virtual skeleton, wherein the capturing is performed in real time,
a processor disposed in the frame, the right earpiece or the left earpiece and configured to:
receive one or more hand gesture commands of the user, wherein the one or more hand gesture commands comprise displaying product information comprising product description and product pricing of one or more products fetched from a networked database in response to user input of identifiers of the one or more products into the processor and displaying location information associated with the one or more products determined by the eyeglass communication device, including displaying a route on a map of a store to guide the user to the location within the store to obtain the product, and changing the frequency of a WiFi signal of the eyeglass communication device,
perform the one or more hand gesture commands of the user,
process the one or more hand gesture commands tracked by the camera, the hand gesture command being inferred from a collection of vertices and lines in a three dimensional mesh associated with a hand of the user,
derive parameters from the hand gesture command using a template database, the template database storing captured deformable two dimensional templates of a human hand, a deformable two dimensional template of the human hand being associated with a set of points on an outline of the human hand;
receive product information, and
process the product information;
at least one display connected to the frame and configured to display data received from the processor corresponding to each of the one or more hand gesture commands, the display comprising:
an optical prism element embedded in the display, and
a projector embedded in the display, the projector being configured to project the data received from the processor to the optical prism element and to project the data received from the processor to a surface in environment of the user, the data including a virtual touch screen environment;
a transceiver electrically coupled to the processor and configured to receive and transmit data over a wireless network;
a Subscriber Identification Module (SIM) card slot disposed in the frame, the right earpiece or the left earpiece and configured to receive a SIM card;
at least one earphone disposed on the right earpiece or the left earpiece;
a microphone configured to sense a voice command of the user, wherein the voice command is operable to perform commands of the one or more hand gesture commands; and
a charging unit connected to the frame, the right earpiece or the left earpiece;
at least one electroencephalograph sensor configured to sense brain activity of the user and provide an alert when undesired brain activity is sensed;
a gesture recognition unit including at least three dimensional gesture recognition sensors, a range finder, a depth camera, and a rear projection system, the gesture recognition unit being configured to track the hand gesture command of the user, the hand gesture command being processed by the processor, wherein the hand gesture command is associated with the vertices and lines of the hand of the user, the vertices and lines being in a specific relation;
a band configured to secure the augmented reality eyeglass communication device on a head of the user;
wherein the augmented reality eyeglass communication device is configured to perform phone communication functions, and wherein the eyeglass communication device is operable to calculate a total price for the one or more products, encode the total price into a code that is scannable by a merchant scanning device, and wherein the eyeglass communication device is operable to communicate with the merchant scanning device and perform a payment transaction for the one or more products.
|-||Yes||-||Yes||Yes||Yes||Yes||-||-||-||-||-||-||Yes||Yes||-||-||-||Yes||-||-||-||-||-||-||-||-||-||-||-||Yes||-||2013-08-22||2015-10-06||2011-07-18||1001||G02B002701 | G01C002100 | G06F000116 | G06F000301 | G06F0003042 | G06Q002032 | G06Q003006 | G06T001900 | G08B002106||G06T0019006 | G01C002100 | G06F0001163 | G06F0003012 | G06F0003013 | G06F0003017 | G06F00030426 | G06Q00203276 | G06Q00300641 | G08B002106 | G02B0027017 | G02B20270138 | G02B2027014 | G02B20270178||Zhou, Dylan T X | Zhou, Tiger T G | Zhou, Andrew H B||Zhou Dylan T X | Zhou Tiger T G | Zhou Andrew H B||-||0||US9153074B2 | CN104781841A | CN104995545A | CN105164707A | EP2896001A1 | EP2896011A1 | EP2898464A1 | EP2904557A1 | EP2912620A1 | IN201503115P1 | IN201503127P1 | IN201503128P1 | US20110276636A1 | US20120006891A1 | US20120059699A1 | US20130006788A1 | US20130018715A1 | US20130018782A1 | US20130026232A1 | US20130043305A1 | US20130141313A1 | US20130146659A1 | US20130172068A1 | US20130173362A1 | US20130191174A1 | US20130225290A1 | US20130236877A1 | US20130238401A1 | US20130240622A1 | US20130311484A1 | US20130346168A1 | US20140058804A1 | US20140098758A1 | US20140129422A1 | US20140143037A1 | US20140189354A1 | US20140236750A1 | US20140239065A1 | US20140254896A1 | US20140330656A1 | US20140349692A1 | US20150026072A1 | US20150066613A1 | US20150088757A1 | US20150161721A1 | US20150229750A1 | US20150339696A1 | US20150371215A1 | US7702739B1 | US8851372B2 | US8968103B2 | US8985442B1 | US9009166B2 | US9016565B2 | US9047600B2 | US9098190B2 | US9100493B1 | US9208505B1 | WO2013064986A1 | WO2014041456A1 | WO2014041458A1 | WO2014045145A1 | WO2014053924A1 | WO2014064549A1 | WO2014118703A1 | WO2014122558A1 | WO2014132192A1 | WO2014141076A2 | WO2014141076A3 | WO2014162257A2 | WO2014162257A3 | WO2014174398A2 | WO2014174398A3 | WO2014174399A2 | WO2014174399A3 | WO2015025251A1 | WO2015063627A1 | WO2015107442A1 | WO2015114475A1 | WO2015132767A1 | WO2015162565A1 | WO2015170253A1 | WO2015177760A2 | WO2015177760A3 | 
WO2016009287A1|
|41||US9152378B2||Bluetooth Or Other Wireless Interface With Power Management For Head Mounted Display||A headset computer that includes a wireless front end that interprets spoken commands and/or hand motions and/or body gestures to selectively activate subsystem components only as needed to carry out specific commands.||1. A method for controlling a headset computer system that includes a microdisplay, a user input device, a first processor, and two or more peripherals comprising: |
entering a first state by enabling only the first processor and user input device;
detecting a user input;
interpreting, with the first processor, the user input as a spoken command or gesture command; and
entering a second state by the first processor issuing a command enabling selected ones of the two or more peripherals and the first processor issuing a command disabling peripherals based on the spoken command or gesture command.
|Yes||-||-||-||-||Yes||-||Yes||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||Yes||Yes||-||-||-||-||Yes||-||2014-01-17||2015-10-06||2010-09-20||1001||G06F000316 | G02B002701 | G06F000301||G06F0003167 | G02B00270101 | G02B0027017 | G06F0003011 | G06F0003017 | G02B2027014||Jacobsen, Jeffrey J. | Parkinson, Christopher | Pombo, Stephen A.||Kopin Corp||Kopin Corp||0||US9152378B2 | CN103890836A | EP2617202A2 | EP2617202A4 | JP2014503085A | US20120235896A1 | US20140132507A1 | US8736516B2 | WO2012040030A2 | WO2012040030A3|
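The two-state power scheme claimed in US9152378B2 — a first state with only the processor and input device enabled, and a second state in which an interpreted command selectively enables just the peripherals it needs — can be modelled as a small state machine. The peripheral names and command-to-peripheral table here are our illustrative assumptions.

```python
class HeadsetPowerController:
    """Toy model of the claimed two-state headset power management."""

    # Hypothetical mapping from interpreted commands to the
    # peripherals each one requires (not from the patent).
    COMMAND_PERIPHERALS = {
        "show display": {"microdisplay"},
        "start call": {"bluetooth", "microphone"},
    }

    def __init__(self, peripherals):
        # First state: only the first processor and user input
        # device are enabled.
        self.enabled = {"processor", "input_device"}
        self.peripherals = set(peripherals)

    def handle(self, command):
        """Enter the second state for a spoken or gesture command:
        enable only the needed peripherals, disabling the rest."""
        needed = self.COMMAND_PERIPHERALS.get(command, set())
        self.enabled = ({"processor", "input_device"}
                        | (needed & self.peripherals))
        return self.enabled
```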
|42||US9148717B2||Earbud Charging Case||A case for a mobile electronic device includes an aperture configured to receive one or more earbuds, a portion configured to receive power from a power source, and circuitry configured to simultaneously charge the one or more earbuds and the mobile electronic device.||1. A case for a mobile electronic device, the case comprising: a housing; one or more earbud receiving apertures, wherein each earbud receiving aperture is associated with one or more electrical components configured to transfer an electrical charge from a power source to an earbud when the earbud is positioned within the aperture; one or more electrical components that provide a conductive connection from the power source to a power input port of a mobile electronic device that is in contact with the housing, to enable a simultaneous charge of the one or more earbuds when placed in the one or more apertures and of the mobile electronic device when placed in the housing; one or more earbuds, each of which is positioned to fit within one of the earbud receiving apertures, and each of which further comprises: one or more of the electrical contacts, one or more sensors configured to detect when the earbud is within or outside of an earbud receiving aperture, and programming that causes the earbud to receive the output of the one or more sensors and use the output to: activate the earbud when the earbud is removed from an earbud receiving aperture, and power down the earbud by turning the earbud off or placing the earbud in an idle mode when the earbud is placed within an earbud receiving aperture.||Yes||-||-||-||Yes||-||Yes||-||-||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||-||Yes||-||-||-||-||-||2015-01-23||2015-09-29||2014-02-21||1001||H04R002500 | H04R000110||H04R00011025 | H04R00011016 | H04R242007 | H04R249911||Shaffer, Jonathan Everett||Alpha Audiotronics Inc||Alpha Audiotronics Inc||0||US9148717B2 | US20150245125A1 | US20150245126A1 | US20150245127A1 
| US20150373448A1 | US8891800B1 | WO2015126572A1 | WO2015126611A1|
|43||US9148280B2||Method And System For Providing Secure, Modular Multimedia Interaction||An approach is provided for the secure exchange of multimedia content through a mobile telephony device. A docking station receives a control signal from a media headset, and in response thereto determines to establish a communication link. The docking station selects one of a plurality of communication options corresponding to different networks based on the type of the communication link. The docking station initiates an authentication procedure for the communication link according to the selected communication option. Subsequent to successful authorization, the docking station receives multimedia content over the authenticated communication link, and transmits the received media signal to the media headset.||1. A method comprising: |
receiving, at a docking station that is registered with an authentication platform of a service provider network allowing the docking station to connect to services of the service provider network, a control signal from a media headset in response to a voice input, a biometric input, a gesture input, or a combination thereof;
determining in response to the control signal at the docking station, to establish a communication link;
selecting, by the docking station, one of a plurality of communication options corresponding to different networks based on the type of the communication link;
initiating, by the docking station, an authentication procedure for the communication link according to the selected communication option;
accessing a session controller on a service provider side of the service provider network that is in communication with the docking station, to perform a plurality of services including media treatment, security, network peering, or a combination thereof;
receiving a media signal over the authenticated communication link;
determining based on received location information from an environment sensor or a mobile device by the docking station, a particular multimedia component of a plurality of multimedia components to engage to perform a particular task associated with the media signal;
determining to transmit the received media signal to the particular multimedia component; and
transmitting the media signal to the media headset for presentation on a display of the media headset in response to the determination that the particular multimedia component is the media headset and the particular task is displaying,
wherein the docking station includes an electrical connector adapted to connect to the mobile device, one or more wireline modems, a wireless modem, an environment transceiver adapted to connect to environment sensors or appliances, and a transceiver adapted to connect to one or more of the plurality of multimedia components.
|-||Yes||-||Yes||Yes||Yes||Yes||Yes||-||-||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||Yes||-||-||-||-||-||-||Yes||2011-09-29||2015-09-29||2011-09-29||1001||H04L000900 | H04L002906 | G02B002701 | G06F000116 | H04L001228 | H04M0001725 | H04N002141 | H04W000402 | H04W008806||H04L000900 | H04L00122858 | H04L0012287 | H04L006308 | G02B0027017 | G02B20270178 | G06F00011632 | H04L00122856 | H04M000172547 | H04N00214126 | H04W000402 | H04W008806||Schultz, Paul T.||Schultz Paul T | Verizon Patent & Licensing Inc||Verizon Communication Inc||0||US9148280B2 | US20130086633A1|
|44||US9143878B2||Method And System For Headset With Automatic Source Detection And Volume Control||An audio headset receives one or more audio signals carrying one or more audio channels and processes the audio channels to generate stereo signals for output to a left and a right speaker of the audio headset. The processing determines a number of the audio channels carried in the received audio signal(s), adjusts level(s) of the audio channels based on the determined number of audio channels and/or adjusts gain and/or phase of the audio channels to control a perceived location of a listener wearing the headset relative to a source of sounds carried in the stereo signals.||1. A method, comprising: |
in an audio headset:
receiving one or more audio signals carrying audio channels; and
processing said audio channels to generate stereo signals for output to a left and a right speaker of said audio headset, wherein said processing comprises:
determining a quantity of said audio channels carried in said one or more received audio signals, wherein said quantity of said audio channels is determined to be at least six audio channels when a level of audio on a subwoofer channel and/or on a center channel of said one or more received audio signals is above a threshold during a determined time period;
adjusting a level of one or more of said audio channels based on said quantity of said audio channels carried in said one or more received audio signals; and
combining said audio channels to generate said stereo signals.
|Yes||-||Yes||Yes||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||Yes||-||Yes||-||-||-||Yes||Yes||-||-||-||-||-||-||2014-10-07||2015-09-22||2013-10-09||1001||H04R000502 | H04S000100||H04S0001005 | H04S0003004 | H04S240001||Kulavik, Richard | Kuruba Buchannagari, Shobha Devi||Voyetra Turtle Beach Inc||Voyetra Turtle Beach Inc||0||US9143878B2 | US20150098598A1 | WO2015054385A1|
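The channel-count heuristic in claim 1 of US9143878B2 — treat the stream as at least six channels when the level on the subwoofer and/or center channel exceeds a threshold during a determined time period — reduces to a simple check. Parameter names and the stereo fallback value are our assumptions for illustration.

```python
def detect_channel_count(sub_level, center_level, threshold):
    """Infer the audio channel count from measured channel levels.

    `sub_level` / `center_level` are the levels observed on the
    subwoofer and center channels over the observation period.
    Returns 6 (e.g. a 5.1 stream) when either exceeds `threshold`,
    otherwise falls back to 2 (stereo). Illustrative sketch only.
    """
    if sub_level > threshold or center_level > threshold:
        return 6   # at least six audio channels detected
    return 2       # assume plain stereo
```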
|45||US9143858B2||User Designed Active Noise Cancellation (Anc) Controller For Headphones||Embodiments are directed towards enabling headphones to perform active noise cancellation for a particular user. Each separate user may enable individualized noise canceling headphones for one or more noise environments. When the user is wearing the headphones in a quiet environment, a user may employ a computer to initiate determination of a plant model of each ear cup specific to the user. When the user is wearing the headphones in a target noise environment, the user may utilize the computer to initiate determination of operating parameters of a controller for each ear cup of the headphones. The computer may provide the operating parameters of each controller to the headphones. And the operation of each controller may be updated based on the determined operating parameters. The updated headphones may be utilized by the user to provide active noise cancellation.||1. A method for providing active noise cancellation for headphones worn by a user, comprising: |
when the headphones are worn by the user in a current quiet environment, determining a plant model for each ear cup of the headphones for the user based on at least one reference audio signal provided by at least one speaker within each ear cup and an audio signal captured at the same time by a microphone located within each ear cup;
when the headphones are worn by the user in a current noise environment, determining at least one operating parameter for each controller that corresponds to each ear cup based on at least each ear cup's corresponding plant model and at least one other audio signal from the current noise environment which is captured at the same time by at least one microphone that corresponds to each ear cup;
updating at least one operation of each controller for each ear cup based on the at least one determined operating parameter for each controller; and
employing the updated controllers to provide active noise cancellation when the headphones are worn by at least the user.
|Yes||-||Yes||-||-||-||-||-||-||Yes||Yes||-||Yes||-||-||-||-||-||Yes||-||-||Yes||-||-||-||Yes||-||-||-||-||Yes||-||2013-12-17||2015-09-22||2012-03-29||1001||A61F001106 | G10K0011178 | H04R000300 | H04R000110||H04R0003002 | G10K00111784 | G10K22103033 | G10K22103035 | G10K22103055 | G10K2210504 | H04R00011083||Alves, Rogerio Guedes | Zuluaga, Walter Andrés||Csr Technology Inc||Csr Technology Inc||0||US9143858B2 | DE102014018843A1 | GB201208152D0 | GB201421652D0 | GB2501325A | GB2522760A | US20130259253A1 | US20140105412A1|
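US9143858B2 fits a per-user plant model from a reference signal played in each ear cup and the signal captured by the in-cup microphone. The patent does not name a specific algorithm; a normalized-LMS (NLMS) adaptive filter is a standard stand-in for this kind of system identification, sketched here with pure-Python arithmetic.

```python
def nlms_update(w, x_buf, d, mu=0.1, eps=1e-8):
    """One normalized-LMS adaptation step (illustrative stand-in).

    w     : current filter taps (list of floats)
    x_buf : most recent reference samples, same length as `w`
    d     : desired sample, e.g. the in-ear-cup microphone capture
    Returns the updated taps and the modelling error.
    """
    # Filter output for the current input buffer.
    y = sum(wi * xi for wi, xi in zip(w, x_buf))
    e = d - y  # error between captured audio and model output
    # Normalize the step size by the input power to keep the
    # adaptation stable across signal levels.
    norm = sum(xi * xi for xi in x_buf) + eps
    w = [wi + mu * e * xi / norm for wi, xi in zip(w, x_buf)]
    return w, e
```

Iterating this update over the quiet-environment recording converges the taps toward the ear cup's acoustic response, which then parameterizes the per-cup controller.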
|46||US9143853B2||Headset With Turnable Ear Hook With Two Off Positions||A headset (1) comprising a switch (15; 19) and an actuating member (13), which is turnable in relation to the housing (2) about a first axis (A1). An ear hook (3; 32) is attached to the actuating member (13). The ear hook (3; 32) can be arranged in a right ear mode for wearing the headset (1) at the right ear and a left ear mode for wearing the headset at the left ear. The switch (15; 19) switches the headset electronics from an active state to a passive state, when the ear hook (3) is turned about the first axis (A1) in a first direction (P1) from an active position to a first passive position. The switch (15; 19) switches the headset electronics from the active state to the passive state when the ear hook (3) is turned about the first axis (A1) in the second direction (P2) from the active position to a second passive position. |
|1. A headset comprising a housing with headset electronics and an ear hook for attaching the headset to a user's ear, and for switching the headset from an active, power on state, to a passive, standby or power off state, wherein |
the headset comprises a switch,
an actuating member, which is turnable in relation to the housing about a first axis,
the ear hook is attached to the actuating member,
the ear hook can be arranged in a right ear mode for wearing the headset at the right ear and a left ear mode for wearing the headset at the left ear,
wherein in the passive state the ear hook and the housing are substantially overlapping in a compact storage position, and in the active state the ear hook is rotated generally orthogonal to the housing, in a user wearable position,
the switch switches the headset electronics from an active state to a passive state, when the actuating member is turned about the first axis in a first direction from an active position to a first passive position and from the passive state to the active state, when turned in a second opposite direction from the first passive position to the active position, wherein
the switch switches the headset electronics from the active state to the passive state when the actuating member is turned about the first axis in the second direction from the active position to a second passive position and from the second passive state to the active state when turned in the first direction from the second passive position to the active position,
wherein the housing comprises an inner side facing the user's ear during use and an outer side facing away from the user's ear during use, and the headset further comprises a speaker tower extending from the inner side along the first axis, and
wherein the actuating member constitutes a turnable part of the speaker tower.
|Yes||-||-||-||-||-||-||-||-||-||-||-||-||-||-||Yes||-||-||Yes||-||-||-||-||-||-||-||-||Turning Off And On Based On Position Relative To Ear||-||-||-||-||2012-06-07||2015-09-22||2009-10-16||1001||H04R002500 | H04R000110||H04R00011041 | H04R0001105 | H04R2201109||Sorensen, Michael||Sorensen Michael | Gn Netcom As||Gn Store Nord A/S||0||US9143853B2 | CN102577431A | CN102577431B | EP2489202A1 | US20120243704A1 | WO2011044897A1|
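Claim 1 above describes a two-direction rotary switch: turning the ear hook away from the active position in either direction powers the headset down, and turning it back powers it up. A minimal sketch of that state machine follows; the position names and direction encoding are invented for illustration, not taken from the patent.

```python
# Hedged sketch of the claimed two-direction rotary power switch.
ACTIVE, PASSIVE_1, PASSIVE_2 = "active", "passive_1", "passive_2"

class RotaryPowerSwitch:
    def __init__(self):
        self.position = ACTIVE
        self.powered = True

    def turn(self, direction):
        """direction: +1 = first direction, -1 = second (opposite) direction."""
        if self.position == ACTIVE:
            # Turning away from the active position in either direction powers off.
            self.position = PASSIVE_1 if direction == +1 else PASSIVE_2
            self.powered = False
        elif self.position == PASSIVE_1 and direction == -1:
            self.position, self.powered = ACTIVE, True
        elif self.position == PASSIVE_2 and direction == +1:
            self.position, self.powered = ACTIVE, True
```

This also explains why the claim supports both right-ear and left-ear wearing modes: either rotation direction away from the active position is a valid "off" gesture.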
|47||US9143594B2||Mass Deployment Of Communication Headset Systems||The present disclosure relates to devices, systems and methods for programming base units of communication headset systems with new or updated configuration parameters by a portable or handheld programming unit.||1. A method of mass deployment of set up parameters from a reference base unit and special reference headset configured to operate with the reference base unit to a plurality of other base units each also having a normal headset associated therewith, of a communication headset system having a headset, the method comprising steps of: |
i) configuring said reference headset to operate in a first mode of audio communication with said base reference unit, and a second transfer programming mode,
ii) configuring said reference headset to switch between modes,
iii) determining a set of base configuration parameters related to an interface between the reference base unit and a telecommunication network,
iv) pairing said reference base unit with a reference headset,
v) uploading a set of uniform base configuration parameters from the reference base unit to the reference headset selected from at least one of the following of the group of termination switch settings of receive and transmit signals, microphone gain setting, transmit volume setting, hook-switch protocol to the reference headset RF transmission power selection, sound bandwidth mode selection, audio sampling frequency, audio protection level selection;
vi) determining the current operating mode of the reference headset and switching said reference headset to transfer mode and uploading said network parameters to the reference headset;
vii) moving the reference headset to a location within communications range of another of said plurality of base units and coupling said reference headset to said base units,
viii) linking to the reference headset to said another base unit through a data interface,
ix) transmitting a signal from the reference headset to said another base unit to put said another base unit in a data transfer mode by a user movement to change to transfer mode to transfer the set of base configuration parameters in said another base unit,
x) storing the set of received base configuration parameters in said another base unit,
xi) decoupling said reference headset from one of said base units and switching said reference headset back to communications mode thereby allowing said reference headset to communicate through said another of said base units to download set up parameters, so that an operator may copy said base parameters into said reference headset and download identical parameters to other base units in said location from a single set of uploaded parameters, and
xii) confirming recoupling said normal headset to said base unit, thereby allowing said base unit and normal headset to operate with identical set up parameters as other base units receiving parameters from said reference headset;
wherein the user movement includes movement of the user's headset in a predetermined pattern.
|Yes||-||-||-||Yes||Yes||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||Yes||Yes||-||-||-||-||Yes||Yes||2013-09-03||2015-09-22||2010-08-23||1001||H04M000160 | H04M0001725 | H04M000351 | H04W000822||H04M00016066 | H04M00016058 | H04M000172502 | H04W000822 | H04M000351||Goldman, Tomasz Jerzy||Gn Netcom As||Gn Store Nord A/S||0||US9143594B2 | CN102378080A | EP2424202A1 | US20120052852A1 | US20140004910A1 | US8606334B2|
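The twelve-step method above amounts to cloning one base unit's configuration, through a mode-switching reference headset, into many other base units. A simplified sketch of that flow follows; the class names, field names, and dictionary-based parameter transport are assumptions for illustration (the real transfer runs over the headset's RF/data interface).

```python
# Illustrative sketch of the claimed parameter-cloning flow.
class BaseUnit:
    def __init__(self, params=None):
        self.params = dict(params or {})

class ReferenceHeadset:
    def __init__(self):
        self.mode = "audio"          # "audio" (communications) or "transfer" mode
        self.params = {}

    def upload_from(self, base):
        self.mode = "transfer"       # steps i/ii/vi: switch to transfer mode
        self.params = dict(base.params)   # step v: copy uniform base parameters

    def deploy_to(self, base):
        base.params = dict(self.params)   # steps ix/x: transmit and store
        self.mode = "audio"          # step xi: back to communications mode

reference = BaseUnit({"mic_gain": 3, "tx_volume": 5, "bandwidth_mode": "wide"})
headset = ReferenceHeadset()
headset.upload_from(reference)
others = [BaseUnit(), BaseUnit()]
for b in others:
    headset.deploy_to(b)             # steps vii/viii: walk to each unit and couple
```

The point of the design is that the headset, not a laptop or network link, is the carrier of the uniform parameter set.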
|48||US9142141B2||Determining Exercise Routes Based On Device Determined Information||A device includes at least one computer readable storage medium bearing instructions executable by a processor and at least one processor configured for accessing the computer readable storage medium to execute the instructions. The instructions configure the processor for accessing terrain information representing preferred terrain of a user of the device, accessing map information, accessing location information indicating current location of the user, and receiving user input indicating a desire for route information. Based at least in part on time constraints indicated from calendar information, the instructions configure the processor for accessing the location information, terrain information, and map information to determine a route and audibly and/or visually displaying the route on the device.||1. A device comprising: |
at least one computer memory that is not a transitory signal and that comprises instructions executable by at least one processor for:
receiving first input of terrain information, the terrain information comprising data related to at least one of a desired net elevation gain, a desired net elevation loss, a desired total elevation gain, and a desired total elevation loss;
accessing map information;
accessing location information indicating current location of the user;
receiving second input indicating a desire for route information;
responsive to receipt of the second input, determine at least one route based on the location information, the terrain information, and the map information such that the at least one route is determined based at least partially on the terrain information; and
audibly and/or visually presenting the at least one route at the device.
|-||Yes||Yes||-||-||Yes||Yes||Yes||-||-||-||-||-||Yes||-||-||-||Yes||-||Yes||Yes||-||-||-||-||-||-||Heat And Proximity Detection||-||-||-||-||2013-09-25||2015-09-22||2013-09-17||1001||G01C002136 | A61B000500 | A61B00050205 | A61B0005021 | A61B0005024 | A63B007106 | G01C002100 | G01C002120 | G01C002134 | G01S001919 | G06F000301 | G06F00030481 | G06F00030484 | G06F000316 | G06F001730 | G06F001900 | G06Q001006 | G08B002501 | G09B001900 | G10L001500 | H04B000500 | H04L002906 | H04W000400 | H04W001208 | A61B000511 | A61B0005117 | A61B0005145 | H04M0001725||G09B00190038 | A61B000502055 | A61B0005021 | A61B000502438 | A61B00054815 | A63B007106 | G01C002100 | G01C002120 | G01S001919 | G06F0003017 | G06F00030481 | G06F00030484 | G06F0003165 | G06F00173074 | G06F00193481 | G06Q00100639 | G08B0025016 | G10L001500 | H04B00050025 | H04L00630853 | H04W0004008 | H04W001208 | A61B000511 | A61B00051172 | A61B00051176 | A61B000514532 | A61B000514542 | H04M00017253 | H04M225002 | H04M225004 | H04M225012||Yeh, Sabrina Tai-Chen | Young, David Andrew | Friedlander, Steven||Sony Corp||Sony Corp||0||US9142141B2 | CN104436615A | CN104460980A | CN104460981A | CN104460982A | CN104469585A | JP2015058362A | JP2015058363A | JP2015058364A | JP2015059935A | JP2015061318A | KR2015032169A | KR2015032170A | KR2015032182A | KR2015032183A | KR2015032184A | US20150079562A1 | US20150079563A1 | US20150081056A1 | US20150081066A1 | US20150081067A1 | US20150081209A1 | US20150081210A1 | US20150082167A1 | US20150082408A1 | US8795138B1 | US9224311B2 | WO2015041970A1 | WO2015041971A1|
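The claimed route determination combines map, location, and terrain information, where terrain preference includes desired elevation gain or loss. A toy sketch of just the terrain-matching step follows; the route data and nearest-match selection rule are invented for illustration.

```python
# Hedged sketch: pick the candidate route whose total elevation gain best
# matches the user's stated terrain preference.
def pick_route(routes, desired_total_gain):
    """routes: list of (name, total_elevation_gain_m) pairs."""
    return min(routes, key=lambda r: abs(r[1] - desired_total_gain))

routes = [("river loop", 40), ("hill circuit", 220), ("ridge out-and-back", 510)]
best = pick_route(routes, desired_total_gain=200)
```

In the full claim the selection would also be constrained by current location, map data, and (per the abstract) calendar time constraints.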
|49||US9137598B2||Headphone||A headphone includes a headphone assembly which includes a head band, two joining structures and two in-ear components pivoted to two ends of the head band by the joining structures, and a sensor module which is disposed in the headphone assembly and includes an upper part, a lower part and a press sensor disposed between the upper part and the lower part and having a sensing face. The upper part and the lower part are designated with inner structures of the head band or the in-ear components. The press sensor detects states of the headphone by judging whether the sensing face is pressed by the upper or lower part by virtue of elastic deformation or movement of the upper part and the lower part at the head band and the joining structure while wearing and removing the headphone. The headphone uses signals from the press sensor to control actions thereof.||1. A headphone, comprising: |
a headphone assembly including a head band, a pair of in-ear components and a pair of joining structures, the in-ear components being pivoted to two distal ends of the head band by the joining structures respectively; and
a sensor module disposed in the headphone assembly and coupled with the headphone assembly, the sensor module including an upper part, a lower part disposed apart opposite the upper part and a press sensor disposed between the upper part and the lower part, the press sensor having a sensing face,
wherein the upper part and the lower part are designated with inner structures of the head band or the in-ear components of the headphone assembly, the press sensor detects the use states of the headphone by judging whether the sensing face thereof is pressed by the upper part or the lower part by virtue of elastic deformation or movement of the upper part and the lower part at the head band and the joining structure of the headphone assembly while wearing and taking off the headphone, then the headphone uses signals from the press sensor to control actions thereof,
wherein the sensor module is located in the joining structure of the headphone assembly, a cover assembly at the connection of the head band and the in-ear component includes a front cover and a rear cover which act as the upper part and the lower part of the sensor module respectively, the front cover and the rear cover are coupled with each other and are pivoted together by the joining structure to realize a relative movement therebetween, the press sensor is disposed with the sensing face stretching between the front cover and the rear cover for touching or disconnecting from the front cover or the rear cover by virtue of the relative movement of the front cover and the rear cover.
|Yes||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||Press Sensor To Detect Whether The Headphones Are Worn By The User||-||-||-||Yes||2014-06-30||2015-09-15||2014-01-10||1001||H04R000110 | H04R0005033||H04R00011058 | H04R00011041 | H04R0005033 | H04R00011008 | H04R00011066 | H04R00050335 | H04R242007||Chang, Yu Chao | Lee, Tsung Chieh||Cheng Uei Prec Ind Co Ltd||Cheng Uei Prec Ind Co Ltd||0||US9137598B2 | EP2894875A1 | TWM477745U | US20150201268A1|
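The press sensor's role above, inferring wear state from elastic deformation of the cover parts, reduces in software to edge detection on a pressed/released signal. A hedged sketch follows; the don/doff event names are assumptions, and the claim itself only says the headphone "uses signals from the press sensor to control actions."

```python
# Minimal sketch: derive wearing/removal events from the press-sensor signal.
def wear_events(pressed_samples):
    """Yield 'don'/'doff' events from a stream of boolean pressed readings."""
    previous = False
    for pressed in pressed_samples:
        if pressed and not previous:
            yield "don"     # sensing face newly pressed: headphone put on
        elif previous and not pressed:
            yield "doff"    # pressure released: headphone taken off
        previous = pressed

events = list(wear_events([False, True, True, False, True]))
```

Typical actions keyed to these events would be pause/resume of playback, though the claim does not limit the controlled actions.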
|50||US9134793B2||Headset Computer With Head Tracking Input Used For Inertial Control||A Head-tracker is built into a headset computer as a user input device. A user interface navigation tool utilizes the head tracking but with inertial control. The navigation tool is formed of two different sized circles concentrically depicted, and a pointer. The pointer is moveable within the two circles defining inner and outer boundaries. The pointer represents user's head position and movement sensed by the head tracker. The HSC displays a document and pans (navigates) the document as a function of user head movement sensed by the head tracker and illustrated by the navigation tool. The direction of movement of the pointer depicted in the navigation tool defines pan direction of the displayed document. Pan speed of the displayed document is defined based on position of the pointer, with respect to the inner and outer circle boundaries in the navigation tool.||1. A method of controlling document navigation in a headset computer, the method comprising: |
overlaying a navigator tool on a subject document displayed on the headset computer being worn by a user;
indicating, at the navigator tool and in response to received head movement of the user at the headset computer, a representation of the received head movement; and
panning the subject document at a speed and in a direction based on the received head movement,
wherein overlaying the navigator tool includes overlaying an inner boundary and an outer boundary, the inner boundary and outer boundary being concentric circles having different diameters and further overlaying a pointer configured to move within the inner boundary and the outer boundary;
wherein indicating the representation of the received head movement includes indicating the representation by positioning the pointer with respect to the origin of the two circles;
wherein panning the subject document includes panning the subject document at the speed based on a distance from the origin minus a radius of the inner boundary.
|-||Yes||Yes||-||Yes||Yes||-||Yes||-||-||-||-||-||Yes||-||-||-||-||-||Yes||-||-||-||-||-||-||-||-||-||-||Yes||-||2013-03-13||2015-09-15||2013-01-04||1001||G09G000508 | G02B002701 | G06F000301 | G06F00030346 | G06F0003038 | G06F00030481 | G06F00030485||G06F0003011 | G02B0027017 | G06F0003012 | G06F00030346 | G06F000304812 | G06F00030485 | G02B2027014 | G02B20270187||Mcdonald, Lee | Jacobsen, Jeffrey J. | Pombo, Stephen A. | Parkinson, Christopher||Kopin Corp||Kopin Corp||0||US9134793B2 | US20140191964A1 | WO2014107220A1|
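The last clause of the claim fixes the pan-speed rule precisely: speed is based on the pointer's distance from the circles' origin minus the radius of the inner boundary. A sketch of that rule follows; the linear gain constant and the clamping of the pointer to the outer circle are assumptions layered on top of the claimed formula.

```python
import math

# Sketch of the claimed pan-speed rule for the head-tracking navigator tool.
def pan_speed(pointer_x, pointer_y, inner_radius, outer_radius, gain=1.0):
    distance = math.hypot(pointer_x, pointer_y)
    distance = min(distance, outer_radius)           # pointer confined to the outer circle
    return gain * max(0.0, distance - inner_radius)  # no panning inside the inner boundary
```

The inner circle thus acts as a dead zone that absorbs small involuntary head movements, while distance beyond it scales the pan rate.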
|51||US9131312B2||Physiological Monitoring Methods||A method of monitoring a subject via an earbud module includes positioning the earbud module within an ear of the subject such that a sensor region thereof matingly engages a region of the ear at the intersection of the anti tragus and acoustic meatus and is oriented in a direction away from the ear canal. Physiological information is then detected and/or measured via the optical sensor. The optical sensor includes an optical emitter and an optical detector, and detecting and/or measuring physiological information about the subject includes directing optical energy at the ear region via the optical emitter and detecting optical energy absorbed, scattered, and/or reflected by the ear region via the optical detector. Environmental information in a vicinity of the subject may be monitored via an environmental sensor associated with the earbud module and subject motion may be monitored via a motion sensor associated with the earbud module.||1. A method of monitoring a subject via an earbud module, the earbud module having a sensor region with an optical sensor, the method comprising: |
positioning the earbud module within an ear of the subject, wherein the sensor region is contoured to matingly engage a region of the ear at the intersection of the anti tragus and acoustic meatus and is oriented in a direction away from the ear canal; and
detecting or measuring physiological information about the subject via the optical sensor.
|Yes||Yes||-||-||-||Yes||-||Yes||-||-||-||-||-||Yes||-||-||-||-||Yes||-||Yes||Yes||Yes||-||Yes||-||-||-||-||-||-||-||2014-05-08||2015-09-08||2009-02-25||1001||A61B000500 | A61B00050205 | A61B0005024 | A61B00050476 | A61B000511 | A61B00051455 | A61B000516 | F21V000800 | G06F001900 | H04R000110 | A61B000501 | A61B0005021 | A61B0005026 | A61B00050295 | A61B000508 | A61B0005091||A61B00050205 | A61B000500 | A61B00050013 | A61B00050022 | A61B00050024 | A61B00050059 | A61B00050082 | A61B00050084 | A61B000501 | A61B000502055 | A61B0005021 | A61B000502427 | A61B000502433 | A61B00050261 | A61B00050476 | A61B00050816 | A61B000511 | A61B00051107 | A61B00051118 | A61B000514532 | A61B00051455 | A61B000514551 | A61B0005165 | A61B0005418 | A61B00054812 | A61B00054845 | A61B00054848 | A61B0005486 | A61B00054866 | A61B00054875 | A61B00054884 | A61B00056803 | A61B00056815 | A61B00056817 | A61B00056819 | A61B00056826 | A61B00056838 | A61B0005721 | A61B00057214 | A61B00057282 | A61B0005742 | A61B00057475 | G02B00060001 | G06F00193418 | H04R0001105 | H04R00011091 | A61B0005024 | A61B000502405 | A61B000502416 | A61B0005026 | A61B00050295 | A61B0005091 | A61B0005411 | A61B0005415 | A61B25600242 | A61B25620233||Leboeuf, Steven Francis | Tucker, Jesse Berkley | Aumer, Michael Edward||Valencell Inc||Valencell Inc||0||US9131312B2 | EP2400884A2 | EP2400884A4 | EP2405805A2 | EP2405805A4 | EP2932729A1 | EP2932729A4 | JP05789199B2 | JP2012518515A | JP2015231550A | US20100217098A1 | US20100217099A1 | US20100217100A1 | US20100217102A1 | US20130131519A1 | US20140135596A1 | US20140140567A1 | US20140171755A1 | US20140171762A1 | US20140180039A1 | US20140243620A1 | US20140249381A1 | US20140288394A1 | US20140288395A1 | US20140323830A1 | US20150032009A1 | US20150073236A1 | US20150080741A1 | US20150105633A1 | US20150119657A1 | US20150126824A1 | US20150131837A1 | US20150157222A1 | US20150289818A1 | US20150342467A1 | US8647270B2 | US8700111B2 | US8788002B2 | US8886269B2 | US8923941B2 | US8929965B2 | US8929966B2 | US8934952B2 | US8942776B2 | US8961415B2 | US8989830B2 | WO2010098912A2 | WO2010098912A3 | WO2010098915A1 | WO2010099066A2 | WO2010099066A3 | WO2010099190A2 | WO2010099190A3 | WO2014092932A1|
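The optical sensing described above, directing optical energy at the ear region and detecting what is absorbed, scattered, or reflected, is photoplethysmography (PPG). The deliberately naive peak-counting sketch below illustrates how a pulse rate can be read from a reflected-light signal; a real earbud PPG pipeline needs filtering and motion-artifact rejection (hence the claimed motion sensor) well beyond this.

```python
# Toy pulse-rate estimate from a sampled reflected-light (PPG) signal.
def pulse_bpm(samples, sample_rate_hz):
    peaks = 0
    for i in range(1, len(samples) - 1):
        # Count local maxima as heartbeats (no filtering: illustration only).
        if samples[i] > samples[i - 1] and samples[i] >= samples[i + 1]:
            peaks += 1
    duration_min = len(samples) / sample_rate_hz / 60.0
    return peaks / duration_min if duration_min else 0.0
```

The synthetic test signal below is not physiological; it only exercises the peak counter.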
|52||US9130660B2||Multiple-Path Noise Cancellation||A communication device, such as a smartphone or tablet, includes a communication interface with noise cancellation logic. The noise cancellation logic includes a lead path and a reference path. A signal source provides a signal to the lead path and the references path. The signal is amplified along the lead path and the reference path. Distortion is imparted onto the signal during amplification on the lead path. A correction signal based on the difference between the amplified signal on the lead path and the amplified signal on the reference path is generated by the noise cancellation logic. The correction signal may reflect to distortion imparted during amplification on the lead path. The correction signal is differentially combined with the amplified signal on the lead path to attempt to remove the distortion and generate an output.||1. A device, comprising: |
a first path configured to receive an input signal, the first path comprising a first amplifier, the first path configured to produce a first signal based on the input signal and a first distortion imparted by the first amplifier;
a second path configured to receive the input signal, the second path comprising a second amplifier, the second path configured to produce a second signal based on the input signal and a second distortion imparted by the second amplifier;
a first combiner connected to the first and second paths, the first combiner configured to combine the first and second signals to produce a correction signal based on a difference between the first and second distortion; and
a second combiner connected to the first path and the first combiner, the second combiner configured to combine the first signal and the correction signal to generate an output signal.
|-||Yes||Yes||-||-||-||-||-||-||-||-||-||-||-||Yes||-||Yes||-||Yes||-||-||-||-||-||-||-||-||-||-||-||-||Yes||2014-03-07||2015-09-08||2014-01-08||1001||H04B000100 | H04B000104 | H04B000162||H04B000162 | H04B00010483 | H04B20010425||Afsahi, Ali||Broadcom Corp||Broadcom Corp||0||US9130660B2 | US20150195002A1|
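The two-path cancellation above can be traced numerically: the lead-path and reference-path outputs differ only in their distortion terms, so their difference is a correction signal that, subtracted from the lead path, leaves only the reference path's (presumably smaller) distortion. The float-level sketch below is an arithmetic illustration of claim 1, not an amplifier model.

```python
# Numeric sketch of the claimed two-path distortion cancellation.
def cancel(input_signal, gain, lead_distortion, reference_distortion):
    lead = gain * input_signal + lead_distortion            # first path output
    reference = gain * input_signal + reference_distortion  # second path output
    correction = lead - reference        # first combiner: distortion difference
    return lead - correction             # second combiner: lead distortion removed

output = cancel(1.0, 2.0, 0.25, 0.0)
```

With a clean reference path (zero distortion), the output equals the ideally amplified input, which is the point of the scheme.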
|53||US9129500B2||Apparatus For Monitoring The Condition Of An Operator And Related System And Method||An apparatus includes a headset having one or more speaker units. Each speaker unit is configured to provide audio signals to an operator. Each speaker unit includes an ear cuff configured to contact the operator's head. The headset further includes multiple sensors configured to measure one or more characteristics associated with the operator. At least one of the sensors is embedded within at least one ear cuff of at least one speaker unit. The sensors could include an electrocardiography electrode, a skin conductivity probe, pulse oximetry light emitting diodes and photodetectors, an accelerometer, a gyroscope, or a temperature sensor. The apparatus could also include a processing unit configured to analyze audio signals captured by a microphone unit of the headset to identify respiration by the operator or at least one voice characteristic of the operator.||1. An apparatus comprising: |
one or more speaker units, each speaker unit configured to provide audio signals to an operator, each speaker unit comprising an ear cuff configured to contact the operator's head;
a support structure configured to secure the apparatus to the operator's head;
multiple sensors configured to measure (i), one or more physiological characteristics associated with the operator, (ii) one or more environmental characteristics surrounding the operator, and (iii) one or more operator behaviors, at least one of the sensors embedded within or attached to at least one ear cuff of at least one speaker unit, the sensors including an electrocardiography electrode embedded in or attached to a lower portion of at least one ear cuff so as to be in proximity to an artery of the operator when the at least one ear cuff is in contact with the operator's head; and
at least one processing unit configured to (i) receive and analyze measurements associated with the one or more physiological characteristics, the one or more environmental characteristics, and the one or more operator behaviors and (ii) in response to the analyzed measurements, determine a measure of operator awareness associated with the operator and trigger feedback to the operator.
|Yes||-||Yes||-||-||Yes||-||Yes||-||-||-||-||Yes||-||Yes||-||-||-||-||-||Yes||Yes||-||-||Yes||Yes||-||-||-||-||-||-||2012-09-11||2015-09-08||2012-09-11||1001||H04R000110 | G08B002102 | G08B002106 | H04R000502||G08B002102 | G08B002106||Tenenbaum, Carl N. | Strickland, Julie N. | Saunders, Jeffrey H. | Wilds, Andrew M.||Tenenbaum Carl N | Strickland Julie N | Saunders Jeffrey H | Wilds Andrew M | Raytheon Co||-||0||US9129500B2 | US20140072136A1|
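The processing unit above fuses physiological, environmental, and behavioral measurements into a measure of operator awareness that can trigger feedback. The toy scoring sketch below uses heart rate and blink rate as stand-ins for the patent's much broader sensor set; the normalizations, weights, and threshold are all invented for illustration.

```python
# Toy operator-awareness score; indicators and threshold are assumptions.
def awareness_alert(heart_rate_bpm, blink_rate_hz, threshold=0.5):
    cardio = min(heart_rate_bpm / 70.0, 1.0)   # depressed heart rate lowers the score
    blink = min(blink_rate_hz / 0.3, 1.0)      # slow blinking suggests drowsiness
    return cardio * blink < threshold          # True -> trigger feedback to the operator
```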
|54||US9124982B2||Always On Headwear Recording System||A system that records audio and stores the recording is provided. The system includes first and second monitoring assemblies mounted in an earpiece that occludes and forms an acoustic seal of an ear canal. The first monitoring assembly includes an ambient sound microphone (ASM) to monitor an ambient acoustic field and produce an ASM signal. The second monitoring assembly includes an ear canal microphone (ECM) to monitor an acoustic field within the ear canal and produce an ECM signal. The system also includes a data storage device configured to act as a circular buffer for continually storing at least one of the ECM signal or the ASM signal, a further data storage device and a record-activation system. The record-activation system activates the further data storage device to record a content of the data storage device.||1. An Always-On Recording System (AORS) comprising: |
a monitoring assembly mounted on a mobile phone, the monitoring assembly including an ambient sound microphone (ASM) to monitor an ambient acoustic field proximate to the mobile phone, the ASM producing an ASM signal responsive to the ambient acoustic field;
a data storage device configured to act as a circular buffer for continually storing the ASM signal;
a further data storage device coupled to the data storage device; and
a record-activation system including software configured to activate the further data storage device to record a content of the data storage device.
|Yes||Yes||-||-||-||-||-||-||Yes||-||-||-||-||-||-||-||-||-||Yes||-||-||Yes||-||-||-||-||-||-||-||-||-||Yes||2013-09-09||2015-09-01||2007-04-09||1001||H04M000100 | G06F000316 | H04R000110 | H04R000304 | H04R002900||G06F000316 | H04R00011091 | H04R000304 | H04R002900 | H04R242007 | H04R246015||Goldstein, Steven Wayne | Usher, John||Personics Holdings Inc | Personics Holdings Inc||Personics Holdings Llc||0||US9124982B2 | US20080253583A1 | US20120123573A1 | US20140012403A1 | US20140219464A1 | US20150370527A1 | US8111839B2 | US8553905B2 | WO2008124786A2 | WO2008124786A3|
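The always-on architecture above is a classic circular-buffer recorder: the ambient-microphone signal continually overwrites a bounded buffer, and the record trigger copies the buffer's contents into longer-term storage, so audio from just before the trigger is preserved. A sketch follows; the buffer size is an arbitrary assumption.

```python
from collections import deque

# Sketch of the claimed always-on recording flow.
class AlwaysOnRecorder:
    def __init__(self, buffer_samples=8):
        self.ring = deque(maxlen=buffer_samples)  # circular buffer (ASM signal)
        self.archive = []                          # further data storage device

    def feed(self, sample):
        self.ring.append(sample)                   # oldest samples are overwritten

    def trigger_record(self):
        self.archive.extend(self.ring)             # record-activation: dump buffer
```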
|55||US9124975B2||Headset Device With Fitting Memory||A headset device (1) comprising an attachment device (2) for attaching the headset device (1) to the head (3) of a user. The headset device (1) also comprises an audio device (4, 6) for transducing audio to an electrical signal or vice versa and adjustment means (7, 8) for adjusting the mutual positions and/or orientations of the attachment device (2) and the audio device to (4, 6) to a user-specific position, in which the headset device (1) is adjusted to the geometry of the users head (3). The adjustment means (7, 8) comprises selecting means (9; 10; 20; 41) for storing a first user-specific position, whereby a user quickly can readjust the headset device (1) from a non-user-specific position or other user-specific position to the first user-specific position.||1. A headset device comprising |
an attachment device for attaching the headset device to the head of a user,
an audio device, including a microphone boom and a pivot axis, said boom being rotatable on said axis, for transducing audio to an electrical signal or vice versa, an adjuster capable of adjusting the mutual positions and/or orientations of the audio device to a user-specific position, wherein the adjuster comprises
a selector capable of storing a first user-specific angular position of said microphone boom, whereby a user quickly can readjust the headset device from a non-user-specific position to the first user-specific position whereby the actual position is stored as the first user-specific position, when the user activates the selector.
|Yes||Yes||-||Yes||-||-||-||Yes||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||-||2013-06-27||2015-09-01||2012-06-29||1001||H04R000908 | H04R000108 | H04R000110||H04R0001105 | H04R000108 | H04R2201107 | H04R2201109||Andersen, Michael Hoby||Andersen Michael Hoby | Gn Netcom As||Gn Store Nord A/S||0||US9124975B2 | CN103517175A | EP2680607A1 | US20140003646A1|
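The fitting-memory claim above stores a user-specific boom angle when the selector is activated and lets the user snap back to it from any other position. A minimal sketch with assumed method names:

```python
# Sketch of the claimed microphone-boom fitting memory.
class BoomFittingMemory:
    def __init__(self):
        self.angle = 0.0          # current boom angle about its pivot axis
        self.stored_angle = None  # first user-specific position

    def rotate_to(self, angle):
        self.angle = angle

    def store(self):              # selector activated: remember current position
        self.stored_angle = self.angle

    def recall(self):             # readjust to the stored user-specific position
        if self.stored_angle is not None:
            self.angle = self.stored_angle
```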
|56||US9124970B2||System And Method For Using A Headset Jack To Control Electronic Device Functions||Systems and methods for automatically controlling an electronic device based on whether or not a headset is in a listening position are described. The existing wired stereo headset conductors may be used to provide power to a sensor and hardware subsystem within the headset. In some aspects, a sensor-enabled headset or headphones can sense whether each earbud of the headset is placed in the user's ears and communicate that information to an electronic device.||1. A device for communicating with an electronic device through a headset port, comprising: |
a control device comprising a first capacitive touch sensor, wherein the control device is configured to receive power and communicate with the electronic device through a headset connection; and
a modulation circuit within the control device and configured to modulate an electronic signal to the headset connection based on contact with the capacitive touch sensor; wherein the device is a headset having at least one earpiece, and wherein the at least one earpiece comprises the capacitive touch sensor; and wherein transmission of audio signals to the at least one earpiece may be discontinued if the at least one earpiece is not touching a user's ear.
|Yes||Yes||-||-||-||-||-||-||-||Yes||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||-||-||-||-||-||-||Yes||2013-07-22||2015-09-01||2013-07-22||1001||H04M000100 | G06F000316 | H04M000900 | H04R000110 | H04R000300 | H04R000504||H04R000504 | G06F0003165 | H04R00011041 | H04R000300 | H04R2205022 | H04R222561 | H04R242003 | H04R243001 | H04R246001||Rabii, Khosro Mohammad | Antao, Sherman Sebastian||Qualcomm Inc||Qualcomm Inc||0||US9124970B2 | US20150023516A1 | WO2015012964A1|
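The earpiece behavior above gates audio on capacitive ear contact: transmission to an earbud may be discontinued when its touch sensor reports it is not at the ear. In the sketch below, zeroing samples stands in for discontinuing transmission; the sample-by-sample pairing of audio and touch readings is an assumption for illustration.

```python
# Sketch: pass audio through an earpiece only while it touches the user's ear.
def route_audio(samples, touch_readings):
    return [s if touched else 0 for s, touched in zip(samples, touch_readings)]

out = route_audio([3, 5, 7, 9], [True, True, False, True])
```

The novelty claimed is less the gating itself than signaling the touch state back over the existing headset-jack conductors via the modulation circuit.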
|57||US9117443B2||Wearing State Based Device Operation||Methods and apparatuses for wearing state device operation are disclosed. In one example, a headset includes a sensor for detecting a headset donned state or a headset doffed state. The headset operation is modified based on whether the headset is donned or doffed.||1. A headset comprising: |
a wireless transceiver;
a donned or doffed detector configured to identify a headset donned state or a headset doffed state, wherein the headset donned state is the headset worn on a user's ear;
a text to speech application comprising instructions which when executed by the processor cause the headset to convert a text based message to audio speech; and
a text based message notification application comprising instructions which when executed by the processor cause the headset to receive a notification message at the headset from a computing device of a text based message, identify a transition from a headset doffed state to a headset donned state subsequent to receiving the notification message at the headset from the computing device of the text based message, and output playback options regarding the text based message at the speaker responsive to the transition from the headset doffed state to the headset donned state.
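The donned-transition logic in the last claim can be sketched as a tiny state machine: a text-message notification that arrives while the headset is doffed is held, and playback options are output at the speaker on the doffed-to-donned transition. The method names below are illustrative assumptions, and a string appended to a list stands in for speech output.

```python
# Sketch of the claimed wearing-state-based notification behavior.
class Headset:
    def __init__(self):
        self.donned = False
        self.pending_notification = None
        self.spoken = []                      # stands in for speaker output

    def notify(self, message):                # notification from the computing device
        self.pending_notification = message

    def set_donned(self, donned):             # donned/doffed detector update
        transitioned = donned and not self.donned
        self.donned = donned
        if transitioned and self.pending_notification:
            # Output playback options on the doffed -> donned transition.
            self.spoken.append(f"Play message? {self.pending_notification}")
            self.pending_notification = None
```

Deferring the prompt to the don event avoids reading messages aloud while nobody is wearing the headset.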