|PATENT ANALYTICS - "Technology Scouting Analysis"|
Disclaimer: The information in this report is provided solely for assisting in the independent evaluation of the patent portfolio and intellectual property (IP).
In no event shall TransactionsIP be liable for any incidental, consequential, or special damages of any kind, or any damages whatsoever associated with this report.
|TABLE OF CONTENTS|
|3||Master Patent Dataset|
|4||Critical Patent Reference Findings|
|5||Top 5 Competitor Intelligence|
|6||Patent List of Top 5 Competitors|
|7||Top Manufacturers and Vendors|
|9||Interesting Research Articles|
|10||List of Universities|
|To perform a Technology Scouting Analysis|
|The present technology relates to "Wearable Audio Device with Embedded Technology".|
The objective is to find wearable audio devices in which a Humidity Sensor, Temperature Sensor and Pressure Sensor provide sensing for healthcare and entertainment. The wearable device is not only for entertainment and for monitoring our physical wellbeing; it will blend seamlessly into our lives, providing a link to the Internet of Things (IoT) and beyond. Wearable devices allow us to manipulate our surroundings, as well as enable our surroundings to adapt to our immediate needs.
These days, wearables come in various forms such as smart watches, smart shoes, smart glasses, armbands, waist accessories, etc. Similarly, a user can keep a mobile phone in a front pocket, back pocket, or shirt pocket, in the hand, or on a table.
Wireless wearable sensor systems, such as the mobile force plate system, have also been developed to implement quantitative human kinematic and kinetic analysis, which may be applied in rehabilitation, clinical diagnosis and healthcare monitoring in the future.
The above-mentioned sensors (Humidity, Temperature and Pressure Sensors) are mainly used to solve the problems defined above, with the functionality and expandability of sophisticated engineering development platforms.
|This assignment relates to the scouting and analysis of patents describing “Wearable Audio Devices with Embedded Technology” using three different types of sensors - Humidity, Temperature and Pressure sensors. |
Wearables carry various types of sensors, but Humidity, Temperature and Pressure Sensors in particular have the potential to change the world, and Bluetooth Low Energy (BLE) has empowered devices with the power of sensing and communication to take complex decisions. These sensors measure the main parameters monitored in clinical practice and daily life; the framework and main modules utilized in the device, which constitute the basis of wearable sensor systems for users, are summarized in this report.
Monitoring methods and techniques in the wearable sensor system, such as single-parameter monitoring, multi-parameter monitoring and textile electrode technology, were reviewed according to recent research and applications of the technology.
These sensors are supported by applications for Android and iOS that allow developers to connect the device to the cloud out of the box, without any additional software development.
The wearable sensor systems that use these sensors can also serve special cases in healthcare and patient monitoring.
The wearable device includes a wearable sensor system; such systems are becoming smaller and more intelligent, and many of them have been commercialized, benefiting numerous users around the world. Various kinds of monitoring methods and techniques, such as direct monitoring, indirect monitoring, multi-parameter monitoring, single-parameter monitoring, textile technology, integration, wireless sensing and power supply, have been applied in these systems.
The future of wearable technology will see unprecedented growth and evolution in the next few years, and we’re all invited along for the ride. Sensors and Bluetooth Low Energy (BLE) have empowered devices with the power of sensing and communication to take complex decisions.
|Search Strategy |
|The following steps were undertaken, not necessarily in sequence, to perform the search. |
• Various keywords and classifications were used independently and/or in combination with each other to perform multiple searches in various databases.
• Various assignee names were used in combination with keywords and classifications to perform multiple searches in various databases.
• To supplement the search analytics, assignee standardization was performed to ensure accuracy in the patent count.
• Researchers have adopted a progressively evolving search strategy to identify the most relevant results within the project budget.
|Taxonomy relates to “Wearable Audio Devices with Embedded Technology” covering different modes of connection, different types of sensors, processing units and additional features, etc.|
|Technical Features||Mode of Connection||Wireless|
|Types of Sensors||Gyroscopic Sensor|
|Optical / Light Sensor|
|Piezoelectric / Capacitive Sensor|
|Physiology / Biometric Sensor|
|Processing Unit (Location)||Inside Earphone|
|Application Area||Internal (Physiological)|
|External (Environmental)||Noise Cancellation|
|The sheet shows all the patent/publication numbers with corresponding categorization list.|
|LANDSCAPE ANALYSIS - "AUDIO DEVICES WITH EMBEDDED TECHNOLOGY"|
|S.No||Publication Number||Title||Abstract||First Claim||Technical Features||Application Date||Publication Date||Earliest Priority Date||US Classification||IPC Classification||CPC Classification||Inventors||Assignee / Applicant||Assignee - Standardized||Count of Citing Patents||INPADOC Family Members|
|Mode of Connection||Types of Sensors||Processing Unit (Location)||Application Area||Additional Features|
|Wireless||Wired||Gyroscopic Sensor||Position Sensor||Pressure Sensor||Accelerometer||Magnetometer||Optical / Light Sensor||Acoustic Sensor||Piezoelectric / Capacitive Sensor||Image Sensor||Ultrasonic Sensor||Microphone||Physiology / Biometric Sensor||Temperature Sensor||Proximity Sensor||Vibration sensor||Humidity Sensor||Inside Earphone||Outside Earphone||Internal (Physiological)||External (Environmental)||Feedback||Video display||Gesture||Touch control|
|1||US9226090B1||Sound Localization For An Electronic Call||During an electronic call between two individuals, a sound localization point simulates a location in empty space from where an origin of a voice of one individual occurs for the other individual.||1. A method, comprising: |
capturing, with an electronic earphone located at a head of a talking person, binaural sound that will be provided to a listening person during a telephone call;
designating, with a computer system, a sound localization point in empty space that is away from and proximate to the listening person such that the sound localization point simulates an origin of the binaural sound at the empty space that the listening person hears during the telephone call;
adjusting, with the computer system, the binaural sound captured at the earphone of the talking person so the binaural sound originates during the telephone call from the sound localization point in empty space that is away from and proximate to the listening person; and
providing, with an electronic earphone located at a head of the listening person, the binaural sound to the listening person during the telephone call such that the origin of the binaural sound for the listening person occurs at the sound localization point in empty space that is away from and proximate to the listening person.
|-||Yes||Yes||Yes||-||Yes||Yes||-||-||-||-||-||-||-||-||-||-||-||-||Yes||Yes||Yes||-||-||Yes||Yes||-||Facial Motion Capture||-||-||-||-||2014-06-23||2015-12-29||2014-06-23||-||H04S000700||H04S0007303||Norris, Glen A. | Lyren, Philip Scott||Norris Glen A | Lyren Philip Scott||-||0||US9226090B1 | US20150373477A1|
|2||US9224382B2||Noise Cancellation||A noise cancellation signal is generated by generating an ambient noise signal, representing ambient noise, and generating a noise cancellation signal, by applying the ambient noise signal to a feedforward filter, where the feedforward filter comprises a high-pass filter having an adjustable cut-off frequency, and by applying a controllable gain. The noise cancellation signal is then applied to a loudspeaker, to generate a sound to at least partially cancel the ambient noise. An error signal is generated, representing unwanted sound in the region of the loudspeaker. The phase of the ambient noise signal is compared to a phase of the error signal, and the gain is controlled on the basis of a result of the comparison, taking account of a phase shift introduced by the high-pass filter when performing the comparison.||1. A method of generating a noise cancellation signal, the method comprising: |
generating an ambient noise signal, representing ambient noise;
generating a noise cancellation signal, by applying the ambient noise signal to a feedforward filter, wherein the feedforward filter comprises a high-pass filter having an adjustable cut-off frequency, and by applying a controllable gain;
applying the noise cancellation signal to a loudspeaker, to generate a sound to at least partially cancel the ambient noise; and
generating an error signal, representing unwanted sound in the region of the loudspeaker, wherein the method further comprises:
comparing a phase of the ambient noise signal to a phase of the error signal, and controlling said gain on the basis of a result of said comparison, and taking account of a phase shift introduced by said high-pass filter when performing said comparison.
|-||Yes||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||Yes||-||-||Yes||-||-||-||-||-||-||-||-||-||-||2013-10-15||2015-12-29||2012-10-12||-||G10K0011175 | G10K0011178||G10K0011175 | G10K0011178 | G10K22103027 | G10K22103028||Clemow, Richard||Cirrus Logic Internat Uk Ltd | Cirrus Logic Internat Semiconductor Ltd||Cirrus Logic Inc||0||US9224382B2 | GB201218346D0 | GB2506908A | GB2506908B | US20140105413A1|
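The US9224382B2 claim above describes a feedforward noise-cancellation loop whose gain is tuned by comparing the phase of the ambient-noise signal with that of the error signal, compensating for the phase shift of the high-pass filter. A minimal Python sketch of that control idea follows; it assumes a single-tone ambient noise so phase can be estimated by complex demodulation, and the first-order filter, gain-update rule and all constants are illustrative choices, not the patented implementation.

```python
import numpy as np

FS = 8000          # sample rate in Hz (illustrative)
TONE = 200.0       # ambient-noise tone frequency for this toy example

def high_pass(x, cutoff_hz, fs=FS):
    # First-order high-pass with adjustable cut-off (a stand-in for the
    # claim's feedforward filter).
    rc, dt = 1.0 / (2 * np.pi * cutoff_hz), 1.0 / fs
    alpha = rc / (rc + dt)
    y = np.zeros_like(x, dtype=float)
    for i in range(1, len(x)):
        y[i] = alpha * (y[i - 1] + x[i] - x[i - 1])
    return y

def tone_phase(x, freq_hz, fs=FS):
    # Phase of x at freq_hz via complex demodulation.
    t = np.arange(len(x)) / fs
    return np.angle(np.sum(x * np.exp(-2j * np.pi * freq_hz * t)))

def update_gain(gain, ambient, error, filter_phase_shift,
                step=0.05, tol=0.5):
    # Compare the ambient and error phases, compensating for the phase
    # shift introduced by the high-pass filter, then nudge the gain:
    # residual noise in phase with the ambient signal means cancellation
    # is too weak (raise gain); anti-phase residual means the loop is
    # over-cancelling (lower gain).
    dphi = (tone_phase(error, TONE) - tone_phase(ambient, TONE)
            - filter_phase_shift)
    dphi = np.angle(np.exp(1j * dphi))      # wrap to [-pi, pi]
    return gain + step if abs(dphi) < tol else max(gain - step, 0.0)
```

The wrap-to-[-π, π] step matters: without it, a filter phase shift near ±π would make an in-phase residual look anti-phase and drive the gain the wrong way.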
|3||US9224311B2||Combining Data Sources To Provide Accurate Effort Monitoring||By combining data from different sensors (on fitness device, mobile smartphone, smart clothing, other devices or people in same location), an intelligent system provides a better indicator of an individual's physical effort, using rich data sources to enhance quantified metrics such as distance/pace/altitude gain, to provide a clearer picture of an individual's exercise and activity.||1. A device comprising: |
at least one computer readable storage medium bearing instructions executable by a processor;
at least one processor configured for accessing the computer readable storage medium to execute the instructions to configure the processor for:
receiving signals from a position sensor from which the processor can calculate a speed and a distance over an interval of time ΔT;
receiving at least one signal representing at least one biometric condition of a user of the device;
adjusting a baseline value associated with the speed and/or distance based at least in part on the biometric condition to render an adjusted baseline; and
outputting an indicia of exercise effort based at least in part on the adjusted baseline.
|Yes||-||-||-||-||-||Yes||Yes||-||-||-||-||-||Yes||Yes||-||-||Yes||-||Yes||Yes||-||Yes||Yes||-||-||-||-||-||-||Yes||-||2014-04-17||2015-12-29||2013-09-17||-||A63B007100 | A61B000500 | A61B00050205 | A61B0005021 | A61B0005024 | A63B007106 | G01C002100 | G01C002120 | G01S001919 | G06F000301 | G06F00030481 | G06F00030484 | G06F000316 | G06F001730 | G06F001900 | G06Q001006 | G08B002501 | G09B001900 | G10L001500 | H04B000500 | H04L002906 | H04W000400 | H04W001208 | A61B000511 | A61B0005117 | A61B0005145 | H04M0001725||G09B00190038 | A61B000502055 | A61B0005021 | A61B000502438 | A61B00054815 | A63B007106 | G01C002100 | G01C002120 | G01S001919 | G06F0003017 | G06F00030481 | G06F00030484 | G06F0003165 | G06F00173074 | G06F00193481 | G06Q00100639 | G08B0025016 | G10L001500 | H04B00050025 | H04L00630853 | H04W0004008 | H04W001208 | A61B000511 | A61B00051172 | A61B00051176 | A61B000514532 | A61B000514542 | H04M00017253 | H04M225002 | H04M225004 | H04M225012||Yeh, Sabrina Tai-Chen | Fredriksson, Jenny Therese||Sony Corp||Sony Corp||0||US9224311B2 | CN104436615A | CN104460980A | CN104460981A | CN104460982A | CN104469585A | JP2015058362A | JP2015058363A | JP2015058364A | JP2015059935A | JP2015061318A | KR2015032169A | KR2015032170A | KR2015032182A | KR2015032183A | KR2015032184A | US20150079562A1 | US20150079563A1 | US20150081056A1 | US20150081066A1 | US20150081067A1 | US20150081209A1 | US20150081210A1 | US20150082167A1 | US20150082408A1 | US8795138B1 | US9142141B2 | WO2015041970A1 | WO2015041971A1|
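The US9224311B2 claim above combines position-derived speed and distance over an interval ΔT with a biometric reading to adjust a baseline effort value. The sketch below illustrates that flow in Python; the heart-rate scaling rule, the `dist * speed` baseline, and all names are our illustrative assumptions, not the patented method.

```python
import math

def distance_speed(positions, dt):
    # positions: list of (x, y) samples taken every dt seconds.
    dist = sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))
    elapsed = dt * (len(positions) - 1)
    return dist, dist / elapsed

def adjusted_effort(baseline, heart_rate, resting_hr=60.0):
    # Scale the position-derived baseline by how far the heart rate
    # sits above rest: the same pace at a higher heart rate is scored
    # as greater effort.
    return baseline * (heart_rate / resting_hr)

def effort_indicator(positions, dt, heart_rate):
    dist, speed = distance_speed(positions, dt)
    baseline = dist * speed          # illustrative baseline metric
    return adjusted_effort(baseline, heart_rate)
```

In a real device the biometric term would come from a sensor fusion step across the "rich data sources" the abstract mentions, not a single heart-rate sample.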
|4||US9223540B2||Electronic Device And Method For Voice Recognition Upon Charge Transfer||An electronic device and a method for recognizing a voice are provided. An operating method of the electronic device includes detecting, at least one of two or more first sensors disposed in a preset region, detecting an amount of charge transfer over a preset value, when detecting the amount of the charge transfer over the preset value, detecting, at one of two or more second sensors disposed in a preset distance from two or more microphones, an object in a preset distance; and collecting, at one of the two or more microphones, the one disposed in a preset distance from the second sensor detecting the object in the preset distance, a voice.||1. An operating method of an electronic device, the method comprising: |
detecting, by at least one first sensor a charge transfer;
when an amount of the charge transfer is greater than a preset value, detecting, by one of two or more second sensors disposed at a position adjacent to each of two or more microphones, an object in a preset distance from the electronic device; and
receiving a voice by a microphone disposed in a position adjacent to the one of the two or more second sensors detecting the object.
|Yes||-||-||-||-||Yes||-||Yes||-||-||-||-||Yes||-||-||Yes||-||-||Yes||-||-||Yes||-||-||-||-||-||-||-||-||-||-||2013-08-08||2015-12-29||2012-10-30||-||G10L002100 | G06F000316 | G10L001500 | G10L002500 | H04M0001725||G06F0003167 | H04M000172522 | H04M225012 | H04M225074||Park, Hyung-Jin||Samsung Electronics Co Ltd | Samsung Electronics Co Ltd||Samsung Electronics Co Ltd||0||US9223540B2 | AU2013213762A1 | CN103795850A | EP2728840A2 | KR2014054960A | US20140122090A1|
|5||US9219967B2||Multiuser Audiovisual Control||Various audiovisual presentation arrangements are described. In some embodiments, a headset is configured to output audio to a user. A television receiver may be configured to output a plurality of video feeds for simultaneous presentation by a display device. Each video feed of the plurality of video feeds may be displayed in a different display region of the display device. The television receiver may receive a command indicative of a video feed of the plurality of videos feeds that the user is viewing on the display device. Based on the command, the television receiver may output, to the headset, an audio feed that corresponds to the video feed the user is viewing.||1. An audiovisual control system, the audiovisual control system comprising: |
a receiving device configured to:
receive a first command selecting a first video feed of a plurality of video feeds that a first user is viewing on a display device;
based on the first command, output a first audio feed that corresponds to the first video feed the first user is viewing, wherein the first audio feed is output to a first headphone device;
receive a second command indicative of a change command from the first user corresponding to the first video feed;
determine whether a second user is viewing the first video feed;
in response to determining whether the second user is viewing the first video feed, process the change command;
receive a third command indicative of a second video feed of the plurality of video feeds that the second user is viewing on the display device; and
based on the third command, output a second audio feed that corresponds to the second video feed the second user is viewing, wherein the second audio feed is output to a second headphone device.
|Yes||Yes||Yes||Yes||-||Yes||-||-||-||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||Yes||-||-||-||-||-||-||-||2013-11-25||2015-12-22||2013-11-25||-||H04R000110 | H04R002700||H04R002700 | H04R00011041 | H04R249915||Nguyen, Phuc H. | Bruhn, Christopher William||Echostar Technologies Llc||Echostar Technologies Llc||0||US9219967B2 | US20150146879A1|
|6||US9219965B2||Body-Worn Control Apparatus For Hearing Devices||A control apparatus comprises a housing and is adapted to control a hearing device by recognizing predefined gestures made by the device wearer by moving one arm and/or or hand relative to the housing when the housing is in an operating position at or on the wearer's body. The housing comprises a reference electrode coupled capacitively to the wearer when the housing is in the operating position and a first sensor electrode. The control apparatus further comprises: a first signal generator to provide a first electric probe signal between the first sensor electrode and the reference electrode; a first measurement circuit to determine first signal values in dependence on the impedance between the first sensor electrode and the reference electrode; a detector to recognize gestures in dependence on the first signal values; and a control unit to provide control commands to the hearing device in dependence on recognized gestures.||1. A control apparatus comprising |
a housing and adapted to control a hearing device in dependence on recognising predefined gestures made by a wearer of the hearing device by moving one of his or her arms and/or the hand of said arm relative to the housing when the housing is in an operating position at or on the wearer's body, the housing comprising
a reference electrode arranged to couple capacitively to a body area of the wearer when the housing is in the operating position and
a first sensor electrode, the control apparatus further comprising:
a first signal generator adapted to provide a first electric probe signal between the first sensor electrode and the reference electrode;
a first measurement circuit adapted to determine first signal values in dependence on the impedance between the first sensor electrode and the reference electrode;
a detector adapted to recognise said gestures in dependence on the first signal values; and
a control unit adapted to provide control commands to the hearing device in dependence on recognised gestures, wherein
the first signal generator is adapted to provide the electric probe signal at multiple signal frequencies;
the first measurement circuit is adapted to determine the first signal values at multiple signal frequencies; and
the detector is adapted to recognise said gestures in dependence on changes in ratios between the first signal values determined at different signal frequencies.
|Yes||-||-||-||-||-||-||-||-||Yes||-||-||-||-||Yes||-||-||-||-||Yes||-||-||-||-||-||Yes||-||-||-||-||Yes||-||2013-11-06||2015-12-22||2012-11-07||-||H04R002500 | G08C001910 | H04B000500||H04R002555 | H04R0025453 | H04R0025558||Rasmussen, Karsten Bo | Hauschultz, Lars Ivar||Oticon As||Oticon As||0||US9219965B2 | CN103813250A | EP2731356A1 | US20140126759A1|
|7||US9219961B2||Information Processing System, Computer-Readable Non-Transitory Storage Medium Having Stored Therein Information Processing Program, Information Processing Control Method, And Information Processing Apparatus||In an exemplary information processing system including a plurality of sound output sections, the positional relationship among the plurality of sound output sections is recognized. In addition, a sound corresponding to a sound source object present in a virtual space is generated. The output volume of the sound for the sound source object is determined, for each sound output section, in accordance with the positional relationship among the plurality of sound output sections, and the generated sound is outputted in accordance with the output volume.||1. An information processing system including a processor system including at least one processor and a plurality of sound output sections, the processor system being configured to at least: |
recognize the positional relationship among the plurality of sound output sections;
generate a sound corresponding to a sound source object present in a virtual space, based on predetermined information processing; and
cause each of the plurality of sound output sections to output the generated sound therefrom, and determine, for each of the plurality of sound output sections, the output volume of the sound corresponding to the sound source object in accordance with the positional relationship among the plurality of sound output sections.
|Yes||Yes||-||-||-||Yes||-||-||-||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||Yes||-||-||-||-||-||-||-||2013-04-22||2015-12-22||2012-10-23||-||H04R000502 | H04S000700||H04R000502 | H04S0007303 | H04S0007304 | H04S240011 | H04S240013 | H04S240015||Osada, Junya||Nintendo Co Ltd||Nintendo Co Ltd||0||US9219961B2 | JP2014083205A | US20140112505A1|
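The US9219961B2 claim above sets each sound output section's volume for a virtual sound-source object according to the positional relationship among the output sections. A short Python sketch of that idea follows; the inverse-distance law and the clamp to a maximum volume are illustrative assumptions, not the claimed algorithm.

```python
import math

def output_volumes(source_pos, speaker_positions, max_vol=1.0):
    # For each output section, derive a volume from its distance to the
    # virtual sound source: nearer sections play the source louder,
    # clamped at max_vol; a co-located section gets full volume.
    vols = []
    for sp in speaker_positions:
        d = math.dist(source_pos, sp)
        vols.append(min(max_vol, 1.0 / d) if d > 0 else max_vol)
    return vols
```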
|8||US9219957B2||Sound Pressure Level Limiting||Limiting the sound pressure level presented to the listener's ears by one or more headphones, using processing capabilities of a personal media device. Headphones, coupled to audio signals from a personal media device, include a sensor to measure the sound pressure level presented to the listener's ears, and provide that measure to the personal media device. The personal media device, optionally aided by one or more analog circuits, adjusts the audio signal so that the sound pressure level is maintained within a recommended range.||1. A method, including: |
measuring a sound pressure level next to a listener's ear;
comparing said sound pressure level with a pre-selected value; and
adjusting an audio signal emitted into said listener's ear, in response to a result of said comparing;
wherein said measuring includes obtaining a first sound pressure level next to said listener's first ear, and separately obtaining a second sound pressure level next to said listener's second ear; and
combining said first sound pressure level and said second sound pressure level;
wherein said adjusting is performed at one or more of said listener's ears, in response to a result of said combining.
|Yes||Yes||-||-||-||-||-||-||Yes||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||-||Yes||-||-||-||-||-||2013-03-12||2015-12-22||2012-03-30||-||H03G000320 | H03G000332 | H04R000110 | H04R000300||H04R0003002 | H03G000332 | H04R00011041 | H04R0003007 | H04R242007||Schul, Eran | Hogue, Douglas K. | Olson, Alan | Bruss, John||Imation Corp||Imation Corp||0||US9219957B2 | US20130259241A1|
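The US9219957B2 claim above senses the sound pressure level at each of the listener's ears, combines the two readings, and adjusts the audio signal when the result exceeds a pre-selected value. A hedged Python sketch of that control step follows; taking the maximum of the two ears and attenuating in 3 dB steps are our assumptions, not the patented combination.

```python
def limit_spl(gain_db, left_spl_db, right_spl_db,
              limit_db=85.0, step_db=3.0):
    # Combine the per-ear SPL measurements (worst case of the two) and
    # reduce the playback gain whenever the combined level exceeds the
    # pre-selected limit.
    combined = max(left_spl_db, right_spl_db)
    if combined > limit_db:
        gain_db -= step_db
    return gain_db
```

Run once per measurement frame, this converges the output toward the recommended range rather than clipping it instantly, which matches the abstract's "maintained within a recommended range" framing.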
|9||US9213861B2||Mobile Communication System||The mobile communication device is for use as a cell phone, as a wireless identity authentication device with other electronic devices (with cell phones, computers, and ATM's), and as a headset in the form of an earphone, an eye-covering, or a head covering for audio communication with a central processor, another mobile terminal a cell phone, or a pda. The mobile communication device is hands-free being worn on or near the face, and only requires a finger touching for bimodal identity authentication. An audio receiver is compatible with the ear of the user and a microphone transmits words spoken by the user, electronically therethrough. A fingerprint sensor is mounted and positioned within the device. When user authentication is required, the user is prompted to touch the fingerprint sensor, and said fingerprint data is compared with fingerprint images of authorized users. In another aspect of the invention, mobile communication device is an eye-covering, a head covering, or an identification badge including a fingerprint sensor and a processor and is used for wireless authentication of the user.||1. A method for accessing a central processor by means of a wearable computer for gaining physical access, financial access, and data access as approved by an issuing authority, said method comprising: |
a. receiving a user request at a processor remote from said wearable computer for physical access into a secure area or for access or entry of secure data or for financial access to purchase goods or services at a terminal;
b. determining at a processing computer remote from said wearable computer if said wearable computer has been authorized for purpose of said user request by said issuing authority;
c. prompting said wearable computer from a prompting processor remote from said wearable computer to submit fingerprint data to gain said physical access or said data access or said financial access;
d. receiving user sensed fingerprint data submitted from said wearable computer, said receiving occurring in a processing computer remote from said wearable computer, said wearable computer enabling said user to have both hands free for said physical, financial and data access request except when submitting said fingerprint data, reference fingerprint data having been previously registered to authenticate user identity;
e. comparing said sensed fingerprint data submitted through said wearable computer with said reference fingerprint in a comparing processor, said comparing processor being remote from said wearable computer;
f. approving said user request to said physical access to said secure area and said data access if said user is authorized by said issuing authority, authentication of user identity being made at least in part based upon a comparison of said sensed fingerprint data with reference fingerprint data by an authorizing processor remote from said wearable computer; and
g. approving said user request for said financial access if said user is authorized by said issuing authority and an account balance has not been exceeded, authentication of user identity being made at least in part based upon a comparison of said sensed fingerprint data with reference fingerprint data by an authorizing processor remote from said wearable computer.
|Yes||-||-||Yes||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||Yes||-||Yes||Yes||-||-||-||Yes||-||-||-||-||-||-||2013-05-30||2015-12-15||2006-03-20||-||H04M000166 | G06F002162 | H04M000105 | H04M000160||G06F00216245 | H04M000105 | H04M000166 | H04M00016066 | H04M225012 | H04M225074||Black, Gerald R. | Black, Alyssa S.||Black Gerald R | Black Alyssa S||-||0||US9213861B2 | CA2647194A1 | US20100075631A1 | US20100311390A9 | US20130263284A1 | WO2008008101A2 | WO2008008101A3|
|10||US9211069B2||Personal Protective Equipment With Integrated Physiological Monitoring||Embodiments may comprise personal protective equipment with integrated physiological monitoring. Some embodiments may relate specifically to in-ear devices (such as hearing protection and/or communication devices) having one or more physiological sensors for early monitoring for heat related illnesses. Several embodiments may incorporate a temperature sensor and a speaker into such in-ear device.||1. A device comprising: |
an earpiece for use in a user's ear having a sealing ear tip;
at least one temperature sensor;
a speaker having a face;
one or more waveguides;
the earpiece has sufficient length and flexibility so that when in place in the user's ear it comfortably extends forward past at least a first bend of the user's ear canal;
the sealing tip is sufficiently pliable to form a good seal in the user's ear canal;
the temperature sensor comprises an IR sensor having a face, and the one or more waveguides comprise an IR waveguide;
the IR waveguide comprises an elongate hollow tube having an inner surface that is substantially reflective of IR which extends from the face of the IR sensor forward so that, when in place in the user's ear, the IR waveguide allows the IR sensor to detect temperature in the ear canal;
the one or more waveguides further comprise a sound waveguide;
the sound waveguide comprises an elongate hollow tube extending from the speaker face forward so that, when in place in the user's ear, the sound waveguide directs sound produced by the speaker into the user's ear canal at a point past the sealing ear tip;
the sound waveguide comprises an inner surface that is substantially sound reflective;
the sound waveguide and the IR waveguide are separate and apart waveguides offset side-by-side;
the earpiece further comprises a main body, for housing the speaker and the temperature sensor, and a stem; wherein:
the stem is elongate and has a front and a rear;
the rear of the stem is securely attached to the main body; and
the one or more waveguides span the length of the stem;
the speaker is laterally offset from the stem, with the speaker face angled with respect to a centerline of the stem so that the speaker face is not directly pointed towards the stem along a line parallel to the centerline of the stem;
the temperature sensor is laterally offset from the stem, with the temperature sensor face angled with respect to a centerline of the stem so that the temperature sensor face is not directly pointed towards the stem along a line parallel to the centerline of the stem; and
the IR waveguide and the sound waveguide extend essentially parallel to each other for most of their lengths, with only a rear portion of the sound waveguide curving to orient with the angled, offset face of the speaker and only a rear portion of the IR waveguide curving to orient with the angled, offset face of the temperature sensor.
|Yes||-||-||-||-||-||-||Yes||-||-||-||-||-||Yes||Yes||-||-||-||-||Yes||Yes||Yes||-||-||-||-||-||-||-||-||-||-||2012-02-17||2015-12-15||2012-02-17||-||A61B000500 | A61B000501||A61B000501 | A61B00056817||Larsen, Christopher Scott | Padmanabhan, Aravind | Humphrey, Christopher | Muggleton, Neal||Larsen Christopher Scott | Padmanabhan Aravind | Humphrey Christopher | Muggleton Neal | Honeywell Int Inc||Honeywell Int Inc||0||US9211069B2 | US20130218022A1|
|11||US9208773B2||Headset Noise-Based Pulsed Attenuation||A headset having a talk-through microphones incorporates an audio circuit that compresses a signal representing sounds detected by the talk-through microphones in response to the audio circuit detecting the onset of a peak (positive and/or negative) in the signal that exceeds a predetermined voltage level (positive and/or negative voltage level, perhaps a predetermined magnitude of voltage from a zero voltage level), and that does so with a rate of change in voltage level that exceeds a predetermined rate of change in voltage level, the degree of compression possibly being a compression to or near a zero amplitude (perhaps to or near a zero voltage level) and the duration of the compression possibly being controlled by a timing circuit set to a predetermined period of time that may be retriggerable while amidst the predetermined period of time.||1. A method of controlling sounds acoustically output by an acoustic driver disposed within a casing of an earpiece of a headset, the method comprising: |
compressing a signal representing sounds detected by a microphone of the headset that is acoustically coupled to the environment external to the casing in response to detecting an onset of a peak in the signal that exceeds a predetermined voltage level and that has a rate of change in voltage level that exceeds a predetermined rate of change, and
reducing a gain of the signal in response to detecting speech sounds of a user of the headset detected by a noise-canceling communications microphone that is disposed on the headset towards the vicinity of the user's mouth.
|Yes||-||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||Yes||-||-||Yes||-||-||-||-||-||-||-||-||-||-||2012-03-01||2015-12-08||2011-12-23||-||G10K0011178 | H04R000110 | H04R0005033||H04R0003002 | G10K00111782 | H04R000110 | H04R00011083 | H04R00011041 | H04R0005033 | H04R2201107 | H04R242007 | H04R246001||Yamkovoy, Paul G.||Yamkovoy Paul G | Bose Corp||Bose Corp||0||US9208773B2 | CN104012110A | CN104221397A | EP2795921A1 | EP2795921B1 | EP2820860A1 | JP2015513855A | US20130163775A1 | US20130163776A1 | US20150245136A1 | US9208772B2 | WO2013095839A1 | WO2013130463A1|
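The peak-gating behavior described in the US9208773B2 abstract — compress the talk-through signal when a transient exceeds both a level threshold and a slew-rate threshold, for a retriggerable hold period — can be sketched as follows. This is a minimal illustration only; the function name, threshold values, and the hard gate to zero output are assumptions, not taken from the patent.

```python
def pulsed_attenuation(samples, level_thresh, slew_thresh, hold):
    """Gate a talk-through signal when a loud, fast transient is detected.

    A sample triggers compression when its magnitude exceeds level_thresh
    AND its change from the previous sample exceeds slew_thresh; the gate
    then outputs near-zero for `hold` samples and is retriggerable, as in
    the claimed timing circuit. All parameter values are illustrative.
    """
    out = []
    timer = 0
    prev = 0.0
    for s in samples:
        if abs(s) > level_thresh and abs(s - prev) > slew_thresh:
            timer = hold          # (re)start the retriggerable hold timer
        out.append(0.0 if timer > 0 else s)
        if timer > 0:
            timer -= 1
        prev = s
    return out
```

In the sketch, any qualifying sample restarts the hold timer, so a burst of transients extends the attenuation period rather than ending it early.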
|12||US9208769B2||Hybrid Adaptive Headphone||An adaptive noise-cancelling headphone including an earcup housing having a driver for outputting sound to a user positioned therein. The headphone further including an active noise control assembly. The active noise control assembly may include an ambient microphone capable of detecting an ambient noise outside of the housing and an error microphone capable of detecting an earcup noise inside of the housing. Based on the detected noise, active noise cancellation within the headphone is either enabled or disabled. The headphone may further include a passive noise control assembly. The passive noise control assembly may include an acoustic valve associated with an acoustic vent formed within the earcup housing. The acoustic valve is capable of being modified between an open configuration to decrease sound attenuation and a closed configuration to increase sound attenuation in response to the detected ambient noise so as to improve an acoustic performance of the earcup.||1. An adaptive noise-cancelling headphone comprising: |
an earcup comprising an earcup housing having a front portion defining an inner chamber dimensioned to encircle a user's ear, a back portion defining an outer chamber and a mid wall separating the inner chamber from the outer chamber;
a driver positioned within the mid wall for outputting sound to the inner chamber and in a direction of a user's ear;
an active noise control assembly integrated with the earcup housing, the active noise control assembly having an ambient microphone operable to detect an ambient sound outside of the earcup housing and an error microphone operable to detect an earcup sound inside of the earcup housing; and
a passive noise control assembly integrated with the earcup housing, the passive noise control assembly having an acoustic valve associated with an acoustic vent that opens to the outer chamber, the acoustic valve operable to be modified between an open configuration to decrease ambient sound attenuation within the earcup housing and a closed configuration to increase ambient sound attenuation within the earcup housing in response to the detected ambient sound.
|Yes||Yes||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||Yes||-||-||Yes||-||-||-||-||-||-||-||-||-||-||2012-12-18||2015-12-08||2012-12-18||-||G10K001100 | G10K001116 | G10K0011178||G10K001116 | G10K0011178 | G10K22101081 | G10K22103026||Azmi, Yacine||Apple Inc||Apple Inc||0||US9208769B2 | US20140169579A1|
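The hybrid control idea in US9208769B2 — enable or disable active noise cancellation from the detected ambient level, and open or close the acoustic valve to trade sound attenuation — reduces to a simple threshold policy. The sketch below is an assumed illustration; the two dB thresholds and the single-level decision rule are not specified by the patent.

```python
def hybrid_control(ambient_db, anc_on_db=65.0, valve_close_db=75.0):
    """Illustrative control policy for an adaptive headphone:
    enable ANC above a moderate ambient level, and close the acoustic
    valve (maximizing passive attenuation) above a higher level.
    Threshold values are assumptions, not from the patent."""
    anc_enabled = ambient_db >= anc_on_db
    valve = "closed" if ambient_db >= valve_close_db else "open"
    return anc_enabled, valve
```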
|13||US9198585B2||Mobile Terminal And Method Of Measuring Bioelectric Signals Thereof||A mobile terminal and a method of measuring a bioelectric signal thereof are provided. When the mobile terminal enters a call mode, a user's pulse wave data are acquired using a plurality of electrodes provided in a body of the mobile terminal or a body of an earphone.||1. A mobile terminal comprising: |
a proximity sensor disposed at a surface of the mobile terminal and configured to detect an object approaching the surface of the mobile terminal;
a plurality of electrodes disposed at the surface of the mobile terminal;
a pulse wave sensing unit configured to obtain a pulse wave signal through the plurality of electrodes; and
a controller configured to:
provide a control signal, for activating the pulse wave sensing unit, to the pulse wave sensing unit when the mobile terminal is in a call mode and the object is detected through the proximity sensor;
control the pulse wave sensing unit to obtain the pulse wave signal when the pulse wave sensing unit receives the control signal;
acquire at least one of a pulse wave data, a heart rate, and a heartbeat cycle based on the pulse wave signal;
determine whether a user's health state is abnormal considering that the acquired at least one of the pulse wave data, the heart rate, and the heartbeat cycle is deviated from a first preset reference,
output a notification for warning abnormality of the user's health state through the mobile terminal and transmit the notification to a call party, when the user's health state is abnormal;
transmit the user's health state to a preset another party using a phone number and an e-mail address of the preset another party when a preset cycle arrives;
store position information of the mobile terminal and the user's health state coupled to the position information;
recommend or provide contents of a specific kind of a help for the user's stability or relaxation, when the user's health state is abnormal and the call mode is terminated;
recommend specific restaurant position information related to a stored good health state of the user when the call mode is terminated and an application for searching restaurant position information is executed;
determine a user's excitement based on the acquired pulse wave data;
output an alarm for warning that the user is in an excited state, when the determined excitement is equal to or greater than a preset level;
and terminate the call mode when the determined excitement is equal to or greater than the preset level;
wherein the proximity sensor is activated when the mobile terminal is in the call mode.
|Yes||Yes||-||-||-||-||-||-||-||-||-||-||Yes||Yes||-||Yes||-||-||-||-||-||Yes||-||-||-||-||-||-||-||-||-||-||2012-02-08||2015-12-01||2011-06-29||-||A61B0005024 | A61B000500 | A61B00050245 | A61B000516 | H04M0001725 | H04M000160||A61B000502438 | A61B00050245 | A61B0005165 | A61B00056898 | H04M000172569 | H04M00016058||Lim, Gukchan | Park, Sangmo | Kim, Seonghyok | Lee, Seehyung||Lim Gukchan | Park Sangmo | Kim Seonghyok | Lee Seehyung | Lg Electronics Inc||Lg Electronics Inc||0||US9198585B2 | CN102846314A | CN102866843A | EP2540220A1 | EP2540221A1 | KR2013007117A | KR2013028570A | KR2013055729A | US20130005303A1 | US20130005310A1 | US20150312669A1 | US9089270B2|
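The gating logic claimed in US9198585B2 — activate pulse-wave sensing only when the terminal is in call mode and the proximity sensor detects an object, then compare the acquired heart rate against a preset reference — can be outlined as below. The reference band values and the function shape are hypothetical; the claim only requires comparison to "a first preset reference".

```python
def call_mode_monitor(call_mode, proximity, heart_rate, low=50, high=110):
    """Sketch of the claimed control flow: sensing is activated only in
    call mode with the proximity sensor triggered; the acquired heart
    rate is then checked against a preset band (band limits are assumed
    values). Returns (sensing_active, abnormal)."""
    sensing = call_mode and proximity
    abnormal = sensing and not (low <= heart_rate <= high)
    return sensing, abnormal
```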
|14||US9196261B2||Voice Activity Detector (Vad)—Based Multiple-Microphone Acoustic Noise Suppression||Acoustic noise suppression is provided in multiple-microphone systems using Voice Activity Detectors (VAD). A host system receives acoustic signals via multiple microphones. The system also receives information on the vibration of human tissue associated with human voicing activity via the VAD. In response, the system generates a transfer function representative of the received acoustic signals upon determining that voicing information is absent from the received acoustic signals during at least one specified period of time. The system removes noise from the received acoustic signals using the transfer function, thereby producing a denoised acoustic data stream.||1. A method for removing noise from acoustic signals, comprising: |
receiving from a plurality of microphones, a plurality of acoustic signals;
receiving information on a vibration of human tissue associated with human voicing activity from a tissue vibration detector in physical contact with the human tissue, the tissue vibration detector comprises a skin surface microphone (SSM) of a voice activity detector (VAD) device included in a wireless earpiece or a wireless headset, the SSM including a covering operative to change an impedance of a microphone of the SSM;
generating at least one first transfer function representative of the plurality of acoustic signals upon determining that voicing information is absent from the plurality of acoustic signals for at least one specified period of time; and
removing noise from the plurality of acoustic signals using the at least one first transfer function to produce at least one denoised acoustic data stream.
|Yes||-||-||-||-||Yes||-||-||Yes||-||-||-||Yes||-||-||-||-||-||Yes||-||Yes||Yes||-||-||Yes||-||-||-||-||-||-||-||2011-02-28||2015-11-24||2000-07-19||-||G10K001116 | G10L001520 | G10L002102 | G10L00210208 | G10L001102 | G10L001902 | G10L00210216 | G10L002578||G10L002102 | G10L00210208 | G10L00190204 | G10L002578 | G10L202102082 | G10L202102161 | G10L202102165 | G10L202102168||Burnett, Gregory C. | Breitfeller, Eric F.||Burnett Gregory C | Breitfeller Eric F | Aliphcom||Aliphcom Inc||0||US9196261B2 | AU200176955A | AU2002359445A1 | AU2003223359A1 | AU2003263733A1 | AU2003263733A8 | AU2009308442A1 | AU2011248283A1 | AU2011248297A1 | AU2011279009A1 | AU2012229071A1 | CA2416926A1 | CA2448669A1 | CA2465552A1 | CA2477767A1 | CA2479758A1 | CA2741652A1 | CA2798282A1 | CA2798512A1 | CA2804638A1 | CA2830410A1 | CN101779476A | CN101779476B | CN102282865A | CN1443349A | CN1513278A | CN1589127A | CN1643571A | CN203086710U | CN203242334U | CN203351200U | CN203435060U | CN203811527U | EP1301923A2 | EP1415505A1 | EP1480589A1 | EP1483591A2 | EP1497823A1 | EP2165564A1 | EP2165564A4 | EP2353302A1 | EP2567377A1 | EP2567553A1 | EP2594059A1 | EP2686971A2 | EP2686971A4 | JP2004509362A | JP2005503579A | JP2005520211A | JP2005522078A | JP2005529379A | JP2011203755A | JP2013178570A | KR1402551B1 | KR1434071B1 | KR2003076560A | KR2004030638A | KR2004077661A | KR2004096662A | KR2004101373A | KR2011008333A | KR2011025853A | KR2012081639A | KR2012091454A | KR936093B1 | KR992656B1 | TW200304119A | TW200305854A | TW200425763A | TWI281354B | US20020039425A1 | US20020099541A1 | US20020198705A1 | US20030128848A1 | US20030179888A1 | US20030228023A1 | US20040133421A1 | US20040249633A1 | US20070233479A1 | US20090003623A1 | US20090003624A1 | US20090003625A1 | US20090003626A1 | US20090003640A1 | US20090010449A1 | US20090010450A1 | US20090010451A1 | US20090022350A1 | US20100128881A1 | US20100128894A1 | US20100278352A1 | US20100280824A1 | US20110026722A1 | US20110051950A1 | US20110051951A1 | US20120059648A1 
| US20120184337A1 | US20120207322A1 | US20120230511A1 | US20120230699A1 | US20120288079A1 | US20130211830A1 | US20140140524A1 | US20140140527A1 | US20140177860A1 | US20140185824A1 | US20140185825A1 | US20140188467A1 | US20140286519A1 | US20140294208A1 | US20140328496A1 | US20140328497A1 | US20140372113A1 | US20150288823A1 | US20150319527A1 | US7246058B2 | US7433484B2 | US8019091B2 | US8130984B2 | US8254617B2 | US8280072B2 | US8321213B2 | US8326611B2 | US8452023B2 | US8467543B2 | US8477961B2 | US8488803B2 | US8494177B2 | US8503686B2 | US8503691B2 | US8503692B2 | US8682018B2 | US8699721B2 | US8731211B2 | US8837746B2 | US8838184B2 | US8942383B2 | US9066186B2 | US9099094B2 | WO2002007151A2 | WO2002007151A3 | WO2002098169A1 | WO2003083828A1 | WO2003096031A2 | WO2003096031A3 | WO2003096031A9 | WO2004056298A1 | WO2004068464A2 | WO2004068464A3 | WO2005029468A1 | WO2008157421A1 | WO2009003180A1 | WO2010048635A1 | WO2011002823A1 | WO2011140096A1 | WO2011140110A1 | WO2012009689A1 | WO2012125873A2 | WO2012125873A3|
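The core idea of US9196261B2 — estimate a noise transfer function between microphones during intervals the VAD flags as non-voiced, then use it to remove noise at all times — can be illustrated with a deliberately simplified scalar version. A real implementation would use adaptive frequency-domain filters; the single least-squares gain here is an assumption made to keep the sketch short.

```python
def denoise(primary, reference, voiced):
    """Two-microphone noise suppression sketch. During samples where the
    VAD reports no voicing (voiced[i] is False), fit a scalar transfer
    gain h from reference to primary by least squares; then subtract
    h * reference everywhere. The scalar h stands in for the patent's
    transfer function, which would in practice be frequency dependent."""
    num = sum(p * r for p, r, v in zip(primary, reference, voiced) if not v)
    den = sum(r * r for p, r, v in zip(primary, reference, voiced) if not v)
    h = num / den if den else 0.0
    return [p - h * r for p, r in zip(primary, reference)]
```

With noise-only frames [1.0, 1.0] against reference [2.0, 2.0], the fitted gain is 0.5, and speech present in later frames passes through with the correlated noise removed.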
|15||US9191744B2||Intelligent Ambient Sound Monitoring System||A system and method for interjecting ambient background sounds into a set of headphones is provided. The system monitors an ambient sound environment and compares the ambient sound environment to a preset set of sound characteristics (e.g., frequency signatures, amplitudes and durations) in order to detect important or critical background sounds (e.g., alarm, horn, directed vocal communications, crying baby, doorbell, telephone, etc.). When a critical background sound is detected, the system interjects either a notification signal or a portion of the ambient background into the audio stream, thus alerting a user of a potentially important sound or event occurring within their immediate vicinity.||1. An ambient sound monitoring system, comprising: |
a microphone, said microphone monitoring an ambient sound environment;
a set of headphones; and
a processor, said processor receiving a microphone output from said microphone, wherein said processor compares said microphone output to a preset set of sound characteristics and identifies critical background sounds within said ambient sound environment, said critical background sounds corresponding to a match between said microphone output and said preset set of sound characteristics, wherein said processor outputs an audio notification to said set of headphones only when said critical background sounds are identified, wherein said preset set of sound characteristics comprises at least one frequency signature, and wherein said audio notification is selected from the group consisting of an alarm signal and at least a portion of said ambient sound environment.
|-||Yes||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||Yes||-||-||Yes||-||-||-||-||-||-||-||-||-||-||2012-08-09||2015-11-17||2012-08-09||-||H03G000320 | H04R000110 | H04R000504||H04R000504 | H04R00011083 | H04R242001 | H04R246001||Anderson, Jeffrey Steven||Anderson Jeffrey Steven | Logitech Europ Sa||Logitech International S.A.||1||US9191744B2 | CN103581803A | DE102013211056A1 | US20140044269A1|
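US9191744B2 matches the ambient microphone output against preset sound characteristics such as frequency signatures. A crude single-feature version — compare the dominant frequency of a block against a table of signature frequencies — is sketched below. The DFT peak picker, tolerance, and signature table are all illustrative assumptions; the patent also contemplates amplitude and duration features.

```python
import math

def dominant_freq(samples, fs):
    """Crude dominant-frequency estimate: brute-force DFT magnitude peak."""
    n = len(samples)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n

def detect_critical(samples, fs, signatures, tol=25.0):
    """Return the name of a preset signature (in Hz) whose frequency lies
    within `tol` Hz of the block's dominant frequency, else None."""
    f = dominant_freq(samples, fs)
    for name, sig_hz in signatures.items():
        if abs(f - sig_hz) <= tol:
            return name
    return None
```

On a match, the system would interject the alarm signal or a portion of the ambient audio into the headphone stream, per the claim.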
|16||US9191733B2||Headphone Apparatus And Sound Reproduction Method For The Same||A headphone apparatus includes sound reproduction units which respectively reproduce sound signals and are arranged so as to be separated from ear auricles of a headphone user, wherein each of the sound reproduction unit is configured by a speaker array including a plurality of speakers.||1. A headphone apparatus comprising: |
sound reproduction units which respectively reproduce sound signals and are arranged so as to be separated from ear auricles of a headphone user; and
a head motion detecting unit which detects a state of a head of the headphone user,
wherein each of the sound reproduction units is configured by a speaker array including a plurality of speakers, and
wherein an orientation of a sound image formed by the reproduced sound signals is controlled, based on the detected state of the head of the headphone user in relation to a location of an object or a visual content that is associated with the reproduced sound signals and that is being viewed by the headphone user.
|-||Yes||-||-||-||Yes||Yes||-||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||Yes||-||-||-||-||-||-||-||2012-02-16||2015-11-17||2011-02-25||-||H04S000700 | H04R000110 | H04R000140 | H04R000312||H04R00011091 | H04R000140 | H04R000312 | H04R243020 | H04S000730 | H04S240011||Yamada, Yuuji | Kon, Homare||Yamada Yuuji | Kon Homare | Sony Corp||Sony Corp||0||US9191733B2 | CN102651831A | EP2493211A2 | EP2493211A3 | EP2493211B1 | JP05716451B2 | JP2012178748A | KR2012098429A | US20120219165A1|
|17||US9190071B2||Noise Suppression Device, System, And Method||A noise-suppression assembly of a mechanical drive system having a rotational frequency includes an audio filter unit configured to receive a first audio signal and a timing signal of the mechanical drive system. The audio filter unit generates a noise-cancellation signal based on a frequency of the timing signal to suppress a noise generated by the mechanical drive system and to apply the noise-cancellation signal to the first audio signal to produce a filtered first audio signal. The frequency of the timing signal is based on the rotational frequency of the mechanical drive system.||1. A noise-suppression assembly of a mechanical drive system having a rotational frequency, the mechanical drive system including a rotor of a helicopter, the assembly comprising: |
an audio filter unit configured to receive a first audio signal and a timing signal of the mechanical drive system, the audio filter unit configured to generate a noise-cancellation signal based on a frequency of the timing signal, said frequency based on the rotational frequency of the rotor, to suppress a noise generated by the mechanical drive system and to apply the noise-cancellation signal to the first audio signal to produce a filtered first audio signal, the frequency based on a signal obtained from at least one sensor located on the rotor, wherein the sensor is a proximity sensor configured to detect the position of the rotor relative to a fixed position.
|Yes||Yes||-||-||-||-||-||Yes||-||-||-||-||Yes||-||-||Yes||-||-||Yes||-||-||Yes||-||-||-||-||-||-||-||-||-||-||2012-09-14||2015-11-17||2012-09-14||-||G10K0011178 | G10L00210208 | G10L00210216||G10L00210208 | G10K22101081 | G10K2210121 | G10K2210128 | G10K22101281 | G10L00210216 | G10L202102085||Butts, Donald J. | Welsh, William A. | Millott, Thomas A. | Drost, Stuart K.||Butts Donald J | Welsh William A | Millott Thomas A | Drost Stuart K | Sikorsky Aircraft Corp||Sikorsky Aircraft Corp||0||US9190071B2 | US20140079234A1|
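US9190071B2 derives its noise-cancellation signal from a timing signal locked to the rotor's rotational frequency. Since rotor noise is dominated by tones at the blade-pass frequency, a minimal illustration is to fit and subtract a sinusoid at that frequency. The blade count, block-wise least-squares fit, and function name are assumptions; the patent describes a filter unit, not this specific method.

```python
import math

def cancel_tone(signal, fs, rotor_hz, blades=4):
    """Subtract a fitted sinusoid at the blade-pass frequency
    (rotor_hz * blades) from an audio block. Amplitude and phase are
    fitted by least squares over the block, which is exact when the
    block spans an integer number of tone cycles."""
    f = rotor_hz * blades
    n = len(signal)
    c = [math.cos(2 * math.pi * f * i / fs) for i in range(n)]
    s = [math.sin(2 * math.pi * f * i / fs) for i in range(n)]
    a = 2 / n * sum(x * ci for x, ci in zip(signal, c))
    b = 2 / n * sum(x * si for x, si in zip(signal, s))
    return [x - a * ci - b * si for x, ci, si in zip(signal, c, s)]
```

Because the timing signal comes from a proximity sensor on the rotor, the tone frequency tracks the drive system exactly rather than being estimated from the audio.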
|18||US9190043B2||Assisting Conversation In Noisy Environments||A portable system for enhancing communication between at least two users in proximity to each other includes first and second noise-reducing headsets, each headset including an electroacoustic transducer for providing sound to a respective user's ear and a voice microphone for detecting sound of the respective user's voice and providing a microphone input signal. A first electronic device integral to the first headset and in communication with the second headset generates a first side-tone signal based on the microphone input signal from the first headset, generates a first voice output signal based on the microphone input signal from the first headset, combines the first side-tone signal with a first far-end voice signal associated with the second headset to generate a first combined output signal, and provides the first combined output signal to the first headset for output by the first headset's electroacoustic transducer.||1. A portable system for enhancing communication between at least two users in proximity to each other, comprising: |
first and second noise-reducing headsets, each headset comprising:
an electroacoustic transducer for providing sound to a respective user's ear, and
a voice microphone for detecting sound of the respective user's voice and providing a microphone input signal; and
a first electronic device integral to the first headset and in communication with the second headset, configured to:
generate a first side-tone signal based on the microphone input signal from the first headset,
generate a first voice output signal based on the microphone input signal from the first headset,
receive a first far-end voice signal from the second headset,
combine the first side-tone signal with the first far-end voice signal to generate a first combined output signal, and
provide the first combined output signal to the first headset for output by the first headset's electroacoustic transducer,
wherein the first and second headsets each include a noise cancellation circuit including a noise cancellation microphone for providing anti-noise signals to the respective electroacoustic transducer based on the noise cancellation microphone's output, and
the first electronic device is configured to provide the first combined output signal to the first headset for output by the first headset's electroacoustic transducer in combination with the anti-noise signals provided by the first headset's noise cancellation circuit.
|Yes||Yes||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||Yes||-||-||Yes||-||-||-||-||-||-||-||-||-||-||2013-08-27||2015-11-17||2013-08-27||-||H04R000110 | G10K001100 | H04R000300||G10K0011002 | H04R00011083 | H04R00011091 | H04R0003005||Krisch, Kathleen S. | Isabelle, Steven H.||Bose Corp||Bose Corp||0||US9190043B2 | US20150063584A1 | WO2015031004A1|
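The signal combination claimed in US9190043B2 — mix a side-tone derived from the user's own voice microphone with the far-end voice signal from the other headset — amounts to a weighted sum per sample. The side-tone gain below is an assumed value; the patent does not fix one.

```python
def combined_output(own_mic, far_end, sidetone_gain=0.2):
    """Mix an attenuated side-tone of the user's own voice with the
    far-end voice signal, producing the combined output fed to the
    electroacoustic transducer. The gain value is an assumption."""
    return [sidetone_gain * m + f for m, f in zip(own_mic, far_end)]
```

In the claimed system this combined signal is further summed with the anti-noise from the headset's noise cancellation circuit before reaching the transducer.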
|19||US9186277B2||External Ear Canal Pressure Regulation System||An external ear canal pressure regulation device including a fluid flow generator and an earpiece having a first axial earpiece conduit fluidicly coupled to the fluid flow generator, whereby the earpiece has a compliant earpiece external surface configured to sealably engage an external ear canal as a barrier between an external ear canal pressure and an ambient pressure.||1. An external ear canal pressure regulation device comprising: |
a first fluid flow generator capable of generating a first fluid flow;
a first earpiece having a first earpiece axial conduit which communicates between first earpiece first and second ends, said first earpiece axial conduit fluidicly coupled to said first fluid flow generator, said first earpiece having a first earpiece compliant external surface configured to sealably engage a first external ear canal of a first ear as a first barrier between a first external ear canal pressure and an ambient pressure;
said first fluid flow generator capable of generating a first pressure differential between said first external ear canal pressure and said ambient pressure, said first pressure differential comprising a first pressure differential amplitude;
a first pressure sensor which generates a first pressure sensor signal which varies based upon change in said first pressure differential; and
a first pressure sensor signal analyzer comprising:
a first pressure differential amplitude comparator which compares a pre-selected first pressure differential amplitude to said first pressure differential amplitude, said first pressure sensor signal analyzer generating a first pressure differential amplitude compensation signal to which a first fluid flow generator controller is responsive to control said first fluid flow generator to achieve said pre-selected first pressure differential amplitude.
|-||Yes||-||-||Yes||-||-||-||-||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||-||Yes||-||-||-||-||-||2015-05-01||2015-11-17||2013-06-28||-||A61F001100 | A61F001112 | H04R000142||A61F001112 | H04R000142||George, David | Buckler, George | Sullivan, David Brice||Gbs Ventures Llc||Gbs Ventures Partner Ltd||0||US9186277B2 | AU2014302187A1 | CA2894410A1 | CA2915821A1 | TW201517884A | US20150000678A1 | US20150003644A1 | US20150230989A1 | US9039639B2 | WO2014210457A1 | WO2014210457A4 | WO2015009421A1|
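The control loop in US9186277B2 — a comparator between the pre-selected pressure differential and the sensed differential, generating a compensation signal that drives the fluid flow generator — behaves like a simple proportional regulator. The sketch below simulates that loop against an idealized first-order response; the gain, plant model, and step count are all illustrative assumptions.

```python
def regulate(setpoint, pressure=0.0, gain=0.5, steps=20):
    """Proportional-control sketch of the claimed comparator loop: each
    step, the error between the pre-selected differential (setpoint) and
    the sensed differential drives a corrective fluid flow, modeled here
    as adding gain * error directly to the canal pressure."""
    for _ in range(steps):
        pressure += gain * (setpoint - pressure)
    return pressure
```

With a gain of 0.5 the error halves each step, so the sensed differential converges to the pre-selected amplitude within a few iterations.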
|20||US9186071B2||Unlocking A Body Area Network||Disclosed is an apparatus, system, and method to unlock a body area network (BAN) of a patient and to transmit medical data about the patient. The BAN, under the control of a body area controller (BAC), may be unlocked based upon a pre-defined patient action performed by the patient and the BAN may then be connected to a wireless device. The BAN medical data of the patient may then be transmitted by the wireless device.||1. A method of unlocking a body area network (BAN) of a patient to transmit medical data comprising: |
unlocking the BAN based upon a pre-defined patient action performed by the patient, wherein the pre-defined patient action to unlock the BAN includes pressing against a pre-designated part of the body;
connecting the BAN to a wireless device; and
transmitting BAN medical data of the patient by the wireless device.
|-||Yes||-||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||Yes||Yes||-||-||Yes||Yes||Yes||-||-||-||-||-||-||2012-01-27||2015-11-17||2012-01-27||-||G08B000108 | A61B00050205 | A61B000500 | G06F001900||A61B00050024 | A61B00050022 | A61B000502055 | A61B0005747 | A61B00057475 | G06F00193418 | G06F0019345 | G06F00216245 | H04W001208 | A61B25600266||Moriarty, Anthony | Flanagan, Jessica M. | Mcdonald, Cameron A.||Moriarty Anthony | Flanagan Jessica M | Mcdonald Cameron A | Qualcomm Inc||Qualcomm Inc||0||US9186071B2 | CN104039217A | EP2806783A1 | JP2015510183A | KR2014128348A | US20130194092A1 | US20160015270A1 | WO2013112978A1|
|21||US9185488B2||Control Parameter Dependent Audio Signal Processing||Detection from sensors may be used to configure or modify the configuration of audio directional processing to improve user safety and/or communication by processing at least one control parameter dependent on at least one sensor input parameter, processing at least one audio signal dependent on the processed at least one control parameter, and outputting the processed at least one audio signal. |
|1. A method comprising: |
generating at least two sensor input parameters from a plurality of sensors, where the at least two sensor input parameters are different types of sensor input parameters;
generating by a control processor at least one control parameter dependent on the at least two sensor input parameters;
selecting a control parameter modifying mode by a context processor from a plurality of control parameter modifying modes, where at least one of the modes is configured to have the at least one control parameter from the control processor modified, and where the selecting of the control parameter modifying mode by the context processor is based, at least partially, upon an input from at least one of the plurality of sensors;
processing at least one audio signal dependent on the generated at least one control parameter and the selected control parameter modifying mode, wherein processing the at least one audio signal comprises beamforming the at least one audio signal; and
outputting the processed at least one audio signal associated with the selected control parameter modifying mode.
|Yes||-||-||Yes||Yes||-||-||-||-||-||-||-||Yes||-||Yes||-||-||-||Yes||-||-||Yes||-||Yes||Yes||Yes||-||Data Processing||-||-||-||Yes||2012-05-23||2015-11-10||2009-11-30||-||H04R000300||H04R000504 | H04R0001406 | H04R0003005 | H04R0005033 | H04S0001007 | H04R2201403 | H04R220312 | H04R246001 | H04S240013 | H04S240015||Karkkainen, Asta Maria | Virolainen, Jussi||Karkkainen Asta Maria | Virolainen Jussi | Nokia Technologies Oy||Nokia Corp||0||US9185488B2 | CA2781702A1 | CN102687529A | EP2508010A1 | US20120288126A1 | US20160014517A1 | WO2011063857A1|
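Claim 1 of US9185488B2 requires that processing the audio "comprises beamforming". A two-microphone delay-and-sum beamformer is the simplest concrete instance: delay one channel so that sound from the steered direction aligns across microphones, then average. The delay parameter here stands in for the sensor-derived control parameter; the implementation itself is an assumed illustration, not the patent's.

```python
def delay_and_sum(mic1, mic2, delay):
    """Two-microphone delay-and-sum beamformer sketch: delay mic2 by
    `delay` samples and average with mic1, reinforcing sound arriving
    from the direction implied by that delay. Out-of-range history is
    treated as silence."""
    out = []
    for i in range(len(mic1)):
        s2 = mic2[i - delay] if 0 <= i - delay < len(mic2) else 0.0
        out.append(0.5 * (mic1[i] + s2))
    return out
```

In the claimed system a context processor would adjust such a control parameter (e.g., the steering delay) from sensor inputs before the beamforming stage runs.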
|22||US9179237B2||Virtual Audio System Tuning||A method of virtually tuning an audio system that incorporates an acoustic compensation system, where the audio system is adapted to play audio signals in a listening environment over one or more sound transducers. The acoustic compensation system has an audio sensor located at a sensor location in the listening environment. The transfer functions from each sound transducer to the audio sensor location are inherent. The method contemplates recording noise at the sensor location, and creating virtual transfer functions from each sound transducer to the sensor location based on the inherent transfer functions from each sound transducer to the sensor location. Audio signals are processed through the virtual sound transducer to sensor location transfer functions. A virtual sensor signal is created by combining the audio signals processed through the virtual sound transducer to sensor location transfer functions with the noise recorded at the sensor location.||1. A method of virtually tuning an audio system that incorporates an acoustic compensation system, where the audio system is adapted to play audio signals in a listening environment using one or more sound transducers, the acoustic compensation system comprising an audio sensor located at a sensor location in the listening environment, wherein transfer functions from each sound transducer to the audio sensor location are inherent, and wherein there are a pair of sound evaluation locations in the listening environment at the approximate location of where the ears of a listener would be, where the sound evaluation locations are different than the sensor location, the method comprising: |
recording noise at the sensor location;
recording noise at both of the sound evaluation locations simultaneously with recording noise at the sensor location;
creating virtual transfer functions for each sound transducer to the sensor location, based on the inherent transfer functions from each sound transducer to the sensor location;
processing audio signals through the virtual sound transducer to sensor location transfer functions; and
creating a virtual sensor signal by combining the audio signals processed through the virtual sound transducer to sensor location transfer functions with the noise recorded at the sensor location.
|-||Yes||-||-||-||Yes||-||-||Yes||-||-||-||Yes||-||-||-||-||-||-||Yes||-||Yes||-||-||-||-||-||-||-||-||-||-||2011-12-16||2015-11-03||2011-12-16||-||G10K001116 | A61F001106 | G10K0011178 | H03B002900 | H04R002900 | H04S000700||H04S000700 | G10K00111788 | H04R002900 | G10K22101082 | G10K22101282 | G10K22103046 | G10K22103048 | G10K22103055 | H04R242001 | H04R249913||Pan, Davis Y. | Rabinowitz, William M. | Kim, Wontak | Greenberger, Hal||Pan Davis Y | Rabinowitz William M | Kim Wontak | Greenberger Hal | Bose Corp||Bose Corp||0||US9179237B2 | CN103988525A | EP2792167A1 | HK1198495A1 | JP2015506155A | US20130156213A1 | WO2013090007A1|
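The virtual-tuning method of US9179237B2 forms a virtual sensor signal by passing the audio for each transducer through a virtual transducer-to-sensor transfer function and adding the noise recorded at the sensor location. With the transfer functions represented as finite impulse responses, that is a sum of convolutions, sketched below; the FIR representation and helper names are assumptions.

```python
def convolve(x, h):
    """Direct-form convolution of signal x with impulse response h."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def virtual_sensor(audio_per_speaker, virtual_tfs, recorded_noise):
    """Sum each transducer's audio convolved with its virtual transfer
    function (modeled as an FIR impulse response), then add the noise
    recorded at the sensor location, yielding the virtual sensor signal."""
    n = len(recorded_noise)
    mix = list(recorded_noise)
    for x, h in zip(audio_per_speaker, virtual_tfs):
        y = convolve(x, h)
        for i in range(min(n, len(y))):
            mix[i] += y[i]
    return mix
```

This lets the compensation system be tuned against a virtual room without repeatedly replaying audio in the physical listening environment.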
|23||US9173596B1||Movement Assessment Apparatus And A Method For Providing Biofeedback Using The Same||A movement assessment apparatus configured to provide biofeedback to a user regarding one or more bodily movements executed by the user is disclosed herein. The movement assessment apparatus generally includes a sensing device comprising one or more sensors, a data processing device operatively coupled to the sensing device, and a sensory output device operatively coupled to the data processing device. The data processing device is configured to determine a movement path and/or velocity profile of the body portion of the user using one or more signals from the one or more sensors, to compare the movement path and/or the velocity profile determined for the body portion of the user to a respective baseline movement path and/or velocity profile, and to determine how closely the movement path and/or the velocity profile determined for the body portion of the user conforms to the respective baseline movement path and/or baseline velocity profile.||1. A movement assessment apparatus configured to provide biofeedback to a user regarding one or more bodily movements executed by the user, the movement assessment apparatus comprising: |
at least one sensing device, the at least one sensing device comprising one or more sensors for detecting the motion of a body portion of a user and outputting one or more signals that are generated based upon the motion of the body portion of the user, the at least one sensing device further comprising attachment means for attaching the at least one sensing device to the body portion of the user;
a data processing device operatively coupled to the at least one sensing device, the data processing device configured to receive the one or more signals that are output by the one or more sensors of the at least one sensing device, and to determine executed motion data for an executed motion of the body portion of the user using the one or more signals, the data processing device configured to automatically select a reference motion by comparing the executed motion of the body portion of the user to each of a plurality of reference motions representing a plurality of different activities, the data processing device further configured to: (i) execute an agreement operation by converting the executed motion data to a feedback-agreeing form that agrees with at least one of the dimensions, reference frames, and units of baseline motion data of the reference motion, (ii) execute a comparison operation by comparing the feedback-agreeing form of the executed motion data to the baseline motion data of the reference motion, and (iii) determine how closely the feedback-agreeing form of the executed motion data conforms to the baseline motion data of the reference motion, the data processing device additionally configured to generate an abstract feedback signal based upon the execution of the comparison operation; and
a sensory output device operatively coupled to the data processing device, the sensory output device configured to generate a formed feedback signal for delivery to the user that is based upon the abstract feedback signal, the formed feedback signal comprising at least one of a visual indicator, an audible indicator, and a tactile indicator, and the sensory output device further configured to output the at least one of the visual indicator, the audible indicator, and the tactile indicator to the user in order to provide biofeedback as to conformity of the executed motion data to the baseline motion data of the reference motion.
|Yes||-||Yes||-||-||Yes||Yes||Yes||-||-||-||-||-||Yes||Yes||-||-||Yes||Yes||-||Yes||-||Yes||Yes||Yes||Yes||-||-||-||-||Yes||Yes||2014-06-28||2015-11-03||2014-06-28||-||A61B000500 | A61B000511 | G06F001900||A61B000511 | A61B00050024 | A61B00051122 | A61B0005486 | A61B00056823 | A61B00056824 | A61B00056828 | A61B00056829 | A61B00056895 | A61B00057405 | G06F001900 | A61B00051116 | A61B00057246 | G06F001934 | A61B00051112 | A61B25600214 | A61B25620219 | A61B25620223 | A61B2562029 | A61B256206 | A61B00051126 | A61B00056803 | A61B0005742 | A61B00057455 | A61B25600242||Berme, Necip | Ober, Jan Jakub||Bertec Ltd||Stryker Corporation||0||US9173596B1|
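The reference-selection and conformity comparison described in claim 1 of US9173596B1 can be sketched as follows. This is a minimal illustration only, not the patent's implementation: the velocity profiles, activity names, and the Euclidean/absolute-error metrics are all assumptions.

```python
import math

def select_reference(executed, references):
    """Pick the reference motion whose velocity profile is closest
    to the executed motion (Euclidean distance over samples)."""
    def distance(a, b):
        n = min(len(a), len(b))
        return math.sqrt(sum((a[i] - b[i]) ** 2 for i in range(n)))
    return min(references, key=lambda name: distance(executed, references[name]))

def conformity(executed, baseline, tolerance=1.0):
    """Score in [0, 1]: 1.0 means the executed profile matches the baseline."""
    n = min(len(executed), len(baseline))
    err = sum(abs(executed[i] - baseline[i]) for i in range(n)) / n
    return max(0.0, 1.0 - err / tolerance)

# Hypothetical velocity profiles (m/s, sampled over the movement)
refs = {"squat": [0.0, 0.4, 0.8, 0.4, 0.0], "lunge": [0.0, 0.9, 1.2, 0.9, 0.0]}
executed = [0.0, 0.5, 0.8, 0.3, 0.0]
best = select_reference(executed, refs)   # automatic reference selection
score = conformity(executed, refs[best])  # drives the biofeedback signal
```

In the claim, the conformity score would feed the sensory output device (visual, audible, or tactile indicator) as the abstract feedback signal.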
|24||US9173190B2||System And Method For Controlling Paging Delay||The disclosure relates to systems and methods for controlling a delay probability distribution associated with receiving a response to a page. The method entails performing a series of page operations, wherein each page operation entails transmitting a page and scanning for a page response. The method further entails adjusting at least one timing parameter associated with performing the series of page operations based on a characteristic of one or more scans for the page performed by at least one remote device. The characteristic may be the period of periodic page scans performed by the at least one remote device.||1. A method of controlling a delay distribution associated with receiving a response to a page, comprising: |
performing a series of page operations, wherein each page operation comprises transmitting a page and scanning for a page response; and
adjusting at least one timing parameter associated with performing the series of page operations based on a characteristic of occurrences of separate scans for the page performed by at least one remote device, wherein prior to the adjusting, a timing of the page operations is based on another characteristic of occurrences of separate page scans performed by another remote device.
|Yes||-||-||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||Yes||-||Yes||-||-||-||-||-||-||-||-||-||-||-||2012-06-29||2015-10-27||2012-06-29||1001||H04B000700 | H04W006802||H04W006802||Teague, Edward Harrison | Tian, Qingjiang | Julian, David Jonathan | Jia, Zhanfeng||Teague Edward Harrison | Tian Qingjiang | Julian David Jonathan | Jia Zhanfeng | Qualcomm Inc||Qualcomm Inc||0||US9173190B2 | CN104396324A | EP2868149A1 | JP2015523809A | KR2015032566A | US20140004899A1 | WO2014005057A1|
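The timing adjustment in US9173190B2 can be illustrated with a toy model: a page is only heard if it lands inside one of the remote device's periodic scan windows, so the paging interval is adjusted relative to the scan period to control the response-delay distribution. All intervals, windows, and function names below are illustrative assumptions, not values from the patent.

```python
def response_delay(page_interval, scan_period, scan_offset, scan_window=0.01,
                   max_pages=1000):
    """Time until a page in the series lands inside one of the remote
    device's periodic scan windows (None if none ever does).
    Pages go out at k * page_interval; scans open at
    scan_offset + m * scan_period for a brief scan_window."""
    for k in range(max_pages):
        t = k * page_interval
        phase = (t - scan_offset) % scan_period
        # heard if the page falls right at the start (or end) of a scan
        if phase <= scan_window or scan_period - phase <= scan_window:
            return t
    return None

# Paging at exactly the scan period keeps the same (unlucky) phase forever;
# a slight interval offset sweeps the phase until a scan catches a page.
unlucky = response_delay(page_interval=1.28, scan_period=1.28, scan_offset=0.5)
robust = response_delay(page_interval=1.30, scan_period=1.28, scan_offset=0.5)
```

This is the intuition behind basing the page-operation timing on the remote device's scan period: the adjusted interval bounds the worst-case delay regardless of the unknown scan phase.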
|25||US9173074B2||Personal Hub Presence And Response||Methods, devices, and systems for transmitting convenient messages to a recipient for rendering based on the recipient's device availabilities. A recipient's mobile device may be connected to a personal hub and/or earpiece devices configured to render various incoming communications, such as audio messages and visual messages. The incoming messages may be delivered to the recipient's mobile device and other connected devices that may render the contents of the incoming messages. A delivery confirmation message that describes the receipt and use of incoming messages may be generated and returned to a sender's computing device. In an embodiment, the recipient's devices may generate status information for describing the status of devices to a sender's computing device. In an embodiment, the sender's computing device may generate and transmit outgoing messages formatted based on the received status information and including metadata that instructs the recipient's devices to render message content in particular manners.||1. A method for communicating delivery confirmation information related to received messages by a recipient's mobile device, the method comprising: |
receiving a message in the recipient's mobile device identifying a device coupled to the recipient's mobile device via a short-range wireless communication technology;
obtaining from the received message instructions for rendering the received message on at least one of the recipient's mobile device or the device coupled to the recipient's mobile device via the short-range wireless communication technology, wherein obtaining from the received message instructions for rendering the received message on at least one of the recipient's mobile device or the device coupled to the recipient's mobile device via the short-range wireless communication technology includes decoding the received message to obtain metadata indicating the device on which the sender desires the received message to be rendered and at least one of sound or visual message contents;
determining whether the device indicated in the metadata is coupled to the recipient's mobile device via the short-range wireless communication technology;
providing the at least one of sound or visual message contents to the device indicated in the metadata in response to determining that the device is coupled to the recipient's mobile device;
generating a delivery confirmation message reporting whether the received message was delivered and, if the received message was delivered, a manner in which the received message was delivered; and
transmitting the delivery confirmation message to a sender of the received message.
|Yes||Yes||Yes||-||-||Yes||-||-||-||Yes||-||-||-||-||Yes||-||-||-||-||Yes||-||-||-||Yes||Yes||Yes||-||-||-||-||Yes||Yes||2012-11-27||2015-10-27||2012-05-27||1001||H04B000138 | H04L001258 | H04W000402 | H04W000412 | H04W000420||H04W000412 | H04L00125875 | H04L005130 | H04L005136 | H04W000402 | H04W000420||Miller, Brian F. | Menendez, Jose | Sauhta, Rohit||Qualcomm Inc||Qualcomm Inc||0||US9173074B2 | CN104335612A | EP2856782A2 | KR2015022897A | US20130316746A1 | WO2013180873A2 | WO2013180873A3|
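The metadata-driven routing and delivery confirmation of claim 1 of US9173074B2 reduce to a simple dispatch pattern, sketched below. The dictionary shapes, field names, and device names are hypothetical, chosen only to make the control flow concrete.

```python
def deliver(message, connected_devices):
    """Render the message on the device named in its metadata, if that
    device is currently coupled, and build a delivery confirmation
    reporting whether and how the message was delivered."""
    target = message["metadata"]["target_device"]
    if target in connected_devices:
        connected_devices[target].append(message["content"])
        return {"delivered": True, "rendered_on": target}
    return {"delivered": False, "rendered_on": None}

# An earpiece coupled to the recipient's mobile device over a
# short-range link, modeled as a list of rendered contents
earpiece = []
devices = {"earpiece": earpiece}

msg = {"metadata": {"target_device": "earpiece"}, "content": "New voicemail"}
confirmation = deliver(msg, devices)          # rendered on the earpiece
missed = deliver({"metadata": {"target_device": "smartwatch"},
                  "content": "hi"}, devices)  # target not coupled
```

In the claim, `confirmation` corresponds to the delivery confirmation message transmitted back to the sender.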
|26||US9173045B2||Headphone Response Optimization||Optimized sound waves are presented to the listener by headphones, notwithstanding differences in ear geometry and headphone positioning. A test signal causes an acoustic sensor to receive sound waves actually formed in the listener's ear cavity. A response from the sensor is compared with an expected ear cavity transfer function, from which desired adjustments to the audio signal are determined. The audio signal might be received from an application program, calibrated by an interface software element, and adjusted thereby, before forwarding to the headphones. Calibration might be performed when the headphones are first positioned, or dynamically in response to changes in the transfer function.||1. A method, including the steps of: |
emitting a test sound wave from a headphone into an ear of a listener;
receiving, by a sensor, a response to said test sound wave;
comparing said response to an expected response to said test sound wave, wherein the expected response is associated with a standard ear geometry;
determining differences between said response and said expected response; and
adjusting an input audio signal to the headphone in response to said differences, wherein the input audio signal is corrected to account for a result of comparing said response to the expected response associated with the standard ear geometry.
|-||Yes||-||-||-||-||-||-||Yes||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||-||-||Acoustic Sensor For Providing Appropriate Sound Level According To Ear Geometry||-||-||-||-||2013-02-21||2015-10-27||2012-02-21||1001||H04R002900 | H04R000110 | H04R000502 | H04R0005033||H04R0029002 | H04R00011091 | H04R000502 | H04R0029001 | H04R0005033||Bruss, John | Hogue, Douglas K. | Olson, Alan||Imation Corp||Imation Corp||0||US9173045B2 | US20130216052A1|
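The correction step in US9173045B2 (compare the measured in-ear response against the expected response for a standard ear geometry, then adjust the input audio) amounts to deriving a per-band gain from the difference. The dB values, band count, and clipping limit below are illustrative assumptions, not figures from the patent.

```python
def correction_filter(measured, expected, limit_db=12.0):
    """Per-band gain (dB) that maps the measured in-ear response onto
    the expected response, clipped to a sane adjustment range."""
    return [max(-limit_db, min(limit_db, e - m))
            for m, e in zip(measured, expected)]

# Hypothetical magnitudes (dB) at a few test frequencies:
# target is a flat expected response; measured deviates per band
expected = [0.0, 0.0, 0.0, 0.0]
measured = [-2.0, 1.5, -6.0, 20.0]
gains = correction_filter(measured, expected)
```

Applying `gains` to the input audio signal before forwarding it to the headphones corresponds to the adjusting step of claim 1.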
|27||US9173032B2||Methods Of Using Head Related Transfer Function (Hrtf) Enhancement For Improved Vertical-Polar Localization In Spatial Audio Systems||A method of enhancing vertical polar localization of a head related transfer function (HRTF). The method includes splitting an audio signal and generating left and right output signals by determining a log lateral component of the respective frequency-dependent audio gain that is equal to a median log frequency-dependent audio gain for all audio signals of that channel having a desired perceived source location. A vertical magnitude of the respective audio signal is enhanced by determining a log vertical component of the respective frequency-dependent audio gain that is equal to a product of a first enhancement factor and a difference between the respective frequency-dependent audio gain at the desired perceived source location and the lateral magnitude of the respective audio signal. The output signals are time delayed according to an interaural time delay.||1. A method of enhancing vertical polar localization of a head related transfer function defining a left frequency-dependent audio gain, a right-frequency-dependent audio gain, and an interaural time delay for a plurality of perceived source locations, the method comprising: |
splitting an audio signal into a left audio signal and a right audio signal;
generating a left output signal by:
determining a log lateral component of the left frequency-dependent audio gain that is equal to a median log left frequency-dependent audio gain for all left audio signals having a desired one of the plurality of perceived source locations and applying the log lateral component of the left frequency-dependent audio gain to the left lateral magnitude of the left audio signal; and
determining a log vertical component of the left frequency-dependent audio gain that is equal to a product of a first enhancement factor and a difference between the left frequency-dependent audio gain at the desired one of the plurality of perceived source locations and the left lateral magnitude of the left audio signal and applying the log vertical component of the left frequency-dependent audio gain to the left vertical magnitude of the left audio signal;
generating a right output signal by:
determining a log lateral component of the right frequency-dependent audio gain that is equal to a median log right frequency-dependent audio gain for all right audio signals having the desired one of the plurality of perceived source locations and applying the log lateral component of the right frequency-dependent audio gain to the right lateral magnitude of the right audio signal; and
determining a log vertical component of the right frequency-dependent audio gain that is equal to a product of a second enhancement factor and a difference between the right frequency-dependent audio gain at the desired one of the plurality of perceived source locations and the right lateral magnitude of the right audio signal and applying the log vertical component of the right-frequency-dependent audio gain to the right vertical magnitude of the right audio signal;
time delaying the right output signal with respect to the left output signal in accordance with the interaural time delay; and
delivering the left and right output signals to left and right ears, respectively, of a listener.
|-||Yes||Yes||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||-||Yes||-||-||-||-||-||-||-||2013-03-15||2015-10-27||2009-05-20||1001||H04R000504 | H04S000700 | H04S000500||H04R000504 | H04S0007304 | H04R243003 | H04S000500 | H04S242001 | H04S242011||Brungart, Douglas S. | Romigh, Griffin D.||Us Air Force||Us Air Force||0||US9173032B2 | US20130202117A1 | US8428269B1|
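The decomposition in claim 1 of US9173032B2 — lateral component equals the median log gain across source locations, vertical component equals an enhancement factor times the gain's deviation from that median — can be sketched for one ear as follows. The gain values, elevations, and factor are illustrative assumptions.

```python
import statistics

def enhance_hrtf(log_gains, factor):
    """Split each per-frequency log HRTF gain into a lateral part (the
    median across source locations) and a vertical part, then scale the
    vertical part by the enhancement factor (claim 1, one channel)."""
    enhanced = {}
    for loc, gains in log_gains.items():
        out = []
        for f_idx, g in enumerate(gains):
            lateral = statistics.median(log_gains[l][f_idx] for l in log_gains)
            vertical = factor * (g - lateral)   # deviation carries elevation cues
            out.append(lateral + vertical)
        enhanced[loc] = out
    return enhanced

# Hypothetical log-magnitude gains (dB) at two frequencies, three elevations
gains = {"-30deg": [0.0, -3.0], "0deg": [1.0, -1.0], "+30deg": [2.0, 2.0]}
enhanced = enhance_hrtf(gains, factor=2.0)
```

With `factor=2.0` the elevation-dependent deviations double while the median (lateral) level is preserved, which is the exaggeration of vertical-polar cues the patent describes; the interaural time delay would then be applied between the left and right outputs.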
|28||US9172345B2||Personalized Adjustment Of An Audio Device||Described herein are apparatuses, systems and methods that facilitate user adjustment of an audio effect of an audio device to match the hearing sensitivity of the user. The user can tune the audio device with a minimum perceptible level unique to the user. The audio device can adjust the audio effect in accordance with the minimum perceptible level. For example, a loudness level can adjust automatically to ensure that the user maintains a perceptible loudness, adjusting according to environmental noise and according to the minimum perceptible level. Also described herein are apparatuses, systems and methods related to an audio device equipped with embedded audio sensors that can maximize a voice quality while minimizing the effects of noise.||1. A device, comprising: |
a memory configured to store tuning data associated with a tuning process for a user identity in which the device is trained with the tuning data according to a defined hearing level based on an audio frequency control mechanism and an audio level control mechanism associated with the device, and other tuning data generated based on at least one predetermined tuning value that is not associated with the user identity; and
a processor configured to select an audio signal from a plurality of audio signals based on speech data, to repeatedly monitor a noise level associated with environmental noise, and to adjust, in response to a determination that the noise level is above a threshold level, the audio signal selected from the plurality of audio signals according to a plurality of filter bands associated with a digital transformation and based on the tuning data and the other tuning data.