Columns: doc_id (string, 7-11 chars) | appl_id (string, 8 chars) | flag_patent (int64, 0-1) | claim_one (string, 13-18.3k chars)
doc_id: 3939751 | appl_id: 05506072 | flag_patent: 1
1. In an electronic musical instrument for playing scales including those with more than twelve tones per octave and those with unequal temperament, a master programmable unit comprising: oscillator means for providing a fixed reference frequency signal; and a plurality of individually programmable sections, each said programmable section having capability for producing twelve tones per octave independently of any other programmable section, and each including at least twelve divider chains connected to said oscillator means for receiving said fixed reference frequency signal and for producing simultaneously a first twelve distinct signals from said reference frequency signal, the divisor number of each said divider chain being adjustable and the divisor number of one of said divider chains being related to the divisor number of any second one of said chains by a factor greater than one and less than two, program means connected to said frequency divider chain for changing the divisor numbers of individual chains to vary the frequencies of said first twelve distinct signals, octave divider means connected to receive said first twelve distinct signals from said twelve divider chains and to produce therefrom at least a second twelve signals, each of the second twelve signals having a frequency related to the frequency of one of said first twelve signals by a multiple of two, and said octave divider means having an output for each of said second twelve signals, reproducing means for receiving and reproducing signals applied thereto, and having a multiplicity of output terminals, and switching means having at least twelve key switches, each key switch connected to an output of said divider means, for coupling the signal from said divider means output to said reproducing means.
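The divider-chain arithmetic in the claim above can be sketched numerically. This is a minimal illustration, not the patent's circuit: the master oscillator frequency and top-note frequency below are invented values, chosen only to show that integer divisors for twelve-tone equal temperament are related by factors greater than one and less than two.

```python
# Hypothetical reference values -- not from the patent.
MASTER_HZ = 2_000_000            # assumed fixed reference oscillator
TOP_NOTE_HZ = 8372.018           # assumed highest tone of the octave

def divisor_numbers(tones_per_octave: int = 12) -> list[int]:
    """Integer divisor for each chain, one per equal-tempered tone."""
    return [round(MASTER_HZ / (TOP_NOTE_HZ / 2 ** (i / tones_per_octave)))
            for i in range(tones_per_octave)]

divisors = divisor_numbers()
# Adjacent divisors differ by ~2^(1/12) = 1.0595; even the extreme pair
# differs by 2^(11/12) = 1.888 -- always greater than one and less than two.
ratios = [divisors[i + 1] / divisors[i] for i in range(len(divisors) - 1)]
```

Changing `tones_per_octave` or perturbing individual divisors models the claim's programmable unequal temperaments.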
doc_id: 9521976 | appl_id: 14163971 | flag_patent: 1
1. A biofeedback system that enables biofeedback training to be accomplished during interaction by an individual with the individual's environment, comprising: a physiologic data acquisition device for acquiring physiologic data concerning the individual; a processor connected to the physiologic data acquisition device for processing said physiologic data and generating at least one control signal in response to said processing of the physiologic data; a wearable device through which the individual receives sensory information that includes, at least, visual information from the individual's environment, said wearable device comprising a lens display arranged to interrupt the visual information and thereby change the individual's visual perception of the individual's environment by varying a clarity or opacity of the lens display in response to said at least one control signal.
doc_id: 8355920 | appl_id: 12135452 | flag_patent: 1
1. A method for use in a computer system having at least one processor configured to perform speech recognition, said method comprising steps of: accepting first speech input from a user and using the at least one processor to recognize the first speech input to produce a first recognition result comprising at least a first word and a second word, the first word being associated with a first recognition score and the second word being associated with a second recognition score; providing feedback of the first recognition result to the user; accepting, as second speech input, corrective information from the user relating to at least one correction to the one or more errors; using the at least one processor to recognize the second speech input to produce a second recognition result comprising a third word; and identifying a location of the one or more errors at least partially by determining whether to align the third word with the first word or the second word based at least in part on the first and second recognition scores.
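The alignment step of the claim above can be sketched as follows. This is a toy reading, not the patent's implementation: it assumes the corrective word should align with whichever original word carried the lower recognition score (the likelier error site), and the words and scores are invented.

```python
# Illustrative only: align a correction with the lower-scoring of two
# recognized words, per the claim's score-based alignment decision.
def align_correction(first: tuple[str, float], second: tuple[str, float]) -> str:
    """first/second: (word, recognition_score). Return which word to replace."""
    return "first" if first[1] <= second[1] else "second"

# The recognizer's weak score on "wreck" marks it as the probable error,
# so the user's corrective speech is aligned there.
```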
doc_id: 8830243 | appl_id: 12446861 | flag_patent: 1
1. A computer-implemented method for allowing a user to generate a storyboard including a character graphically depicting a particular facial expression of emotion and/or an emotion expressing pose to be portrayed by an actor, comprising: providing a user interface; providing an emotional facial expression producing unit and associated first database containing data defining a plurality of different basic facial emotion-expressing characters each having a combination of basic facial features which in the aggregate express a different basic emotion; providing an emotional facial expression producing unit and associated second database containing data defining a plurality of different basic emotion-expressing character poses each having a particular configuration of body part dispositions, the combination of which defines a different emotional expression; providing a preview screen viewable by the user; providing a first menu for enabling the user to select from said first database, via inputs to said interface, and to display on said preview screen a character having facial characteristics expressing a desired basic facial emotion; providing a second menu for enabling the user to select a particular manipulation of one or more of the facial characteristics of the selected and displayed character to produce a manipulated facial emotion-expressing character having a desired different facial emotion; providing another menu for enabling the user to select, via inputs to said interface, a basic emotion-expressing character pose for the displayed character, or to select adjustments to be made to particular character body part dispositions of the displayed character, and to manipulate the pose characteristics thereof to produce a desired different emotion-expressing posed character; providing a storage unit for storing data corresponding to the selected and displayed character or manipulated character; and using a processor of the computer to generate storyboard data including the stored character or manipulated character.
doc_id: 10056081 | appl_id: 15375075 | flag_patent: 1
1. A method of controlling a plurality of equipment pieces by a controller, the controller including a microphone, a sensor, and a speaker, the method comprising: collecting, with the microphone, sound around the controller; sensing, with the sensor, a location of a person with respect to the plurality of equipment pieces; generating sensing data based on the sensing; extracting an utterance for controlling the plurality of equipment pieces from the sound collected by the microphone; in accordance with the utterance, identifying a target equipment piece to be controlled for opening its door among the plurality of equipment pieces; in accordance with the utterance, determining whether the person is providing the utterance for opening a door of the target equipment piece; determining whether the person is located within a predetermined range from the target equipment piece or the person is located outside of the predetermined range based on the sensing data; generating a command for opening the door of the target equipment piece when the person is determined to provide the utterance for opening the door of the target equipment piece, and the person is determined to be located outside the predetermined range; outputting the command to the target equipment piece; generating an audio response for verifying whether to open the door of the target equipment piece when (i) the person is determined to provide the utterance for opening the door of the target equipment piece, (ii) the person is determined to be located within the predetermined range, and (iii) a line of sight of the person, a face of the person, and a trunk of the person are not directed towards the target equipment piece; and causing the speaker to output the audio response to the person.
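The decision logic in the claim above can be sketched as a small function. The names are assumptions, and the claim only enumerates two outcomes (open when outside the range; verify when inside the range and not facing the target), so treating the inside-and-facing case as a direct open is this sketch's guess, not the patent's.

```python
# Illustrative decision table for the utterance/proximity/orientation logic.
def door_action(wants_open: bool, within_range: bool, facing_target: bool) -> str:
    if not wants_open:
        return "ignore"          # no opening utterance was extracted
    if not within_range:
        return "open"            # command output to the target equipment piece
    if not facing_target:
        return "verify"          # speaker outputs an audio confirmation request
    return "open"                # assumed behaviour; not specified by the claim
```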
doc_id: 9082040 | appl_id: 13107717 | flag_patent: 1
1. A method comprising: receiving an image; defining a region of visual context in the image; partitioning the region of visual context into sectors; producing a sub-histogram for each sector, wherein the sub-histograms represent which of the sectors contain which of one or more local interest points in the region of visual context; concatenating the sub-histograms produced for each sector to generate a histogram describing a contextual distribution of the region of visual context; and storing the histogram describing the contextual distribution of the region of visual context.
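The sector/sub-histogram construction above can be sketched in pure Python. This is a simplified stand-in, not the patented method: it assumes a circular context region split into angular sectors and interest points already quantized to visual-word ids (both assumptions for illustration).

```python
import math

# Illustrative contextual descriptor: one sub-histogram per angular sector
# over (assumed) visual-word ids, concatenated into a single histogram.
def contextual_histogram(center, points, n_sectors=4, n_words=3):
    cx, cy = center
    hist = [0] * (n_sectors * n_words)
    for (x, y, word_id) in points:                  # word_id: quantized feature
        angle = math.atan2(y - cy, x - cx) % (2 * math.pi)
        sector = int(angle / (2 * math.pi / n_sectors))
        hist[sector * n_words + word_id] += 1       # sub-histograms, concatenated
    return hist

# Three interest points around the origin, landing in three different sectors.
h = contextual_histogram((0, 0), [(1, 0, 0), (0, 1, 2), (-1, 0, 1)])
```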
doc_id: 20150161113 | appl_id: 13555405 | flag_patent: 0
1. A computer-implemented method comprising: receiving, at a translation server in communication with a network, a request for a translation of text in a source language to a target language, the request including an identifier associated with (i) a phrase table and (ii) a glossary, the phrase table and glossary both being specific to a user that identified the text for translation; translating, at the translation server, at least a portion of the text from the source language to the target language to obtain a translated version of the text in the target language, the translating including: identifying the phrase table and the glossary based on the identifier; determining a source segment from the text corresponding to a target segment in the identified phrase table; applying the target segment to create the translated version of the text; determining a source term from the text corresponding to a target term in the identified glossary; applying the target term to create the translated version of the text with the applying the target segment having priority over the applying the target term; and applying a machine translation to translate the text from the source language to the target language with the applying the target segment and the applying the target term having priority over applying the machine translation; and providing, via the translation server, the translated version of the text to a web server.
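The priority ordering in the claim above (phrase table over glossary over machine translation) can be sketched with a toy string pipeline. All data below is invented, and bare substring replacement stands in for real segment alignment.

```python
# Illustrative priority chain: phrase-table segments first, glossary terms
# on whatever text the phrase table left, a stand-in MT for the rest.
def translate(text, phrase_table, glossary, mt):
    out = text
    for src, tgt in phrase_table.items():   # highest priority
        out = out.replace(src, tgt)
    for src, tgt in glossary.items():       # second priority
        out = out.replace(src, tgt)
    return mt(out)                          # lowest priority

pt = {"good morning": "bonjour"}
gl = {"morning": "matin"}
identity_mt = lambda s: s                   # placeholder machine translation
```

Because the phrase table runs first, "good morning" becomes "bonjour" before the glossary can ever see the word "morning" inside it.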
doc_id: 20120059579 | appl_id: 12876420 | flag_patent: 0
1. A vehicle navigation system in which voice data from a remote communication device disposed outside of the vehicle is transmitted to a host communication device, said navigation system comprising: a hands free communication unit configured to connect the host communication device so as to communicate with the remote communication device; and a voice recognition engine connected to said hands free communication unit through a voice data link; wherein said hands free communication unit transmits said voice data originating from said remote communication device to said voice recognition engine over said voice data link, and wherein said voice recognition engine processes said voice data to extract a destination in order to control a peripheral.
doc_id: 9218807 | appl_id: 12986855 | flag_patent: 1
1. A speech recognition system that can be acoustically trained with free text audio, the system comprising: a speech recognition software application operating on a computing device having a processor, the speech recognition software application comprising: a speech recognition engine configured to receive the free text audio at the speech recognition engine which is unknown to the speech recognition engine previous to acoustical training of the speech recognition engine in both spoken audio and text forms, translate the free text audio into text form for display to a user, and receive a reviewed version of the text form and convert the reviewed version of the text form into a context free grammar based on text indicated as validated text as indicated by the user; a comparison module configured to receive an indication of the validated text and associate the validated text with at least one word from the free text audio; and a plurality of voice models; wherein upon receipt of a plurality of instances in which validated text is associated with the at least one word from the free text audio, the speech recognition software application selects a subset of voice models of the plurality of voice models in such a way that the subset of voice models shares a plurality of characteristics with the free text audio associated with the validated text.
doc_id: 20080204411 | appl_id: 12116813 | flag_patent: 0
1. A movement recognition apparatus comprising: receiving means for receiving a signal via a wireless medium, the signal originating from a pointing device; demodulation means for demodulating the received signal; control means for receiving the demodulated signal and providing a plurality of output data messages, the plurality of output data messages including at least one acceleration data value detected by an accelerometer during a motion of the pointing device; storage means for storing the plurality of output data messages as well as a plurality of threshold definitions associated with accelerometer output of the pointing device, wherein each of the plurality of threshold definitions includes at least one threshold value utilized to indicate that the motion of the pointing device has occurred; and processing means for processing the plurality of output data messages, wherein the processing includes accessing each of the plurality of threshold definitions, comparing each of the threshold definitions to the plurality of output data messages, identifying whether the at least one threshold value in each of the threshold definitions is exceeded by any of the plurality of output data messages, and selecting one of the threshold definitions, wherein the at least one threshold value in the selected one of the threshold definitions is exceeded by any of the plurality of output data messages.
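The threshold-selection step of the claim above can be sketched directly; the definition names and values below are invented, and a flat list stands in for the claim's stored messages and threshold definitions.

```python
# Illustrative: select the threshold definition whose threshold value is
# exceeded by any of the stored accelerometer output messages.
def select_definition(definitions, messages):
    """definitions: list of (name, threshold); messages: accel data values."""
    for name, threshold in definitions:
        if any(m > threshold for m in messages):
            return name
    return None

defs = [("shake", 2.5), ("tilt", 0.8)]      # hypothetical threshold definitions
```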
doc_id: 9466295 | appl_id: 14142932 | flag_patent: 1
1. A method for correcting a speech response, the method comprising: receiving a first speech input; parsing at least one first keyword included in the first speech input to obtain a candidate list, wherein the candidate list has a plurality of report answers; selecting one of the report answers from the candidate list as a first report answer and outputting a first speech response according to the first report answer; receiving and parsing a second speech input to determine whether the first report answer is correct; and if the first report answer is incorrect, selecting another report answer other than the first report answer from the candidate list as a second report answer and outputting a second speech response according to the second report answer, wherein, each of the report answers has a priority, the priorities of the report answers of the candidate list are determined according to usages of the report answers, the first report answer is one of the report answers with the highest priority, and the second report answer is one of the report answers with the second highest priority, wherein the step of selecting the first report answer comprises: parsing a third speech input and obtaining at least one third keyword, wherein the third speech input is inputted before the first speech input; and selecting one of the report answers matching the at least one first keyword and the at least one third keyword as the first report answer.
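The priority-ranked fallback in the claim above is easy to sketch: the first response uses the highest-priority report answer, and a correction falls back to the runner-up. The candidate answers and priorities below are invented.

```python
# Illustrative: rank candidate report answers by usage-derived priority.
def pick_answers(candidates):
    """candidates: list of (answer, priority). Return (first, second) answers."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    return ranked[0][0], ranked[1][0]

first, second = pick_answers([("Taipei 101", 5), ("the museum", 9), ("the park", 2)])
```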
doc_id: 9954746 | appl_id: 14794906 | flag_patent: 1
1. A computer system for automatically generating service documentation based on usage of a web service, the computer system comprising: one or more computers including: one or more input/output components configured to operatively communicate with a network; a processor operatively coupled with the one or more input/output components and configured to execute computer-executable instructions; and memory storing one or more computer-executable instructions that, when executed by the processor, perform operations configured to: capture network traffic communicated via the one or more input/output components including one or more actual requests to a service endpoint of the web service and one or more actual responses from the service endpoint of the web service; analyze the captured network traffic using one or more machine learning algorithms to determine one or more operations that are available at the service endpoint, one or more input arguments that are accepted by the service endpoint, and one or more output arguments that are provided by the service endpoint, including using the one or more machine learning algorithms to determine one or more mandatory input arguments that are necessary for successful operation of the web service based at least partially on analysis of (i) one or more output arguments that indicate an unsuccessful operation of the web service, and (ii) one or more output arguments that indicate a successful operation of the web service; generate metadata based on the analysis of the captured network traffic for the service endpoint that identifies the one or more operations, the one or more input arguments, and the one or more output arguments; automatically generate service documentation for the web service based on the metadata, the service documentation including at least identification of the one or more mandatory input arguments that are necessary for successful operation of the web service determined using the one or more machine learning algorithms; and communicate the service documentation for the web service to a client device.
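The mandatory-argument inference in the claim above can be sketched with simple set logic standing in for the claim's machine-learning step (the traffic records are invented): an input argument looks mandatory when it appears in every successful request but is missing from some failed one.

```python
# Illustrative inference of mandatory input arguments from observed traffic.
def mandatory_arguments(traffic):
    """traffic: list of (input_args: set, success: bool) observations."""
    ok = [args for args, success in traffic if success]
    bad = [args for args, success in traffic if not success]
    in_all_ok = set.intersection(*ok) if ok else set()
    # Keep only arguments whose absence coincides with at least one failure.
    return {a for a in in_all_ok if any(a not in args for args in bad)}

traffic = [
    ({"user", "token"}, True),
    ({"user", "token", "lang"}, True),
    ({"user"}, False),          # failed without "token"
]
```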
doc_id: 20160240195 | appl_id: 14752229 | flag_patent: 0
1. An information processing method, applicable to an electronic device, wherein the electronic device comprises a voice input and output unit, and the method comprises: detecting to obtain voice information; obtaining at least one voice feature in the voice information by identifying the voice information; generating a voice operation instruction based on the voice information; determining a presentation outcome of multimedia data based on the at least one voice feature and the voice operation instruction, wherein the presentation outcome comprises a content to be presented for the multimedia data and a presenting form for the content to be presented, and the presentation outcome matches the voice feature; and presenting the multimedia data based on the presentation outcome.
doc_id: 9760089 | appl_id: 15078700 | flag_patent: 1
1. A system comprising: an input/output interface; a memory with a plurality of instructions; a processor in communication with the memory; an autonomous robot electronically coupled with the processor; a mobile electronic device mounted on the autonomous robot and electronically coupled to the processor to initialize an interaction with one or more humans and one or more IoT based devices in a predefined range of the autonomous robot; a virtual reality device electronically coupled to the processor to visualize a stereoscopic image in a virtual environment; a handheld electronic device electronically coupled to the processor, placed in the virtual reality device, to execute a plurality of actions using the autonomous robot from the virtual environment, wherein the plurality of actions comprise: determining the presence of the one or more humans in the received image using cognitive intelligence of the autonomous robot; learning one or more characteristics of the determined one or more humans using social intelligence of the autonomous robot; and communicating with the determined one or more humans based on the learned one or more characteristics using a mobile electronic device.
doc_id: 10038419 | appl_id: 15642428 | flag_patent: 1
1. An audio playback system including a processor and associated programming, the programming, when executed on the processor, causing the audio playback system to perform a method comprising: identifying a first type of audio included in a first audio stream; tagging the first audio stream with a first digital tag corresponding to the first type of audio; identifying a second type of audio included in a second audio stream; tagging the second audio stream with a second digital tag corresponding to the second type of audio; rendering the first audio stream with a first equalization profile applied thereto, the first equalization profile selected responsive to the audio playback system detecting the first digital tag in the first audio stream; and rendering the second audio stream with a second equalization profile different from the first equalization profile applied thereto, the second equalization profile selected responsive to the audio playback system detecting the second digital tag in the second audio stream, the audio playback system comprising a master streaming audio player and at least one slave streaming audio player, the at least one slave streaming audio player configured to render the first audio stream and the second audio stream under control of the master streaming audio player, the at least one slave streaming audio player being configured to identify a spoken user query and communicate the user query to the master streaming audio player, the master streaming audio player being configured to generate a response to the user query and communicate the response to the user query in the first audio stream to the at least one slave streaming audio player for rendering, the first digital tag included in the first audio stream identifying the first audio stream as including the response to the user query, the master streaming audio player being further configured to communicate the second audio stream to the at least one slave streaming audio player, the second digital tag in the second audio stream identifying the second audio stream as including audio other than the response to the user query, the at least one slave streaming audio player being configured to identify the second digital tag in the second audio stream and to apply the second equalization profile to the second audio stream responsive to detecting the second digital tag, the master streaming audio player being further configured to communicate a third audio stream including an audio chime to the at least one slave streaming audio player, the third audio stream including a third digital tag identifying the third audio stream as including the audio chime, the at least one slave streaming audio player being configured to identify the third digital tag in the third audio stream and to apply a third equalization profile different from the first equalization profile to the third audio stream responsive to detecting the third digital tag.
doc_id: 20060074667 | appl_id: 10535295 | flag_patent: 0
1. A speech recognition device (1) for recognizing text information (TI) corresponding to speech information (SI), which speech information (SI) can be characterized in respect of language properties, wherein first language-property recognition means (20) are provided that, by using the speech information (SI), are arranged to recognize a first language property and to generate first property information (ASI) representing the first language property that is recognized, wherein at least second language-property recognition means (21, 22, 23) are provided that, by using the speech information (SI), are arranged to recognize a second language property of the speech information (SI) and to generate second property information (LI, SGI, CI) representing the second language property that is recognized, and wherein speech recognition means (24) are provided that are arranged to recognize the text information (TI) corresponding to the speech information (SI) by continuously taking into account at least the first property information (ASI) and the second property information (LI, SGI, CI).
doc_id: 20130282734 | appl_id: 13664268 | flag_patent: 0
1. A computer system for building an inventory database, the computer system comprising: an electronic memory storage configured to store modules; a computer processor configured to execute the modules comprising at least: a data access module configured to access inventory data for a plurality of products from at least one inventory data source, a metadata extractor module configured to extract stated metadata from the inventory data for each of the plurality of products, a product evaluator module configured to determine a product category or a product identification based on the stated metadata for each of the plurality of products, a comparator module configured to compare for each of the plurality of products the product category or the product identification to stored metadata in a metadata database, wherein the metadata database comprises additional metadata for a plurality of products, a metadata identifier module configured to identify derived metadata for each of the plurality of products based on the comparison, wherein the derived metadata is metadata that is not stated metadata and is in the metadata database, a conversions module configured to generate a specific conversion rate for each of the plurality of products based on inputting the stated metadata and the derived metadata into a conversion rate formula, wherein the conversion rate formula is based on a linear or logistic regression analysis, a conversion rate assignor module configured to assign the specific conversion rate corresponding to each of the plurality of products, and a storage module configured to store in the inventory database for each of the plurality of products the stated metadata, the derived metadata, and the specific conversion rate.
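The conversion-rate step of the claim above can be sketched with the logistic form it names; the weights, bias, and feature values below are invented, standing in for stated and derived metadata turned into numbers.

```python
import math

# Illustrative logistic-regression conversion-rate formula: metadata-derived
# features map through a linear score and a logistic link to a rate in (0, 1).
def conversion_rate(features, weights, bias):
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical product with two metadata features.
rate = conversion_rate([1.0, 0.5], [0.8, -1.2], -0.3)
```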
doc_id: 20100009719 | appl_id: 12470421 | flag_patent: 0
1. A mobile terminal, comprising: a microphone configured to receive a user's voice during a video call; a display configured to display information; and a controller configured to recognize the voice, detect a voice command included in the voice, and automatically display a menu corresponding to the detected voice command on the display.
doc_id: 20060144212 | appl_id: 11030279 | flag_patent: 0
1. A wireless handheld baton for communicating with a receiver of an electronic tone generation system for producing audible sounds in response to movements of the baton, comprising: a housing having a grippable end portion; a pair of radiation sensors positioned on opposite sides of said housing from which a differential is determinable; and a processor carried in said housing for causing the baton to transmit a mute signal when said differential determined from said radiation sensors exceeds a threshold level.
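The mute decision in the claim above reduces to a one-line comparison; the threshold value here is an assumption, not the patent's.

```python
# Illustrative: the baton mutes when the differential between its two
# opposite-side radiation sensors exceeds a threshold level.
def should_mute(sensor_a: float, sensor_b: float, threshold: float = 0.5) -> bool:
    return abs(sensor_a - sensor_b) > threshold
```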
doc_id: 8306514 | appl_id: 13047206 | flag_patent: 1
1. A system to automatically provide differing levels of information according to a predetermined social hierarchy, comprising: a communication device comprising a sensor set which detects sensor data including a first detected sensor value comprising an amount of light of an environment of the communication device detected by an optical sensor and a second detected sensor value comprising a sound level of the environment of the communication device detected by an acoustic sensor, and transmits the detected sensor data; a memory which stores social templates, each social template corresponding to a unique social signature comprising a first sensor value range and a second sensor value range other than the first sensor value range and each social template being selectable to provide, for each level of the predetermined social hierarchy, a corresponding differing amount of information to each member of the predetermined social hierarchy; and a server comprising a processor which receives the sensor data transmitted from the communication device, creates a detected social signature from the received sensor data, determines which of the social signatures of the stored social templates has a greatest correspondence with the created social signature through comparison of the first and second detected sensor values and the first and second sensor value ranges of each stored social template, retrieves from the memory the determined one social template having the greatest correspondence and having the detected amount of light within the first sensor value range and the detected sound level within the second sensor value range, and provides to at least one member of the predetermined social hierarchy only as much information as allowed based on the retrieved social template, wherein, for at least one of the social templates, each level of the social hierarchy corresponds to a corresponding different social networking service, and the processor automatically provides different updates to each of the social networking services as allowed based on the one social template.
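The template-matching step of the claim above can be sketched as a range check over the two detected sensor values; the template names and sensor ranges below are invented.

```python
# Illustrative: a social template matches when the detected light level
# falls in its first range and the detected sound level in its second.
def select_template(templates, light, sound):
    for name, (lo1, hi1), (lo2, hi2) in templates:
        if lo1 <= light <= hi1 and lo2 <= sound <= hi2:
            return name
    return None

templates = [
    ("office", (200, 800), (30, 60)),   # hypothetical: bright-ish and quiet
    ("party",  (0, 200),   (60, 100)),  # hypothetical: dim and loud
]
```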
doc_id: 20090325546 | appl_id: 12163243 | flag_patent: 0
1. A computer readable medium including executable instructions which, when executed, provide options for data services using push-to-talk (PTT), by: receiving a signal for initiating a voice enabled service session; communicating a prompt to a user to begin speaking to provide voice data for processing of a spoken voice service request; recording, from a user, voice data comprising the spoken voice service request; and transmitting the recorded voice data to a service server for processing the voice data to satisfy the spoken voice service request.
doc_id: 7487151 | appl_id: 10987158 | flag_patent: 1
1. An information processing apparatus comprising: modification means for acquiring M information sets each including N pieces of individual information and modifying at least partially the N pieces of individual information of each of the M information sets such that correlations among the N pieces of individual information are emphasized, where N is an integer equal to or greater than 2 and M is an integer equal to or greater than 1; generation means for generating a reference information set including N pieces of individual information for use as a reference in a calculation of similarity, from the M information sets each including N pieces of individual information modified by the modification means; and similarity calculation means for acquiring, as a comparative information set, a new information set including N individual information elements and calculating the similarity of the comparative information set with respect to the reference information set produced by the generation means.
doc_id: 20020029114 | appl_id: 09934084 | flag_patent: 0
1. A method for determining properties of products from a combinatorial chemical library P using features of their respective building blocks, the method comprising the steps of: (1) determining at least one feature for each building block in the combinatorial library P, {a_ijk, i=1,2,...,r; j=1,2,...,r_i; k=1,2,...,n_i}, wherein r represents the number of variation sites in the combinatorial library, r_i represents the number of building blocks at the i-th variation site, and n_i represents the number of features used to characterize each building block at the i-th variation site; (2) selecting a training subset of products {p_i, i=1,2,...,m; p_i ∈ P} from the combinatorial library P; (3) determining q properties for each compound p_i in the selected training subset of products, wherein y_i = {y_ij, i=1,2,...,m, j=1,2,...,q} represents the determined properties of compound p_i, and wherein q is greater than or equal to one; (4) identifying, for each product p_i of the training subset of products, the corresponding building blocks {t_ij, t_ij = 1,2,...,r_j, j=1,2,...,r} and concatenating their features determined in step (1) into a single vector x_i = a_1t_i1 | a_2t_i2 | ... | a_rt_ir; (5) using a supervised machine learning approach to infer a mapping function f that transforms input values x_i to output values y_i from the input/output pairs in the training set T = {(x_i, y_i), i=1,2,...,m}; (6) identifying, after the mapping function f is determined, for a product p_z ∈ P, the corresponding building blocks {t_zj, j=1,2,...,r} and concatenating their features, a_1t_z1, a_2t_z2, ..., a_rt_zr, into a single vector x_z = a_1t_z1 | a_2t_z2 | ... | a_rt_zr; and (7) mapping x_z → y_z, using the mapping function f determined in step (5), wherein y_z represents the properties of product p_z.
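The feature-concatenation steps of the claim above can be sketched with toy data: each product's building-block feature vectors, one per variation site, are concatenated into a single input x, and a trivial stand-in plays the role of the learned mapping function f. All feature values below are invented.

```python
# Illustrative: concatenate per-site building-block features into one vector.
def concatenate_features(block_ids, block_features):
    """block_ids: chosen block per site; block_features[site][id] -> feature list."""
    x = []
    for site, block_id in enumerate(block_ids):
        x.extend(block_features[site][block_id])
    return x

block_features = [
    {0: [1.0, 0.2], 1: [0.4, 0.9]},   # site 1: two candidate building blocks
    {0: [0.5], 1: [1.5]},             # site 2: two candidate building blocks
]
x = concatenate_features([1, 0], block_features)   # product using blocks (1, 0)
f = lambda v: sum(v)                               # stand-in mapping function
```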
doc_id: 9384758 | appl_id: 14754539 | flag_patent: 1
1. A computer-implemented method for matching audio sequences, the method performed by a computer processor and comprising: deriving, by the computer processor, a first probability density function P_M outputting a probability that an initial correspondence score for a pair of chroma vectors of an audio sequence indicates a semantic correspondence between the chroma vectors; deriving, by the computer processor, a second probability density function P_R outputting a probability that the initial correspondence score for a pair of chroma vectors of an audio sequence indicates that the chroma vectors have a random correspondence, the deriving of P_R comprising: randomly selecting a set of pairs of audio sequences; deriving initial correspondence scores for the set of pairs of audio sequences; and fitting the initial correspondence scores to a probability distribution; deriving, by the computer processor using P_M and P_R, a match function indicating whether a given pair of chroma vectors of an audio sequence correspond semantically; obtaining a first audio sequence; comparing, by the computer processor using the match function, the first audio sequence with a plurality of known audio sequences; and based on the comparing, identifying, by the computer processor, a best-matching audio sequence for the first audio sequence from the known audio sequences.
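The match function derived from P_M and P_R in the claim above can be sketched as a likelihood-ratio test. Both densities are modelled here as Gaussians with invented parameters; the patent does not specify this form, so it is purely illustrative.

```python
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Illustrative match function: a correspondence score counts as a semantic
# match when the "matching" density P_M exceeds the "random" density P_R,
# i.e. the likelihood ratio P_M/P_R is greater than one.
def is_match(score, mu_m=0.8, sigma_m=0.1, mu_r=0.2, sigma_r=0.15):
    return gaussian_pdf(score, mu_m, sigma_m) > gaussian_pdf(score, mu_r, sigma_r)
```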
7590224
09699495
1
1. An automated task classification system that operates on a task objective of a user, through a natural language dialog with the user in which system prompts are not ordered in a menu, the system comprising: a recognizer that spots at least one of a plurality of meaningful phrases in substantially simultaneous user natural language verbal and non-verbal input, wherein the natural language verbal and non-verbal input each convey different information and are associated with a coordinated message that achieves an appropriate response, each of the plurality of meaningful phrases having an association with at least one of a predetermined set of task objectives, and the predetermined set of task objectives based, at least partly, on a salience measure of one of the plurality of meaningful phrases to a specified one of the predetermined task objectives, wherein the salience measure is represented as a conditional probability of the task objective being requested given an appearance of one of the plurality of meaningful phrases in the input communication, the conditional probability being a highest value in a distribution of conditional probabilities over the set of predetermined task objectives; and a task classifier that makes a classification decision based, at least partly, on the spotted at least one of the plurality of meaningful phrases.
9129013
13795886
1
1. A method comprising: matching a token from at least a portion of a text string with a matching concept in an ontology; identifying a first concept as being hierarchically related to the matching concept within the ontology; identifying a second concept as being hierarchically related to the first concept within the ontology; including the first and second concepts in a set of features of the token; and determining, using at least one processor, a measure related to a likelihood that the at least a portion of the text string corresponds to a particular entity type, based at least in part on the set of features of the token.
20070061148
11154897
0
1. A method for displaying speech command input state information in a multimodal browser, the method comprising: displaying an icon representing a speech command type; and displaying an icon representing an input state of the speech command.
20090144372
11948370
0
1. A method for messaging integration of a business object comprising: embedding a business object in message text in a messaging session provided by a messenger; applying an action to the business object from within the message session of the messenger; identifying a pronoun in the message text referencing the business object; and, visually distinguishing the identified pronoun in the message text to draw a correlation between the business object and the pronoun.
20020138266
10044760
0
1. A computer method for converting an utterance representation into a response, the computer method comprising the steps of: generating a goal derived from the utterance representation; analyzing the utterance representation based on the goal and a set of goal-directed rules to identify ambiguous information in the utterance representation; and generating a response based on the analysis of the utterance representation.
20080319971
10900039
0
1. A method of personalizing a search of a document collection to a user, the method comprising: storing a user model associated with the user, and comprising a plurality of phrases contained in documents accessed by the user; receiving a query from the user; selecting search results comprising a plurality of documents responsive to the query; identifying phrases that are related to the query and present in the user model; weighting a plurality of scores of a corresponding plurality of the search results according to the identified phrases; ranking the plurality of the search results for presentation to the user according to their weighted scores, to provide personalized search results; and presenting the personalized search results to the user.
20130231920
13782914
0
1. A system comprising: a report generation component configured to generate a report; a report presentation component configured to allow an operator to select an observation from the report; a root cause component configured to determine one or more causal factors associated with the observation; a memory configured to store the report generation component, the report presentation component, and the root cause component; and at least one processor to implement the report generation component, the report presentation component, and the root cause component.
9183198
13847288
1
1. A method for computer-aided translation, comprising: receiving a document comprising one or more sentences to be translated; generating a suggestion pool of possible translations for each sentence in the document using a processor; providing a best suggestion from the suggestion pool to a user for a sentence being translated; updating the suggestion pool based on the user's input of a translation prefix; and providing an updated best suggestion from the updated suggestion pool to the user for the sentence being translated.
10102269
14742213
1
1. A computing device comprising: a processor; and memory, coupled to the processor, storing instructions that, when executed, cause the computing device to: receive, from a client application, a query object indicative of a query; in response to receiving the query object, identify a data source associated with the client application, the query object being defined according to an object model that is: expressed in an object-oriented programming language, and independent of a data model implemented by the data source; parse the query object to generate an intermediate description of the query; translate the intermediate description of the query into a query string in a target query language that corresponds to the data model implemented by the data source; and transmit the query string in the target query language to the data source for execution of the query.
9520138
14210036
1
1. A method for identifying a target speaker, comprising: obtaining spectral features of an audio signal; obtaining a signal-to-noise ratio for the audio signal that is based at least on the spectral features; adapting a speaker model based on the signal-to-noise ratio; and determining a likelihood that the audio signal is associated with the target speaker based on the adapted speaker model.
20170323576
15150141
0
1. A method comprising: receiving, by a processing device, an input of a word that is to be learned by a user; performing, by the processing device, a search for a definition of the word using a search engine; receiving, by the processing device, the definition of the word based on the search; prompting, by the processing device, the user to rewrite the definition; receiving, by the processing device, a user input of a new definition for the word; prompting, by the processing device, the user to select a vocabulary learning mode from a group of vocabulary learning modes consisting of: a story mode, an etymology mode, an image mode, and a word connections mode; receiving, by the processing device, a selection of a vocabulary learning mode from the group of vocabulary learning modes; providing, by the processing device, a user interface and one or more tools for generation of a card for study of the word, wherein the one or more tools are based on the selected vocabulary learning mode; generating the card by the processing device; and saving the card by the processing device.
20080132221
11566821
0
1. A method of processing calendar information in a mobile communication device, the method comprising the acts of: receiving and storing calendar information for an appointment in a calendar application of the mobile communication device, the calendar information being associated with a date and time of the appointment; and causing a warning indication to be produced at a user interface of the mobile communication device in response to identifying an out-of-coverage condition within a predetermined time period of the date and time of the appointment.
20030235807
10414075
0
1. A method of preparing a representation of a textual work containing words, each word appearing a number of times in the text, comprising: reading the words of the text; as each word is read, adding that word to a database, the database containing a record for each word, each record containing a plurality of fields; positioning the words of the text on a display about a central region, each word being positioned at a position measured linearly along the circumference of the display proportional to the position of the word in the text; drawing each word within the display at the average of all of its positions in the text and around the circumference of the display; presenting each word within the display in a color or a shade of a color that provides an indication of the number of times that word appears in the text; and for a chosen word, drawing one radiating line from the location of the drawing of that word to each position of that word along the circumference of the display.
9681794
15143023
1
1. A computing device, comprising: at least one output device, the at least one output device including a display screen to provide graphical output for a user interface; at least one input device, the at least one input device including an input sensor to detect non-contact human input from a human user; at least one processor; and at least one memory, the at least one memory providing a plurality of instructions, wherein the instructions are operable with the at least one processor, to: output a prompt in the user interface via the display screen, wherein the prompt corresponds to a stage of a medical device handling workflow, wherein the medical device handling workflow includes a plurality of stages, and wherein the prompt is outputted to obtain the non-contact human input from the human user; capture the non-contact human input with the input sensor, wherein the non-contact human input is captured at the stage of the medical device handling workflow, wherein the non-contact human input is detected with the input sensor in response to the prompt; correlate the captured non-contact human input to a command in the user interface, wherein the command corresponds to an activity in the medical device handling workflow; and perform the command in the user interface, wherein the command causes the activity to be performed in the medical device handling workflow.
8015198
12106450
1
1. A method for retrieving based on a search term together with its corresponding meaning from a set of base documents those documents which contain said search term and in which said search term has said corresponding meaning, said method comprising: searching, utilizing a computer, for those base documents among said set of base documents which contain said search term; evaluating, utilizing the computer, the found base documents as to whether said search term contained in said found base documents has said corresponding meaning, said evaluation comprising: generating, utilizing the computer, a text document to represent elements surrounding the search term and the elements' corresponding relative position with respect to said search term, said elements' relative position with respect to said search term comprising where the elements are located in the surrounding area of the search term, as compared with where the search term is located; inputting, utilizing the computer, said text document into a trainable classifying apparatus which has been trained to recognize whether said search term in each said found base document has said corresponding meaning, whereas said training has been performed based on a training sample of said found base documents which have been generated for documents in which the search term surrounded by the surrounding elements has said corresponding meaning inputted by said user; and classifying, utilizing the computer, each said found base document to judge whether said search term in each said found base document has said corresponding meaning; generating a database from the elements and their corresponding meaning.
8234274
12629043
1
1. A method for characterizing a corpus of documents each having one or more links, comprising: forming a Bayesian network using the documents; determining a Bayesian network structure using the one or more links; generating a content link model where the model is a generative probabilistic model of the corpus along with citation information among documents, each document represented as a mixture over latent topics, and each relationship among documents is modeled by another generative process with a topic distribution of each document being a mixture of distributions associated with related documents; using a citation-topic (CT) model with a generative process for each word w in the document d in the corpus, with document probabilities Ξ, topic distribution matrix Θ and word probabilities matrix Ψ, including: choosing a related document c from p (c|d,Ξ), a multinomial probability conditioned on the document d; choosing a topic z from the topic distribution of the document c, p(z|c,Θ); choosing a word w which follows the multinomial distribution p(w|z,Ψ) conditioned on the topic z; and determining one or more topics in the corpus and topic distribution for each document wherein the content link model captures direct and indirect relationships represented by the links.
20020194388
10007092
0
1. A multi-modal browser, comprising: a model manager for managing a model comprising a modality-independent representation of an application, and a plurality of channel-specific controllers, wherein each controller processes and transforms the model to generate a corresponding channel-specific view of the model, wherein the channel-specific views are synchronized by the model manager such that a user interaction in one channel-specific view is reflected in another channel-specific view.
20120121181
13351676
0
1. One or more computer-storage media embodying computer-useable instructions for performing a method comprising: receiving input corresponding with user handwriting from a handheld writing device; displaying digital ink representing the user handwriting based on the input; analyzing the input using a recognizer to identify one or more words as recognition text for the digital ink; employing at least three triggers to determine when to convert display of the digital ink to display of the recognition text, wherein the at least three triggers include a distance-based trigger, a recognition-based trigger, and an overall timer-based trigger; determining that at least one of the at least three triggers has been satisfied indicating to convert display of the digital ink to the recognition text; and displaying the recognition text in place of the digital ink.
20050120867
11003240
0
1. An interactive voice response system comprising: a voice application interpreter for processing an interaction with a user; a music score describing background music for playing during the interaction; a music synthesizer for generating music from the music score in accordance with acoustic parameters; and means for controlling the music synthesizer whereby the acoustic parameters may be controlled in response to the interaction with the user and independently of the music score.
8775176
13975901
1
1. A method comprising: upon verifying an identity of a user: identifying, via a processor, a template for a domain associated with the user; receiving input speech from the user, the input speech comprising a substantive portion and an instructional portion, the instructional portion related to navigation between fields in the template; transcribing the substantive portion of the input speech to text, to yield transcribed text; inserting the transcribed text into the template, to yield a completed template; and storing the completed template in a database; and upon receiving a request to play a dictation for a particular word in the completed template, playing the dictation of the particular word.
8145205
12089660
1
1. A method of estimating the quality of speech information associated with a voice call over a communication system comprising a core network and an access network wherein speech information is carried in frames between the access network and the core network and within the access network, the method comprising the steps of: considering only frames containing speech and ignoring silent frames; determining a rate of frame loss for the considered frames transported between the access network and the core network or within the core network; and, mapping the rate of frame loss to a subjective speech quality estimation value using data collected by simulating frame loss on representative speech samples and determining quality estimation values for the damaged speech samples; wherein said communication system is a Universal Mobile Telecommunications System network; and, wherein the method is implemented at the Radio Network Controller of the access network, wherein said frames are Iu frames received from a Media Gateway of the core network.
20100070899
12558304
0
1. A computer-readable storage medium storing a plurality of instructions for controlling a processor, the plurality of instructions comprising: instructions that cause the processor to identify, from a set of content elements contained by a web page loaded in a browser, a first content element to be made sharable; and instructions that cause the processor to make the first content element sharable.
20090259629
12103126
0
1. A method for handling abbreviations in web queries, the method comprising: building a dictionary of a plurality of possible word expansions for a plurality of potential abbreviations related to query terms received or anticipated to be received by a search engine; accepting a query including an abbreviation; expanding the abbreviation into one of the plurality of word expansions if a probability that the expansion is correct is above a threshold value, wherein the probability is determined by taking into consideration a context of the abbreviation within the query, wherein the context comprises at least anchor text; and sending the query with the expanded abbreviation to the search engine to generate a search results page related to the query.
20120179457
13345219
0
1. A method of performing speech recognition in a distributed speech recognition system comprising an electronic device including an embedded speech recognizer and a network device including a remote speech recognizer remote from the electronic device, the method comprising: receiving, by the electronic device, input audio comprising speech; transmitting at least a portion of the input audio to the network device for processing by the remote speech recognizer; processing, by the embedded speech recognizer, at least a portion of the input audio to produce a local speech recognition result; and performing a partial action, based, at least in part, on the local speech recognition result.
8654933
11932146
1
1. A voice messaging system for converting an audio message from a caller into text, the voice messaging system comprising: at least one automatic speech recognition (ASR) system to automatically recognize at least some of the audio message; a computer implemented preprocessing front-end to process the audio message from the caller and to detect if the audio message contains no voice content, wherein: if the preprocessing front-end detects that the audio message contains no voice content, the preprocessing front-end does not provide the audio message to the ASR component; and if the preprocessing front-end detects that the audio message contains voice content, the front-end provides the audio message to the ASR component, and wherein the computer implemented preprocessing front-end comprises a computer implemented speech quality detector to determine at least one measure of speech quality of the voice content of the audio message, and wherein the speech quality detector detects drop-outs, estimates noise levels and/or calculates an overall measure of voice quality using an adaptive threshold to reject lowest quality messages.
20100153320
12714392
0
1. (canceled)
7529657
10950091
1
1. A method for authoring a grammar for use in a language processing application, comprising: i. receiving a semantic structure including a plurality of parts that represent a task; ii. receiving a plurality of grammar configuration parameters separate from the semantic structure, the plurality of grammar configuration parameters providing information related to grammar components associated with parts of the semantic structure in order to configure a grammar for the semantic structure, the parameters comprising a first value for a grammar component having a first type of grammar topology and a second value for a grammar component having a second type of grammar topology, wherein the second type of grammar topology is different than the first type of grammar topology; and iii. creating the grammar, via a processor, based on the plurality of grammar configuration parameters, wherein the grammar utilizes grammar components of selected topologies based on the plurality of grammar configuration parameters to analyze a natural language input.
10095684
15475016
1
1. A data input system comprising: a processor; a language model implemented using the processor, which computes candidate next items in an input sequence of one or more items; a training engine implemented using the processor which performs training of the language model using training data comprising a plurality of true words and at least one alternative candidate word for each of the plurality of true words, wherein the plurality of true words comprises respective words intended to be input with a virtual keyboard, wherein the at least one alternative candidate word for each true word comprises at least one word received from imperfect entry during attempted input of the true word with the virtual keyboard, and wherein, as a result of the training, the language model is trained to reward discriminating between the plurality of true words and the at least one alternative candidate word for each of the plurality of true words.
20160247542
15013681
0
1. A voice retrieval apparatus comprising: a display; a memory; and a processor that executes following processes: a voice recording process of storing recorded voices in the memory; an accepting process of accepting a retrieval term; a retrieval process of retrieving, from the recorded voices, a candidate segment where an utterance of the accepted retrieval term is estimated; a replay process of replaying voices in the candidate segment retrieved in the retrieval process; and a display control process of adding a marking to display information indicating a transition of the recorded voices in time based on the replay result of the voices in the candidate segment in the replay process, and displaying the display information with the marking on the display, the marking specifying an utterance location of the voices in the candidate segment.
9900429
14745364
1
1. A method for network recording comprising: receiving, by a processor, a request for a telephony call from a first communication device; invoking, by the processor, a rule based on an attribute of the telephony call, wherein the rule identifies a condition for recording the call; determining, by the processor, whether the condition for recording the call is satisfied; in response to determining that the condition for recording the call is satisfied, establishing, by the processor, a first call path with a recording system instead of a second call path, wherein the recording system is configured to receive media transmitted by the first communication device via the first call path instead of the second call path, bridge a media path between the first communication device and a second communication device, and record the media in a storage device; and in response to determining that the condition for recording the call is not satisfied, establishing, by the processor, the second call path with the second communication device without establishing the first call path, wherein the media transmitted by the first communication device is for being received via the second call path instead of the first call path.
9135348
12624182
1
1. A computer implemented method of profiling a user of a computing device connected to a network, the method comprising: receiving a user event comprising a content identifier for indicating web content requested by the user and a user identifier; when the content identifier is not present in a cached web map, sending the content identifier to a modeling system which performs a mapping function of a location associated with the content identifier and determines classification information of the content identifier, wherein the classification information is added to the cached web map; accessing the classification information from the cached web map, stored remotely from the user, using the content identifier of the user event, the cached web map associating a plurality of content identifiers each with respective classification information, the classification information associating a score with a text string defined in a lexical ontology comprising a hierarchy of categories, the score representing a strength of association of the respective text string to web content associated with the respective content identifier; accessing a user profile associated with the user identifier of the user event, the user profile associating one or more scores with one or more respective categories from the hierarchy of categories, each score of the user profile providing an indication of user preference for an associated category; and updating scores of the user profile based on the classification information associated with the content identifier by applying profiling rules generated from a plurality of user events of one or more users, the profiling rules being provided by the modeling system for modeling user behavior from the plurality of user events to predict user preferences, wherein the profiling rules are generated by: aggregating the plurality of user events; periodically modeling user behavior based on the aggregated plurality of user events independently of user identifiers of the aggregated plurality of user events; and receiving updated profiling rules in response to the modeled aggregated plurality of user events, the updated profiling rules used for updating scores of user profiles.
9466009
14565342
1
1. An object data processing system comprising: at least one processor configured to execute: a plurality of diverse recognition modules stored on at least one non-transitory computer-readable storage medium, each recognition module comprising at least one recognition algorithm and having feature density selection criteria wherein the feature density selection criteria include rules that operate as a function of features per unit volume representing pixels squared times a depth of field; and a data preprocessing module executed by at least one processor, the data preprocessing module comprising an invariant feature identification algorithm and configured to: obtain a digital representation of a scene; generate a set of invariant features by applying the invariant feature identification algorithm to the digital representation; cluster the set of invariant features into regions of interest in the digital representation of the scene, each region of interest having a region feature density; assign each region of interest at least one recognition module from the plurality of diverse recognition modules as a function of the region feature density of each region of interest and the feature density selection criteria of the plurality of diverse recognition modules; and configure the assigned recognition modules to process their respective regions of interest.
9372925
14032145
1
1. A method comprising: receiving user selection of two audio samples including a first audio sample and a second audio sample; obtaining metadata corresponding to the two audio samples, the metadata corresponding to an audio sample describing characteristics of the audio sample; and combining, in response to a user request to combine the samples, the first and second audio samples by automatically adjusting a rhythm of at least one of the first audio sample and the second audio sample to increase rhythmic coherence of the first audio sample and the second audio sample, and automatically adjusting a pitch of at least one of the first audio sample and the second audio sample to increase harmonic coherence of the first audio sample and the second audio sample, the combining resulting in a set of samples that includes the first audio sample and the second audio sample, and metadata corresponding to the set of samples describing characteristics of the set of samples.
20110142320
12968492
0
1. A method for providing automatic diagnosis and decision support in whole-body imaging, comprising: using whole-body imaging to obtain a first set of image data of a patient; fitting a statistical whole-body atlas using the first set of image data, wherein the statistical whole-body atlas includes at least one of statistics on voxel intensities, statistics on global and local shape deformations, or statistics on joint articulations; and using the statistical whole-body atlas to characterize pathological findings in terms of a diagnosis.
20110231748
13022874
0
1. A method of viewing information associated with data in a spreadsheet, comprising: providing a document including data and information associated with the data; parsing the document to retrieve the associated information; processing the associated information to break the associated information down into at least one sentence; categorizing the at least one sentence to determine whether the at least one sentence corresponds to at least one category in a taxonomy corresponding to the data; assigning an association strength to the categorized at least one sentence, the association strength indicating a likelihood that the categorized at least one sentence actually corresponds to the at least one category in the taxonomy; filtering the at least one categorized sentence based on the association strength to determine whether to match the categorized at least one sentence with the at least one category in the taxonomy; and outputting only the categorized at least one sentence matched with the at least one category in the taxonomy.
20150067657
14010737
0
1. A method, comprising: replacing, in one or more initial source code files, each reference to a first function configured to convey system messages with a respective reference to a second function configured to convey the system messages, thereby producing respective corresponding preprocessed source code files for the one or more initial source code files; compiling the respective corresponding preprocessed source code files, thereby creating an executable file; and while executing the executable file: receiving, by a processor, a call to the second function, the call comprising a text string; identifying a name of one of the respective corresponding preprocessed source code files storing the call to the second function; determining, based on the identified name and the text string, a computed destination for the text string; and conveying the text string to the computed destination.
8046225
12068600
1
1. A prosody-pattern generating apparatus comprising: an initial-prosody-pattern generating unit that generates an initial prosody pattern based on language information and a prosody model which is obtained by modeling prosody information in units of phonemes, syllables and words that constitute speech data; a normalization-parameter generating unit that generates, as normalization parameters, mean values and standard deviations of the initial prosody pattern and a prosody pattern of a training sentence included in a speech corpus, respectively; a normalization-parameter storing unit that stores the normalization parameters; and a prosody-pattern normalizing unit that normalizes a variance range or a variance width of the initial prosody pattern, bringing the variance range or the variance width of the initial prosody pattern to the same level as a variance range or a variance width of the prosody pattern of the training sentence in the speech corpus in accordance with the normalization parameters.
20160078149
14484489
0
1. A method for verifying factual assertions in natural language, the method comprising: monitoring a natural language input; detecting a factual assertion in the natural language input; verifying the factual assertion in the natural language input; and outputting a notification to a user of a result of the verification.
20170245127
15052483
0
1. A specifically programmed mobile communication computer system, with at least one specialized communication computer machine including artificial intelligence expert system decision making electronic capability, comprising: at least one RF multicast transceiver for receiving multicast information transmissions, a non-transient memory having at least one portion for storing data and at least one portion for storing particular computer executable program code; and at least one processor for executing the particular program code stored in the memory, wherein the particular program code is configured to at least perform the following operations upon the execution: electronically receiving, by the specifically programmed mobile communication computer system, RF multicast addressed signals from a content provider, wherein the RF multicast addressed signals include information descriptive of objects potentially of interest to a user of the specifically programmed mobile communication computer system; electronically determining, by the specifically programmed mobile communication computer system whether the received multicast RF addressed signal is intended for reception by the specifically programmed mobile communication computer system; wherein the specifically programmed mobile communication computer system is configured to electronically decode the received multicast RF signal to determine information parameters about an object of potential interest to the user; wherein the information parameters about an object of potential interest to the user comprise one or more of the following: i) at least one first information parameter identifying object type, ii) at least one second information parameter identifying object description, iii) at least one third information parameter identifying object price, and iv) at least one fourth information parameter identifying object location; electronically determining, by the specifically programmed mobile communication computer system the location of the specifically programmed mobile communication computer system; wherein the specifically programmed mobile communication computer system is configured to electronically determine the location of the specifically programmed mobile communication computer system by using a location sensor; wherein the computer-implemented method does not require the content provider's determination or knowledge of the location of a user of the specifically programmed mobile communication computer system; performing, by the specifically programmed mobile communication computer system, artificial intelligence expert system operations comprising at least the following: electronic comparative analysis of the received object information with object information prestored in said specifically programmed mobile communication computer system descriptive of the user's level of interest in that object; electronic generation of results of comparisons of the received object information with object information prestored in said specifically programmed mobile communication computer system descriptive of the user's level of interest in that object; electronically comparing the location of said specifically programmed mobile communication computer system to the location of where the object may be obtained; and, electronic generation of an electronic communication by the specifically programmed mobile communication computer system to the user of the specifically programmed communication computer system; wherein said electronic communication includes an advisory action index to advise the user of the specifically programmed mobile communication computer system of specific artificial intelligence expert system derived advice of recommended user actions concerning the object, and wherein said advisory action index is based on an artificial intelligence expert system evaluation of a combination of the user's level of interest in the object, the relative locations of the user and the object of interest and the information parameters determined by the specifically programmed mobile communication computer system.
20020133248
10093069
0
1. A data structure of configuration information, comprising: an audio buffer identifier to uniquely identify an audio buffer when the audio buffer is instantiated according to the configuration information; an audio buffer type identifier to identify a type of the audio buffer; one or more logical bus identifiers to uniquely identify one or more logical buses that correspond to the audio buffer, an individual logical bus configured to stream audio data to the audio buffer when the audio buffer is instantiated.
20090070115
12192510
0
1. A speech synthesis system for synthesizing speech from text, comprising: a speech segment database for storing data of speech segments having prosody information; means for entering a text to be speech-synthesized; means for determining a speech segment sequence corresponding to the input text from the speech segment database so as to minimize a cost including at least a frequency slope likelihood cost on the basis of a statistical model of prosody variations; means for determining prosody modification values so as to minimize a cost including at least the frequency slope likelihood cost and a prosody modification cost on the basis of the statistical model of prosody variations regarding the determined speech segment sequence; and means for applying the determined prosody modification values to the determined speech segment sequence.
10083688
14838331
1
1. A non-transitory computer-readable storage medium storing one or more programs, the one or more programs comprising instructions for voice control of displayed content, which when executed by one or more processors of an electronic device, cause the electronic device to: receive a first spoken user input; obtain a first text string based on the first spoken user input; derive a representation of a first user intent based on the first text string, wherein the first user intent is derived based on a degree of match between the first text string and one or more words associated with a first predefined domain; determine whether a task associated with one or more displayed affordances may be identified based on the representation of the first user intent based on the first text string; in accordance with a determination that a task may be identified based on the representation of the first user intent based on the first text string, perform the task associated with the one or more displayed affordances; and in accordance with a determination that a task may not be identified based on the representation of the first user intent: highlight one or more of the displayed affordances; receive a second spoken user input corresponding to an affordance of the one or more affordances; obtain a second text string based on the second spoken user input; derive a representation of a second user intent based on the second text string, wherein the second user intent is derived based on a degree of match between the second text string and one or more words associated with a second predefined domain; determine whether a task may be identified based on the representation of the second user intent based on the second text string; and in accordance with a determination that a task may be identified based on the representation of the second user intent based on the second text string, select the affordance of the one or more affordances.
9256830
14596204
1
1. An apparatus comprising: a heuristic model configured to generate estimated deformation data for a structure based on input strain data; and a trainer configured to identify training deformation data and training strain data for each training case in a plurality of training cases, train the heuristic model using the training deformation data and the training strain data identified for the each training case in the plurality of training cases such that the heuristic model generates the estimated deformation data for the structure based on the input strain data in which the estimated deformation data has a desired level of accuracy, and receive the training strain data for the each training case in the plurality of training cases from a sensor system associated with the structure in which the sensor system comprises a plurality of sensors positioned at a plurality of points on the structure in which the plurality of sensors is configured to generate a plurality of strain measurements for the plurality of points on the structure.
20100204982
12367131
0
1. A computer-implemented method in a dialog system, comprising: defining a set of one or more grammar rules; labeling each grammar rule of the set of grammar rules with semantic or syntactic characteristics to produce labeled grammar rules; and generating labeled sentences from the set of labeled grammar rules.
4799191
06840660
1
1. A system for checking a spelling of an input English noun word comprising: an input means for inputting an English word; a dictionary memory for storing a plurality of English words which are classified into a first group and a second group, said first group including noun words and said second group including non-noun words, the noun words being stored in a specified memory region storing only noun words in the dictionary memory, said memory region being divided into four blocks including a first block of words whose plural forms can have a common suffix "s"; a second block of words whose plural forms can have a common suffix "es"; a third block of other words of singular form; and a fourth block of words each of which is a plural form of each word in the third group; a search means for searching whether or not an input word exists in the dictionary memory; a suffix judgment means for judging whether or not an input word which does not exist in the dictionary memory has a possessive suffix and, if the input word has a possessive suffix, for searching through the first group in the dictionary memory to find, in cooperation with the search means, word data coincident with the input word without the possessive suffix; and an output means for displaying the input word and indicating whether or not the input word is spelled correctly.
9318112
14181345
1
1. A computer implemented method comprising: providing a text-to-speech prompt for output; receiving, at a processing system, particular audio data encoding (i) at least a portion of the text-to-speech prompt and (ii) a user utterance; providing the particular audio data to an additional audio activity detector comprising a model that is trained, using training audio data comprising text-to-speech prompts, to identify whether given audio data comprises additional audio other than a text-to-speech prompt; receiving, from the additional audio activity detector, data indicating that the particular audio data comprises additional audio other than the text-to-speech prompt; and in response to receiving the data indicating that the particular audio data comprises additional audio other than the text-to-speech prompt, initiating a reduction in an audio output level of the text-to-speech prompt.
20130097185
13271696
0
1. A computer-implemented method for generating context specific terms comprising: obtaining a collection of terms from at least one electronic file associated with a given context; comparing the collection of terms with a collection of expected terms to generate candidate terms that are not in the collection of expected terms; determining a relevance for each of the candidate terms; and determining whether to add a given candidate term to a collection of context specific terms for the given context if the relevance for the given candidate term is above a threshold.
10134400
14084974
1
1. A method of diarization of audio files, the method comprising: receiving a plurality of audio files from a database server and speaker metadata associated with each of the plurality of audio files from the database server of the plurality of audio files; identifying, with a processor, a subset of audio files from the plurality belonging to a specific speaker based upon the received speaker metadata, wherein each audio file is a recording of a customer service interaction and the specific speaker is a customer service agent and there is at least one other speaker in the audio file, wherein the at least one other speaker is not the identified specific speaker; selecting a subset of the audio files belonging to the specific speaker of the identified set of audio files with the processor; wherein each audio file of the subset is selected to maximize an acoustical difference in voice frequencies between the specific speaker and the at least one other speaker in the same audio file, wherein the audio file is a recording of a customer service interaction and the specific speaker is a customer service agent and the at least one other speaker is a customer wherein the acoustical difference is a distance between clusters identified by blind diarization between the specific speaker and the at least one other speaker in the same audio file, wherein the at least one other speaker is a customer; computing an acoustic voiceprint for the specific speaker from the selected subset of audio files with the processor by diarizing the audio files into speaker segments, clustering similar speaker segments, classifying the clustered speaker segments as belonging to the customer service agent or the customer, and building the acoustic voiceprint using the clustered speaker segments belonging to the customer service agent, wherein the acoustic voiceprint will consist of all clustered speaker segments that are a match to the customer service agent; saving the acoustic voiceprint to a voiceprint database server and associating it with the metadata of the known speaker; and with the processor, applying the saved acoustic voiceprint from the voiceprint database server to a new audio file from an audio source to identify the specific speaker in diarization of the new audio file by diarizing the new audio file into new speaker segments, comparing each new speaker segment to the acoustic voiceprint, and determining if the new speaker segment matches the acoustic voiceprint.
20040225499
10803851
0
1. A voice application creation and deployment system comprising: a voice application server for creating and serving voice applications to clients over a communication network; at least one voice portal node having access to the communication network, the portal node for facilitating client interaction with the voice applications; and an inference engine executable from the application server; characterized in that the inference engine is called during one or more predetermined points of an ongoing voice interaction to decide whether an inference of client need can be made based on analysis of existing data related to the interaction during a pre-determined point in an active call flow of the served voice application, and if an inference is warranted, determines which inference dialog will be executed and inserted into the call flow.
9305101
14607019
1
1. A non-transitory computer readable storage medium storing one or more programs configured for execution by a computer system that hosts an online media store, the one or more programs comprising instructions for: receiving, over a network, at least one search character entered at a client application on a client device; determining a set of words that match the at least one search character, each word in the determined set of words being associated with one or more digital media assets available at the online media store; based on capabilities of the client device, filtering out a particular set of words from the determined set of words that match the at least one search character; obtaining sales popularity data for the digital media assets, wherein the sales popularity data for a respective digital media asset is based on purchase data for the respective digital media asset; ordering words in the determined set of words based on the sales popularity data of corresponding digital media assets; and sending at least a subset of the ordered words to the client device for presentation in the client application.
9069768
13855906
1
1. A system for creating subgroups of documents using optical character recognition data, the system comprising: one or more processors; and a non-transitory computer readable medium storing a plurality of instructions, which when executed, cause the one or more processors to: create a matrix for words included in documents, wherein each column-row combination in the matrix indicates whether a corresponding word that is associated with the column-row combination is included in a corresponding document that is associated with the column-row combination; identify distances between pairs of the words in the matrix, wherein each distance is based on a number of the documents that differ in including a corresponding pair of the words; create word clusters, wherein each word cluster comprises pairs of words associated with a corresponding distance less than a distance threshold; create sets of word clusters, wherein a set of word clusters comprises word clusters that are not associated with any of the documents associated with other word clusters in the set of word clusters; and create subgroups of the digitized documents based on a set of word clusters corresponding to a high word score relative to at least one other word score corresponding to at least one other set of word clusters.
9558275
13713197
1
1. A computer system, comprising: one or more processors; and one or more computer-readable media having stored thereon computer-executable instructions that are executable by the one or more processors, and that configure the computer system to provide an action frame utilizing functionality from a plurality of different network-connected computer-executable applications, including computer-executable instructions that configure the computer system to perform at least the following: identify a plurality of different network-connected computer-executable applications that are accessible to the computer system; generate an action catalog identifying, for each of the plurality of different computer-executable applications, one or more corresponding actions, including, for at least a particular computer-executable application of the plurality of different computer-executable applications: parsing one or more descriptive texts corresponding to the particular computer-executable application, to identify at least one particular action that may be provided by the particular computer-executable application; and populating the action catalog with the at least one particular action in association with the particular computer-executable application; based at least on having generated the action catalog, generate an action frame for each of the one or more actions, each action frame identifying how to invoke a fillable form at a corresponding computer-executable application to carry out a corresponding action, including, for the at least one particular action: identifying at least one fillable form of the particular computer-executable application for carrying out the at least one particular action, the at least one fillable form including one or more parameters for receiving user-supplied values to use as part of carrying out the at least one particular action; extracting the one or more parameters from the at least one fillable form; identifying at least one execution endpoint that is usable for invoking the at least one fillable form of the particular computer-executable application; and populating a particular action frame with the one or more parameters and with the at least one execution endpoint; subsequent to generating the action frame, identify a user intent to perform the at least one particular action; based at least on identifying the user intent to perform the at least one particular action, identify the particular action frame; and based at least on identifying the particular action frame, invoke the at least one execution endpoint over a network using at least one user-supplied value for at least one of the one or more parameters as input to the at least one fillable form.
7502738
11747547
1
1. A system responsive to a user generated natural language speech utterance, comprising: an agent architecture that includes a plurality of domain agents, each of the plurality of domain agents being an autonomous executable configured to receive, process, and respond to requests associated with a respective context; a parser configured to determine a context for one or more keywords contained in the utterance and to determine a meaning of the utterance based on the determined context, wherein the parser selects at least one of the plurality of domain agents based on the determined meaning, wherein the selected domain agent is configured to receive, process, and respond to requests associated with the determined context; an event manager configured to coordinate interaction between the parser and the agent architecture; and an update manager that enables the user to purchase one or more domain agents from a third party on a one-time or subscription basis.
9679258
14097862
1
1. A method of reinforcement learning, the method comprising: obtaining training data relating to a subject system being interacted with by a reinforcement learning agent that performs actions from a set of actions to cause the subject system to move from one state to another state; wherein the training data comprises a plurality of transitions, each transition comprising respective starting state data, action data and next state data defining, respectively, a starting state of the subject system, an action performed by the reinforcement learning agent when the subject system was in the starting state, and a next state of the subject system resulting from the action being performed by the reinforcement learning system; and training a second neural network used to select actions to be performed by the reinforcement learning agent on the transitions in the training data and, for each transition, a respective target output generated by a first neural network, wherein the first neural network is another instance of the second neural network but with possibly different parameter values than those of the first neural network; and during the training, periodically updating the parameter values of the first neural network from current parameter values of the second neural network, wherein the state data and the next state data in each transition are image data.
20150161988
14528638
0
1. A method for training a deep neural network, comprising: receiving and formatting speech data for the training; performing Hessian-free sequence training on a first subset of a plurality of subsets of the speech data; and iteratively performing the Hessian-free sequence training on successive subsets of the plurality of subsets of the speech data; wherein iteratively performing the Hessian-free sequence training comprises reusing information from at least one previous iteration; and wherein the receiving, formatting, performing and iteratively performing steps are performed by a computer system comprising a memory and at least one processor coupled to the memory.
9406019
13757013
1
1. A computer-implemented method comprising: receiving initial training data, the initial training data comprising initial training records, each initial training record identifying input data as input and a category as output; generating first intermediate training records by inputting input data of a first subset of the initial training records to a first trained predictive model, the first trained predictive model generated using at least a second subset of the initial training records and a training function, each first intermediate training record having a first score; generating second intermediate training records by inputting input data of the second subset of the initial training records to a second trained predictive model, the second trained predictive model generated using the training function and at least the first subset of the initial training records, each second intermediate training record having a second score; and generating, for the first trained predictive model and the second trained predictive model, a score normalization model using a score normalization training function, the first intermediate training records, and the second intermediate training records.
8275612
12081409
1
1. A method of detecting noise comprising: receiving an input of a voice frame and converting the voice frame into a filter bank vector; converting the converted filter bank vector into band data; calculating a weight Gaussian mixture model (GMM) for each band by using the converted band data and filter bank order; and detecting noise in the voice frame based on the calculation result wherein in the converting of the converted filter bank vector into band data, the filter bank vectors for the entire frequency bands of the voice frame are converted into data for respective bands.
20130283208
13849514
0
1. A method, comprising: presenting, by a computer, multiple interactive items on a display coupled to the computer; receiving an input indicating a direction of a gaze of a user of the computer; selecting, in response to the gaze direction, one of the multiple interactive items; subsequent to selecting the one of the interactive items, receiving a sequence of three-dimensional (3D) maps containing at least a hand of the user; analyzing the 3D maps to detect a gesture performed by the user; and performing an operation on the selected interactive item in response to the gesture.
8131547
12544576
1
1. A method for automatic segmentation of speech to generate a speech inventory, the method comprising: initializing, via a processor, a Hidden Markov Model (HMM) using seed input data; performing a segmentation of the HMM into speech units to generate phone labels; correcting, via the processor, the segmentation of the speech units by performing the steps: re-estimating the HMM based on a current version of the phone labels; embedded re-estimating of the HMM; and updating the current version of the phone labels using spectral boundary correction.
9324317
14481326
1
1. A method comprising: receiving a gesture from a user during a presentation of media content, wherein the gesture comprises a metadata request associated with the media content; selecting a piece of metadata for output, to yield selected metadata, the selected metadata being responsive to the metadata request regarding the primary media content; and outputting the selected metadata as synthetically generated speech, the synthetically generated speech having an accent selected from a plurality of accents based on the selected metadata.
8438032
11621347
1
1. A method of tuning synthesized speech, said method comprising: synthesizing user supplied text to produce synthesized speech by a text-to-speech engine; maintaining state information related to said synthesized speech; receiving a user modification of duration cost factors associated with said synthesized speech to change the duration of said synthesized speech, including modifying a search of speech units when the text is re-synthesized to favor shorter speech units in response to user marking of any speech units in the synthesized speech as too long and modifying the search of speech units to favor longer speech units in response to user marking of any speech units in the synthesized speech as too short; receiving a user modification of pitch cost factors associated with said synthesized speech to change the pitch of said synthesized speech; receiving a user indication of segments of the user supplied text and/or the synthesized speech to skip during re-synthesis of said speech; displaying a waveform associated with said synthesized speech and receiving user manipulations of the waveform; and re-synthesizing said speech based on said user supplied text, said user modified duration cost factors, said user modified pitch cost factors, said user indicated segments to skip and said user manipulations of the waveform.
20070011150
11427165
0
1. A computer-implemented method of processing a geotext query, said method comprising: receiving a first free-text query string from a user; and decomposing the first free-text query into a non geographic query and a geographic query, wherein the nongeographic query is a second free-text query string derived from the first free-text query string and the geographic query is a geographical location description.
8583422
12723472
1
1. A processor-implemented method for automatic labeling of natural language text, the method comprising: receiving text from at least one natural language document in electronic form; performing, using a processor, a basic linguistic analysis of the text that includes recognizing cause-effect relationships in the text and generating cause-effect labels for words or phrases in the text that form part of the cause-effect relationships; matching the linguistically analyzed text and the generated cause-effect labels against stored target semantic relationship patterns, wherein the stored target semantic relationship patterns generically describe semantic relationships between words or phrases, the stored target semantic relationships being derived in part from cause-effect relationships between words or phrases; producing additional semantic relationship labels for the linguistically analyzed text based on the matching of the linguistically analyzed text and the generated cause-effect labels against the stored target semantic relationship patterns, wherein the additional semantic relationship labels are tagged to words or phrases from sentences within the linguistically analyzed text in order to identify semantic relationships between those words or phrases by identifying those words or phrases as components of semantic relationships of the stored target semantic relationship patterns; and storing the linguistically analyzed text and the additional semantic relationship labels in a non-transitory storage medium.
8990092
13582950
1
1. A voice recognition device comprising: a voice recognition unit for carrying out voice recognition on an inputted voice; a voice recognition dictionary in which each word which is spoken by the inputted voice and recognized as a result of the voice recognition on the inputted voice is registered; a reply voice data storage unit for storing recorded voice data of each word spoken by the inputted voice which is registered in said voice recognition dictionary; a dialog control unit for, when said voice recognition unit voice recognizes a word which is registered in said voice recognition dictionary, acquiring recorded voice data corresponding to the word from said reply voice data storage unit; a reproduction noise reduction unit for carrying out a process of reducing noise included in the recorded voice data which are acquired from said reply voice data storage unit by said dialog control unit; an amplitude adjusting unit for adjusting an amplitude of said recorded voice data in which the noise has been reduced by said reproduction noise reduction unit to a predetermined sound amplitude level; and a voice reproduction unit for reproducing a voice from the recorded voice data for reproduction which are outputted from said amplitude adjusting unit.
9460360
14809302
1
1. A method for training a classifier, the method comprising: receiving a predetermined threshold distance; selecting, with a processor, a plurality of training samples from an atlas image having at least one pre-identified structure of interest, wherein the atlas image includes a plurality of image data points, wherein the plurality of training samples are randomly selected from image data points located within the predetermined threshold distance from a contour of the structure of interest; determining, with the processor, a set of image attributes associated with each selected training sample; and applying, with the processor, the selected training samples and the image attributes associated with the selected training samples to a machine-learning algorithm to generate a structure classifier, the structure classifier being configured to determine a structure of interest in a subject image.
10037712
14609874
1
1. A vision-assist device configured to be worn by a user, comprising: at least one image sensor for generating image data corresponding to a scene; a processor, wherein the processor is programmed to: receive the image data from the at least one image sensor; perform object recognition on the image data to determine a classification of a detected object that is present within the scene; determine a confidence value with respect to the classification of the detected object, wherein the confidence value is based on a confidence that the classification of the detected object matches an actual classification of the detected object; and generate an auditory signal based on the confidence value; and an audio device for receiving the auditory signal from the processor and for producing an auditory speech message configured to be heard by the user from the auditory signal, wherein the auditory speech message is indicative of the classification of the detected object and a qualifying statement associated with a sub-increment comprising the confidence value.
20130046539
13210471
0
1. A method for automatic speech recognition, wherein the method comprises: obtaining at least one language model word and at least one rule-based grammar word; determining an acoustic similarity of at least one pair of language model word and rule-based grammar word; and increasing a transition cost to the at least one language model word based on the acoustic similarity of the at least one language model word with the at least one rule-based grammar word to generate a modified language model for automatic speech recognition; wherein at least one of the steps is carried out by a computer device.
20130138422
13304983
0
1. A method for delivering an announcement in one or more languages, the method comprising the steps of: a computer receiving input representative of audio from one or more human speakers speaking in one or more natural languages; the computer processing the input to identify the one or more natural languages being spoken; the computer determining a relative proportion of each of the identified one or more natural languages; the computer determining one or more natural languages in which to deliver the announcement based, at least in part, on the determined relative proportion of each of the identified one or more natural languages; and the computer causing to be delivered the announcement in the determined one or more natural languages.
8543383
13284111
1
1. A method comprising: generating a first finite-state automaton from a set of rules associated with a context-free grammar; generating a second finite-state automaton based on the first finite-state automaton, wherein the second finite-state automaton defines a delayed acceptor for a plurality of non-terminal symbols of the context-free grammar; generating a third finite-state automaton associated with a topology of the context-free grammar as the context-free grammar is applied to an input string of symbols, wherein the topology defines an application of the context-free grammar; and modifying, via a processor, the third finite-state automaton by: identifying, for each edge of a plurality of edges of the third finite-state automaton, a non-terminal symbol of the plurality of non-terminal symbols; and replacing the each edge of the plurality of edges of the third finite-state automaton with an edge of the second finite-state automaton based on the non-terminal symbol for each edge.
20150134304
14076106
0
1. A method executed at least in part in a computing device to provide a hierarchical, feature based statistical model for behavior prediction and classification, the method comprising: constructing a hierarchical, feature based model for predicting subscriber interactions with the communication system based on community and personal parameters; determining one or more community parameters associated with a subscriber of a communication system; determining one or more personal parameters associated with the subscriber; and generating one or more personalized predictions for one or more interactions with the communication system using the model.
20140199664
14216002
0
1. A computer-implemented method of providing cybersecurity training to a user of an electronic device, comprising, by one or more processors: accessing identifying information relating to an electronic device; selecting a mock attack situation that corresponds to the electronic device; causing the mock attack situation to be delivered to a user of the electronic device via the electronic device in the user's regular context of use of the electronic device; sensing an action of the user in response to the mock attack situation; using the sensed action to determine whether the user should receive a training intervention; and determining that the user should receive a training intervention, and in response selecting a training intervention from a set of at least one training intervention and delivering the selected training intervention to the user.
20150051910
13969825
0
1. A natural language understanding system using at least one hardware implemented computer processor for automatic unsupervised clustering of dialog data from a natural language dialog application, the arrangement comprising: a log parser configured to extract structured dialog data from application logs; a dialog generalizing module configured to automatically generalize the extracted dialog data using different independent generalization methods to produce generalization identifier vectors aggregating the results of the generalization methods used; and a data clustering module configured to automatically cluster the dialog data based on the generalization identifier vectors using an unsupervised density-based clustering algorithm without a predefined number of clusters and without a predefined distance threshold.
9940577
14793157
1
1. One or more non-transitory computer storage media storing computer-useable instructions that, when used by a computing device, cause the computing device to perform a method for finding semantic parts in images, the method comprising: applying a convolutional neural network (CNN) to a set of images, the CNN detecting features for each image, each image being defined by a feature vector; clustering a subset of the set of images in accordance with a similarity between feature vectors; generating a plurality of part proposals, the plurality of part proposals comprising parts at various locations and of various sizes for an image of the subset of images; and associating, via information gain matching, a label with at least one of the parts for the image.
8428944
11745029
1
1. A method of speech recognition comprising: prompting a user to provide a first utterance comprising a phrase; recording the first user utterance; performing speech recognition on the first user utterance and generating a first recognition result using a speech recognition system, wherein the first recognition result comprises a first confidence value that represents an expected accuracy of the speech recognition on the first user utterance; in response to a determination that the first confidence value is less than a threshold value, re-prompting the user to provide a second utterance comprising the same phrase as the first utterance; recording the second user utterance; performing speech recognition on the second user utterance and generating a second recognition result using the speech recognition system, wherein the second recognition result comprises a second confidence value that represents an expected accuracy of the speech recognition on the second user utterance; in response to a determination that the second confidence value is less than a threshold value, detecting a plurality of acoustic differences between the first and second user utterances; compensating the speech recognition system by adjusting a plurality of adjustable parameters to compensate for the plurality of acoustic differences; and performing compensated speech recognition on the first and/or the second user utterances, wherein the performing compensated speech recognition comprises: performing speech recognition on a selected one of the first user utterance and the second user utterance and generating a compensated speech recognition result for the selected utterance using the compensated speech recognition system, wherein the first user utterance is selected when the first confidence value is higher than the second confidence value, and wherein the second user utterance is selected when the second confidence value is higher than the first confidence value; or performing speech 
recognition on the first user utterance to generate a first compensated result having a first compensated confidence value using the compensated speech recognition system, performing speech recognition on the second user utterance to generate a second compensated result having a second compensated confidence value using the compensated speech recognition system, selecting the first compensated result as the compensated speech recognition result when the first compensated confidence value is greater than the second compensated confidence value, and selecting the second compensated result as the compensated speech recognition result when the second compensated confidence value is greater than the first compensated confidence value.
20150139329
14607951
0
1. A moving picture coding device adapted to code moving pictures in units of blocks obtained by partitioning each picture of the moving pictures, comprising: a motion vector predictor candidate generation unit configured to derive one or more motion vector predictor candidates from motion vectors of coded prediction blocks neighboring a prediction block subject to coding within the same picture as the prediction block subject to coding, and to add the derived motion vector predictor candidates in a motion vector predictor candidate list; a motion vector predictor selection unit configured to select a motion vector predictor from the motion vector predictor candidate list; a motion vector difference derivation unit configured to derive a motion vector difference by subtracting the selected motion vector predictor from the motion vector; and a coding unit configured to code index information indicating the motion vector predictor candidate selected in the motion vector predictor candidate list and the motion vector difference, wherein the motion vector predictor candidate generation unit determines, for the purpose of obtaining a predetermined number of motion vector predictor candidates, which of the coded prediction blocks provides the motion vector from which to derive the motion vector predictor candidate, such that the motion vector predictor candidate generation unit processes, in a predetermined order, prediction blocks in a block group neighboring to the left and in a block group neighboring above, said processing being done according to conditions 1 and 2 below in the stated order and then according to conditions 3 and 4 below in the stated order, condition 1: there is found a motion vector that is predicted by using the same reference list and the same reference picture as that of the motion vector predictor subject to derivation in the prediction block subject to coding; condition 2: there is found a motion vector that is predicted by using a reference 
list different from that of the motion vector predictor subject to derivation in the prediction block subject to coding and using the same reference picture as that of the motion vector predictor subject to derivation in the prediction block subject to coding; condition 3: there is found a motion vector that is predicted by using the same reference list as that of the motion vector predictor subject to derivation in the prediction block subject to coding and using a reference picture different from that of the motion vector predictor subject to derivation in the prediction block subject to coding; and condition 4: there is found a motion vector that is predicted by using a reference list different from that of the motion vector predictor subject to derivation in the prediction block subject to coding and using a reference picture different from that of the motion vector predictor subject to derivation in the prediction block subject to coding.
20040064316
10672767
0
1. A method for analyzing verbal communication, the method comprising acts of: (A) producing an electronic recording of a plurality of spoken words; (B) processing the electronic recording to identify a plurality of word alternatives for each of the spoken words, each of the plurality of word alternatives being identified by comparing a portion of the electronic recording with a lexicon, each of the plurality of word alternatives being assigned a probability of correctly identifying a spoken word; (C) loading the word alternatives and the probabilities to a database for subsequent analysis; and (D) examining the word alternatives and the probabilities to determine at least one characteristic of the plurality of spoken words.