FAQ
― AmiVoice, AI Speech Recognition Technology
Q: What are the features of AmiVoice?
AmiVoice can be used for various purposes, fields, and environments.
It is compatible with many platforms including mobile phones and PCs.
It is compatible with both server-type and stand-alone type recognition methods.
Acoustic models include telephone speech, in-car speech, and speech in noisy environments such as factories, while language models cover medical document creation, call center call records, daily report creation, and more, making AmiVoice suitable for many industries and usage scenarios.
If this answer does not solve your problem, please contact us.
Q: Can it recognize anyone's voice?
It can recognize anyone's voice. The recognition rate varies depending on the clarity of your pronunciation, speaking style, and the volume of your voice.
Q: Can it handle differences in intonation and dialects?
It accommodates differences in intonation and speed of speech. However, dialects that are not included in the dictionary will need to be newly registered.
Q: Why are there many misrecognitions when I speak word by word?
AmiVoice outputs recognition results by looking at the preceding and following words, so if you break the utterance in the middle of a sentence, it may cause misrecognition. Try to speak in one sentence units as much as possible.
Q: How can I improve the recognition rate when using voice input?
Use the tips below.
- Speak clearly, in a clear voice
- Speak in units of one sentence, up to a punctuation mark
- Speak in multiple phrases at a time
* When multiple phrases are entered, Japanese context analysis works more effectively.
* All sound input from the microphone is analyzed as Japanese sentences.
* Restarting mid-sentence and filler words such as "um" or "hmm" are treated as words in the context.
Q: How does kanji conversion work?
The most probable word sequence, calculated using the acoustic model and the language model, is output as the recognition result, which contains both kanji and kana obtained directly from the speech. Speech recognition does not first recognize hiragana and then convert it to kanji.
Q: Can hiragana be input one character at a time?
Recognition of individual hiragana characters is not in practical use because there are many instances of misrecognition.
― AmiVoice API
Q: How does this differ from other companies' cloud speech recognition services?
The main differences between our cloud speech recognition services and those of other companies are as follows:
- We provide a speech recognition engine that is strong in "Japanese" and "technical terminology."
- It is designed for B2B use, and is trained to remove inappropriate words that are not used in business.
- You are only charged for the speech sections that are targeted for speech recognition. This allows you to keep costs down compared to other companies that charge for silent periods as well.
- For any technical questions, our dedicated engineers will respond carefully in Japanese.
We have also summarized the differences from other companies' cloud speech recognition services in a price and feature comparison table of the top five providers, which is useful when selecting a speech recognition API. Please see the details on our website.
Q: What is the difference between "Conversation_General Purpose" and "Voice Input_General Purpose"?
Conversation is an engine that excels at natural speech between people, such as in face-to-face meetings, conferences, and web conferences.
Voice Input is an engine that is strong at voice input to PCs and smartphones, such as voice commands and composing emails.
Q: What is the difference between the general-purpose engine and the industry-specific (domain-specific) engines?
The general-purpose engine is a speech recognition engine that has been trained with a wide range of commonly used words and can be used for a variety of purposes.
Engines for specific industries (domain-specific engines), such as medical, pharmaceutical, insurance, and finance, are trained mainly on the specialized terms and phrases of that industry. They can recognize industry-specific terms with high accuracy that general-purpose engines cannot.
Q: How long does voice recognition processing take?
Recognition is processed in real time, and the recognition results are returned as soon as the speech is finished.
*Delays may occur due to network influences and congestion.
With synchronous and asynchronous HTTP speech recognition APIs, conversion to text will be completed within 1x the duration of the audio sent, at the latest.
For example, if you have an hour of audio, the text can be converted within an hour.
* With the asynchronous HTTP speech recognition API, a speech recognition server is started separately after a request is received, so it takes about one minute for recognition to begin.
Q: Will accuracy improve automatically?
With the AmiVoice API, the conversation engine learns new words every day.
Other engines are updated once a month.
* The engine does not learn per user.
Q: Recognition results are returned, but they are not very accurate. Why?
There are various possible causes for this, including environmental noise, the speaker's pronunciation, microphone performance, recording volume, and the engine not learning the words spoken.
If you feel the accuracy is poor, please check the recorded audio.
If the sound quality is good but your voice is still misrecognized, try registering words.
Q: Does recognition accuracy vary with the audio format?
Audio compressed with a lossy compression codec will be recognized less accurately than uncompressed audio.
The degree of degradation depends on the codec and the compression rate.
Q: What languages are supported?
AmiVoice API provides engines for Japanese, English, Chinese, and Korean, and a Thai engine is also available through AmiVoice API Private.
If you are interested, please contact us via the inquiry form.
Q: The Chinese engine is described as a "model targeting standard mainland Chinese." What spoken language does it cover, and in what script are results output?
It supports Mandarin Chinese and outputs recognition results in Simplified Chinese.
Q: Please tell me the differences between the three types of API and their respective features.
The differences and features are as follows:
1. Synchronous HTTP Speech Recognition API
It is suitable for uploading short audio files. The maximum audio data size that can be sent at one time is 16MB.
2. Asynchronous HTTP speech recognition API
Suitable for uploading long audio files. The maximum amount of audio data that can be sent at one time is 2.14GB. Even long audio files can be quickly converted to text. Speaker diarization function is available.
3. WebSocket Speech Recognition API
You can convert speech to text in real time.
For more information, see the AmiVoice API manual, "Overview".
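As a rough illustration, the choice between the three API types above can be sketched as a small helper. This is only a sketch: the size limits are the ones stated above, while the string labels are made up for this example and are not official API identifiers.

```python
def choose_api(audio_bytes: int, realtime: bool = False) -> str:
    """Pick an AmiVoice API variant for a job, following the limits above.

    The returned labels are illustrative, not official identifiers.
    """
    if realtime:
        # Streaming recognition while the audio is still being captured.
        return "websocket"
    if audio_bytes <= 16 * 1024 * 1024:
        # Short files: up to 16 MB per request.
        return "sync-http"
    # Long files: up to 2.14 GB per request; speaker diarization available.
    return "async-http"

print(choose_api(12_000_000))  # a 12 MB file fits the synchronous API
```

For example, an hour-long meeting recording of around 1 GB would go to the asynchronous API, while a live caption feed would use the WebSocket API.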
Q: What specific information is included in the JSON response?
The AmiVoice API response includes the written form, the spoken form (reading), the speech start time, the speech end time, and a confidence score for each recognized word.
For more information, please see the following page.
- Response of the synchronous HTTP speech recognition API
- Response of the asynchronous HTTP speech recognition API
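As an illustration only, a response carrying the fields listed above can be consumed like this. The exact field names and JSON layout here are assumptions made for the example; check the response pages referenced above for the real schema.

```python
import json

# A made-up response in the general shape described above.
raw = """
{
  "text": "hello world",
  "results": [{
    "tokens": [
      {"written": "hello", "spoken": "hello",
       "starttime": 120, "endtime": 480, "confidence": 0.93},
      {"written": "world", "spoken": "world",
       "starttime": 520, "endtime": 900, "confidence": 0.88}
    ]
  }]
}
"""

response = json.loads(raw)
for result in response["results"]:
    for token in result["tokens"]:
        # Each token carries its written form, reading, timing, and confidence.
        print(token["written"], token["spoken"],
              token["starttime"], token["endtime"], token["confidence"])
```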
Q: Is there a limit to how long a WebSocket connection can last?
With the AmiVoice API, when using the WebSocket speech recognition API, the maximum time a session can be maintained is 24 hours.
Regardless of whether audio is being transmitted or not, if the session maintenance time has elapsed, the server will disconnect the connection. In this case, please reconnect.
For other restrictions, please see "Limitations".
Q: Is there a limit to the size of data I can upload?
With the AmiVoice API, the maximum data size that can be uploaded at one time is 16 MB for the synchronous HTTP speech recognition API and 2.14 GB for the asynchronous HTTP speech recognition API; the WebSocket speech recognition API has no limit.
For more information, see "Limitations".
Q: Are there restrictions on the upload data format? Can I upload video files?
The AmiVoice API has restrictions on the data format that you can upload.
For details, please see "Audio format".
Video formats are not supported; you will need to extract the audio data from the video yourself before sending it.
Q: Do I need to specify the audio data's format name?
Generally, the format name must be specified, but there are exceptions where it can be omitted.
For details, see "Audio format".
Q: Is voice recognition possible in an environment without an Internet connection?
Q: How do I switch the voice recognition engine when connecting?
With the AmiVoice API, when sending audio, you need to specify which engine to send it to.
For example, if you are using the conversation_general engine, specify "-a-general", and if you are using the English_general engine, specify "-a-general-en" after "grammarFileNames=" in the d parameter.
The list of available connection engine names is displayed on My Page.
For details, see "Connection engine name (grammarFileNames)".
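For illustration, assembling the `d` parameter described above might look like the sketch below. Only `grammarFileNames` and the engine names are taken from the text above; the helper name and the space-separated key=value layout are assumptions to be checked against the manual page referenced above.

```python
def build_d_param(engine: str, **options: str) -> str:
    """Build a `d` parameter string naming the connection engine (sketch).

    `engine` is a connection engine name such as "-a-general" (conversation,
    general-purpose) or "-a-general-en" (English, general-purpose); the full
    list is shown on My Page. Extra key=value options are appended as-is.
    """
    parts = [f"grammarFileNames={engine}"]
    parts += [f"{key}={value}" for key, value in options.items()]
    return " ".join(parts)

print(build_d_param("-a-general"))     # grammarFileNames=-a-general
print(build_d_param("-a-general-en"))  # grammarFileNames=-a-general-en
```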
Q: Is there a limit to the number of words that can be registered with "Word Registration"?
Aim to register around 1000 words per profile.
For details, see "Word registration".
Q: Can multiple people share one user ID for a service?
No problem.
However, please note that if multiple people share one user ID, we cannot tally each user's usage time individually.
Q: If I want to recognize multiple audio streams at the same time, can I use the same AppKey?
The AmiVoice API does not place a limit on the number of simultaneous connections, so you can send multiple audio streams at the same time using the same AppKey.
* Multiple audio files sent at the same time are recognized in parallel. However, delays may occur depending on server congestion, so please let us know in advance if you plan to send a large volume of audio simultaneously.
Q: I would like to test the service without registering. How can I do this?
You can test the accuracy of speech recognition on the "Try out recognition accuracy" page.
If you want to try out the actual API, you need to register.
Each engine is free for 60 minutes per month, so you can test it for free within that time frame.
To apply, please use the application form.
Q: I understand that 60 minutes are free each month. Will I be notified when I exceed 60 minutes?
Q: I'd like to try it out. How do I sign up?
The AmiVoice API requires user registration even for trial use. When you apply online (user registration), the contract is completed upon your consent to the Terms of Use and SLA.
Each engine comes with 60 minutes free per month, so you can test it for free within that range.
To apply, please use the application form.
Q: I have forgotten my registered email address, user ID, and password.
Please contact us using the inquiry form.
After verifying your identity, we will process the reissue.
Q: How can I change my user ID?
You cannot change your user ID.
Q: I applied for the service but have not received the user registration email.
The email we sent may have been marked as spam in your environment.
The email for user registration will be sent from 'acp-info@amivoice.com', so please check your spam settings.
If you still have not received it, please contact us via the inquiry form.
Q: Can sole proprietors also register?
Anyone who agrees to the Terms of Use and SLA can use the service.
Q: I want to cancel my membership. How do I do so?
Q: Is there a flat-rate plan?
AmiVoice API is a pay-as-you-go system and does not offer a flat-rate plan.
However, if you use the asynchronous HTTP speech recognition API for long durations, we may be able to offer flat-rate billing. Please contact us for details.
Q: Which audio portions are chargeable?
With AmiVoice API, charges are only incurred for the parts of the audio data where voice is detected, that is, the speech sections that are the subject of voice recognition.
There is no charge for sections where no human voice is detected. However, background music, television audio, voices from adjacent seats, and some other noises may be detected as speech segments.
Q: If the audio data includes an on-hold tone, will I be charged for it?
With the AmiVoice API, telephone hold music is generally not subject to charges.
* Sections where guidance messages or similar are recognized as human voices are chargeable.
Q: How much does it cost to recognize 100 hours of audio data?
The usage fee for 100 hours is as follows:
・AmiVoice API log saving enabled
For general-purpose engines (WebSocket/synchronous HTTP): 99 yen x 100 hours = 9,900 yen (tax included)
For general-purpose engine (asynchronous HTTP): 79.2 yen x 100 hours = 7,920 yen (tax included)
For finance and insurance industry engines: 118.8 yen x 100 hours = 11,880 yen (tax included)
For medical industry engines: 148.5 yen x 100 hours = 14,850 yen (tax included)
・AmiVoice API log not saved
For general-purpose engines (WebSocket/synchronous HTTP): 148.5 yen x 100 hours = 14,850 yen (tax included)
For general-purpose engine (asynchronous HTTP): 99 yen x 100 hours = 9,900 yen (tax included)
For finance and insurance industry engines: 148.5 yen x 100 hours = 14,850 yen (tax included)
For medical industry engines: 222.75 yen x 100 hours = 22,275 yen (tax included)
However, since you are only charged for the spoken portions, the actual amount should be lower than the amounts above. For fee details, please check the price list.
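The arithmetic in the list above is simply the per-hour rate times the hours of detected speech. A minimal sketch, with rates copied from the list above (tax included):

```python
def usage_fee(rate_yen_per_hour: float, speech_hours: float) -> int:
    """Usage fee in yen: per-hour rate times hours of detected speech."""
    return round(rate_yen_per_hour * speech_hours)

# General-purpose engine, logs saved, WebSocket/synchronous HTTP:
print(usage_fee(99, 100))    # 9900 yen
# General-purpose engine, logs saved, asynchronous HTTP:
print(usage_fee(79.2, 100))  # 7920 yen
```

In practice the billed hours are only the speech sections, so the real figure is usually below these worst-case numbers.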
Q: Is there a way for a program to tally the number of seconds of speech that are chargeable?
With the AmiVoice API, the speech start time (starttime) and speech end time (endtime) can be obtained for each speech section from the returned recognition results, so it is possible to aggregate the results by adding up the differences.
You can also check the current month's usage on My Page.
* Updated every 6 hours (note that this interval may change).
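The aggregation described above can be sketched as follows. Two assumptions to verify against the response reference: starttime/endtime are taken to be in milliseconds, and the cumulative total is rounded down to whole seconds as the billing note elsewhere in this FAQ describes.

```python
def billable_seconds(segments):
    """Sum (endtime - starttime) over speech segments, in whole seconds.

    `segments` is an iterable of (starttime, endtime) pairs in milliseconds,
    as obtained from each utterance in the recognition results.
    """
    total_ms = sum(end - start for start, end in segments)
    # Round the cumulative speech time down to the second, per the billing note.
    return total_ms // 1000

print(billable_seconds([(0, 1500), (2000, 4200)]))  # 3
```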
Q: Is there a minimum charge per request?
There is no minimum usage fee per request for the AmiVoice API.
Q: I used the service a little last month but was not charged. Why?
With the AmiVoice API, if your "usage" for the current month is within the free time range for each engine, or if the usage fee for each engine is less than 10 yen, you will not be charged.
* "Usage" is the cumulative time of the "speech sections" in the audio data sent to the server (rounded down to the nearest second).
Q: What is the difference between "logs saved" and "no logs saved"?
Logs refer to voice data and recognition results.
"Log storage" is offered at a lower price than "no log storage" if you agree to the use of these logs for research, development, and quality improvement of our products and services.
If you select "No Logs," voice data and recognition results will not be saved as files on the server. All processing will be done in memory, and they will be deleted from memory once processing is complete.
For details on when logs are retained, see the question "Please tell me cases in which voice data and recognition results are stored on the server side."
Q: Please tell me cases in which voice data and recognition results are stored on the server side.
1. Asynchronous HTTP speech recognition API
From the time the job is accepted until the recognition process is completed, the voice file is temporarily stored on the storage. This voice file is automatically deleted immediately after the recognition process is completed. The recognition results are also stored on the storage for one week and can be retrieved by the user. Access to these voice recognition results is protected by the AppKey and session ID.
2. When using the WebSocket speech recognition API or synchronous HTTP speech recognition API with "log storage" enabled
The voice data and the recognition result string are temporarily stored on the server, and then moved to a safe and robust storage by late-night batch processing. Some of this voice data may be used as material for machine learning in our voice recognition engine. Data that is not used for learning will be automatically deleted after a certain period of time.
3. When using the WebSocket speech recognition API or synchronous HTTP speech recognition API with "no logs saved"
Neither the voice data nor the recognition result string is stored on the server side. Everything is processed in memory and is deleted from memory once the recognition result is returned.
Q: Is the transmitted voice data recognized and processed on servers in Japan?
Yes. All services are run on domestic servers.
Q: What is the uptime rate of your service?
Q: How will I be notified if there is a service outage?
For the AmiVoice API, we will notify you as soon as possible, during the reception hours below, on the AmiVoice API service login screen or on the ACP site.
Reception hours: 9:30 AM to 5:30 PM
(Excluding Saturdays, Sundays, national holidays, and the year-end and New Year holidays designated by the company)
For more information, see the SLA.
Q: Can I use the service during system maintenance?
With the AmiVoice API, voice recognition processing will remain available even during maintenance.
If we need to suspend the service and carry out maintenance, we will notify you at least three days in advance on the AmiVoice API service login screen or the AmiVoice Cloud Platform website.
*This does not apply in the event of an emergency.
For more information, see the SLA.
Q: Has your security been certified by a third party?
We have obtained the Privacy Mark.
― AmiVoice SDK
Q: Can recognition work even when not connected to the network?
There are three recognition types: server-side recognition, on-device recognition, and hybrid recognition, used depending on the situation. With on-device recognition, speech can be recognized even without a network connection; the number of vocabulary words that can be registered in the dictionary and language model depends on the device specifications.
Q: Can I use a domain-specific engine?
Domain-specific engines are available.
Q: Can I customize the engine?
Yes. For details and prices, please contact us.
Q: Do you offer multilingual support?
We can provide support in Japanese, English, Chinese, Korean, and Thai (as of March 2022).
The scope of support differs by language, so please check the details.
* Packaged products are available in Japanese only; multilingual support is handled on an individual basis.
Contact -
QI would like to know the price.A
As for the price,CLICK HERE.
Q: I would like to know about development examples.
For development examples, please see our case studies.
Q: Does it have a voice synthesis function?
Yes, it does.
Q: Can I develop with this even if I have no experience developing speech recognition?
Yes, you can.
We provide the libraries necessary for speech recognition along with examples of how to use them.
Q: Can I receive support when developing a speech recognition app?
We provide a development support service that guides you by email on how to use the libraries required for speech recognition. For more information on development support, please contact us.
Q: Is it possible to develop for multiple devices?
Yes, that is possible. If it is within the scope of the application you are developing, it can be used on multiple devices.
Q: Are there any costs if I use or sell the application or service I developed?
Once development is complete, using the voice recognition app in-house or selling it requires a "commercial license." For costs, please contact us.
Q: What are the target platforms and development languages?
Q: Can I use it in a web browser?
No. The SDK targets desktop apps for Windows and native apps for Android and iOS.
* For voice recognition in web browsers, please use the AmiVoice API.
Q: What does the SDK contain?
AmiVoice SDK consists of libraries, documentation, and sample source code.
Q: How do I obtain the SDK?
You can download it from the dedicated website.
― AmiVoice API Private
Q: What is the difference from the AmiVoice API?
AmiVoice API Private provides the AmiVoice API in a dedicated cloud environment or on-premise.
Because it is a dedicated environment, it achieves stable response and can use settings and engines tailored to your needs.
Q: What is on-premise?
We provide the AmiVoice API within your network.
Companies can build and operate their own voice recognition system suited to their environment, purpose, and service while complying with their own security standards. This makes it safe to handle voice data even at financial institutions, where information security is critical, or for highly confidential corporate information that cannot be stored outside the company.
Q: I would like to know the operating requirements for on-premise deployment.
For operating requirements, please contact us.
Q: Can I use rule grammar?
Yes. You can perform recognition using rule-grammar files stored on the server.
Q: Can I use a domain-specific engine?
Domain-specific engines are available.
Q: Can I customize the engine?
Yes. For details and prices, please contact us.
Q: I would like to know the price.
For pricing, please contact us.
― AmiVoice IVR for Amazon Connect
Q: I would like to know what it is used for.
We develop and build voice recognition IVR using Amazon Connect.
Q: I would like to know the features of its voice recognition.
In addition to domain-specific speech recognition engines, you can customize recognition phrases to suit your IVR flow.
Q: Can it be used outside of Amazon Connect?
It does not support anything other than Amazon Connect.
If you want to develop a voice recognition IVR outside of Amazon Connect, you will need to develop it separately using the AmiVoice SDK.
Q: Is the audio data stored?
The audio files and voice recognition result files are stored in the user's own Amazon S3.
Q: I'd like to know how to apply.
To apply, please contact us via the inquiry form. After an initial consultation, you can start using the service once you have submitted the application form and setup is complete.
Q: What are the fee structure and payment methods?
For information on fees and payment methods, please contact us.
Q: I would like to know what environment is necessary for voice recognition.
You will need to prepare your own Amazon Connect environment, as well as AWS Lambda, Kinesis Video Streams, Kinesis Data Streams, and DynamoDB for the voice recognition integration.
Q: Can I develop this even if I have no experience developing speech recognition?
We provide documentation and sample programs. There is no significant difference in the skills required compared to standard development that does not use voice recognition.
Q: Is development support available?
We provide development support via email inquiries.
Q: I would like to know about the available speech recognition engines.
You can freely choose an AmiVoice speech recognition engine specialized for call center call content.
* There are six selectable engines: general-purpose, finance, securities, communications, pharmaceuticals, and manufacturing.
Q: Is there a word registration function?
You can register dictionary words by registering the reading and spelling of the word you want to register. You can also use a word list customized for voice recognition IVR.
Q: I would like to know how to check my usage history.
You will be billed according to your monthly usage. As of April 2022, it is not possible to check your usage history each time.
― AmiVoice MRCP Server
Q: I would like to know what it is used for.
This is a voice recognition server system that supports MRCP (Media Resource Control Protocol). It is possible to develop and build a voice recognition IVR that supports MRCP.
Q: Which versions of MRCP are supported?
Both MRCP version 1 and MRCP version 2 are supported.
If your IVR supports either of the above, you can use the AmiVoice speech recognition engine by accessing the system from the IVR.
Q: Does it support NAT translation?
It is not supported.
Q: I'd like to know how to apply.
For details on how to apply, please contact us.
Q: I'd like to know the pricing structure.
For pricing, please contact us.
Q: I would like to know what environment is necessary for voice recognition.
You will need to set up the entire AmiVoice MRCP Server system in a server environment that can connect to your MRCP system.
Q: Can I develop this even if I have no experience developing speech recognition?
Development documentation is provided.
Scratch development is required on the MRCP IVR side.
Q: Is development support available?
We provide development support via email inquiries.
Q: I would like to know about the available speech recognition engines.
Various voice recognition engines are available.
Q: Is there a word registration function?
You can register words from the system.
It is also possible to use rule grammars customized with IVR-specific recognition phrases.
Q: Can it perform sentiment analysis?
No.