Last month, the XDA Developers team reported a change to the Chromium source with the addition of a new feature described as follows: “enable display of the voice search button in the top toolbar”.
Chromium is a free, open-source browser project created by Google, from which Google Chrome derives much of its source code.
Today, while monitoring Google Chrome Beta (the channel that ships features still under development), I noticed the same feature.
The new voice options in Chrome Beta
In the image below, you can see the new options on Chrome Beta.
In the video you will see an experiment in which, using a FAQ page and structured data, I obtained a rich snippet in the SERP with the option to directly open a voice application (a Google Action or an Alexa Skill) on a smartphone or Smart Home device.
An experiment that combines the world of SEO and Search with Voice Technology
Let’s recap the operations I performed, which are also described in the video (you can enable subtitles, but the video is quite simple to follow).
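One of those operations is adding FAQ structured data to the page. As a hedged sketch (the question, the answer, and the link URL below are placeholders, not the ones used in the video), the FAQPage JSON-LD block can be generated like this:

```python
import json

# Minimal FAQPage structured data. Question text, answer text, and the
# voice-app link URL are hypothetical placeholders.
faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How do I open the voice app?",
            "acceptedAnswer": {
                "@type": "Answer",
                # The answer HTML can carry a link that launches the
                # voice application (hypothetical URL).
                "text": "Say the invocation phrase, or tap "
                        "<a href='https://example.com/voice-app'>this link</a>.",
            },
        }
    ],
}

# Emit the JSON-LD to embed in a <script type="application/ld+json"> tag.
print(json.dumps(faq_markup, indent=2))
```

Once the markup validates, Google may show the questions as an expandable rich snippet directly in the SERP.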
The universe of conversational systems now offers several solutions, which are updated very often.
With this post, I will try to provide some clarity and give you some ideas on how to choose according to your needs.
If you want to create a voice application in the Amazon environment (that is, an Alexa Skill), you will use the Alexa Skills Kit: https://amzn.to/315OslN.
If you want to implement a Google Action, there are at least two solutions: Dialogflow (http://bit.ly/dialogflow-console-es) or Actions Builder (available in the Actions on Google console: http://bit.ly/aog-console-ab).
Of course, the second is a new tool…
In this mini post, with a simple example, we will see how to make a conversational agent recognize a product code entered by the user within Dialogflow. The principle is the same on Actions Builder and Dialogflow CX.
The following are two examples of code format to recognize:
So I create the “codice-prodotto” entity and select “Regexp entity” as the matching type.
To describe the formats, I used the following rules.
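The actual code formats and rules are shown in the original post and are not reproduced here. As a stand-in, the sketch below assumes a hypothetical format of two uppercase letters, a hyphen, and four digits, and checks it with a plain Python regex, the same kind of pattern a Regexp entity would hold:

```python
import re

# Hypothetical product-code format: two uppercase letters, a hyphen,
# then four digits (e.g. "AB-1234"). The real formats differ.
PRODUCT_CODE = re.compile(r"^[A-Z]{2}-\d{4}$")

def is_product_code(text: str) -> bool:
    """Return True when the user's utterance matches the code format."""
    return PRODUCT_CODE.match(text.strip()) is not None

print(is_product_code("AB-1234"))   # True
print(is_product_code("ab-1234"))   # False: letters must be uppercase
```

With a Regexp entity, Dialogflow extracts the matched code as a parameter, so the agent can then look the product up or ask a follow-up question.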
Yesterday Google launched Dialogflow CX in beta, a new system designed for companies that need larger and more complex conversational projects.
First of all, intents were (rightly) decoupled from actions: an intent only has to recognize what the user says.
So where is the action triggered by an intent decided? Within the scene!
This is an optimization over Dialogflow, because the same intent can be reused in different "contexts". I achieved the same thing in Dialogflow by writing fulfillment code, to standardize flows with Alexa Skills; with Dialogflow CX it is much simpler and more natural.
But how do you manage priorities between intents?
Priorities no longer exist, but an intent can be made global! This too is an…
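The scene/intent split described above can be sketched with a toy router. This is plain Python, not the Dialogflow CX API, and all scene, intent, and action names are made up; it only illustrates that the intent labels what the user said, while the active scene decides which action runs:

```python
# (scene, intent) -> action: the same "confirm" intent maps to
# different actions depending on the scene the user is in.
ROUTES = {
    ("checkout", "confirm"): "place_order",
    ("delete_account", "confirm"): "delete_account",
}

def handle(scene: str, intent: str) -> str:
    """Pick the action for a recognized intent based on the current scene."""
    return ROUTES.get((scene, intent), "fallback")

print(handle("checkout", "confirm"))        # place_order
print(handle("delete_account", "confirm"))  # delete_account
```

A global intent, in this picture, would be one matched regardless of the current scene.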
With voice technology, you monetize by creating valuable voice applications that satisfy users' needs across different categories such as games, entertainment, and services. You can then offer voice purchases of enriched experiences and subscriptions.
This last phase happens thanks to Alexa in-skill purchasing and Google's payment integrations.
In less than 12 months, Volley's Alexa games accumulated over 500,000 monthly active users!
Thanks to in-skill purchases they were able to monetize, offering premium content in the form of expansion packs for Yes Sire, their flagship game.
AP MidiBridge 1.0: a simple web-based MIDI controller.
This is a prototype of a simple MIDI controller based on a web interface.
Communication between the system and the DAW uses the rtpMIDI protocol (MIDI over WiFi).
The heart of the system is installed on a Raspberry Pi, which also acts as an access point for clients (each device connected to the WiFi becomes a potential MIDI controller).
The system provides two MIDI ports: one to control the DAW, and the other to control a VST instrument.
The prototype allows full customization of the interface and lets you save configurations as different profiles. Every MIDI parameter for the VST instrument control can be customized.
The web interface makes the system usable from any device regardless of operating system.
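As a rough illustration of the traffic such a bridge produces, here is a plain-Python sketch that builds the three bytes of a MIDI Control Change message (the kind a knob or fader on the web interface would generate). The rtpMIDI transport itself is out of scope, and nothing here is taken from the actual project:

```python
def control_change(channel: int, controller: int, value: int) -> bytes:
    """Build a 3-byte MIDI Control Change message.

    Status byte is 0xB0 OR'ed with the channel (0-15); controller
    number and value are 7-bit (0-127).
    """
    if not (0 <= channel <= 15 and 0 <= controller <= 127 and 0 <= value <= 127):
        raise ValueError("MIDI field out of range")
    return bytes([0xB0 | channel, controller, value])

# e.g. set channel 1 volume (CC 7) to 100
print(control_change(0, 7, 100).hex())  # b00764
```

Messages like this, wrapped in rtpMIDI packets, are what the DAW or the VST instrument receives on its MIDI port.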
SEO Manager, Head of Voice Technology, AI Conversation Designer, Voice UX @ site By site // Author of Voice Technology — Dario Flaccovio Editore