Google has developed a neural network to improve the relevance of app search results on Google Play by predicting topics on the basis of each app’s name and description.
With well over two million apps each on Google’s Play Store and Apple’s App Store, app discovery remains a challenge for Android and iOS users, as well as for developers.
According to Google, about half of the search queries on the Play Store are general topic searches such as ‘horror games’ or ‘selfie apps’. However, search on both stores has been hampered by developers trying to game results in a bid to get seen by users, and Apple and Google have each been working to stamp out those efforts.
Apple, for example, recently prevented iOS developers from using long app names and irrelevant descriptions to boost discoverability via search. More recently, it launched app search ads to help smaller developers stand out.
Google launched a similar search-ads product for apps last year and recently changed its store rules to stop developers from using bogus ratings and fraudulent installs to get discovered.
Google’s latest effort to improve the relevance of app search results on the Play Store relies on a deep neural network with a little help from humans, who, like their machine counterparts, also needed some training to classify topics for apps.
The main goal of the neural network is to automatically predict which search topics should be linked to an app based purely on the app’s name and description. Google needed a system that could apply thousands of topics to millions of apps.
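As a rough illustration of the task, and not Google’s actual model, the problem is multi-label text classification: map an app’s name and description to any number of topics. The sketch below uses scikit-learn with hypothetical app names, topics, and training data; Google’s system is a deep neural network trained at far larger scale.

```python
# Hypothetical sketch of multi-label topic prediction from an app's name and
# description. Not Google's model: a simple bag-of-words classifier stands in
# for the deep neural network described in the article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

apps = [  # made-up name + description strings
    "Zombie Night: survive the undead in this scary horror game",
    "Selfie Studio: camera filters and photo sharing with friends",
    "Pocket Ledger: track expenses and plan budgets",
]
topics = [["horror games"], ["selfie apps", "photo sharing"], ["finance"]]

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(topics)           # one binary column per topic

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(apps)

# One binary classifier per topic, echoing the 'series of app classifiers'
# the article mentions.
clf = OneVsRestClassifier(LogisticRegression()).fit(X, y)

query = vectorizer.transform(["Ghost Manor: a scary horror adventure game"])
for topic, p in sorted(zip(mlb.classes_, clf.predict_proba(query)[0]),
                       key=lambda pair: -pair[1]):
    print(f"{topic}: {p:.2f}")          # 'horror games' should rank highest
```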
Finding vast troves of data to train neural networks rarely seems to be a challenge for Google, but when it came to topics for apps on the Play Store it actually was. So the company’s software engineers needed to develop a neural network that could be trained on a lean data diet.
“While for some popular topics such as ‘social networking’ we had many labeled apps to learn from, the majority of topics had only a handful of examples,” Google’s software engineers explain.
“Our challenge was to learn from a very limited number of training examples and scale to millions of apps across thousands of topics, forcing us to adapt our machine-learning techniques.”
Google says it overcame the shortage by emulating how humans use app descriptions to quickly categorize apps, including ones they’ve never seen before.
The company used several neural networks that, when combined, were capable of predicting ‘share’ when presented with the word ‘photo’. The design includes a series of app classifiers that link different topics to each app. While the system performed well on popular topics such as ‘social networking’, it was less adept at niche topics like ‘selfie’.
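That ‘photo’-to-‘share’ behavior is characteristic of learned word embeddings, where related words end up close together in a vector space. The snippet below only illustrates that idea with made-up vectors; a real system learns its embeddings from large text corpora.

```python
# Illustration of word association via embedding similarity. The vectors are
# invented for the example; real embeddings are learned, not hand-written.
import numpy as np

embeddings = {
    "photo":  np.array([0.9, 0.8, 0.1, 0.0]),
    "share":  np.array([0.8, 0.9, 0.2, 0.1]),
    "ledger": np.array([0.0, 0.1, 0.9, 0.8]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for word, vec in embeddings.items():
    if word != "photo":
        print(word, round(cosine(embeddings["photo"], vec), 3))
# 'share' scores far higher than 'ledger', so a system built on such
# vectors would associate 'photo' with sharing-related topics.
```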
Human app reviewers added a final layer of training by reporting whether they agreed with the system’s output.
“To evaluate {app, topic} pairs by human raters, we asked them questions of the form, ‘To what extent is topic X related to app Y?’. Multiple raters received the same question and independently selected answers on a rating scale to indicate if the topic was ‘important’ for the app, ‘somewhat related’, or completely ‘off-topic’,” Google’s engineers said.
However, Google found that raters were disagreeing among themselves over which topics were relevant to an app, so Google ended up training the humans too.
“Asking raters to choose an explicit reason for their answer from a curated list further improved reliability. Despite the improvements, we sometimes still have to ‘agree to disagree’ and currently discard answers where raters fail to reach consensus,” they noted.
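The consensus rule the engineers describe, keeping a rating only when raters agree and discarding it otherwise, can be sketched as below. The two-thirds majority threshold is an assumption for illustration; Google doesn’t publish its exact rule.

```python
# Sketch of consensus filtering over {app, topic} ratings. The two-thirds
# threshold and all rater answers are hypothetical.
from collections import Counter

ratings = {  # answers from three independent raters per {app, topic} pair
    ("PhotoShare", "photo sharing"): ["important", "important", "important"],
    ("PhotoShare", "horror games"):  ["off-topic", "off-topic", "somewhat related"],
    ("Pocket Ledger", "selfie apps"): ["off-topic", "important", "somewhat related"],
}

def consensus(answers, threshold=2 / 3):
    label, count = Counter(answers).most_common(1)[0]
    return label if count / len(answers) >= threshold else None

for pair, answers in ratings.items():
    label = consensus(answers)
    print(pair, "->", label if label else "discarded (no consensus)")
```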
Google doesn’t say how effective the system is. However, it is good enough for the company to be using it to guide search and discovery on the Play Store.
[Source: ZDNET]