[Video: https://www.youtube.com/embed/6AH_RD_hAOA]
Does the ever-increasing volume of digital information demand a more powerful search interface? One that allows for more complex queries to be formulated, and supports instances where the information being sought out is not necessarily something the user is aware of when they begin tapping on the keyboard?
Researchers at the Helsinki Institute for Information Technology in Finland think so, and they have come up with just such an interface — spinning it out as a company, Etsimo, with the aim of commercializing the technology. Etsimo has been backed by University of Helsinki funds and the Finnish government’s TEKES program. The team is now looking for investors and partners to “go global”, as they put it.
Their visual discovery search interface is called SciNet, and can be seen in action in the above video. It’s currently web-based, but a mobile version with a touchscreen interface is in the works. They have also published a paper on it in the journal Communications of the ACM.
“We started research on user intent modeling and search user interfaces in 2011 by combining the strong expertise on machine learning and human-computer interaction at our institute,” says researcher Tuukka Ruotsalo, in an interview with TechCrunch, explaining the genesis of the project.
“We decided to target search after realizing that while search engines have developed dramatically over the last decades, not much has happened to better involve humans in the search process. The role of users in search is still at best reactive: we type in a few keywords and then try to make sense of results from a list of links and snippets.”
The SciNet approach to the increasingly hard problem of effective search is to involve the human user more, having them steer the algorithmic results by signaling multiple intents as the process progresses. This generates a dynamic, visible spectrum of results, depending on what they are looking for or interested in, and allows them to selectively drill down into complex queries in an informed, self-guided way. The basic idea is that human-steered results are better than those produced by algorithms alone.
“We want to rely on users to make the decision and steer the search rather than only trying to build a search engine that would try to come up with a perfect answer on the first shot,” says Ruotsalo.
It’s a refreshing counter-current to the growth of algorithmic content selection increasingly being foisted on digital users (Twitter springs to mind here, albeit human content choices still remain core to its product proposition, for now).
“In cases of complex searches, when even the user is uncertain about the initial query and types something in the hope of getting closer to the relevant information, it is unlikely that the search engine would be able to offer a perfect result at the first iteration. This is why we need to support the user in specifying her information need in interaction with the search engine. Our intent modeling and visualization technology has been shown to as much as double users’ performance in complex search tasks,” notes Ruotsalo.
“More specifically, we turn the search process from a memory recall task (the user tries to come up with optimal keywords) into a recognition task (the computer suggests and visualizes alternative search directions that the user can react to). This also makes the search process and personalization transparent for the user: the user sees what she gets and can direct the search in a more suitable direction, instead of having to rely on purely algorithmic ranking, which can at worst lock the user into a filter bubble.”
The SciNet search engine displays topics/keywords, or user “intents” as it calls these potential avenues where a search may lead, on a circle — called the “intent radar”. The relevance of each keyword to the initial search query of the user (which kicks off the process, as per a normal web search) is displayed as its distance from the centre point of the radar — the closer the keyword is to the centre, the more closely related to the topic it is deemed.
This visual structure allows the user to view and browse an overview of related information, as the algorithm sees it, gathered into groups of inter-related topics. Search results related to the current intent radar are then displayed in a list at the right of the circle, tagged with the various intent keywords they relate to.
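The paper and video describe the radar layout only at this high level, but the core mapping is simple: the more relevant an intent keyword, the closer it sits to the centre of the circle. A minimal sketch of that layout logic, with all names and the even angular spacing being illustrative assumptions rather than SciNet's actual implementation:

```python
import math

def radar_positions(intents, radius=1.0):
    """Place intent keywords on an 'intent radar' style circle.

    `intents` maps keyword -> relevance score in [0, 1]. Higher relevance
    means a smaller distance from the centre, per the article's description.
    Angular spacing here is simply even around the circle (an assumption;
    SciNet groups inter-related topics, which this sketch does not model).
    """
    positions = {}
    n = len(intents)
    for i, (keyword, relevance) in enumerate(sorted(intents.items())):
        angle = 2 * math.pi * i / n        # spread keywords around the circle
        r = radius * (1.0 - relevance)     # high relevance -> near the centre
        positions[keyword] = (r * math.cos(angle), r * math.sin(angle))
    return positions
```

For example, given `{"machine learning": 0.9, "databases": 0.2}`, the "machine learning" point lands much nearer the centre than "databases", mirroring how the radar signals which directions the engine currently deems most related to the query.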
The interface also allows the user to manipulate and amend search results by dragging whichever intents they are most interested in into the centre of the circle, prompting the algorithm to serve up a new set of more specifically directed results. This can either help them narrow a complex query, or browse for topical information in a more general sense, by exploring inter-relations between pieces of data and topics.
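The article doesn't detail how a drag translates into new results, but the effect it describes, boosting the weight of intents pulled toward the centre and re-ranking accordingly, can be sketched as a simple weighted re-scoring. All function names and the linear scoring scheme are assumptions for illustration:

```python
def rerank(results, intent_weights):
    """Re-order results after the user adjusts intent positions.

    `results` is a list of (doc_id, per_intent_relevance) pairs, where
    per_intent_relevance maps an intent keyword to that document's base
    relevance for it. `intent_weights` gives each intent's current weight,
    e.g. boosted for intents the user dragged toward the centre.
    A linear weighted sum is a stand-in for whatever ranking model
    SciNet actually uses.
    """
    def score(doc):
        _, per_intent = doc
        return sum(intent_weights.get(k, 0.0) * v for k, v in per_intent.items())
    return sorted(results, key=score, reverse=True)
```

Dragging an intent inward would then amount to raising its entry in `intent_weights` and calling `rerank` again, which is one plausible reading of how the list to the right of the radar refreshes.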
“The current generation of consumer search engines are well suited for lookup search, i.e. tasks in which the user already knows the question exactly, or even the information she is looking for, but just doesn’t remember where it can be found. However, they offer limited support for exploratory search,” says Ruotsalo. “A unique feature of our system is that structured background knowledge is not needed, but our technology is able to learn the user intent and the connections between the keywords on-line as the search progresses.
“Our search technology can model the intent of the user and visualize the uncertain dimensions for the user, the user can then react and specify the required information to make the decision. It acts like a smart person who doesn’t understand the question specifically enough to give good answers and needs to ask for more information.”
“We are targeting exploratory searches in which users’ goals are uncertain and evolve throughout the course of the search,” he adds.
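Ruotsalo's description of learning user intent online, with goals that evolve as the search progresses, suggests some form of incremental relevance feedback. As a heavily simplified toy stand-in (the actual SciNet model is not specified in the article, and this exponential-update rule and its names are purely illustrative):

```python
def update_intents(weights, feedback, lr=0.5):
    """One step of online intent-weight learning from user feedback.

    `weights` maps intent -> current weight; `feedback` maps intent -> an
    observed signal (e.g. 1.0 if the user pulled it to the centre, 0.0 if
    pushed away). Each weight is nudged toward its feedback value by a
    learning rate `lr`; intents with no feedback are left unchanged.
    """
    return {k: w + lr * (feedback.get(k, w) - w) for k, w in weights.items()}
```

Repeating this step each round would let the model track intents that shift mid-search, which is the exploratory behaviour the quotes above describe.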
The SciNet approach is a little reminiscent of a predictive discovery engine called Random which we covered last year — also coming out of Finland — especially in an earlier incarnation, back in 2013, when it was called Futureful. Back then its interface used a series of interlinking bubbles to visualize different pieces of data connected to a topic. The idea there was also to help free users from the filter bubble effect caused by over-reliance on algorithms yielding increasingly narrow results. (To my eye, SciNet also has some structural overlap with this authoring interface project, created to aid machine-learning assisted poetry composition — so also to help algorithms and humans work together to generate results.)
Who is SciNet being aimed at? While the team says it sees very wide potential utility for its visual discovery approach to search, it is focusing on custom implementations initially.
“At first, we are not targeting general web search, but custom search, for example companies or organizations who want to make their custom or deep web content searchable,” says Ruotsalo. “Our current showcase is an application to scientific literature, but it can be applied to any domain in which users struggle with complex search tasks.”
Via its startup business, the team says it is in the process of working on implementations with a few unnamed customers, who will integrate its visual discovery interface into their existing web presence. The Etsimo business model will be a SaaS subscription fee.
“We’ll have a limited commercial version, i.e. closed beta ready by end of Q1/2015. Our public cloud offering will be available late Q2 or early Q3/2015. This will be an outsourced search solution, where our customers make their own data searchable for their own customers through our cloud based service,” Etsimo CEO Thomas Grandell tells TechCrunch.