LEN24’s solutions are based on proprietary semantic technology: a set of techniques for finding meaning in unstructured text, and for using that meaning to find related texts. Meaning is extracted by analyzing text linguistically, mapping words and expressions onto a ConceptNet (a semantic network whose nodes represent words or short phrases of natural language, connected by labeled relationships), and using powerful semantic pattern matching to combine these concepts into meaningful entities.
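To make the pipeline above concrete, here is a minimal, illustrative sketch (not LEN24’s actual implementation; all class and relation names are hypothetical): a tiny semantic network of concept nodes with labeled relations, plus a lexicon that maps surface expressions in raw text onto concept nodes.

```python
# Illustrative sketch only -- a toy stand-in for a ConceptNet:
# concept nodes, labeled relations between them, and a lexicon
# mapping natural-language expressions onto concepts.

class ConceptNet:
    def __init__(self):
        self.relations = []   # (head, relation, tail) triples
        self.lexicon = {}     # surface expression -> concept node

    def add_relation(self, head, relation, tail):
        self.relations.append((head, relation, tail))

    def add_lexical_mapping(self, expression, concept):
        self.lexicon[expression.lower()] = concept

    def concepts_in(self, text):
        """Map words/phrases found in raw text onto concept nodes."""
        lowered = text.lower()
        return [concept for expression, concept in self.lexicon.items()
                if expression in lowered]

    def related(self, concept, relation):
        """Follow a labeled relation from one concept to others."""
        return [t for h, r, t in self.relations
                if h == concept and r == relation]

net = ConceptNet()
net.add_lexical_mapping("heart attack", "MyocardialInfarction")
net.add_lexical_mapping("myocardial infarction", "MyocardialInfarction")
net.add_relation("MyocardialInfarction", "is_a", "CardiovascularDisease")

print(net.concepts_in("Patient presented with a heart attack."))
# -> ['MyocardialInfarction']
print(net.related("MyocardialInfarction", "is_a"))
# -> ['CardiovascularDisease']
```

A production engine would of course use far richer linguistic analysis and pattern matching than substring lookup; the sketch only shows the shape of the data structure, where two different surface forms resolve to the same concept node.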
Machine/deep learning and human linguists are used to build these ConceptNets semi-automatically, and APIs are in place to scale the company’s Natural Language Understanding products rapidly. The underlying semantic analysis and matching engine is language- and domain-independent, which means that whenever the engine has to handle a new domain, a new ConceptNet has to be built. Doing this fully manually is notoriously expensive, so the basic approach is to start from open-source and domain-specific ontologies and taxonomies.

When a new language is added to a domain-dependent application, a dictionary has to be created mapping lexical expressions to the concepts. The ConceptNet then functions as an interlingua, making it possible to match documents written in different languages. The crux of LEN24’s approach is finding ways to speed up building ConceptNets and mapping lexicons without requiring many computational linguists. To this end, LEN24 uses deep learning algorithms to generate concept candidates and lexicons automatically.
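The interlingua idea can be sketched as follows (an illustrative toy, not LEN24’s engine; the lexicon entries and scoring function are invented for the example): each language gets its own dictionary mapping lexical expressions onto shared concept identifiers, and documents in different languages are matched by comparing their concept sets rather than their words.

```python
# Illustrative sketch only: per-language lexicons map surface
# expressions onto shared concept IDs, so the concept layer acts
# as an interlingua for cross-lingual document matching.

LEXICONS = {
    "en": {"heart attack": "MyocardialInfarction"},
    "nl": {"hartaanval": "MyocardialInfarction"},
}

def concepts(text, language):
    """Map a document onto the set of concepts its expressions denote."""
    lowered = text.lower()
    return {concept for expr, concept in LEXICONS[language].items()
            if expr in lowered}

def match_score(doc_a, lang_a, doc_b, lang_b):
    """Jaccard overlap of two documents' concept sets (0.0 to 1.0)."""
    a, b = concepts(doc_a, lang_a), concepts(doc_b, lang_b)
    return len(a & b) / len(a | b) if a | b else 0.0

score = match_score("The patient had a heart attack.", "en",
                    "De patiënt kreeg een hartaanval.", "nl")
print(score)  # -> 1.0: both documents map to the same concept
```

Because matching happens at the concept level, adding a new language requires only a new lexicon for that language; the ConceptNet and the matching logic stay unchanged, which is exactly what makes the per-language dictionary the only language-specific artifact.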