We use Apache Solr and Sphinx for enterprise search and indexing. Solr is a standalone enterprise search server with a REST-like API. You put documents into it (called "indexing") as JSON, XML, CSV or binary over HTTP, and you query it via HTTP GET, receiving results in JSON, XML, CSV or binary form. Solr offers powerful matching capabilities powered by Apache Lucene, including phrases, wildcards, joins, grouping and much more, across any data type. Coordinated by the battle-tested Apache ZooKeeper, Solr makes it simple to scale up and down, with replication, distribution, rebalancing and fault tolerance built in out of the box.
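To make the HTTP/JSON interface concrete, here is a minimal sketch of how an application might prepare an indexing (update) request and a query URL for Solr. The host, port and core name ("articles") are assumptions for illustration; in practice you would POST the body with any HTTP client (for example `urllib.request`).

```python
import json
from urllib.parse import urlencode

# Hypothetical local Solr instance and core name -- adjust to your setup.
SOLR_BASE = "http://localhost:8983/solr/articles"

def build_index_request(docs):
    """Build the URL and JSON body for indexing ("adding") documents.

    Solr accepts a JSON array of documents POSTed to /update.
    """
    url = f"{SOLR_BASE}/update?commit=true"
    body = json.dumps(docs)
    return url, body

def build_query_url(q, **params):
    """Build a /select query URL; results come back as JSON."""
    params = {"q": q, "wt": "json", **params}
    return f"{SOLR_BASE}/select?{urlencode(params)}"

docs = [{"id": "1", "title": "Enterprise search with Solr"}]
index_url, index_body = build_index_request(docs)
query_url = build_query_url('title:"enterprise search"', rows=10)

print(index_url)
print(query_url)
# To actually send these, POST index_body to index_url with a
# Content-Type of application/json, and GET query_url.
```

The same `/update` and `/select` endpoints accept XML and CSV as well; JSON is shown here because it is the most common choice for application code.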
Sphinx is a full-text search engine, publicly distributed under the GPL.
Technically, Sphinx is a standalone software package that delivers fast and relevant full-text search functionality to client applications. It was specifically designed to integrate well with SQL databases storing the data and to be easily accessed from scripting languages; however, Sphinx does not depend on, nor require, any specific database to function. Applications can access the Sphinx search daemon (searchd) using any of three access methods: a) via Sphinx's own implementation of the MySQL network protocol, using a small SQL subset called SphinxQL (the recommended way); b) via the native search API (SphinxAPI); or c) via a MySQL server with a pluggable storage engine (SphinxSE).
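As a sketch of the recommended SphinxQL route: because searchd speaks the MySQL network protocol (by default on port 9306), an application can send it plain SQL-like statements through any MySQL client library. The helper below only composes such a statement; the index name "articles" and the query text are illustrative assumptions.

```python
# Minimal sketch of composing a SphinxQL statement. In practice you
# would send it through any MySQL client library (e.g. pymysql)
# connected to searchd's MySQL-protocol port (default 9306).

def escape_match(query: str) -> str:
    """Escape characters that are special inside Sphinx MATCH() syntax."""
    special = '\\()|-!@~"&/^$='
    return "".join("\\" + ch if ch in special else ch for ch in query)

def build_search(index: str, user_query: str, limit: int = 10) -> str:
    """Build a SphinxQL SELECT with a full-text MATCH and a LIMIT clause."""
    return (
        f"SELECT id, WEIGHT() AS w FROM {index} "
        f"WHERE MATCH('{escape_match(user_query)}') "
        f"ORDER BY w DESC LIMIT {limit}"
    )

stmt = build_search("articles", "enterprise search")
print(stmt)
```

Escaping the user query before placing it inside `MATCH()` matters because characters like `-`, `!` and `@` are operators in Sphinx's extended query syntax.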
Kreara serves clients of all scales and geographies. We have worked extensively in the US and European markets, with a specific focus on analytics and application solutions for the pharmaceutical, financial and retail sectors. Recently we have also started working with government bodies, helping them analyse public data and visualise it in a user-friendly manner. We treat each of our clients as if they were our only customer, working with them closely to understand and cater to their requirements.