Software:Norconex Web Crawler

Norconex Web Crawler
  • Other names: Norconex HTTP Collector
  • Developer(s): Norconex Inc.
  • Initial release: 2016
  • Stable release: 3.0.2 / 2022-01-05
  • Repository: GitHub Repository
  • Written in: Java
  • Operating system: Cross-platform
  • License: Apache License
  • Website: Norconex Web Crawler

Norconex Web Crawler is a free and open-source web crawling and web scraping software written in Java and released under an Apache License. It can export data to many repositories such as Apache Solr, Elasticsearch,[1] Microsoft Azure Cognitive Search, Amazon CloudSearch, and more.[2][3][4]

The crawler can be run on its own or embedded in a Java application.[5][6]
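A minimal embedded setup might look like the following sketch. It assumes the 3.x Java API (classes such as HttpCollector, HttpCollectorConfig, and HttpCrawlerConfig from the com.norconex.collector.http packages); exact class and method names may differ between versions, and the start URL is a placeholder.

 import com.norconex.collector.http.HttpCollector;
 import com.norconex.collector.http.HttpCollectorConfig;
 import com.norconex.collector.http.crawler.HttpCrawlerConfig;
 
 public class EmbeddedCrawlerExample {
     public static void main(String[] args) {
         // Configure a single crawler: identifier, start URL, and crawl depth.
         HttpCrawlerConfig crawlerConfig = new HttpCrawlerConfig();
         crawlerConfig.setId("example-crawler");
         crawlerConfig.setStartURLs("https://example.com/"); // placeholder start URL
         crawlerConfig.setMaxDepth(2);
 
         // Wrap the crawler configuration in a collector configuration.
         HttpCollectorConfig collectorConfig = new HttpCollectorConfig();
         collectorConfig.setId("example-collector");
         collectorConfig.setCrawlerConfigs(crawlerConfig);
 
         // Launch the crawl from within the host application.
         HttpCollector collector = new HttpCollector(collectorConfig);
         collector.start();
     }
 }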

Some key features are:

  • Multi-threaded crawling
  • Extracts text from a variety of file formats (HTML, PDF, Word, etc.)
  • Extracts metadata associated with documents
  • Supports pages rendered with JavaScript
  • Performs incremental crawls
  • Supports external commands to parse or manipulate documents
  • Sends extracted data to a variety of repositories

Some well-known organizations and products using Norconex Web Crawler are the Apache Solr ecosystem, the Department of National Defence, Universities Canada, and the U.S. Department of Education.[7][8]

History

Norconex Web Crawler was released as free and open-source software in 2013.[9]

References

Mentions in Academic Research

See also