
A Simple Introduction to Lucene

From: https://segmentfault.com/a/1190000004422101

Calling Lucene the king of retrieval in the Java world is no exaggeration. Elasticsearch, which has taken off in recent years, as well as the earlier Solr and SolrCloud, are all built on Lucene under the hood, so a basic understanding of Lucene helps when working with Elasticsearch. This article walks through its basic API usage.

Adding the dependencies

        <dependency>
            <groupId>org.apache.lucene</groupId>
            <artifactId>lucene-core</artifactId>
            <version>4.6.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.lucene</groupId>
            <artifactId>lucene-analyzers-common</artifactId>
            <version>4.6.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.lucene</groupId>
            <artifactId>lucene-queryparser</artifactId>
            <version>4.6.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.lucene</groupId>
            <artifactId>lucene-codecs</artifactId>
            <version>4.6.1</version>
        </dependency>

Indexing and searching

Creating an index

    File indexDir = new File(this.getClass().getClassLoader().getResource("").getFile());

    @Test
    public void createIndex() throws IOException {
//        Directory index = new RAMDirectory();
        Directory index = FSDirectory.open(indexDir);

        // 0. Specify the analyzer for tokenizing text.
        //    The same analyzer should be used for indexing and searching
        StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_46);
        IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_46, analyzer);

        // 1. create the index
        IndexWriter w = new IndexWriter(index, config);
        addDoc(w, "Lucene in Action", "193398817");
        addDoc(w, "Lucene for Dummies", "55320055Z");
        addDoc(w, "Managing Gigabytes", "55063554A");
        addDoc(w, "The Art of Computer Science", "9900333X");
        w.close();
    }

    private void addDoc(IndexWriter w, String title, String isbn) throws IOException {
        Document doc = new Document();
        doc.add(new TextField("title", title, Field.Store.YES));
        // use a string field for isbn because we don't want it tokenized
        doc.add(new StringField("isbn", isbn, Field.Store.YES));
        w.addDocument(doc);
    }
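The same IndexWriter can also replace or remove documents that are already in the index. The following is a minimal sketch (not from the original article), assuming the index and the addDoc helper above, and using the untokenized isbn field as the unique key:

    @Test
    public void updateAndDelete() throws IOException {
        Directory index = FSDirectory.open(indexDir);
        StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_46);
        IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_46, analyzer);
        IndexWriter w = new IndexWriter(index, config);

        // updateDocument = delete-by-term + add: replaces the document
        // whose isbn matches, or simply adds it if none matches
        Document doc = new Document();
        doc.add(new TextField("title", "Lucene in Action, Second Edition", Field.Store.YES));
        doc.add(new StringField("isbn", "193398817", Field.Store.YES));
        w.updateDocument(new Term("isbn", "193398817"), doc);

        // remove a document entirely by its isbn term
        w.deleteDocuments(new Term("isbn", "55063554A"));
        w.close();
    }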

Searching

    @Test
    public void search() throws IOException {
        // 2. query
        String querystr = "lucene";

        // the "title" arg specifies the default field to use
        // when no field is explicitly specified in the query.
        Query q = null;
        try {
            StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_46);
            q = new QueryParser(Version.LUCENE_46, "title", analyzer).parse(querystr);
        } catch (Exception e) {
            e.printStackTrace();
        }

        // 3. search
        int hitsPerPage = 10;
        Directory index = FSDirectory.open(indexDir);
        IndexReader reader = DirectoryReader.open(index);
        IndexSearcher searcher = new IndexSearcher(reader);
        TopScoreDocCollector collector = TopScoreDocCollector.create(hitsPerPage, true);
        searcher.search(q, collector);
        ScoreDoc[] hits = collector.topDocs().scoreDocs;

        // 4. display results
        System.out.println("Found " + hits.length + " hits.");
        for (int i = 0; i < hits.length; ++i) {
            int docId = hits[i].doc;
            Document d = searcher.doc(docId);
            System.out.println((i + 1) + ". " + d.get("isbn") + "\t" + d.get("title"));
        }

        // reader can only be closed when there
        // is no need to access the documents any more.
        reader.close();
    }
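QueryParser runs the query string through the analyzer, which suits the tokenized title field. For the untokenized isbn StringField, an exact TermQuery is the usual choice instead. A minimal sketch (not from the original article), assuming the same index as above:

    @Test
    public void searchByIsbn() throws IOException {
        Directory index = FSDirectory.open(indexDir);
        IndexReader reader = DirectoryReader.open(index);
        IndexSearcher searcher = new IndexSearcher(reader);

        // exact match on the untokenized isbn field, no analysis involved
        Query q = new TermQuery(new Term("isbn", "193398817"));
        TopDocs topDocs = searcher.search(q, 10);
        for (ScoreDoc hit : topDocs.scoreDocs) {
            Document d = searcher.doc(hit.doc);
            System.out.println(d.get("isbn") + "\t" + d.get("title"));
        }
        reader.close();
    }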

Tokenization

For search, analysis happens in two places: the user's query keywords are analyzed, and document content is analyzed at indexing time. Both should use the same analyzer, so that queries and indexed documents match up properly.

    @Test
    public void cutWords() throws IOException {
//        StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_46);
//        CJKAnalyzer analyzer = new CJKAnalyzer(Version.LUCENE_46);
        SimpleAnalyzer analyzer = new SimpleAnalyzer(Version.LUCENE_46);
        String text = "Spark是当前最流行的开源大数据内存计算框架,采用Scala语言实现,由UC伯克利大学AMPLab实验室开发并于2010年开源。";
        TokenStream tokenStream = analyzer.tokenStream("content", new StringReader(text));
        CharTermAttribute charTermAttribute = tokenStream.addAttribute(CharTermAttribute.class);
        try {
            tokenStream.reset();
            while (tokenStream.incrementToken()) {
                System.out.println(charTermAttribute.toString());
            }
            tokenStream.end();
        } finally {
            tokenStream.close();
            analyzer.close();
        }
    }

Output

spark  是  当前  最  流行  的  开源  大数  据  内存  计算  框架  采用  scala  语言  实现  由  uc  伯克利  大学  amplab  实验室  开发  并于  2010  年  开源
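As the commented-out lines in cutWords() hint, the choice of analyzer matters a lot for Chinese text: StandardAnalyzer emits one token per CJK character, while CJKAnalyzer (already pulled in by lucene-analyzers-common) emits overlapping bigrams, which usually match better than single characters. A minimal sketch (not from the original article) of the same loop with CJKAnalyzer:

    @Test
    public void cutWordsWithCjk() throws IOException {
        // CJKAnalyzer produces overlapping bigrams for runs of CJK characters
        CJKAnalyzer analyzer = new CJKAnalyzer(Version.LUCENE_46);
        String text = "Spark是当前最流行的开源大数据内存计算框架";
        TokenStream tokenStream = analyzer.tokenStream("content", new StringReader(text));
        CharTermAttribute term = tokenStream.addAttribute(CharTermAttribute.class);
        try {
            tokenStream.reset();
            while (tokenStream.incrementToken()) {
                System.out.println(term.toString());
            }
            tokenStream.end();
        } finally {
            tokenStream.close();
            analyzer.close();
        }
    }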

The sample project is on github.

