jopen · posted 5 years ago

Counting Chinese Word Frequencies with Pig + Ansj

Lately I have been really enjoying Pig. It has built-in functions that cover most needs, supports user-defined functions (UDFs), and can load plain text, Avro, and other formats. You can use illustrate to inspect the result of each step of a Pig execution and describe to see an alias's schema, and you run MapReduce jobs as lightweight scripts. It is great.

1. Word Count

A = load '/user/.*/req-temp/text.txt' as (text:chararray);
B = foreach A generate flatten(TOKENIZE(text)) as word;
C = group B by word;
D = foreach C generate COUNT(B), group;
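Relations C and D perform an ordinary group-by-word-then-count aggregation. As a point of reference, that step can be sketched in plain Java (a standalone sketch, not part of the Pig job; the class and method names here are invented for illustration):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GroupCount {
    // Equivalent of "group B by word" followed by COUNT(B): word -> frequency
    static Map<String, Long> countWords(List<String> words) {
        Map<String, Long> freq = new HashMap<>();
        for (String w : words) {
            freq.merge(w, 1L, Long::sum);
        }
        return freq;
    }

    public static void main(String[] args) {
        Map<String, Long> freq =
            countWords(Arrays.asList("to", "be", "or", "not", "to", "be"));
        System.out.println(freq.get("to")); // prints 2
    }
}
```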

The implementation is in TOKENIZE.java. The abstract class EvalFunc<T> is used to implement transformations on data fields; its exec() method is invoked while Pig is running.

public class TOKENIZE extends EvalFunc<DataBag> {
    TupleFactory mTupleFactory = TupleFactory.getInstance();
    BagFactory mBagFactory = BagFactory.getInstance();

    @Override
    public DataBag exec(Tuple input) throws IOException {
        ...
        DataBag output = mBagFactory.newDefaultBag();
        ...
        String delim = " \",()*";
        ...
        StringTokenizer tok = new StringTokenizer((String) o, delim, false);
        while (tok.hasMoreTokens()) {
            output.add(mTupleFactory.newTuple(tok.nextToken()));
        }
        return output;
        ...
    }
}

The TOKENIZE class extends the abstract class EvalFunc<T> and uses StringTokenizer to split English text into words, returning a DataBag. Consequently, to count individual words, the Pig script has to apply the flatten function to unnest the bag.
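The combined effect of TOKENIZE plus flatten can be reproduced outside Pig with the same StringTokenizer settings. A minimal sketch (class and method names are invented for illustration; the delimiter string is the one from TOKENIZE.java above):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.StringTokenizer;

public class FlattenDemo {
    // Like TOKENIZE.exec(): one bag (here, a List) of words per input line
    static List<String> tokenize(String line) {
        List<String> bag = new ArrayList<>();
        StringTokenizer tok = new StringTokenizer(line, " \",()*", false);
        while (tok.hasMoreTokens()) {
            bag.add(tok.nextToken());
        }
        return bag;
    }

    // Like flatten: unnest the per-line bags into a single stream of words
    static List<String> flatten(List<String> lines) {
        List<String> words = new ArrayList<>();
        for (String line : lines) {
            words.addAll(tokenize(line));
        }
        return words;
    }

    public static void main(String[] args) {
        System.out.println(flatten(Arrays.asList("hello (pig, world)", "hello pig")));
        // prints [hello, pig, world, hello, pig]
    }
}
```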

2. Ansj Chinese Word Segmentation

To write the Pig UDF, add the following Maven dependencies:

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>${hadoop.version}</version>
    <scope>provided</scope>
</dependency>

<dependency>
    <groupId>org.apache.pig</groupId>
    <artifactId>pig</artifactId>
    <version>${pig.version}</version>
    <scope>provided</scope>
</dependency>

<dependency>
    <groupId>org.ansj</groupId>
    <artifactId>ansj_seg-all-in-one</artifactId>
    <version>3.0</version>
</dependency>

Run hadoop version to find the Hadoop version, and pig -i to find the Pig version. Make sure the Pig version matches the one deployed on the cluster; otherwise the job fails with:

ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias D

Following TOKENIZE.java as a template and modifying it yields the Chinese word segmenter Segment.java:

package com.pig.udf;

public class Segment extends EvalFunc<DataBag> {

    TupleFactory mTupleFactory = TupleFactory.getInstance();
    BagFactory mBagFactory = BagFactory.getInstance();

    @Override
    public DataBag exec(Tuple input) throws IOException {
        try {
            if (input == null)
                return null;
            if (input.size() == 0)
                return null;
            Object o = input.get(0);
            if (o == null)
                return null;
            DataBag output = mBagFactory.newDefaultBag();
            if (!(o instanceof String)) {
                int errCode = 2114;
                String msg = "Expected input to be chararray, but" +
                    " got " + o.getClass().getName();
                throw new ExecException(msg, errCode, PigException.BUG);
            }

            // filter punctuation
            FilterModifWord.insertStopNatures("w");
            List<Term> words = ToAnalysis.parse((String) o);
            words = FilterModifWord.modifResult(words);

            for (Term word : words) {
                output.add(mTupleFactory.newTuple(word.getName()));
            }
            return output;
        } catch (ExecException ee) {
            throw ee;
        }
    }

    @SuppressWarnings("deprecation")
    @Override
    public Schema outputSchema(Schema input) {
        ...
    }
    ...
}

ansj supports stop words by part-of-speech tag via FilterModifWord.insertStopNatures("w"), which drops punctuation tokens. Package the Java source into a jar, put it on HDFS, and then REGISTER the jar to call the function:

REGISTER hdfs:///user/.*/piglib/udf-0.0.1-SNAPSHOT-jar-with-dependencies.jar
A = load '/user/.*/req-temp/renmin.txt' as (text:chararray);
B = foreach A generate flatten(com.pig.udf.Segment(text)) as word;
C = group B by word;
D = foreach C generate COUNT(B), group;

Take a passage from a People's Daily editorial (kept in the original Chinese, since it is the segmenter's input):

树好家风,严管才是厚爱。古人说:“居官所以不能清白者,率由家人喜奢好侈使然也。”要看到,好的家风,能系好人生的“第一粒扣子”。“修身、齐家”,才能“治国、平天下”,领导干部首先要“正好家风、管好家人、处好家事”,才能看好“后院”、堵住“后门”。“父母之爱子,则为之计深远”,与其冒着风险给子女留下大笔钱财,不如给子女留下好家风、好作风,那才是让子女受益无穷的东西,才是真正的“为之计深远”。

The word frequencies come out as follows:

(3,能)
(2,要)
(2,计)
(1,让)
(1,说)
(1,那)
(2,风)
(1,不如)
(1,不能)
(1,与其)
(1,东西)
(1,人生)
(1,作风)
(1,使然)
(1,修身)
(1,厚爱)
(1,受益)
(1,古人)
(1,后门)
(1,后院)
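The tuples above come out unsorted. In the Pig script a ranking could be added with an order step (e.g. E = order D by $0 desc;); the same sort over (word, count) pairs can be sketched in plain Java (names invented for illustration):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SortByFreq {
    // Order the word -> count entries by descending count
    static List<Map.Entry<String, Long>> rank(Map<String, Long> freq) {
        List<Map.Entry<String, Long>> entries = new ArrayList<>(freq.entrySet());
        entries.sort(Map.Entry.<String, Long>comparingByValue(Comparator.reverseOrder()));
        return entries;
    }

    public static void main(String[] args) {
        Map<String, Long> freq = new LinkedHashMap<>();
        freq.put("能", 3L);
        freq.put("要", 2L);
        freq.put("让", 1L);
        System.out.println(rank(freq).get(0).getKey()); // prints 能
    }
}
```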


Without loading a user-defined dictionary, ansj's segmentation results are not ideal.


Source: http://www.cnblogs.com/en-heng/p/5125507.html
