1) Create the table (see the transcript below):
create table dic(word string, num string, class string) row format delimited fields terminated by ',';
2) Load the file:
load data local inpath '/home/jifeng/hadoop/Freq/dic.dic' into table dic;
3) Sort by the Chinese word column and take the first 10 rows:
select word from dic order by word limit 10;
This is equivalent to a select top 10 * from dic in SQL Server style.
hive> create table dic(word string,num string,class string)row format delimited fields terminated by ',';
OK
Time taken: 0.194 seconds
hive> load data local inpath '/home/jifeng/hadoop/Freq/dic.dic' into table dic;
Copying data from file:/home/jifeng/hadoop/Freq/dic.dic
Copying file: file:/home/jifeng/hadoop/Freq/dic.dic
Loading data to table default.dic
Table default.dic stats: [num_partitions: 0, num_files: 1, num_rows: 0, total_size: 2961911, raw_data_size: 0]
OK
Time taken: 0.281 seconds
hive> select word from dic order by word limit 10;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:set mapred.reduce.tasks=<number>
Starting Job = job_201408202333_0004, Tracking URL = http://jifeng01:50030/jobdetails.jsp?jobid=job_201408202333_0004
Kill Command = /home/jifeng/hadoop/hadoop-1.2.1/libexec/../bin/hadoop job -kill job_201408202333_0004
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2014-08-21 10:15:54,411 Stage-1 map = 0%, reduce = 0%
2014-08-21 10:15:56,430 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.61 sec
2014-08-21 10:15:57,439 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.61 sec
2014-08-21 10:15:58,448 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.61 sec
2014-08-21 10:15:59,459 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.61 sec
2014-08-21 10:16:00,469 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.61 sec
2014-08-21 10:16:01,477 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.61 sec
2014-08-21 10:16:02,482 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 0.61 sec
2014-08-21 10:16:03,489 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 1.1 sec
2014-08-21 10:16:04,504 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 1.1 sec
MapReduce Total cumulative CPU time: 1 seconds 100 msec
Ended Job = job_201408202333_0004
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 1 Cumulative CPU: 1.1 sec HDFS Read: 2962117 HDFS Write: 97 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 100 msec
OK
一一
一一七
一一三
一一九
一一二
一一四
一一點
一丁點兒
一七
一七三
Time taken: 13.937 seconds, Fetched: 10 row(s)
hive>
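Because Hive's order by funnels every row through a single reducer (note "number of reducers: 1" in the job log above), the limit 10 is a true global top 10. The ordering is Hive's default lexicographic string comparison, i.e. by Unicode code point for these Chinese words, not by pinyin. As a small variation, order by also accepts desc, so the "bottom 10" can be fetched the same way; a sketch:

```sql
-- Global "bottom 10": ORDER BY in Hive supports DESC like standard SQL.
-- Still a single reducer, so still a total order.
select word from dic order by word desc limit 10;
```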
Testing sort by:
select word from dic sort by word limit 10;
Note in the transcript below that this query launches two MapReduce jobs: the first sorts and limits within each reducer, and a second job merges those partial results into the final 10 rows.
hive> select word from dic sort by word limit 10;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:set mapred.reduce.tasks=<number>
Starting Job = job_201408202333_0014, Tracking URL = http://jifeng01:50030/jobdetails.jsp?jobid=job_201408202333_0014
Kill Command = /home/jifeng/hadoop/hadoop-1.2.1/libexec/../bin/hadoop job -kill job_201408202333_0014
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2014-08-21 13:19:44,026 Stage-1 map = 0%, reduce = 0%
2014-08-21 13:19:46,040 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.6 sec
2014-08-21 13:19:47,045 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.6 sec
2014-08-21 13:19:48,052 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.6 sec
2014-08-21 13:19:49,058 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.6 sec
2014-08-21 13:19:50,065 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.6 sec
2014-08-21 13:19:51,071 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.6 sec
2014-08-21 13:19:52,077 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.6 sec
2014-08-21 13:19:53,083 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 1.05 sec
2014-08-21 13:19:54,089 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 1.05 sec
MapReduce Total cumulative CPU time: 1 seconds 50 msec
Ended Job = job_201408202333_0014
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:set mapred.reduce.tasks=<number>
Starting Job = job_201408202333_0015, Tracking URL = http://jifeng01:50030/jobdetails.jsp?jobid=job_201408202333_0015
Kill Command = /home/jifeng/hadoop/hadoop-1.2.1/libexec/../bin/hadoop job -kill job_201408202333_0015
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2014-08-21 13:19:56,360 Stage-2 map = 0%, reduce = 0%
2014-08-21 13:19:58,372 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.28 sec
2014-08-21 13:19:59,377 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.28 sec
2014-08-21 13:20:00,385 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.28 sec
2014-08-21 13:20:01,391 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.28 sec
2014-08-21 13:20:02,398 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.28 sec
2014-08-21 13:20:03,402 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.28 sec
2014-08-21 13:20:04,407 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.28 sec
2014-08-21 13:20:05,413 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 0.78 sec
2014-08-21 13:20:06,420 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 0.78 sec
MapReduce Total cumulative CPU time: 780 msec
Ended Job = job_201408202333_0015
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 1 Cumulative CPU: 1.05 sec HDFS Read: 2962117 HDFS Write: 363 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 0.78 sec HDFS Read: 819 HDFS Write: 97 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 830 msec
OK
一一
一一七
一一三
一一九
一一二
一一四
一一點
一丁點兒
一七
一七三
Time taken: 25.807 seconds, Fetched: 10 row(s)
hive>
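With only one reducer, sort by happens to produce the same output as order by; the difference appears only with multiple reducers, where sort by orders rows within each reducer but not across them. A sketch to make that visible, using the mapred.reduce.tasks setting that the job log itself suggests (the interleaving of the output will vary from run to run):

```sql
-- Force two reducers: each reducer's slice of the output is sorted
-- internally, but the concatenated result is not globally sorted.
set mapred.reduce.tasks=2;
select word from dic sort by word;
```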