4. Hive Data Operations: DDL operations — CRUD for databases, CRUD for tables, partitions, views, indexes, SHOW commands, and more
4.1 DDL Operations
4.1.1 Create/Drop/Alter/Use Database
4.1.1.1 Create Database
4.1.1.2 Drop Database
4.1.1.3 Alter Database
4.1.1.4 Use Database
4.1.2 Create Table
4.1.2.1 Managed and External Tables
4.1.2.2 Storage Formats
4.1.2.3 Create a Managed Table with Partitions
4.1.2.4 Create External Tables
4.1.2.5 Create Table As Select (CTAS)
4.1.2.6 Create Table Like
4.1.2.7 Bucketed Sorted Tables
4.1.2.8 Skewed Tables
4.1.2.9 Temporary Tables
4.1.2.10 Transactional Tables
4.1.2.11 Constraints
4.1.3 Drop Table
4.1.4 Truncate Table
4.1.5 Alter Table
4.1.5.1 Alter Table
4.1.5.2 Alter Table Properties
4.1.5.3 Alter Table Comment
4.1.5.4 Add SerDe Properties
4.1.5.5 Alter Table Storage Properties
4.1.5.6 Alter Table Skewed or Stored as Directories
4.1.5.7 Alter Table Skewed
4.1.5.8 Alter Table Not Skewed
4.1.5.9 Alter Table Not Stored as Directories
4.1.5.10 Alter Table Set Skewed Location
4.1.5.11 Alter Table Constraints
4.1.6 Alter Partition
4.1.6.1 Add Partitions
4.1.6.2 Dynamic Partitions
4.1.6.3 Rename Partition
4.1.6.4 Exchange Partition
4.1.6.5 Recover Partitions (MSCK REPAIR TABLE)
4.1.6.6 Drop Partitions
4.1.6.7 (Un)Archive Partition
4.1.7 Alter Either Table or Partition
4.1.7.1 Alter Table/Partition File Format
4.1.7.2 Alter Table/Partition Location
4.1.8 Alter Column
4.1.8.1 Change Column Name/Type/Position/Comment
4.1.8.2 Add/Replace Columns
4.1.9 Create/Drop/Alter View
4.1.9.1 Create View
4.1.9.2 Drop View
4.1.9.3 Alter View Properties
4.1.9.4 Alter View As Select
4.1.10 Create/Drop/Alter Index
4.1.10.1 Create Index
4.1.10.2 Drop Index
4.1.10.3 Alter Index
4.1.11 Show
- Show Databases
- Show Tables/Views/Partitions/Indexes
- Show Tables
- Show Views
- Show Partitions
- Show Table/Partition Extended
- Show Table Properties
- Show Create Table
- Show Indexes
- Show Columns
- Show Functions
- Show Granted Roles and Privileges
- Show Locks
- Show Conf
- Show Transactions
- Show Compactions
4.1 DDL Operations
4.1.1 Create/Drop/Alter/Use Database
4.1.1.1 Create Database
```sql
CREATE (DATABASE|SCHEMA) [IF NOT EXISTS] database_name
  [COMMENT database_comment]
  [LOCATION hdfs_path]
  [WITH DBPROPERTIES (property_name=property_value, ...)];
```

The keywords SCHEMA and DATABASE are interchangeable; they mean the same thing.
Example:
```sql
CREATE DATABASE IF NOT EXISTS demo_db
COMMENT 'demo'
LOCATION '/hive/demo/demo_db'
WITH DBPROPERTIES ("name"="test demo");
```

```
hive> show databases;
OK
default
demo_db
Time taken: 0.013 seconds, Fetched: 4 row(s)
```

4.1.1.2 Drop Database
```sql
DROP (DATABASE|SCHEMA) [IF EXISTS] database_name [RESTRICT|CASCADE];
```

Example:
```
hive> DROP DATABASE IF EXISTS test_database;
hive> show databases;
OK
default
demo_db
shopping
test_database
Time taken: 0.008 seconds, Fetched: 4 row(s)
hive> DROP DATABASE IF EXISTS test_database CASCADE;
hive> show databases;
OK
default
demo_db
shopping
Time taken: 0.011 seconds, Fetched: 3 row(s)
```

The default behavior is RESTRICT, which refuses to drop a database that still contains tables (which is why test_database survives the first DROP above); CASCADE drops the contained tables as well.

4.1.1.3 Alter Database
```sql
ALTER (DATABASE|SCHEMA) database_name SET DBPROPERTIES (property_name=property_value, ...);   -- (Note: SCHEMA added in Hive 0.14.0)
ALTER (DATABASE|SCHEMA) database_name SET OWNER [USER|ROLE] user_or_role;   -- (Note: Hive 0.13.0 and later; SCHEMA added in Hive 0.14.0)
ALTER (DATABASE|SCHEMA) database_name SET LOCATION hdfs_path;   -- (Note: Hive 2.2.1, 2.4.0 and later)
```

Example:
```
hive> ALTER DATABASE demo_db SET DBPROPERTIES ("name"="test demo db");
```

4.1.1.4 Use Database
```sql
USE database_name;
USE DEFAULT;
```

Example:
```
hive> use demo_db;
OK
Time taken: 0.03 seconds
hive> select current_database();
OK
demo_db
Time taken: 0.148 seconds, Fetched: 1 row(s)
```

4.1.2 Create Table
```sql
CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name    -- (Note: TEMPORARY available in Hive 0.14.0 and later)
  [(col_name data_type [COMMENT col_comment], ... [constraint_specification])]
  [COMMENT table_comment]
  [PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]
  [CLUSTERED BY (col_name, col_name, ...) [SORTED BY (col_name [ASC|DESC], ...)] INTO num_buckets BUCKETS]
  [SKEWED BY (col_name, col_name, ...)                     -- (Note: Available in Hive 0.10.0 and later)
     ON ((col_value, col_value, ...), (col_value, col_value, ...), ...)
     [STORED AS DIRECTORIES]]
  [
   [ROW FORMAT row_format]
   [STORED AS file_format]
     | STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)]   -- (Note: Available in Hive 0.6.0 and later)
  ]
  [LOCATION hdfs_path]
  [TBLPROPERTIES (property_name=property_value, ...)]      -- (Note: Available in Hive 0.6.0 and later)
  [AS select_statement];   -- (Note: Available in Hive 0.5.0 and later; not supported for external tables)

CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name
  LIKE existing_table_or_view_name
  [LOCATION hdfs_path];

data_type
  : primitive_type
  | array_type
  | map_type
  | struct_type
  | union_type   -- (Note: Available in Hive 0.7.0 and later)

primitive_type
  : TINYINT
  | SMALLINT
  | INT
  | BIGINT
  | BOOLEAN
  | FLOAT
  | DOUBLE
  | DOUBLE PRECISION          -- (Note: Available in Hive 2.2.0 and later)
  | STRING
  | BINARY                    -- (Note: Available in Hive 0.8.0 and later)
  | TIMESTAMP                 -- (Note: Available in Hive 0.8.0 and later)
  | DECIMAL                   -- (Note: Available in Hive 0.11.0 and later)
  | DECIMAL(precision, scale) -- (Note: Available in Hive 0.13.0 and later)
  | DATE                      -- (Note: Available in Hive 0.12.0 and later)
  | VARCHAR                   -- (Note: Available in Hive 0.12.0 and later)
  | CHAR                      -- (Note: Available in Hive 0.13.0 and later)

array_type
  : ARRAY < data_type >

map_type
  : MAP < primitive_type, data_type >

struct_type
  : STRUCT < col_name : data_type [COMMENT col_comment], ...>

union_type
  : UNIONTYPE < data_type, data_type, ... >   -- (Note: Available in Hive 0.7.0 and later)

row_format
  : DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]] [COLLECTION ITEMS TERMINATED BY char]
      [MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
      [NULL DEFINED AS char]   -- (Note: Available in Hive 0.13 and later)
  | SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, property_name=property_value, ...)]

file_format:
  : SEQUENCEFILE
  | TEXTFILE    -- (Default, depending on hive.default.fileformat configuration)
  | RCFILE      -- (Note: Available in Hive 0.6.0 and later)
  | ORC         -- (Note: Available in Hive 0.11.0 and later)
  | PARQUET     -- (Note: Available in Hive 0.13.0 and later)
  | AVRO        -- (Note: Available in Hive 0.14.0 and later)
  | JSONFILE    -- (Note: Available in Hive 4.0.0 and later)
  | INPUTFORMAT input_format_classname OUTPUTFORMAT output_format_classname

constraint_specification:
  : [, PRIMARY KEY (col_name, ...) DISABLE NOVALIDATE ]
    [, CONSTRAINT constraint_name FOREIGN KEY (col_name, ...) REFERENCES table_name(col_name, ...) DISABLE NOVALIDATE ]
```

CREATE TABLE creates a table with the given name. If a table with the same name already exists, an "already exists" error is thrown; you can skip the error with IF NOT EXISTS.

1. Table names and column names are case-insensitive, but SerDe and property names are case-sensitive.
   A: In Hive 0.12 and earlier, only alphanumeric and underscore characters are allowed in table and column names.
   B: In Hive 0.13 and later, column names can contain any Unicode character (see HIVE-6013); however, dot (.) and colon (:) yield errors on querying, so they are disallowed in Hive 1.2.0 (see HIVE-10120). Any column name specified within backticks (`) is treated literally; within a backquoted string, use double backticks (``) to represent a backtick character. Backtick quotation also enables the use of reserved keywords for table and column identifiers.
   C: To revert to pre-0.13.0 behavior and restrict column names to alphanumeric and underscore characters, set the configuration property hive.support.quoted.identifiers to none. In this configuration, backquoted names are interpreted as regular expressions. For details, see Supporting Quoted Identifiers in Column Names.
2. Table and column comments are string literals (single-quoted).
3. A table created without the EXTERNAL clause is called a managed (internal) table, because Hive manages its data. To find out whether a table is managed or external, run DESCRIBE EXTENDED table_name and look at tableType (tableType:EXTERNAL_TABLE means it is an external table; tableType:MANAGED_TABLE means it is a managed table).
4. The TBLPROPERTIES clause lets you tag the table definition with your own metadata key/value pairs. For example:
   A: TBLPROPERTIES ("comment"="table_comment")
   B: TBLPROPERTIES ("hbase.table.name"="table_name")
   C: TBLPROPERTIES ("immutable"="true") or ("immutable"="false")
   D: TBLPROPERTIES ("orc.compress"="ZLIB") or ("orc.compress"="SNAPPY") or ("orc.compress"="NONE") and other ORC properties
   E: TBLPROPERTIES ("transactional"="true") or ("transactional"="false"), supported since 0.14.0; the default is "false"
   F: TBLPROPERTIES ("NO_AUTO_COMPACTION"="true") or ("NO_AUTO_COMPACTION"="false"); the default is "false"
   G: TBLPROPERTIES ("compactor.mapreduce.map.memory.mb"="mapper_memory")
   H: TBLPROPERTIES ("compactorthreshold.hive.compactor.delta.num.threshold"="threshold_num")
   I: TBLPROPERTIES ("compactorthreshold.hive.compactor.delta.pct.threshold"="threshold_pct")
   J: TBLPROPERTIES ("auto.purge"="true") or ("auto.purge"="false"), supported since 1.2.0 (HIVE-9118)
   K: TBLPROPERTIES ("EXTERNAL"="TRUE") in release 0.6.0+ (HIVE-1329): changes a managed table into an external table, and back again with "FALSE". (As of Hive 2.4.0, the value of the 'EXTERNAL' property is parsed as a boolean, i.e. a case-insensitive true or false.)
   L: TBLPROPERTIES ("external.table.purge"="true") in release 4.0.0+ (HIVE-19981): when set on an external table, dropping the table also deletes the data.
5. To place the table in a specific database, either issue USE database_name before the CREATE TABLE statement, or qualify the table name with the database name ("database_name.table_name", available since Hive 0.7). The keyword "default" can be used for the default database.

4.1.2.1 Managed and External Tables
默認(rèn)情況下,Hive創(chuàng)建的是內(nèi)部表,這種表所有的文件,元數(shù)據(jù)和統(tǒng)計(jì)信息被Hive的內(nèi)部進(jìn)程管理,如果想了解更多關(guān)于內(nèi)部表和外部表,可以查看https://cwiki.apache.org/confluence/display/Hive/Managed+vs.+External+Tables
4.1.2.2 Storage Formats
Hive supports both built-in and custom-developed file formats. For more on compressed storage, see https://cwiki.apache.org/confluence/display/Hive/CompressedStorage. The built-in file formats include:
| STORED AS TEXTFILE | Stored as plain text files. TEXTFILE is the default file format, unless the configuration property hive.default.fileformat says otherwise. Use the DELIMITED clause to read delimited files. Escaping can be enabled with the ESCAPED BY clause (e.g. ESCAPED BY '\'). A custom NULL format can be specified with the 'NULL DEFINED AS' clause (the default is '\N'). |
| STORED AS SEQUENCEFILE | Stored as compressed Sequence Files. |
| STORED AS ORC | Stored as ORC file format. Supports ACID transactions & Cost-based Optimizer (CBO). Stores column-level metadata. |
| STORED AS PARQUET | Stored as Parquet format (Hive 0.13.0 and later); in Hive 0.10, 0.11, or 0.12, use the ROW FORMAT SERDE ... STORED AS INPUTFORMAT ... OUTPUTFORMAT ... syntax instead. |
| STORED AS AVRO | Stored as Avro format (Hive 0.14.0 and later). |
| STORED AS RCFILE | Stored as Record Columnar File format. |
| STORED AS JSONFILE | Stored as JSON file format (Hive 4.0.0 and later). |
| STORED BY | Stored by a non-native table format. Creates or links to a non-native table, for example a table backed by HBase, Druid, or Accumulo. See https://cwiki.apache.org/confluence/display/Hive/StorageHandlers for more. |
| INPUTFORMAT and OUTPUTFORMAT | Specify the InputFormat and OutputFormat class names manually, as string literals, e.g. 'org.apache.hadoop.hive.contrib.fileformat.base64.Base64TextInputFormat'. For LZO compression, use INPUTFORMAT "com.hadoop.mapred.DeprecatedLzoTextInputFormat" OUTPUTFORMAT "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat" (see https://cwiki.apache.org/confluence/display/Hive/LanguageManual+LZO for more on LZO). |
4.1.2.3 Create a Managed Table with Partitions
```sql
CREATE TABLE page_view(viewTime INT, userid BIGINT,
     page_url STRING, referrer_url STRING,
     ip STRING COMMENT 'IP Address of the User')
 COMMENT 'This is the page view table'
 PARTITIONED BY(dt STRING, country STRING)
 ROW FORMAT DELIMITED
   FIELDS TERMINATED BY '\001'
 STORED AS SEQUENCEFILE;
```

The table above is partitioned by the dt and country columns and stored as SEQUENCEFILE; the field delimiter is '\001'. Its data is stored under <hive.metastore.warehouse.dir>/page_view, i.e. under the hive.metastore.warehouse.dir location configured in hive-site.xml.
4.1.2.4 Create External Tables
The EXTERNAL keyword lets you create a table with an explicit LOCATION instead of Hive's default data directory. This is useful when the data already exists; when you drop the table, the data of an EXTERNAL table is not deleted.
If you want the data of an external table to be deleted on drop as well, set the table property "external.table.purge"="true".
An EXTERNAL table can be stored anywhere in the filesystem, not only under the directory set by hive.metastore.warehouse.dir.
```sql
CREATE EXTERNAL TABLE page_view(viewTime INT, userid BIGINT,
     page_url STRING, referrer_url STRING,
     ip STRING COMMENT 'IP Address of the User',
     country STRING COMMENT 'country of origination')
 COMMENT 'This is the staging page view table'
 ROW FORMAT DELIMITED FIELDS TERMINATED BY '\054'
 STORED AS TEXTFILE
 LOCATION '/hive/test/page_view';
```

4.1.2.5 Create Table As Select (CTAS)
CTAS has the following restrictions:
1. The target table cannot be an external table.
2. The target table cannot be a bucketed table.
In the example above, new_key_value_store is the target table; its schema, (new_key DOUBLE, key_value_pair STRING), is derived from the results of the SELECT statement. If the SELECT does not specify column aliases, the column names are automatically assigned to _col0, _col1, and so on.
4.1.2.6 Create Table Like
The LIKE form of CREATE TABLE lets you copy an existing table definition exactly (without copying its data). The syntax is:
```sql
CREATE TABLE empty_key_value_store
  LIKE key_value_store [TBLPROPERTIES (property_name=property_value, ...)];
```

4.1.2.7 Bucketed Sorted Tables
```sql
CREATE TABLE page_view(viewTime INT, userid BIGINT,
     page_url STRING, referrer_url STRING,
     ip STRING COMMENT 'IP Address of the User')
 COMMENT 'This is the page view table'
 PARTITIONED BY(dt STRING, country STRING)
 CLUSTERED BY(userid) SORTED BY(viewTime) INTO 32 BUCKETS
 ROW FORMAT DELIMITED
   FIELDS TERMINATED BY '\001'
   COLLECTION ITEMS TERMINATED BY '\002'
   MAP KEYS TERMINATED BY '\003'
 STORED AS SEQUENCEFILE;
```

In the example above, the table is clustered (bucketed) by the userid column into 32 buckets, and within each bucket the data is sorted by viewTime in ascending order.
The CLUSTERED BY and SORTED BY clauses of CREATE TABLE do not affect how data is inserted into the table, only how it is read. This means users must be careful to insert data correctly: set the number of reducers equal to the number of buckets, and use CLUSTER BY and SORT BY in the insert query.
4.1.2.8 Skewed Tables
Skewed tables can improve performance for tables in which one or more columns have skewed (heavily repeated) values. By specifying the values that occur very often, you let Hive record those column names and values in the metastore and optimize joins accordingly. If STORED AS DIRECTORIES is also specified, i.e. list bucketing is used, Hive additionally creates subdirectories for the skewed values, and queries are optimized further.
可以再創(chuàng)建表是指定為 Skewed Table,如下例子,STORED AS DIRECTORIES是可選擇的,它指定了使用列表桶(ListBucketing)。
更多關(guān)于Skewed表的信息,可以查看:https://blog.csdn.net/mhtian2015/article/details/78931236
4.1.2.9 Temporary Tables
臨時(shí)表只能在當(dāng)前的Hive會(huì)話中被看到,數(shù)據(jù)將會(huì)被存儲(chǔ)在用戶的scratch目錄(hive-site.xml中指定的表中),在會(huì)話的時(shí)候?qū)⒈粍h除。
臨時(shí)表創(chuàng)建的限制條件:
1.分區(qū)列不支持。
2.不支持創(chuàng)建索引。
Starting with Hive 1.1.0, the storage policy for temporary tables can be set to memory, ssd, or default via the hive.exec.temporary.table.storage configuration parameter. See http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html#Storage_Types_and_Storage_Policies for details.
```sql
CREATE TEMPORARY TABLE list_bucket_multiple (col1 STRING, col2 int, col3 STRING);
```

4.1.2.10 Transactional Tables
語(yǔ)法是:
```sql
CREATE TRANSACTIONAL TABLE transactional_table_test(key string, value string)
  PARTITIONED BY(ds string) STORED AS ORC;
```

4.1.2.11 Constraints
Hive supports unvalidated primary and foreign key constraints. Some SQL tools can generate more efficient queries when constraints are present. Since the constraints are not validated, an upstream system must ensure data integrity before loading data into Hive.
Example:
```sql
create table pk(id1 integer, id2 integer,
  primary key(id1, id2) disable novalidate);

create table fk(id1 integer, id2 integer,
  constraint c1 foreign key(id1, id2) references pk(id2, id1) disable novalidate);
```

4.1.3 Drop Table
```sql
DROP TABLE [IF EXISTS] table_name [PURGE];   -- (Note: PURGE available in Hive 0.14.0 and later)
```

DROP TABLE removes both the metadata and the data of the table. If Trash is configured (and PURGE is not specified), the data is actually moved to the .Trash/Current directory. The metadata, however, is completely lost.
當(dāng)刪除一個(gè)外部表的時(shí)候,存儲(chǔ)在文件系統(tǒng)中的表述就實(shí)際上是沒有被刪除的。
如果外部表的表屬性設(shè)置了external.table.purge=true,那么數(shù)據(jù)也同樣會(huì)被刪除。
當(dāng)刪除含有視圖引用的表時(shí),不會(huì)給出任何警告(視圖將會(huì)被掛起,無(wú)效了,必須用戶刪除或重新創(chuàng)建)
Otherwise, the table information is removed from the metastore and the raw data is removed as if by 'hadoop dfs -rm'. In many cases the data is moved into the .Trash folder in the user's home directory, so a user who mistakenly drops a table can recover the data by re-creating a table with the same schema, re-creating any needed partitions, and then manually moving the data back into place with Hadoop.
If PURGE is specified, the table data does not go to the .Trash/Current directory and therefore cannot be recovered after a mistaken drop. The purge behavior can also be set via the table property 'auto.purge' (see https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-listTableProperties for more).
In Hive 0.7.0 and later, dropping a table that does not exist returns an error, unless IF EXISTS is specified in the DROP statement or the configuration property hive.exec.drop.ignorenonexistent is set to true.
4.1.4 Truncate Table
```sql
TRUNCATE TABLE table_name [PARTITION partition_spec];

partition_spec:
  : (partition_column = partition_col_value, partition_column = partition_col_value, ...)
```

Removes all rows from a table or partition(s). If the filesystem Trash is enabled, the rows go to the trash; otherwise they are deleted. Currently the target table must be a native (managed) table, or an exception is thrown.
Starting with Hive 2.3.0, if the table property "auto.purge" is set to true in TBLPROPERTIES, the data is not moved to Trash and cannot be recovered after a mistaken TRUNCATE TABLE.
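A quick sketch against the page_view table created earlier (the partition values are illustrative):

```sql
-- remove all rows from every partition
TRUNCATE TABLE page_view;

-- remove rows from a single partition only
TRUNCATE TABLE page_view PARTITION (dt='2008-08-08', country='us');
```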
4.1.5 Alter Table
The ALTER TABLE statements let you change the structure of an existing table: add columns or partitions, change the SerDe, add table or SerDe properties, or rename the table itself.
Similarly, the ALTER TABLE PARTITION statements let you change the properties of a specific partition.
4.1.5.1 Alter Table
```sql
ALTER TABLE table_name RENAME TO new_table_name;
```

This statement renames the table to a new name.
4.1.5.2 Alter Table Properties
```sql
ALTER TABLE table_name SET TBLPROPERTIES table_properties;

table_properties:
  : (property_name = property_value, property_name = property_value, ... )
```

With the statement above you can add your own metadata to a table. The last-modified user and last-modified time properties are maintained automatically by Hive; users can add their own properties to this list. Run DESCRIBE EXTENDED table_name to retrieve this information.
4.1.5.3 Alter Table Comment
To change the comment of a table, change the comment property in TBLPROPERTIES:
```sql
ALTER TABLE table_name SET TBLPROPERTIES ('comment' = new_comment);
```

4.1.5.4 Add SerDe Properties
```sql
ALTER TABLE table_name [PARTITION partition_spec] SET SERDE serde_class_name [WITH SERDEPROPERTIES serde_properties];
ALTER TABLE table_name [PARTITION partition_spec] SET SERDEPROPERTIES serde_properties;

serde_properties:
  : (property_name = property_value, property_name = property_value, ... )
```

For example:
```sql
ALTER TABLE table_name SET SERDEPROPERTIES ('field.delim' = ',');
```

4.1.5.5 Alter Table Storage Properties
```sql
ALTER TABLE table_name CLUSTERED BY (col_name, col_name, ...) [SORTED BY (col_name, ...)]
  INTO num_buckets BUCKETS;
```

These statements change the table's physical storage properties.
4.1.5.6 Alter Table Skewed or Stored as Directories
一個(gè)表的SKEWED和STORED AS DIRECTORIES選項(xiàng)可以通過ALTER TABLE語(yǔ)句來(lái)改變。
4.1.5.7 Alter Table Skewed
```sql
ALTER TABLE table_name SKEWED BY (col_name1, col_name2, ...)
  ON ([(col_name1_value, col_name2_value, ...) [, (col_name1_value, col_name2_value), ...]])
  [STORED AS DIRECTORIES];
```

The STORED AS DIRECTORIES option determines whether the skewed table uses the list bucketing feature, which creates subdirectories for the skewed values.
4.1.5.8 Alter Table Not Skewed
```sql
ALTER TABLE table_name NOT SKEWED;
```

The NOT SKEWED option makes the table non-skewed and turns off the list bucketing feature.
4.1.5.9 Alter Table Not Stored as Directories
```sql
ALTER TABLE table_name NOT STORED AS DIRECTORIES;
```

4.1.5.10 Alter Table Set Skewed Location
```sql
ALTER TABLE table_name SET SKEWED LOCATION (col_name1="location1" [, col_name2="location2", ...] );
```

4.1.5.11 Alter Table Constraints
通過ALTER TABLE語(yǔ)句,表的Constraints可以被添加或remove。
```sql
ALTER TABLE table_name ADD CONSTRAINT constraint_name PRIMARY KEY (column, ...) DISABLE NOVALIDATE;
ALTER TABLE table_name ADD CONSTRAINT constraint_name FOREIGN KEY (column, ...) REFERENCES table_name(column, ...) DISABLE NOVALIDATE RELY;
ALTER TABLE table_name DROP CONSTRAINT constraint_name;
```

4.1.6 Alter Partition
Partitions can be added, renamed, exchanged (moved), dropped, or (un)archived.
4.1.6.1 Add Partitions
```sql
ALTER TABLE table_name ADD [IF NOT EXISTS] PARTITION partition_spec [LOCATION 'location']
  [, PARTITION partition_spec [LOCATION 'location'], ...];

partition_spec:
  : (partition_column = partition_col_value, partition_column = partition_col_value, ...)
```

For example:
```sql
ALTER TABLE page_view ADD PARTITION (dt='2008-08-08', country='us') location '/path/to/us/part080808'
                          PARTITION (dt='2008-08-09', country='us') location '/path/to/us/part080809';
```

This form, adding multiple partitions with a single ALTER TABLE statement, is available in Hive 0.8 and later.
In Hive 0.7, to add multiple partitions you have to use the following form instead:
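The statement the sentence above refers to is missing from this text; a sketch of the Hive 0.7 style, one ADD PARTITION per statement, using the same page_view partitions:

```sql
ALTER TABLE page_view ADD PARTITION (dt='2008-08-08', country='us') location '/path/to/us/part080808';
ALTER TABLE page_view ADD PARTITION (dt='2008-08-09', country='us') location '/path/to/us/part080809';
```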
4.1.6.2 Dynamic Partitions
分區(qū)可以動(dòng)態(tài)的添加到一個(gè)表中,使用Hive的INSERT語(yǔ)句。通過下面的連接地址查看更多的detail和例子:
- Design Document for Dynamic Partitions: https://cwiki.apache.org/confluence/display/Hive/DynamicPartitions
- Tutorial: Dynamic-Partition Insert: https://cwiki.apache.org/confluence/display/Hive/Tutorial#Tutorial-Dynamic-PartitionInsert
- Hive DML: Dynamic Partition Inserts: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-DynamicPartitionInserts
- HCatalog Dynamic Partitioning: https://cwiki.apache.org/confluence/display/Hive/HCatalog+DynamicPartitions
4.1.6.3 Rename Partition
```sql
ALTER TABLE table_name PARTITION partition_spec RENAME TO PARTITION partition_spec;
```

4.1.6.4 Exchange Partition
分區(qū)可以在表之間進(jìn)行交換,
-- 將分區(qū)從 table_name_1 移動(dòng)到 table_name_2 表 ALTER TABLE table_name_2 EXCHANGE PARTITION (partition_spec) WITH TABLE table_name_1; -- 多分區(qū) ALTER TABLE table_name_2 EXCHANGE PARTITION (partition_spec, partition_spec2, ...) WITH TABLE table_name_1; Exchange Partition允許你將一個(gè)表的分區(qū)到另外一個(gè)表中,能夠做這種交換的前提是擁有相同的schema并且沒有這個(gè)分區(qū)。4.1.6.5 Recover Partitions (MSCK REPAIR TABLE)
hive在元數(shù)據(jù)中保存著分區(qū)信息,如果直接用 hadoop fs -put 命名在HDFS上添加分區(qū),元數(shù)據(jù)不會(huì)意識(shí)到。
需要用戶在hive上為每個(gè)新分區(qū)執(zhí)行ALTER TABLE table_name ADD PARTITION,元數(shù)據(jù)才會(huì)意識(shí)到。
用戶可以用元數(shù)據(jù)檢查命令修復(fù)表,它會(huì)添加新分區(qū)的元數(shù)據(jù)信息到hive的元數(shù)據(jù)中。換句話說,這個(gè)命令會(huì)把HDFS上有的分區(qū),但是元數(shù)據(jù)中沒有的分區(qū),補(bǔ)充到元數(shù)據(jù)信息中。
```sql
MSCK [REPAIR] TABLE table_name [ADD/DROP/SYNC PARTITIONS];
```

4.1.6.6 Drop Partitions
```sql
ALTER TABLE table_name DROP [IF EXISTS] PARTITION partition_spec[, PARTITION partition_spec, ...]
  [IGNORE PROTECTION] [PURGE];   -- (Note: PURGE available in Hive 1.2.0 and later, IGNORE PROTECTION not available 2.0.0 and later)
```

You can use ALTER TABLE DROP PARTITION to drop a partition of a table. This removes both the partition's metadata and its data. If Trash is configured, the data is actually moved to the .Trash/Current directory unless PURGE is specified, but the metadata is completely lost.
If PURGE is specified, the dropped partition's data does not go to the .Trash/Current directory, so it cannot be recovered after a mistaken drop:
```sql
ALTER TABLE table_name DROP [IF EXISTS] PARTITION partition_spec PURGE;   -- (Note: Hive 1.2.0 and later)
```

The purge option can also be set in TBLPROPERTIES, via the 'auto.purge' property.
In Hive 0.7.0 and later, dropping a partition that does not exist returns an error, unless IF EXISTS is specified or the configuration property hive.exec.drop.ignorenonexistent is set to true.
For example:
```sql
ALTER TABLE page_view DROP PARTITION (dt='2008-08-08', country='us');
```

4.1.6.7 (Un)Archive Partition
```sql
ALTER TABLE table_name ARCHIVE PARTITION partition_spec;
ALTER TABLE table_name UNARCHIVE PARTITION partition_spec;
```

Archiving is the process of moving a partition's files into a Hadoop Archive (HAR). Note that only the file count is reduced; HAR does not provide any compression. For more on archiving, see https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Archiving
4.1.7 Alter Either Table or Partition
4.1.7.1 Alter Table/Partition File Format
```sql
ALTER TABLE table_name [PARTITION partition_spec] SET FILEFORMAT file_format;
```

This statement changes the file format of the table or partition; valid values for file_format include TEXTFILE, SEQUENCEFILE, ORC, PARQUET, AVRO, RCFILE, and JSONFILE. The operation only changes the table metadata; any conversion of existing data must be done outside of Hive.
4.1.7.2 Alter Table/Partition Location
```sql
ALTER TABLE table_name [PARTITION partition_spec] SET LOCATION "new location";
```

4.1.8 Alter Column
In Hive 0.12.0 and earlier, column names can contain only alphanumeric and underscore characters.
In Hive 0.13.0 and later, by default column names can be quoted with backticks (`) and can contain any Unicode character, although dot (.) and colon (:) yield errors on querying. Within a backquoted string, a pair of backticks (``) represents one backtick character; all other characters are treated literally.
Backtick quotation allows reserved keywords to be used as column and table names.
4.1.8.1 Change Column Name/Type/Position/Comment
The syntax for changing a column's name, type, position, or comment is:
```sql
ALTER TABLE table_name [PARTITION partition_spec] CHANGE [COLUMN] col_old_name col_new_name column_type
  [COMMENT col_comment] [FIRST|AFTER column_name] [CASCADE|RESTRICT];
```

This command lets you change a column's name, data type, comment, or position, or any combination of them. The PARTITION clause is available in Hive 0.14.0 and later. The CASCADE|RESTRICT clause is available in Hive 1.1.0 and later: ALTER TABLE CHANGE COLUMN with CASCADE changes the column in the table's metadata and cascades the same change to the metadata of all partitions; RESTRICT is the default and changes only the table metadata.
Note: ALTER TABLE CHANGE COLUMN CASCADE overrides the partitions' column metadata regardless of the table's or partitions' protection mode; use it with care.
The column change only modifies Hive's metadata; it does not touch the data.
Example:
```sql
CREATE TABLE test_change (a int, b int, c int);

-- Rename column a to a1.
ALTER TABLE test_change CHANGE a a1 INT;

-- Rename column a1 to a2, change its type to STRING, and move it after column b.
ALTER TABLE test_change CHANGE a1 a2 STRING AFTER b;

-- Add a comment to column a2.
ALTER TABLE test_change CHANGE a2 a2 STRING COMMENT 'this is column a2';
```

4.1.8.2 Add/Replace Columns
```sql
ALTER TABLE table_name
  [PARTITION partition_spec]                 -- (Note: Hive 0.14.0 and later)
  ADD|REPLACE COLUMNS (col_name data_type [COMMENT col_comment], ...)
  [CASCADE|RESTRICT]                         -- (Note: Hive 1.1.0 and later)
```

ADD COLUMNS appends the new columns after the existing columns but before the partition columns. REPLACE COLUMNS removes all existing columns and adds the new set of columns; it can therefore also be used to drop columns. For example, ALTER TABLE test_change REPLACE COLUMNS (b int, a2 string); removes column c from the test_change table.
4.1.9 Create/Drop/Alter View
4.1.9.1 Create View
創(chuàng)建視圖的語(yǔ)法如下:
```sql
CREATE VIEW [IF NOT EXISTS] [db_name.]view_name [(column_name [COMMENT column_comment], ...) ]
  [COMMENT view_comment]
  [TBLPROPERTIES (property_name = property_value, ...)]
  AS SELECT ...;
```

Note: a view is read-only; LOAD/INSERT/ALTER cannot be used against it.
Example:
```sql
CREATE VIEW onion_referrers(url COMMENT 'URL of Referring page')
  COMMENT 'Referrers to The Onion website'
  AS
  SELECT DISTINCT referrer_url
  FROM page_view
  WHERE page_url='http://www.theonion.com';
```
4.1.9.2 Drop View
語(yǔ)法:
```sql
DROP VIEW [IF EXISTS] [db_name.]view_name;
```

Example:
```sql
DROP VIEW onion_referrers;
```

4.1.9.3 Alter View Properties
```sql
ALTER VIEW [db_name.]view_name SET TBLPROPERTIES table_properties;

table_properties:
  : (property_name = property_value, property_name = property_value, ...)
```

For example:

```sql
ALTER VIEW onion_referrers SET TBLPROPERTIES ('userName' = 'zhangsan');
```
```sql
ALTER VIEW [db_name.]view_name AS select_statement;
```

Example:
```
hive> ALTER VIEW onion_referrers AS select referrer_url,ip from page_view;
OK
Time taken: 0.188 seconds
hive> show create table onion_referrers;
OK
CREATE VIEW `onion_referrers` AS select `page_view`.`referrer_url`,`page_view`.`ip` from `demo_db`.`page_view`
Time taken: 0.076 seconds, Fetched: 1 row(s)
hive> desc onion_referrers;
OK
referrer_url            string
ip                      string
Time taken: 0.104 seconds, Fetched: 2 row(s)
```

4.1.10 Create/Drop/Alter Index
關(guān)于Hive Indexes的文檔,可以參考:https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Indexing
Index的設(shè)計(jì)文檔:https://cwiki.apache.org/confluence/display/Hive/IndexDev
4.1.10.1 Create Index
索引是標(biāo)準(zhǔn)的數(shù)據(jù)庫(kù)技術(shù),hive 0.7版本之后支持索引。hive索引采用的不是’one size fites all’的索引實(shí)現(xiàn)方式,而是提供插入式接口,并且提供一個(gè)具體的索引實(shí)現(xiàn)作為參考。
Hive indexes have the following characteristics:
1. The index keys are stored redundantly, providing a key-based view of the data.
2. The storage layout is designed to optimize query & lookup performance.
3. For certain queries, the index reduces I/O and thereby improves performance.
Notes:
1. By default, an index is partitioned the same way as its base table.
2. Indexes cannot be created on views.
3. The index storage format can be configured with STORED AS.
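This subsection gives no syntax; a minimal sketch following the Hive indexing documentation (the index name and base table are illustrative; CompactIndexHandler is the reference implementation shipped with Hive):

```sql
CREATE INDEX idx_userid ON TABLE page_view (userid)
  AS 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler'
  WITH DEFERRED REBUILD
  STORED AS RCFILE;

-- DEFERRED REBUILD means the index stays empty until it is (re)built:
ALTER INDEX idx_userid ON page_view REBUILD;
```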
4.1.10.2 Drop Index
語(yǔ)法:
```sql
DROP INDEX [IF EXISTS] index_name ON table_name;
```

4.1.10.3 Alter Index
```sql
ALTER INDEX index_name ON table_name [PARTITION partition_spec] REBUILD;
```

4.1.11 Show
The SHOW commands include:
- Show Databases
- Show Tables/Views/Partitions/Indexes
- Show Tables
- Show Views
- Show Partitions
- Show Table/Partition Extended
- Show Table Properties
- Show Create Table
- Show Indexes
- Show Columns
- Show Functions
- Show Granted Roles and Privileges
- Show Locks
- Show Conf
- Show Transactions
- Show Compactions
4.1.11.1 Show Databases
語(yǔ)法:
```sql
SHOW (DATABASES|SCHEMAS) [LIKE 'identifier_with_wildcards'];
```

The LIKE clause filters the result, for example:
```
hive> show databases;
OK
default
demo_db
shopping
Time taken: 0.016 seconds, Fetched: 3 row(s)
hive> show databases like "demo*|shop*g";
OK
demo_db
shopping
Time taken: 0.013 seconds, Fetched: 2 row(s)
```

4.1.11.2 Show Tables
語(yǔ)法:
```sql
SHOW TABLES [IN database_name] ['identifier_with_wildcards'];
```

Example:
```
hive> use demo_db;
OK
Time taken: 0.028 seconds
hive> select current_database();
OK
demo_db
Time taken: 0.273 seconds, Fetched: 1 row(s)
hive> show tables;
OK
fk
onion_referrers
page_view
pk
test_change
test_serializer
Time taken: 0.085 seconds, Fetched: 6 row(s)
hive> show tables in shopping;
OK
Time taken: 0.066 seconds
hive> show tables in demo_db like '*test*';
OK
test_change
test_serializer
Time taken: 0.06 seconds, Fetched: 2 row(s)
```

4.1.11.3 Show Views
語(yǔ)法:
```sql
SHOW VIEWS [IN/FROM database_name] [LIKE 'pattern_with_wildcards'];
```

Example:
```sql
SHOW VIEWS;                                -- show all views in the current database
SHOW VIEWS 'test_*';                       -- show all views whose names start with "test_"
SHOW VIEWS '*view2';                       -- show all views whose names end with "view2"
SHOW VIEWS LIKE 'test_view1|test_view2';   -- show the views "test_view1" or "test_view2"
SHOW VIEWS FROM test1;                     -- show views from database test1
SHOW VIEWS IN test1;                       -- show views from database test1
SHOW VIEWS IN test1 "test_*";              -- show views from database test1 whose names start with "test_"
```

4.1.11.4 Show Partitions
語(yǔ)法:
```sql
SHOW PARTITIONS table_name;
```

Example:
```sql
SHOW PARTITIONS table_name PARTITION(ds='2010-03-03');           -- (Note: Hive 0.6 and later)
SHOW PARTITIONS table_name PARTITION(hr='12');                   -- (Note: Hive 0.6 and later)
SHOW PARTITIONS table_name PARTITION(ds='2010-03-03', hr='12');  -- (Note: Hive 0.6 and later)
```

4.1.11.5 Show Table/Partition Extended
語(yǔ)法案例:
```sql
SHOW TABLE EXTENDED [IN|FROM database_name] LIKE 'identifier_with_wildcards' [PARTITION(partition_spec)];
```

Example:
```
hive> show table extended like 'test*';
OK
tableName:test_change
owner:root
location:hdfs://hadoop1:9000/hive/demo/demo_db/test_change
inputformat:org.apache.hadoop.mapred.TextInputFormat
outputformat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
columns:struct columns { i32 b, string a2}
partitioned:false
partitionColumns:
totalNumberFiles:0
totalFileSize:0
maxFileSize:0
minFileSize:0
lastAccessTime:0
lastUpdateTime:1559095998866

tableName:test_serializer
owner:root
location:hdfs://hadoop1:9000/hive/demo/demo_db/test_serializer
inputformat:org.apache.hadoop.mapred.TextInputFormat
outputformat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
columns:struct columns { string string1, i32 int1, byte tinyint1, i16 smallint1, i64 bigint1, bool boolean1, float float1, double double1, list<string> list1, map<string,i32> map1, struct<sint:i32,sboolean:bool,sstring:string> struct1, uniontype<float,bool,string> union1, string enum1, i32 nullableint, binary bytes1, binary fixed1}
partitioned:false
partitionColumns:
totalNumberFiles:0
totalFileSize:0
maxFileSize:0
minFileSize:0
lastAccessTime:0
lastUpdateTime:1559003554642

Time taken: 0.063 seconds, Fetched: 30 row(s)
```

4.1.11.6 Show Table Properties
語(yǔ)法:
```sql
SHOW TBLPROPERTIES tblname;
SHOW TBLPROPERTIES tblname("foo");
```

Example:
```
hive> show tblproperties test_change;
OK
COLUMN_STATS_ACCURATE   {"BASIC_STATS":"true"}
last_modified_by        root
last_modified_time      1559097354
numFiles                0
numRows                 0
rawDataSize             0
totalSize               0
transient_lastDdlTime   1559097354
Time taken: 0.058 seconds, Fetched: 8 row(s)
```

4.1.11.7 Show Create Table
```sql
SHOW CREATE TABLE ([db_name.]table_name|view_name);
```

Example:
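The example is missing here; a sketch of what SHOW CREATE TABLE prints, run against the test_change table from section 4.1.8 (the exact output varies with the Hive version and table state):

```
hive> SHOW CREATE TABLE test_change;
OK
CREATE TABLE `test_change`(
  `b` int,
  `a2` string COMMENT 'this is column a2')
ROW FORMAT SERDE
  'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
STORED AS INPUTFORMAT
  'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
```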
4.1.11.8 Show Columns
語(yǔ)法:
```sql
SHOW COLUMNS (FROM|IN) table_name [(FROM|IN) db_name];
```

Example:
```
hive> show columns from test_change;
OK
b
a2
Time taken: 0.072 seconds, Fetched: 2 row(s)
```

4.1.11.9 Show Functions
Lists the registered functions, optionally filtered by a pattern:
```
hive> SHOW FUNCTIONS "a.*";
SHOW FUNCTIONS is deprecated, please use SHOW FUNCTIONS LIKE instead.
OK
abs
acos
add_months
aes_decrypt
aes_encrypt
and
array
array_contains
ascii
asin
assert_true
atan
avg
Time taken: 0.005 seconds, Fetched: 13 row(s)
```

4.1.12 Describe
4.1.12.1 Describe Database
語(yǔ)法:
```sql
DESCRIBE DATABASE [EXTENDED] db_name;
DESCRIBE SCHEMA [EXTENDED] db_name;   -- (Note: Hive 1.1.0 and later)
```

4.1.12.2 Describe Table/View/Column
```sql
DESCRIBE [EXTENDED | FORMATTED]
  [db_name.]table_name [PARTITION partition_spec] [col_name ( [.field_name] | [.'$elem$'] | [.'$key$'] | [.'$value$'] )* ];
```

4.1.12.3 Describe Partition
```
hive> show partitions part_table;
OK
d=abc

hive> DESCRIBE extended part_table partition (d='abc');
OK
i                       int
d                       string

# Partition Information
# col_name              data_type               comment
d                       string

Detailed Partition Information  Partition(values:[abc], dbName:default, tableName:part_table, createTime:1459382234, lastAccessTime:0, sd:StorageDescriptor(cols:[FieldSchema(name:i, type:int, comment:null), FieldSchema(name:d, type:string, comment:null)], location:file:/tmp/warehouse/part_table/d=abc, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), parameters:{numFiles=1, COLUMN_STATS_ACCURATE=true, transient_lastDdlTime=1459382234, numRows=1, totalSize=2, rawDataSize=1})
Time taken: 0.325 seconds, Fetched: 9 row(s)

hive> DESCRIBE formatted part_table partition (d='abc');
OK
# col_name              data_type               comment
i                       int

# Partition Information
# col_name              data_type               comment
d                       string

# Detailed Partition Information
Partition Value:        [abc]
Database:               default
Table:                  part_table
CreateTime:             Wed Mar 30 16:57:14 PDT 2016
LastAccessTime:         UNKNOWN
Protect Mode:           None
Location:               file:/tmp/warehouse/part_table/d=abc
Partition Parameters:
        COLUMN_STATS_ACCURATE   true
        numFiles                1
        numRows                 1
        rawDataSize             1
        totalSize               2
        transient_lastDdlTime   1459382234

# Storage Information
SerDe Library:          org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
InputFormat:            org.apache.hadoop.mapred.TextInputFormat
OutputFormat:           org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
Compressed:             No
Num Buckets:            -1
Bucket Columns:         []
Sort Columns:           []
Storage Desc Params:
        serialization.format    1
Time taken: 0.334 seconds, Fetched: 35 row(s)
```