06_Flink Command-Line Interface: Job Management Examples, Savepoints, and Syntax (run, generic options, yarn-cluster, info, list, stop, cancel, savepoint, etc.)
1.6. Flink Command-Line Interface
1.6.1. Deployment targets
1.6.2. Examples
1.6.3. Job Management Examples
1.6.4. Savepoints
1.6.5. Syntax

1.6. Flink Command-Line Interface
This content is based on: https://ci.apache.org/projects/flink/flink-docs-release-1.11/zh/ops/cli.html
Flink provides a command-line interface (CLI) to run programs packaged as JAR files and to control their execution. The CLI is part of any Flink setup and is available in local single-node setups as well as in distributed setups. It is located under <flink-home>/bin/flink and, by default, connects to the running JobManager that was started from the same installation directory.
The command line can be used to:
- submit jobs for execution,
- cancel a running job,
- provide information about a job,
- list running and waiting jobs,
- trigger and dispose savepoints.
A prerequisite for using the command-line interface is that the JobManager has been started (via <flink-home>/bin/start-cluster.sh) or that a deployment on YARN or Kubernetes is available.
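For a quick local sanity check, a minimal sketch using only the commands mentioned above:

    # Start a local cluster; the CLI defaults to the JobManager started from the same installation.
    ./bin/start-cluster.sh
    # If this prints the (possibly empty) job list, the CLI can reach the JobManager.
    ./bin/flink list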
1.6.1. Deployment targets
Flink has the concept of executors for defining the available deployment targets. You can see the available targets in the output of bin/flink --help, for example:
Options for Generic CLI mode:
     -D <property=value>   Generic configuration options for execution/deployment and for the configured executor. The available options can be found at https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html
     -t,--target <arg>     The deployment target for the given application, which is equivalent to the "execution.target" config option. The currently available targets are: "remote", "local", "kubernetes-session", "yarn-per-job", "yarn-session", "yarn-application" and "kubernetes-application".
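As an illustrative sketch of these generic options, a submission can pick its deployment target with -t and override a config value with -D entirely from the command line. The config key jobmanager.memory.process.size is just one example of the options documented at the link above:

    # Illustrative only: submit to a per-job YARN cluster, overriding one config option with -D.
    ./bin/flink run -t yarn-per-job \
        -D jobmanager.memory.process.size=1600m \
        ./examples/batch/WordCount.jar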
1.6.2. Examples

Run example program with no arguments:

    ./bin/flink run ./examples/batch/WordCount.jar

Run example program with arguments for input and result files:

    ./bin/flink run ./examples/batch/WordCount.jar \
        --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out

Run example program with parallelism 16 and arguments for input and result files:

    ./bin/flink run -p 16 ./examples/batch/WordCount.jar \
        --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out

Run example program with flink log output disabled:

    ./bin/flink run -q ./examples/batch/WordCount.jar

Run example program in detached mode:

    ./bin/flink run -d ./examples/batch/WordCount.jar

Run example program on a specific JobManager:

    ./bin/flink run -m myJMHost:8081 \
        ./examples/batch/WordCount.jar \
        --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out

Run example program with a specific class as an entry point:

    ./bin/flink run -c org.apache.flink.examples.java.wordcount.WordCount \
        ./examples/batch/WordCount.jar \
        --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out

Run example program using a per-job YARN cluster with 2 TaskManagers:

    ./bin/flink run -m yarn-cluster \
        ./examples/batch/WordCount.jar \
        --input hdfs:///user/hamlet.txt --output hdfs:///user/wordcount_out
1.6.3. Job Management Examples

Display the optimized execution plan for the WordCount example program as JSON:

    ./bin/flink info ./examples/batch/WordCount.jar \
        --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out

List scheduled and running jobs (including their JobIDs):

    ./bin/flink list

List scheduled jobs (including their JobIDs):

    ./bin/flink list -s

List running jobs (including their JobIDs):

    ./bin/flink list -r

List all existing jobs (including their JobIDs):

    ./bin/flink list -a

List running Flink jobs inside a Flink YARN session:

    ./bin/flink list -m yarn-cluster -yid <yarnApplicationID> -r

Cancel a job:

    ./bin/flink cancel <jobID>

Cancel a job with a savepoint (deprecated; use "stop" instead):

    ./bin/flink cancel -s [targetDirectory] <jobID>

Gracefully stop a job with a savepoint (streaming jobs only):

    ./bin/flink stop [-p targetDirectory] [-d] <jobID>

1.6.4. Savepoints
Savepoints are controlled via the command line client:
Trigger a Savepoint
    ./bin/flink savepoint <jobId> [savepointDirectory]

This will trigger a savepoint for the job with ID jobId and return the path of the created savepoint. You need this path to restore and dispose savepoints.
Furthermore, you can optionally specify a target file system directory to store the savepoint in. The directory needs to be accessible by the JobManager.
If you don’t specify a target directory, you need to have configured a default directory. Otherwise, triggering the savepoint will fail.
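A minimal sketch of such a default, set in conf/flink-conf.yaml; the HDFS path is a placeholder, and the key name state.savepoints.dir appears in the CLI help text in the Syntax section below:

    # Used by "flink savepoint" and "flink stop" whenever no explicit target directory is given.
    state.savepoints.dir: hdfs:///flink/savepoints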
Trigger a Savepoint with YARN
    ./bin/flink savepoint <jobId> [savepointDirectory] -yid <yarnAppId>

This will trigger a savepoint for the job with ID jobId and YARN application ID yarnAppId, and return the path of the created savepoint.
Everything else is the same as described in the above Trigger a Savepoint section.
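For instance, with placeholder IDs (the application ID is whatever YARN reports for your Flink session):

    # application_XXXX... comes from "yarn application -list" for your running Flink session.
    ./bin/flink savepoint <jobId> hdfs:///flink/savepoints -yid application_1598000000000_0001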
Stop
Use the stop action to gracefully stop a running streaming job with a savepoint.
    ./bin/flink stop [-p targetDirectory] [-d] <jobID>

A "stop" call is a more graceful way of stopping a running streaming job, as the "stop" signal flows from source to sink. When the user requests to stop a job, all sources will be requested to send the last checkpoint barrier, which triggers a savepoint; after the successful completion of that savepoint, they finish by calling their cancel() method. If the -d flag is specified, a MAX_WATERMARK will be emitted before the last checkpoint barrier. This causes all registered event-time timers to fire, flushing out any state that is waiting for a specific watermark, e.g. windows. The job keeps running until all sources have properly shut down, which allows it to finish processing all in-flight data.
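For example, a drained stop that writes its savepoint to an explicit directory (the path is a placeholder):

    # -d emits MAX_WATERMARK before the final barrier; -p overrides the configured default directory.
    ./bin/flink stop -p hdfs:///flink/savepoints -d <jobID>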
Cancel with a savepoint (deprecated)
You can atomically trigger a savepoint and cancel a job.
    ./bin/flink cancel -s [savepointDirectory] <jobID>

If no savepoint directory is configured, you need to configure a default savepoint directory for the Flink installation (see Savepoints).
The job will only be cancelled if the savepoint succeeds.
Note: Cancelling a job with savepoint is deprecated. Use “stop” instead.
Restore a savepoint
    ./bin/flink run -s <savepointPath> ...

The run command has a savepoint flag to submit a job that restores its state from a savepoint. The savepoint path is returned by the savepoint trigger command.
By default, we try to match all savepoint state to the job being submitted. If you want to allow skipping savepoint state that cannot be restored with the new job, you can set the allowNonRestoredState flag. You need to allow this if you removed an operator from your program that was part of the program when the savepoint was triggered and you still want to use the savepoint.
This is useful if your program dropped an operator that was part of the savepoint.
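For example, combining both flags (the savepoint path is the placeholder from the help text below):

    # -n is the short form of --allowNonRestoredState; state of removed operators is skipped.
    ./bin/flink run -s hdfs:///flink/savepoint-1537 -n ./examples/batch/WordCount.jar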
Dispose a savepoint
    ./bin/flink savepoint -d <savepointPath>

This disposes the savepoint at the given path. The savepoint path is returned by the savepoint trigger command.
If you use custom state instances (for example custom reducing state or RocksDB state), you have to specify the path to the program JAR with which the savepoint was triggered in order to dispose the savepoint with the user code class loader:
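A sketch of that call, using the -d and -j options from the "savepoint" action help below:

    # -j/--jarfile points the user code class loader at the JAR that contains the custom state classes.
    ./bin/flink savepoint -d <savepointPath> -j <jarFile>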
Otherwise, you will run into a ClassNotFoundException.
1.6.5. Syntax
[root@hadoop6 flink-1.11.1]# bin/flink --help
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/admin/installed/flink-1.11.1/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.4.0-315/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

./flink <ACTION> [OPTIONS] [ARGUMENTS]

The following actions are available:

Action "run" compiles and runs a program.

  Syntax: run [OPTIONS] <jar-file> <arguments>
  "run" action options:
     -c,--class <classname>               Class with the program entry point ("main()" method). Only needed if the JAR file does not specify the class in its manifest.
     -C,--classpath <url>                 Adds a URL to each user code classloader on all nodes in the cluster. The paths must specify a protocol (e.g. file://) and be accessible on all nodes (e.g. by means of an NFS share). You can use this option multiple times for specifying more than one URL. The protocol must be supported by the {@link java.net.URLClassLoader}.
     -d,--detached                        If present, runs the job in detached mode
     -n,--allowNonRestoredState           Allow to skip savepoint state that cannot be restored. You need to allow this if you removed an operator from your program that was part of the program when the savepoint was triggered.
     -p,--parallelism <parallelism>       The parallelism with which to run the program. Optional flag to override the default value specified in the configuration.
     -py,--python <pythonFile>            Python script with the program entry point. The dependent resources can be configured with the `--pyFiles` option.
     -pyarch,--pyArchives <arg>           Add python archive files for job. The archive files will be extracted to the working directory of the python UDF worker. Currently only zip-format is supported. For each archive file, a target directory can be specified; if it is, the archive file will be extracted to a directory with that name, otherwise to a directory with the same name as the archive file. The files uploaded via this option are accessible via relative path. '#' could be used as the separator of the archive file path and the target directory name. Comma (',') could be used as the separator to specify multiple archive files. This option can be used to upload the virtual environment and the data files used in Python UDF (e.g.: --pyArchives file:///tmp/py37.zip,file:///tmp/data.zip#data --pyExecutable py37.zip/py37/bin/python). The data files could be accessed in Python UDF, e.g.: f = open('data/data.txt', 'r').
     -pyexec,--pyExecutable <arg>         Specify the path of the python interpreter used to execute the python UDF worker (e.g.: --pyExecutable /usr/local/bin/python3). The python UDF worker depends on Python 3.5+, Apache Beam (version == 2.19.0), Pip (version >= 7.1.0) and SetupTools (version >= 37.0.0). Please ensure that the specified environment meets the above requirements.
     -pyfs,--pyFiles <pythonFiles>        Attach custom python files for job. These files will be added to the PYTHONPATH of both the local client and the remote python UDF worker. The standard python resource file suffixes such as .py/.egg/.zip or directory are all supported. Comma (',') could be used as the separator to specify multiple files (e.g.: --pyFiles file:///tmp/myresource.zip,hdfs:///$namenode_address/myresource2.zip).
     -pym,--pyModule <pythonModule>       Python module with the program entry point. This option must be used in conjunction with `--pyFiles`.
     -pyreq,--pyRequirements <arg>        Specify a requirements.txt file which defines the third-party dependencies. These dependencies will be installed and added to the PYTHONPATH of the python UDF worker. A directory which contains the installation packages of these dependencies could be specified optionally. Use '#' as the separator if the optional parameter exists (e.g.: --pyRequirements file:///tmp/requirements.txt#file:///tmp/cached_dir).
     -s,--fromSavepoint <savepointPath>   Path to a savepoint to restore the job from (for example hdfs:///flink/savepoint-1537).
     -sae,--shutdownOnAttachedExit        If the job is submitted in attached mode, perform a best-effort cluster shutdown when the CLI is terminated abruptly, e.g., in response to a user interrupt, such as typing Ctrl + C.

  Options for Generic CLI mode:
     -D <property=value>   Generic configuration options for execution/deployment and for the configured executor. The available options can be found at https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html
     -e,--executor <arg>   DEPRECATED: Please use the -t option instead, which is also available with the "Application Mode". The name of the executor to be used for executing the given job, which is equivalent to the "execution.target" config option. The currently available executors are: "collection", "remote", "local", "kubernetes-session", "yarn-per-job", "yarn-session".
     -t,--target <arg>     The deployment target for the given application, which is equivalent to the "execution.target" config option. The currently available targets are: "collection", "remote", "local", "kubernetes-session", "yarn-per-job", "yarn-session", "yarn-application" and "kubernetes-application".

  Options for yarn-cluster mode:
     -d,--detached                        If present, runs the job in detached mode
     -m,--jobmanager <arg>                Address of the JobManager to which to connect. Use this flag to connect to a different JobManager than the one specified in the configuration.
     -yat,--yarnapplicationType <arg>     Set a custom application type for the application on YARN
     -yD <property=value>                 use value for given property
     -yd,--yarndetached                   If present, runs the job in detached mode (deprecated; use non-YARN specific option instead)
     -yh,--yarnhelp                       Help for the Yarn session CLI.
     -yid,--yarnapplicationId <arg>       Attach to running YARN session
     -yj,--yarnjar <arg>                  Path to Flink jar file
     -yjm,--yarnjobManagerMemory <arg>    Memory for JobManager Container with optional unit (default: MB)
     -ynl,--yarnnodeLabel <arg>           Specify YARN node label for the YARN application
     -ynm,--yarnname <arg>                Set a custom name for the application on YARN
     -yq,--yarnquery                      Display available YARN resources (memory, cores)
     -yqu,--yarnqueue <arg>               Specify YARN queue.
     -ys,--yarnslots <arg>                Number of slots per TaskManager
     -yt,--yarnship <arg>                 Ship files in the specified directory (t for transfer)
     -ytm,--yarntaskManagerMemory <arg>   Memory per TaskManager Container with optional unit (default: MB)
     -yz,--yarnzookeeperNamespace <arg>   Namespace to create the Zookeeper sub-paths for high availability mode
     -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper sub-paths for high availability mode

  Options for default mode:
     -m,--jobmanager <arg>           Address of the JobManager to which to connect. Use this flag to connect to a different JobManager than the one specified in the configuration.
     -z,--zookeeperNamespace <arg>   Namespace to create the Zookeeper sub-paths for high availability mode

Action "info" shows the optimized execution plan of the program (JSON).

  Syntax: info [OPTIONS] <jar-file> <arguments>
  "info" action options:
     -c,--class <classname>           Class with the program entry point ("main()" method). Only needed if the JAR file does not specify the class in its manifest.
     -p,--parallelism <parallelism>   The parallelism with which to run the program. Optional flag to override the default value specified in the configuration.

Action "list" lists running and scheduled programs.

  Syntax: list [OPTIONS]
  "list" action options:
     -a,--all         Show all programs and their JobIDs
     -r,--running     Show only running programs and their JobIDs
     -s,--scheduled   Show only scheduled programs and their JobIDs

  Options for Generic CLI mode: the same -D, -e and -t options as listed under action "run" above.

  Options for yarn-cluster mode:
     -m,--jobmanager <arg>            Address of the JobManager to which to connect. Use this flag to connect to a different JobManager than the one specified in the configuration.
     -yid,--yarnapplicationId <arg>   Attach to running YARN session
     -z,--zookeeperNamespace <arg>    Namespace to create the Zookeeper sub-paths for high availability mode

  Options for default mode:
     -m,--jobmanager <arg>           Address of the JobManager to which to connect. Use this flag to connect to a different JobManager than the one specified in the configuration.
     -z,--zookeeperNamespace <arg>   Namespace to create the Zookeeper sub-paths for high availability mode

Action "stop" stops a running program with a savepoint (streaming jobs only).

  Syntax: stop [OPTIONS] <Job ID>
  "stop" action options:
     -d,--drain                           Send MAX_WATERMARK before taking the savepoint and stopping the pipeline.
     -p,--savepointPath <savepointPath>   Path to the savepoint (for example hdfs:///flink/savepoint-1537). If no directory is specified, the configured default will be used ("state.savepoints.dir").

  Options for Generic CLI, yarn-cluster and default mode: the same as listed under action "list" above.

Action "cancel" cancels a running program.

  Syntax: cancel [OPTIONS] <Job ID>
  "cancel" action options:
     -s,--withSavepoint <targetDirectory>   **DEPRECATION WARNING**: Cancelling a job with savepoint is deprecated. Use "stop" instead. Trigger savepoint and cancel job. The target directory is optional. If no directory is specified, the configured default directory (state.savepoints.dir) is used.

  Options for Generic CLI, yarn-cluster and default mode: the same as listed under action "list" above.

Action "savepoint" triggers savepoints for a running job or disposes existing ones.

  Syntax: savepoint [OPTIONS] <Job ID> [<target directory>]
  "savepoint" action options:
     -d,--dispose <arg>       Path of savepoint to dispose.
     -j,--jarfile <jarfile>   Flink program JAR file.

  Options for Generic CLI, yarn-cluster and default mode: the same as listed under action "list" above.

You have new mail in /var/spool/mail/root
[root@hadoop6 flink-1.11.1]#
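To tie the actions together, here is a sketch of a typical streaming-job lifecycle using only commands covered above; the JAR path assumes one of the streaming examples shipped with the distribution, and the job ID and paths are placeholders:

    # Submit a streaming job in detached mode; the CLI prints its JobID.
    ./bin/flink run -d ./examples/streaming/TopSpeedWindowing.jar
    # Inspect running jobs.
    ./bin/flink list -r
    # Take a savepoint while the job keeps running...
    ./bin/flink savepoint <jobID> hdfs:///flink/savepoints
    # ...or stop it gracefully with a savepoint.
    ./bin/flink stop -p hdfs:///flink/savepoints <jobID>
    # Resume later from the savepoint path printed by either command.
    ./bin/flink run -s <savepointPath> ./examples/streaming/TopSpeedWindowing.jar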