1.6. Flink Command-Line Interface
1.6.1. Deployment targets
1.6.2. Examples
1.6.3. Job Management Examples
1.6.4. Savepoints
1.6.5. Syntax

1.6. Flink Command-Line Interface

This section is based on: https://ci.apache.org/projects/flink/flink-docs-release-1.11/zh/ops/cli.html

Flink provides a command-line interface (CLI) to run programs that are packaged as JAR files and to control their execution. The CLI is part of any Flink setup and is available in local single-node setups as well as in distributed setups. It is located under <flink-home>/bin/flink and by default connects to the running JobManager that was started from the same installation directory.
The command line can be used to:
submit jobs for execution,
cancel a running job,
provide information about a job,
list running and waiting jobs, and
trigger and dispose savepoints.

A prerequisite for using the command-line interface is that the JobManager is already running (started via /bin/start-cluster.sh) or that Flink is deployed on YARN or Kubernetes.
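As a quick sanity check on a local standalone setup, you can start the cluster and confirm that the CLI reaches the JobManager (the paths below assume you are in the Flink installation directory):

# start a local standalone cluster (JobManager + TaskManager)
./bin/start-cluster.sh
# list running and scheduled jobs; this succeeds only if the JobManager is reachable
./bin/flink list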

1.6.1. Deployment targets

Flink has the concept of executors, which define the available deployment targets. You can list the available targets via bin/flink --help. For example:

Options for Generic CLI mode:
     -D <property=value>   Generic configuration options for
                           execution/deployment and for the configured executor.
                           The available options can be found at
                           https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html
     -t,--target <arg>     The deployment target for the given application,
                           which is equivalent to the "execution.target" config
                           option. The currently available targets are:
                           "remote", "local", "kubernetes-session",
                           "yarn-per-job", "yarn-session", "yarn-application"
                           and "kubernetes-application".
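For illustration, selecting a target with -t and overriding configuration with -D might look like the following sketch; the example jar and the memory value are placeholders rather than part of the original text:

# submit to a dedicated per-job YARN cluster, overriding the JobManager memory
./bin/flink run -t yarn-per-job \
    -Djobmanager.memory.process.size=2048m \
    ./examples/streaming/TopSpeedWindowing.jar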

1.6.2. Examples

Run example program with no arguments:
./bin/flink run ./examples/batch/WordCount.jar

Run example program with arguments for input and result files:
./bin/flink run ./examples/batch/WordCount.jar \
    --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out

Run example program with parallelism 16 and arguments for input and result files:
./bin/flink run -p 16 ./examples/batch/WordCount.jar \
    --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out

Run example program with flink log output disabled:
./bin/flink run -q ./examples/batch/WordCount.jar

Run example program in detached mode:
./bin/flink run -d ./examples/batch/WordCount.jar

Run example program on a specific JobManager:
./bin/flink run -m myJMHost:8081 \
    ./examples/batch/WordCount.jar \
    --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out

Run example program with a specific class as an entry point:
./bin/flink run -c org.apache.flink.examples.java.wordcount.WordCount \
    ./examples/batch/WordCount.jar \
    --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out

Run example program using a per-job YARN cluster with 2 TaskManagers:
./bin/flink run -m yarn-cluster \
    ./examples/batch/WordCount.jar \
    --input hdfs:///user/hamlet.txt --output hdfs:///user/wordcount_out

1.6.3. Job Management Examples

Display the optimized execution plan for the WordCount example program as JSON:
./bin/flink info ./examples/batch/WordCount.jar \
    --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out

List scheduled and running jobs (including their JobIDs):
./bin/flink list

List scheduled jobs (including their JobIDs):
./bin/flink list -s

List running jobs (including their JobIDs):
./bin/flink list -r

List all existing jobs (including their JobIDs):
./bin/flink list -a

List running Flink jobs inside a Flink YARN session:
./bin/flink list -m yarn-cluster -yid <yarnApplicationID> -r

Cancel a job:
./bin/flink cancel <jobID>

Cancel a job with a savepoint (deprecated; use "stop" instead):
./bin/flink cancel -s [targetDirectory] <jobID>

Gracefully stop a job with a savepoint (streaming jobs only):
./bin/flink stop [-p targetDirectory] [-d] <jobID>

1.6.4. Savepoints

Savepoints are controlled via the command line client:

Trigger a Savepoint

./bin/flink savepoint <jobId> [savepointDirectory]

This will trigger a savepoint for the job with ID jobId, and return the path of the created savepoint. You need this path to restore from or dispose of the savepoint.
Furthermore, you can optionally specify a target file system directory to store the savepoint in. The directory needs to be accessible by the JobManager.
If you don’t specify a target directory, you need to have configured a default directory. Otherwise, triggering the savepoint will fail.
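The default directory is configured with the state.savepoints.dir key in conf/flink-conf.yaml; a minimal sketch, with a placeholder HDFS path:

# conf/flink-conf.yaml
# default target directory for savepoints triggered without an explicit path
state.savepoints.dir: hdfs:///flink/savepoints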

Trigger a Savepoint with YARN

./bin/flink savepoint <jobId> [savepointDirectory] -yid <yarnAppId>

This will trigger a savepoint for the job with ID jobId in the YARN application with ID yarnAppId, and return the path of the created savepoint.
Everything else is the same as described in the above Trigger a Savepoint section.

Stop

Use the stop action to gracefully stop a running streaming job with a savepoint.

./bin/flink stop [-p targetDirectory] [-d] <jobID>

A "stop" call is a more graceful way of stopping a running streaming job, as the "stop" signal flows from source to sink. When the user requests to stop a job, all sources are requested to send the last checkpoint barrier, which triggers a savepoint; after the successful completion of that savepoint, they finish by calling their cancel() method. If the -d flag is specified, a MAX_WATERMARK is emitted before the last checkpoint barrier. This causes all registered event-time timers to fire, thus flushing out any state that is waiting for a specific watermark, e.g. windows. The job keeps running until all sources have properly shut down, which allows the job to finish processing all in-flight data.
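For example, a drained stop with an explicit savepoint target might look as follows; the job ID placeholder and the directory are illustrative:

# -d emits MAX_WATERMARK (firing all event-time timers) before the savepoint,
# -p overrides the configured default savepoint directory
./bin/flink stop -p hdfs:///flink/savepoints -d <jobID>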

Cancel with a savepoint (deprecated)

You can atomically trigger a savepoint and cancel a job.

./bin/flink cancel -s [savepointDirectory] <jobID>

If no savepoint directory is configured, you need to configure a default savepoint directory for the Flink installation (see Savepoints).
The job will only be cancelled if the savepoint succeeds.
Note: Cancelling a job with savepoint is deprecated. Use “stop” instead.

Restore a savepoint

./bin/flink run -s <savepointPath> ...

The run command has a savepoint flag (-s) to submit a job that restores its state from a savepoint. The savepoint path is returned by the savepoint trigger command.
By default, we try to match all savepoint state to the job being submitted. If you want to allow skipping savepoint state that cannot be restored with the new job, you can set the allowNonRestoredState flag. You need to allow this if you removed an operator from your program that was part of the program when the savepoint was triggered and you still want to use the savepoint.

./bin/flink run -s <savepointPath> -n ...

This is useful if your program dropped an operator that was part of the savepoint.
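Putting the pieces together, a typical upgrade flow might look like the sketch below; the job ID, savepoint path, and jar name are placeholders:

# 1. trigger a savepoint; the command prints the savepoint path
./bin/flink savepoint <jobID> hdfs:///flink/savepoints
# 2. stop the old job (streaming jobs can use "stop" to combine steps 1 and 2)
./bin/flink cancel <jobID>
# 3. resubmit the updated jar from the savepoint, skipping removed operators' state
./bin/flink run -s <savepointPath> -n ./my-updated-job.jar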

Dispose a savepoint

./bin/flink savepoint -d <savepointPath>

Disposes the savepoint at the given path. The savepoint path is returned by the savepoint trigger command.
If you use custom state instances (for example custom reducing state or RocksDB state), you have to specify the path to the program JAR with which the savepoint was triggered in order to dispose the savepoint with the user code class loader:

./bin/flink savepoint -d <savepointPath> -j <jarFile>

Otherwise, you will run into a ClassNotFoundException.

1.6.5. Syntax

[root@hadoop6 flink-1.11.1]# bin/flink --help
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/admin/installed/flink-1.11.1/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.4.0-315/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
./flink <ACTION> [OPTIONS] [ARGUMENTS]

The following actions are available:

Action "run" compiles and runs a program.

  Syntax: run [OPTIONS] <jar-file> <arguments>
  "run" action options:
     -c,--class <classname>               Class with the program entry point
                                          ("main()" method). Only needed if the
                                          JAR file does not specify the class in
                                          its manifest.
     -C,--classpath <url>                 Adds a URL to each user code
                                          classloader on all nodes in the
                                          cluster. The paths must specify a
                                          protocol (e.g. file://) and be
                                          accessible on all nodes (e.g. by means
                                          of a NFS share). You can use this
                                          option multiple times for specifying
                                          more than one URL. The protocol must
                                          be supported by the {@link
                                          java.net.URLClassLoader}.
     -d,--detached                        If present, runs the job in detached
                                          mode
     -n,--allowNonRestoredState           Allow to skip savepoint state that
                                          cannot be restored. You need to allow
                                          this if you removed an operator from
                                          your program that was part of the
                                          program when the savepoint was
                                          triggered.
     -p,--parallelism <parallelism>       The parallelism with which to run the
                                          program. Optional flag to override the
                                          default value specified in the
                                          configuration.
     -py,--python <pythonFile>            Python script with the program entry
                                          point. The dependent resources can be
                                          configured with the `--pyFiles`
                                          option.
     -pyarch,--pyArchives <arg>           Add python archive files for job. The
                                          archive files will be extracted to the
                                          working directory of the python UDF
                                          worker. Currently only zip format is
                                          supported. For each archive file, a
                                          target directory can be specified. If
                                          a target directory name is specified,
                                          the archive file will be extracted to
                                          a directory with the specified name;
                                          otherwise it will be extracted to a
                                          directory with the same name as the
                                          archive file. Files uploaded via this
                                          option are accessible via relative
                                          paths. '#' can be used as the
                                          separator between the archive file
                                          path and the target directory name,
                                          and comma (',') as the separator
                                          between multiple archive files. This
                                          option can be used to upload a
                                          virtual environment or data files
                                          used in Python UDFs (e.g.:
                                          --pyArchives file:///tmp/py37.zip,file:///tmp/data.zip#data
                                          --pyExecutable py37.zip/py37/bin/python).
                                          The data files can then be accessed
                                          in a Python UDF, e.g.:
                                          f = open('data/data.txt', 'r').
     -pyexec,--pyExecutable <arg>         Specify the path of the python
                                          interpreter used to execute the python
                                          UDF worker (e.g.: --pyExecutable
                                          /usr/local/bin/python3). The python
                                          UDF worker depends on Python 3.5+,
                                          Apache Beam (version == 2.19.0), Pip
                                          (version >= 7.1.0) and SetupTools
                                          (version >= 37.0.0). Please ensure
                                          that the specified environment meets
                                          the above requirements.
     -pyfs,--pyFiles <pythonFiles>        Attach custom python files for job.
                                          These files will be added to the
                                          PYTHONPATH of both the local client
                                          and the remote python UDF worker. The
                                          standard python resource file suffixes
                                          such as .py/.egg/.zip or directory are
                                          all supported. Comma (',') can be used
                                          as the separator to specify multiple
                                          files (e.g.: --pyFiles
                                          file:///tmp/myresource.zip,hdfs:///$namenode_address/myresource2.zip).
     -pym,--pyModule <pythonModule>       Python module with the program entry
                                          point. This option must be used in
                                          conjunction with `--pyFiles`.
     -pyreq,--pyRequirements <arg>        Specify a requirements.txt file which
                                          defines the third-party dependencies.
                                          These dependencies will be installed
                                          and added to the PYTHONPATH of the
                                          python UDF worker. A directory which
                                          contains the installation packages of
                                          these dependencies can optionally be
                                          specified. Use '#' as the separator
                                          if the optional parameter exists
                                          (e.g.: --pyRequirements
                                          file:///tmp/requirements.txt#file:///tmp/cached_dir).
     -s,--fromSavepoint <savepointPath>   Path to a savepoint to restore the job
                                          from (for example
                                          hdfs:///flink/savepoint-1537).
     -sae,--shutdownOnAttachedExit        If the job is submitted in attached
                                          mode, perform a best-effort cluster
                                          shutdown when the CLI is terminated
                                          abruptly, e.g., in response to a user
                                          interrupt, such as typing Ctrl + C.
  Options for Generic CLI mode:
     -D <property=value>   Generic configuration options for
                           execution/deployment and for the configured executor.
                           The available options can be found at
                           https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html
     -e,--executor <arg>   DEPRECATED: Please use the -t option instead, which
                           is also available with the "Application Mode".
                           The name of the executor to be used for executing
                           the given job, which is equivalent to the
                           "execution.target" config option. The currently
                           available executors are: "collection", "remote",
                           "local", "kubernetes-session", "yarn-per-job",
                           "yarn-session".
     -t,--target <arg>     The deployment target for the given application,
                           which is equivalent to the "execution.target" config
                           option. The currently available targets are:
                           "collection", "remote", "local",
                           "kubernetes-session", "yarn-per-job", "yarn-session",
                           "yarn-application" and "kubernetes-application".
  Options for yarn-cluster mode:
     -d,--detached                        If present, runs the job in detached
                                          mode
     -m,--jobmanager <arg>                Address of the JobManager to which to
                                          connect. Use this flag to connect to a
                                          different JobManager than the one
                                          specified in the configuration.
     -yat,--yarnapplicationType <arg>     Set a custom application type for the
                                          application on YARN
     -yD <property=value>                 use value for given property
     -yd,--yarndetached                   If present, runs the job in detached
                                          mode (deprecated; use non-YARN
                                          specific option instead)
     -yh,--yarnhelp                       Help for the Yarn session CLI.
     -yid,--yarnapplicationId <arg>       Attach to running YARN session
     -yj,--yarnjar <arg>                  Path to Flink jar file
     -yjm,--yarnjobManagerMemory <arg>    Memory for JobManager Container with
                                          optional unit (default: MB)
     -ynl,--yarnnodeLabel <arg>           Specify YARN node label for the YARN
                                          application
     -ynm,--yarnname <arg>                Set a custom name for the application
                                          on YARN
     -yq,--yarnquery                      Display available YARN resources
                                          (memory, cores)
     -yqu,--yarnqueue <arg>               Specify YARN queue.
     -ys,--yarnslots <arg>                Number of slots per TaskManager
     -yt,--yarnship <arg>                 Ship files in the specified directory
                                          (t for transfer)
     -ytm,--yarntaskManagerMemory <arg>   Memory per TaskManager Container with
                                          optional unit (default: MB)
     -yz,--yarnzookeeperNamespace <arg>   Namespace to create the Zookeeper
                                          sub-paths for high availability mode
     -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper
                                          sub-paths for high availability mode
  Options for default mode:
     -m,--jobmanager <arg>           Address of the JobManager to which to
                                     connect. Use this flag to connect to a
                                     different JobManager than the one specified
                                     in the configuration.
     -z,--zookeeperNamespace <arg>   Namespace to create the Zookeeper sub-paths
                                     for high availability mode

Action "info" shows the optimized execution plan of the program (JSON).

  Syntax: info [OPTIONS] <jar-file> <arguments>
  "info" action options:
     -c,--class <classname>           Class with the program entry point
                                      ("main()" method). Only needed if the JAR
                                      file does not specify the class in its
                                      manifest.
     -p,--parallelism <parallelism>   The parallelism with which to run the
                                      program. Optional flag to override the
                                      default value specified in the
                                      configuration.

Action "list" lists running and scheduled programs.

  Syntax: list [OPTIONS]
  "list" action options:
     -a,--all         Show all programs and their JobIDs
     -r,--running     Show only running programs and their JobIDs
     -s,--scheduled   Show only scheduled programs and their JobIDs
  Options for Generic CLI mode: (same -D/-e/-t options as for the "run" action
  above)
  Options for yarn-cluster mode:
     -m,--jobmanager <arg>            Address of the JobManager to which to
                                      connect. Use this flag to connect to a
                                      different JobManager than the one
                                      specified in the configuration.
     -yid,--yarnapplicationId <arg>   Attach to running YARN session
     -z,--zookeeperNamespace <arg>    Namespace to create the Zookeeper
                                      sub-paths for high availability mode
  Options for default mode:
     -m,--jobmanager <arg>           Address of the JobManager to which to
                                     connect. Use this flag to connect to a
                                     different JobManager than the one specified
                                     in the configuration.
     -z,--zookeeperNamespace <arg>   Namespace to create the Zookeeper sub-paths
                                     for high availability mode

Action "stop" stops a running program with a savepoint (streaming jobs only).

  Syntax: stop [OPTIONS] <Job ID>
  "stop" action options:
     -d,--drain                           Send MAX_WATERMARK before taking the
                                          savepoint and stopping the pipeline.
     -p,--savepointPath <savepointPath>   Path to the savepoint (for example
                                          hdfs:///flink/savepoint-1537). If no
                                          directory is specified, the configured
                                          default will be used
                                          ("state.savepoints.dir").
  Options for Generic CLI mode, yarn-cluster mode and default mode: (same as
  for the "list" action above)

Action "cancel" cancels a running program.

  Syntax: cancel [OPTIONS] <Job ID>
  "cancel" action options:
     -s,--withSavepoint <targetDirectory>   **DEPRECATION WARNING**: Cancelling
                                            a job with savepoint is deprecated.
                                            Use "stop" instead.
                                            Trigger savepoint and cancel job.
                                            The target directory is optional.
                                            If no directory is specified, the
                                            configured default directory
                                            (state.savepoints.dir) is used.
  Options for Generic CLI mode, yarn-cluster mode and default mode: (same as
  for the "list" action above)

Action "savepoint" triggers savepoints for a running job or disposes existing ones.

  Syntax: savepoint [OPTIONS] <Job ID> [<target directory>]
  "savepoint" action options:
     -d,--dispose <arg>       Path of savepoint to dispose.
     -j,--jarfile <jarfile>   Flink program JAR file.
  Options for Generic CLI mode, yarn-cluster mode and default mode: (same as
  for the "list" action above)
[root@hadoop6 flink-1.11.1]#
