Unlike the Kubernetes processes discussed earlier, kubectl is not a daemon running in the background. It is a command-line tool (CLI) provided by Kubernetes that offers a set of commands for operating a Kubernetes cluster.

The entry point of kubectl is located at:

/cmd/kubectl/kubectl.go

func main() {
    rand.Seed(time.Now().UTC().UnixNano())

    command := cmd.NewDefaultKubectlCommand()

    // TODO: once we switch everything over to Cobra commands, we can go back to calling
    // utilflag.InitFlags() (by removing its pflag.Parse() call). For now, we have to set the
    // normalize func and add the go flag set by hand.
    pflag.CommandLine.SetNormalizeFunc(utilflag.WordSepNormalizeFunc)
    pflag.CommandLine.AddGoFlagSet(goflag.CommandLine)
    // utilflag.InitFlags()
    logs.InitLogs()
    defer logs.FlushLogs()

    if err := command.Execute(); err != nil {
        fmt.Fprintf(os.Stderr, "%v\n", err)
        os.Exit(1)
    }
}

As you can see, NewDefaultKubectlCommand creates a concrete command object, and its Execute method is then called to run it. This is a classic combination of the factory pattern and the command pattern: in the factory pattern, the creation logic is not exposed to the client, and newly created objects are referenced through a common interface. An interface for creating objects is defined, and the implementations decide which concrete class to instantiate, deferring the creation process to them.
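To make the pattern concrete, here is a minimal, self-contained sketch (not kubectl's actual code; the command names and constructors are invented for illustration) of a root command whose subcommands are produced by constructor functions and dispatched through Execute():

package main

import (
    "fmt"
    "os"

    "github.com/spf13/cobra"
)

// newVersionCmd plays the role of a small "factory": callers only see the
// returned *cobra.Command, never the construction details.
func newVersionCmd() *cobra.Command {
    return &cobra.Command{
        Use:   "version",
        Short: "Print a fake client version",
        Run: func(cmd *cobra.Command, args []string) {
            fmt.Println("client version: v0.0.1 (example only)")
        },
    }
}

// newRootCmd assembles the command tree, mirroring how NewDefaultKubectlCommand
// wires subcommands onto the root "kubectl" command.
func newRootCmd() *cobra.Command {
    root := &cobra.Command{
        Use:   "mycli",
        Short: "A tiny kubectl-style CLI skeleton",
    }
    root.AddCommand(newVersionCmd())
    return root
}

func main() {
    // The command-pattern half: build the command object, then ask it to Execute.
    if err := newRootCmd().Execute(); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}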

The NewDefaultKubectlCommand method is a thin wrapper that ends up calling NewKubectlCommand, defined in /pkg/kubectl/cmd/cmd.go:

// NewKubectlCommand creates the `kubectl` command and its nested children.
func NewKubectlCommand(in io.Reader, out, err io.Writer) *cobra.Command {
    // Parent command to which all subcommands are added.
    cmds := &cobra.Command{
        Use:   "kubectl",
        Short: i18n.T("kubectl controls the Kubernetes cluster manager"),
        Long: templates.LongDesc(`
      kubectl controls the Kubernetes cluster manager.

      Find more information at:
            https://kubernetes.io/docs/reference/kubectl/overview/`),
        Run: runHelp,
        BashCompletionFunction: bashCompletionFunc,
    }

    flags := cmds.PersistentFlags()
    flags.SetNormalizeFunc(utilflag.WarnWordSepNormalizeFunc) // Warn for "_" flags

    // Normalize all flags that are coming from other packages or pre-configurations
    // a.k.a. change all "_" to "-". e.g. glog package
    flags.SetNormalizeFunc(utilflag.WordSepNormalizeFunc)

    kubeConfigFlags := genericclioptions.NewConfigFlags()
    kubeConfigFlags.AddFlags(flags)
    matchVersionKubeConfigFlags := cmdutil.NewMatchVersionFlags(kubeConfigFlags)
    matchVersionKubeConfigFlags.AddFlags(cmds.PersistentFlags())

    cmds.PersistentFlags().AddGoFlagSet(flag.CommandLine)

    f := cmdutil.NewFactory(matchVersionKubeConfigFlags)

Since most kubectl commands need to access the Kubernetes API Server, kubectl provides a Factory object, something like a command execution context, for the command objects to use.
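As a rough illustration of this "context object" idea only, the sketch below bundles the dependencies a subcommand would need. The method set here is invented for the sketch and is far smaller than the real cmdutil.Factory interface:

package main

import (
    "fmt"

    "k8s.io/client-go/rest"
)

// factory is an illustrative stand-in for cmdutil.Factory: a bag of shared
// dependencies (namespace, REST config, clients) handed to every subcommand
// constructor. These method names do not match the real interface.
type factory interface {
    Namespace() (string, error)
    RESTConfig() (*rest.Config, error)
}

type simpleFactory struct {
    namespace string
    config    *rest.Config
}

func (f *simpleFactory) Namespace() (string, error)        { return f.namespace, nil }
func (f *simpleFactory) RESTConfig() (*rest.Config, error) { return f.config, nil }

func main() {
    var f factory = &simpleFactory{
        namespace: "default",
        config:    &rest.Config{Host: "https://127.0.0.1:6443"},
    }
    ns, _ := f.Namespace()
    fmt.Println("commands would run against namespace:", ns)
}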

Next, we walk through the source code of several typical kubectl commands.

The kubectl create command

The kubectl create command creates Kubernetes resource objects (for example Pod, Service, or RC) by calling the REST API exposed by the Kubernetes API Server. The resource descriptions come from the file(s) specified with -f or from the command line's input stream.

Below is the code that builds the create command:

/pkg/kubectl/cmd/create/create.go

func NewCmdCreate(f cmdutil.Factory, ioStreams genericclioptions.IOStreams) *cobra.Command {
    o := NewCreateOptions(ioStreams)

    cmd := &cobra.Command{
        Use: "create -f FILENAME",
        DisableFlagsInUseLine: true,
        Short:   i18n.T("Create a resource from a file or from stdin."),
        Long:    createLong,
        Example: createExample,
        Run: func(cmd *cobra.Command, args []string) {
            if cmdutil.IsFilenameSliceEmpty(o.FilenameOptions.Filenames) {
                defaultRunFunc := cmdutil.DefaultSubCommandRun(ioStreams.ErrOut)
                defaultRunFunc(cmd, args)
                return
            }
            cmdutil.CheckErr(o.Complete(f, cmd))
            cmdutil.CheckErr(o.ValidateArgs(cmd, args))
            cmdutil.CheckErr(o.RunCreate(f, cmd))
        },
    }

    // bind flag structs
    o.RecordFlags.AddFlags(cmd)

    usage := "to use to create the resource"
    cmdutil.AddFilenameOptionFlags(cmd, &o.FilenameOptions, usage)
    cmd.MarkFlagRequired("filename")

The RunCreate method invoked from the command's Run function is where the core logic lives.

/pkg/kubectl/cmd/create/create.go

func (o *CreateOptions) RunCreate(f cmdutil.Factory, cmd *cobra.Command) error {
    // raw only makes sense for a single file resource multiple objects aren't likely to do what you want.
    // the validator enforces this, so
    if len(o.Raw) > 0 {
        return o.raw(f)
    }

    if o.EditBeforeCreate {
        return RunEditOnCreate(f, o.RecordFlags, o.IOStreams, cmd, &o.FilenameOptions)
    }
    schema, err := f.Validator(cmdutil.GetFlagBool(cmd, "validate"))
    if err != nil {
        return err
    }

    cmdNamespace, enforceNamespace, err := f.ToRawKubeConfigLoader().Namespace()
    if err != nil {
        return err
    }

    r := f.NewBuilder().
        Unstructured().
        Schema(schema).
        ContinueOnError().
        NamespaceParam(cmdNamespace).DefaultNamespace().
        FilenameParam(enforceNamespace, &o.FilenameOptions).
        LabelSelectorParam(o.Selector).
        Flatten().
        Do()

As you can see, RunCreate uses the Builder object obtained from f.NewBuilder(). The Builder is one of the more elaborate pieces of kubectl and is built around the Visitor design pattern; many kubectl commands rely on it. The Builder's job is to take the resource-related parameters from the command line, create the appropriate Visitor objects to fetch those resources, and finally walk all of the visitors, invoking the user-supplied VisitorFunc callback on each concrete resource to carry out the command's business logic.
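To make the Visitor idea more tangible, here is a minimal, self-contained sketch of the pattern. The names visitorFunc, listVisitor, and decoratedVisitor are invented for this sketch; the real resource.Visitor carries *resource.Info objects and richer error handling:

package main

import "fmt"

// visitorFunc is the callback applied to every item, mirroring resource.VisitorFunc.
type visitorFunc func(item string) error

// visitor is anything that can feed items to a callback, mirroring resource.Visitor.
type visitor interface {
    Visit(fn visitorFunc) error
}

// listVisitor walks a static list of items.
type listVisitor struct{ items []string }

func (l listVisitor) Visit(fn visitorFunc) error {
    for _, it := range l.items {
        if err := fn(it); err != nil {
            return err
        }
    }
    return nil
}

// decoratedVisitor wraps another visitor and runs extra steps around the
// callback; this is how kubectl layers behaviors such as flattening or schema
// validation on top of the raw source visitors.
type decoratedVisitor struct{ inner visitor }

func (d decoratedVisitor) Visit(fn visitorFunc) error {
    return d.inner.Visit(func(item string) error {
        fmt.Println("decorator: about to visit", item)
        return fn(item)
    })
}

func main() {
    var v visitor = decoratedVisitor{inner: listVisitor{items: []string{"pod/a", "svc/b"}}}
    _ = v.Visit(func(item string) error {
        fmt.Println("handling", item)
        return nil
    })
}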

RunCreate creates a Builder and sets the required parameters. The schema object, for example, is used to validate the resource description, catching missing fields or wrongly typed attributes. FilenameParam is the only call here that feeds resource parameters into the Builder: it takes the filenames passed on the command line as the resource sources. Flatten tells the Builder that the resource object may actually be a list, so the Builder constructs a FlattenListVisitor to visit every item in that list. Finally, the Do method returns a Result object that wraps the visitors for the requested resources.

NamespaceParam and the rest of this family of chained methods are defined in:

/pkg/kubectl/genericclioptions/resource/builder.go

// NamespaceParam accepts the namespace that these resources should be
// considered under from - used by DefaultNamespace() and RequireNamespace()
func (b *Builder) NamespaceParam(namespace string) *Builder {
    b.namespace = namespace
    return b
}

// DefaultNamespace instructs the builder to set the namespace value for any object found
// to NamespaceParam() if empty.
func (b *Builder) DefaultNamespace() *Builder {
    b.defaultNamespace = true
    return b
}

FilenameParam

/pkg/kubectl/genericclioptions/resource/builder.go

// FilenameParam groups input in two categories: URLs and files (files, directories, STDIN)
// If enforceNamespace is false, namespaces in the specs will be allowed to
// override the default namespace. If it is true, namespaces that don't match
// will cause an error.
// If ContinueOnError() is set prior to this method, objects on the path that are not
// recognized will be ignored (but logged at V(2)).
func (b *Builder) FilenameParam(enforceNamespace bool, filenameOptions *FilenameOptions) *Builder {
    recursive := filenameOptions.Recursive
    paths := filenameOptions.Filenames
    for _, s := range paths {
        switch {
        case s == "-":
            b.Stdin()
        case strings.Index(s, "http://") == 0 || strings.Index(s, "https://") == 0:
            url, err := url.Parse(s)
            if err != nil {
                b.errs = append(b.errs, fmt.Errorf("the URL passed to filename %q is not valid: %v", s, err))
                continue
            }
            b.URL(defaultHttpGetAttempts, url)
        default:
            if !recursive {
                b.singleItemImplied = true
            }
            b.Path(recursive, s)
        }
    }

    if enforceNamespace {
        b.RequireNamespace()
    }

    return b
}

The main job of this method is to dispatch to Builder.Stdin, Builder.URL, or Builder.Path depending on the kind of resource argument. Each of these methods produces the corresponding Visitor objects and appends them to the Builder's visitor list.

Whether the input is the standard input stream, a URL, a directory, or a single file, the visitor that ultimately processes the resources is the StreamVisitor implementation. Below is StreamVisitor's Visit method:

/pkg/kubectl/genericclioptions/resource/visitor.go

// Visit implements Visitor over a stream. StreamVisitor is able to distinct multiple resources in one stream.
func (v *StreamVisitor) Visit(fn VisitorFunc) error {
    d := yaml.NewYAMLOrJSONDecoder(v.Reader, 4096)
    for {
        ext := runtime.RawExtension{}
        if err := d.Decode(&ext); err != nil {
            if err == io.EOF {
                return nil
            }
            return fmt.Errorf("error parsing %s: %v", v.Source, err)
        }
        // TODO: This needs to be able to handle object in other encodings and schemas.
        ext.Raw = bytes.TrimSpace(ext.Raw)
        if len(ext.Raw) == 0 || bytes.Equal(ext.Raw, []byte("null")) {
            continue
        }
        if err := ValidateSchema(ext.Raw, v.Schema); err != nil {
            return fmt.Errorf("error validating %q: %v", v.Source, err)
        }
        info, err := v.infoForData(ext.Raw, v.Source)
        if err != nil {
            if fnErr := fn(info, err); fnErr != nil {
                return fnErr
            }
            continue
        }
        if err := fn(info, nil); err != nil {
            return err
        }
    }
}

In the code above, the concrete resource objects are first decoded from the input stream, each one is wrapped in an Info struct (the decoded object is stored in the Info's Object field), and that Info is then passed as an argument to the VisitorFunc callback, completing the flow. Below is the code in RunCreate that calls the Result's Visit method to trigger the visitors. You can see that the VisitorFunc here issues a Kubernetes API call through the REST client, writing the resource object into the resource registry:

    err = r.Err()
    if err != nil {
        return err
    }

    count := 0
    err = r.Visit(func(info *resource.Info, err error) error {
        if err != nil {
            return err
        }
        if err := kubectl.CreateOrUpdateAnnotation(cmdutil.GetFlagBool(cmd, cmdutil.ApplyAnnotationsFlag), info.Object, cmdutil.InternalVersionJSONEncoder()); err != nil {
            return cmdutil.AddSourceToErr("creating", info.Source, err)
        }

        if err := o.Recorder.Record(info.Object); err != nil {
            glog.V(4).Infof("error recording current command: %v", err)
        }

        if !o.DryRun {
            if err := createAndRefresh(info); err != nil {
                return cmdutil.AddSourceToErr("creating", info.Source, err)
            }
        }
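For readers who want to see the decoding step in isolation, here is a small, self-contained sketch (not kubectl code; visitRawObjects is a hypothetical helper) that uses the same apimachinery decoder to split a multi-document YAML/JSON stream and hand each raw object to a callback, which is essentially what StreamVisitor does before building Info objects:

package main

import (
    "bytes"
    "fmt"
    "io"
    "strings"

    "k8s.io/apimachinery/pkg/runtime"
    utilyaml "k8s.io/apimachinery/pkg/util/yaml"
)

// visitRawObjects decodes every document in the stream and invokes fn on the raw bytes.
func visitRawObjects(r io.Reader, fn func(raw []byte) error) error {
    d := utilyaml.NewYAMLOrJSONDecoder(r, 4096)
    for {
        ext := runtime.RawExtension{}
        if err := d.Decode(&ext); err != nil {
            if err == io.EOF {
                return nil
            }
            return err
        }
        raw := bytes.TrimSpace(ext.Raw)
        if len(raw) == 0 {
            continue
        }
        if err := fn(raw); err != nil {
            return err
        }
    }
}

func main() {
    manifests := `
apiVersion: v1
kind: Pod
metadata:
  name: demo-a
---
apiVersion: v1
kind: Service
metadata:
  name: demo-b
`
    _ = visitRawObjects(strings.NewReader(manifests), func(raw []byte) error {
        fmt.Printf("decoded object: %s\n\n", raw)
        return nil
    })
}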

The kubectl rolling-update command

This command performs a rolling update of an RC (ReplicationController). Below is the code that builds the corresponding command:

/pkg/kubectl/cmd/rollingupdate.go

func NewCmdRollingUpdate(f cmdutil.Factory, ioStreams genericclioptions.IOStreams) *cobra.Command {
    o := NewRollingUpdateOptions(ioStreams)

    cmd := &cobra.Command{
        Use: "rolling-update OLD_CONTROLLER_NAME ([NEW_CONTROLLER_NAME] --image=NEW_CONTAINER_IMAGE | -f NEW_CONTROLLER_SPEC)",
        DisableFlagsInUseLine: true,
        Short:      "Perform a rolling update. This command is deprecated, use rollout instead.",
        Long:       rollingUpdateLong,
        Example:    rollingUpdateExample,
        Deprecated: `use "rollout" instead`,
        Hidden:     true,
        Run: func(cmd *cobra.Command, args []string) {
            cmdutil.CheckErr(o.Complete(f, cmd, args))
            cmdutil.CheckErr(o.Validate(cmd, args))
            cmdutil.CheckErr(o.Run())
        },
    }

    o.PrintFlags.AddFlags(cmd)

    cmd.Flags().DurationVar(&o.Period, "update-period", o.Period, `Time to wait between updating pods. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".`)
    cmd.Flags().DurationVar(&o.Interval, "poll-interval", o.Interval, `Time delay between polling for replication controller status after the update. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".`)
    cmd.Flags().DurationVar(&o.Timeout, "timeout", o.Timeout, `Max time to wait for a replication controller to update before giving up. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".`)
    usage := "Filename or URL to file to use to create the new replication controller."
    cmdutil.AddJsonFilenameFlag(cmd.Flags(), &o.FilenameOptions.Filenames, usage)
    cmd.Flags().StringVar(&o.Image, "image", o.Image, i18n.T("Image to use for upgrading the replication controller. Must be distinct from the existing image (either new image or new image tag).  Can not be used with --filename/-f"))
    cmd.Flags().StringVar(&o.DeploymentKey, "deployment-label-key", o.DeploymentKey, i18n.T("The key to use to differentiate between two different controllers, default 'deployment'.  Only relevant when --image is specified, ignored otherwise"))
    cmd.Flags().StringVar(&o.Container, "container", o.Container, i18n.T("Container name which will have its image upgraded. Only relevant when --image is specified, ignored otherwise. Required when using --image on a multi-container pod"))
    cmd.Flags().StringVar(&o.PullPolicy, "image-pull-policy", o.PullPolicy, i18n.T("Explicit policy for when to pull container images. Required when --image is same as existing image, ignored otherwise."))
    cmd.Flags().BoolVar(&o.Rollback, "rollback", o.Rollback, "If true, this is a request to abort an existing rollout that is partially rolled out. It effectively reverses current and next and runs a rollout")
    cmdutil.AddDryRunFlag(cmd)
    cmdutil.AddValidateFlags(cmd)

    return cmd
}

As shown above, the execution function of rolling-update is Run. Before analyzing that function, let's look at a key piece of logic in the rolling update process.

A rolling update may be interrupted, for example by a network timeout or simply because the user loses patience, so the same rolling-update command may be executed again with a single goal: resuming the previous rolling update. To support this, the program adds an annotation, kubectl.kubernetes.io/next-controller-id, to the current RC while the update is in progress; its value is the name of the new RC to be created next. In addition, when upgrading by image, a label whose key is the deploymentKey is added to the RC's selector, and its value is a hash computed over the RC's content. This hash acts as a signature, making it easy to tell whether the image in the RC has changed.
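As a rough illustration of the signature idea only (hashTemplate is a hypothetical helper; the real code hashes the serialized RC via util.HashObject), the sketch below hashes a pod template and stamps the result onto the old RC's annotation and the new RC's selector key:

package main

import (
    "fmt"
    "hash/fnv"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const (
    nextControllerAnnotation = "kubectl.kubernetes.io/next-controller-id"
    deploymentKey            = "deployment" // default value of --deployment-label-key
)

// hashTemplate computes a short signature of the pod template; only an
// approximation of what util.HashObject does over the whole RC.
func hashTemplate(tpl *corev1.PodTemplateSpec) string {
    h := fnv.New32a()
    fmt.Fprintf(h, "%v", tpl.Spec.Containers)
    return fmt.Sprintf("%x", h.Sum32())
}

func main() {
    oldRC := &corev1.ReplicationController{
        ObjectMeta: metav1.ObjectMeta{Name: "web", Annotations: map[string]string{}},
        Spec: corev1.ReplicationControllerSpec{
            Selector: map[string]string{"app": "web"},
            Template: &corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "web"}},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{Name: "web", Image: "nginx:1.15"}},
                },
            },
        },
    }

    hash := hashTemplate(oldRC.Spec.Template)
    newName := oldRC.Name + "-" + hash

    // Mark the old RC so an interrupted update can be resumed later.
    oldRC.Annotations[nextControllerAnnotation] = newName

    // The new RC would carry the hash as its deployment label so it can be told
    // apart from the old one even when the image name is the same.
    fmt.Println("next controller:", newName)
    fmt.Println("selector addition:", deploymentKey+"="+hash)
}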

Step 1 of the Run function's logic: determine the new RC object and establish the association from the old RC to the new RC. Taking the rolling update driven by the --image parameter as an example, let's see how the code implements this logic.

The code is in the Run function of /pkg/kubectl/cmd/rollingupdate.go:

    // If the --image option is specified, we need to create a new rc with at least one different selector
    // than the old rc. This selector is the hash of the rc, with a suffix to provide uniqueness for
    // same-image updates.
    if len(o.Image) != 0 {
        codec := legacyscheme.Codecs.LegacyCodec(v1.SchemeGroupVersion)
        newName := o.FindNewName(oldRc)
        if newRc, err = kubectl.LoadExistingNextReplicationController(coreClient, o.Namespace, newName); err != nil {
            return err
        }
        if newRc != nil {
            if inProgressImage := newRc.Spec.Template.Spec.Containers[0].Image; inProgressImage != o.Image {
                return fmt.Errorf("Found existing in-progress update to image (%s).\nEither continue in-progress update with --image=%s or rollback with --rollback", inProgressImage, inProgressImage)
            }
            fmt.Fprintf(o.Out, "Found existing update in progress (%s), resuming.\n", newRc.Name)
        } else {
            config := &kubectl.NewControllerConfig{
                Namespace:     o.Namespace,
                OldName:       o.OldName,
                NewName:       newName,
                Image:         o.Image,
                Container:     o.Container,
                DeploymentKey: o.DeploymentKey,
            }
            if oldRc.Spec.Template.Spec.Containers[0].Image == o.Image {
                if len(o.PullPolicy) == 0 {
                    return fmt.Errorf("--image-pull-policy (Always|Never|IfNotPresent) must be provided when --image is the same as existing container image")
                }
                config.PullPolicy = api.PullPolicy(o.PullPolicy)
            }
            newRc, err = kubectl.CreateNewControllerFromCurrentController(coreClient, codec, config)
            if err != nil {
                return err
            }
        }
        // Update the existing replication controller with pointers to the 'next' controller
        // and adding the <deploymentKey> label if necessary to distinguish it from the 'next' controller.
        oldHash, err := util.HashObject(oldRc, codec)
        if err != nil {
            return err
        }
        // If new image is same as old, the hash may not be distinct, so add a suffix.
        oldHash += "-orig"
        oldRc, err = kubectl.UpdateExistingReplicationController(coreClient, coreClient, oldRc, o.Namespace, newRc.Name, o.DeploymentKey, oldHash, o.Out)
        if err != nil {
            return err
        }
    }

As the code shows, FindNewName looks up the name of the new RC; if no new name was supplied, it derives the name from the old RC's kubectl.kubernetes.io/next-controller-id annotation. If the new RC already exists it is reused; otherwise CreateNewControllerFromCurrentController is called to create a new RC, and during that creation the deploymentKey is set to the new RC's own hash signature. The method's source is as follows:

/pkg/kubectl/rolling_updater.go

func CreateNewControllerFromCurrentController(rcClient coreclient.ReplicationControllersGetter, codec runtime.Codec, cfg *NewControllerConfig) (*api.ReplicationController, error) {
    containerIndex := 0
    // load the old RC into the "new" RC
    newRc, err := rcClient.ReplicationControllers(cfg.Namespace).Get(cfg.OldName, metav1.GetOptions{})
    if err != nil {
        return nil, err
    }

    if len(cfg.Container) != 0 {
        containerFound := false
        for i, c := range newRc.Spec.Template.Spec.Containers {
            if c.Name == cfg.Container {
                containerIndex = i
                containerFound = true
                break
            }
        }
        if !containerFound {
            return nil, fmt.Errorf("container %s not found in pod", cfg.Container)
        }
    }

    if len(newRc.Spec.Template.Spec.Containers) > 1 && len(cfg.Container) == 0 {
        return nil, fmt.Errorf("must specify container to update when updating a multi-container pod")
    }

    if len(newRc.Spec.Template.Spec.Containers) == 0 {
        return nil, fmt.Errorf("pod has no containers! (%v)", newRc)
    }
    newRc.Spec.Template.Spec.Containers[containerIndex].Image = cfg.Image
    if len(cfg.PullPolicy) != 0 {
        newRc.Spec.Template.Spec.Containers[containerIndex].ImagePullPolicy = cfg.PullPolicy
    }

    newHash, err := util.HashObject(newRc, codec)
    if err != nil {
        return nil, err
    }

    if len(cfg.NewName) == 0 {
        cfg.NewName = fmt.Sprintf("%s-%s", newRc.Name, newHash)
    }
    newRc.Name = cfg.NewName

    newRc.Spec.Selector[cfg.DeploymentKey] = newHash
    newRc.Spec.Template.Labels[cfg.DeploymentKey] = newHash
    // Clear resource version after hashing so that identical updates get different hashes.
    newRc.ResourceVersion = ""
    return newRc, nil
}

Once the new RC has been determined in this function, control returns to Run, which then calls UpdateExistingReplicationController to set the old RC's kubectl.kubernetes.io/next-controller-id annotation to the new RC's name and to decide whether the old RC's deploymentKey needs to be set or updated. The code is as follows:

/pkg/kubectl/rolling_updater.go

func UpdateExistingReplicationController(rcClient coreclient.ReplicationControllersGetter, podClient coreclient.PodsGetter, oldRc *api.ReplicationController, namespace, newName, deploymentKey, deploymentValue string, out io.Writer) (*api.ReplicationController, error) {
    if _, found := oldRc.Spec.Selector[deploymentKey]; !found {
        SetNextControllerAnnotation(oldRc, newName)
        return AddDeploymentKeyToReplicationController(oldRc, rcClient, podClient, deploymentKey, deploymentValue, namespace, out)
    }

    // If we didn't need to update the controller for the deployment key, we still need to write
    // the "next" controller.
    applyUpdate := func(rc *api.ReplicationController) {
        SetNextControllerAnnotation(rc, newName)
    }
    return updateRcWithRetries(rcClient, namespace, oldRc, applyUpdate)
}

With the logic above, the new RC has been determined and the link from the old RC to the new RC has been established. Back in Run, if the dry-run flag is set, the command simply prints the old and new RC information and returns. For a real rolling update, a kubectl.RollingUpdater object is created to carry out the work, with its parameters placed in a RollingUpdaterConfig. The code is as follows:

In the Run function of /pkg/kubectl/cmd/rollingupdate.go:

    updateCleanupPolicy := kubectl.DeleteRollingUpdateCleanupPolicy
    if o.KeepOldName {
        updateCleanupPolicy = kubectl.RenameRollingUpdateCleanupPolicy
    }
    config := &kubectl.RollingUpdaterConfig{
        Out:            o.Out,
        OldRc:          oldRc,
        NewRc:          newRc,
        UpdatePeriod:   o.Period,
        Interval:       o.Interval,
        Timeout:        timeout,
        CleanupPolicy:  updateCleanupPolicy,
        MaxUnavailable: intstr.FromInt(0),
        MaxSurge:       intstr.FromInt(1),
    }
    if o.Rollback {

Out is the output stream (the screen); UpdatePeriod is the time to wait between scaling operations during the rolling update; Interval is the polling interval used when checking the replication controller's status; Timeout is the maximum time to wait for the replication controller to update before giving up; CleanupPolicy determines what to do with the old RC once the upgrade finishes.
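The Interval/Timeout pair follows the usual Kubernetes polling idiom. As a hedged illustration only (not the updater's actual code, which wraps these values in its own RetryParams), the apimachinery wait package expresses the same idea:

package main

import (
    "fmt"
    "time"

    "k8s.io/apimachinery/pkg/util/wait"
)

func main() {
    interval := 3 * time.Second // how often to poll controller status
    timeout := 5 * time.Minute  // give up if no success within this window

    start := time.Now()
    err := wait.Poll(interval, timeout, func() (bool, error) {
        // In the real updater this would check observed replica counts via the
        // API server; here we just pretend readiness arrives after 10 seconds.
        return time.Since(start) > 10*time.Second, nil
    })
    if err != nil {
        fmt.Println("rolling update step timed out:", err)
        return
    }
    fmt.Println("controller reached the desired state")
}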

After the config parameters are set, execution continues:

    if o.Rollback {
        err = kubectl.AbortRollingUpdate(config)
        if err != nil {
            return err
        }
        coreClient.ReplicationControllers(config.NewRc.Namespace).Update(config.NewRc)
    }
    err = updater.Update(config)
    if err != nil {
        return err
    }

The RollingUpdater's Update method is the heart of the rolling update and takes the config object above as its parameter. Its core flow is to repeatedly increase the new RC's pod replica count by one and decrease the old RC's by one, until the new RC reaches the desired replica count and the old RC drops to zero. Because both replica counts keep changing throughout this process, the original, unchanging desired replica count must be recorded somewhere; that place is the RC annotation kubectl.kubernetes.io/desired-replicas.

Below is the Update method:

/pkg/kubectl/rolling_updater.go

func (r *RollingUpdater) Update(config *RollingUpdaterConfig) error {
    out := config.Out
    oldRc := config.OldRc
    scaleRetryParams := NewRetryParams(config.Interval, config.Timeout)

    // Find an existing controller (for continuing an interrupted update) or
    // create a new one if necessary.
    sourceId := fmt.Sprintf("%s:%s", oldRc.Name, oldRc.UID)
    newRc, existed, err := r.getOrCreateTargetController(config.NewRc, sourceId)
    if err != nil {
        return err
    }
    if existed {
        fmt.Fprintf(out, "Continuing update with existing controller %s.\n", newRc.Name)
    } else {
        fmt.Fprintf(out, "Created %s\n", newRc.Name)
    }

    // Extract the desired replica count from the controller.
    desiredAnnotation, err := strconv.Atoi(newRc.Annotations[desiredReplicasAnnotation])
    if err != nil {
        return fmt.Errorf("Unable to parse annotation for %s: %s=%s",
            newRc.Name, desiredReplicasAnnotation, newRc.Annotations[desiredReplicasAnnotation])
    }

This is where the desired-replicas annotation described above is put to use: the desired replica count is read back from the new RC.

The second half of Update is where the old RC is gradually replaced by the new one:

    // Scale newRc and oldRc until newRc has the desired number of replicas and
    // oldRc has 0 replicas.
    progressDeadline := time.Now().UnixNano() + config.Timeout.Nanoseconds()
    for newRc.Spec.Replicas != desired || oldRc.Spec.Replicas != 0 {
        // Store the existing replica counts for progress timeout tracking.
        newReplicas := newRc.Spec.Replicas
        oldReplicas := oldRc.Spec.Replicas

        // Scale up as much as possible.
        scaledRc, err := r.scaleUp(newRc, oldRc, desired, maxSurge, maxUnavailable, scaleRetryParams, config)
        if err != nil {
            return err
        }
        newRc = scaledRc

        // notify the caller if necessary
        if err := progress(false); err != nil {
            return err
        }

        // Wait between scaling operations for things to settle.
        time.Sleep(config.UpdatePeriod)

        // Scale down as much as possible.
        scaledRc, err = r.scaleDown(newRc, oldRc, desired, minAvailable, maxUnavailable, maxSurge, config)
        if err != nil {
            return err
        }
        oldRc = scaledRc

        // notify the caller if necessary
        if err := progress(false); err != nil {
            return err
        }

        // If we are making progress, continue to advance the progress deadline.
        // Otherwise, time out with an error.
        progressMade := (newRc.Spec.Replicas != newReplicas) || (oldRc.Spec.Replicas != oldReplicas)
        if progressMade {
            progressDeadline = time.Now().UnixNano() + config.Timeout.Nanoseconds()
        } else if time.Now().UnixNano() > progressDeadline {
            return fmt.Errorf("timed out waiting for any update progress to be made")
        }
    }

In other words, the loop runs until the new RC's pod replica count reaches the desired value and the old RC's drops to zero, which is exactly the condition of the for statement.

As the code shows, each iteration of the swap consists of three steps: scaleUp, a sleep of UpdatePeriod, and scaleDown.
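Stripped of retries, surge/unavailability math, and progress deadlines, the ramp logic reduces to something like the following hedged sketch, where scaleNewUp and scaleOldDown are invented stand-ins for the real scaleUp/scaleDown:

package main

import (
    "fmt"
    "time"
)

type rc struct {
    name     string
    replicas int
}

// scaleNewUp and scaleOldDown move one replica at a time toward the target,
// a simplification of RollingUpdater's scaleUp/scaleDown.
func scaleNewUp(newRc *rc, desired int) {
    if newRc.replicas < desired {
        newRc.replicas++
    }
}

func scaleOldDown(oldRc *rc) {
    if oldRc.replicas > 0 {
        oldRc.replicas--
    }
}

func main() {
    desired := 3
    oldRc := &rc{name: "web-old", replicas: 3}
    newRc := &rc{name: "web-new", replicas: 0}
    updatePeriod := 10 * time.Millisecond // stands in for config.UpdatePeriod

    for newRc.replicas != desired || oldRc.replicas != 0 {
        scaleNewUp(newRc, desired)
        time.Sleep(updatePeriod) // let things settle between scaling operations
        scaleOldDown(oldRc)
        fmt.Printf("%s=%d %s=%d\n", newRc.name, newRc.replicas, oldRc.name, oldRc.replicas)
    }
    fmt.Println("rolling update finished")
}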

Summary:

rolling update is the most complex of the kubectl commands. Judging by its functionality and flow, it could just as well be implemented as a job running in kube-controller-manager, with the client merely issuing the commands to create the job and query its status.

Reposted from: https://www.cnblogs.com/sichenzhao/p/9320106.html
