A PersistentVolumeClaim (PVC) is a user's request for storage. It is similar to a Pod: Pods consume node resources, while PVCs consume storage resources.

A StorageClass gives administrators a way to describe the "classes" of storage they offer. Different classes might map to quality-of-service levels, backup policies, or arbitrary policies determined by the cluster administrators.

pv.kubernetes.io/bind-completed: yes — the PVC's binding has been completed

pv.kubernetes.io/bound-by-controller: yes — the binding was established by the controller, not pre-bound by the user
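For reference, these keys appear as constants in pv_controller.go (values as in the Kubernetes source; the comments here are mine):

const (
	// annBindCompleted: set on a PVC once binding completes; syncClaim uses
	// it to route to syncUnboundClaim vs. syncBoundClaim.
	annBindCompleted = "pv.kubernetes.io/bind-completed"
	// annBoundByController: set when the binding was made by the controller
	// rather than pre-bound by the user.
	annBoundByController = "pv.kubernetes.io/bound-by-controller"
)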

Lifecycle of a volume and claim

PVs are resources in the cluster; PVCs are requests for those resources. The lifecycle:

Provisioning --> Binding --> Using --> Releasing --> Recycling

Provisioning: static or dynamic

  • Static: the administrator creates a number of PVs in advance

  • Dynamic: when none of the static PVs match a user's PersistentVolumeClaim, the cluster may try to dynamically provision a volume for the PVC. This relies on StorageClasses: the PVC must request a class.

Binding takes two steps (see the sketch after this list):

  • First, set PV.Spec.ClaimRef
  • Then, set PVC.Spec.VolumeName
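A minimal illustrative sketch of those two pointers (a hypothetical helper, not the controller's code; the real bindVolumeToClaim / bindClaimToVolume in pv_controller.go also persist the objects, update status phases, and set the annotations above):

func bindSketch(pv *v1.PersistentVolume, pvc *v1.PersistentVolumeClaim) {
	// Step 1: PV.Spec.ClaimRef points at the claim (including its UID).
	pv.Spec.ClaimRef = &v1.ObjectReference{
		Kind:      "PersistentVolumeClaim",
		Namespace: pvc.Namespace,
		Name:      pvc.Name,
		UID:       pvc.UID,
	}
	// Step 2: PVC.Spec.VolumeName points back at the volume.
	pvc.Spec.VolumeName = pv.Name
}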

The PV controller watches PV and PVC resources and performs the corresponding updates; this article analyzes that code.

Define nfs-pv.yaml (PersistentVolumes are cluster-scoped, so no namespace is needed):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 500Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.73.184
    path: /nfs/data

Define pvc.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi

0. Entry point: the NewControllerInitializers function

Registers persistentvolume-binder: controllers["persistentvolume-binder"] = startPersistentVolumeBinderController

// NewControllerInitializers is a public map of named controller groups (you can start more than one in an init func)
// paired to their InitFunc.  This allows for structured downstream composition and subdivision.
func NewControllerInitializers(loopMode ControllerLoopMode) map[string]InitFunc {
	controllers := map[string]InitFunc{}
	// ... other controllers elided ...
	controllers["persistentvolume-binder"] = startPersistentVolumeBinderController
	controllers["attachdetach"] = startAttachDetachController
	controllers["persistentvolume-expander"] = startVolumeExpandController
	controllers["clusterrole-aggregation"] = startClusterRoleAggregrationController
	controllers["pvc-protection"] = startPVCProtectionController
	controllers["pv-protection"] = startPVProtectionController
	controllers["ttl-after-finished"] = startTTLAfterFinishedController
	controllers["root-ca-cert-publisher"] = startRootCACertPublisher
	return controllers
}

1. The NewController function

     Creates the PV controller; the resources it watches include volumes, PVCs, pods, nodes, storage classes, and so on.

    1.1 Instantiate PersistentVolumeController

       Including the caches and the claim/volume work queues; other fields are omitted here.

controller := &PersistentVolumeController{
	volumes:                       newPersistentVolumeOrderedIndex(),
	claims:                        cache.NewStore(cache.DeletionHandlingMetaNamespaceKeyFunc),
	kubeClient:                    p.KubeClient,
	eventRecorder:                 eventRecorder,
	runningOperations:             goroutinemap.NewGoRoutineMap(true /* exponentialBackOffOnError */),
	cloud:                         p.Cloud,
	enableDynamicProvisioning:     p.EnableDynamicProvisioning,
	clusterName:                   p.ClusterName,
	createProvisionedPVRetryCount: createProvisionedPVRetryCount,
	createProvisionedPVInterval:   createProvisionedPVInterval,
	claimQueue:                    workqueue.NewNamed("claims"),
	volumeQueue:                   workqueue.NewNamed("volumes"),
	resyncPeriod:                  p.SyncPeriod,
}

    1.2 Initialize the full set of volume plugins, including hostpath, NFS, CSI, and so on

// Prober is nil because PV is not aware of Flexvolume.
if err := controller.volumePluginMgr.InitPlugins(p.VolumePlugins, nil /* prober */, controller); err != nil {
	return nil, fmt.Errorf("Could not initialize volume plugins for PersistentVolume Controller: %v", err)
}

    1.3 Register the volume informer event handlers

p.VolumeInformer.Informer().AddEventHandler(
	cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { controller.enqueueWork(controller.volumeQueue, obj) },
		UpdateFunc: func(oldObj, newObj interface{}) { controller.enqueueWork(controller.volumeQueue, newObj) },
		DeleteFunc: func(obj interface{}) { controller.enqueueWork(controller.volumeQueue, obj) },
	},
)
controller.volumeLister = p.VolumeInformer.Lister()
controller.volumeListerSynced = p.VolumeInformer.Informer().HasSynced
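All three handlers funnel into enqueueWork, which converts the object into a namespace/name key and adds it to the given queue (abridged from pv_controller_base.go):

func (ctrl *PersistentVolumeController) enqueueWork(queue workqueue.Interface, obj interface{}) {
	// Beware of "xxx deleted" events: unwrap the tombstone first.
	if unknown, ok := obj.(cache.DeletedFinalStateUnknown); ok && unknown.Obj != nil {
		obj = unknown.Obj
	}
	objName, err := controller.KeyFunc(obj)
	if err != nil {
		klog.Errorf("failed to get key from object: %v", err)
		return
	}
	klog.V(5).Infof("enqueued %q for sync", objName)
	queue.Add(objName)
}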

    1.4 Register the claim informer event handlers

p.ClaimInformer.Informer().AddEventHandler(
	cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { controller.enqueueWork(controller.claimQueue, obj) },
		UpdateFunc: func(oldObj, newObj interface{}) { controller.enqueueWork(controller.claimQueue, newObj) },
		DeleteFunc: func(obj interface{}) { controller.enqueueWork(controller.claimQueue, obj) },
	},
)
controller.claimLister = p.ClaimInformer.Lister()
controller.claimListerSynced = p.ClaimInformer.Informer().HasSynced

    1.5 Register the storageclass, pod, and node informers

controller.classLister = p.ClassInformer.Lister()
controller.classListerSynced = p.ClassInformer.Informer().HasSynced
controller.podLister = p.PodInformer.Lister()
controller.podListerSynced = p.PodInformer.Informer().HasSynced
controller.NodeLister = p.NodeInformer.Lister()
controller.NodeListerSynced = p.NodeInformer.Informer().HasSynced

     --> ctrl.resync

     --> ctrl.volumeWorker

              --> updateVolume

                     --> ctrl.syncVolume

     --> ctrl.claimWorker

              -->  ctrl.updateClaim

                        -->  ctrl.storeClaimUpdate

                        -->  ctrl.syncClaim

                                 --> ctrl.syncUnboundClaim

                                 --> syncBoundClaim

2. The Run function

        Periodically runs three loops. The resync loop periodically lists PVs and PVCs and re-enqueues them.

        Controllers all follow the same pattern; volumeWorker and claimWorker are analyzed separately below.

// Run starts all of this controller's control loops
func (ctrl *PersistentVolumeController) Run(stopCh <-chan struct{}) {
	defer utilruntime.HandleCrash()
	defer ctrl.claimQueue.ShutDown()
	defer ctrl.volumeQueue.ShutDown()

	klog.Infof("Starting persistent volume controller")
	defer klog.Infof("Shutting down persistent volume controller")

	if !controller.WaitForCacheSync("persistent volume", stopCh, ctrl.volumeListerSynced, ctrl.claimListerSynced, ctrl.classListerSynced, ctrl.podListerSynced, ctrl.NodeListerSynced) {
		return
	}

	ctrl.initializeCaches(ctrl.volumeLister, ctrl.claimLister)

	go wait.Until(ctrl.resync, ctrl.resyncPeriod, stopCh)
	go wait.Until(ctrl.volumeWorker, time.Second, stopCh)
	go wait.Until(ctrl.claimWorker, time.Second, stopCh)

	metrics.Register(ctrl.volumes.store, ctrl.claims)

	<-stopCh
}

3. The volumeWorker function

    Takes an item from the queue for processing; if the queue is shutting down, the worker exits.

    If the volume still exists in the informer cache, updateVolume handles the event (add / update / periodic sync); otherwise the event must have been a delete, and deleteVolume is called. Processed items are marked Done on the queue.

// volumeWorker processes items from volumeQueue. It must run only once,
// syncVolume is not assured to be reentrant.
func (ctrl *PersistentVolumeController) volumeWorker() {
	workFunc := func() bool {
		keyObj, quit := ctrl.volumeQueue.Get()
		if quit {
			return true
		}
		defer ctrl.volumeQueue.Done(keyObj)
		key := keyObj.(string)
		_, name, err := cache.SplitMetaNamespaceKey(key)
		if err != nil {
			return false
		}
		volume, err := ctrl.volumeLister.Get(name)
		if err == nil {
			// The volume still exists in informer cache, the event must have
			// been add/update/sync
			ctrl.updateVolume(volume)
			return false
		}
		// Not in the informer cache: the event must have been "delete";
		// re-read the volume from the controller's own cache.
		volumeObj, found, err := ctrl.volumes.store.GetByKey(key)
		if err != nil || !found {
			return false
		}
		volume, ok := volumeObj.(*v1.PersistentVolume)
		if !ok {
			return false
		}
		ctrl.deleteVolume(volume)
		return false
	}
	for {
		if quit := workFunc(); quit {
			klog.Infof("volume worker queue shutting down")
			return
		}
	}
}

    3.1 The updateVolume function

     Stores the new volume version in the cache; if it is not newer than the cached version, processing stops.

     Otherwise syncVolume is called, analyzed next.

// updateVolume runs in worker thread and handles "volume added",
// "volume updated" and "periodic sync" events.
func (ctrl *PersistentVolumeController) updateVolume(volume *v1.PersistentVolume) {
	// Store the new volume version in the cache and do not process it if this
	// is an old version.
	new, err := ctrl.storeVolumeUpdate(volume)
	if err != nil {
		klog.Errorf("%v", err)
	}
	if !new {
		return
	}
	if err = ctrl.syncVolume(volume); err != nil {
		// Logged only; the volume will be retried on the next sync.
		klog.V(2).Infof("could not sync volume %q: %+v", volume.Name, err)
	}
}
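The "is this version new?" decision comes from the helper behind storeVolumeUpdate / storeClaimUpdate (storeObjectUpdate in pv_controller.go), which compares resourceVersions so that stale informer events are dropped. A simplified sketch, not the verbatim helper (error handling trimmed):

func isNewer(store cache.Store, key string, obj metav1.Object) bool {
	oldObj, found, _ := store.GetByKey(key)
	if !found {
		return true // first time we see this object: store it
	}
	oldRV, _ := strconv.ParseInt(oldObj.(metav1.Object).GetResourceVersion(), 10, 64)
	newRV, _ := strconv.ParseInt(obj.GetResourceVersion(), 10, 64)
	// Only strictly older versions are dropped; an equal version (delivered
	// by periodic resync) is stored and processed again.
	return newRV >= oldRV
}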

    3.2 The syncVolume function

      3.2.1 If spec.claimRef is unset, the PV has never been used: updateVolumePhase sets the phase to Available and updates the cache.

// [Unit test set 4]
if volume.Spec.ClaimRef == nil {
	// Volume is unused
	klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume is unused", volume.Name)
	if _, err := ctrl.updateVolumePhase(volume, v1.VolumeAvailable, ""); err != nil {
		// Nothing was saved; we will fall back into the same
		// condition in the next call to this method
		return err
	}
	return nil
}

        The rest analyzes the cases where spec.claimRef has already been set.

      3.2.2 The PV is reserved for a PVC but the binding is not finished (ClaimRef.UID is empty): update the phase to Available and let the PVC sync complete the binding.

} else /* pv.Spec.ClaimRef != nil */ {
	// Volume is bound to a claim.
	if volume.Spec.ClaimRef.UID == "" {
		// The PV is reserved for a PVC; that PVC has not yet been
		// bound to this PV; the PVC sync will handle it.
		klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume is pre-bound to claim %s", volume.Name, claimrefToClaimKey(volume.Spec.ClaimRef))
		if _, err := ctrl.updateVolumePhase(volume, v1.VolumeAvailable, ""); err != nil {
			// Nothing was saved; we will fall back into the same
			// condition in the next call to this method
			return err
		}
		return nil
	}

     3.2.3 This block is long, but the handling is simple.

      Look up the PVC referenced by the PV's claimRef. If it is not found in the controller's local cache and the PV was bound by the controller, double-check the informer cache and then the API server, so that a PV is never wrongly reclaimed because of a stale cache.

// Get the PVC by _name_
var claim *v1.PersistentVolumeClaim
claimName := claimrefToClaimKey(volume.Spec.ClaimRef)
obj, found, err := ctrl.claims.GetByKey(claimName)
if err != nil {
	return err
}
if !found && metav1.HasAnnotation(volume.ObjectMeta, annBoundByController) {
	// If PV is bound by external PV binder (e.g. kube-scheduler), it's
	// possible on heavy load that corresponding PVC is not synced to
	// controller local cache yet. So we need to double-check PVC in
	//   1) informer cache
	//   2) apiserver if not found in informer cache
	// to make sure we will not reclaim a PV wrongly.
	// Note that only non-released and non-failed volumes will be
	// updated to Released state when PVC does not exist.
	if volume.Status.Phase != v1.VolumeReleased && volume.Status.Phase != v1.VolumeFailed {
		obj, err = ctrl.claimLister.PersistentVolumeClaims(volume.Spec.ClaimRef.Namespace).Get(volume.Spec.ClaimRef.Name)
		if err != nil && !apierrs.IsNotFound(err) {
			return err
		}
		found = !apierrs.IsNotFound(err)
		if !found {
			obj, err = ctrl.kubeClient.CoreV1().PersistentVolumeClaims(volume.Spec.ClaimRef.Namespace).Get(volume.Spec.ClaimRef.Name, metav1.GetOptions{})
			if err != nil && !apierrs.IsNotFound(err) {
				return err
			}
			found = !apierrs.IsNotFound(err)
		}
	}
}
if !found {
	klog.V(4).Infof("synchronizing PersistentVolume[%s]: claim %s not found", volume.Name, claimrefToClaimKey(volume.Spec.ClaimRef))
	// Fall through with claim = nil
} else {
	var ok bool
	claim, ok = obj.(*v1.PersistentVolumeClaim)
	if !ok {
		return fmt.Errorf("Cannot convert object from volume cache to volume %q!?: %#v", claim.Spec.VolumeName, obj)
	}
	klog.V(4).Infof("synchronizing PersistentVolume[%s]: claim %s found: %s", volume.Name, claimrefToClaimKey(volume.Spec.ClaimRef), getClaimStatusForLogging(claim))
}
if claim != nil && claim.UID != volume.Spec.ClaimRef.UID {
	// The claim that the PV was pointing to was deleted, and another
	// with the same name created.
	klog.V(4).Infof("synchronizing PersistentVolume[%s]: claim %s has different UID, the old one must have been deleted", volume.Name, claimrefToClaimKey(volume.Spec.ClaimRef))
	// Treat the volume as bound to a missing claim.
	claim = nil
}

    3.2.4 The claim was deleted (or was never found).

       In this case reclaimVolume (covered in section 4) is called; depending on the reclaim policy (Retain / Delete / Recycle), the volume is retained, deleted, or recycled.

if claim == nil {
	// If we get into this block, the claim must have been deleted;
	// NOTE: reclaimVolume may either release the PV back into the pool or
	// recycle it or do nothing (retain)

	// Do not overwrite previous Failed state - let the user see that
	// something went wrong, while we still re-try to reclaim the
	// volume.
	if volume.Status.Phase != v1.VolumeReleased && volume.Status.Phase != v1.VolumeFailed {
		// Also, log this only once:
		klog.V(2).Infof("volume %q is released and reclaim policy %q will be executed", volume.Name, volume.Spec.PersistentVolumeReclaimPolicy)
		if volume, err = ctrl.updateVolumePhase(volume, v1.VolumeReleased, ""); err != nil {
			// Nothing was saved; we will fall back into the same condition
			// in the next call to this method
			return err
		}
	}
	if err = ctrl.reclaimVolume(volume); err != nil {
		// Release failed, we will fall back into the same condition
		// in the next call to this method
		return err
	}
	return nil
}

    3.2.5 The binding is still in progress: enqueue the claim so the next syncClaim can complete it.

} else if claim.Spec.VolumeName == "" {
	if isMismatch, err := checkVolumeModeMismatches(&claim.Spec, &volume.Spec); err != nil || isMismatch {
		// Binding for the volume won't be called in syncUnboundClaim,
		// because findBestMatchForClaim won't return the volume due to volumeMode mismatch.
		volumeMsg := fmt.Sprintf("Cannot bind PersistentVolume to requested PersistentVolumeClaim %q due to incompatible volumeMode.", claim.Name)
		ctrl.eventRecorder.Event(volume, v1.EventTypeWarning, events.VolumeMismatch, volumeMsg)
		claimMsg := fmt.Sprintf("Cannot bind PersistentVolume %q to requested PersistentVolumeClaim due to incompatible volumeMode.", volume.Name)
		ctrl.eventRecorder.Event(claim, v1.EventTypeWarning, events.VolumeMismatch, claimMsg)
		// Skipping syncClaim
		return nil
	}

	if metav1.HasAnnotation(volume.ObjectMeta, annBoundByController) {
		// The binding is not completed; let PVC sync handle it
		klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume not bound yet, waiting for syncClaim to fix it", volume.Name)
	} else {
		// Dangling PV; try to re-establish the link in the PVC sync
		klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it", volume.Name)
	}
	// In both cases, the volume is Bound and the claim is Pending.
	// Next syncClaim will fix it. To speed it up, we enqueue the claim
	// into the controller, which results in syncClaim to be called
	// shortly (and in the right worker goroutine).
	// This speeds up binding of provisioned volumes - provisioner saves
	// only the new PV and it expects that next syncClaim will bind the
	// claim to it.
	ctrl.claimQueue.Add(claimToClaimKey(claim))
	return nil

    3.2.6 Fully bound: update the status phase to Bound if necessary.

} else if claim.Spec.VolumeName == volume.Name {
	// Volume is bound to a claim properly, update status if necessary
	klog.V(4).Infof("synchronizing PersistentVolume[%s]: all is bound", volume.Name)
	if _, err = ctrl.updateVolumePhase(volume, v1.VolumeBound, ""); err != nil {
		// Nothing was saved; we will fall back into the same
		// condition in the next call to this method
		return err
	}
	return nil

    3.2.7 The volume is bound to a claim, but the claim is bound to another PV. If the volume was dynamically provisioned with reclaim policy Delete, it is released and then deleted by reclaimVolume.

} else {
	// Volume is bound to a claim, but the claim is bound elsewhere
	if metav1.HasAnnotation(volume.ObjectMeta, annDynamicallyProvisioned) && volume.Spec.PersistentVolumeReclaimPolicy == v1.PersistentVolumeReclaimDelete {
		// This volume was dynamically provisioned for this claim. The
		// claim got bound elsewhere, and thus this volume is not
		// needed. Delete it.
		// Mark the volume as Released for external deleters and to let
		// the user know. Don't overwrite existing Failed status!
		if volume.Status.Phase != v1.VolumeReleased && volume.Status.Phase != v1.VolumeFailed {
			// Also, log this only once:
			klog.V(2).Infof("dynamically volume %q is released and it will be deleted", volume.Name)
			if volume, err = ctrl.updateVolumePhase(volume, v1.VolumeReleased, ""); err != nil {
				// Nothing was saved; we will fall back into the same condition
				// in the next call to this method
				return err
			}
		}
		if err = ctrl.reclaimVolume(volume); err != nil {
			// Deletion failed, we will fall back into the same condition
			// in the next call to this method
			return err
		}
		return nil

    3.2.8 The volume is bound to a claim that is bound to another volume, and the volume was not dynamically provisioned: unbind it.

	} else {
		// Volume is bound to a claim, but the claim is bound elsewhere
		// and it's not dynamically provisioned.
		if metav1.HasAnnotation(volume.ObjectMeta, annBoundByController) {
			// This is part of the normal operation of the controller; the
			// controller tried to use this volume for a claim but the claim
			// was fulfilled by another volume. We did this; fix it.
			klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume is bound by controller to a claim that is bound to another volume, unbinding", volume.Name)
			if err = ctrl.unbindVolume(volume); err != nil {
				return err
			}
			return nil
		} else {
			// The PV must have been created with this ptr; leave it alone.
			klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume is bound by user to a claim that is bound to another volume, waiting for the claim to get unbound", volume.Name)
			// This just updates the volume phase and clears
			// volume.Spec.ClaimRef.UID. It leaves the volume pre-bound
			// to the claim.
			if err = ctrl.unbindVolume(volume); err != nil {
				return err
			}
			return nil
		}
	}

4. The reclaimVolume function

     Path: pkg/controller/volume/persistentvolume/pv_controller.go

     If the policy is Retain, nothing needs to be done.

    4.1 case v1.PersistentVolumeReclaimRecycle

       The recycleVolumeOperation function finds a recyclable plugin and scrubs the volume so it can become Available again.

    4.2 case v1.PersistentVolumeReclaimDelete

        The deleteVolumeOperation function finds the plugin to delete the underlying storage, then calls the API to delete the PV.
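A condensed sketch of reclaimVolume's policy switch (simplified from pv_controller.go; the real operation names include the volume name, and error handling is trimmed):

switch volume.Spec.PersistentVolumeReclaimPolicy {
case v1.PersistentVolumeReclaimRetain:
	// Retain: leave the volume in Released state for the admin to handle.
case v1.PersistentVolumeReclaimRecycle:
	// Recycle: scrub the volume asynchronously so it becomes Available.
	ctrl.scheduleOperation("recycle-"+string(volume.UID), func() error {
		ctrl.recycleVolumeOperation(volume)
		return nil
	})
case v1.PersistentVolumeReclaimDelete:
	// Delete: remove the underlying storage, then the PV object itself.
	ctrl.scheduleOperation("delete-"+string(volume.UID), func() error {
		_, err := ctrl.deleteVolumeOperation(volume)
		return err
	})
default:
	// Unknown policy: the volume is marked Failed with an event.
}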

5. The claimWorker function

        Takes an item from the queue; if the queue is shutting down, the worker exits. If the claim still exists in the informer cache, updateClaim handles it; otherwise deleteClaim is called.

// claimWorker processes items from claimQueue. It must run only once,
// syncClaim is not reentrant.
func (ctrl *PersistentVolumeController) claimWorker() {
	workFunc := func() bool {
		keyObj, quit := ctrl.claimQueue.Get()
		if quit {
			return true
		}
		defer ctrl.claimQueue.Done(keyObj)
		key := keyObj.(string)
		namespace, name, err := cache.SplitMetaNamespaceKey(key)
		if err != nil {
			klog.V(4).Infof("error getting namespace & name of claim %q to get claim from informer: %v", key, err)
			return false
		}
		claim, err := ctrl.claimLister.PersistentVolumeClaims(namespace).Get(name)
		if err == nil {
			// The claim still exists in informer cache, the event must have
			// been add/update/sync
			ctrl.updateClaim(claim)
			return false
		}
		// Not in the informer cache: the event must have been "delete";
		// re-read the claim from the controller's own cache.
		claimObj, found, err := ctrl.claims.GetByKey(key)
		if err != nil || !found {
			return false
		}
		claim, ok := claimObj.(*v1.PersistentVolumeClaim)
		if !ok {
			return false
		}
		ctrl.deleteClaim(claim)
		return false
	}
	for {
		if quit := workFunc(); quit {
			klog.Infof("claim worker queue shutting down")
			return
		}
	}
}

    5.1 The syncClaim function

        Branches on the pv.kubernetes.io/bind-completed annotation: without it, syncUnboundClaim handles the claim; with it, syncBoundClaim does.

// syncClaim is the main controller method to decide what to do with a claim.
// It's invoked by appropriate cache.Controller callbacks when a claim is
// created, updated or periodically synced. We do not differentiate between
// these events.
// For easier readability, it was split into syncUnboundClaim and syncBoundClaim
// methods.
func (ctrl *PersistentVolumeController) syncClaim(claim *v1.PersistentVolumeClaim) error {
	klog.V(4).Infof("synchronizing PersistentVolumeClaim[%s]: %s", claimToClaimKey(claim), getClaimStatusForLogging(claim))

	if !metav1.HasAnnotation(claim.ObjectMeta, annBindCompleted) {
		return ctrl.syncUnboundClaim(claim)
	} else {
		return ctrl.syncBoundClaim(claim)
	}
}

6. syncUnboundClaim: handling unbound PVCs

     If claim.Spec.VolumeName == "", the PVC is still pending; the other case is that the user has already specified a PV.

    6.1 claim.Spec.VolumeName == ""

        Select the best-matching volume: the one with the smallest capacity surplus, after filtering on access modes and other constraints (a simplified illustration follows the code excerpt below).

// [Unit test set 1]
volume, err := ctrl.volumes.findBestMatchForClaim(claim, delayBinding)
if err != nil {
	klog.V(2).Infof("synchronizing unbound PersistentVolumeClaim[%s]: Error finding PV for claim: %v", claimToClaimKey(claim), err)
	return fmt.Errorf("Error finding PV for claim %q: %v", claimToClaimKey(claim), err)
}

    6.2 No matching volume available

     If the claim requests a storage class, provisionClaim (section 8) is called; otherwise the claim is marked Pending and retried in the next round.

if volume == nil {
	klog.V(4).Infof("synchronizing unbound PersistentVolumeClaim[%s]: no volume found", claimToClaimKey(claim))
	// No PV could be found
	// OBSERVATION: pvc is "Pending", will retry
	switch {
	case delayBinding:
		ctrl.eventRecorder.Event(claim, v1.EventTypeNormal, events.WaitForFirstConsumer, "waiting for first consumer to be created before binding")
	case v1helper.GetPersistentVolumeClaimClass(claim) != "":
		if err = ctrl.provisionClaim(claim); err != nil {
			return err
		}
		return nil
	default:
		ctrl.eventRecorder.Event(claim, v1.EventTypeNormal, events.FailedBinding, "no persistent volumes available for this claim and no storage class is set")
	}

	// Mark the claim as Pending and try to find a match in the next
	// periodic syncClaim
	if _, err = ctrl.updateClaimStatus(claim, v1.ClaimPending, nil); err != nil {
		return err
	}
	return nil
}

    6.3 A volume was found: perform the binding

  • bindVolumeToClaim sets the volume's spec.ClaimRef
  • updateVolumePhase updates the volume's status phase to Bound
  • bindClaimToVolume sets Spec.VolumeName and the annotation pv.kubernetes.io/bind-completed: yes
  • updateClaimStatus updates the claim's status phase to Bound
} else /* pv != nil */ {
	// Found a PV for this claim
	// OBSERVATION: pvc is "Pending", pv is "Available"
	klog.V(4).Infof("synchronizing unbound PersistentVolumeClaim[%s]: volume %q found: %s", claimToClaimKey(claim), volume.Name, getVolumeStatusForLogging(volume))
	if err = ctrl.bind(volume, claim); err != nil {
		// On any error saving the volume or the claim, subsequent
		// syncClaim will finish the binding.
		return err
	}
	// OBSERVATION: claim is "Bound", pv is "Bound"
	return nil
}

        The case where the user has pre-specified a PV is not covered here; it follows the same principle: set spec.ClaimRef on the PV, set spec.VolumeName and the annotations on the PVC, and update the status phases.

7. syncBoundClaim: handling bound PVCs

    Mainly handles inconsistent states.

    7.1 The claim is bound to a non-existent PV

obj, found, err := ctrl.volumes.store.GetByKey(claim.Spec.VolumeName)
if err != nil {
	return err
}
if !found {
	// Claim is bound to a non-existing volume.
	if _, err = ctrl.updateClaimStatusWithEvent(claim, v1.ClaimLost, nil, v1.EventTypeWarning, "ClaimLost", "Bound claim has lost its PersistentVolume. Data on the volume is lost!"); err != nil {
		return err
	}
	return nil
}

     7.2 The PV exists

       When volume.Spec.ClaimRef == nil, or volume.Spec.ClaimRef.UID == claim.UID, the binding is (re-)established; otherwise the volume belongs to another claim and this one is marked Lost. See the sketch below.
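A condensed sketch of the three branches once the PV is found (simplified from syncBoundClaim; logging trimmed):

if volume.Spec.ClaimRef == nil {
	// The claim thinks it is bound, but the volume lost its ClaimRef:
	// re-establish the binding. ctrl.bind is idempotent.
	return ctrl.bind(volume, claim)
} else if volume.Spec.ClaimRef.UID == claim.UID {
	// All is bound, or binding is half-finished; ctrl.bind completes
	// whatever is missing (phases, annotations).
	return ctrl.bind(volume, claim)
} else {
	// The volume is bound to a different claim: this claim is misbound.
	_, err := ctrl.updateClaimStatusWithEvent(claim, v1.ClaimLost, nil,
		v1.EventTypeWarning, "ClaimMisbound", "Two claims are bound to the same volume, this one is bound incorrectly")
	return err
}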

    The dynamic provisioning path:

provisionClaim

      --> provisionClaimOperation

            --> ctrl.findProvisionablePlugin

            -->  ctrl.setClaimProvisioner

            -->  plugin.NewProvisioner

            -->  provisioner.Provision

8. The provisionClaim function

    Asynchronously provisions a volume; the actual work is in provisionClaimOperation.

// provisionClaim starts new asynchronous operation to provision a claim if
// provisioning is enabled.
func (ctrl *PersistentVolumeController) provisionClaim(claim *v1.PersistentVolumeClaim) error {
	if !ctrl.enableDynamicProvisioning {
		return nil
	}
	klog.V(4).Infof("provisionClaim[%s]: started", claimToClaimKey(claim))
	opName := fmt.Sprintf("provision-%s[%s]", claimToClaimKey(claim), string(claim.UID))
	startTime := time.Now()
	ctrl.scheduleOperation(opName, func() error {
		pluginName, err := ctrl.provisionClaimOperation(claim)
		timeTaken := time.Since(startTime).Seconds()
		metrics.RecordVolumeOperationMetric(pluginName, "provision", timeTaken, err)
		return err
	})
	return nil
}

    8.1 The provisionClaimOperation function

     Looks up the provisioning plugin via the storage class.

claimClass := v1helper.GetPersistentVolumeClaimClass(claim)
klog.V(4).Infof("provisionClaimOperation [%s] started, class: %q", claimToClaimKey(claim), claimClass)

plugin, storageClass, err := ctrl.findProvisionablePlugin(claim)
if err != nil {
	ctrl.eventRecorder.Event(claim, v1.EventTypeWarning, events.ProvisioningFailed, err.Error())
	klog.V(2).Infof("error finding provisioning plugin for claim %s: %v", claimToClaimKey(claim), err)
	// The controller will retry provisioning the volume in every
	// syncVolume() call.
	return "", err
}

    8.1.1 Update the PVC with the provisioner annotation, e.g.:

      volume.beta.kubernetes.io/storage-provisioner: ceph.rook.io/block

// Add provisioner annotation so external provisioners know when to start
newClaim, err := ctrl.setClaimProvisioner(claim, provisionerName)
if err != nil {
	// Save failed, the controller will retry in the next sync
	klog.V(2).Infof("error saving claim %s: %v", claimToClaimKey(claim), err)
	return pluginName, err
}
claim = newClaim
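setClaimProvisioner itself is small; roughly (simplified sketch, controller-cache refresh omitted):

if val, ok := claim.Annotations[annStorageProvisioner]; ok && val == provisionerName {
	// Annotation already set, nothing to do.
	return claim, nil
}
// Never mutate the informer's copy: deep-copy before changing it.
claimClone := claim.DeepCopy()
metav1.SetMetaDataAnnotation(&claimClone.ObjectMeta, annStorageProvisioner, provisionerName)
return ctrl.kubeClient.CoreV1().PersistentVolumeClaims(claim.Namespace).Update(claimClone)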

    For external provisioners, the PV controller's work ends here: it has set the annotation volume.beta.kubernetes.io/storage-provisioner.

    With CSI or other external plugins the controller is done at this point; the PV controller itself only provisions with in-tree plugins.

    The external-provisioner watches PVCs, picks out those whose volume.beta.kubernetes.io/storage-provisioner annotation names its own plugin, calls the plugin's Provision method, and sends a gRPC CreateVolumeRequest to the driver (e.g. ceph rbd).
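A hypothetical sketch of the selection check such an external provisioner performs (illustrative names, not the real external-provisioner code; the annotation constant matches pv_controller.go):

const annStorageProvisioner = "volume.beta.kubernetes.io/storage-provisioner"

func shouldProvision(claim *v1.PersistentVolumeClaim, driverName string) bool {
	// Only unbound claims annotated with our driver name are ours to handle.
	return claim.Spec.VolumeName == "" &&
		claim.Annotations[annStorageProvisioner] == driverName
}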

PV summary

    pv.spec.claimRef == nil: the PV has never been used; set its phase to Available.

    pv.spec.claimRef.uid == "": the PV is reserved and still being bound; keep the phase Available.

    If no suitable PV is found and the PVC names a storage class, provision a volume with the plugin.

PVC summary

   pvc.spec.volumeName == "": the PVC is in Pending state.

   Binding sets spec.ClaimRef on the PV, spec.VolumeName plus the annotations on the PVC, and updates the status phases.

    https://kubernetes.io/docs/concepts/storage/persistent-volumes/#binding
