3D Shape Part Segmentation by Vision-Language Model Distillation


Abstract

This paper proposes a cross-modal distillation framework, PartDistill, which transfers 2D knowledge from vision-language models (VLMs) to facilitate 3D shape part segmentation. PartDistill addresses three major challenges in this task: the lack of 3D segmentation in invisible or undetected regions in the 2D projections, inconsistent 2D predictions by VLMs, and the lack of knowledge accumulation across different 3D shapes. PartDistill consists of a teacher network that uses a VLM to make 2D predictions and a student network that learns from the 2D predictions while extracting geometrical features from multiple 3D shapes to carry out 3D part segmentation. A bi-directional distillation, including forward and backward distillations, is carried out within the framework, where the former forward distills the 2D predictions to the student network, and the latter improves the quality of the 2D predictions, which subsequently enhances the final 3D segmentation. Moreover, PartDistill can exploit generative models that facilitate effortless 3D shape creation for generating knowledge sources to be distilled. Through extensive experiments, PartDistill boosts the existing methods with substantial margins on widely used ShapeNetPart and PartNetE datasets, by more than 15% and 12% higher mIoU scores, respectively. The code for this work is available at https://github.com/ardianumam/PartDistill.

1 Introduction

3D shape part segmentation is essential to various 3D vision applications, such as shape editing [23, 45], stylization [29], and augmentation [40]. Despite its significance, acquiring part annotations for 3D data, such as point clouds or mesh shapes, is labor-intensive and time-consuming.

Zero-shot learning [41, 8] generalizes a model to unseen categories without annotations and has been substantially advanced by recent progress in vision-language models (VLMs) [37, 22, 46, 21]. By learning on large-scale image-text data pairs, VLMs show promising generalization abilities on various 2D recognition tasks. Recent research efforts [48, 52, 24, 1] utilize VLMs for zero-shot 3D part segmentation, where a 3D shape is projected into multi-view 2D images and a VLM is applied to these images to acquire 2D predictions. Specifically, PointCLIP [48] and PointCLIPv2 [52] produce 3D point-wise semantic segmentation by averaging the corresponding 2D pixel-wise predictions. Meanwhile, PartSLIP [24] and SATR [1] present a designated weighting mechanism to aggregate multi-view bounding box predictions.


Figure 1: We present a distillation method that carries out zero-shot 3D shape part segmentation with a 2D vision-language model. After projecting an input 3D point cloud into multi-view 2D images, the 2D teacher (2D-T) and the 3D student (3D-S) networks are applied to the 2D images and the 3D point cloud, respectively. Instead of direct transfer, our method carries out bi-directional distillation, including forward and backward distillation, and yields better 3D part segmentation than existing methods.

The key step of zero-shot 3D part segmentation with 2D VLMs, e.g., [48, 52, 24, 1], lies in the transfer from 2D pixel-wise or bounding-box-wise predictions to 3D point segmentation. This step is challenging due to three major issues. First (𝓘𝟏), some 3D regions lack corresponding 2D predictions in multi-view images, which are caused by occlusion or not being covered by any bounding boxes, illustrated with black and gray points, respectively, in Fig. 1. This issue is considered a limitation in the previous work [48, 52, 24, 1]. Second (𝓘𝟐), there exists potential inconsistency among 2D predictions in multi-view images caused by inaccurate VLM predictions. Third (𝓘𝟑), existing work [48, 52, 24, 1] directly transfers 2D predictions to segmentation of a single 3D shape. The 2D predictions yielded based on appearance features are not optimal for 3D geometric shape segmentation, while geometric evidence given across different 3D shapes is not explored.

To alleviate the three issues 𝓘𝟏∼𝓘𝟑, unlike existing methods [48, 52, 24, 1] that directly transfer 2D predictions to 3D segmentation, we propose a cross-modal distillation framework with a teacher-student model. Specifically, a VLM is utilized as the 2D teacher network, accepting multi-view images of a single 3D shape. The VLM is pre-trained on large-scale image-text pairs and can exploit appearance features to make 2D predictions. The student network is built on a point cloud backbone; it learns from multiple unlabeled 3D shapes and extracts point-specific geometric features. The proposed distillation method, PartDistill, leverages the strengths of both networks, hence improving zero-shot 3D part segmentation.

The student network learns from not only the 2D teacher network but also 3D shapes. It can extract point-wise features and segment 3D regions uncovered by 2D predictions, hence tackling issue 𝓘𝟏. As a distillation-based method, PartDistill tolerates inconsistent predictions between the teacher and student networks, which alleviates issue 𝓘𝟐 of negative transfer caused by wrong VLM predictions. The student network considers both appearance and geometric features. Thus, it can better predict 3D geometric data and mitigate issue 𝓘𝟑. As shown in Fig. 1, the student network can correctly predict the undetected arm of the chair (see the black arrows) by learning from other chairs.

PartDistill carries out a bi-directional distillation. It first forward distills the 2D knowledge to the student network. We observe that after the student integrates the 2D knowledge, we can jointly refer to both the teacher and the student knowledge to perform backward distillation, which re-scores the 2D knowledge based on its quality. Knowledge of low quality is suppressed with lower scores, such as from 0.6 to 0.1 for the falsely detected arm box in Fig. 1, and vice versa. Finally, this re-scored knowledge is utilized by the student network to seek better 3D segmentation.

The main contributions of this work are summarized as follows. First, we introduce PartDistill, a cross-modal distillation framework that transfers 2D knowledge from VLMs to facilitate 3D part segmentation. PartDistill addresses three identified issues present in existing methods and generalizes to both VLMs with bounding-box predictions (B-VLMs) and VLMs with pixel-wise predictions (P-VLMs). Second, we propose a bi-directional distillation, which enhances the quality of the 2D knowledge and subsequently improves the 3D predictions. Third, PartDistill can leverage existing generative models [31, 33] to enrich the knowledge sources for distillation. Extensive experiments demonstrate that PartDistill surpasses existing methods by substantial margins on the widely used benchmark datasets ShapeNetPart [44] and PartNetE [24], with more than 15% and 12% higher mIoU scores, respectively. PartDistill consistently outperforms competing methods in zero-shot and few-shot scenarios on 3D data in point clouds or mesh shapes.


Figure 2: Overview of the proposed method. (a) The overall pipeline, where the knowledge extracted from a vision-language model (VLM) is distilled to carry out 3D shape part segmentation by teaching a 3D student network. Within the pipeline, backward distillation is introduced to re-score the teacher's knowledge based on its quality and subsequently improve the final 3D part prediction. (b), (c) Knowledge is extracted by back-projection when we adopt (b) a bounding-box VLM (B-VLM) or (c) a pixel-wise VLM (P-VLM), where Γ and ℂ denote 2D-to-3D back-projection and connected-component labeling [3], respectively.

2 Related Work

Vision-language models.

Based on learning granularity, vision-language models (VLMs) can be grouped into three categories, including the image-level [37, 15], pixel-level [21, 27, 53], and object-level [22, 46, 25] categories. The second and the third categories make pixel-level and bounding box predictions, respectively, while the first category produces image-level predictions. Recent research efforts on VLMs have been made for cross-level predictions. For example, pixel-level predictions can be derived from an image-level VLM via up-sampling the 2D features into the image dimensions, as shown in PointCLIPv2 [52]. In this work, we propose a cross-modal distillation framework that learns and transfers knowledge from a VLM in the 2D domain to 3D shape part segmentation.

3D part segmentation using vision-language models.

State-of-the-art zero-shot 3D part segmentation [24, 52, 1] is developed by utilizing a VLM and transferring its knowledge in the 2D domain to the 3D space. The pioneering work PointCLIP [48] utilizes CLIP [37]. PointCLIPv2 [52] extends PointCLIP by making the projected multi-view images more realistic and proposing LLM-assisted text prompts [4], hence producing more reliable CLIP outputs for 3D part segmentation.

Both PointCLIP and PointCLIPv2 rely on individual pixel predictions in 2D views to obtain the predictions of the corresponding 3D points, but such individual pixel predictions are often unreliable. PartSLIP [24] instead extracts superpoints [20] from the input point cloud, so that 3D segmentation is estimated for each superpoint by referring to a set of relevant pixels in the 2D views. PartSLIP uses GLIP [22] to output bounding boxes and further proposes a weighting mechanism that aggregates multi-view bounding box predictions into 3D superpoint predictions. SATR [1] shares a similar idea with PartSLIP but handles 3D mesh shapes instead of point clouds.

Existing methods [24, 52, 48, 1] directly transfer VLM predictions from 2D images to the 3D space and are thus subject to three issues: (𝓘𝟏) uncovered 3D points, (𝓘𝟐) negative transfer, and (𝓘𝟑) cross-modality predictions, as discussed before. We present a distillation-based method that addresses all three issues and makes substantial performance improvements.

2D to 3D distillation.

Seminal work on knowledge distillation [5, 14] aims to transfer knowledge from a large model to a small one. Subsequent research efforts [39, 28, 50, 26, 43] adopt this idea of transferring knowledge from a 2D model for 3D understanding. However, these methods require further fine-tuning with labeled data. OpenScene [34] and CLIP2Scene [7] require no fine-tuning and share a similar concept with our method of distilling VLMs for 3D understanding, with ours designed for part segmentation and theirs for indoor/outdoor scene segmentation. The major difference is that our method can enhance the knowledge sources in the 2D modality via the proposed backward distillation. Moreover, our method is generalizable to both P-VLMs (pixel-wise VLMs) and B-VLMs (bounding-box VLMs), while their methods are only applicable to P-VLMs.

3 Proposed Method

3.1 Overview

Given a set of 3D shapes, this work aims to segment each one into R semantic parts without training on any part annotations. To this end, we propose a cross-modal bi-directional distillation framework, PartDistill, which transfers 2D knowledge from a VLM to facilitate 3D shape part segmentation. As illustrated in Fig. 2, our framework takes triplet data as input, including the point cloud of the shape with N 3D points, multi-view images rendered from the shape in V different poses, and R text prompts, each describing one of the target semantic parts within the 3D shapes.

For the 2D modality, the V multi-view images and the text prompts are fed into a bounding-box VLM (B-VLM) or a pixel-wise VLM (P-VLM). For each view v, a B-VLM produces a set of bounding boxes, $B_v=\{b_i\}_{i=1}^{\beta}$, while a P-VLM generates a pixel-wise prediction map $S_v$. We then perform knowledge extraction (Sec. 3.2) for each $B_v$ or $S_v$; namely, we transfer the 2D predictions into the 3D space through back-projection for a B-VLM, or connected-component labeling [3] followed by back-projection for a P-VLM, as shown in Fig. 2 (b) and Fig. 2 (c), respectively. Subsequently, a set of D teacher knowledge units, $\mathcal{K}=\{k_d\}_{d=1}^{D}=\{(Y^d, M^d)\}_{d=1}^{D}$, is obtained by aggregating over all V multi-view images. Each unit $k_d$ comprises point-wise part probabilities, $Y^d\in\mathbb{R}^{N\times R}$, from the teacher VLM network, accompanied by a mask, $M^d\in\{0,1\}^N$, identifying the points covered by this knowledge unit.

For the 3D modality, the point cloud is passed to the 3D student network, composed of a 3D encoder and a distillation head, which produces point-wise part predictions $\tilde{Y}\in\mathbb{R}^{N\times R}$. With the proposed bi-directional distillation framework, we first forward distill the teacher's 2D knowledge by aligning $\tilde{Y}$ with $\mathcal{K}$, i.e., by minimizing the proposed loss $\mathcal{L}_{distill}$ specified in Sec. 3.2. Through this optimization, the 3D student network integrates the 2D knowledge from the teacher. The integrated student knowledge $\tilde{Y}'$ and the teacher knowledge $\mathcal{K}$ are then jointly referred to in order to perform backward distillation from 3D to 2D, detailed in Sec. 3.3, which re-scores each knowledge unit $k_d$ based on its quality, as shown in Fig. 2. Finally, the re-scored knowledge $\mathcal{K}'$ is used to refine the student knowledge, and the final part segmentation $\tilde{Y}^f$ is obtained by assigning each point to the part with the highest probability.
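To make the notation concrete, below is a minimal sketch (not the released code) of the triplet input and of a single teacher knowledge unit $k_d=(Y^d, M^d)$; the container names, shapes, and toy values are assumptions based on the description above.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class KnowledgeUnit:
    """One teacher knowledge unit k_d = (Y^d, M^d) back-projected into 3D.

    Y: (N, R) part probabilities predicted by the VLM for every point.
    M: (N,)  binary mask marking the points covered by this 2D prediction.
    The per-point confidence C_n^d used in Eq. (3) is simply Y.max(axis=-1).
    """
    Y: np.ndarray
    M: np.ndarray

@dataclass
class ShapeTriplet:
    """The triplet input of the framework (hypothetical container)."""
    points: np.ndarray   # (N, 3) point cloud of the shape
    views: np.ndarray    # (V, H, W, 3) multi-view renderings of the shape
    prompts: List[str]   # R text prompts, one per target semantic part

# Toy example with N = 4 points and R = 2 parts.
unit = KnowledgeUnit(
    Y=np.array([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7], [0.5, 0.5]]),
    M=np.array([1.0, 1.0, 0.0, 1.0]),
)
print(unit.Y.argmax(-1), unit.M.sum())   # per-point hard labels and coverage
```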

3.2 Forward distillation: 2D to 3D

Our method extracts the teacher's knowledge in the 2D modality and distills it into the 3D space. In the 2D modality, V multi-view images $\{I_v\in\mathbb{R}^{H\times W}\}_{v=1}^{V}$ are rendered from the 3D shape, e.g., using the projection method in [52]. These V multi-view images, together with the text prompts T of the R parts, are passed to the VLM to obtain the knowledge in the 2D space. For a B-VLM, a set of β bounding boxes, $B_v=\{b_i\}_{i=1}^{\beta}$, is obtained from the v-th image, with $b_i\in\mathbb{R}^{4+R}$ encoding the box coordinates and the probabilities of the R parts. For a P-VLM, a pixel-wise prediction map $S_v\in\mathbb{R}^{H\times W\times R}$ is acquired from the v-th image. We apply knowledge extraction to each $B_v$ and each $S_v$ to obtain readily distillable knowledge $\mathcal{K}$ in the 3D space, as illustrated in Fig. 2 (b) and Fig. 2 (c), respectively.

For a B-VLM, the bounding boxes can be directly treated as the teacher knowledge. For a P-VLM, knowledge extraction starts by applying connected-component labeling [3] to $S_v$ to obtain a set of ρ segmentation components, $\{s_i\in\mathbb{R}^{H\times W\times R}\}_{i=1}^{\rho}$, each covering a connected region in which one part r receives the highest probability. We summarize the process of applying a VLM to a rendered image and the part text prompts as

$$\mathrm{VLM}(I_v, T)=\begin{cases}B_v=\{b_i\}_{i=1}^{\beta}, & \text{for B-VLM},\\ \mathbb{C}(S_v)=\{s_i\}_{i=1}^{\rho}, & \text{for P-VLM},\end{cases}\qquad(1)$$

where ℂ denotes connected-component labeling.

We then back-project each box $b_i$ or each segmentation component $s_i$ to the 3D space, i.e.,

$$k_i=(Y_i, M_i)=\begin{cases}\Gamma(b_i), & \text{for B-VLM},\\ \Gamma(s_i), & \text{for P-VLM},\end{cases}\qquad(2)$$

where Γ denotes the back-projection operation with the camera parameters [49] used for multi-view image rendering, $Y_i\in\mathbb{R}^{N\times R}$ contains the point-specific part probabilities, and $M_i\in\{0,1\}^N$ is the mask indicating which 3D points are covered by $b_i$ or $s_i$ in the 2D space. The pair $(Y_i, M_i)$ forms a knowledge unit, $k_i$, upon which knowledge re-scoring is performed in the backward distillation.
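The following sketch illustrates Eqs. (1)-(2) for a P-VLM under simplifying assumptions (not the released code): connected-component labeling is done with scipy, and the back-projection Γ is reduced to a lookup of precomputed per-point pixel coordinates and visibility flags from the renderer.

```python
import numpy as np
from scipy import ndimage  # connected-component labeling, i.e., the operator C

def extract_units_from_pixel_vlm(S_v, pix_uv, visible):
    """Turn one pixel-wise prediction map S_v into knowledge units (Y_i, M_i).

    S_v:     (H, W, R) pixel-wise part probabilities for view v.
    pix_uv:  (N, 2) integer (u, v) pixel coordinates of each 3D point in this view,
             assumed to come from the same camera used for rendering.
    visible: (N,) bool, True if a point is not occluded in this view.
    """
    H, W, R = S_v.shape
    N = pix_uv.shape[0]
    part_map = S_v.argmax(-1)                       # winning part per pixel
    units = []
    for r in range(R):
        labels, num = ndimage.label(part_map == r)  # components s_i of part r
        for comp in range(1, num + 1):
            comp_mask = labels == comp              # (H, W) pixels of s_i
            # Back-projection Gamma: a point is covered if it is visible and its
            # projected pixel lies inside the component.
            covered = visible & comp_mask[pix_uv[:, 1], pix_uv[:, 0]]
            if not covered.any():
                continue
            Y = np.zeros((N, R))
            Y[covered] = S_v[pix_uv[covered, 1], pix_uv[covered, 0]]
            units.append((Y, covered.astype(np.float32)))
    return units
```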

For the 3D modality, a 3D encoder, e.g., Point-M2AE [47], is applied to the point cloud to obtain per-point features, $O\in\mathbb{R}^{N\times E}$, capturing local and global geometrical information. We then estimate the point-wise part predictions, $\tilde{Y}\in\mathbb{R}^{N\times R}$, by feeding the point features O into the distillation head. The cross-modal distillation is performed by teaching the student network to align the part probabilities from the 3D modality, $\tilde{Y}$, with their 2D counterparts, Y, by minimizing our designated distillation loss.

Distillation loss.

Via Eq. 1 and Eq. 2, we assume that D knowledge units, $\mathcal{K}=\{k_d\}_{d=1}^{D}=\{(Y^d, M^d)\}_{d=1}^{D}$, are obtained from the multi-view images. The knowledge $\mathcal{K}$ exploits 2D appearance features and is incomplete, as several 3D points are not covered by any 2D predictions, i.e., issue 𝓘𝟏. To distill this incomplete knowledge, we utilize a masked cross-entropy loss defined as

$$\mathcal{L}_{distill}=-\sum_{d=1}^{D}\frac{1}{|M^d|}\sum_{n=1}^{N}\sum_{r=1}^{R} M_n^d\, C_n^d\, Z_{n,r}^d \log(\tilde{Y}_{n,r}),\qquad(3)$$

where $C_n^d=\max_r(Y_n^d(r))$ is the confidence score of $k_d$ on point n, $Z_{n,r}^d$ takes value 1 if part r receives the highest probability in $k_d$ and 0 otherwise, and $|M^d|$ is the number of points covered by the mask $M^d$.

By minimizing Eq. 3, we teach the student network to align its prediction $\tilde{Y}$ with the distilled prediction Y, considering only the points covered by the masks and using the confidence scores as weights. Despite learning from incomplete knowledge, the student network extracts point features that capture the geometrical information of the shape, enabling it to reasonably segment the points that are not covered by any 2D prediction and hence addressing issue 𝓘𝟏. This can be regarded as interpolating the learned part probabilities in the feature space by the distillation head.
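A minimal PyTorch version of Eq. (3) might look as follows; it assumes the student already outputs softmax probabilities and that each knowledge unit is a pair of tensors, which is a simplification of the actual implementation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(Y_tilde, units, eps=1e-8):
    """Masked, confidence-weighted cross-entropy of Eq. (3).

    Y_tilde: (N, R) student part probabilities.
    units:   list of (Y_d, M_d) with Y_d: (N, R) teacher probabilities and
             M_d: (N,) 0/1 mask of the points covered by the knowledge unit.
    """
    loss = Y_tilde.new_zeros(())
    for Y_d, M_d in units:
        C_d = Y_d.max(dim=-1).values                                    # C_n^d
        Z_d = F.one_hot(Y_d.argmax(dim=-1), Y_tilde.shape[-1]).float()  # Z_{n,r}^d
        ce = -(Z_d * torch.log(Y_tilde + eps)).sum(dim=-1)              # per-point cross-entropy
        loss = loss + (M_d * C_d * ce).sum() / (M_d.sum() + eps)
    return loss
```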

As a distillation-based method, our method allows partial inconsistency among the extracted knowledge $\mathcal{K}=\{k_d\}_{d=1}^{D}$ caused by inaccurate VLM predictions, thereby alleviating issue 𝓘𝟐 of negative transfer. In our method, the teacher network works on 2D appearance features, while the student network extracts 3D geometric features. After distillation via Eq. 3, the student network can exploit both appearance and geometric features from multiple shapes, hence mitigating issue 𝓘𝟑 of cross-modal transfer. It is worth noting that unlike the conventional teacher-student models [14, 11, 13], which solely establish a one-to-one correspondence, we further re-score each knowledge unit $k_d$ based on its quality (Sec. 3.3) and improve distillation by suppressing low-quality knowledge units.

3.3 Backward distillation: 3D to 2D

In Eq. 3, we consider all knowledge units $\{k_d\}_{d=1}^{D}$, weighted by their confidence scores. However, due to potential VLM mispredictions, not all knowledge units are reliable. Hence, we refine the knowledge units by assigning higher scores to those of high quality and suppressing the low-quality ones. We observe that once the student network has thoroughly integrated the knowledge from the teacher, we can jointly refer to both the teacher knowledge and the integrated student knowledge $\tilde{Y}'$ to achieve this goal, by re-scoring the confidence score $C^d$ to $C_{bd}^d$ as:

$$C_{bd}^{d}=\frac{\left|M^d\big(\arg\max(Y^d)\Leftrightarrow\arg\max(\tilde{Y}')\big)\right|}{|M^d|},\qquad(4)$$

where ⇔ denotes the element-wise equality (comparison) operation. In this way, each knowledge unit $k_d$ is re-scored: those with high consensus between the teacher knowledge $\mathcal{K}$ and the integrated student knowledge $\tilde{Y}'$ receive higher scores, such as those on the chair legs shown in Fig. 3, while those with low consensus are suppressed by reduced scores, such as those on the chair arm (B-VLM) and back (P-VLM) in Fig. 3. Note that for simplicity, we only display two scores for each shape in Fig. 3 and show the average pixel-wise scores for the P-VLM. To verify that the student network has thoroughly integrated the teacher's knowledge, i.e., has moved from the initial knowledge $\tilde{Y}$ to the integrated knowledge $\tilde{Y}'$, we track the moving average of the loss value every epoch and check whether the value in a subsequent epoch is lower than a specified threshold τ. Afterward, the student network continues to learn with the re-scored knowledge $\mathcal{K}'$ by minimizing the loss in Eq. 3 with C replaced by $C_{bd}$, and produces the final part segmentation predictions $\tilde{Y}^f$.
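Eq. (4) amounts to counting, within each unit's mask, how often the teacher's and the student's winning parts agree; a sketch under the same tensor layout as the loss above:

```python
import torch

def rescore_units(Y_student, units):
    """Backward distillation: compute C_bd^d of Eq. (4) for every knowledge unit.

    Y_student: (N, R) integrated student predictions (Y-tilde-prime).
    units:     list of (Y_d, M_d) teacher knowledge units.
    """
    student_label = Y_student.argmax(dim=-1)                     # (N,)
    scores = []
    for Y_d, M_d in units:
        covered = M_d.bool()
        agree = (Y_d.argmax(dim=-1) == student_label) & covered  # element-wise equality
        scores.append(agree.sum().float() / covered.sum().clamp(min=1))
    return scores
```

In the full pipeline, this re-scoring is triggered once the tracked loss criterion is met, and the returned scores replace C in Eq. 3 for the remaining epochs.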


Figure 3: Given the VLM output of view v, $B_v$ or $S_v$, we display the confidence scores before (C) and after ($C_{bd}$) performing backward distillation via Eq. 4, with Y and M obtained via Eq. 2. With backward distillation, inaccurate VLM predictions have lower scores, such as the arm box in B-VLM with the score reduced from 0.7 to 0.1, and vice versa.

3.4 Test-time alignment

In general, our method performs the alignment with a shape collection before the student network is utilized to carry out 3D shape part segmentation. If such a pre-alignment is not preferred, we provide a special case of our method, test-time alignment (TTA), where the alignment is performed for every single shape at test time. To remain practical, TTA needs to achieve near-instantaneous completion. To that end, TTA employs an off-the-shelf 3D encoder, e.g., a pre-trained Point-M2AE [47], freezes its weights, and only updates the learnable parameters in the distillation head, which significantly speeds up the TTA completion.
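A possible way to set up such a frozen-encoder student for TTA is sketched below; `encoder` and `head` are assumed to be existing nn.Module instances (e.g., a pre-trained Point-M2AE and the distillation head), not names from the released code.

```python
import torch

def build_tta_optimizer(encoder, head, lr=1e-3):
    """Freeze the 3D encoder and optimize only the distillation head,
    which keeps per-shape test-time alignment fast."""
    for p in encoder.parameters():
        p.requires_grad_(False)
    encoder.eval()
    return torch.optim.Adam(head.parameters(), lr=lr)
```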

3.5 Implementation Details

The proposed framework is implemented in PyTorch [32] and optimized for 25 epochs with the Adam optimizer [19], using a learning rate of 0.001 and a batch size of 16. Unless otherwise specified, the student network employs Point-M2AE [47], pre-trained in a self-supervised way on the ShapeNet55 dataset [6], as the 3D encoder, freezes its weights, and only updates the learnable parameters in the distillation head. A multi-layer perceptron consisting of 4 layers with ReLU activations [2] is adopted as the distillation head. To fairly compare with the competing methods [48, 52, 24, 1], we follow their respective settings, including the text prompts and the 2D rendering. These methods render each shape into 10 multi-view images, either from a sparse point cloud [48, 52], a dense point cloud [24], or a mesh shape [1]. Lastly, we follow [18, 38] to specify a small threshold value, τ=0.01, in our backward distillation, and apply class-balance weighting [9] during the alignment, based on the VLM predictions in the zero-shot setting and on the additional few-shot labels in the few-shot setting.
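For illustration, a 4-layer MLP distillation head matching this description could look like the sketch below; the hidden width and the softmax output are assumptions, with `feat_dim` corresponding to the encoder feature size E and `num_parts` to R.

```python
import torch.nn as nn

class DistillationHead(nn.Module):
    """4-layer MLP with ReLU activations, mapping per-point features O (N, E)
    to part probabilities (N, R)."""
    def __init__(self, feat_dim, num_parts, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_parts),
        )

    def forward(self, point_feats):
        return self.mlp(point_feats).softmax(dim=-1)
```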

Table 1: Zero-shot segmentation on the ShapeNetPart dataset, reported in mIoU (%).*

| VLM | Data type | Method | Airplane | Bag | Cap | Chair | Earphone | Guitar | Knife | Laptop | Mug | Table | Overall |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CLIP [37] | point cloud | PointCLIP [48] | 22.0 | 44.8 | 13.4 | 18.7 | 28.3 | 22.7 | 24.8 | 22.9 | 48.6 | 45.4 | 31.0 |
| | | PointCLIPv2 [52] | 35.7 | 53.3 | 53.1 | 51.9 | 48.1 | 59.1 | 66.7 | 61.8 | 45.5 | 49.8 | 48.4 |
| | | OpenScene [34] | 34.4 | 63.8 | 56.1 | 59.8 | 62.6 | 69.3 | 70.1 | 65.4 | 51.0 | 60.4 | 52.9 |
| | | Ours (TTA) | 37.5 | 62.6 | 55.5 | 56.4 | 55.6 | 71.7 | 76.9 | 67.4 | 53.5 | 62.9 | 53.8 |
| | | Ours (Pre) | 40.6 | 75.6 | 67.2 | 65.0 | 66.3 | 85.8 | 79.8 | 92.6 | 83.1 | 68.7 | 63.9 |
| GLIP [22] | point cloud | Ours (TTA) | 57.3 | 62.7 | 56.2 | 74.2 | 45.8 | 60.6 | 78.5 | 85.7 | 82.5 | 62.9 | 54.7 |
| | | Ours (Pre) | 69.3 | 70.1 | 67.9 | 86.5 | 51.2 | 76.8 | 85.7 | 91.9 | 85.6 | 79.6 | 64.1 |
| | mesh | SATR [1] | 32.2 | 32.1 | 21.8 | 25.2 | 19.4 | 37.7 | 40.1 | 50.4 | 76.4 | 22.4 | 32.3 |
| | | Ours (TTA) | 53.2 | 61.8 | 44.9 | 66.4 | 43.0 | 50.7 | 66.3 | 68.3 | 83.9 | 58.8 | 49.5 |
| | | Ours (Pre) | 64.8 | 64.4 | 51.0 | 67.4 | 48.3 | 64.8 | 70.0 | 83.1 | 86.5 | 79.3 | 56.3 |

*Results for other categories, including those of Table 2 and Table 3, can be seen in the supplementary material.

Table 2: Zero-shot segmentation on the PartNetE dataset, reported in mIoU (%).

| VLM | Data type | Method | Bottle | Cart | Chair | Display | Kettle | Knife | Lamp | Oven | Suitcase | Table | Overall |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GLIP [22] | point cloud | PartSLIP [24] | 76.3 | 87.7 | 60.7 | 43.8 | 20.8 | 46.8 | 37.1 | 33.0 | 40.2 | 47.7 | 27.3 |
| | | Ours (TTA) | 77.4 | 88.5 | 74.1 | 50.5 | 24.2 | 59.2 | 58.8 | 34.2 | 43.2 | 50.2 | 39.9 |

4 Experiments

4.1 Dataset and evaluation metric

We evaluate the effectiveness of our method on two main benchmark datasets, ShapeNetPart [44] and PartNetE [24]. While the ShapeNetPart dataset contains 16 categories with a total of 31,963 shapes, PartNetE contains 2,266 shapes covering 45 categories. The mean intersection over union (mIoU) [30] is adopted to evaluate the segmentation results on the test-set data, measured against the ground-truth labels.
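For reference, a per-shape category mIoU can be computed as below; this is a sketch of the standard metric rather than the official evaluation script, and treating an empty union as IoU = 1 is a common convention for parts absent from both prediction and ground truth.

```python
import numpy as np

def shape_miou(pred, gt, num_parts):
    """Mean IoU over the semantic parts of one shape.

    pred, gt: (N,) integer part labels of the N points.
    """
    ious = []
    for r in range(num_parts):
        inter = np.logical_and(pred == r, gt == r).sum()
        union = np.logical_or(pred == r, gt == r).sum()
        ious.append(1.0 if union == 0 else inter / union)
    return float(np.mean(ious))
```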

4.2 Zero-shot segmentation

To compare with the competing methods [48, 52, 1, 24], we adopt each of their settings and report their mIoU performances from their respective papers. Specifically, for P-VLM, we follow PointCLIP [48] and PointCLIPv2 [52] to utilize CLIP [37] with a ViT-B/32 [10] backbone and use their pipeline to obtain the pixel-wise predictions from CLIP. For B-VLM, a GLIP-Large model [22] is employed in our method to compare with PartSLIP and SATR, which use the same model. While most competing methods report their performances on the ShapeNetPart dataset, PartSLIP evaluates its method on the PartNetE dataset. In addition, we compare with OpenScene [34] by extending it to 3D part segmentation, using the same Point-M2AE [47] backbone and the CLIP VLM for a fair comparison.

Accordingly, we carry out the comparison separately to ensure fairness, based on the employed VLM model and the shape data type, i.e., point cloud or mesh data, as shown in Tables 1 and 2. In Table 1, we provide two versions of our method, including test-time alignment (TTA) and pre-alignment (Pre) with a collection of shapes from the train-set data. Note that in the Pre version, our method does not use any labels (only unlabeled shape data are utilized).

First, we compare our method to PointCLIP and PointCLIPv2 (both utilize CLIP) on zero-shot segmentation for the ShapeNetPart dataset, as shown in the first part of Table 1. It is evident that our method, in both its TTA and pre-alignment versions, achieves substantial improvements in all categories. For the overall mIoU, calculated by averaging the mIoUs of all categories, our method attains 5.4% and 15.5% higher mIoU for the TTA and pre-alignment versions, respectively, compared to the best mIoU of the other methods. Such results reveal that our method, which simultaneously exploits appearance and geometric features, can better aggregate the 2D predictions for 3D part segmentation than directly averaging the corresponding 2D predictions as in the competing methods, where geometric evidence is not explored. We further compare with OpenScene [34] under the same setting as ours (Pre), and our method substantially outperforms it. One major reason is that our method can better handle the inconsistency of VLM predictions (issue 𝓘𝟐) through backward distillation.

Next, as shown in the last three rows of Table 1, we compare our method to SATR [1], which works on mesh shapes. To obtain the mesh face predictions, we propagate the point predictions via a nearest-neighbor approach as in [17], where each face takes the majority vote of its five nearest points (see the sketch below). Our method achieves 17.2% and 24% higher overall mIoU than SATR for the TTA and pre-alignment versions, respectively. Then, we compare our method with PartSLIP [24] in Table 2, wherein only TTA results are provided since the PartNetE dataset does not provide train-set data. One can see that our method consistently obtains better segmentations, with 12.6% higher overall mIoU than PartSLIP.
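The point-to-face propagation used for the mesh comparison can be sketched as a k-nearest-neighbor majority vote; the function below is an assumption of how such voting may be implemented, with k = 5 following the text.

```python
import numpy as np
from scipy.spatial import cKDTree

def propagate_to_faces(face_centers, points, point_labels, k=5):
    """Label each mesh face by the majority vote of its k nearest labeled points."""
    _, idx = cKDTree(points).query(face_centers, k=k)   # (F, k) neighbor indices
    votes = point_labels[idx]                            # (F, k) candidate labels
    return np.array([np.bincount(v).argmax() for v in votes])
```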

In PartSLIP and SATR, as GLIP is utilized, the uncovered 3D regions (issue 𝓘𝟏) can be intensified by possible undetected areas, and the negative transfer (issue 𝓘𝟐) may also be escalated by semantic leaking, where the box predictions cover pixels from other semantics. Our method better alleviates these issues and thereby achieves substantially higher mIoU scores. Among our variants, the pre-alignment version achieves better segmentation results than TTA. This is expected, since in the pre-alignment version the student network can distill the knowledge from a collection of shapes rather than from an individual shape.


Figure 4: Visualization of the zero-shot segmentation results, drawn in different colors, on the ShapeNetPart dataset. We render PartSLIP results on the ShapeNetPart data to have the same visualization of shape inputs. While occluded and undetected regions (issue 𝓘𝟏) are shown with black and gray colors, respectively, the blue and red arrows highlight several cases of issues 𝓘𝟐 and 𝓘𝟑.

Besides the foregoing quantitative comparisons, a qualitative comparison of the segmentation results is presented in Fig. 4. It is readily observed that the competing methods suffer from the lack of 3D segmentation for the uncovered regions (issue 𝓘𝟏) caused by either occlusion or not being covered by any bounding box, drawn in black and gray, respectively. Moreover, these methods may also encounter negative transfer caused by inaccurate VLM outputs (issue 𝓘𝟐), such as the cases pointed to by the blue arrows, with notably degraded outcomes in SATR due to semantic leaking. Our method performs cross-modal distillation and alleviates these two issues, as can be seen in Fig. 4. In addition, because the competing methods directly transfer 2D predictions to the 3D space and rely on each shape independently, erroneous 2D predictions simply remain as incorrect 3D segmentations (issue 𝓘𝟑), such as the undetected chair arms and guitar heads pointed to by the red arrows. Our method also addresses this issue by exploiting geometrical features across multiple shapes.

Table 3: Few-shot segmentation on the PartNetE dataset, reported in mIoU (%).

| Type | Method | Bottle | Cart | Chair | Display | Kettle | Knife | Lamp | Oven | Suitcase | Table | Overall |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Non-VLM-based | PointNet++ [35] | 27.0 | 11.6 | 42.2 | 30.2 | 28.6 | 22.2 | 10.5 | 19.4 | 3.3 | 7.3 | 20.4 |
| | PointNext [36] | 67.6 | 47.7 | 65.1 | 53.7 | 60.6 | 59.7 | 55.4 | 36.8 | 14.5 | 22.1 | 40.6 |
| | ACD [12] | 22.4 | 31.5 | 39.0 | 29.2 | 40.2 | 39.6 | 13.7 | 8.9 | 13.2 | 13.5 | 23.2 |
| | Prototype [51] | 60.1 | 36.8 | 70.8 | 67.3 | 62.7 | 50.4 | 38.2 | 36.5 | 35.5 | 25.7 | 44.3 |
| | Point-M2AE [47] | 72.4 | 74.5 | 83.4 | 74.3 | 64.3 | 68.0 | 57.6 | 53.3 | 57.5 | 33.6 | 56.4 |
| VLM-based (GLIP [22]) | PartSLIP [24] | 83.4 | 88.1 | 85.3 | 84.8 | 77.0 | 65.2 | 60.0 | 73.5 | 70.4 | 42.4 | 59.4 |
| | Ours | 84.6 | 90.1 | 88.4 | 87.4 | 78.6 | 71.4 | 69.2 | 72.8 | 73.4 | 63.3 | 65.9 |

4.3 Few-shot segmentation

We further demonstrate the effectiveness of our method in a few-shot scenario by following the setting used in PartSLIP [24]. Specifically, we employ the fine-tuned GLIP model [22] provided by PartSLIP, obtained with 8-shot labeled shapes of the PartNetE dataset [24] for each category. In addition to the alignment via Eq. 3, the student network learns parameters that minimize both Eq. 3 and a standard cross-entropy segmentation loss on the 8 labeled shapes, as sketched below.
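A sketch of the resulting few-shot objective is given below; it reuses the `distillation_loss` sketch from Sec. 3.2, and the equal weighting of the two terms is an assumption.

```python
import torch
import torch.nn.functional as F

def few_shot_loss(Y_tilde, units, Y_labeled, gt_labels, alpha=1.0):
    """Distillation loss (Eq. 3) on unlabeled shapes plus cross-entropy on the
    few labeled shapes; `alpha` balances the two terms (assumed to be 1)."""
    l_distill = distillation_loss(Y_tilde, units)            # sketch from Sec. 3.2
    l_ce = F.nll_loss(torch.log(Y_labeled + 1e-8), gt_labels)
    return l_distill + alpha * l_ce
```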

As shown in Table 3, the methods dedicated to few-shot 3D segmentation, ACD [12] and Prototype [51], are adapted to the PointNet++ [35] and PointNext [36] backbones, respectively, and improve the average performance of these backbones. PartSLIP, on the other hand, leverages multi-view GLIP predictions for 3D segmentation and further improves the mIoU, but there are still substantial performance gaps compared to our method, which distills the GLIP predictions instead. We also present the results of fine-tuning Point-M2AE with the few-shot labels, which are lower than ours, highlighting the significant contribution of our distillation framework. For more qualitative results, see the supplementary material.

4.4 Leveraging generated data

Since our method requires only unlabeled 3D shape data to perform cross-modal distillation, existing generative models [31, 33] can facilitate effortless creation of 3D shapes, and the generated data can be smoothly incorporated into our method. Specifically, we first adopt DiT-3D [31], pre-trained on the ShapeNet55 dataset [6], to generate point clouds of shapes, 500 shapes for each category, and then employ SAP [33] to transform the generated point clouds into mesh shapes. These generated mesh shapes can then be utilized in our method for distillation. Table 4 shows the results evaluated on the test-set data of the ShapeNetPart [44] and COSEG [42] datasets for several shape categories, using GLIP as the VLM.

One can see that by distilling from the generated data alone, our method already achieves results on the ShapeNetPart dataset that are competitive with distilling from the train-set data. Since DiT-3D, which generates the data, is pre-trained on the ShapeNet55 dataset, which contains the ShapeNetPart data, we also evaluate on the COSEG dataset to show that such results transfer well to shapes from another dataset. Finally, Table 4 (the last row) reveals that using generated data as a supplementary knowledge source can further increase the mIoU performance. Such results suggest that if a collection of shapes is available, generated data can be employed as a supplementary knowledge source to improve performance. On the other hand, if a collection of shapes does not exist, generative models can be employed for shape creation, and the created shapes can subsequently be used in our method as the knowledge source.

4.5 Ablation studies

Proposed components.

We perform ablation studies on the proposed components; the mIoU scores in 2D (calculated between the VLM predictions and their corresponding 2D ground truths projected from 3D, weighted by the confidence scores; see the supplementary material for details) and in 3D on three categories of the ShapeNetPart dataset are shown in rows (1) to (9) of Table 5. In (1), only GLIP box predictions are utilized to obtain 3D segmentations, i.e., part labels are assigned by voting among all visible points within the multi-view box predictions. These numbers serve as baselines and are subject to issues 𝓘𝟏∼𝓘𝟑. In (2) and (3), 3D segmentations are obtained via forward distillation from the GLIP predictions to the student network using Eq. 3, for the test-time alignment (TTA) and pre-alignment (Pre) versions, resulting in significant improvements over the baselines of more than 10% and 14% mIoU, respectively. Such results demonstrate that the proposed cross-modal distillation can better utilize the 2D multi-view predictions for 3D part segmentation, alleviating 𝓘𝟏∼𝓘𝟑.

Table 4: Segmentation mIoU (%) by leveraging generated data.

| Distilled data | Airplane (ShapeNetPart [44]) | Chair (ShapeNetPart [44]) | Guitar (ShapeNetPart [44]) | Chair (COSEG [42]) | Guitar (COSEG [42]) |
|---|---|---|---|---|---|
| Train-set (baseline) | 69.3 | 86.2 | 76.8 | 96.4 | 68.0 |
| Gen. data | 69.0 | 85.3 | 75.6 | 96.1 | 67.5 |
| Gen. data & train-set | 70.8 | 88.4 | 78.3 | 97.4 | 70.2 |

Table 5: Ablation study on the proposed method.

| No | VLM | Pre | BD | Student network | Airplane 2D | Airplane 3D | Chair 2D | Chair 3D | Knife 2D | Knife 3D |
|---|---|---|---|---|---|---|---|---|---|---|
| (1) | GLIP [22] | | | | 42.8 | 40.2 | 60.2 | 60.1 | 53.6 | 57.2 |
| (2) | GLIP [22] | | | ✓ | 42.8 | 56.2 | 60.2 | 73.5 | 53.6 | 77.6 |
| (3) | GLIP [22] | ✓ | | ✓ | 42.8 | 64.3 | 60.2 | 84.2 | 53.6 | 84.5 |
| (4) | GLIP [22] | | ✓ | ✓ | 44.3 | 57.3 | 61.7 | 74.2 | 54.8 | 78.5 |
| (5) | GLIP [22] | ✓ | ✓ | ✓ | 48.2 | 69.3 | 63.2 | 86.5 | 55.0 | 85.7 |
| (6) | GLIP [22] | ✓ | ✓ | ✓ (exclude 𝓘𝟏) | 48.2 | 62.5 | 63.2 | 80.4 | 55.0 | 81.2 |
| (7) | GLIP [22] | ✓ | ✓ | ✓ (w/o pretrain) | 48.2 | 69.1 | 63.2 | 86.7 | 55.0 | 85.3 |
| (8) | CLIP [37] | | ✓ | ✓ | 34.6 | 38.4 | 50.4 | 63.6 | 66.8 | 77.4 |
| (9) | CLIP [37] | ✓ | ✓ | ✓ | 37.8 | 40.6 | 54.2 | 65.0 | 68.4 | 78.9 |

We further add backward distillation (BD) in (4) and (5), which substantially improves the knowledge source in 2D, e.g., from 42.8% to 48.2% for the airplane category in the pre-alignment version, and subsequently enhances the 3D segmentation. We observe a higher impact on the pre-alignment version than on the TTA version, i.e., comparing (5) with (4), as the student network of the former can better integrate the knowledge from a collection of shapes. A similar trend of improvement can be observed in the corresponding ablation with CLIP [37] as the VLM, shown in (8) and (9).

In (6), we exclude our method's predictions for the uncovered points to simulate issue 𝓘𝟏; the reduced mIoUs compared to (5), e.g., from 86.5% to 80.4% for the chair category, reveal that our method effectively alleviates issue 𝓘𝟏. Finally, instead of using the pre-trained weights of Point-M2AE [47] and freezing them as the 3D encoder as in (5), we initialize these weights (with the default PyTorch [32] initialization) and make them learnable in (7). Both settings produce comparable results (within 0.4%). The main purpose of using the pre-trained weights and freezing them is faster convergence, which is especially important for test-time alignment. Please refer to the supplementary material for the comparison of convergence curves.

Number of views.

We render V=10 multi-view images for each shape input in our main experiment, and Fig. 5 (left) shows the mIoU scores with different values of V. A substantial drop is observed when utilizing V<6, and small increases are obtained when a larger V is used.

Various shape types for 2D multi-view rendering.

We render 10 multi-view images from various shape data types, i.e., (i) gray mesh, (ii) colored mesh, (iii) dense colored point cloud (∼300k points) as used in PartSLIP [24], and (iv) sparse gray point cloud (2,048 points), using PyTorch3D [16] to render (i)-(iii) and the rendering method in [52] for (iv). Fig. 5 (right) summarizes the results on the ShapeNetPart dataset, with GLIP used as the VLM. Note that the first three shape types produce comparable mIoUs, with slightly higher scores when the colored mesh or the dense colored point cloud is utilized. When the sparse gray point cloud is used, a mild mIoU decrease is observed. Please refer to the supplementary material for more results on (i)-(iv).


Figure 5: Ablation study on the number of views and on various shape types for 2D multi-view rendering, on the ShapeNetPart dataset.

Limitation.

The main limitation of our method is that the segmentation results depend on the quality of the VLM predictions, as VLMs are generally pre-trained to recognize object- or sample-level categories rather than part-level categories. For instance, GLIP can satisfactorily locate part semantics for the chair category but with lower quality for the earphone category, while CLIP favorably locates part semantics for the earphone category but gives less favorable results for the airplane category. Hence, exploiting multiple VLMs is a potential direction for future work. Nonetheless, the proposed method, which currently employs a single VLM, already boosts the segmentation results significantly compared to the existing methods.

5 Conclusion

We present a cross-modal distillation framework that transfers 2D knowledge from a vision-language model (VLM) to facilitate 3D shape part segmentation, and which generalizes well to both VLMs with bounding-box predictions and VLMs with pixel-wise predictions. In the proposed method, backward distillation is introduced to enhance the quality of the 2D predictions and subsequently improve the 3D segmentation. The proposed approach can also leverage existing generative models for shape creation, and the generated shapes can be smoothly incorporated as knowledge sources for distillation. In extensive experiments on widely used benchmark datasets, including ShapeNetPart and PartNetE, the proposed method consistently outperforms existing methods by substantial margins in both zero-shot and few-shot scenarios, on 3D data given as point clouds or mesh shapes.

Acknowledgment.

This work was supported in part by the National Science and Technology Council (NSTC) under grants 112-2221-E-A49-090-MY3, 111-2628-E-A49-025-MY3, 112-2634-F-006-002 and 112-2634-F-A49-007. This work was funded in part by MediaTek and NVIDIA.


Supplementary Material

Table 6: Zero-shot segmentation on all 16 categories of the ShapeNetPart dataset [44], reported in mIoU (%). In this table, TTA and Pre denote the test-time alignment and pre-alignment versions of our method, while VLM stands for vision-language model (see the main paper for details).

| Category | PointCLIP [48] (CLIP, point cloud) | PointCLIPv2 [52] (CLIP, point cloud) | Ours TTA (CLIP, point cloud) | Ours Pre (CLIP, point cloud) | Ours TTA (GLIP, point cloud) | Ours Pre (GLIP, point cloud) | SATR [1] (GLIP, mesh) | Ours TTA (GLIP, mesh) | Ours Pre (GLIP, mesh) |
|---|---|---|---|---|---|---|---|---|---|
| Airplane | 22.0 | 35.7 | 37.5 | 40.6 | 57.3 | 69.3 | 32.2 | 53.2 | 64.8 |
| Bag | 44.8 | 53.3 | 62.6 | 75.6 | 62.7 | 70.1 | 32.1 | 61.8 | 64.4 |
| Cap | 13.4 | 53.1 | 55.5 | 67.2 | 56.2 | 67.9 | 21.8 | 44.9 | 51.0 |
| Car | 30.4 | 34.5 | 36.4 | 41.2 | 32.4 | 39.2 | 22.3 | 30.2 | 32.3 |
| Chair | 18.7 | 51.9 | 56.4 | 65.0 | 74.2 | 86.5 | 25.2 | 66.4 | 67.4 |
| Earphone | 28.3 | 48.1 | 55.6 | 66.3 | 45.8 | 51.2 | 19.4 | 43.0 | 48.3 |
| Guitar | 22.7 | 59.1 | 71.7 | 85.8 | 60.6 | 76.8 | 37.7 | 50.7 | 64.8 |
| Knife | 24.8 | 66.7 | 76.9 | 79.8 | 78.5 | 85.7 | 40.1 | 66.3 | 70.0 |
| Lamp | 39.6 | 44.7 | 45.8 | 63.1 | 34.5 | 43.5 | 21.6 | 30.5 | 35.2 |
| Laptop | 22.9 | 61.8 | 67.4 | 92.6 | 85.7 | 91.9 | 50.4 | 68.3 | 83.1 |
| Motorbike | 26.3 | 31.4 | 33.4 | 38.2 | 30.6 | 37.8 | 25.4 | 28.8 | 32.5 |
| Mug | 48.6 | 45.5 | 53.5 | 83.1 | 82.5 | 85.6 | 76.4 | 83.9 | 86.5 |
| Pistol | 42.6 | 46.1 | 48.2 | 55.8 | 39.6 | 48.5 | 34.1 | 37.4 | 40.9 |
| Rocket | 22.7 | 46.7 | 49.3 | 49.5 | 36.8 | 48.9 | 33.2 | 41.1 | 45.3 |
| Skateboard | 42.7 | 45.8 | 47.7 | 49.2 | 34.2 | 43.5 | 22.3 | 26.2 | 34.5 |
| Table | 45.4 | 49.8 | 62.9 | 68.7 | 62.9 | 79.6 | 22.4 | 58.8 | 79.3 |
| Overall | 31.0 | 48.4 | 53.8 | 63.9 | 54.7 | 64.1 | 32.3 | 49.5 | 56.3 |

Table 7: Segmentation on all 45 categories of the PartNetE dataset [24], reported in mIoU (%). In this table, TTA denotes our method with test-time alignment (see the main paper for details).

| Category | PartSLIP [24] (zero-shot) | Ours TTA (zero-shot) | PartSLIP [24] (few-shot) | Ours (few-shot) |
|---|---|---|---|---|
| Bottle | 76.3 | 77.4 | 83.4 | 84.6 |
| Box | 57.5 | 69.7 | 84.5 | 87.9 |
| Bucket | 2.0 | 16.8 | 36.5 | 50.7 |
| Camera | 21.4 | 29.4 | 58.3 | 60.1 |
| Cart | 87.7 | 88.5 | 88.1 | 90.1 |
| Chair | 60.7 | 74.1 | 85.3 | 88.4 |
| Clock | 26.7 | 23.6 | 37.6 | 37.2 |
| Coffee machine | 25.4 | 26.8 | 37.8 | 40.2 |
| Dishwasher | 10.3 | 18.6 | 62.5 | 60.2 |
| Dispenser | 16.5 | 11.4 | 73.8 | 74.7 |
| Display | 43.8 | 50.5 | 84.8 | 87.4 |
| Door | 2.7 | 41.1 | 40.8 | 55.5 |
| Eyeglasses | 1.8 | 59.7 | 88.3 | 91.1 |
| Faucet | 6.8 | 33.3 | 71.4 | 73.5 |
| Folding chair | 91.7 | 89.7 | 86.3 | 90.7 |
| Globe | 34.8 | 90.0 | 95.7 | 97.4 |
| Kettle | 20.8 | 24.2 | 77.0 | 78.6 |
| Keyboard | 37.3 | 38.5 | 53.6 | 70.8 |
| Kitchenpot | 4.7 | 36.8 | 69.6 | 69.7 |
| Knife | 46.8 | 59.2 | 65.2 | 71.4 |
| Lamp | 37.1 | 58.8 | 66.1 | 69.2 |
| Laptop | 27.0 | 37.1 | 29.7 | 40.0 |
| Lighter | 35.4 | 37.3 | 64.7 | 64.9 |
| Microwave | 16.6 | 23.2 | 42.7 | 43.8 |
| Mouse | 27.0 | 18.6 | 44.0 | 46.9 |
| Oven | 33.0 | 34.2 | 73.5 | 72.8 |
| Pen | 14.6 | 15.7 | 71.5 | 74.4 |
| Phone | 36.1 | 37.3 | 48.4 | 50.8 |
| Pliers | 5.4 | 51.9 | 33.2 | 90.4 |
| Printer | 0.8 | 3.3 | 4.3 | 6.3 |
| Refrigerator | 20.2 | 25.2 | 55.8 | 58.1 |
| Remote | 11.5 | 13.2 | 38.3 | 40.7 |
| Safe | 22.4 | 18.2 | 32.2 | 58.6 |
| Scissors | 21.8 | 64.4 | 60.3 | 68.8 |
| Stapler | 20.9 | 65.1 | 84.8 | 86.3 |
| Storage furniture | 29.5 | 30.6 | 53.6 | 56.5 |
| Suitcase | 40.2 | 43.2 | 70.4 | 73.4 |
| Switch | 9.5 | 30.3 | 59.4 | 60.7 |
| Table | 47.7 | 50.2 | 42.5 | 63.3 |
| Toaster | 13.8 | 11.4 | 60.0 | 58.7 |
| Toilet | 20.6 | 22.5 | 53.8 | 55.0 |
| Trash can | 30.1 | 49.3 | 22.3 | 70.0 |
| Usb | 10.9 | 39.1 | 54.4 | 64.3 |
| Washing machine | 12.5 | 12.9 | 53.5 | 55.1 |
| Window | 5.2 | 45.3 | 75.4 | 78.1 |
| Overall | 27.3 | 39.9 | 59.4 | 65.9 |

6 3D segmentation scores for full categories

We provide 3D segmentation scores, reported in mIoU, for all categories of the ShapeNetPart [44] and PartNetE [24] datasets in Tables 6 and 7, respectively. Table 6 is associated with Table 1 in the main paper, while Table 7 is associated with Tables 2 and 3. Table 6 covers the 16 categories of the ShapeNetPart dataset, and Table 7 presents the 45 categories of the PartNetE dataset. From the tables, it is readily observed that the proposed method, PartDistill, attains substantial improvements over the competing methods [48, 52, 1, 24] in most categories.

7 Evaluating 2D predictions

In the ablation studies of our method's components presented in Table 5, we provide mIoU scores in 2D space, $\mathrm{mIoU}_{2D}$, to evaluate the quality of the 2D predictions, measured against the 2D ground truths, before and after performing backward distillation, which re-scores the confidence score of each knowledge unit. Here, the 2D ground truths are obtained by projecting the 3D mesh (face) part segmentation labels to the 2D space using the camera parameters utilized for the 2D multi-view rendering.

We first explain how to calculate $\mathrm{mIoU}_{2D}$ when a vision-language model (VLM) that outputs pixel-wise predictions (P-VLM) is used in our method, and then explain the case of a VLM that outputs bounding-box predictions (B-VLM). In each view, let $\{s_i\}_{i=1}^{\rho}$ be the prediction maps (see Eq. 1 in the main paper) of the P-VLM, with $C_i$ denoting the confidence score of $s_i$, and let $\mathcal{G}$ be the corresponding 2D ground truth. We first calculate the $\mathrm{IoU}_{2D}$ for each semantic part r as

$$\mathrm{IoU}_{2D}(r)=\frac{\mathcal{I}(r)}{\mathcal{I}(r)+\lambda(r)+\gamma(r)},\qquad(5)$$

where

$$\mathcal{I}(r)=\sum_{i\in\phi(r)}\mathrm{Avg}(C_i)\,(s_i\cap\mathcal{G}_r),\qquad(6)$$
$$\lambda(r)=\mathrm{Avg}(C_{\phi(r)})\,\Big(\big(\bigcup_{i\in\phi(r)}s_i\big)\notin\mathcal{G}_r\Big),\qquad(7)$$

and

$$\gamma(r)=\mathcal{G}_r\notin\bigcup_{i\in\phi(r)}s_i,\qquad(8)$$

with $\phi(r)$ denoting a function that returns the indices of $\{s_i\}_{i=1}^{\rho}$ predicting part r, "Avg" denoting the averaging operation, and $\mathcal{G}_r$ indicating the ground truth of part r.

While Eq. 6 represents the intersection of pixels between the 2D predictions and the corresponding ground truths, weighted by their confidence scores, Eq. 7 gives the union of the 2D prediction pixels that do not intersect with the corresponding ground truths, weighted by the average of all confidence scores associated with part r. Eq. 8 gives the ground-truth pixels that do not intersect with the union of the corresponding 2D predictions. We then calculate the $\mathrm{IoU}_{2D}$ score for each semantic part r in every view v and compute their mean as $\mathrm{mIoU}_{2D}$.

Note that we involve the confidence scores as weights to calculate $\mathrm{mIoU}_{2D}$. This allows us to compare the quality of the 2D predictions before and after applying backward distillation, using the confidence scores before and after this process. To compute the $\mathrm{mIoU}_{2D}$ scores when a B-VLM is used in our method, we can use Eq. 5 with $s_i$ in Eq. 6 ∼ Eq. 8 replaced by $\mathcal{F}(b_i)$, where $\mathcal{F}$ denotes an operation that excludes the background pixels covered by $b_i$.
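Putting Eqs. (5)-(8) together, a confidence-weighted IoU for one part in one view could be computed as in the sketch below (an illustration of the metric, not the evaluation code used in the paper).

```python
import numpy as np

def iou2d_for_part(pred_masks, confidences, gt_mask, eps=1e-8):
    """Confidence-weighted IoU_2D of Eqs. (5)-(8) for one part r in one view.

    pred_masks:  list of (H, W) boolean maps s_i predicting part r (i in phi(r)).
    confidences: list of scalar confidence scores C_i, one per prediction map.
    gt_mask:     (H, W) boolean ground-truth mask G_r.
    """
    if len(pred_masks) == 0:
        return 0.0
    union_pred = np.any(np.stack(pred_masks), axis=0)
    # Eq. (6): confidence-weighted intersection with the ground truth.
    inter = sum(c * np.logical_and(s, gt_mask).sum()
                for s, c in zip(pred_masks, confidences))
    # Eq. (7): predicted pixels outside the ground truth, weighted by the mean confidence.
    false_pos = np.mean(confidences) * np.logical_and(union_pred, ~gt_mask).sum()
    # Eq. (8): ground-truth pixels missed by every prediction.
    false_neg = np.logical_and(gt_mask, ~union_pred).sum()
    return float(inter / (inter + false_pos + false_neg + eps))
```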

8 Additional visualizations

8.1 Visualization of few-shot segmentation


Figure 6: Visualization of few-shot segmentation results obtained with our method on the PartNetE dataset [24]. Each semantic part is drawn in a different color.

In Figure 6, we present several visualizations of few-shot segmentation results obtained with our method, associated with Table 3 in the main paper. Following the prior work [24], 8-shot labeled shapes are utilized for the few-shot segmentation. From the figure, it is evident that our method achieves satisfactory segmentation results.

8.2 Convergence curves

In the ablation studies presented in Table 5, we compare two approaches for the 3D encoder of our student network. First, we employ a pre-trained Point-M2AE [47] backbone, freeze its weights, and only update the learnable parameters in the student network's distillation head. Second, we utilize a Point-M2AE backbone whose weights are initialized with the default PyTorch [32] initialization and set to be learnable, together with the parameters in the distillation head. From the table, we observe comparable results between the two settings (see rows (5) and (7) for the first and second approaches, respectively).

We then visualize the convergence curves of both settings, as depicted in Figure 7. From the figure, it can be seen that the loss of the first approach converges significantly faster than that of the second approach. As a result, the first approach also starts to perform backward distillation at a substantially earlier epoch than the second one.


Figure 7: Convergence curves of our method's losses over the optimization epochs. While the first approach employs a pre-trained Point-M2AE [47] model and freezes its weights, the second approach initializes the Point-M2AE weights from scratch and sets them to be learnable.

8.3 2D rendering from various shape types

We present several 2D renderings from various shape types, including (i) gray mesh, (ii) colored mesh, (iii) dense colored point cloud, and (iv) sparse gray point cloud, in Figure 8. While PartSLIP [24] renders its multi-view images using type (iii), SATR [1] uses type (i). PointCLIP [48] and PointCLIPv2 [52] use type (iv) to render their multi-view images.

Figure 8: Examples of 2D multi-view renderings from shape types (i)-(iv).
