[AR] Using the Depth API for Real-World Occlusion of Virtual Objects

The occlusion effect

The following description is adapted from https://developers.google.cn/ar/develop/depth

Occlusion is one of the main applications of the Depth API.
Occlusion, that is, accurately rendering a virtual object behind real-world objects, is essential to an immersive AR experience.
Consider the figure below: a virtual Andy that the user may want to place in a scene containing a trunk beside a door. Rendered without occlusion, Andy unrealistically overlaps the edge of the trunk. If you use the scene's depth to understand how far virtual Andy is from surroundings such as the trunk, you can render his occlusion accurately: per pixel, wherever the real scene is closer to the camera than the virtual object, the virtual fragment is hidden, which makes Andy look much more realistic in his surroundings.

(Figure from https://developers.google.cn/ar/develop/depth)

Using ARFoundation

ARFoundation ships an AROcclusionManager script. Attach it to the AR Camera object under AR Session Origin; once the scene starts, depth-based occlusion is enabled automatically (provided the depth mode is not set to Disabled).

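The same setup can also be done from code. The sketch below is a minimal example, assuming ARFoundation 4.x; the class name OcclusionSetup is ours, and the properties used are the ones from the script listed next:

using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Hypothetical helper: attach to the AR Camera under AR Session Origin.
public class OcclusionSetup : MonoBehaviour
{
    void Start()
    {
        var occlusion = gameObject.AddComponent<AROcclusionManager>();
        // Request environment depth; if the device does not support it,
        // the manager simply keeps the serialized setting and does nothing.
        occlusion.requestedEnvironmentDepthMode = EnvironmentDepthMode.Fastest;
        occlusion.requestedOcclusionPreferenceMode = OcclusionPreferenceMode.PreferEnvironmentOcclusion;
    }
}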

The AROcclusionManager script is as follows:

using System;
using System.Collections.Generic;
using Unity.Collections;
using UnityEngine.Serialization;
using UnityEngine.XR.ARSubsystems;
using UnityEngine.Rendering;

namespace UnityEngine.XR.ARFoundation
{
    /// <summary>
    /// The manager for the occlusion subsystem.
    /// </summary>
    [DisallowMultipleComponent]
    [DefaultExecutionOrder(ARUpdateOrder.k_OcclusionManager)]
    [HelpURL(HelpUrls.ApiWithNamespace + nameof(AROcclusionManager) + ".html")]
    public sealed class AROcclusionManager :
        SubsystemLifecycleManager<XROcclusionSubsystem, XROcclusionSubsystemDescriptor, XROcclusionSubsystem.Provider>
    {
        /// <summary>
        /// The list of occlusion texture infos.
        /// </summary>
        /// <value>
        /// The list of occlusion texture infos.
        /// </value>
        readonly List<ARTextureInfo> m_TextureInfos = new List<ARTextureInfo>();

        /// <summary>
        /// The list of occlusion textures.
        /// </summary>
        /// <value>
        /// The list of occlusion textures.
        /// </value>
        readonly List<Texture2D> m_Textures = new List<Texture2D>();

        /// <summary>
        /// The list of occlusion texture property IDs.
        /// </summary>
        /// <value>
        /// The list of occlusion texture property IDs.
        /// </value>
        readonly List<int> m_TexturePropertyIds = new List<int>();

        /// <summary>
        /// The human stencil texture info.
        /// </summary>
        /// <value>
        /// The human stencil texture info.
        /// </value>
        ARTextureInfo m_HumanStencilTextureInfo;

        /// <summary>
        /// The human depth texture info.
        /// </summary>
        /// <value>
        /// The human depth texture info.
        /// </value>
        ARTextureInfo m_HumanDepthTextureInfo;

        /// <summary>
        /// The environment depth texture info.
        /// </summary>
        /// <value>
        /// The environment depth texture info.
        /// </value>
        ARTextureInfo m_EnvironmentDepthTextureInfo;

        /// <summary>
        /// The environment depth confidence texture info.
        /// </summary>
        /// <value>
        /// The environment depth confidence texture info.
        /// </value>
        ARTextureInfo m_EnvironmentDepthConfidenceTextureInfo;

        /// <summary>
        /// An event which fires each time an occlusion camera frame is received.
        /// </summary>
        public event Action<AROcclusionFrameEventArgs> frameReceived;

        /// <summary>
        /// The mode for generating the human segmentation stencil texture.
        /// This method is obsolete.
        /// Use <see cref="requestedHumanStencilMode"/>
        /// or  <see cref="currentHumanStencilMode"/> instead.
        /// </summary>
        [Obsolete("Use requestedSegmentationStencilMode or currentSegmentationStencilMode instead. (2020-01-14)")]
        public HumanSegmentationStencilMode humanSegmentationStencilMode
        {
            get => m_HumanSegmentationStencilMode;
            set => requestedHumanStencilMode = value;
        }

        /// <summary>
        /// The requested mode for generating the human segmentation stencil texture.
        /// </summary>
        public HumanSegmentationStencilMode requestedHumanStencilMode
        {
            get => subsystem?.requestedHumanStencilMode ?? m_HumanSegmentationStencilMode;
            set
            {
                m_HumanSegmentationStencilMode = value;
                if (enabled && descriptor?.humanSegmentationStencilImageSupported == Supported.Supported)
                {
                    subsystem.requestedHumanStencilMode = value;
                }
            }
        }

        /// <summary>
        /// Get the current mode in use for generating the human segmentation stencil mode.
        /// </summary>
        public HumanSegmentationStencilMode currentHumanStencilMode => subsystem?.currentHumanStencilMode ?? HumanSegmentationStencilMode.Disabled;

        [SerializeField]
        [Tooltip("The mode for generating human segmentation stencil texture.\n\n"
                 + "Disabled -- No human stencil texture produced.\n"
                 + "Fastest -- Minimal rendering quality. Minimal frame computation.\n"
                 + "Medium -- Medium rendering quality. Medium frame computation.\n"
                 + "Best -- Best rendering quality. Increased frame computation.")]
        HumanSegmentationStencilMode m_HumanSegmentationStencilMode = HumanSegmentationStencilMode.Disabled;

        /// <summary>
        /// The mode for generating the human segmentation depth texture.
        /// This method is obsolete.
        /// Use <see cref="requestedHumanDepthMode"/>
        /// or  <see cref="currentHumanDepthMode"/> instead.
        /// </summary>
        [Obsolete("Use requestedSegmentationDepthMode or currentSegmentationDepthMode instead. (2020-01-15)")]
        public HumanSegmentationDepthMode humanSegmentationDepthMode
        {
            get => m_HumanSegmentationDepthMode;
            set => requestedHumanDepthMode = value;
        }

        /// <summary>
        /// Get or set the requested human segmentation depth mode.
        /// </summary>
        public HumanSegmentationDepthMode requestedHumanDepthMode
        {
            get => subsystem?.requestedHumanDepthMode ?? m_HumanSegmentationDepthMode;
            set
            {
                m_HumanSegmentationDepthMode = value;
                if (enabled && descriptor?.humanSegmentationDepthImageSupported == Supported.Supported)
                {
                    subsystem.requestedHumanDepthMode = value;
                }
            }
        }

        /// <summary>
        /// Get the current human segmentation depth mode in use by the subsystem.
        /// </summary>
        public HumanSegmentationDepthMode currentHumanDepthMode => subsystem?.currentHumanDepthMode ?? HumanSegmentationDepthMode.Disabled;

        [SerializeField]
        [Tooltip("The mode for generating human segmentation depth texture.\n\n"
                 + "Disabled -- No human depth texture produced.\n"
                 + "Fastest -- Minimal rendering quality. Minimal frame computation.\n"
                 + "Best -- Best rendering quality. Increased frame computation.")]
        HumanSegmentationDepthMode m_HumanSegmentationDepthMode = HumanSegmentationDepthMode.Disabled;

        /// <summary>
        /// Get or set the requested environment depth mode.
        /// </summary>
        public EnvironmentDepthMode requestedEnvironmentDepthMode
        {
            get => subsystem?.requestedEnvironmentDepthMode ?? m_EnvironmentDepthMode;
            set
            {
                m_EnvironmentDepthMode = value;
                if (enabled && descriptor?.environmentDepthImageSupported == Supported.Supported)
                {
                    subsystem.requestedEnvironmentDepthMode = value;
                }
            }
        }

        /// <summary>
        /// Get the current environment depth mode in use by the subsystem.
        /// </summary>
        public EnvironmentDepthMode currentEnvironmentDepthMode => subsystem?.currentEnvironmentDepthMode ?? EnvironmentDepthMode.Disabled;

        [SerializeField]
        [Tooltip("The mode for generating the environment depth texture.\n\n"
                 + "Disabled -- No environment depth texture produced.\n"
                 + "Fastest -- Minimal rendering quality. Minimal frame computation.\n"
                 + "Medium -- Medium rendering quality. Medium frame computation.\n"
                 + "Best -- Best rendering quality. Increased frame computation.")]
        EnvironmentDepthMode m_EnvironmentDepthMode = EnvironmentDepthMode.Fastest;

        [SerializeField]
        bool m_EnvironmentDepthTemporalSmoothing = true;

        /// <summary>
        /// Whether temporal smoothing should be applied to the environment depth image. Query for support with
        /// [environmentDepthTemporalSmoothingSupported](xref:UnityEngine.XR.ARSubsystems.XROcclusionSubsystemDescriptor.environmentDepthTemporalSmoothingSupported).
        /// </summary>
        /// <value>When `true`, temporal smoothing is applied to the environment depth image. Otherwise, no temporal smoothing is applied.</value>
        public bool environmentDepthTemporalSmoothingRequested
        {
            get => subsystem?.environmentDepthTemporalSmoothingRequested ?? m_EnvironmentDepthTemporalSmoothing;
            set
            {
                m_EnvironmentDepthTemporalSmoothing = value;
                if (enabled && descriptor?.environmentDepthTemporalSmoothingSupported == Supported.Supported)
                {
                    subsystem.environmentDepthTemporalSmoothingRequested = value;
                }
            }
        }

        /// <summary>
        /// Whether temporal smoothing is applied to the environment depth image. Query for support with
        /// [environmentDepthTemporalSmoothingSupported](xref:UnityEngine.XR.ARSubsystems.XROcclusionSubsystemDescriptor.environmentDepthTemporalSmoothingSupported).
        /// </summary>
        /// <value>Read Only.</value>
        public bool environmentDepthTemporalSmoothingEnabled => subsystem?.environmentDepthTemporalSmoothingEnabled ?? false;

        /// <summary>
        /// Get or set the requested occlusion preference mode.
        /// </summary>
        public OcclusionPreferenceMode requestedOcclusionPreferenceMode
        {
            get => subsystem?.requestedOcclusionPreferenceMode ?? m_OcclusionPreferenceMode;
            set
            {
                m_OcclusionPreferenceMode = value;
                if (enabled && subsystem != null)
                {
                    subsystem.requestedOcclusionPreferenceMode = value;
                }
            }
        }

        /// <summary>
        /// Get the current occlusion preference mode in use by the subsystem.
        /// </summary>
        public OcclusionPreferenceMode currentOcclusionPreferenceMode => subsystem?.currentOcclusionPreferenceMode ?? OcclusionPreferenceMode.PreferEnvironmentOcclusion;

        [SerializeField]
        [Tooltip("If both environment texture and human stencil & depth textures are available, this mode specifies which should be used for occlusion.")]
        OcclusionPreferenceMode m_OcclusionPreferenceMode = OcclusionPreferenceMode.PreferEnvironmentOcclusion;

        /// <summary>
        /// The human segmentation stencil texture.
        /// </summary>
        /// <value>
        /// The human segmentation stencil texture, if any. Otherwise, <c>null</c>.
        /// </value>
        public Texture2D humanStencilTexture
        {
            get
            {
                if (descriptor?.humanSegmentationStencilImageSupported == Supported.Supported &&
                    subsystem.TryGetHumanStencil(out var humanStencilDescriptor))
                {
                    m_HumanStencilTextureInfo = ARTextureInfo.GetUpdatedTextureInfo(m_HumanStencilTextureInfo,
                                                                                    humanStencilDescriptor);
                    DebugAssert.That(((m_HumanStencilTextureInfo.descriptor.dimension == TextureDimension.Tex2D)
                                  || (m_HumanStencilTextureInfo.descriptor.dimension == TextureDimension.None)))?.
                        WithMessage("Human Stencil Texture needs to be a Texture 2D, but instead is "
                                    + $"{m_HumanStencilTextureInfo.descriptor.dimension.ToString()}.");
                    return m_HumanStencilTextureInfo.texture as Texture2D;
                }
                return null;
            }
        }

        /// <summary>
        /// Attempt to get the latest human stencil CPU image. This provides direct access to the raw pixel data.
        /// </summary>
        /// <remarks>
        /// The `XRCpuImage` must be disposed to avoid resource leaks.
        /// </remarks>
        /// <param name="cpuImage">If this method returns `true`, an acquired `XRCpuImage`.</param>
        /// <returns>Returns `true` if the CPU image was acquired. Returns `false` otherwise.</returns>
        public bool TryAcquireHumanStencilCpuImage(out XRCpuImage cpuImage)
        {
            if (descriptor?.humanSegmentationStencilImageSupported == Supported.Supported)
            {
                return subsystem.TryAcquireHumanStencilCpuImage(out cpuImage);
            }

            cpuImage = default;
            return false;
        }

        /// <summary>
        /// The human segmentation depth texture.
        /// </summary>
        /// <value>
        /// The human segmentation depth texture, if any. Otherwise, <c>null</c>.
        /// </value>
        public Texture2D humanDepthTexture
        {
            get
            {
                if (descriptor?.humanSegmentationDepthImageSupported == Supported.Supported &&
                    subsystem.TryGetHumanDepth(out var humanDepthDescriptor))
                {
                    m_HumanDepthTextureInfo = ARTextureInfo.GetUpdatedTextureInfo(m_HumanDepthTextureInfo,
                                                                                  humanDepthDescriptor);
                    DebugAssert.That(m_HumanDepthTextureInfo.descriptor.dimension == TextureDimension.Tex2D
                                  || m_HumanDepthTextureInfo.descriptor.dimension == TextureDimension.None)?.
                        WithMessage("Human Depth Texture needs to be a Texture 2D, but instead is "
                                    + $"{m_HumanDepthTextureInfo.descriptor.dimension.ToString()}.");
                    return m_HumanDepthTextureInfo.texture as Texture2D;
                }
                return null;
            }
        }

        /// <summary>
        /// Attempt to get the latest environment depth confidence CPU image. This provides direct access to the
        /// raw pixel data.
        /// </summary>
        /// <remarks>
        /// The `XRCpuImage` must be disposed to avoid resource leaks.
        /// </remarks>
        /// <param name="cpuImage">If this method returns `true`, an acquired `XRCpuImage`.</param>
        /// <returns>Returns `true` if the CPU image was acquired. Returns `false` otherwise.</returns>
        public bool TryAcquireEnvironmentDepthConfidenceCpuImage(out XRCpuImage cpuImage)
        {
            if (descriptor?.environmentDepthConfidenceImageSupported == Supported.Supported)
            {
                return subsystem.TryAcquireEnvironmentDepthConfidenceCpuImage(out cpuImage);
            }

            cpuImage = default;
            return false;
        }

        /// <summary>
        /// The environment depth confidence texture.
        /// </summary>
        /// <value>
        /// The environment depth confidence texture, if any. Otherwise, <c>null</c>.
        /// </value>
        public Texture2D environmentDepthConfidenceTexture
        {
            get
            {
                if (descriptor?.environmentDepthConfidenceImageSupported == Supported.Supported
                    && subsystem.TryGetEnvironmentDepthConfidence(out var environmentDepthConfidenceDescriptor))
                {
                    m_EnvironmentDepthConfidenceTextureInfo = ARTextureInfo.GetUpdatedTextureInfo(m_EnvironmentDepthConfidenceTextureInfo,
                                                                                                  environmentDepthConfidenceDescriptor);
                    DebugAssert.That(m_EnvironmentDepthConfidenceTextureInfo.descriptor.dimension == TextureDimension.Tex2D
                                  || m_EnvironmentDepthConfidenceTextureInfo.descriptor.dimension == TextureDimension.None)?.
                        WithMessage("Environment depth confidence texture needs to be a Texture 2D, but instead is "
                                    + $"{m_EnvironmentDepthConfidenceTextureInfo.descriptor.dimension.ToString()}.");
                    return m_EnvironmentDepthConfidenceTextureInfo.texture as Texture2D;
                }
                return null;
            }
        }


        /// <summary>
        /// Attempt to get the latest human depth CPU image. This provides direct access to the raw pixel data.
        /// </summary>
        /// <remarks>
        /// The `XRCpuImage` must be disposed to avoid resource leaks.
        /// </remarks>
        /// <param name="cpuImage">If this method returns `true`, an acquired `XRCpuImage`.</param>
        /// <returns>Returns `true` if the CPU image was acquired. Returns `false` otherwise.</returns>
        public bool TryAcquireHumanDepthCpuImage(out XRCpuImage cpuImage)
        {
            if (descriptor?.humanSegmentationDepthImageSupported == Supported.Supported)
            {
                return subsystem.TryAcquireHumanDepthCpuImage(out cpuImage);
            }

            cpuImage = default;
            return false;
        }

        /// <summary>
        /// The environment depth texture.
        /// </summary>
        /// <value>
        /// The environment depth texture, if any. Otherwise, <c>null</c>.
        /// </value>
        public Texture2D environmentDepthTexture
        {
            get
            {
                if (descriptor?.environmentDepthImageSupported == Supported.Supported
                    && subsystem.TryGetEnvironmentDepth(out var environmentDepthDescriptor))
                {
                    m_EnvironmentDepthTextureInfo = ARTextureInfo.GetUpdatedTextureInfo(m_EnvironmentDepthTextureInfo,
                                                                                        environmentDepthDescriptor);
                    DebugAssert.That(m_EnvironmentDepthTextureInfo.descriptor.dimension == TextureDimension.Tex2D
                                  || m_EnvironmentDepthTextureInfo.descriptor.dimension == TextureDimension.None)?.
                        WithMessage("Environment depth texture needs to be a Texture 2D, but instead is "
                                    + $"{m_EnvironmentDepthTextureInfo.descriptor.dimension.ToString()}.");
                    return m_EnvironmentDepthTextureInfo.texture as Texture2D;
                }
                return null;
            }
        }

        /// <summary>
        /// Attempt to get the latest environment depth CPU image. This provides direct access to the raw pixel data.
        /// </summary>
        /// <remarks>
        /// The `XRCpuImage` must be disposed to avoid resource leaks.
        /// </remarks>
        /// <param name="cpuImage">If this method returns `true`, an acquired `XRCpuImage`.</param>
        /// <returns>Returns `true` if the CPU image was acquired. Returns `false` otherwise.</returns>
        public bool TryAcquireEnvironmentDepthCpuImage(out XRCpuImage cpuImage)
        {
            if (descriptor?.environmentDepthImageSupported == Supported.Supported)
            {
                return subsystem.TryAcquireEnvironmentDepthCpuImage(out cpuImage);
            }

            cpuImage = default;
            return false;
        }

        /// <summary>
        /// Attempt to get the latest raw environment depth CPU image. This provides direct access to the raw pixel data.
        /// </summary>
        /// <remarks>
        /// > [!NOTE]
        /// > The `XRCpuImage` must be disposed to avoid resource leaks.
        /// This differs from <see cref="TryAcquireEnvironmentDepthCpuImage"/> in that it always tries to acquire the
        /// raw environment depth image, whereas <see cref="TryAcquireEnvironmentDepthCpuImage"/> depends on the value
        /// of <see cref="environmentDepthTemporalSmoothingEnabled"/>.
        /// </remarks>
        /// <param name="cpuImage">If this method returns `true`, an acquired `XRCpuImage`.</param>
        /// <returns>Returns `true` if the CPU image was acquired. Returns `false` otherwise.</returns>
        public bool TryAcquireRawEnvironmentDepthCpuImage(out XRCpuImage cpuImage)
        {
            if (subsystem == null)
            {
                cpuImage = default;
                return false;
            }

            return subsystem.TryAcquireRawEnvironmentDepthCpuImage(out cpuImage);
        }

        /// <summary>
        /// Attempt to get the latest smoothed environment depth CPU image. This provides direct access to
        /// the raw pixel data.
        /// </summary>
        /// <remarks>
        /// > [!NOTE]
        /// > The `XRCpuImage` must be disposed to avoid resource leaks.
        /// This differs from <see cref="TryAcquireEnvironmentDepthCpuImage"/> in that it always tries to acquire the
        /// smoothed environment depth image, whereas <see cref="TryAcquireEnvironmentDepthCpuImage"/>
        /// depends on the value of <see cref="environmentDepthTemporalSmoothingEnabled"/>.
        /// </remarks>
        /// <param name="cpuImage">If this method returns `true`, an acquired `XRCpuImage`.</param>
        /// <returns>Returns `true` if the CPU image was acquired. Returns `false` otherwise.</returns>
        public bool TryAcquireSmoothedEnvironmentDepthCpuImage(out XRCpuImage cpuImage)
        {
            if (subsystem == null)
            {
                cpuImage = default;
                return false;
            }

            return subsystem.TryAcquireSmoothedEnvironmentDepthCpuImage(out cpuImage);
        }

        /// <summary>
        /// Callback before the subsystem is started (but after it is created).
        /// </summary>
        protected override void OnBeforeStart()
        {
            requestedHumanStencilMode = m_HumanSegmentationStencilMode;
            requestedHumanDepthMode = m_HumanSegmentationDepthMode;
            requestedEnvironmentDepthMode = m_EnvironmentDepthMode;
            requestedOcclusionPreferenceMode = m_OcclusionPreferenceMode;
            environmentDepthTemporalSmoothingRequested = m_EnvironmentDepthTemporalSmoothing;

            ResetTextureInfos();
        }

        /// <summary>
        /// Callback when the manager is being disabled.
        /// </summary>
        protected override void OnDisable()
        {
            base.OnDisable();

            ResetTextureInfos();
            InvokeFrameReceived();
        }

        /// <summary>
        /// Callback as the manager is being updated.
        /// </summary>
        public void Update()
        {
            if (subsystem != null)
            {
                UpdateTexturesInfos();
                InvokeFrameReceived();

                requestedEnvironmentDepthMode = m_EnvironmentDepthMode;
                requestedHumanDepthMode = m_HumanSegmentationDepthMode;
                requestedHumanStencilMode = m_HumanSegmentationStencilMode;
                requestedOcclusionPreferenceMode = m_OcclusionPreferenceMode;
                environmentDepthTemporalSmoothingRequested = m_EnvironmentDepthTemporalSmoothing;
            }
        }

        void ResetTextureInfos()
        {
            m_HumanStencilTextureInfo.Reset();
            m_HumanDepthTextureInfo.Reset();
            m_EnvironmentDepthTextureInfo.Reset();
            m_EnvironmentDepthConfidenceTextureInfo.Reset();
        }

        /// <summary>
        /// Pull the texture descriptors from the occlusion subsystem, and update the texture information maintained by
        /// this component.
        /// </summary>
        void UpdateTexturesInfos()
        {
            var textureDescriptors = subsystem.GetTextureDescriptors(Allocator.Temp);
            try
            {
                int numUpdated = Math.Min(m_TextureInfos.Count, textureDescriptors.Length);

                // Update the existing textures that are in common between the two arrays.
                for (int i = 0; i < numUpdated; ++i)
                {
                    m_TextureInfos[i] = ARTextureInfo.GetUpdatedTextureInfo(m_TextureInfos[i], textureDescriptors[i]);
                }

                // If there are fewer textures in the current frame than we had previously, destroy any remaining unneeded
                // textures.
                if (numUpdated < m_TextureInfos.Count)
                {
                    for (int i = numUpdated; i < m_TextureInfos.Count; ++i)
                    {
                        m_TextureInfos[i].Reset();
                    }
                    m_TextureInfos.RemoveRange(numUpdated, (m_TextureInfos.Count - numUpdated));
                }
                // Else, if there are more textures in the current frame than we have previously, add new textures for any
                // additional descriptors.
                else if (textureDescriptors.Length > m_TextureInfos.Count)
                {
                    for (int i = numUpdated; i < textureDescriptors.Length; ++i)
                    {
                        m_TextureInfos.Add(new ARTextureInfo(textureDescriptors[i]));
                    }
                }
            }
            finally
            {
                if (textureDescriptors.IsCreated)
                {
                    textureDescriptors.Dispose();
                }
            }
        }

        /// <summary>
        /// Invoke the occlusion frame received event with the updated textures and texture property IDs.
        /// </summary>
        void InvokeFrameReceived()
        {
            if (frameReceived != null)
            {
                int numTextureInfos = m_TextureInfos.Count;

                m_Textures.Clear();
                m_TexturePropertyIds.Clear();

                m_Textures.Capacity = numTextureInfos;
                m_TexturePropertyIds.Capacity = numTextureInfos;

                for (int i = 0; i < numTextureInfos; ++i)
                {
                    DebugAssert.That(m_TextureInfos[i].descriptor.dimension == TextureDimension.Tex2D)?.
                        WithMessage($"Texture needs to be a Texture 2D, but instead is {m_TextureInfos[i].descriptor.dimension.ToString()}.");

                    m_Textures.Add((Texture2D)m_TextureInfos[i].texture);
                    m_TexturePropertyIds.Add(m_TextureInfos[i].descriptor.propertyNameId);
                }

                subsystem.GetMaterialKeywords(out List<string> enabledMaterialKeywords, out List<string> disabledMaterialKeywords);

                AROcclusionFrameEventArgs args = new AROcclusionFrameEventArgs();
                args.textures = m_Textures;
                args.propertyNameIds = m_TexturePropertyIds;
                args.enabledMaterialKeywords = enabledMaterialKeywords;
                args.disabledMaterialKeywords = disabledMaterialKeywords;

                frameReceived(args);
            }
        }
    }
}
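
Beyond driving occlusion automatically, the manager exposes its depth textures and CPU images to user code. Below is a minimal consumer sketch (the class name EnvironmentDepthReader is ours; it assumes the manager is assigned in the Inspector and the device supports environment depth), using only TryAcquireEnvironmentDepthCpuImage from the listing above:

using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class EnvironmentDepthReader : MonoBehaviour
{
    [SerializeField] AROcclusionManager m_OcclusionManager;

    void Update()
    {
        // Acquire the latest environment depth image; it must be disposed
        // to avoid leaking the native resource.
        if (m_OcclusionManager.TryAcquireEnvironmentDepthCpuImage(out var cpuImage))
        {
            using (cpuImage)
            {
                Debug.Log($"Environment depth: {cpuImage.width}x{cpuImage.height}, format {cpuImage.format}");
            }
        }
    }
}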

Implementing it with EQ-R

EQ-R

Overview

EQ-Renderer is a 3D AR renderer for Android, built by EQ as an extension of Sceneform (Filament).

Main features

It keeps roughly 90% of the Sceneform v1.16.0 API (dropping deprecated parts such as SFB asset loading), adds a video-background view, fixes Sceneform's model-loading memory leak, integrates AREngine and ORB-SLAM3, and adds conversions between scene coordinates and the CGCS-2000 geographic coordinate system.

Note: with limited time, the documentation and samples are incomplete. For Sceneform itself, refer to Google's official documentation; for the extended APIs, get in touch via the Git repository.

Related links

Git repository
  • EQ-Renderer sample project
Gitee
  • EQ-Renderer sample project
EQ-R documentation
  • Documentation index

Usage example

Declare depth in the Android manifest by adding an entry whose value is "com.google.ar.core.depth"; for ARCore this is typically the feature declaration <uses-feature android:name="com.google.ar.core.depth"/>, which also limits store visibility to depth-capable devices.

API call

When building the AR layout with ARSceneLayout, switch the depth occlusion mode wherever appropriate, for example:

ARSceneLayout layout = new ARSceneLayout(this); // use the regular 3D view
layout.getSceneView()
      .getCameraStream()
      .setDepthOcclusionMode(DepthOcclusionMode.DEPTH_OCCLUSION_ENABLED);

How it works

Acquire the depth image and the camera frame, then resolve occlusion in the shader from the depth data.

EQ-R is built on Filament.

The Filament material is as follows:

material {
    name : depth,
    shadingModel : unlit,
    blending : opaque,
    vertexDomain : device,
    parameters : [
        {
            type : samplerExternal,
            name : cameraTexture
        },
        {
            type : sampler2d,
            name : depthTexture
        },
        {
            type : float4x4,
            name : uvTransform
        }
    ],
    requires : [
        uv0
    ]
}

fragment {
    void material(inout MaterialInputs material) {
        prepareMaterial(material);
        // Draw the camera image as the background color.
        material.baseColor.rgb = inverseTonemapSRGB(texture(materialParams_cameraTexture, getUV0()).rgb);
        // The depth texture packs millimeters into two 8-bit channels:
        // low byte in x, high byte in y, so depth_mm = x*255 + y*(256*255).
        vec2 packed_depth = texture(materialParams_depthTexture, getUV0()).xy;
        float depth_mm = dot(packed_depth, vec2(255.f, 256.f * 255.f));
        // Project a view-space point at that distance to obtain its NDC depth.
        vec4 view = mulMat4x4Float3(getClipFromViewMatrix(), vec3(0.f, 0.f, -depth_mm / 1000.f));
        float ndc_depth = view.z / view.w;
        // Write the real scene's depth to the depth buffer (note the flip: the
        // buffer here uses an inverted range), so the regular depth test hides
        // any virtual fragment that lies behind the real scene.
        gl_FragDepth = 1.f - ((ndc_depth + 1.f) / 2.f);
    }
}

vertex {
    void materialVertex(inout MaterialVertexInputs material) {
        material.uv0 = mulMat4x4Float3(materialParams.uvTransform, vec3(material.uv0.x, material.uv0.y, 0.f)).xy;
    }
} 

Sample application

A sample application built earlier on Android (Java).

Pipeline inspection demo: excavation view, swipe (roller-blind) comparison effects…

