Digital Human Knowledge Base: Awesome-Talking-Head-Synthesis

Table of Contents

  • Digital Human Knowledge Base: Awesome-Talking-Head-Synthesis
    • Datasets
    • Survey
    • Audio-driven
    • Text-driven
    • NeRF & 3D
    • Metrics
    • Tools & Software
    • Slides & Presentations

GitHub: https://github.com/Kedreamix/Awesome-Talking-Head-Synthesis

This repository organizes papers, code, and resources related to generative adversarial networks (GANs) 🤗 and neural radiance fields (NeRF) 🎨, with a main focus on image-driven and audio-driven talking head synthesis papers and released code. 👤

A collection of talking head synthesis papers and their released code. ✍️

Most papers are linked to PDFs on “arXiv” or journal/conference websites 📚. However, some papers require an academic license to view 🔐.

🔆 This project Awesome-Talking-Head-Synthesis is ongoing - pull requests are welcome! If you have any suggestions (missing papers, new papers, key researchers or typos), please feel free to edit and submit a PR. You can also open an issue or contact me directly via email. 📩

⭐ If you find this repo useful, please give it a star! 🤩

2023.12 Update 📆

Thanks to https://github.com/Curated-Awesome-Lists/awesome-ai-talking-heads, I have added some of its content, such as the Tools & Software and Slides & Presentations sections. 🙏 I hope this will be helpful. 😊

If you have any feedback or ideas on extending this aggregated resource, please open an issue or PR - community contributions are vital to advancing this shared knowledge. 🤝

Let’s keep pushing forward to recreate ever more realistic digital human faces! 💪 We’ve come so far but still have a long way to go. With continued research 🔬 and collaboration, I’m sure we’ll get there! 🤗

Please feel free to star ⭐ and share this repo if you find it a valuable resource. Your support helps motivate me to keep maintaining and improving it. 🥰 Let me know if you have any other questions!

Datasets

| Dataset | Download Link | Description |
| --- | --- | --- |
| Faceforensics++ | Download link | - |
| CelebV | Download link | - |
| VoxCeleb | Download link | VoxCeleb, a comprehensive audio-visual dataset for speaker recognition, encompasses both the VoxCeleb1 and VoxCeleb2 datasets. |
| VoxCeleb1 | Download link | VoxCeleb1 contains over 100,000 utterances from 1,251 celebrities, extracted from videos uploaded to YouTube. |
| VoxCeleb2 | Download link | Extracted from YouTube videos, VoxCeleb2 includes video URLs and discourse timestamps. As the largest public audio-visual dataset, it is primarily used for speaker recognition tasks, but it can also be used for training talking-head generation models. To obtain download permission and access the dataset, apply here. Requires 300 GB+ storage space. |
| ObamaSet | Download link | ObamaSet is a specialized audio-visual dataset focused on analyzing the visual speech of former US President Barack Obama. All video samples are collected from his weekly address footage. Unlike previous datasets, it exclusively centers on Barack Obama and does not provide any human annotations. |
| TalkingHead-1KH | Download link | The dataset consists of 500k video clips, of which about 80k are greater than 512x512 resolution. Only videos under permissive licenses are included. Note that the number of videos differs from that in the original paper because a more robust preprocessing script was used to split the videos. |
| LRW (Lip Reading in the Wild) | Download link | LRW, a diverse English-speaking video dataset from BBC programs, features over 1,000 speakers with various speaking styles and head poses. Each video is 1.16 seconds long (29 frames) and involves the target word along with context. |
| MEAD 2020 | Download link | MEAD 2020 is a talking-head dataset annotated with emotion labels and intensity labels. The dataset focuses on facial generation for natural emotional speech, covering eight different emotions at three intensity levels. |
| CelebV-HQ | Download link | CelebV-HQ is a high-quality video dataset comprising 35,666 clips with a resolution of at least 512x512. It includes 15,653 identities, and each clip is manually labeled with 83 facial attributes, spanning appearance, action, and emotion. The dataset's diversity and temporal coherence make it a valuable resource for tasks like unconditional video generation and video facial attribute editing. |
| HDTF | Download link | HDTF, the High-definition Talking-Face Dataset, is a large in-the-wild high-resolution audio-visual dataset consisting of approximately 362 different videos totaling 15.8 hours. Original video resolutions are 720P or 1080P, and each cropped video is resized to 512 × 512. |
| CREMA-D | Download link | CREMA-D is a diverse dataset with 7,442 original clips featuring 91 actors, including 48 male and 43 female actors aged 20 to 74, representing various races and ethnicities. The dataset includes recordings of actors speaking from a set of 12 sentences, expressing six different emotions (Anger, Disgust, Fear, Happy, Neutral, and Sad) at four emotion levels (Low, Medium, High, and Unspecified). Emotion and intensity ratings were gathered through crowd-sourcing, with 2,443 participants each rating 90 unique clips (30 audio, 30 visual, and 30 audio-visual). Over 95% of the clips have more than 7 ratings. For additional details on CREMA-D, refer to the paper link. |
| LRS2 | Download link | LRS2 is a lip reading dataset that includes videos recorded in diverse settings, suitable for studying lip reading and visual speech recognition. |
| GRID | Download link | The GRID dataset was recorded in a laboratory setting with 34 volunteers, each speaking 1,000 phrases, totaling 34,000 utterance instances. Phrases follow specific rules, with six words randomly selected from six categories: "command," "color," "preposition," "letter," "number," and "adverb." Access the dataset here. |
| SAVEE | Download link | The SAVEE (Surrey Audio-Visual Expressed Emotion) database is a crucial component for developing an automatic emotion recognition system. It features recordings from 4 male actors expressing 7 different emotions, totaling 480 British English utterances. These sentences, selected from the standard TIMIT corpus, are phonetically balanced for each emotion. Recorded in a high-quality visual media lab, the data undergoes processing and labeling. Performance evaluation involves 10 subjects rating recordings under audio, visual, and audio-visual conditions. Classification systems for each modality achieve speaker-independent recognition rates of 61%, 65%, and 84% for audio, visual, and audio-visual, respectively. |
| BIWI (3D) | Download link | The Biwi 3D Audiovisual Corpus of Affective Communication serves as a compromise between data authenticity and quality, acquired at ETHZ in collaboration with SYNVO GmbH. |
| VOCA | Download link | VOCA is a 4D-face dataset with approximately 29 minutes of 4D face scans and synchronized audio from 12 speakers. It greatly facilitates research in 3D VSG. |
| Multiface (3D) | Download link | The Multiface Dataset consists of high-quality multi-view video recordings of 13 people displaying various facial expressions. It contains approximately 12,200 to 23,000 frames per subject, captured at 30 fps from around 40 to 160 camera views with uniform lighting. The dataset's size is 65TB and includes raw images (2048x1334 resolution), tracked and meshed heads, 1024x1024 unwrapped face textures, camera calibration metadata, and audio. This repository provides code for downloading the dataset and building a codec avatar using a deep appearance model. |
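
Most of the video datasets above ship as raw clips that still need to be split into frames and resized before training. Below is a minimal, hypothetical preprocessing sketch (not any dataset's official script): it reads a clip with OpenCV and writes frames resized to 512x512, the crop resolution used by HDTF-style data. Face detection and cropping are deliberately omitted; all paths and names are illustrative.

```python
import os
import cv2  # assumes opencv-python is installed

def extract_frames(video_path, out_dir, size=512):
    """Dump every frame of a video as PNGs resized to size x size."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video or read error
            break
        frame = cv2.resize(frame, (size, size), interpolation=cv2.INTER_AREA)
        cv2.imwrite(os.path.join(out_dir, f"{idx:06d}.png"), frame)
        idx += 1
    cap.release()
    return idx

# Usage (paths are illustrative):
# n = extract_frames("hdtf_clip.mp4", "frames/hdtf_clip")
# print(f"wrote {n} frames")
```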

Survey

| Year | Title | Conference/Journal |
| --- | --- | --- |
| 2023 | From Pixels to Portraits: A Comprehensive Survey of Talking Head Generation Techniques and Applications | arXiv 2023 |
| 2023 | Human-Computer Interaction System: A Survey of Talking-Head Generation | IEEE |
| 2023 | Talking human face generation: A survey | ACM |
| 2022 | Deep Learning for Visual Speech Analysis: A Survey | arXiv 2022 |
| 2020 | What comprises a good talking-head video generation?: A Survey and Benchmark | arXiv 2020 |

Audio-driven

| Year | Title | Conference/Journal | Code | Project | Keywords |
| --- | --- | --- | --- | --- | --- |
| 2024 | [GAIA] GAIA: Zero-shot Talking Avatar Generation | Arxiv 2024 | Code (coming) | Project | 😲😲😲 |
| 2023 | Implicit Identity Representation Conditioned Memory Compensation Network for Talking Head Video Generation | ICCV 2023 | Code | Project | - |
| 2023 | [ToonTalker] ToonTalker: Cross-Domain Face Reenactment | ICCV 2023 | - | - | - |
| 2023 | Efficient Emotional Adaptation for Audio-Driven Talking-Head Generation | ICCV 2023 | Code | Project | - |
| 2023 | [EMMN] EMMN: Emotional Motion Memory Network for Audio-driven Emotional Talking Face Generation | ICCV 2023 | - | - | Emotion |
| 2023 | Emotional Listener Portrait: Realistic Listener Motion Simulation in Conversation | ICCV 2023 | - | - | Emotion, LHG |
| 2023 | [MODA] MODA: Mapping-Once Audio-driven Portrait Animation with Dual Attentions | ICCV 2023 | - | - | - |
| 2023 | [Facediffuser] Facediffuser: Speech-driven 3d facial animation synthesis using diffusion | ACM SIGGRAPH MIG 2023 | Code | Project | 🔥Diffusion, 3D |
| 2023 | Audio-Driven Dubbing for User Generated Contents via Style-Aware Semi-Parametric Synthesis | TCSVT 2023 | - | - | - |
| 2023 | [SadTalker] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation | CVPR 2023 | Code | Project | 3D, Single Image |
| 2023 | [EmoTalk] EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation | ICCV 2023 | Code | - | 3D, Emotion |
| 2023 | Emotional Talking Head Generation based on Memory-Sharing and Attention-Augmented Networks | InterSpeech 2023 | - | - | Emotion |
| 2023 | [DINet] DINet: Deformation Inpainting Network for Realistic Face Visually Dubbing on High Resolution Video | AAAI 2023 | Code | - | - |
| 2023 | [StyleTalk] StyleTalk: One-shot Talking Head Generation with Controllable Speaking Styles | AAAI 2023 | Code | - | Style |
| 2023 | High-fidelity Generalized Emotional Talking Face Generation with Multi-modal Emotion Space Learning | CVPR 2023 | - | - | Emotion |
| 2023 | [StyleSync] StyleSync: High-Fidelity Generalized and Personalized Lip Sync in Style-based Generator | CVPR 2023 | Code | Project | - |
| 2023 | [TalkLip] TalkLip: Seeing What You Said - Talking Face Generation Guided by a Lip Reading Expert | CVPR 2023 | Code | - | - |
| 2023 | [CodeTalker] CodeTalker: Speech-Driven 3D Facial Animation with Discrete Motion Prior | CVPR 2023 | Code | Project | 3D, codebook |
| 2023 | [EmoGen] Emotionally Enhanced Talking Face Generation | Arxiv 2023 | Code | - | Emotion |
| 2023 | [DAE-Talker] DAE-Talker: High Fidelity Speech-Driven Talking Face Generation with Diffusion Autoencoder | Arxiv 2023 | - | Project | 🔥Diffusion |
| 2023 | [READ] READ Avatars: Realistic Emotion-controllable Audio Driven Avatars | Arxiv 2023 | - | - | - |
| 2023 | [DiffTalk] DiffTalk: Crafting Diffusion Models for Generalized Talking Head Synthesis | CVPR 2023 | Code | Project | 🔥Diffusion |
| 2023 | [Diffused Heads] Diffused Heads: Diffusion Models Beat GANs on Talking-Face Generation | Arxiv 2023 | - | Project | 🔥Diffusion |
| 2022 | [MemFace] Expressive Talking Head Generation with Granular Audio-Visual Control | CVPR 2022 | - | - | - |
| 2022 | Talking Face Generation with Multilingual TTS | CVPR 2022 | Demo Track | - | - |
| 2022 | [EAMM] EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model | SIGGRAPH 2022 | - | - | Emotion |
| 2022 | [SPACEx] SPACEx 🚀: Speech-driven Portrait Animation with Controllable Expression | arXiv 2022 | - | Project | - |
| 2022 | [AV-CAT] Masked Lip-Sync Prediction by Audio-Visual Contextual Exploitation in Transformers | SIGGRAPH Asia 2022 | - | - | - |
| 2022 | [MemFace] Memories are One-to-Many Mapping Alleviators in Talking Face Generation | arXiv 2022 | - | - | - |
| 2021 | [PC-AVS] PC-AVS: Pose-Controllable Talking Face Generation by Implicitly Modularized Audio-Visual Representation | CVPR 2021 | Code | Project | - |
| 2021 | [Speech2Talking-Face] Speech2Talking-Face: Inferring and Driving a Face with Synchronized Audio-Visual Representation | IJCAI 2021 | - | - | - |
| 2021 | [FAU] Talking Head Generation with Audio and Speech Related Facial Action Units | BMVC 2021 | - | - | AU |
| 2021 | [EVP] Audio-Driven Emotional Video Portraits | CVPR 2021 | Code | - | Emotion |
| 2021 | [IATS] Imitating Arbitrary Talking Style for Realistic Audio-Driven Talking Face Synthesis | ACM Multimedia 2021 | - | - | - |
| 2020 | [Wav2Lip] A Lip Sync Expert Is All You Need for Speech to Lip Generation In The Wild | ACM Multimedia 2020 | Code | Project | - |
| 2020 | [RhythmicHead] Talking-head Generation with Rhythmic Head Motion | ECCV 2020 | Code | - | - |
| 2020 | [MakeItTalk] Speaker-Aware Talking-Head Animation | SIGGRAPH Asia 2020 | Code | Project | - |
| 2020 | [Neural Voice Puppetry] Audio-driven Facial Reenactment | ECCV 2020 | - | Project | - |
| 2020 | [MEAD] A Large-scale Audio-visual Dataset for Emotional Talking-face Generation | ECCV 2020 | Code | Project | - |
| 2020 | Realistic Speech-Driven Facial Animation with GANs | IJCV 2020 | - | - | - |
| 2019 | [DAVS] Talking Face Generation by Adversarially Disentangled Audio-Visual Representation | AAAI 2019 | Code | - | - |
| 2019 | [ATVGnet] Hierarchical Cross-modal Talking Face Generation with Dynamic Pixel-wise Loss | CVPR 2019 | Code | - | - |
| 2018 | Lip Movements Generation at a Glance | ECCV 2018 | Code | - | - |
| 2018 | [VisemeNet] Audio-Driven Animator-Centric Speech Animation | SIGGRAPH 2018 | - | - | - |
| 2017 | [Synthesizing Obama] Learning Lip Sync From Audio | SIGGRAPH 2017 | - | Project | - |
| 2017 | [You Said That?] Synthesising Talking Faces From Audio | BMVC 2019 | Code | - | - |
| 2017 | Audio-Driven Facial Animation by Joint End-to-End Learning of Pose and Emotion | SIGGRAPH 2017 | - | - | - |
| 2017 | A Deep Learning Approach for Generalized Speech Animation | SIGGRAPH 2017 | - | - | - |
| 2016 | [LRW] Lip Reading in the Wild | ACCV 2016 | - | - | - |

Text-driven

| Year | Title | Conference/Journal | Code/Proj |
| --- | --- | --- | --- |
| 2023 | TalkCLIP: Talking Head Generation with Text-Guided Expressive Speaking Styles | Arxiv | - |
| 2021 | Write-a-speaker: Text-based Emotional and Rhythmic Talking-head Generation | AAAI | Code |
| 2021 | Txt2vid: Ultra-low bitrate compression of talking-head videos via text | Arxiv | Code |

NeRF & 3D

| Year | Title | Conference/Journal | Code | Project | Keywords |
| --- | --- | --- | --- | --- | --- |
| 2024 | [SyncTalk] SyncTalk: The Devil😈 is in the Synchronization for Talking Head Synthesis | CVPR 2024? | Code | Project | 😈 |
| 2024 | [DT-NeRF] DT-NeRF: Decomposed Triplane-Hash Neural Radiance Fields for High-Fidelity Talking Portrait Synthesis | ICASSP 2024 | - | - | ER-NeRF |
| 2023 | [ER-NeRF] Efficient Region-Aware Neural Radiance Fields for High-Fidelity Talking Portrait Synthesis | ICCV 2023 | Code | Project | Tri-plane |
| 2023 | [LipNeRF] LipNeRF: What is the right feature space to lip-sync a NeRF? | FG 2023 | Code | Project | Wav2lip |
| 2023 | [SD-NeRF] SD-NeRF: Towards Lifelike Talking Head Animation via Spatially-adaptive Dual-driven NeRFs | IEEE 2023 | - | - | - |
| 2023 | [Instruct-NeuralTalker] Instruct-NeuralTalker: Editing Audio-Driven Talking Radiance Fields with Instructions | Arxiv 2023 | - | - | - |
| 2023 | [GeneFace++] Generalized and Stable Real-Time Audio-Driven 3D Talking Face Generation | Arxiv 2023 | - | Project | - |
| 2023 | [GeneFace] GeneFace: Generalized and High-Fidelity Audio-Driven 3D Talking Face Synthesis | ICLR 2023 | Code | Project | - |
| 2022 | [RAD-NeRF] RAD-NeRF: Real-time Neural Talking Portrait Synthesis | Arxiv 2022 | Code | Project | InstantNGP |
| 2022 | [DFRF] DFRF: Learning Dynamic Facial Radiance Fields for Few-Shot Talking Head Synthesis | ECCV 2022 | Code | Project | - |
| 2022 | [DialogueNeRF] DialogueNeRF: Towards Realistic Avatar Face-to-face Conversation Video Generation | Arxiv 2022 | - | - | - |
| 2022 | [NeRFInvertor] NeRFInvertor: High Fidelity NeRF-GAN Inversion for Single-shot Real Image Animation | Arxiv 2022 | Code | Project | - |
| 2022 | [Next3D] Next3D: Generative Neural Texture Rasterization for 3D-Aware Head Avatars | Arxiv 2022 | Code | Project | - |
| 2022 | [3DFaceShop] 3DFaceShop: Explicitly Controllable 3D-Aware Portrait Generation | Arxiv 2022 | Code | Project | - |
| 2022 | [FNeVR] FNeVR: Neural Volume Rendering for Face Animation | Arxiv 2022 | Code | - | - |
| 2022 | [ROME] ROME: Realistic One-shot Mesh-based Head Avatars | ECCV 2022 | Code | Project | - |
| 2022 | [IMavatar] IMavatar: Implicit Morphable Head Avatars from Videos | CVPR 2022 | Code | Project | - |
| 2022 | [HeadNeRF] HeadNeRF: A Real-time NeRF-based Parametric Head Model | CVPR 2022 | Code | Project | - |
| 2022 | [SSP-NeRF] Semantic-Aware Implicit Neural Audio-Driven Video Portrait Generation | Arxiv 2022 | Code | Project | - |
| 2021 | [AD-NeRF] AD-NeRF: Audio Driven Neural Radiance Fields for Talking Head Synthesis | ICCV 2021 | Code | Project | - |
| 2021 | [NerFACE] NerFACE: Dynamic Neural Radiance Fields for Monocular 4D Facial Avatar Reconstruction | CVPR 2021 Oral | Code | Project | - |
| 2021 | [DFA-NeRF] DFA-NeRF: Personalized Talking Head Generation via Disentangled Face Attributes Neural Rendering | Arxiv 2021 | Code | - | - |

Metrics

| Metric | Paper | Link |
| --- | --- | --- |
| PSNR (peak signal-to-noise ratio) | - | - |
| SSIM (structural similarity index measure) | Image quality assessment: from error visibility to structural similarity. | - |
| CPBD (cumulative probability of blur detection) | A no-reference image blur metric based on the cumulative probability of blur detection | - |
| LPIPS (Learned Perceptual Image Patch Similarity) | The Unreasonable Effectiveness of Deep Features as a Perceptual Metric | paper |
| NIQE (Natural Image Quality Evaluator) | Making a 'Completely Blind' Image Quality Analyzer | paper |
| FID (Fréchet inception distance) | GANs trained by a two time-scale update rule converge to a local nash equilibrium | - |
| LMD (landmark distance error) | Lip Movements Generation at a Glance | - |
| LRA (lip-reading accuracy) | Talking Face Generation by Conditional Recurrent Adversarial Network | paper |
| WER (word error rate) | Lipnet: end-to-end sentence-level lipreading. | - |
| LSE-D (Lip Sync Error - Distance) | Out of time: automated lip sync in the wild | - |
| LSE-C (Lip Sync Error - Confidence) | Out of time: automated lip sync in the wild | - |
| ACD (Average content distance) | Facenet: a unified embedding for face recognition and clustering. | - |
| CSIM (cosine similarity) | Arcface: additive angular margin loss for deep face recognition. | - |
| EAR (eye aspect ratio) | Real-time eye blink detection using facial landmarks. In: Computer Vision Winter Workshop | - |
| ESD (emotion similarity distance) | What comprises a good talking-head video generation?: A Survey and Benchmark | - |
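
Several of the metrics above are simple closed-form quantities. The sketch below shows PSNR, LMD, and EAR in plain NumPy as a minimal reference for what those table entries measure; function and variable names are illustrative and not taken from any particular evaluation codebase.

```python
import numpy as np

def psnr(ref, gen, max_val=255.0):
    """PSNR = 10 * log10(MAX^2 / MSE) between two images of the same shape."""
    mse = np.mean((ref.astype(np.float64) - gen.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def landmark_distance(ref_lms, gen_lms):
    """LMD: mean Euclidean distance between matched 2D landmarks (N x 2 arrays)."""
    return float(np.mean(np.linalg.norm(ref_lms - gen_lms, axis=1)))

def eye_aspect_ratio(eye):
    """EAR for one eye given 6 landmarks ordered p1..p6 (a 6 x 2 array)."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return float(vertical / (2.0 * horizontal))

# Quick self-check on random data (illustrative only):
ref = np.random.randint(0, 256, (256, 256, 3))
gen = np.clip(ref + np.random.randint(-5, 6, ref.shape), 0, 255)
print("PSNR:", psnr(ref, gen))
```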

Tools & Software

| Tool/Resource | Description |
| --- | --- |
| LUCIA | Development of an MPEG-4 Talking Head Engine. 💻 |
| Yepic Studio | Create and dub talking head-style videos in minutes without expensive equipment. 🎥 |
| Mel McGee's Talkbots | A complete multi-browser, multi-platform talking head application in SVG suitable for web sites or as an avatar. 🗣️ |
| face3D_chung | Create 3D character avatar head objects with texture from a single photo for your games. 🎮 |
| CrazyTalk | Exciting features for 3D head creation and automation. 🤪 |
| tts avatar free download - SourceForge | Mel McGee's Talkbots is a complete multi-browser, multi-platform talking head. (🔧👄) |
| Verbatim AI - Product Information, Latest Updates, and Reviews 2023 | A simple yet powerful API to generate AI "talking head" videos in near real-time with Verbatim AI. Add interest, intrigue, and dynamism to your chat bots! (🔧👄) |
| Best Open Source BASIC 3D Modeling Software | Includes talk3D_chung, a small example using obj models created with face3D_chung, and speak3D_chung_dll, a dll to load and display face3D_chung talking avatars. (🛠️🎭) |
| DVDStyler / Discussion / Help: ffmpeg-vbr or internal | Talking heads would get an unnecessarily high bitrate while using DVDStyler. (🛠️👄) |
| puffin web browser free download - SourceForge | Mel McGee's Talkbots is a complete multi-browser, multi-platform talking head. (🔧👄) |
| 12 best AI video generators to use in 2023 Free and paid \| Product … | Whether you're an entrepreneur, small business owner, or run a large company, AI video generators make it super easy to create high-quality videos from scratch. (🔧🎥) |

Slides & Presentations

| Presentation Title | Description |
| --- | --- |
| Few-Shot Adversarial Learning of Realistic Neural Talking Head Models | Presentation reviewing the few-shot adversarial learning of realistic neural talking head models. |
| Nethania Michelle's Character | PPT: Presentation discussing the improvement of a 3D talking head for use in an avatar of a virtual meeting room. |
| Presenting you: Top tips on presenting with Prezi Video – Prezi | Article providing top tips for presenting with Prezi Video. |
| Research Presentation | PPT: Resident Research Presentation Slide Deck. |
| Adding narration to your presentation (using Prezi Video) – Prezi | Learn how to add narration to your Prezi presentation with Prezi Video. |
