webassembly003 TTS BARK.CPP

TTS task

  • TTS (text-to-speech) is a natural language processing (NLP) task in which a model converts input text into audio, i.e., automatic speech synthesis. Concretely, the model must understand the input text and generate the corresponding speech output so that the synthesized audio sounds natural and fluent, close to the way a human would say it.

Bark

  • Bark (https://github.com/suno-ai/bark) is a transformer-based text-to-audio model created by Suno. Bark can generate highly realistic multilingual speech as well as other audio, including music, background noise, and simple sound effects. The model can also produce non-verbal sounds such as laughing, sighing, and crying. To support the research community, Suno provides access to pretrained model checkpoints that are ready for inference and available for commercial use.

bark.cpp

  • https://github.com/PABannier/bark.cpp

Build

$mkdir build
$cd build
$cmake ..
-- The C compiler identification is GNU 9.5.0
-- The CXX compiler identification is GNU 9.5.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE  
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
-- Linux detected
-- Configuring done
-- Generating done
-- Build files have been written to: /home/pdd/le/bark.cpp/build
$cmake --build . --config Release
[  7%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml.c.o
[ 14%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-alloc.c.o
[ 21%] Linking C static library libggml.a
[ 21%] Built target ggml
[ 28%] Building CXX object CMakeFiles/bark.cpp.dir/bark.cpp.o
[ 42%] Linking CXX static library libbark.cpp.a
[ 42%] Built target bark.cpp
[ 50%] Building CXX object examples/main/CMakeFiles/main.dir/main.cpp.o
[ 57%] Linking CXX executable ../../bin/main
[ 57%] Built target main
[ 64%] Building CXX object examples/server/CMakeFiles/server.dir/server.cpp.o
[ 71%] Linking CXX executable ../../bin/server
[ 71%] Built target server
[ 78%] Building CXX object examples/quantize/CMakeFiles/quantize.dir/quantize.cpp.o
[ 85%] Linking CXX executable ../../bin/quantize
[ 85%] Built target quantize
[ 92%] Building CXX object tests/CMakeFiles/test-tokenizer.dir/test-tokenizer.cpp.o
[100%] Linking CXX executable ../bin/test-tokenizer
[100%] Built target test-tokenizer

Weight download and conversion

$cd ../
# downloads text_2.pt, coarse_2.pt, fine_2.pt, and https://dl.fbaipublicfiles.com/encodec/v0/encodec_24khz-d7cc33bc.th
$python3 download_weights.py --download-dir ./models
# convert the model to ggml format
$python3 convert.py   --dir-model ./models --codec-path ./models --vocab-path ./ggml_weights/ --out-dir ./ggml_weights/
$ ls -ahl ./models/
total 13G
drwxrwxr-x  2 pdd pdd 4.0K Jan 29 08:22 .
drwxrwxr-x 13 pdd pdd 4.0K Jan 29 06:50 ..
-rwxrwxrwx  1 pdd pdd 3.7G Jan 29 07:34 coarse_2.pt
-rw-rw-r--  1 pdd pdd  89M Jan 29 07:29 encodec_24khz-d7cc33bc.th
-rwxrwxrwx  1 pdd pdd 3.5G Jan 29 07:53 fine_2.pt
-rwxrwxrwx  1 pdd pdd 5.0G Jan 29 07:22 text_2.pt
$ ls -ahl ./ggml_weights/
total 4.2G
drwxrwxr-x  2 pdd pdd 4.0K Jan 29 08:34 .
drwxrwxr-x 13 pdd pdd 4.0K Jan 29 06:50 ..
-rw-rw-r--  1 pdd pdd 1.3M Jan 29 08:33 ggml_vocab.bin
-rw-rw-r--  1 pdd pdd 1.3G Jan 29 08:34 ggml_weights_coarse.bin
-rw-rw-r--  1 pdd pdd  45M Jan 29 08:34 ggml_weights_codec.bin
-rw-rw-r--  1 pdd pdd 1.2G Jan 29 08:34 ggml_weights_fine.bin
-rw-rw-r--  1 pdd pdd 1.7G Jan 29 08:33 ggml_weights_text.bin
-rw-rw-r--  1 pdd pdd 973K Jan 29 05:23 vocab.txt
$ ./main -m ./ggml_weights/ -p "this is an audio"

Run

$ ./build/bin/main -h
usage: ./build/bin/main [options]

options:
  -h, --help            show this help message and exit
  -t N, --threads N     number of threads to use during computation (default: 4)
  -s N, --seed N        seed for random number generator (default: 0)
  -p PROMPT, --prompt PROMPT
                        prompt to start generation with (default: random)
  -m FNAME, --model FNAME
                        model path (default: /home/pdd/le/bark.cpp/ggml_weights)
  -o FNAME, --outwav FNAME
                        output generated wav (default: output.wav)
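
For example (the flag values below are arbitrary, chosen only for illustration), a run that sets the thread count, random seed, and output file explicitly would look like:

$ ./build/bin/main -m ./ggml_weights/ -p "this is an audio" -t 8 -s 42 -o audio.wav
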
$ ./build/bin/main -m ./ggml_weights/ -p "this is an audio"
bark_load_model_from_file: loading model from './ggml_weights/'
bark_load_model_from_file: reading bark text model
gpt_model_load: n_in_vocab  = 129600
gpt_model_load: n_out_vocab = 10048
gpt_model_load: block_size  = 1024
gpt_model_load: n_embd      = 1024
gpt_model_load: n_head      = 16
gpt_model_load: n_layer     = 24
gpt_model_load: n_lm_heads  = 1
gpt_model_load: n_wtes      = 1
gpt_model_load: ftype       = 0
gpt_model_load: qntvr       = 0
gpt_model_load: ggml tensor size = 304 bytes
gpt_model_load: ggml ctx size = 1894.87 MB
gpt_model_load: memory size =   192.00 MB, n_mem = 24576
gpt_model_load: model size  =  1701.69 MB
bark_load_model_from_file: reading bark vocab

bark_load_model_from_file: reading bark coarse model
gpt_model_load: n_in_vocab  = 12096
gpt_model_load: n_out_vocab = 12096
gpt_model_load: block_size  = 1024
gpt_model_load: n_embd      = 1024
gpt_model_load: n_head      = 16
gpt_model_load: n_layer     = 24
gpt_model_load: n_lm_heads  = 1
gpt_model_load: n_wtes      = 1
gpt_model_load: ftype       = 0
gpt_model_load: qntvr       = 0
gpt_model_load: ggml tensor size = 304 bytes
gpt_model_load: ggml ctx size = 1443.87 MB
gpt_model_load: memory size =   192.00 MB, n_mem = 24576
gpt_model_load: model size  =  1250.69 MB

bark_load_model_from_file: reading bark fine model
gpt_model_load: n_in_vocab  = 1056
gpt_model_load: n_out_vocab = 1056
gpt_model_load: block_size  = 1024
gpt_model_load: n_embd      = 1024
gpt_model_load: n_head      = 16
gpt_model_load: n_layer     = 24
gpt_model_load: n_lm_heads  = 7
gpt_model_load: n_wtes      = 8
gpt_model_load: ftype       = 0
gpt_model_load: qntvr       = 0
gpt_model_load: ggml tensor size = 304 bytes
gpt_model_load: ggml ctx size = 1411.25 MB
gpt_model_load: memory size =   192.00 MB, n_mem = 24576
gpt_model_load: model size  =  1218.26 MB

bark_load_model_from_file: reading bark codec model
encodec_model_load: model size    =   44.32 MB

bark_load_model_from_file: total model size  =  4170.64 MB

bark_tokenize_input: prompt: 'this is an audio'
bark_tokenize_input: number of tokens in prompt = 513, first 8 tokens: 20579 20172 20199 33733 129595 129595 129595 129595 
bark_forward_text_encoder: ...........................................................................................................

bark_print_statistics: mem per token =     4.81 MB
bark_print_statistics:   sample time =    16.03 ms / 109 tokens
bark_print_statistics:  predict time =  9644.73 ms / 87.68 ms per token
bark_print_statistics:    total time =  9663.29 ms

bark_forward_coarse_encoder: ...................................................................................................................................................................................................................................................................................................................................

bark_print_statistics: mem per token =     8.53 MB
bark_print_statistics:   sample time =     4.43 ms / 324 tokens
bark_print_statistics:  predict time = 52071.64 ms / 160.22 ms per token
bark_print_statistics:    total time = 52080.24 ms

ggml_new_object: not enough space in the context's memory pool (needed 4115076720, available 4112941056)
Segmentation fault (core dumped)
  • At first I assumed the machine was simply running out of memory, so I added swap space, but the error persisted.
$ sudo dd if=/dev/zero of=swapfile bs=1024 count=10000000
10000000+0 records in
10000000+0 records out
10240000000 bytes (10 GB, 9.5 GiB) copied, 55.3595 s, 185 MB/s
$ sudo chmod 600 ./swapfile  # delete the swapfile if you don't need it
$ sudo mkswap -f ./swapfile
Setting up swapspace version 1, size = 9.5 GiB (10239995904 bytes)
no label, UUID=f3e2a0be-b880-48da-b598-950b7d69f94f
$ sudo swapon ./swapfile
$ free -m
               total        used        free      shared  buff/cache   available
Mem:           15731        6441         307        1242        8982        7713
Swap:          11813        2047        9765

$ ./build/bin/main -m ./ggml_weights/ -p "this is an audio"
ggml_new_object: not enough space in the context's memory pool (needed 4115076720, available 4112941056)
  • Looking at the function that raises the error shows it is not a system-memory problem: the allocation is checked against ctx->mem_size, the fixed size of the ggml context's internal memory pool, which is reserved once when the context is created, so adding swap cannot help.
static struct ggml_object * ggml_new_object(struct ggml_context * ctx, enum ggml_object_type type, size_t size) {
    // always insert objects at the end of the context's memory pool
    struct ggml_object * obj_cur = ctx->objects_end;

    const size_t cur_offs = obj_cur == NULL ? 0 : obj_cur->offs;
    const size_t cur_size = obj_cur == NULL ? 0 : obj_cur->size;
    const size_t cur_end  = cur_offs + cur_size;

    // align to GGML_MEM_ALIGN
    size_t size_needed = GGML_PAD(size, GGML_MEM_ALIGN);

    char * const mem_buffer = ctx->mem_buffer;
    struct ggml_object * const obj_new = (struct ggml_object *)(mem_buffer + cur_end);

    if (cur_end + size_needed + GGML_OBJECT_SIZE > ctx->mem_size) {
        GGML_PRINT("%s: not enough space in the context's memory pool (needed %zu, available %zu)\n",
                __func__, cur_end + size_needed, ctx->mem_size);
        assert(false);
        return NULL;
    }

    *obj_new = (struct ggml_object) {
        .offs = cur_end + GGML_OBJECT_SIZE,
        .size = size_needed,
        .next = NULL,
        .type = type,
    };

    ggml_assert_aligned(mem_buffer + obj_new->offs);

    if (obj_cur != NULL) {
        obj_cur->next = obj_new;
    } else {
        // this is the first object in this context
        ctx->objects_begin = obj_new;
    }

    ctx->objects_end = obj_new;

    //printf("%s: inserted new object at %zu, size = %zu\n", __func__, cur_end, obj_new->size);

    return obj_new;
}
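
The crash is easier to understand with a minimal sketch of how a ggml context is created (illustrative code, not bark.cpp's actual source; the 16 MB pool size and the tensor shape are made-up values): the memory pool is sized once at ggml_init time, every tensor is carved out of that pool by ggml_new_object, and the pool never grows, so the assert fires no matter how much RAM or swap the machine has.

#include "ggml.h"

int main(void) {
    // The pool size is chosen up front; ggml never grows it afterwards.
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16 * 1024 * 1024,  // hypothetical 16 MB pool
        /*.mem_buffer =*/ NULL,              // let ggml allocate the pool itself
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    // Every tensor created in this context consumes part of the fixed pool.
    struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 1024);
    (void) a;

    // Once the accumulated allocations exceed mem_size, ggml_new_object prints
    // "not enough space in the context's memory pool" and asserts, exactly as
    // in the crash above.
    ggml_free(ctx);
    return 0;
}
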
  • I then found https://github.com/PABannier/bark.cpp/issues/122, which reports the same error.
$ cd bark.cpp/
$ git checkout -f 07e651618b3a8a27de3bfa7f733cdb0aa8f46b8a
HEAD is now at 07e6516 ENH Decorrelate fine GPT graph (#111)
  • With this commit checked out, the run succeeds:
/home/pdd/le/bark.cpp/cmake-build-debug/bin/main
bark_load_model_from_file: loading model from '/home/pdd/le/bark.cpp/ggml_weights'
bark_load_model_from_file: reading bark text model
gpt_model_load: n_in_vocab  = 129600
gpt_model_load: n_out_vocab = 10048
gpt_model_load: block_size  = 1024
gpt_model_load: n_embd      = 1024
gpt_model_load: n_head      = 16
gpt_model_load: n_layer     = 24
gpt_model_load: n_lm_heads  = 1
gpt_model_load: n_wtes      = 1
gpt_model_load: ftype       = 0
gpt_model_load: qntvr       = 0
gpt_model_load: ggml tensor size = 272 bytes
gpt_model_load: ggml ctx size = 1894.87 MB
gpt_model_load: memory size =   192.00 MB, n_mem = 24576
gpt_model_load: model size  =  1701.69 MB
bark_load_model_from_file: reading bark vocab

bark_load_model_from_file: reading bark coarse model
gpt_model_load: n_in_vocab  = 12096
gpt_model_load: n_out_vocab = 12096
gpt_model_load: block_size  = 1024
gpt_model_load: n_embd      = 1024
gpt_model_load: n_head      = 16
gpt_model_load: n_layer     = 24
gpt_model_load: n_lm_heads  = 1
gpt_model_load: n_wtes      = 1
gpt_model_load: ftype       = 0
gpt_model_load: qntvr       = 0
gpt_model_load: ggml tensor size = 272 bytes
gpt_model_load: ggml ctx size = 1443.87 MB
gpt_model_load: memory size =   192.00 MB, n_mem = 24576
gpt_model_load: model size  =  1250.69 MB

bark_load_model_from_file: reading bark fine model
gpt_model_load: n_in_vocab  = 1056
gpt_model_load: n_out_vocab = 1056
gpt_model_load: block_size  = 1024
gpt_model_load: n_embd      = 1024
gpt_model_load: n_head      = 16
gpt_model_load: n_layer     = 24
gpt_model_load: n_lm_heads  = 7
gpt_model_load: n_wtes      = 8
gpt_model_load: ftype       = 0
gpt_model_load: qntvr       = 0
gpt_model_load: ggml tensor size = 272 bytes
gpt_model_load: ggml ctx size = 1411.25 MB
gpt_model_load: memory size =   192.00 MB, n_mem = 24576
gpt_model_load: model size  =  1218.26 MB

bark_load_model_from_file: reading bark codec model

bark_load_model_from_file: total model size  =  4170.64 MB

bark_tokenize_input: prompt: 'this is an audio'
bark_tokenize_input: number of tokens in prompt = 513, first 8 tokens: 20579 20172 20199 33733 129595 129595 129595 129595 
encodec_model_load: model size    =   44.32 MB
bark_forward_text_encoder: ...........................................................................................................

bark_print_statistics: mem per token =     4.80 MB
bark_print_statistics:   sample time =    59.49 ms / 109 tokens
bark_print_statistics:  predict time = 24761.95 ms / 225.11 ms per token
bark_print_statistics:    total time = 24826.76 ms

bark_forward_coarse_encoder: ...................................................................................................................................................................................................................................................................................................................................

bark_print_statistics: mem per token =     8.51 MB
bark_print_statistics:   sample time =    19.74 ms / 324 tokens
bark_print_statistics:  predict time = 178366.69 ms / 548.82 ms per token
bark_print_statistics:    total time = 178396.22 ms

bark_forward_fine_encoder: .....

bark_print_statistics: mem per token =     0.66 MB
bark_print_statistics:   sample time =   304.20 ms / 6144 tokens
bark_print_statistics:  predict time = 407086.19 ms / 58155.17 ms per token
bark_print_statistics:    total time = 407399.91 ms



bark_forward_encodec: mem per token = 760209 bytes
bark_forward_encodec:  predict time =  4349.03 ms
bark_forward_encodec:    total time =  4349.07 ms

Number of frames written = 51840.


main:     load time = 11441.58 ms
main:     eval time = 614987.69 ms
main:    total time = 626429.31 ms

Process finished with exit code 0

CG

  • iFLYTEK semantic understanding (AIUI) wrapper
  • https://github.com/iboB/pytorch-ggml-plugin
