Deploying the FastDFS Distributed File System with docker-compose

Contents

  • 1. Technology Selection
  • 2. FastDFS Components
  • 3. The docker-compose File
  • 4. Client nginx Configuration
  • 5. Storage Server Configuration (storage.conf)
  • 6. Spring Boot Integration

1. Technology Selection

Another candidate was Google's GFS, which is arguably better, but it is not open source and I could not find a community edition or any other free way to use it.
In the end, since everything we store is images under 10 MB and the access volume is modest, we went with the home-grown FastDFS (admittedly with a bit of national pride as well).


2. FastDFS Components

Tracker Server: responsible for scheduling and load balancing; it manages all Storage Servers and groups. After starting, each Storage Server connects to the Tracker, reports which group it belongs to along with other information, and keeps a periodic heartbeat. Multiple Trackers are peers of one another, so there is no single point of failure. A Tracker Server keeps group and Storage Server state in memory and records no file index information, so its memory footprint is very small. As the central node, the Tracker Server manages the cluster topology; its main job is load balancing and scheduling.

Storage Server: provides storage capacity and replication. Storage is organized by group, and each group may contain multiple Storage Servers. All Storage Servers within a group are peers and connect to one another to synchronize files, so every server in the group ends up holding identical content. A Storage Server stores files directly on the OS file system and does not split files into blocks; each uploaded file corresponds one-to-one to a file on the Storage Server (except for the small-file merged storage introduced in V3). Each Storage Server relies on the local file system and can be configured with multiple data storage directories.

Client: interacts with the Tracker Server and Storage Servers to upload, download, and delete files. A client can specify which group to upload to; when a group comes under heavy load, you can add Storage Servers to that group to scale out its capacity.
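
To make the three roles concrete: a successful upload returns a file ID that encodes the group, a virtual disk, the data subdirectories, and the name generated by the Storage Server, and the client later uses exactly this ID to download or delete the file. Roughly, using the example path from the Spring Boot section below:

group1/M00/00/00/CiQG-2ZeyV-AWmHcAA3vSx2O3n4972.png
  • group1: the group the file was stored in (managed by the Tracker)
  • M00: virtual disk, mapped to store_path0 in storage.conf
  • 00/00: two levels of data subdirectories created under the store path
  • CiQG-...png: file name generated by the Storage Server, keeping the original extension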

3. The docker-compose File

version: '3.3'
services:
  tracker: # tracker (scheduler)
    image: season/fastdfs:1.2
    container_name: fastdfs-tracker
    network_mode: host
    restart: always
    command: "tracker"

  storage: # storage server
    image: season/fastdfs:1.2
    container_name: fastdfs-storage
    network_mode: host
    restart: always
    volumes:
      - "/fastDFS/storage/storage.conf:/fdfs_conf/storage.conf" # storage server config file
      - "/fastDFS/storage/storage_base_path:/fastdfs/storage/data" # where file data is stored
      - "/fastDFS/storage/store_path0:/fastdfs/store_path" # store path (store_path0)
    environment:
      TRACKER_SERVER: "<tracker ip>:<port>"
    command: "storage"

  nginx: # client-facing nginx
    image: season/fastdfs:1.2
    container_name: fastdfs-nginx
    restart: always
    network_mode: host
    volumes:
      - "/fastDFS/nginx.conf:/etc/nginx/conf/nginx.conf"
      - "/fastDFS/storage/store_path0:/fastdfs/store_path"
      - "/fastDFS/pic:/hydf/fastDFS/pic"
    environment:
      TRACKER_SERVER: "<tracker ip>:<port>"
    command: "nginx"

4. Client nginx Configuration

#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       8088;
        server_name  localhost;

        #charset koi8-r;

        # Thumbnails require an image-filter module and a separately built nginx image; skipped here
        #location /group([0-9])/M00/.*\.(gif|jpg|jpeg|png)$ {
        #    root /fastdfs/storage/data;
        #    image on;
        #    image_output off;
        #    image_jpeg_quality 75;
        #    image_backend off;
        #    image_backend_server http://baidu.com/docs/aabbc.png;
        #}

        # group1
        location /group1/M00 {
            # directory where the stored files live;
            # note: must match the storage data volume mount in docker-compose
            root /fastdfs/storage/data;
            ngx_fastdfs_module;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
 }
}
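
With this configuration in place, a file stored in group1 can be fetched through the client nginx on port 8088 (the listen port above). A sketch of the mapping, reusing the example file ID from the Spring Boot section and a placeholder host IP:

file ID returned by the upload:    group1/M00/00/00/CiQG-2ZeyV-AWmHcAA3vSx2O3n4972.png
download URL via the client nginx: http://<nginx-host-ip>:8088/group1/M00/00/00/CiQG-2ZeyV-AWmHcAA3vSx2O3n4972.png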

5. Storage Server Configuration (storage.conf)

This is the storage.conf mounted into the storage container (host path /fastDFS/storage/storage.conf in the compose file above):

# the name of the group this storage server belongs to
group_name=group1 # group name; the /group1/... part of the access URL must match this

# bind an address of this host
# empty for bind all addresses of this host
bind_addr=10.36.6.251 # IP address of this host (set explicitly because we run in Docker)

# if bind an address of this host when connect to other servers
# (this storage server as a client)
# true for binding the address configed by above parameter: "bind_addr"
# false for binding any address of this host
client_bind=true

# the storage server port
port=23000

# connect timeout in seconds
# default value is 30s
connect_timeout=30

# network timeout in seconds
# default value is 30s
network_timeout=60

# heart beat interval in seconds
heart_beat_interval=30

# disk usage report interval in seconds
stat_report_interval=60

# the base path to store data and log files
base_path=/fastdfs/storage

# max concurrent connections the server supported
# default value is 256
# more max_connections means more memory will be used
max_connections=256

# the buff size to recv / send data
# this parameter must more than 8KB
# default value is 64KB
# since V2.00
buff_size = 256KB

# accept thread count
# default value is 1
# since V4.07
accept_threads=1

# work thread count, should <= max_connections
# work thread deal network io
# default value is 4
# since V2.00
work_threads=4

# if disk read / write separated
##  false for mixed read and write
##  true for separated read and write
# default value is true
# since V2.00
disk_rw_separated = true

# disk reader thread count per store base path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_reader_threads = 1

# disk writer thread count per store base path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_writer_threads = 1

# when no entry to sync, try read binlog again after X milliseconds
# must > 0, default value is 200ms
sync_wait_msec=50

# after sync a file, usleep milliseconds
# 0 for sync successively (never call usleep)
sync_interval=0

# storage sync start time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_start_time=00:00

# storage sync end time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_end_time=23:59

# write to the mark file after sync N files
# default value is 500
write_mark_file_freq=500

# path(disk or mount point) count, default value is 1
store_path_count=1

# store_path#, based 0, if store_path0 not exists, it's value is base_path
# the paths must be exist
store_path0=/fastdfs/store_path
#store_path1=/home/yuqing/fastdfs2

# subdir_count  * subdir_count directories will be auto created under each
# store_path (disk), value can be 1 to 256, default value is 256
subdir_count_per_path=256

# tracker_server can ocur more than once, and tracker_server format is
#  "host:port", host can be hostname or ip address
tracker_server=<tracker ip>:<port>

#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info

#unix group name to run this program,
#not set (empty) means run by the group of current user
run_by_group=

#unix username to run this program,
#not set (empty) means run by current user
run_by_user=

# allow_hosts can ocur more than once, host can be hostname or ip address,
# "*" means match all ip addresses, can use range like this: 10.0.1.[1-15,20] or
# host[01-08,20-25].domain.com, for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
allow_hosts=*

# the mode of the files distributed to the data path
# 0: round robin(default)
# 1: random, distributted by hash code
file_distribute_path_mode=0

# valid when file_distribute_to_path is set to 0 (round robin),
# when the written file count reaches this number, then rotate to next path
# default value is 100
file_distribute_rotate_count=100

# call fsync to disk when write big file
# 0: never call fsync
# other: call fsync when written bytes >= this bytes
# default value is 0 (never call fsync)
fsync_after_written_bytes=0

# sync log buff to disk every interval seconds
# must > 0, default value is 10 seconds
sync_log_buff_interval=10

# sync binlog buff / cache to disk every interval seconds
# default value is 60 seconds
sync_binlog_buff_interval=10

# sync storage stat info to disk every interval seconds
# default value is 300 seconds
sync_stat_file_interval=300

# thread stack size, should >= 512KB
# default value is 512KB
thread_stack_size=512KB

# the priority as a source server for uploading file.
# the lower this value, the higher its uploading priority.
# default value is 10
upload_priority=10

# the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
# multi aliases split by comma. empty value means auto set by OS type
# default values is empty
if_alias_prefix=

# if check file duplicate, when set to true, use FastDHT to store file indexes
# 1 or yes: need check
# 0 or no: do not check
# default value is 0
check_file_duplicate=0

# file signature method for check file duplicate
## hash: four 32 bits hash code
## md5: MD5 signature
# default value is hash
# since V4.01
file_signature_method=hash

# namespace for storing file indexes (key-value pairs)
# this item must be set when check_file_duplicate is true / on
key_namespace=FastDFS

# set keep_alive to 1 to enable persistent connection with FastDHT servers
# default value is 0 (short connection)
keep_alive=0

# you can use "#include filename" (not include double quotes) directive to
# load FastDHT server list, when the filename is a relative path such as
# pure filename, the base path is the base path of current/this config file.
# must set FastDHT server list when check_file_duplicate is true / on
# please see INSTALL of FastDHT for detail
##include /home/yuqing/fastdht/conf/fdht_servers.conf

# if log to access log
# default value is false
# since V4.00
use_access_log = false

# if rotate the access log every day
# default value is false
# since V4.00
rotate_access_log = false

# rotate access log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.00
access_log_rotate_time=00:00

# if rotate the error log every day
# default value is false
# since V4.02
rotate_error_log = false

# rotate error log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
error_log_rotate_time=00:00

# rotate access log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_access_log_size = 0

# rotate error log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_error_log_size = 0

# if skip the invalid record when sync file
# default value is false
# since V4.02
file_sync_skip_invalid_record=false

# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false

# connections whose the idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600

# use the ip address of this storage server if domain_name is empty,
# else this domain name will ocur in the url redirected by the tracker server
http.domain_name=

# the port of the web server on this storage server
http.server_port=8888
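
After editing storage.conf, restart the storage container and confirm that it has registered with the tracker. A quick check sketch; it assumes the season/fastdfs image ships a usable client.conf under /fdfs_conf (adjust the path if yours lives elsewhere):

docker restart fastdfs-storage
# fdfs_monitor prints the tracker's view of every group and storage server;
# the storage node should be listed as ACTIVE
docker exec fastdfs-tracker fdfs_monitor /fdfs_conf/client.conf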

6. Spring Boot Integration

Pull in the client library via the pom file:

<!--fastDFS-->
<dependency>
    <groupId>com.github.tobato</groupId>
    <artifactId>fastdfs-client</artifactId>
    <version>1.26.5</version>
</dependency>
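
Besides the dependency, the client needs the tracker address and has to be pulled into the Spring context. A minimal configuration sketch, assuming Spring Boot with an application.yml; the tracker address is a placeholder (22122 is FastDFS's default tracker port), and the FastDfsConfig class name is just an example:

application.yml:

fdfs:
  so-timeout: 1500        # socket read timeout in ms
  connect-timeout: 600    # connect timeout in ms
  tracker-list:
    - <tracker ip>:22122  # same address as TRACKER_SERVER in docker-compose

Configuration class:

import com.github.tobato.fastdfs.FdfsClientConfig;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;

// registers the fastdfs-client beans (FastFileStorageClient, etc.) with the Spring context
@Configuration
@Import(FdfsClientConfig.class)
public class FastDfsConfig {
}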

Java file upload. Because there is a limit on file-name length, I simply discard the original file name, let a random one be generated, and keep only the original file's extension:

// storageClient is an injected com.github.tobato.fastdfs.service.FastFileStorageClient
File file = new File(s);                 // s: local path of the file to upload
String fileName = file.getName();
// upload to group1; only the original extension is kept, FastDFS generates the stored name
StorePath path = storageClient.uploadFile("group1", new FileInputStream(file),
        file.length(), fileName.substring(fileName.lastIndexOf(".") + 1));
System.out.println(path.getFullPath());  // e.g. group1/M00/00/00/xxx.png

Once the upload finishes, the file can be accessed directly at a URL formed by joining the client nginx address with the returned storage path:
http://{{nginx client IP}}/group1/M00/00/00/CiQG-2ZeyV-AWmHcAA3vSx2O3n4972.png
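
For completeness, a download-and-delete sketch with the same FastFileStorageClient; DownloadByteArray is the byte-array callback shipped with the tobato client (its package differs slightly between client versions, so rely on your IDE for the exact import):

// reuse the StorePath returned by the upload above
String group = path.getGroup();      // e.g. "group1"
String storePath = path.getPath();   // e.g. "M00/00/00/CiQG-2ZeyV-AWmHcAA3vSx2O3n4972.png"

// read the whole file into memory
byte[] content = storageClient.downloadFile(group, storePath, new DownloadByteArray());

// remove the file from the storage server
storageClient.deleteFile(group, storePath);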

