Apache Ranger 2.4.0 Integration with Hadoop 3.x (Kerberos)

1. Install Ranger

See the previous article for the installation itself.

2. Modify the configuration

Copy the plugin tarballs into one common directory (taken from the target directory of the source build), then extract the one you need, e.g.:

tar zxvf ranger-2.4.0-hdfs-plugin.tar.gz

vim install.properties

POLICY_MGR_URL=http://localhost:6080
REPOSITORY_NAME=hdfs_repo

COMPONENT_INSTALL_DIR_NAME=/BigData/run/hadoop

CUSTOM_USER=hadoop
CUSTOM_GROUP=hadoop

3. Apply the configuration

./enable-hdfs-plugin.sh

The following message indicates that the plugin was installed successfully; all NameNode services then need to be restarted:

Ranger Plugin for hadoop has been enabled. Please restart hadoop to ensure that changes are effective.
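The restart can be scripted; a minimal sketch, assuming an HA pair with example hostnames and the standard Hadoop 3 `hdfs --daemon` commands (the commands are only echoed here so the sequence can be reviewed first; remove the `echo` to actually run them over ssh):

```shell
# Sketch: roll-restart each NameNode after enabling the plugin.
# tv3-hadoop-01/02 are example hostnames; substitute your NameNode hosts.
for h in tv3-hadoop-01 tv3-hadoop-02; do
  echo "ssh $h 'hdfs --daemon stop namenode && hdfs --daemon start namenode'"
done
```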

4. Troubleshooting

4.1 hadoop-common conflict

[root@tv3-hadoop-01 ranger-2.4.0-hdfs-plugin]# ./enable-hdfs-plugin.sh
Custom user and group is available, using custom user and group.
+ Sun Jun 30 16:22:24 CST 2024 : hadoop: lib folder=/BigData/run/hadoop/share/hadoop/hdfs/lib conf folder=/BigData/run/hadoop/etc/hadoop
+ Sun Jun 30 16:22:24 CST 2024 : Saving current config file: /BigData/run/hadoop/etc/hadoop/hdfs-site.xml to /BigData/run/hadoop/etc/hadoop/.hdfs-site.xml.20240630-162224 ...
+ Sun Jun 30 16:22:25 CST 2024 : Saving current config file: /BigData/run/hadoop/etc/hadoop/ranger-hdfs-audit.xml to /BigData/run/hadoop/etc/hadoop/.ranger-hdfs-audit.xml.20240630-162224 ...
+ Sun Jun 30 16:22:25 CST 2024 : Saving current config file: /BigData/run/hadoop/etc/hadoop/ranger-hdfs-security.xml to /BigData/run/hadoop/etc/hadoop/.ranger-hdfs-security.xml.20240630-162224 ...
+ Sun Jun 30 16:22:25 CST 2024 : Saving current config file: /BigData/run/hadoop/etc/hadoop/ranger-policymgr-ssl.xml to /BigData/run/hadoop/etc/hadoop/.ranger-policymgr-ssl.xml.20240630-162224 ...
+ Sun Jun 30 16:22:25 CST 2024 : Saving lib file: /BigData/run/hadoop/share/hadoop/hdfs/lib/ranger-hdfs-plugin-impl to /BigData/run/hadoop/share/hadoop/hdfs/lib/.ranger-hdfs-plugin-impl.20240630162225 ...
+ Sun Jun 30 16:22:25 CST 2024 : Saving lib file: /BigData/run/hadoop/share/hadoop/hdfs/lib/ranger-hdfs-plugin-shim-2.4.0.jar to /BigData/run/hadoop/share/hadoop/hdfs/lib/.ranger-hdfs-plugin-shim-2.4.0.jar.20240630162225 ...
+ Sun Jun 30 16:22:25 CST 2024 : Saving lib file: /BigData/run/hadoop/share/hadoop/hdfs/lib/ranger-plugin-classloader-2.4.0.jar to /BigData/run/hadoop/share/hadoop/hdfs/lib/.ranger-plugin-classloader-2.4.0.jar.20240630162225 ...
+ Sun Jun 30 16:22:25 CST 2024 : Saving current JCE file: /etc/ranger/hdfs_repo/cred.jceks to /etc/ranger/hdfs_repo/.cred.jceks.20240630162225 ...
Unable to store password in non-plain text format. Error: [SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/thirdparty/com/google/common/collect/Interners
	at org.apache.hadoop.util.StringInterner.<clinit>(StringInterner.java:40)
	at org.apache.hadoop.conf.Configuration$Parser.handleEndElement(Configuration.java:3335)
	at org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3417)
	at org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3191)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3084)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:3045)
	at org.apache.hadoop.conf.Configuration.loadProps(Configuration.java:2923)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2905)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1413)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1385)
	at org.apache.ranger.credentialapi.CredentialReader.getDecryptedString(CredentialReader.java:55)
	at org.apache.ranger.credentialapi.buildks.createCredential(buildks.java:87)
	at org.apache.ranger.credentialapi.buildks.main(buildks.java:41)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.com.google.common.collect.Interners
	at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... 13 more]
Exiting plugin installation

Solution:

Make sure that hadoop.version in the pom.xml used for the build matches the Hadoop version of the environment Ranger is deployed against (if they conflict, keeping the source-tree version is recommended).
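For illustration, the relevant property in the Ranger source pom.xml looks like the following (the version value here is only an example; set it to your cluster's exact Hadoop version before running the Maven build):

```xml
<properties>
  <!-- Example value: align with the Hadoop version actually deployed on the cluster -->
  <hadoop.version>3.3.4</hadoop.version>
</properties>
```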

4.2 Other errors

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/thirdparty/com/google/common/base/Preconditions
	at org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configuration.java:430)
	at org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configuration.java:443)
	at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:525)
	at org.apache.ranger.credentialapi.CredentialReader.getDecryptedString(CredentialReader.java:39)
	at org.apache.ranger.credentialapi.buildks.createCredential(buildks.java:87)
	at org.apache.ranger.credentialapi.buildks.main(buildks.java:41)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.com.google.common.base.Preconditions
	at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... 6 more]

Solution:

Same as 4.1: make sure that hadoop.version in the build pom.xml matches the Hadoop version of the deployed environment (if they conflict, keeping the source-tree version is recommended).

4.3 Errors when Ranger integrates with a Kerberized Hadoop cluster

2024-06-30 12:20:53,246 [timed-executor-pool-0] INFO [BaseClient.java:122] Init Login: using username/password
2024-06-30 12:20:53,282 [timed-executor-pool-0] ERROR [HdfsResourceMgr.java:49] <== HdfsResourceMgr.testConnection Error: Unable to login to Hadoop environment [hdfs_repo]
org.apache.ranger.plugin.client.HadoopException: Unable to login to Hadoop environment [hdfs_repo]
	at org.apache.ranger.plugin.client.BaseClient.createException(BaseClient.java:153)
	at org.apache.ranger.plugin.client.BaseClient.createException(BaseClient.java:149)
	at org.apache.ranger.plugin.client.BaseClient.login(BaseClient.java:140)
	at org.apache.ranger.plugin.client.BaseClient.<init>(BaseClient.java:61)
	at org.apache.ranger.services.hdfs.client.HdfsClient.<init>(HdfsClient.java:50)
	at org.apache.ranger.services.hdfs.client.HdfsClient.connectionTest(HdfsClient.java:217)
	at org.apache.ranger.services.hdfs.client.HdfsResourceMgr.connectionTest(HdfsResourceMgr.java:47)
	at org.apache.ranger.services.hdfs.RangerServiceHdfs.validateConfig(RangerServiceHdfs.java:74)
	at org.apache.ranger.biz.ServiceMgr$ValidateCallable.actualCall(ServiceMgr.java:655)
	at org.apache.ranger.biz.ServiceMgr$ValidateCallable.actualCall(ServiceMgr.java:642)
	at org.apache.ranger.biz.ServiceMgr$TimedCallable.call(ServiceMgr.java:603)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Login failure for hadoop using password ****
	at org.apache.hadoop.security.SecureClientLogin.loginUserWithPassword(SecureClientLogin.java:82)
	at org.apache.ranger.plugin.client.BaseClient.login(BaseClient.java:123)
	... 12 common frames omitted
Caused by: javax.security.auth.login.LoginException: Client not found in Kerberos database (6) - CLIENT_NOT_FOUND
	at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:804)
	at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:596)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
	at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
	at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
	at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
	at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
	at org.apache.hadoop.security.SecureClientLogin.loginUserWithPassword(SecureClientLogin.java:79)
	... 13 common frames omitted
Caused by: sun.security.krb5.KrbException: Client not found in Kerberos database (6) - CLIENT_NOT_FOUND
	at sun.security.krb5.KrbAsRep.<init>(KrbAsRep.java:82)
	at sun.security.krb5.KrbAsReqBuilder.send(KrbAsReqBuilder.java:316)
	at sun.security.krb5.KrbAsReqBuilder.action(KrbAsReqBuilder.java:361)
	at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:766)
	... 26 common frames omitted
Caused by: sun.security.krb5.Asn1Exception: Identifier doesn't match expected value (906)
	at sun.security.krb5.internal.KDCRep.init(KDCRep.java:140)
	at sun.security.krb5.internal.ASRep.init(ASRep.java:64)
	at sun.security.krb5.internal.ASRep.<init>(ASRep.java:59)
	at sun.security.krb5.KrbAsRep.<init>(KrbAsRep.java:60)
	... 29 common frames omitted

If Ranger itself is also integrated with Kerberos, the following additional steps are required.

Create the principals (in kadmin):

addprinc -randkey rangeradmin/tv3-hadoop-01@AB.ELONG.COM
addprinc -randkey rangerlookup/tv3-hadoop-01@AB.ELONG.COM
addprinc -randkey rangerusersync/tv3-hadoop-01@AB.ELONG.COM
addprinc -randkey HTTP/tv3-hadoop-01@AB.ELONG.COM

If the HTTP principal already exists, it does not need to be created again.

Export the keytab:

xst -k /BigData/run/hadoop/keytab/ranger.keytab rangeradmin/tv3-hadoop-01@AB.ELONG.COM rangerlookup/tv3-hadoop-01@AB.ELONG.COM rangerusersync/tv3-hadoop-01@AB.ELONG.COM HTTP/tv3-hadoop-01@AB.ELONG.COM

Set ownership on the keytab:

chown ranger:ranger /BigData/run/hadoop/keytab/ranger.keytab

Initialize the credentials as the ranger user:

su - ranger

kinit -kt /BigData/run/hadoop/keytab/ranger.keytab HTTP/$HOSTNAME@AB.ELONG.COM

kinit -kt /BigData/run/hadoop/keytab/ranger.keytab rangeradmin/$HOSTNAME@AB.ELONG.COM

kinit -kt /BigData/run/hadoop/keytab/ranger.keytab rangerlookup/$HOSTNAME@AB.ELONG.COM

kinit -kt /BigData/run/hadoop/keytab/ranger.keytab rangerusersync/$HOSTNAME@AB.ELONG.COM
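Note that the kinit commands above rely on $HOSTNAME expanding to exactly the host part used in addprinc; a short-name vs. FQDN mismatch produces the same CLIENT_NOT_FOUND error shown earlier in 4.3. A quick sketch of the principal string kinit will actually request (hostname and realm are the examples used throughout this article):

```shell
# Sketch: print the principal kinit would use, to compare against `list_principals` in kadmin.
HOSTNAME=tv3-hadoop-01   # illustration only; normally already set by the shell
echo "rangeradmin/${HOSTNAME}@AB.ELONG.COM"
```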

The following can be configured in install.properties before installation:

spnego_principal=HTTP/tv3-hadoop-01@AB.ELONG.COM
spnego_keytab=/BigData/run/hadoop/keytab/ranger.keytab
admin_principal=rangeradmin/tv3-hadoop-01@AB.ELONG.COM
admin_keytab=/BigData/run/hadoop/keytab/ranger.keytab
lookup_principal=rangerlookup/tv3-hadoop-01@AB.ELONG.COM
lookup_keytab=/BigData/run/hadoop/keytab/ranger.keytab
hadoop_conf=/BigData/run/hadoop/etc/hadoop/
ranger.service.host=tv3-hadoop-01

4.4 SASL error (RPC protection mismatch)

2024-06-30 13:01:43,237 [timed-executor-pool-0] ERROR [HdfsResourceMgr.java:49] <== HdfsResourceMgr.testConnection Error: listFilesInternal: Unable to get listing of files for directory /null] from Hadoop environment [hdfs_repo].
org.apache.ranger.plugin.client.HadoopException: listFilesInternal: Unable to get listing of files for directory /null] from Hadoop environment [hdfs_repo].
	at org.apache.ranger.services.hdfs.client.HdfsClient.listFilesInternal(HdfsClient.java:138)
	at org.apache.ranger.services.hdfs.client.HdfsClient.access$000(HdfsClient.java:44)
	at org.apache.ranger.services.hdfs.client.HdfsClient$1.run(HdfsClient.java:167)
	at org.apache.ranger.services.hdfs.client.HdfsClient$1.run(HdfsClient.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.ranger.services.hdfs.client.HdfsClient.listFiles(HdfsClient.java:170)
	at org.apache.ranger.services.hdfs.client.HdfsClient.connectionTest(HdfsClient.java:221)
	at org.apache.ranger.services.hdfs.client.HdfsResourceMgr.connectionTest(HdfsResourceMgr.java:47)
	at org.apache.ranger.services.hdfs.RangerServiceHdfs.validateConfig(RangerServiceHdfs.java:74)
	at org.apache.ranger.biz.ServiceMgr$ValidateCallable.actualCall(ServiceMgr.java:655)
	at org.apache.ranger.biz.ServiceMgr$ValidateCallable.actualCall(ServiceMgr.java:642)
	at org.apache.ranger.biz.ServiceMgr$TimedCallable.call(ServiceMgr.java:603)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: DestHost:destPort tv3-hadoop-06:15820 , LocalHost:localPort tv3-hadoop-01/10.177.42.98:0. Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: No common protection layer between client and server
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:837)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:812)
	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1566)
	at org.apache.hadoop.ipc.Client.call(Client.java:1508)
	at org.apache.hadoop.ipc.Client.call(Client.java:1405)
	at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:234)
	at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:119)
	at com.sun.proxy.$Proxy107.getListing(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:687)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
	at com.sun.proxy.$Proxy108.getListing(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1694)
	at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1678)
	at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:1093)
	at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:144)
	at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1157)
	at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1154)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:1164)
	at org.apache.ranger.services.hdfs.client.HdfsClient.listFilesInternal(HdfsClient.java:83)
	... 16 common frames omitted
Caused by: java.io.IOException: javax.security.sasl.SaslException: No common protection layer between client and server
	at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:778)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1845)
	at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:732)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:835)
	at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:413)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1636)
	at org.apache.hadoop.ipc.Client.call(Client.java:1452)
	... 40 common frames omitted
Caused by: javax.security.sasl.SaslException: No common protection layer between client and server
	at com.sun.security.sasl.gsskerb.GssKrb5Client.doFinalHandshake(GssKrb5Client.java:251)
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:186)
	at org.apache.hadoop.security.SaslRpcClient.saslEvaluateToken(SaslRpcClient.java:481)
	at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:423)
	at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:622)
	at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:413)
	at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:822)
	at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:818)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1845)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:818)
	... 43 common frames omitted

Solution:

hadoop.rpc.protection in the Ranger service configuration must match the cluster's setting, e.g. the cluster currently uses:

   <property>
     <name>hadoop.rpc.protection</name>
     <value>authentication</value>
     <description>authentication : authentication only (default); integrity : integrity check in addition to authentication; privacy : data encryption in addition to integrity</description>
   </property>
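To check which value the cluster actually uses, the <value> can be pulled out of core-site.xml with sed; a minimal sketch (the snippet is inlined here so the command is self-contained — in practice, pipe the real /BigData/run/hadoop/etc/hadoop/core-site.xml through the same sed):

```shell
# Sketch: extract the configured hadoop.rpc.protection value.
# The inline $conf stands in for `cat /BigData/run/hadoop/etc/hadoop/core-site.xml`.
conf='<property><name>hadoop.rpc.protection</name><value>authentication</value></property>'
echo "$conf" | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
```

This prints `authentication`; the "RPC Protection Type" in the Ranger service definition must be set to the same value.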

5. Verify that the connection succeeds

Service Name    hdfs_repo
Display Name    hdfs_repo
Description    --
Active Status    Enabled
Tag Service    --
Username    hadoop
Password    *****
Namenode URL   hdfs://localhost:15820
Authorization Enabled    true
Authentication Type    kerberos
hadoop.security.auth_to_local    RULE:[2:$1@$0](hadoop@.*EXAMPLE.COM)s/.*/hadoop/         RULE:[2:$1@$0](HTTP@.*EXAMPLE.COM)s/.*/hadoop/         DEFAULT
dfs.datanode.kerberos.principal    hadoop/_HOST@CC.LOCAL
dfs.namenode.kerberos.principal    hadoop/_HOST@CC.LOCAL
dfs.secondary.namenode.kerberos.principal    --
RPC Protection Type    authentication
Common Name for Certificate    --
policy.download.auth.users    hadoop
tag.download.auth.users    hadoop
dfs.journalnode.kerberos.principal    hadoop/_HOST@CC.LOCAL
