Spark Programming: Computing TopN and Max/Min Values with Spark Core

Introduction

Using Spark Core to compute the top 5 values and the maximum/minimum values.

Finding the Top 5 Order Payments

Data

The data fields are: orderid, userid, payment, productid

Requirement: read the data from a text file and compute the top 5 payment values (the order payment amounts).

//fields: orderid,userid,payment,productid
1,1193,5234,978300760
2,661,323423,978302109
3,914,34234,978301968
4,3408,45435,978300275
5,2355,543443,978824291
6,1197,35345,978302268
7,1287,553,978302039
8,2804,53453,978300719
9,594,45654,978302268
10,919,3534,978301368
11,595,543,978824268
12,938,454,978301752
13,2398,4656,978302281
14,2918,9874,978302124
15,1035,37455,978301753
16,2791,4353,978302188
17,2687,3353,978824268
18,2018,423,978301777
19,3105,56345,978301713

Code

import org.apache.spark.{SparkConf, SparkContext}

// Compute the top N payment values
object TopValue {
  // Entry point
  def main(args: Array[String]): Unit = {
    // Configure the application; here we run locally on a single core
    val conf = new SparkConf().setAppName("TopValue").setMaster("local[1]")
    val sc = new SparkContext(conf)
    // Set the log level so that only errors are shown
    sc.setLogLevel("ERROR")
    // Load the text file into an RDD; each element is one line of text
    // with the fields orderid,userid,payment,productid
    val lines = sc.textFile("D:\\workspace\\spark\\src\\main\\Data\\orderTopData1", 3)
    // num is a counter used when printing the results, starting at zero
    var num = 0
    // Filter out empty lines and lines that do not have exactly 4 fields
    val result = lines.filter(line => (line.trim().length > 0) && (line.split(",").length == 4))
      // Extract the payment field from each line
      .map(_.split(",")(2))
      // Convert payment to Int and pair it with an empty string so sortByKey can be used
      .map(x => (x.toInt, ""))
      // Sort by key in descending order
      .sortByKey(false)
      // Keep only the payment value and use take() to retrieve the top five
      .map(x => x._1).take(5)
      // Iterate over the results, numbering and printing each payment value
      .foreach(x => {
        num = num + 1
        println(num + "\t" + x)
      })
  }

}

Run Results

D:\Java\jdk1.8.0_131\bin\java.exe "-javaagent:D:\idea\IntelliJ IDEA 2021.1.3\lib\idea_rt.jar=59560:D:\idea\IntelliJ IDEA 2021.1.3\bin" -Dfile.encoding=UTF-8 -classpath "D:\idea\IntelliJ IDEA 2021.1.3\lib\idea_rt.jar" com.intellij.rt.execution.CommandLineWrapper C:\Users\Administrator\AppData\Local\Temp\idea_classpath338812056 TopValue
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/D:/spark/spark-3.2.0-bin-hadoop2.7/jars/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/D:/Maven/Maven_repositories/org/slf4j/slf4j-log4j12/1.7.30/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
23/07/17 13:45:01 INFO SparkContext: Running Spark version 3.2.0
23/07/17 13:45:02 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
23/07/17 13:45:03 INFO ResourceUtils: ==============================================================
23/07/17 13:45:03 INFO ResourceUtils: No custom resources configured for spark.driver.
23/07/17 13:45:03 INFO ResourceUtils: ==============================================================
23/07/17 13:45:03 INFO SparkContext: Submitted application: TopValue
23/07/17 13:45:03 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
23/07/17 13:45:03 INFO ResourceProfile: Limiting resource is cpu
23/07/17 13:45:03 INFO ResourceProfileManager: Added ResourceProfile id: 0
23/07/17 13:45:03 INFO SecurityManager: Changing view acls to: Administrator
23/07/17 13:45:03 INFO SecurityManager: Changing modify acls to: Administrator
23/07/17 13:45:03 INFO SecurityManager: Changing view acls groups to: 
23/07/17 13:45:03 INFO SecurityManager: Changing modify acls groups to: 
23/07/17 13:45:03 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(Administrator); groups with view permissions: Set(); users  with modify permissions: Set(Administrator); groups with modify permissions: Set()
23/07/17 13:45:11 INFO Utils: Successfully started service 'sparkDriver' on port 59590.
23/07/17 13:45:11 INFO SparkEnv: Registering MapOutputTracker
23/07/17 13:45:11 INFO SparkEnv: Registering BlockManagerMaster
23/07/17 13:45:12 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
23/07/17 13:45:12 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
23/07/17 13:45:12 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
23/07/17 13:45:12 INFO DiskBlockManager: Created local directory at C:\Users\Administrator\AppData\Local\Temp\blockmgr-20fc95dd-f4f4-4897-93d3-a3091092d923
23/07/17 13:45:12 INFO MemoryStore: MemoryStore started with capacity 623.4 MiB
23/07/17 13:45:12 INFO SparkEnv: Registering OutputCommitCoordinator
23/07/17 13:45:13 INFO Utils: Successfully started service 'SparkUI' on port 4040.
23/07/17 13:45:13 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.56.1:4040
23/07/17 13:45:14 INFO Executor: Starting executor ID driver on host 192.168.56.1
23/07/17 13:45:14 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 59606.
23/07/17 13:45:14 INFO NettyBlockTransferService: Server created on 192.168.56.1:59606
23/07/17 13:45:14 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
23/07/17 13:45:14 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.56.1, 59606, None)
23/07/17 13:45:14 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.56.1:59606 with 623.4 MiB RAM, BlockManagerId(driver, 192.168.56.1, 59606, None)
23/07/17 13:45:14 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.56.1, 59606, None)
23/07/17 13:45:14 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.56.1, 59606, None)
1	543443
2	323423
3	56345
4	53453
5	45654

Process finished with exit code 0
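The sortByKey approach works, but it sorts every record even though only five values are needed. As a rough alternative sketch (assuming the same file path and field layout as above), the built-in RDD.top(n) action returns the n largest elements directly, so the sort-then-take steps collapse into a single call:

import org.apache.spark.{SparkConf, SparkContext}

// Alternative sketch: RDD.top(n) instead of sortByKey + take(n).
// Assumes the same input file and the same orderid,userid,payment,productid layout.
object TopValueWithTop {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("TopValueWithTop").setMaster("local[1]")
    val sc = new SparkContext(conf)
    sc.setLogLevel("ERROR")
    val lines = sc.textFile("D:\\workspace\\spark\\src\\main\\Data\\orderTopData1", 3)
    val top5 = lines
      .filter(line => line.trim.nonEmpty && line.split(",").length == 4)
      .map(_.split(",")(2).toInt)   // keep only the payment column
      .top(5)                       // the 5 largest payments, already in descending order
    // Number and print the results, mirroring the original output format
    top5.zipWithIndex.foreach { case (payment, i) => println((i + 1) + "\t" + payment) }
    sc.stop()
  }
}

The step-by-step analysis below still refers to the original sortByKey version.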

Step-by-Step Analysis of the Transformations

A line such as 1,1193,5234,978300760 is loaded by val lines = sc.textFile() into the RDD lines, and each subsequent transformation in the chain below produces a new RDD:

val result = lines.filter(line => (line.trim().length > 0) && (line.split(",").length == 4))
  .map(_.split(",")(2))
  .map(x => (x.toInt,""))
  .sortByKey(false)
  .map(x => x._1).take(5)
  .foreach(x => {
    num = num + 1
    println(num + "\t" + x)
  })

(Note: the figures illustrating each step are from Professor Lin Ziyu.)
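Since the original figures are not reproduced here, the following hand trace of the intermediate RDDs (using the values from the data above) stands in for them; it describes what each transformation produces rather than being extra code to run.

// lines (first three elements):      "1,1193,5234,978300760", "2,661,323423,978302109", "3,914,34234,978301968"
// after .map(_.split(",")(2)):       "5234", "323423", "34234", ...
// after .map(x => (x.toInt, "")):    (5234,""), (323423,""), (34234,""), ...
// after .sortByKey(false):           (543443,""), (323423,""), (56345,""), (53453,""), (45654,""), (45435,""), ...
// after .map(x => x._1).take(5):     Array(543443, 323423, 56345, 53453, 45654)
// foreach then numbers and prints each of the five values, producing the output shown above.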

Finding the Maximum and Minimum Values

The data is the same as above; I use the third column (payment).

Code

import org.apache.spark.{SparkConf, SparkContext}

object MaxValue_order {
  def main(args: Array[String]): Unit = {
    // Local single-core configuration, as in the previous example
    val conf = new SparkConf().setAppName("MaxValues_order").setMaster("local[1]")
    val sc = new SparkContext(conf)
    // Load the order data; fields are orderid,userid,payment,productid
    val lines = sc.textFile("D:\\workspace\\spark\\src\\main\\Data\\orderTopData1", 3)
    sc.setLogLevel("ERROR")
    // Extract the payment column as Int and use the built-in max()/min() actions
    val Max_Value = lines.map(line => line.split(",")(2).toInt).max()
    val Min_Value = lines.map(line => line.split(",")(2).toInt).min()
    println("Max value: " + Max_Value)
    println("Min value: " + Min_Value)
  }

}


Run Results

Max value: 543443
Min value: 423
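Calling max() and min() separately launches two jobs, each of which reads the file again. If both values are wanted in a single pass, a small sketch using the generic aggregate() action (my own addition, not part of the original code) could look like this:

import org.apache.spark.{SparkConf, SparkContext}

// Sketch: maximum and minimum of the payment column in one pass with aggregate().
// Assumes the same input file and field layout as MaxValue_order above.
object MaxMinOnePass {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("MaxMinOnePass").setMaster("local[1]")
    val sc = new SparkContext(conf)
    sc.setLogLevel("ERROR")
    val payments = sc.textFile("D:\\workspace\\spark\\src\\main\\Data\\orderTopData1", 3)
      .map(_.split(",")(2).toInt)
    // The accumulator is a (max, min) pair; the first function folds values within a
    // partition, the second merges the per-partition pairs.
    val (maxV, minV) = payments.aggregate((Int.MinValue, Int.MaxValue))(
      (acc, v) => (math.max(acc._1, v), math.min(acc._2, v)),
      (a, b) => (math.max(a._1, b._1), math.min(a._2, b._2))
    )
    println("Max value: " + maxV)
    println("Min value: " + minV)
    sc.stop()
  }
}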

Finding the Maximum and Minimum Values with groupByKey

Requirement:

Read the numbers from a text file and find the maximum and minimum value in each group of numbers (here every number is mapped to the same key, so there is a single group).

Data:

213
101
111
123
242
987
999
12

Code:

import org.apache.spark.{SparkConf, SparkContext}
object Max_Min_PaymentValue {
  def main(args: Array[String]): Unit = {
    // Create the Spark configuration object
    val conf = new SparkConf().setAppName("Max_Min_PaymentValue").setMaster("local[1]")
    // Create the SparkContext object
    val sc = new SparkContext(conf)
    // Read the text file; each line becomes one element of the RDD
    val lines = sc.textFile("D:\\workspace\\spark\\src\\main\\Data\\number_sort")
    // Filter out empty lines and map each line to a (key, value) pair with a fixed key and
    // the line parsed as an Int. groupByKey then turns ("key",213), ("key",101), ... into
    // ("key", <213,101,111,123,242,987,999,12>)
    val result = lines.filter(_.trim().length > 0)
      .map(line => ("key", line.trim.toInt)).groupByKey()
      .map(x => {
        // Initialize the minimum and maximum values
        var min = Integer.MAX_VALUE
        var max = Integer.MIN_VALUE
        // Iterate over the numbers in the group; x._2 is the value list of the
        // (key, value-list) pair, i.e. <213,101,111,123,242,987,999,12>
        for (num <- x._2) {
          // Update the maximum
          if (num > max) {
            max = num
          }
          // Update the minimum
          if (num < min) {
            min = num
          }
        }
        // Return the (max, min) tuple for this group
        (max, min)
      }).collect.foreach {
        case (max, min) =>
          // Print the maximum and minimum values
          println("Max: " + max)
          println("Min: " + min)
      }
  }
}

Output:

D:\Java\jdk1.8.0_131\bin\java.exe "-javaagent:D:\idea\IntelliJ IDEA 2021.1.3\lib\idea_rt.jar=52414:D:\idea\IntelliJ IDEA 2021.1.3\bin" -Dfile.encoding=UTF-8 -classpath "D:\idea\IntelliJ IDEA 2021.1.3\lib\idea_rt.jar" com.intellij.rt.execution.CommandLineWrapper C:\Users\Administrator\AppData\Local\Temp\idea_classpath1231880085 Max_Min_PaymentValue
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/D:/spark/spark-3.2.0-bin-hadoop2.7/jars/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/D:/Maven/Maven_repositories/org/slf4j/slf4j-log4j12/1.7.30/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
23/07/17 15:18:03 INFO SparkContext: Running Spark version 3.2.0
23/07/17 15:18:04 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
23/07/17 15:18:05 INFO ResourceUtils: ==============================================================
23/07/17 15:18:05 INFO ResourceUtils: No custom resources configured for spark.driver.
23/07/17 15:18:05 INFO ResourceUtils: ==============================================================
23/07/17 15:18:05 INFO SparkContext: Submitted application: Max_Min_PaymentValue
23/07/17 15:18:05 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
23/07/17 15:18:05 INFO ResourceProfile: Limiting resource is cpu
23/07/17 15:18:05 INFO ResourceProfileManager: Added ResourceProfile id: 0
23/07/17 15:18:05 INFO SecurityManager: Changing view acls to: Administrator
23/07/17 15:18:05 INFO SecurityManager: Changing modify acls to: Administrator
23/07/17 15:18:05 INFO SecurityManager: Changing view acls groups to: 
23/07/17 15:18:05 INFO SecurityManager: Changing modify acls groups to: 
23/07/17 15:18:05 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(Administrator); groups with view permissions: Set(); users  with modify permissions: Set(Administrator); groups with modify permissions: Set()
23/07/17 15:18:17 INFO Utils: Successfully started service 'sparkDriver' on port 52457.
23/07/17 15:18:17 INFO SparkEnv: Registering MapOutputTracker
23/07/17 15:18:17 INFO SparkEnv: Registering BlockManagerMaster
23/07/17 15:18:17 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
23/07/17 15:18:17 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
23/07/17 15:18:17 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
23/07/17 15:18:17 INFO DiskBlockManager: Created local directory at C:\Users\Administrator\AppData\Local\Temp\blockmgr-fefa1a95-9abb-4f02-930f-e101116537e3
23/07/17 15:18:17 INFO MemoryStore: MemoryStore started with capacity 623.4 MiB
23/07/17 15:18:17 INFO SparkEnv: Registering OutputCommitCoordinator
23/07/17 15:18:18 INFO Utils: Successfully started service 'SparkUI' on port 4040.
23/07/17 15:18:19 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.56.1:4040
23/07/17 15:18:19 INFO Executor: Starting executor ID driver on host 192.168.56.1
23/07/17 15:18:19 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 52473.
23/07/17 15:18:19 INFO NettyBlockTransferService: Server created on 192.168.56.1:52473
23/07/17 15:18:19 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
23/07/17 15:18:19 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.56.1, 52473, None)
23/07/17 15:18:19 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.56.1:52473 with 623.4 MiB RAM, BlockManagerId(driver, 192.168.56.1, 52473, None)
23/07/17 15:18:19 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.56.1, 52473, None)
23/07/17 15:18:19 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.56.1, 52473, None)
23/07/17 15:18:22 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 244.0 KiB, free 623.2 MiB)
23/07/17 15:18:22 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 23.4 KiB, free 623.1 MiB)
23/07/17 15:18:22 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.56.1:52473 (size: 23.4 KiB, free: 623.4 MiB)
23/07/17 15:18:22 INFO SparkContext: Created broadcast 0 from textFile at Max_Min_PaymentValue.scala:7
23/07/17 15:18:23 INFO FileInputFormat: Total input paths to process : 1
23/07/17 15:18:23 INFO SparkContext: Starting job: collect at Max_Min_PaymentValue.scala:9
23/07/17 15:18:23 INFO DAGScheduler: Registering RDD 3 (map at Max_Min_PaymentValue.scala:8) as input to shuffle 0
23/07/17 15:18:23 INFO DAGScheduler: Got job 0 (collect at Max_Min_PaymentValue.scala:9) with 1 output partitions
23/07/17 15:18:23 INFO DAGScheduler: Final stage: ResultStage 1 (collect at Max_Min_PaymentValue.scala:9)
23/07/17 15:18:23 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
23/07/17 15:18:23 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)
23/07/17 15:18:23 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[3] at map at Max_Min_PaymentValue.scala:8), which has no missing parents
23/07/17 15:18:23 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 7.8 KiB, free 623.1 MiB)
23/07/17 15:18:23 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 4.2 KiB, free 623.1 MiB)
23/07/17 15:18:23 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.56.1:52473 (size: 4.2 KiB, free: 623.4 MiB)
23/07/17 15:18:23 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1427
23/07/17 15:18:23 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[3] at map at Max_Min_PaymentValue.scala:8) (first 15 tasks are for partitions Vector(0))
23/07/17 15:18:23 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks resource profile 0
23/07/17 15:18:24 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0) (192.168.56.1, executor driver, partition 0, PROCESS_LOCAL, 4508 bytes) taskResourceAssignments Map()
23/07/17 15:18:24 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
23/07/17 15:18:25 INFO HadoopRDD: Input split: file:/D:/workspace/spark/src/main/Data/number_sort:0+37
23/07/17 15:18:27 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1204 bytes result sent to driver
23/07/17 15:18:27 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 3114 ms on 192.168.56.1 (executor driver) (1/1)
23/07/17 15:18:27 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
23/07/17 15:18:27 INFO DAGScheduler: ShuffleMapStage 0 (map at Max_Min_PaymentValue.scala:8) finished in 3.615 s
23/07/17 15:18:27 INFO DAGScheduler: looking for newly runnable stages
23/07/17 15:18:27 INFO DAGScheduler: running: Set()
23/07/17 15:18:27 INFO DAGScheduler: waiting: Set(ResultStage 1)
23/07/17 15:18:27 INFO DAGScheduler: failed: Set()
23/07/17 15:18:27 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[5] at map at Max_Min_PaymentValue.scala:9), which has no missing parents
23/07/17 15:18:27 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 8.9 KiB, free 623.1 MiB)
23/07/17 15:18:27 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 4.7 KiB, free 623.1 MiB)
23/07/17 15:18:27 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 192.168.56.1:52473 (size: 4.7 KiB, free: 623.4 MiB)
23/07/17 15:18:27 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1427
23/07/17 15:18:27 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[5] at map at Max_Min_PaymentValue.scala:9) (first 15 tasks are for partitions Vector(0))
23/07/17 15:18:27 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks resource profile 0
23/07/17 15:18:27 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1) (192.168.56.1, executor driver, partition 0, NODE_LOCAL, 4271 bytes) taskResourceAssignments Map()
23/07/17 15:18:27 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
23/07/17 15:18:27 INFO ShuffleBlockFetcherIterator: Getting 1 (88.0 B) non-empty blocks including 1 (88.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
23/07/17 15:18:27 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 36 ms
23/07/17 15:18:27 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 1397 bytes result sent to driver
23/07/17 15:18:27 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 222 ms on 192.168.56.1 (executor driver) (1/1)
23/07/17 15:18:27 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
23/07/17 15:18:27 INFO DAGScheduler: ResultStage 1 (collect at Max_Min_PaymentValue.scala:9) finished in 0.310 s
23/07/17 15:18:27 INFO DAGScheduler: Job 0 is finished. Cancelling potential speculative or zombie tasks for this job
23/07/17 15:18:27 INFO TaskSchedulerImpl: Killing all running tasks in stage 1: Stage finished
23/07/17 15:18:27 INFO DAGScheduler: Job 0 finished: collect at Max_Min_PaymentValue.scala:9, took 4.208463 s
Max: 999
Min: 12
23/07/17 15:18:27 INFO SparkContext: Invoking stop() from shutdown hook
23/07/17 15:18:27 INFO SparkUI: Stopped Spark web UI at http://192.168.56.1:4040
23/07/17 15:18:27 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
23/07/17 15:18:27 INFO MemoryStore: MemoryStore cleared
23/07/17 15:18:27 INFO BlockManager: BlockManager stopped
23/07/17 15:18:27 INFO BlockManagerMaster: BlockManagerMaster stopped
23/07/17 15:18:27 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
23/07/17 15:18:27 INFO SparkContext: Successfully stopped SparkContext
23/07/17 15:18:27 INFO ShutdownHookManager: Shutdown hook called
23/07/17 15:18:27 INFO ShutdownHookManager: Deleting directory C:\Users\Administrator\AppData\Local\Temp\spark-d878c8c9-c882-40ea-81cf-2e9408af0f36

Process finished with exit code 0
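For reference, the same extremes can be computed without groupByKey, which has to materialize the full value list for each key before the loop runs. The sketch below (my own variation, not the original code) carries a (max, min) pair per number and merges pairs with reduceByKey during the shuffle:

import org.apache.spark.{SparkConf, SparkContext}

// Sketch: max and min per key with reduceByKey instead of groupByKey.
// Assumes the same number_sort input file as Max_Min_PaymentValue above.
object MaxMinReduceByKey {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("MaxMinReduceByKey").setMaster("local[1]")
    val sc = new SparkContext(conf)
    sc.setLogLevel("ERROR")
    val lines = sc.textFile("D:\\workspace\\spark\\src\\main\\Data\\number_sort")
    lines.filter(_.trim.nonEmpty)
      // Each number starts out as its own (max, min) pair under the fixed key
      .map { line => val n = line.trim.toInt; ("key", (n, n)) }
      // Merge pairs: keep the larger max and the smaller min
      .reduceByKey((a, b) => (math.max(a._1, b._1), math.min(a._2, b._2)))
      .collect()
      .foreach { case (_, (max, min)) =>
        println("Max: " + max)
        println("Min: " + min)
      }
    sc.stop()
  }
}

The two bug notes below refer to the original groupByKey version.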

Bug 1: reassignment to val (max = num)

Cause: the "reassignment to val" error occurs because the code assigned num to max/min declared as val (immutable); once a val is bound it cannot be reassigned. The fix is to declare max and min as var (mutable) so they can be updated.
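A minimal illustration of the difference (the names here are hypothetical):

val maxVal = Integer.MIN_VALUE
// maxVal = 5           // does not compile: "reassignment to val"

var maxVar = Integer.MIN_VALUE
maxVar = 5               // compiles: a var can be reassigned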

Bug 2: value _1 is not a member of Unit
      println("Max: " + x._1)

Cause: the map function did not return a tuple but Unit, i.e. (), because I had placed the (max, min) expression inside the for loop body. Moving it outside the loop, so that it is the last expression of the function, fixes the problem.
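A reduced sketch of the mistake, using a plain Scala collection so it can be tried outside Spark; the same rule applies inside RDD.map, where the last expression of the function body is its return value:

val groups = Seq(("key", Seq(213, 101, 111, 123, 242, 987, 999, 12)))

// Wrong: the for loop is the last expression, so the function returns Unit and
// groups.map(...) has type Seq[Unit]; accessing x._1 on an element then fails with
// "value _1 is not a member of Unit".
val broken: Seq[Unit] = groups.map { x =>
  var min = Integer.MAX_VALUE
  var max = Integer.MIN_VALUE
  for (num <- x._2) {
    if (num > max) max = num
    if (num < min) min = num
    // a (max, min) written here is discarded on every iteration
  }
}

// Right: (max, min) is the last expression, so the tuple is returned.
val fixed: Seq[(Int, Int)] = groups.map { x =>
  var min = Integer.MAX_VALUE
  var max = Integer.MIN_VALUE
  for (num <- x._2) {
    if (num > max) max = num
    if (num < min) min = num
  }
  (max, min)
}
println(fixed.head)   // (999,12)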

