Detecting plagiarism with Elasticsearch (Part 2)

In my previous article, "Detecting plagiarism with Elasticsearch (Part 1)", I introduced how to detect plagiarized articles. This matters in many real-world settings: my own articles on CSDN are often quoted or copied, sometimes without even citing the source, which is unfair to the author. The approach is equally relevant to many blog sites. That said, the setup in that article may not run smoothly for every developer, so in today's article I deliberately use a self-managed deployment and walk through a Jupyter notebook, so that developers can run everything step by step from start to finish.

Installation

Installing Elasticsearch and Kibana

If you don't have your own Elasticsearch and Kibana installed yet, please refer to the following articles:

  • How to install Elasticsearch on Linux, MacOS and Windows

  • Kibana: How to install Kibana from the Elastic Stack on Linux, MacOS and Windows

When installing, please choose Elastic Stack 8.x. During installation, the terminal prints installation information, including the password for the elastic superuser and the HTTP CA certificate details; note these down, as we will need them below.

To be able to upload vector models, we need a Platinum subscription or an active trial license.
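If your self-managed cluster does not have a Platinum license yet, you can activate a 30-day trial either in Kibana (Stack Management > License Management) or through the license API. Below is a minimal sketch using the Python client; the password and certificate path are from my deployment, so adjust them to yours:

from elasticsearch import Elasticsearch

# A sketch only: connect with your own credentials and certificate.
client = Elasticsearch("https://elastic:o6G_pvRL=8P*7on+o6XH@localhost:9200",
                       ca_certs="./http_ca.crt", verify_certs=True)

# Equivalent to POST /_license/start_trial?acknowledge=true
print(client.license.post_start_trial(acknowledge=True))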

Uploading the models

Note: if we upload the models from the command line here, then you do not need to upload them again in the code below and can skip those steps.

You can refer to the earlier article "Elasticsearch: Using an NLP question-answering model to talk to your favorite Christmas songs". We use the following command to upload the OpenAI detection model:

eland_import_hub_model --url https://elastic:o6G_pvRL=8P*7on+o6XH@localhost:9200 \
	--hub-model-id roberta-base-openai-detector \
	--task-type text_classification \
	--ca-cert /Users/liuxg/elastic/elasticsearch-8.11.0/config/certs/http_ca.crt \
	--start

In the command above, change the certificate path and the Elasticsearch endpoint to match your own configuration.

We can view the newly uploaded model in Kibana under Machine Learning > Trained Models.
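If you prefer the API to the Kibana UI, a quick sketch like the following lists the model to confirm the upload succeeded (it assumes the connected client that we create in the notebook below):

# A sketch only: list the uploaded model by its Elasticsearch model ID.
resp = client.ml.get_trained_models(model_id="roberta-base-openai-detector")
for model in resp["trained_model_configs"]:
    print(model["model_id"])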

Next, we upload the text embedding model in the same way.

eland_import_hub_model --url https://elastic:o6G_pvRL=8P*7on+o6XH@localhost:9200 \
	--hub-model-id sentence-transformers/all-mpnet-base-v2 \
	--task-type text_embedding \
	--ca-cert /Users/liuxg/elastic/elasticsearch-8.11.0/config/certs/http_ca.crt \
	--start	

To make it easier to follow along, you can download the code from:

git clone https://github.com/liu-xiao-guo/elasticsearch-labs

The Jupyter notebook can be found at the following location:

$ pwd
/Users/liuxg/python/elasticsearch-labs/supporting-blog-content/plagiarism-detection-with-elasticsearch
$ ls
plagiarism_detection_es_self_managed.ipynb

Running the code

Next, we run the notebook. We first install the required Python packages:

pip3 install elasticsearch==8.11
pip3 -q install eland elasticsearch sentence_transformers transformers torch==2.1.0

Before running the code, we set the following environment variables:

export ES_USER="elastic"
export ES_PASSWORD="o6G_pvRL=8P*7on+o6XH"
export ES_ENDPOINT="localhost"

We also need to copy the Elasticsearch certificate into the current directory:

$ pwd
/Users/liuxg/python/elasticsearch-labs/supporting-blog-content/plagiarism-detection-with-elasticsearch
$ cp ~/elastic/elasticsearch-8.11.0/config/certs/http_ca.crt .
$ ls
http_ca.crt                                plagiarism_detection_es_self_managed.ipynb
plagiarism_detection_es.ipynb

Import the packages:

from elasticsearch import Elasticsearch, helpers
from elasticsearch.client import MlClient
from eland.ml.pytorch import PyTorchModel
from eland.ml.pytorch.transformers import TransformerModel
from urllib.request import urlopen
import json
from pathlib import Path
import os

Connect to Elasticsearch

elastic_user=os.getenv('ES_USER')
elastic_password=os.getenv('ES_PASSWORD')
elastic_endpoint=os.getenv("ES_ENDPOINT")

url = f"https://{elastic_user}:{elastic_password}@{elastic_endpoint}:9200"
client = Elasticsearch(url, ca_certs = "./http_ca.crt", verify_certs = True)
 
print(client.info())

Upload the detector model

hf_model_id ='roberta-base-openai-detector'
tm = TransformerModel(model_id=hf_model_id, task_type="text_classification")

# Set the model ID as it is named in Elasticsearch
es_model_id = tm.elasticsearch_model_id()

# Download the model from Hugging Face
tmp_path = "models"
Path(tmp_path).mkdir(parents=True, exist_ok=True)
model_path, config, vocab_path = tm.save(tmp_path)

# Load the model into Elasticsearch
ptm = PyTorchModel(client, es_model_id)
ptm.import_model(model_path=model_path, config_path=None, vocab_path=vocab_path, config=config)

# Start the model
s = MlClient.start_trained_model_deployment(client, model_id=es_model_id)
s.body
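Note that start_trained_model_deployment returns once the deployment has been accepted, not necessarily once the model is ready to serve traffic. If you want to block until it is fully started, a small polling sketch like the following can help (it assumes the deployment_stats.state field of the trained-model stats API):

import time

# A sketch only: poll the deployment state until it reports "started".
while True:
    stats = MlClient.get_trained_models_stats(client, model_id=es_model_id)
    state = stats["trained_model_stats"][0].get("deployment_stats", {}).get("state")
    if state == "started":
        break
    time.sleep(5)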

We can check the running model in Kibana.

Upload the text embedding model

hf_model_id='sentence-transformers/all-mpnet-base-v2'
tm = TransformerModel(model_id=hf_model_id, task_type="text_embedding")

# Set the model ID as it is named in Elasticsearch
es_model_id = tm.elasticsearch_model_id()

# Download the model from Hugging Face
tmp_path = "models"
Path(tmp_path).mkdir(parents=True, exist_ok=True)
model_path, config, vocab_path = tm.save(tmp_path)

# Load the model into Elasticsearch
ptm = PyTorchModel(client, es_model_id)
ptm.import_model(model_path=model_path, config_path=None, vocab_path=vocab_path, config=config)

# Start the model
s = MlClient.start_trained_model_deployment(client, model_id=es_model_id)
s.body

Again, we can check the deployment in Kibana.

Create the source index

client.indices.create(
index="plagiarism-docs",
mappings= {
    "properties": {
        "title": {
            "type": "text",
            "fields": {
                "keyword": {
                "type": "keyword"
                }
            }
        },
        "abstract": {
            "type": "text",
            "fields": {
                "keyword": {
                "type": "keyword"
                }
            }
        },
        "url": {
            "type": "keyword"
        },
        "venue": {
            "type": "keyword"
        },
         "year": {
            "type": "keyword"
        }
    }
})

We can view the newly created index in Kibana.

Create the checker ingest pipeline

client.ingest.put_pipeline(
    id="plagiarism-checker-pipeline",
    processors = [
    {
      "inference": { #for ml models - to infer against the data that is being ingested in the pipeline
        "model_id": "roberta-base-openai-detector", #text classification model id
        "target_field": "openai-detector", # Target field for the inference results
        "field_map": { #Maps the document field names to the known field names of the model.
        "abstract": "text_field" # Field matching our configured trained model input. Typically for NLP models, the field name is text_field.
        }
      }
    },
    {
      "inference": {
        "model_id": "sentence-transformers__all-mpnet-base-v2", #text embedding model model id
        "target_field": "abstract_vector", # Target field for the inference results
        "field_map": { #Maps the document field names to the known field names of the model.
        "abstract": "text_field" # Field matching our configured trained model input. Typically for NLP models, the field name is text_field.
        }
      }
    }

  ]
)
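Before reindexing the whole dataset, we can optionally dry-run the pipeline with the simulate API. Here is a minimal sketch; the sample abstract is made up purely for illustration:

# A sketch only: run one fabricated document through the pipeline.
resp = client.ingest.simulate(
    id="plagiarism-checker-pipeline",
    docs=[{"_source": {"abstract": "A sample abstract used to test the pipeline."}}],
)
source = resp["docs"][0]["doc"]["_source"]
print(source["openai-detector"]["predicted_value"])       # Real / Fake
print(len(source["abstract_vector"]["predicted_value"]))  # 768-dimensional vector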

We can view the pipeline in Kibana.

Create the plagiarism checker index

client.indices.create(
index="plagiarism-checker",
mappings={
"properties": {
    "title": {
        "type": "text",
        "fields": {
            "keyword": {
                "type": "keyword"
            }
        }
    },
    "abstract": {
        "type": "text",
        "fields": {
            "keyword": {
                "type": "keyword"
            }
        }
    },
    "url": {
        "type": "keyword"
    },
    "venue": {
        "type": "keyword"
    },
    "year": {
        "type": "keyword"
    },
    "abstract_vector.predicted_value": { # Inference results field, target_field.predicted_value
    "type": "dense_vector",
    "dims": 768, # embedding_size
    "index": "true",
    "similarity": "dot_product" #  When indexing vectors for approximate kNN search, you need to specify the similarity function for comparing the vectors.
         }
  }
}
)
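One detail worth noting: the dot_product similarity expects the indexed vectors to be unit length. sentence-transformers/all-mpnet-base-v2 normally produces normalized embeddings, which is why dot_product is appropriate here. If you want to verify this against your deployed model, here is a small sketch (numpy is an extra dependency; if the norm is not close to 1.0, cosine would be the safer similarity choice):

import numpy as np

# A sketch only: embed a test string and check the vector's length.
resp = MlClient.infer_trained_model(client,
    model_id="sentence-transformers__all-mpnet-base-v2",
    docs=[{"text_field": "hello world"}])
vec = np.array(resp["inference_results"][0]["predicted_value"])
print(np.linalg.norm(vec))  # expect ~1.0 for a normalized embedding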

We can view the new index in Kibana.

Write the source documents

First, download the dataset from https://public.ukp.informatik.tu-darmstadt.de/reimers/sentence-transformers/datasets/emnlp2016-2018.json into the current directory:

$ pwd
/Users/liuxg/python/elasticsearch-labs/supporting-blog-content/plagiarism-detection-with-elasticsearch
$ ls
emnlp2016-2018.json                        plagiarism_detection_es.ipynb
http_ca.crt                                plagiarism_detection_es_self_managed.ipynb
models

As shown above, emnlp2016-2018.json is the dataset we just downloaded.

# Load data into a JSON object
with open('emnlp2016-2018.json') as f:
   data_json = json.load(f)
 
print(f"Successfully loaded {len(data_json)} documents")

def create_index_body(doc):
    """ Generate the body for an Elasticsearch document. """
    return {
        "_index": "plagiarism-docs",
        "_source": doc,
    }

# Prepare the documents to be indexed
documents = [create_index_body(doc) for doc in data_json]

# Use helpers.bulk to index
helpers.bulk(client, documents)

print("Done indexing documents into `plagiarism-docs` source index")

We can view the ingested documents in Kibana.

Reindex with the ingest pipeline

client.reindex(wait_for_completion=False,
               source={
                  "index": "plagiarism-docs"
    },
               dest= {
                  "index": "plagiarism-checker",
                  "pipeline": "plagiarism-checker-pipeline"
    }
)

Above, we set wait_for_completion=False, so the reindex runs asynchronously and we need to give it some time to finish. We can track progress by checking the document counts:
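The following is a minimal polling sketch; it simply compares _count on the source and destination indices:

import time

# A sketch only: poll until the destination index catches up with the source.
expected = client.count(index="plagiarism-docs")["count"]
while True:
    client.indices.refresh(index="plagiarism-checker")
    indexed = client.count(index="plagiarism-checker")["count"]
    print(f"{indexed}/{expected} documents reindexed")
    if indexed >= expected:
        break
    time.sleep(10)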

Once the counts match, the reindex is complete. Let's then take a closer look at the documents in the plagiarism-checker index:
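A small sketch to pull one document back and confirm that the pipeline added both inference fields:

# A sketch only: fetch one enriched document and inspect the new fields.
resp = client.search(index="plagiarism-checker", size=1)
hit = resp["hits"]["hits"][0]["_source"]
print(hit["title"])
print(hit["openai-detector"]["predicted_value"])       # Real / Fake
print(len(hit["abstract_vector"]["predicted_value"]))  # 768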

Checking for duplicated text

Direct plagiarism

model_text = 'Understanding and reasoning about cooking recipes is a fruitful research direction towards enabling machines to interpret procedural text. In this work, we introduce RecipeQA, a dataset for multimodal comprehension of cooking recipes. It comprises of approximately 20K instructional recipes with multiple modalities such as titles, descriptions and aligned set of images. With over 36K automatically generated question-answer pairs, we design a set of comprehension and reasoning tasks that require joint understanding of images and text, capturing the temporal flow of events and making sense of procedural knowledge. Our preliminary results indicate that RecipeQA will serve as a challenging test bed and an ideal benchmark for evaluating machine comprehension systems. The data and leaderboard are available at http://hucvl.github.io/recipeqa.'

response = client.search(index='plagiarism-checker', size=1,
    knn={
        "field": "abstract_vector.predicted_value",
        "k": 9,
        "num_candidates": 974,
        "query_vector_builder": {
            "text_embedding": {
                "model_id": "sentence-transformers__all-mpnet-base-v2",
                "model_text": model_text
            }
        }
    }
)

for hit in response['hits']['hits']:
    score = hit['_score']
    title = hit['_source']['title']
    abstract = hit['_source']['abstract']
    openai = hit['_source']['openai-detector']['predicted_value']
    url = hit['_source']['url']

    if score > 0.9:
        print(f"\nHigh similarity detected! This might be plagiarism.")
        print(f"\nMost similar document: '{title}'\n\nAbstract: {abstract}\n\nurl: {url}\n\nScore:{score}\n")

        if openai == 'Fake':
            print("This document may have been created by AI.\n")

    elif score < 0.7:
        print(f"\nLow similarity detected. This might not be plagiarism.")

        if openai == 'Fake':
            print("This document may have been created by AI.\n")

    else:
        print(f"\nModerate similarity detected.")
        print(f"\nMost similar document: '{title}'\n\nAbstract: {abstract}\n\nurl: {url}\n\nScore:{score}\n")

        if openai == 'Fake':
            print("This document may have been created by AI.\n")

ml_client = MlClient(client)

model_id = 'roberta-base-openai-detector' #open ai text classification model

document = [
    {
        "text_field": model_text
    }
]

ml_response = ml_client.infer_trained_model(model_id=model_id, docs=document)

predicted_value = ml_response['inference_results'][0]['predicted_value']

if predicted_value == 'Fake':
    print("Note: The text query you entered may have been generated by AI.\n")

Similar text: paraphrase plagiarism

model_text = 'Comprehending and deducing information from culinary instructions represents a promising avenue for research aimed at empowering artificial intelligence to decipher step-by-step text. In this study, we present CuisineInquiry, a database for the multifaceted understanding of cooking guidelines. It encompasses a substantial number of informative recipes featuring various elements such as headings, explanations, and a matched assortment of visuals. Utilizing an extensive set of automatically crafted question-answer pairings, we formulate a series of tasks focusing on understanding and logic that necessitate a combined interpretation of visuals and written content. This involves capturing the sequential progression of events and extracting meaning from procedural expertise. Our initial findings suggest that CuisineInquiry is poised to function as a demanding experimental platform.'

response = client.search(index='plagiarism-checker', size=1,
    knn={
        "field": "abstract_vector.predicted_value",
        "k": 9,
        "num_candidates": 974,
        "query_vector_builder": {
            "text_embedding": {
                "model_id": "sentence-transformers__all-mpnet-base-v2",
                "model_text": model_text
            }
        }
    }
)

for hit in response['hits']['hits']:
    score = hit['_score']
    title = hit['_source']['title']
    abstract = hit['_source']['abstract']
    openai = hit['_source']['openai-detector']['predicted_value']
    url = hit['_source']['url']

    if score > 0.9:
        print(f"\nHigh similarity detected! This might be plagiarism.")
        print(f"\nMost similar document: '{title}'\n\nAbstract: {abstract}\n\nurl: {url}\n\nScore:{score}\n")

        if openai == 'Fake':
            print("This document may have been created by AI.\n")

    elif score < 0.7:
        print(f"\nLow similarity detected. This might not be plagiarism.")

        if openai == 'Fake':
            print("This document may have been created by AI.\n")

    else:
        print(f"\nModerate similarity detected.")
        print(f"\nMost similar document: '{title}'\n\nAbstract: {abstract}\n\nurl: {url}\n\nScore:{score}\n")

        if openai == 'Fake':
            print("This document may have been created by AI.\n")

ml_client = MlClient(client)

model_id = 'roberta-base-openai-detector' #open ai text classification model

document = [
    {
        "text_field": model_text
    }
]

ml_response = ml_client.infer_trained_model(model_id=model_id, docs=document)

predicted_value = ml_response['inference_results'][0]['predicted_value']

if predicted_value == 'Fake':
    print("Note: The text query you entered may have been generated by AI.\n")

The complete code can be downloaded from: https://github.com/liu-xiao-guo/elasticsearch-labs/blob/main/supporting-blog-content/plagiarism-detection-with-elasticsearch/plagiarism_detection_es_self_managed.ipynb
