Building inter-core communication between the APU and RPU without PetaLinux
Part 1: Create an OpenAMP bare-metal program for the RPU, implementing rpmsg communication on top of the OpenAMP framework.
Open Vitis, import the XSA, and create a platform project. In the platform project, open platform.spr; it shows the CPU board support packages added to the platform.
Under the platform, open the BSP settings for the target core (Modify BSP Settings) and enable the openamp and libmetal libraries. (Note: the screenshots show versions 1.6 and 2.4; the actual opentest project uses OpenAMP 1.7 and libmetal 2.4.)
After these steps,
the following library entries appear in the BSP's system.mss description:
BEGIN LIBRARY
PARAMETER LIBRARY_NAME = openamp
PARAMETER LIBRARY_VER = 1.7
PARAMETER PROC_INSTANCE = psu_cortexr5_1
END
BEGIN LIBRARY
PARAMETER LIBRARY_NAME = libmetal
PARAMETER LIBRARY_VER = 2.4
PARAMETER PROC_INSTANCE = psu_cortexr5_1
END
Then build the platform project.
After the build completes, create the application: in Vitis, File -> New -> Application, select the R5_0 core, and create the opentest project.
Part 2: Kernel device tree modifications.
Linux kernel device tree description; reference link:
// https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/1896251596/OpenAMP+2021.1
/dts-v1/;
#include <dt-bindings/power/xilinx-zynqmp-power.h>
#include "zynqmp.dtsi"
#include "zynqmp-clk-ccf.dtsi"
#include <dt-bindings/phy/phy.h>
/ {
chosen {
bootargs = "earlycon";
stdout-path = "serial0:115200n8";
};
aliases {
i2c0 = &i2c0;
i2c1 = &i2c1;
spi0 = &qspi;
mmc0 = &sdhci0;
serial0 = &uart0;
serial1 = &uart1;
ethernet0 = &gem1;
// rtc0 = &rtc0;
// nvmem0 = &eeprom;
};
memory@0 {
device_type = "memory";
reg = <0x00000000 0x00000000 0x00000000 0x7FF00000>, <0x00000008 0x00000000 0x00000001 0x80000000>;
};
gem_clk: psgrt_gem_clock {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <125000000>;
};
usb_clk: psgrt_usb_clock {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <100000000>;
};
reserved-memory {
#address-cells = <2>;
#size-cells = <2>;
ranges;
global_reserved: global_reserved@0 {
no-map;
reg = <0x0 0x0 0x0 0x00800000>;
};
rpu0vdev0vring0: rpu0vdev0vring0@800000 {
no-map;
reg = <0x0 0x00800000 0x0 0x100000>;
};
rpu0vdev0vring1: rpu0vdev0vring1@900000 {
no-map;
reg = <0x0 0x00900000 0x0 0x100000>;
};
rpu0vdev0buffer: rpu0vdev0buffer@A00000 {
no-map;
reg = <0x0 0xA00000 0x0 0x200000>;
};
rpu0load: rpu0load@1000000 {
no-map;
reg = <0x0 0x01000000 0x0 0x800000>;
};
rpu1vdev0vring0: rpu1vdev0vring0@C00000 {
no-map;
reg = <0x0 0x00C00000 0x0 0x100000>;
};
rpu1vdev0vring1: rpu1vdev0vring1@D00000 {
no-map;
reg = <0x0 0x00D00000 0x0 0x100000>;
};
rpu1vdev0buffer: rpu1vdev0buffer@E00000 {
no-map;
reg = <0x0 0x00E00000 0x0 0x200000>;
};
rpu1load: rpu1load@30000000 {
no-map;
reg = <0x0 0x30000000 0x0 0x8000000>;
};
};
tcm_0a: tcm_0a@ffe00000 {
no-map;
reg = <0x0 0xffe00000 0x0 0x10000>;
phandle = <0x40>;
status = "okay";
compatible = "mmio-sram";
power-domain = <&zynqmp_firmware PD_TCM_0_A>;
};
tcm_0b: tcm_0b@ffe20000 {
no-map;
reg = <0x0 0xffe20000 0x0 0x10000>;
phandle = <0x41>;
status = "okay";
compatible = "mmio-sram";
power-domain = <&zynqmp_firmware PD_TCM_0_B>;
};
tcm_1a: tcm_1a@ffe90000 {
no-map;
reg = <0x0 0xffe90000 0x0 0x10000>;
phandle = <0x42>;
status = "okay";
compatible = "mmio-sram";
power-domain = <&zynqmp_firmware PD_TCM_1_A>;
};
tcm_1b: tcm_1b@ffeb0000 {
no-map;
reg = <0x0 0xffeb0000 0x0 0x10000>;
phandle = <0x43>;
status = "okay";
compatible = "mmio-sram";
power-domain = <&zynqmp_firmware PD_TCM_1_B>;
};
zynqmp_ipi1 {
compatible = "xlnx,zynqmp-ipi-mailbox";
interrupt-parent = <&gic>;
interrupts = <0 29 4>;
xlnx,ipi-id = <7>;
#address-cells = <1>;
#size-cells = <1>;
ranges;
/* APU<->RPU0 IPI mailbox controller */
ipi_mailbox_rpu0: mailbox@ff990600 {
reg = <0xff990600 0x20>,
<0xff990620 0x20>,
<0xff9900c0 0x20>,
<0xff9900e0 0x20>;
reg-names = "local_request_region",
"local_response_region",
"remote_request_region",
"remote_response_region";
#mbox-cells = <1>;
xlnx,ipi-id = <1>;
};
};
zynqmp_ipi2 {
compatible = "xlnx,zynqmp-ipi-mailbox";
interrupt-parent = <&gic>;
interrupts = <0 30 4>;
xlnx,ipi-id = <8>;
#address-cells = <1>;
#size-cells = <1>;
ranges;
/* APU<->RPU1 IPI mailbox controller */
ipi_mailbox_rpu1: mailbox@ff3f0b00 {
reg = <0xff3f0b00 0x20>,
<0xff3f0b20 0x20>,
<0xff3f0940 0x20>,
<0xff3f0960 0x20>;
reg-names = "local_request_region",
"local_response_region",
"remote_request_region",
"remote_response_region";
#mbox-cells = <1>;
xlnx,ipi-id = <2>;
};
};
rf5ss: rf5ss@ff9a0000 {
status = "okay";
compatible = "xlnx,zynqmp-r5-remoteproc";
xlnx,cluster-mode = <1>;
ranges;
reg = <0x0 0xFF9A0000 0x0 0x10000>;
#address-cells = <0x2>;
#size-cells = <0x2>;
// power-domain = <PD_RPU>; // kernel 5.10
power-domain = <&zynqmp_firmware PD_RPU>; // kernel 5.15
r5f_0: r5@0 {
compatible = "xilinx,r5f";
#address-cells = <2>;
#size-cells = <2>;
ranges;
sram = <&tcm_0a>, <&tcm_0b>;
memory-region = <&rpu0load>, <&rpu0vdev0buffer>, <&rpu0vdev0vring0>, <&rpu0vdev0vring1>;
// power-domain = <PD_RPU_0>; // kernel 5.10
power-domain = <&zynqmp_firmware PD_RPU_0>; // kernel 5.15
mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
mbox-names = "tx", "rx";
interrupt-parent = <&gic>;
interrupts = <0 29 4>;
xlnx,ipi-id = <7>;
};
r5f_1: r5@1 {
compatible = "xilinx,r5f";
#address-cells = <2>;
#size-cells = <2>;
ranges;
sram = <&tcm_1a>, <&tcm_1b>;
memory-region = <&rpu1load>, <&rpu1vdev0buffer>, <&rpu1vdev0vring0>, <&rpu1vdev0vring1>;
// power-domain = <PD_RPU_1>; // kernel 5.10
power-domain = <&zynqmp_firmware PD_RPU_1>; // kernel 5.15
mboxes = <&ipi_mailbox_rpu1 0>, <&ipi_mailbox_rpu1 1>;
mbox-names = "tx", "rx";
interrupt-parent = <&gic>;
interrupts = <0 30 4>;
xlnx,ipi-id = <8>;
};
};
};
Part 3: Adapt the opentest project to the device tree above.
1: Update rsc_table.c:
#define RING_TX 0x800000 // was FW_RSC_U32_ADDR_ANY; matches rpu0vdev0vring0 in the device tree
#define RING_RX 0x900000 // was FW_RSC_U32_ADDR_ANY; matches rpu0vdev0vring1 in the device tree
2: Update the shared-memory address in platform_info.c:
#define SHARED_MEM_PA 0xa00000UL // matches rpu0vdev0buffer in the device tree
3: Update the IPI interrupt channel in platform_info.h (no change is needed if the default is already channel 7):
#define IPI_CHN_BITMASK 0x0f000001 /* IPI channel bit mask for IPI from/to APU */
This mask must be changed to match the IPI channel actually used.
Also change the number of endpoints created in rpmsg_echo.c: ECHO_NUM_EPTS 2 (the default is 1).
4: In the opentest project's lscript.ld, change the .resource_table section, the firmware load address, and the .bss
address so they fall inside the firmware load region reserved in the device tree and do not exceed it; part of that region is set aside to hold the table.
.resource_table 0x30700000 : { // placed at a fixed offset inside the RPU load region reserved in the device tree
. = ALIGN(4);
*(.resource_table)
} > psu_r5_ddr_text_1_MEM_0
Note: all of the addresses above are described in the device tree and must not conflict with each other!
Part 4: Kernel configuration and starting the RPU firmware.
The kernel must enable the remoteproc and rpmsg drivers (rpmsg_char, rpmsg_core, virtio_rpmsg_bus):
CONFIG_REMOTEPROC=y
CONFIG_REMOTEPROC_CDEV=y
CONFIG_ZYNQMP_R5_REMOTEPROC=y
CONFIG_ZYNQMP_IPI_MBOX=y
# end of Remoteproc drivers
#
# Rpmsg drivers
#
CONFIG_RPMSG=y
CONFIG_RPMSG_CHAR=y
# CONFIG_RPMSG_QCOM_GLINK_RPM is not set
CONFIG_RPMSG_VIRTIO=y
# end of Rpmsg drivers
After building the kernel and device tree and booting the board, remoteproc nodes appear, ready for loading the RPU firmware:
Use echo opentest.elf > /sys/class/remoteproc/remoteproc0/firmware to select the firmware for RPU0.
Use echo start > /sys/class/remoteproc/remoteproc0/state to start the firmware.
If no log like the one below appears after the firmware starts, and no channel device shows up under /sys/bus/rpmsg/devices,
communication cannot work. Likely causes:
1: the device tree configuration is wrong;
2: the address changes described above are incorrect;
3: the IPI interrupts and ipi_mailbox addresses in the device tree do not match.
virtio_rpmsg_bus virtio0: creating channel rpmsg-openamp-demo-channel addr 0x400
[ 71.000069] virtio_rpmsg_bus virtio0: creating channel rpmsg-openamp-demo-channel1 addr 0x401
Once the RPU runs correctly, nodes appear under /sys/bus/rpmsg/devices. If rpmsg_char were built as a module, modprobe rpmsg_char.ko would then be needed before the rpmsg_ctrl node shows up under /dev; since the configuration above sets CONFIG_RPMSG_CHAR=y, no modprobe is needed, and the modprobe rpmsg_char call in the APU echo_test program can simply be commented out.
Note: if the RPU has not loaded opentest correctly and created its endpoints, running modprobe rpmsg_char.ko on the APU will not make an rpmsg_ctrl node appear under /dev either.
In other words, rpmsg only works after the RPU has first created the rpmsg channel.
After the RPU program runs successfully, the node information is visible under /sys/bus/rpmsg/devices:
Running echo_test on the APU then communicates successfully:
Testing RPU1:
Setting up RPU1 communication following the device tree and steps above runs into the problem below, probably caused by a wrong ipi_mailbox configuration:
With the mboxes and related properties configured under r5f_1 as above, loading and starting the firmware hangs the system in the IPI path.
If instead the remoteproc node is configured without registering RPU1's mailbox properties, the firmware does run,
but then there is no interrupt to notify the APU to create the channel, so communication is impossible.
ipi_mailbox_rpu2: mailbox@ff990780 { // found in the kernel source device trees
reg = <0xff990780 0x20>,
<0xff9907a0 0x20>,
<0xff9907c0 0x20>,
<0xff9905a0 0x20>;
reg-names = "local_request_region",
"local_response_region",
"remote_request_region",
"remote_response_region";
#mbox-cells = <1>;
xlnx,ipi-id = <2>;
};
Using this mailbox node for RPU1 instead, RPU1 can be brought up and communicates:
Reference links:
echo_test.c download:
meta-openamp/recipes-openamp/rpmsg-examples/rpmsg-echo-test/echo_test.c at rel-v2019.1 · Xilinx/meta-openamp · GitHub
OpenAMP 2021.1 - Xilinx Wiki - Confluence (atlassian.net)
Loading FreeRTOS RPU firmware on VCK190 using remoteproc driver - Xilinx Wiki - Confluence (atlassian.net)
Debugging an OpenAMP Application - Libmetal and OpenAMP User Guide (UG1186) - AMD Adaptive Computing Documentation Portal (xilinx.com)