Analysis of the Linux inter-core communication RPMsg architecture
Using i.MX8 as an example.
At the lowest hardware level, communication between the A core and the M core is carried by a hardware block called the Messaging Unit (MU), as shown in the figure.
Linux RPMsg is a message-passing mechanism implemented on top of the virtio framework.
VirtIO is a generic framework for implementing "virtual I/O"; typical virtual devices such as virtual PCI devices, network cards, and disks, as well as hypervisors like KVM, all use this technology.
Alongside virtio there is virtio-ring (vring), which implements virtio's concrete communication mechanism and data flow.
The virtio layer is the control layer, responsible for the notification mechanism (kick/notify) and the control flow between the front end and the back end, while virtio-vring handles the actual data transfer.
From the overall architecture, the relationship looks like this:
[figure: overall architecture diagram]
At the bottom sits platform_bus, which reads the configuration from the DTS to initialize the relevant objects, such as virtio_device, sets up its config ops table and device ID, and registers it on virtio_bus.
Related DTS configuration:
```
&rpmsg {
	/* 64K for one rpmsg instance: */
	vdev-nums = <2>;
	reg = <0x0 0x90000000 0x0 0x20000>;
	status = "okay";
};
```

The main initialization happens in imx_rpmsg_probe; the key steps are:
Register the hardware interrupt for the MU:

```c
ret = request_irq(irq, imx_mu_rpmsg_isr, IRQF_EARLY_RESUME | IRQF_SHARED,
		  "imx-mu-rpmsg", rpdev);
```
Initialize the MU hardware:

```c
ret = imx_rpmsg_mu_init(rpdev);
```
Create a delayed work item to process the data delivered by MU interrupts:

```c
INIT_DELAYED_WORK(&(rpdev->rpmsg_work), rpmsg_work_handler);
```
Create a notifier chain for hooking into the virtio queues:

```c
BLOCKING_INIT_NOTIFIER_HEAD(&(rpdev->notifier));
```
Initialize and register the virtio_device instances:

```c
for (j = 0; j < rpdev->vdev_nums; j++) {
	pr_debug("%s rpdev%d vdev%d: vring0 0x%x, vring1 0x%x\n",
		 __func__, rpdev->core_id, rpdev->vdev_nums,
		 rpdev->ivdev[j].vring[0], rpdev->ivdev[j].vring[1]);
	rpdev->ivdev[j].vdev.id.device = VIRTIO_ID_RPMSG;
	rpdev->ivdev[j].vdev.config = &imx_rpmsg_config_ops;
	rpdev->ivdev[j].vdev.dev.parent = &pdev->dev;
	rpdev->ivdev[j].vdev.dev.release = imx_rpmsg_vproc_release;
	rpdev->ivdev[j].base_vq_id = j * 2;
	ret = register_virtio_device(&rpdev->ivdev[j].vdev);
	if (ret) {
		pr_err("%s failed to register rpdev: %d\n", __func__, ret);
		return ret;
	}
}
```

Note the config ops assigned to each virtio_device: `rpdev->ivdev[j].vdev.config = &imx_rpmsg_config_ops;`
```c
static struct virtio_config_ops imx_rpmsg_config_ops = {
	.get_features		= imx_rpmsg_get_features,
	.finalize_features	= imx_rpmsg_finalize_features,
	.find_vqs		= imx_rpmsg_find_vqs,
	.del_vqs		= imx_rpmsg_del_vqs,
	.reset			= imx_rpmsg_reset,
	.set_status		= imx_rpmsg_set_status,
	.get_status		= imx_rpmsg_get_status,
};
```
```
imx_rpmsg_find_vqs
    --> rp_find_vq
        --> ioremap_nocache
        --> vring_new_virtqueue(... imx_rpmsg_notify, callback ...)
```
Note how the callback gets registered; this happens in rpmsg_bus:

```
rpmsg_probe
    --> vq_callback_t *vq_cbs[] = { rpmsg_recv_done, rpmsg_xmit_done };
    --> virtio_find_vqs(vdev, 2, vqs, vq_cbs, names, NULL);
```

The imx_rpmsg_notify and callback registered here are the ones invoked by the virtio_bus framework.
In the middle, virtio_bus bridges the layers above and below, and provides the standard virtio queue operations, such as virtqueue_add and virtqueue_kick.
A struct virtqueue exposes only a single callback function externally, used to signal that the queue's data has changed:
```c
struct virtqueue {
	struct list_head list;
	void (*callback)(struct virtqueue *vq);
	const char *name;
	struct virtio_device *vdev;
	unsigned int index;
	unsigned int num_free;
	void *priv;
};
```

In fact, virtqueue only provides a standard queue interface; the concrete implementation relies on vring_virtqueue:
```c
struct vring_virtqueue {
	struct virtqueue vq;

	/* Actual memory layout for this queue; struct vring itself holds:
	 *   unsigned int num;
	 *   struct vring_desc *desc;
	 *   struct vring_avail *avail;
	 *   struct vring_used *used;
	 */
	struct vring vring;

	/* How to notify other side. FIXME: commonalize hcalls! */
	bool (*notify)(struct virtqueue *vq);

	...

	/* Per-descriptor state. */
	struct vring_desc_state desc_state[];
};
```

Its trigger path is vring_interrupt:
```c
irqreturn_t vring_interrupt(int irq, void *_vq)
{
	/* only virtqueue is exposed externally; recover the
	 * enclosing vring_virtqueue */
	struct vring_virtqueue *vq = to_vvq(_vq);

	if (!more_used(vq)) {
		pr_debug("virtqueue interrupt with no work for %p\n", vq);
		return IRQ_NONE;
	}

	if (unlikely(vq->broken))
		return IRQ_HANDLED;

	pr_debug("virtqueue callback for %p (%p)\n", vq, vq->vq.callback);
	/* invoke the virtqueue's callback */
	if (vq->vq.callback)
		vq->vq.callback(&vq->vq);

	return IRQ_HANDLED;
}
```

Combined with the interrupt handling, the overall flow is:
```
imx_mu_rpmsg_isr
    --> rpmsg_work_handler
        --> vring_interrupt
            --> virtqueue.callback
```
vring_virtqueue also contains a notify hook, used to tell the other side that the queue has changed.
After buffers have been queued with virtqueue_add, virtqueue_kick (which internally calls virtqueue_notify) is what ends up triggering notify.
The final notify implementation is imx_rpmsg_notify, which writes the MU registers to send the data:
```c
/* kick the remote processor, and let it know which virtqueue to poke at */
static bool imx_rpmsg_notify(struct virtqueue *vq)
{
	unsigned int mu_rpmsg = 0;
	struct imx_rpmsg_vq_info *rpvq = vq->priv;

	mu_rpmsg = rpvq->vq_id << 16;
	mutex_lock(&rpvq->rpdev->lock);
	/*
	 * Send the index of the triggered virtqueue as the mu payload.
	 * Use the timeout MU send message here.
	 * Since that M4 core may not be loaded, and the first MSG may
	 * not be handled by M4 when multi-vdev is enabled.
	 * To make sure that the message wound't be discarded when M4
	 * is running normally or in the suspend mode. Only use
	 * the timeout mechanism by the first notify when the vdev is
	 * registered.
	 */
	if (unlikely(rpvq->rpdev->first_notify > 0)) {
		rpvq->rpdev->first_notify--;
		MU_SendMessageTimeout(rpvq->rpdev->mu_base, 1, mu_rpmsg, 200);
	} else {
		MU_SendMessage(rpvq->rpdev->mu_base, 1, mu_rpmsg);
	}
	mutex_unlock(&rpvq->rpdev->lock);
	return true;
}
```

The topmost layer can be viewed as applications built on rpmsg, attached to the rpmsg_bus. rpmsg likewise has a standard set of operations, such as rpmsg_send, rpmsg_sendto, and rpmsg_poll.
At the rpmsg_bus layer there is also the concept of an rpmsg_endpoint, which comes with an rpmsg_endpoint_ops containing send, sendto, and similar interfaces; I have not yet studied it in depth:
```c
static const struct rpmsg_endpoint_ops virtio_endpoint_ops = {
	.destroy_ept		= virtio_rpmsg_destroy_ept,
	.send			= virtio_rpmsg_send,
	.sendto			= virtio_rpmsg_sendto,
	.send_offchannel	= virtio_rpmsg_send_offchannel,
	.trysend		= virtio_rpmsg_trysend,
	.trysendto		= virtio_rpmsg_trysendto,
	.trysend_offchannel	= virtio_rpmsg_trysend_offchannel,
};
```
The send flow is:
```
rpmsg_send
    --> rpmsg_endpoint.ops->send
        --> virtio_rpmsg_send
            --> virtqueue_add_outbuf    /* put the data into the queue */
            --> virtqueue_kick          /* notify the remote side */
                --> virtqueue_notify
                    --> imx_rpmsg_notify
                        --> MU_REG_WRITE
```
rpmsg_bus provides two default callbacks, rpmsg_recv_done and rpmsg_xmit_done, to notify the rpmsg applications above; they indicate that data has been received and that transmission has completed, respectively.
Receive processing flow:
```
imx_mu_rpmsg_isr
    --> rpmsg_work_handler
        --> vring_interrupt
            --> virtqueue.callback
                --> rpmsg_recv_done or rpmsg_xmit_done
```
```c
/* called when an rx buffer is used, and it's time to digest a message */
static void rpmsg_recv_done(struct virtqueue *rvq)
{
	struct virtproc_info *vrp = rvq->vdev->priv;
	struct device *dev = &rvq->vdev->dev;
	struct rpmsg_hdr *msg;
	unsigned int len, msgs_received = 0;
	int err;

	msg = virtqueue_get_buf(rvq, &len);
	if (!msg) {
		dev_err(dev, "uhm, incoming signal, but no used buffer ?\n");
		return;
	}

	while (msg) {
		err = rpmsg_recv_single(vrp, dev, msg, len);
		if (err)
			break;

		msgs_received++;

		msg = virtqueue_get_buf(rvq, &len);
	}

	dev_dbg(dev, "Received %u messages\n", msgs_received);

	/* tell the remote processor we added another available rx buffer */
	if (msgs_received)
		/* kick the receive queue */
		virtqueue_kick(vrp->rvq);
}

static void rpmsg_xmit_done(struct virtqueue *svq)
{
	struct virtproc_info *vrp = svq->vdev->priv;

	dev_dbg(&svq->vdev->dev, "%s\n", __func__);

	/* wake up potential senders that are waiting for a tx buffer */
	wake_up_interruptible(&vrp->sendq);
}
```

The overall process is as follows:
Summary