3. Agent architecture
Here comes the agent, and this is where DPDK and QEMU interact for the first time.
The objective of introducing the agent into the virtual link is to reduce the dependency between DPDK and QEMU. It never eliminates that dependency completely, but it makes both ends pluggable. Why? As mentioned earlier, ivshmem generates metadata, and that metadata is passed to the QEMU ivshmem PCI backend, so you can imagine what happens to the link if the DPDK lane crashes: a reboot of the VM is required, and dpdk+vhost suffers from the same problem. With the agent, both lanes interact with the agent on the control plane, so either of the two can shut itself down and then recover. All of this is possible because the agent decouples the relevant components.
It would be remiss not to mention the deficiency of the virtual link. As a matter of fact, it is not deficient in a Neutron environment at all. The following picture illustrates how things work when virtual links are integrated into Neutron. In the traditional Neutron tap-based network fabric, QEMU creates a tap device, and the tap device is extended with a Linux bridge or OVS for further connectivity features. Either QEMU or the vswitch can restart and still connect to the tap, because a tap device preserves its character device interface and descriptor and remains in place even when the connected user device is closed. So the tap device is a good thing.
But our virtual link is integrated as part of a DPDK PMD, and it is zero-copy inside. It is not easy to establish such a link while keeping it high performance. The core requirement of the virtual link is that either lane can recover after a reboot, but initially one side must initialize the virtual link; given what happens in QEMU, QEMU will do it. Once the virtual link is created, both DPDK and QEMU can connect to it via an init or request action, and the virtual link exists until the moment the agent detects that all lanes are disconnected. So you could say these virtual links are destroyed automatically.
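To make that lifecycle concrete, here is a minimal sketch, assuming a simple reference-counting scheme inside the agent; the names (struct vlink, vlink_attach, vlink_detach, shared_resource) are hypothetical and not the actual implementation:

```c
/*
 * Hypothetical sketch: the agent counts attached lanes and reclaims the
 * virtual link once the last lane disconnects, which is why the links
 * appear to be destroyed automatically.
 */
#include <stdlib.h>

struct vlink {
    int nr_lanes;            /* lanes currently attached (DPDK, QEMU) */
    void *shared_resource;   /* e.g. the ivshmem-backed region */
};

static void vlink_attach(struct vlink *link)
{
    link->nr_lanes++;
}

static void vlink_detach(struct vlink *link)
{
    if (--link->nr_lanes == 0) {
        free(link->shared_resource);   /* all lanes gone: destroy the link */
        free(link);
    }
}
```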
In the next few chapters I will introduce how the agent is realized.
1. What is the communication mechanism between the agent and both lanes?
Remember the channel I designed during my last demo? I designed a ping-pong (as opposed to an on-the-fly pipeline) TLV-based virtual bus. It is very lightweight but highly scalable, easy to extend, and thread-safe (actually only one thread is used, in an epoll I/O serialization model). So I changed the message dispatch and logic portion, and even switched to a UNIX domain socket instead of a network socket.
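As a rough illustration of what such a bus could carry, here is a sketch of a TLV frame and a ping-pong transaction over a UNIX domain socket file descriptor; the field names and the tlv_transact helper are assumptions, not the real wire format:

```c
#include <stdint.h>
#include <unistd.h>

/* One frame on the virtual bus: type, length, then the value bytes. */
struct tlv {
    uint16_t type;     /* e.g. MESSAGE_TLV_VLINK_INIT_START */
    uint16_t length;   /* number of bytes in value[] */
    uint8_t  value[];  /* payload; may itself contain nested TLVs */
};

/* Ping-pong style: send one request, then block until the peer replies. */
static ssize_t tlv_transact(int fd, const void *req, size_t req_len,
                            void *rsp, size_t rsp_max)
{
    if (write(fd, req, req_len) != (ssize_t)req_len)
        return -1;
    return read(fd, rsp, rsp_max);
}
```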
2. What is the message dispatch logic?
Currently several message types (tlv.type) are supported; I will list them (not limited to these) below:
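The full list lives in the code; as a hedged reconstruction, only MESSAGE_TLV_VLINK_INIT_START and MESSAGE_TLV_VLINK_INIT_END are taken from this page, and the shape of the enum is an assumption:

```c
enum message_tlv_type {
    MESSAGE_TLV_VLINK_INIT_START = 1,  /* clear any previous context        */
    MESSAGE_TLV_VLINK_INIT_END,        /* tell the agent to act on the data */
    /* ... further payload-carrying types would follow here ... */
};
```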
These TLV types are what gets passed between the agent and the two ends, and the agent must bind each message type to its relevant routine, like this:
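A sketch of that binding, reusing struct tlv and enum message_tlv_type from the sketches above and assuming a simple array of (type, handler) pairs; the table name cb_entries comes from the text, while the entry layout and handler signature are my assumptions:

```c
struct cb_entry {
    enum message_tlv_type type;
    int (*cb)(const struct tlv *msg);   /* routine invoked for this type */
};

static int on_vlink_init_start(const struct tlv *msg);
static int on_vlink_init_end(const struct tlv *msg);

static const struct cb_entry cb_entries[] = {
    { MESSAGE_TLV_VLINK_INIT_START, on_vlink_init_start },
    { MESSAGE_TLV_VLINK_INIT_END,   on_vlink_init_end   },
};
```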
Whenever the agent resolves a message, it looks up the callback table cb_entries for the related routine, then invokes it.
Remember that the order of the message types matters a lot. For example, when initializing a virtual link:
we issue a MESSAGE_TLV_VLINK_INIT_START message to clean up any previous context,
we issue several other messages to deliver whatever we want the agent to know,
we issue a MESSAGE_TLV_VLINK_INIT_END message to let the agent take action; after that we may wait for the response, which is also structured in TLV format.
Note that several messages can be grouped into one bigger message, as the sketch below illustrates.
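Putting the three steps together, this is a hedged sketch of how such a grouped init message might be assembled; tlv_put is a hypothetical helper, and the payload TLVs in the middle are elided:

```c
#include <stdint.h>
#include <string.h>

/* Append one TLV (2-byte type, 2-byte length, value) and return its size. */
static size_t tlv_put(uint8_t *buf, uint16_t type,
                      const void *val, uint16_t len)
{
    memcpy(buf, &type, sizeof(type));
    memcpy(buf + 2, &len, sizeof(len));
    if (len)
        memcpy(buf + 4, val, len);
    return 4u + len;
}

/* Pack START, payload TLVs, and END back-to-back into one bigger message. */
static size_t build_vlink_init(uint8_t *buf)
{
    size_t off = 0;

    off += tlv_put(buf + off, MESSAGE_TLV_VLINK_INIT_START, NULL, 0);
    /* ... payload TLVs describing whatever the agent needs to know ... */
    off += tlv_put(buf + off, MESSAGE_TLV_VLINK_INIT_END, NULL, 0);
    return off;   /* send buf[0..off), then wait for the TLV-formatted reply */
}
```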
3. What is the high-level interface for DPDK and QEMU?
As far as my implementation goes, only two APIs are provided: vlink-init and vlink-request. Their prototypes are sketched below (again, not limited to these):
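Since the original prototypes are not reproduced on this page, the following is a hedged reconstruction: the function names and struct client_endpoint are taken from the text, while the parameters and the pointer-return convention are assumptions:

```c
/* Per-lane handle that carries the resources allocated by the agent. */
struct client_endpoint;

/* Create the virtual link if needed, or join the existing one; always
 * returns the allocated resource. */
struct client_endpoint *client_endpoint_init_virtual_link(const char *name);

/* Attach to an existing virtual link only; fails (here: returns NULL) when
 * the resource is not found. */
struct client_endpoint *client_endpoint_request_virtual_link(const char *name);
```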
The QEMU side usually invokes client_endpoint_init_virtual_link.
First, it does not matter whether the virtual link already exists; it always returns the allocated resource from the agent. Please refer to the struct client_endpoint for more specific information.
client_endpoint_request_virtual_link, however, can return an error indicating that the resource was not found; this API is typically called by the DPDK side.
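For illustration, here is a short usage sketch under the assumed prototypes above; the link name "vlink0" is made up, and which lane plays which role is as described in the text (one lane creates the link, the other only attaches):

```c
#include <stdio.h>

/* The lane that is allowed to create the link (per the text, usually QEMU). */
static void creator_side(void)
{
    struct client_endpoint *ep = client_endpoint_init_virtual_link("vlink0");
    (void)ep;   /* always valid: the agent hands back the allocated resource */
}

/* A lane that only attaches to an already existing link. */
static void attacher_side(void)
{
    struct client_endpoint *ep = client_endpoint_request_virtual_link("vlink0");
    if (ep == NULL)
        fprintf(stderr, "virtual link not found yet, retry later\n");
}
```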