# Binary API support {#api_doc}
VPP provides a binary API scheme to allow a wide variety of client codes to
program data-plane tables. As of this writing, there are hundreds of binary
APIs.
Messages are defined in `*.api` files. Today, there are about 50 api files,
with more arriving as folks add programmable features. The API file compiler
sources reside in @ref src/tools/vppapigen .
Here's a typical request/response message definition, from
@ref src/vnet/interface.api :

```
autoreply define sw_interface_set_flags
{
  u32 client_index;
  u32 context;
  u32 sw_if_index;
  /* 1 = up, 0 = down */
  u8 admin_up_down;
};
```
To a first approximation, the API compiler renders this definition as
follows:

```
/****** Message ID / handler enum ******/
#ifdef vl_msg_id
vl_msg_id(VL_API_SW_INTERFACE_SET_FLAGS, vl_api_sw_interface_set_flags_t_handler)
vl_msg_id(VL_API_SW_INTERFACE_SET_FLAGS_REPLY, vl_api_sw_interface_set_flags_reply_t_handler)
#endif

/****** Message names ******/
#ifdef vl_msg_name
vl_msg_name(vl_api_sw_interface_set_flags_t, 1)
vl_msg_name(vl_api_sw_interface_set_flags_reply_t, 1)
#endif

/****** Message name, crc list ******/
#ifdef vl_msg_name_crc_list
#define foreach_vl_msg_name_crc_interface \
_(VL_API_SW_INTERFACE_SET_FLAGS, sw_interface_set_flags, f890584a) \
_(VL_API_SW_INTERFACE_SET_FLAGS_REPLY, sw_interface_set_flags_reply, dfbf3afa) \
#endif

/****** Typedefs *****/
#ifdef vl_typedefs
typedef VL_API_PACKED(struct _vl_api_sw_interface_set_flags {
  u16 _vl_msg_id;
  u32 client_index;
  u32 context;
  u32 sw_if_index;
  u8 admin_up_down;
}) vl_api_sw_interface_set_flags_t;

typedef VL_API_PACKED(struct _vl_api_sw_interface_set_flags_reply {
  u16 _vl_msg_id;
  u32 context;
  i32 retval;
}) vl_api_sw_interface_set_flags_reply_t;
#endif
```
To change the admin state of an interface, a binary api client sends a
@ref vl_api_sw_interface_set_flags_t to vpp, which will respond with a
@ref vl_api_sw_interface_set_flags_reply_t message.
Multiple layers of software, transport types, and shared libraries
implement a variety of features:

* API message allocation, tracing, pretty-printing, and replay.
* Message transport via global shared memory, pairwise/private shared
  memory, and sockets.
* Barrier synchronization of worker threads across thread-unsafe
  message handlers.
Correctly-coded message handlers know nothing about the transport used to
deliver messages to/from vpp. It's reasonably straightforward to use multiple
API message transport types simultaneously.
For historical reasons, binary api messages are (putatively) sent in network
byte order. As of this writing, we're seriously considering whether that
choice makes sense.
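The network-byte-order convention means every multi-byte field must be swapped before it reaches the wire. A minimal self-contained sketch of the idea, using the standard `htons`/`htonl` in place of VPP's `clib_host_to_net_u16`/`clib_host_to_net_u32` (the `toy_*` names are illustrative, not VPP APIs):

```c
#include <arpa/inet.h>
#include <stdint.h>

/* Illustrative only: VPP's clib_host_to_net_u16/u32 helpers behave like
 * the standard htons/htonl, converting host byte order to network
 * (big-endian) order for transmission. */
static uint16_t
toy_host_to_net_u16 (uint16_t x)
{
  return htons (x);
}

static uint32_t
toy_host_to_net_u32 (uint32_t x)
{
  return htonl (x);
}
```

Regardless of host endianness, the converted value is laid out big-endian in memory, so the receiver can recover it with the matching `ntohs`/`ntohl` call.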
## Message Allocation

Since binary API messages are always processed in order, we allocate messages
using a ring allocator whenever possible. This scheme is extremely fast when
compared with a traditional memory allocator, and doesn't cause heap
fragmentation. See
@ref src/vlibmemory/memory_shared.c @ref vl_msg_api_alloc_internal() .
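The ring idea can be sketched in a few lines of self-contained C. This is not VPP's implementation (see @ref vl_msg_api_alloc_internal() for the real thing); the slot count, slot size, and the behavior when the ring is full are illustrative assumptions:

```c
#include <string.h>

/* Toy ring allocator: because messages are consumed in order,
 * allocation is just "advance an index", with no free lists or heap
 * bookkeeping. A real implementation falls back to a heap allocator
 * when the ring is full or the message is too large. */
#define RING_SLOTS 8
#define SLOT_BYTES 64

typedef struct
{
  unsigned char data[RING_SLOTS][SLOT_BYTES];
  int busy[RING_SLOTS];   /* slot still owned by a consumer? */
  int next;               /* next candidate slot */
} msg_ring_t;

static void *
ring_alloc (msg_ring_t *r)
{
  int slot = r->next;
  if (r->busy[slot])
    return 0;             /* ring full: real code falls back to the heap */
  r->busy[slot] = 1;
  r->next = (slot + 1) % RING_SLOTS;
  return r->data[slot];
}

static void
ring_free (msg_ring_t *r, void *p)
{
  int slot = (int) (((unsigned char *) p - &r->data[0][0]) / SLOT_BYTES);
  r->busy[slot] = 0;
}
```

Allocation and free are both a handful of instructions, which is why the scheme is so much faster than a general-purpose allocator.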
Regardless of transport, binary api messages always follow a @ref msgbuf_t
header:

```
typedef struct msgbuf_
{
  unix_shared_memory_queue_t *q;
  u32 data_len;
  u32 gc_mark_timestamp;
  u8 data[0];
} msgbuf_t;
```
This structure makes it easy to trace messages without having to
decode them - simply save data_len bytes - and allows
@ref vl_msg_api_free() to rapidly dispose of message buffers:

```
void
vl_msg_api_free (void *a)
{
  msgbuf_t *rv;
  api_main_t *am = &api_main;

  rv = (msgbuf_t *) (((u8 *) a) - offsetof (msgbuf_t, data));

  /*
   * Here's the beauty of the scheme. Only one proc/thread has
   * control of a given message buffer. To free a buffer, we just
   * clear the queue field, and leave. No locks, no hits, no errors...
   */
  if (rv->q)
    {
      rv->q = 0;
      rv->gc_mark_timestamp = 0;
      return;
    }
  /* ... remainder elided ... */
}
```
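The header-recovery trick can be shown in isolation. This sketch mirrors the msgbuf_t layout with portable types (`data[]` standing in for `u8 data[0]`); `msg_alloc`/`msg_free` are hypothetical stand-ins for the VPP routines, and the sketch deliberately never returns storage to a real allocator:

```c
#include <stddef.h>
#include <stdlib.h>

/* Callers hold a pointer to the payload (data[]); the free routine
 * steps back by offsetof(...) to reach the header, then clears the
 * queue field to release ownership, as in vl_msg_api_free(). */
typedef struct msgbuf_
{
  void *q;                   /* owning queue, or 0 when free */
  unsigned int data_len;
  unsigned int gc_mark_timestamp;
  unsigned char data[];      /* payload follows the header */
} msgbuf_t;

static void *
msg_alloc (size_t nbytes, void *owning_q)
{
  msgbuf_t *mb = calloc (1, sizeof (msgbuf_t) + nbytes);
  mb->q = owning_q;
  mb->data_len = (unsigned int) nbytes;
  return mb->data;           /* caller sees only the payload */
}

static void
msg_free (void *a)
{
  msgbuf_t *mb =
    (msgbuf_t *) (((unsigned char *) a) - offsetof (msgbuf_t, data));
  mb->q = 0;                 /* clearing the queue field frees the buffer */
  mb->gc_mark_timestamp = 0;
}
```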
## Message Tracing and Replay
It's extremely important that vpp can capture and replay sizeable binary API
traces. System-level issues involving hundreds of thousands of API
transactions can be re-run in a second or less. Partial replay allows one to
binary-search for the point where the wheels fall off. One can add scaffolding
to the data plane, to trigger when complex conditions obtain.
With binary API trace, print, and replay, system-level bug reports of the form
"after 300,000 API transactions, the vpp data-plane stopped forwarding
traffic, FIX IT!" can be solved offline.
More often than not, one discovers that a control-plane client
misprograms the data plane after a long time or under complex
circumstances. Without direct evidence, "it's a data-plane problem!"
See @ref src/vlibmemory/memory_vlib.c @ref vl_msg_api_process_file() ,
and @ref src/vlibapi/api_shared.c . See also the debug CLI command "api trace".
## Client connection details
Establishing a binary API connection to vpp from a C-language client
is easy:

```
int
connect_to_vpe (char *client_name, int client_message_queue_length)
{
  vat_main_t *vam = &vat_main;
  api_main_t *am = &api_main;

  if (vl_client_connect_to_vlib ("/vpe-api", client_name,
                                 client_message_queue_length) < 0)
    return -1;

  /* Memorize vpp's binary API message input queue address */
  vam->vl_input_queue = am->shmem_hdr->vl_input_queue;
  /* And our client index */
  vam->my_client_index = am->my_client_index;
  return 0;
}
```
32 is a typical value for client_message_queue_length. Vpp cannot
block when it needs to send an API message to a binary API client, and
the vpp-side binary API message handlers are very fast. When sending
asynchronous messages, make sure to scrape the binary API rx ring with
some enthusiasm.
### binary API message RX pthread
Calling @ref vl_client_connect_to_vlib spins up a binary API message RX
pthread:

```
static void *
rx_thread_fn (void *arg)
{
  unix_shared_memory_queue_t *q;
  memory_client_main_t *mm = &memory_client_main;
  api_main_t *am = &api_main;

  q = am->vl_input_queue;

  /* So we can make the rx thread terminate cleanly */
  if (setjmp (mm->rx_thread_jmpbuf) == 0)
    {
      mm->rx_thread_jmpbuf_valid = 1;
      while (1)
        vl_msg_api_queue_handler (q);
    }
  pthread_exit (0);
}
```
To handle the binary API message queue yourself, use
@ref vl_client_connect_to_vlib_no_rx_pthread.
In turn, vl_msg_api_queue_handler(...) uses mutex/condvar signalling
to wake up, process vpp -> client traffic, then sleep. Vpp supplies a
condvar broadcast when the vpp -> client API message queue transitions
from empty to nonempty.
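The empty -> nonempty wake-up protocol can be sketched with a plain pthread mutex/condvar pair. The `toy_queue_t` type and functions below are illustrative stand-ins, not VPP's unix_shared_memory_queue_t:

```c
#include <pthread.h>

/* The producer broadcasts only on the empty -> nonempty transition;
 * the consumer sleeps while the queue is empty and drains everything
 * queued each time it wakes. */
typedef struct
{
  pthread_mutex_t lock;
  pthread_cond_t nonempty;
  int depth;                /* queued message count */
  int drained;              /* total messages consumed */
} toy_queue_t;

static void
toy_enqueue (toy_queue_t *tq)
{
  pthread_mutex_lock (&tq->lock);
  tq->depth++;
  if (tq->depth == 1)       /* empty -> nonempty: wake the reader */
    pthread_cond_broadcast (&tq->nonempty);
  pthread_mutex_unlock (&tq->lock);
}

static void *
toy_rx_thread (void *arg)
{
  toy_queue_t *tq = arg;
  pthread_mutex_lock (&tq->lock);
  while (tq->drained < 3)   /* exit after three messages, for the demo */
    {
      while (tq->depth == 0)
        pthread_cond_wait (&tq->nonempty, &tq->lock);
      tq->drained += tq->depth;   /* "process" everything queued */
      tq->depth = 0;
    }
  pthread_mutex_unlock (&tq->lock);
  return 0;
}
```

Broadcasting only on the empty -> nonempty edge keeps the producer cheap when the queue is already backlogged, since the reader is guaranteed to drain the backlog before sleeping again.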
Vpp checks its own binary API input queue at a very high rate. Vpp
invokes message handlers in "process" context [aka cooperative
multitasking thread context] at a variable rate, depending on
data-plane packet processing requirements.
## Client disconnection details
To disconnect from vpp, call @ref vl_client_disconnect_from_vlib.
Please arrange to call this function if the client application
terminates abnormally. Vpp makes every effort to hold a decent funeral
for dead clients, but vpp can't guarantee to free leaked memory in the
shared binary API segment.
## Sending binary API messages to vpp
The point of the exercise is to send binary API messages to vpp, and
to receive replies from vpp. Many vpp binary APIs comprise a client
request message, and a simple status reply. For example, to
set the admin status of an interface, one codes:
```
vl_api_sw_interface_set_flags_t *mp;

mp = vl_msg_api_alloc (sizeof (*mp));
memset (mp, 0, sizeof (*mp));
mp->_vl_msg_id = clib_host_to_net_u16 (VL_API_SW_INTERFACE_SET_FLAGS);
mp->client_index = api_main.my_client_index;
mp->sw_if_index = clib_host_to_net_u32 (<interface-sw-if-index>);
vl_msg_api_send (api_main.shmem_hdr->vl_input_queue, (u8 *)mp);
```
* Use @ref vl_msg_api_alloc to allocate message buffers

* Allocated message buffers are not initialized, and must be presumed
  to contain trash.

* Don't forget to set the _vl_msg_id field!

* As of this writing, binary API message IDs and data are sent in
  network byte order

* The client-library global data structure @ref api_main keeps track
  of sufficient pointers and handles used to communicate with vpp
## Receiving binary API messages from vpp
Unless you've made other arrangements (see @ref
vl_client_connect_to_vlib_no_rx_pthread), *messages are received on a
separate rx pthread*. Synchronization with the client application main
thread is the responsibility of the application!
Set up message handlers about as follows:

```
#define vl_typedefs         /* define message structures */
#include <vpp/api/vpe_all_api_h.h>
#undef vl_typedefs

/* declare message handlers for each api */

#define vl_endianfun        /* define endian-swap functions */
#include <vpp/api/vpe_all_api_h.h>
#undef vl_endianfun

/* instantiate all the print functions we know about */
#define vl_print(handle, ...)
#define vl_printfun
#include <vpp/api/vpe_all_api_h.h>
#undef vl_printfun

/* Define a list of all messages that the client handles */
#define foreach_vpe_api_reply_msg                       \
_(SW_INTERFACE_SET_FLAGS_REPLY, sw_interface_set_flags_reply)

static clib_error_t *
my_api_hookup (vlib_main_t * vm)
{
  api_main_t *am = &api_main;

#define _(N,n)                                          \
  vl_msg_api_set_handlers(VL_API_##N, #n,               \
                          vl_api_##n##_t_handler,       \
                          vl_noop_handler,              \
                          vl_api_##n##_t_endian,        \
                          vl_api_##n##_t_print,         \
                          sizeof(vl_api_##n##_t), 1);
  foreach_vpe_api_reply_msg;
#undef _

  return 0;
}
```
The key API used to establish message handlers is @ref
vl_msg_api_set_handlers , which sets values in multiple parallel
vectors in the @ref api_main_t structure. As of this writing, not all
vector element values can be set through the API. You'll see sporadic
API message registrations followed by minor adjustments of this form:
```
/*
 * Thread-safe API messages
 */
am->is_mp_safe[VL_API_IP_ADD_DEL_ROUTE] = 1;
am->is_mp_safe[VL_API_GET_NODE_GRAPH] = 1;
```
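The parallel-vector scheme can be illustrated in self-contained form: one array per message attribute, all indexed by message ID, with post-registration adjustments like `is_mp_safe` applied directly to the relevant vector. All names here are hypothetical stand-ins for the api_main_t fields and VPP registration routines:

```c
#include <stddef.h>

/* Toy version of registering and dispatching message handlers through
 * parallel arrays indexed by message ID. */
#define MAX_MSG_IDS 16

typedef void (*handler_fn) (void *msg);

static handler_fn msg_handler[MAX_MSG_IDS];
static size_t msg_size[MAX_MSG_IDS];
static int is_mp_safe[MAX_MSG_IDS];   /* adjusted after registration */

static void
set_handlers (int id, handler_fn h, size_t size)
{
  msg_handler[id] = h;
  msg_size[id] = size;
  is_mp_safe[id] = 0;       /* default: handler needs the worker barrier */
}

static int last_dispatched;

static void
toy_handler (void *msg)
{
  last_dispatched = *(int *) msg;
}

static void
dispatch (int id, void *msg)
{
  if (id >= 0 && id < MAX_MSG_IDS && msg_handler[id])
    msg_handler[id] (msg);
}
```

Keeping one vector per attribute (handler, size, endian function, mp-safe flag, ...) makes dispatch a couple of array lookups, and makes after-the-fact tweaks like the `is_mp_safe` assignments above a one-line array write.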