X-Git-Url: https://git.llucax.com/software/libev.git/blobdiff_plain/da64db6d2bfa6add6c6ad65d51468dba440d03e3..bcb5dd319c35f6c72b57f4153f08a9720eb9cd4e:/ev.pod

diff --git a/ev.pod b/ev.pod
index c5d2b5f..36b927a 100644
--- a/ev.pod
+++ b/ev.pod
@@ -308,15 +308,24 @@ environment variable.
 This is your standard select(2) backend. Not I<completely> standard, as
 libev tries to roll its own fd_set with no limits on the number of fds,
 but if that fails, expect a fairly low limit on the number of fds when
-using this backend. It doesn't scale too well (O(highest_fd)), but its usually
-the fastest backend for a low number of fds.
+using this backend. It doesn't scale too well (O(highest_fd)), but it's
+usually the fastest backend for a low number of (low-numbered :) fds.
+
+To get good performance out of this backend you need a high amount of
+parallelism (most of the file descriptors should be busy). If you are
+writing a server, you should C<accept ()> in a loop to accept as many
+connections as possible during one iteration. You might also want to have
+a look at C<ev_set_io_collect_interval ()> to increase the amount of
+readiness notifications you get per iteration.
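+
+For illustration only, here is a minimal sketch of that pattern: an
+C<ev_io> read watcher on a non-blocking listening socket whose callback
+accepts until the call would block. The C<handle_new_connection ()>
+helper is hypothetical and stands for whatever per-connection setup your
+program does:
+
+   static void
+   listen_cb (struct ev_loop *loop, ev_io *w, int revents)
+   {
+     for (;;)
+       {
+         /* accept as many pending connections as possible in one go */
+         int fd = accept (w->fd, 0, 0);
+
+         if (fd < 0)
+           break; /* typically EAGAIN/EWOULDBLOCK: nothing left to accept */
+
+         handle_new_connection (loop, fd); /* hypothetical helper */
+       }
+   }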
 
 =item C<EVBACKEND_POLL> (value 2, poll backend, available everywhere except on windows)
 
-And this is your standard poll(2) backend. It's more complicated than
-select, but handles sparse fds better and has no artificial limit on the
-number of fds you can use (except it will slow down considerably with a
-lot of inactive fds). It scales similarly to select, i.e. O(total_fds).
+And this is your standard poll(2) backend. It's more complicated
+than select, but handles sparse fds better and has no artificial
+limit on the number of fds you can use (except it will slow down
+considerably with a lot of inactive fds). It scales similarly to select,
+i.e. O(total_fds). See the entry for C<EVBACKEND_SELECT>, above, for
+performance tips.
 
 =item C<EVBACKEND_EPOLL> (value 4, Linux)
 
@@ -326,7 +335,7 @@ like O(total_fds) where n is the total number of fds (or the highest fd),
 epoll scales either O(1) or O(active_fds). The epoll design has a number
 of shortcomings, such as silently dropping events in some hard-to-detect
 cases and requiring a syscall per fd change, no fork support and bad
-support for dup:
+support for dup.
 
 While stopping, setting and starting an I/O watcher in the same iteration
 will result in some caching, there is still a syscall per such incident
@@ -338,27 +347,49 @@ Please note that epoll sometimes generates spurious notifications, so you
 need to use non-blocking I/O or other means to avoid blocking when no data
 (or space) is available.
 
+Best performance from this backend is achieved by not unregistering all
+watchers for a file descriptor until it has been closed, if possible, i.e.
+keep at least one watcher active per fd at all times.
+
+While nominally embeddable in other event loops, this feature is broken in
+all kernel versions tested so far.
+
 =item C<EVBACKEND_KQUEUE> (value 8, most BSD clones)
 
 Kqueue deserves special mention, as at the time of this writing, it
-was broken on I<all> BSDs (usually it doesn't work with anything but
-sockets and pipes, except on Darwin, where of course it's completely
-useless. On NetBSD, it seems to work for all the FD types I tested, so it
-is used by default there). For this reason it's not being "autodetected"
+was broken on all BSDs except NetBSD (usually it doesn't work reliably
+with anything but sockets and pipes, except on Darwin, where of course
+it's completely useless). For this reason it's not being "autodetected"
 unless you explicitly specify it in the flags (i.e.
 using C<EVBACKEND_KQUEUE>) or libev was compiled on a known-to-be-good
 (-enough) system like NetBSD.
 
+You still can embed kqueue into a normal poll or select backend and use it
+only for sockets (after having made sure that sockets work with kqueue on
+the target platform). See C<ev_embed> watchers for more info.
+
 It scales in the same way as the epoll backend, but the interface to the
-kernel is more efficient (which says nothing about its actual speed,
-of course). While stopping, setting and starting an I/O watcher does
-never cause an extra syscall as with epoll, it still adds up to two event
-changes per incident, support for C<fork ()> is very bad and it drops fds
-silently in similarly hard-to-detetc cases.
+kernel is more efficient (which says nothing about its actual speed, of
+course). While stopping, setting and starting an I/O watcher does never
+cause an extra syscall as with C<EVBACKEND_EPOLL>, it still adds up to
+two event changes per incident, support for C<fork ()> is very bad and it
+drops fds silently in similarly hard-to-detect cases.
+
+This backend usually performs well under most conditions.
+
+While nominally embeddable in other event loops, this doesn't work
+everywhere, so you might need to test for this. And since it is broken
+almost everywhere, you should only use it when you have a lot of sockets
+(for which it usually works), by embedding it into another event loop
+(e.g. C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>) and using it only for
+sockets (see the example after this list).
 
 =item C<EVBACKEND_DEVPOLL> (value 16, Solaris 8)
 
-This is not implemented yet (and might never be).
+This is not implemented yet (and might never be, unless you send me an
+implementation). According to reports, C</dev/poll> only supports sockets
+and is not embeddable, which would limit the usefulness of this backend
+immensely.
 
 =item C<EVBACKEND_PORT> (value 32, Solaris 10)
 
@@ -369,12 +400,19 @@ Please note that solaris event ports can deliver a lot of spurious
 notifications, so you need to use non-blocking I/O or other means to avoid
 blocking when no data (or space) is available.
 
+While this backend scales well, it requires one system call per active
+file descriptor per loop iteration. For small and medium numbers of file
+descriptors a "slow" C<EVBACKEND_SELECT> or C<EVBACKEND_POLL> backend
+might perform better.
+
 =item C<EVBACKEND_ALL>
 
 Try all backends (even potentially broken ones that wouldn't be tried
 with C<EVFLAG_AUTO>). Since this is a mask, you can do stuff such as
 C<EVBACKEND_ALL & ~EVBACKEND_KQUEUE>.
 
+It is definitely not recommended to use this flag.
+
 =back
 
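+As an example of the embedding approach suggested for C<EVBACKEND_KQUEUE>
+above, the following sketch (using the C<ev_embed> API described later in
+this document; error handling omitted) creates a kqueue loop for sockets
+when the platform supports embedding it, and falls back to the default
+loop otherwise:
+
+   struct ev_loop *loop = ev_default_loop (0);
+   struct ev_loop *loop_socket = 0;
+   ev_embed embed; /* must stay alive as long as it is started */
+
+   if (ev_supported_backends () & ev_embeddable_backends () & EVBACKEND_KQUEUE)
+     if ((loop_socket = ev_loop_new (EVBACKEND_KQUEUE)))
+       {
+         ev_embed_init (&embed, 0, loop_socket);
+         ev_embed_start (loop, &embed);
+       }
+
+   if (!loop_socket)
+     loop_socket = loop;
+
+   /* now use loop_socket for all sockets, and loop for everything else */
+
+You would then register socket watchers on C<loop_socket> and everything
+else on C<loop>.
+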
 If one or more of these are ored into the flags value, then only these
@@ -598,7 +636,7 @@ overhead for the actual polling but can deliver many events at once.
 By setting a higher I<io collect interval> you allow libev to spend more
 time collecting I/O events, so you can handle more events per iteration,
 at the cost of increasing latency. Timeouts (both C<ev_periodic> and
-C<ev_timer>) will be not affected. Setting this to a non-null bvalue will
+C<ev_timer>) will not be affected. Setting this to a non-null value will
 introduce an additional C<ev_sleep ()> call into most loop iterations.
 
 Likewise, by setting a higher I<timeout collect interval> you allow libev
@@ -1625,11 +1663,11 @@ It is recommended to give C<ev_check> watchers highest (C<EV_MAXPRI>)
 priority, to ensure that they are being run before any other watchers
 after the poll. Also, C<ev_check> watchers (and C<ev_prepare> watchers,
 too) should not activate ("feed") events into libev. While libev fully
-supports this, they will be called before other C<ev_check> watchers did
-their job. As C<ev_check> watchers are often used to embed other event
-loops those other event loops might be in an unusable state until their
-C<ev_check> watcher ran (always remind yourself to coexist peacefully with
-others).
+supports this, they will be called before other C<ev_check> watchers
+did their job. As C<ev_check> watchers are often used to embed other
+(non-libev) event loops those other event loops might be in an unusable
+state until their C<ev_check> watcher ran (always remind yourself to
+coexist peacefully with others).
 
 =head3 Watcher-Specific Functions and Data Members
 
@@ -1778,7 +1816,7 @@ this.
 This is a rather advanced watcher type that lets you embed one event loop
 into another (currently only C<ev_io> events are supported in the embedded
 loop, other types of watchers might be handled in a delayed or incorrect
-fashion and must not be used). (See portability notes, below).
+fashion and must not be used).
 
 There are primarily two reasons you would want that: work around bugs and
 prioritise I/O.
@@ -1843,22 +1881,6 @@ create it, and if that fails, use the normal loop for everything:
   else
     loop_lo = loop_hi;
 
-=head2 Portability notes
-
-Kqueue is nominally embeddable, but this is broken on all BSDs that I
-tried, in various ways. Usually the embedded event loop will simply never
-receive events, sometimes it will only trigger a few times, sometimes in a
-loop. Epoll is also nominally embeddable, but many Linux kernel versions
-will always eport the epoll fd as ready, even when no events are pending.
-
-While libev allows embedding these backends (they are contained in
-C<ev_embeddable_backends ()>), take extreme care that it will actually
-work.
-
-When in doubt, create a dynamic event loop forced to use sockets (this
-usually works) and possibly another thread and a pipe or so to report to
-your main event loop.
-
 =head3 Watcher-Specific Functions and Data Members
 
 =over 4