.\" ========================================================================
.\"
.IX Title "EV 1"
-.TH EV 1 "2007-12-21" "perl v5.8.8" "User Contributed Perl Documentation"
+.TH EV 1 "2007-12-22" "perl v5.8.8" "User Contributed Perl Documentation"
.SH "NAME"
libev \- a high performance full\-featured event loop written in C
.SH "SYNOPSIS"
Returns the current time as libev would use it. Please note that the
\&\f(CW\*(C`ev_now\*(C'\fR function is usually faster and also often returns the timestamp
you actually want to know.
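+.Sp
+For example (a sketch; \f(CW\*(C`loop\*(C'\fR is your event loop), comparing the
+two calls:
+.Sp
+.Vb 2
+\&    ev_tstamp walltime = ev_time ();     /* queries the system clock */
+\&    ev_tstamp looptime = ev_now (loop);  /* cached time, usually cheaper */
+.Ve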
+.IP "ev_sleep (ev_tstamp interval)" 4
+.IX Item "ev_sleep (ev_tstamp interval)"
+Sleep for the given interval: the current thread will be blocked until
+either it is interrupted or the given time interval has passed. Basically
+this is a sub\-second\-resolution \f(CW\*(C`sleep ()\*(C'\fR.
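+.Sp
+For example (a sketch; the interval value is illustrative):
+.Sp
+.Vb 1
+\&    ev_sleep (0.5); /* block this thread for about half a second */
+.Ve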
.IP "int ev_version_major ()" 4
.IX Item "int ev_version_major ()"
.PD 0
This is your standard \fIselect\fR\|(2) backend. Not \fIcompletely\fR standard, as
libev tries to roll its own fd_set with no limits on the number of fds,
but if that fails, expect a fairly low limit on the number of fds when
-using this backend. It doesn't scale too well (O(highest_fd)), but its usually
-the fastest backend for a low number of fds.
+using this backend. It doesn't scale too well (O(highest_fd)), but it's
+usually the fastest backend for a low number of (low\-numbered :) fds.
+.Sp
+To get good performance out of this backend you need a high degree of
+parallelism (most of the file descriptors should be busy). If you are
+writing a server, you should \f(CW\*(C`accept ()\*(C'\fR in a loop to accept as
+many connections as possible during one iteration, as in the sketch
+below. You might also want to have a look at
+\&\f(CW\*(C`ev_set_io_collect_interval ()\*(C'\fR to increase the amount of
+readiness notifications you get per iteration.
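+.Sp
+A minimal sketch of such an accept loop (callback and variable names are
+illustrative; the listening socket is assumed to be non\-blocking):
+.Sp
+.Vb 13
+\&    static void
+\&    listen_cb (struct ev_loop *loop, ev_io *w, int revents)
+\&    {
+\&      for (;;)
+\&        {
+\&          int fd = accept (w->fd, 0, 0);
+\&
+\&          if (fd < 0)
+\&            break; /* EAGAIN: no more pending connections */
+\&
+\&          /* make fd non-blocking, then start an ev_io watcher on it */
+\&        }
+\&    }
+.Ve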
.ie n .IP """EVBACKEND_POLL"" (value 2, poll backend, available everywhere except on windows)" 4
.el .IP "\f(CWEVBACKEND_POLL\fR (value 2, poll backend, available everywhere except on windows)" 4
.IX Item "EVBACKEND_POLL (value 2, poll backend, available everywhere except on windows)"
-And this is your standard \fIpoll\fR\|(2) backend. It's more complicated than
-select, but handles sparse fds better and has no artificial limit on the
-number of fds you can use (except it will slow down considerably with a
-lot of inactive fds). It scales similarly to select, i.e. O(total_fds).
+And this is your standard \fIpoll\fR\|(2) backend. It's more complicated
+than select, but handles sparse fds better and has no artificial
+limit on the number of fds you can use (except it will slow down
+considerably with a lot of inactive fds). It scales similarly to select,
+i.e. O(total_fds). See the entry for \f(CW\*(C`EVBACKEND_SELECT\*(C'\fR, above, for
+performance tips.
.ie n .IP """EVBACKEND_EPOLL"" (value 4, Linux)" 4
.el .IP "\f(CWEVBACKEND_EPOLL\fR (value 4, Linux)" 4
.IX Item "EVBACKEND_EPOLL (value 4, Linux)"
like O(total_fds), where total_fds is the total number of fds (or the
highest fd), epoll scales either O(1) or O(active_fds). The epoll design
has a number of shortcomings, such as silently dropping events in some
hard-to-detect
-cases and rewuiring a syscall per fd change, no fork support and bad
-support for dup:
+cases and requiring a syscall per fd change, no fork support and bad
+support for dup.
.Sp
While stopping, setting and starting an I/O watcher in the same iteration
will result in some caching, there is still a syscall per such incident
Please note that epoll sometimes generates spurious notifications, so you
need to use non-blocking I/O or other means to avoid blocking when no data
(or space) is available.
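+.Sp
+For example (a sketch of a read callback body; \f(CW\*(C`buf\*(C'\fR is
+illustrative):
+.Sp
+.Vb 4
+\&    ssize_t len = read (w->fd, buf, sizeof buf);
+\&
+\&    if (len < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
+\&      return; /* spurious notification, just wait for the next event */
+.Ve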
+.Sp
+Best performance from this backend is achieved by not unregistering all
+watchers for a file descriptor until it has been closed, if possible, i.e.
+keep at least one watcher active per fd at all times.
+.Sp
+While nominally embeddable in other event loops, this feature is broken in
+all kernel versions tested so far.
.ie n .IP """EVBACKEND_KQUEUE"" (value 8, most \s-1BSD\s0 clones)" 4
.el .IP "\f(CWEVBACKEND_KQUEUE\fR (value 8, most \s-1BSD\s0 clones)" 4
.IX Item "EVBACKEND_KQUEUE (value 8, most BSD clones)"
Kqueue deserves special mention, as at the time of this writing, it
-was broken on \fIall\fR BSDs (usually it doesn't work with anything but
-sockets and pipes, except on Darwin, where of course it's completely
-useless. On NetBSD, it seems to work for all the \s-1FD\s0 types I tested, so it
-is used by default there). For this reason it's not being \*(L"autodetected\*(R"
+was broken on all BSDs except NetBSD (usually it doesn't work reliably
+with anything but sockets and pipes, except on Darwin, where of course
+it's completely useless). For this reason it's not being \*(L"autodetected\*(R"
unless you explicitly specify it in the flags (i.e. using
\&\f(CW\*(C`EVBACKEND_KQUEUE\*(C'\fR) or libev was compiled on a known-to-be-good (\-enough)
system like NetBSD.
.Sp
+You still can embed kqueue into a normal poll or select backend and use it
+only for sockets (after having made sure that sockets work with kqueue on
+the target platform). See \f(CW\*(C`ev_embed\*(C'\fR watchers for more info.
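+.Sp
+A minimal sketch of this pattern (error handling omitted; see the
+\&\f(CW\*(C`ev_embed\*(C'\fR section for the full example):
+.Sp
+.Vb 11
+\&    struct ev_loop *loop = ev_default_init (0);
+\&    struct ev_loop *loop_socket = ev_loop_new (EVBACKEND_KQUEUE);
+\&    static ev_embed embed;
+\&
+\&    if (loop_socket)
+\&      {
+\&        ev_embed_init (&embed, 0, loop_socket);
+\&        ev_embed_start (loop, &embed);
+\&      }
+\&    else
+\&      loop_socket = loop; /* no kqueue, use the main loop for sockets */
+.Ve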
+.Sp
It scales in the same way as the epoll backend, but the interface to the
-kernel is more efficient (which says nothing about its actual speed,
-of course). While stopping, setting and starting an I/O watcher does
-never cause an extra syscall as with epoll, it still adds up to two event
-changes per incident, support for \f(CW\*(C`fork ()\*(C'\fR is very bad and it drops fds
-silently in similarly hard-to-detetc cases.
+kernel is more efficient (which says nothing about its actual speed, of
+course). While stopping, setting and starting an I/O watcher never
+causes an extra syscall as with \f(CW\*(C`EVBACKEND_EPOLL\*(C'\fR, it still adds up to
+two event changes per incident, support for \f(CW\*(C`fork ()\*(C'\fR is very bad and it
+drops fds silently in similarly hard-to-detect cases.
+.Sp
+This backend performs well under most conditions.
+.Sp
+While nominally embeddable in other event loops, this doesn't work
+everywhere, so you might need to test for this. And since it is broken
+almost everywhere, you should only use it when you have a lot of sockets
+(for which it usually works), by embedding it into another event loop
+(e.g. \f(CW\*(C`EVBACKEND_SELECT\*(C'\fR or \f(CW\*(C`EVBACKEND_POLL\*(C'\fR) and using it only for
+sockets.
.ie n .IP """EVBACKEND_DEVPOLL"" (value 16, Solaris 8)" 4
.el .IP "\f(CWEVBACKEND_DEVPOLL\fR (value 16, Solaris 8)" 4
.IX Item "EVBACKEND_DEVPOLL (value 16, Solaris 8)"
-This is not implemented yet (and might never be).
+This is not implemented yet (and might never be, unless you send me an
+implementation). According to reports, \f(CW\*(C`/dev/poll\*(C'\fR only supports sockets
+and is not embeddable, which would limit the usefulness of this backend
+immensely.
.ie n .IP """EVBACKEND_PORT"" (value 32, Solaris 10)" 4
.el .IP "\f(CWEVBACKEND_PORT\fR (value 32, Solaris 10)" 4
.IX Item "EVBACKEND_PORT (value 32, Solaris 10)"
Please note that Solaris event ports can deliver a lot of spurious
notifications, so you need to use non-blocking I/O or other means to avoid
blocking when no data (or space) is available.
+.Sp
+While this backend scales well, it requires one system call per active
+file descriptor per loop iteration. For small and medium numbers of file
+descriptors a \*(L"slow\*(R" \f(CW\*(C`EVBACKEND_SELECT\*(C'\fR or \f(CW\*(C`EVBACKEND_POLL\*(C'\fR backend
+might perform better.
.ie n .IP """EVBACKEND_ALL""" 4
.el .IP "\f(CWEVBACKEND_ALL\fR" 4
.IX Item "EVBACKEND_ALL"
Try all backends (even potentially broken ones that wouldn't be tried
with \f(CW\*(C`EVFLAG_AUTO\*(C'\fR). Since this is a mask, you can do stuff such as
\&\f(CW\*(C`EVBACKEND_ALL & ~EVBACKEND_KQUEUE\*(C'\fR.
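+.Sp
+For example (a sketch):
+.Sp
+.Vb 1
+\&    struct ev_loop *loop = ev_default_loop (EVBACKEND_ALL & ~EVBACKEND_KQUEUE);
+.Ve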
+.Sp
+It is definitely not recommended to use this flag.
.RE
.RS 4
.Sp
\& ev_ref (loop);
\& ev_signal_stop (loop, &exitsig);
.Ve
+.IP "ev_set_io_collect_interval (loop, ev_tstamp interval)" 4
+.IX Item "ev_set_io_collect_interval (loop, ev_tstamp interval)"
+.PD 0
+.IP "ev_set_timeout_collect_interval (loop, ev_tstamp interval)" 4
+.IX Item "ev_set_timeout_collect_interval (loop, ev_tstamp interval)"
+.PD
+These advanced functions influence the time that libev will spend waiting
+for events. Both are by default \f(CW0\fR, meaning that libev will try to
+invoke timer/periodic callbacks and I/O callbacks with minimum latency.
+.Sp
+Setting these to a higher value (the \f(CW\*(C`interval\*(C'\fR \fImust\fR be >= \f(CW0\fR)
+allows libev to delay invocation of I/O and timer/periodic callbacks to
+increase efficiency of loop iterations.
+.Sp
+The background is that sometimes your program runs just fast enough to
+handle one (or very few) event(s) per loop iteration. While this makes
+the program responsive, it also wastes a lot of \s-1CPU\s0 time to poll for new
+events, especially with backends like \f(CW\*(C`select ()\*(C'\fR which have a high
+overhead for the actual polling but can deliver many events at once.
+.Sp
+By setting a higher \fIio collect interval\fR you allow libev to spend more
+time collecting I/O events, so you can handle more events per iteration,
+at the cost of increasing latency. Timeouts (both \f(CW\*(C`ev_periodic\*(C'\fR and
+\&\f(CW\*(C`ev_timer\*(C'\fR) will not be affected. Setting this to a non-zero value will
+introduce an additional \f(CW\*(C`ev_sleep ()\*(C'\fR call into most loop iterations.
+.Sp
+Likewise, by setting a higher \fItimeout collect interval\fR you allow libev
+to spend more time collecting timeouts, at the expense of increased
+latency (the watcher callback will be called later). \f(CW\*(C`ev_io\*(C'\fR watchers
+will not be affected. Setting this to a non-zero value will not introduce
+any overhead in libev.
+.Sp
+Many (busy) programs can usually benefit from setting the io collect
+interval to a value near \f(CW0.1\fR or so, which is often enough for
+interactive servers (of course not for games); likewise for timeouts. It
+usually doesn't make much sense to set it to a lower value than \f(CW0.01\fR,
+as this approaches the timing granularity of most systems.
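+.Sp
+For example (the value is illustrative), a busy server might trade up to
+50ms of extra latency for fewer, larger loop iterations:
+.Sp
+.Vb 1
+\&    ev_set_io_collect_interval (loop, 0.05);
+.Ve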
.SH "ANATOMY OF A WATCHER"
.IX Header "ANATOMY OF A WATCHER"
A watcher is a structure that you create and register to record your
priority, to ensure that they are being run before any other watchers
after the poll. Also, \f(CW\*(C`ev_check\*(C'\fR watchers (and \f(CW\*(C`ev_prepare\*(C'\fR watchers,
too) should not activate (\*(L"feed\*(R") events into libev. While libev fully
-supports this, they will be called before other \f(CW\*(C`ev_check\*(C'\fR watchers did
-their job. As \f(CW\*(C`ev_check\*(C'\fR watchers are often used to embed other event
-loops those other event loops might be in an unusable state until their
-\&\f(CW\*(C`ev_check\*(C'\fR watcher ran (always remind yourself to coexist peacefully with
-others).
+supports this, they will be called before other \f(CW\*(C`ev_check\*(C'\fR watchers
+have done their job. As \f(CW\*(C`ev_check\*(C'\fR watchers are often used to embed
+other (non\-libev) event loops, those other event loops might be in an
+unusable state until their \f(CW\*(C`ev_check\*(C'\fR watcher ran (always remind
+yourself to coexist peacefully with others).
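+.Sp
+For example (a sketch; \f(CW\*(C`check_cb\*(C'\fR is illustrative), giving a
+check watcher the highest priority so it runs before other check watchers:
+.Sp
+.Vb 5
+\&    ev_check check;
+\&
+\&    ev_check_init (&check, check_cb);
+\&    ev_set_priority (&check, EV_MAXPRI);
+\&    ev_check_start (loop, &check);
+.Ve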
.PP
\fIWatcher-Specific Functions and Data Members\fR
.IX Subsection "Watcher-Specific Functions and Data Members"
This is a rather advanced watcher type that lets you embed one event loop
into another (currently only \f(CW\*(C`ev_io\*(C'\fR events are supported in the embedded
loop, other types of watchers might be handled in a delayed or incorrect
-fashion and must not be used). (See portability notes, below).
+fashion and must not be used).
.PP
There are primarily two reasons you would want that: work around bugs and
prioritise I/O.
\& else
\& loop_lo = loop_hi;
.Ve
-.Sh "Portability notes"
-.IX Subsection "Portability notes"
-Kqueue is nominally embeddable, but this is broken on all BSDs that I
-tried, in various ways. Usually the embedded event loop will simply never
-receive events, sometimes it will only trigger a few times, sometimes in a
-loop. Epoll is also nominally embeddable, but many Linux kernel versions
-will always eport the epoll fd as ready, even when no events are pending.
-.PP
-While libev allows embedding these backends (they are contained in
-\&\f(CW\*(C`ev_embeddable_backends ()\*(C'\fR), take extreme care that it will actually
-work.
-.PP
-When in doubt, create a dynamic event loop forced to use sockets (this
-usually works) and possibly another thread and a pipe or so to report to
-your main event loop.
.PP
\fIWatcher-Specific Functions and Data Members\fR
.IX Subsection "Watcher-Specific Functions and Data Members"
be attempted. This effectively replaces \f(CW\*(C`gettimeofday\*(C'\fR by
\&\f(CW\*(C`clock_gettime (CLOCK_REALTIME, ...)\*(C'\fR and will not normally
affect correctness. See the note about libraries in the description of
\&\f(CW\*(C`EV_USE_MONOTONIC\*(C'\fR, though.
+.IP "\s-1EV_USE_NANOSLEEP\s0" 4
+.IX Item "EV_USE_NANOSLEEP"
+If defined to be \f(CW1\fR, libev will assume that \f(CW\*(C`nanosleep ()\*(C'\fR is available
+and will use it for delays. Otherwise it will use \f(CW\*(C`select ()\*(C'\fR.
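+.Sp
+For example, when embedding libev you might enable it in your wrapper
+file (a sketch of the usual embedding pattern):
+.Sp
+.Vb 2
+\&    #define EV_USE_NANOSLEEP 1
+\&    #include "ev.c"
+.Ve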
.IP "\s-1EV_USE_SELECT\s0" 4
.IX Item "EV_USE_SELECT"
If undefined or defined to be \f(CW1\fR, libev will compile in support for the