+
+=head1 COMPLEXITIES
+
+In this section the complexities of (many of) the algorithms used inside
+libev will be explained. For complexity discussions about backends see the
+documentation for C<ev_default_init>.
+
+All of the following are about amortised time: if an array needs to be
+extended, libev needs to realloc and move the whole array, but this
+happens asymptotically less often as the number of elements grows, so
+O(1) means that libev might do a lengthy realloc operation in rare
+cases, but on average it is much faster and asymptotically approaches
+constant time.
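+
+As a rough illustration of why this works, here is a minimal sketch of
+such a doubling strategy (C<grow_array> is a made-up helper for this
+example, not libev's internal code):
+
+ /* Any single insert may trigger an expensive realloc, but doubling
+  * means this happens only about log2(n) times over n inserts, so the
+  * average cost per insert stays O(1). */
+ #include <stdlib.h>
+
+ static void *
+ grow_array (void *base, size_t elem_size, int *cur, int need)
+ {
+   if (need <= *cur)
+     return base; /* common case: enough room already, O(1) */
+
+   int ncur = *cur ? *cur : 4;
+   while (ncur < need)
+     ncur <<= 1; /* grow geometrically */
+
+   base = realloc (base, elem_size * (size_t)ncur); /* the rare, lengthy step */
+   if (!base)
+     abort ();
+
+   *cur = ncur;
+   return base;
+ }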
+
+=over 4
+
+=item Starting and stopping timer/periodic watchers: O(log skipped_other_timers)
+
+This means that, when you have a watcher that triggers in one hour and
+there are 100 watchers that would trigger before that, then inserting
+will have to skip roughly seven (C<ld 100>) of these watchers.
+
+=item Changing timer/periodic watchers (by autorepeat or calling again): O(log skipped_other_timers)
+
+This means that changing a timer costs less than removing/adding it, as
+only the relative motion in the event queue has to be paid for.
+
+=item Starting io/check/prepare/idle/signal/child watchers: O(1)
+
+These just add the watcher to an array or to the head of a list.
+
+=item Stopping check/prepare/idle watchers: O(1)
+
+=item Stopping an io/signal/child watcher: O(number_of_watchers_for_this_(fd/signal/pid % EV_PID_HASHSIZE))
+
+These watchers are stored in lists, which then need to be walked to find
+the correct watcher to remove. The lists are usually short (you usually
+don't have many watchers waiting for the same fd or signal).
+
+=item Finding the next timer in each loop iteration: O(1)
+
+By virtue of using a binary heap, the next timer is always found at the
+beginning of the storage array (see the sketch after this list).
+
+=item Each change on a file descriptor per loop iteration: O(number_of_watchers_for_this_fd)
+
+A change means an I/O watcher gets started or stopped, which requires
+libev to recalculate its status (and possibly tell the kernel, depending
+on the backend and whether C<ev_io_set> was used).
+
+=item Activating one watcher (putting it into the pending state): O(1)
+
+=item Priority handling: O(number_of_priorities)
+
+Priorities are implemented by allocating some space for each
+priority. When doing priority-based operations, libev usually has to
+linearly search all the priorities, but starting/stopping and activating
+watchers becomes O(1) with respect to priority handling.
+
+=back
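+
+The timer-related items above can be illustrated with a generic binary
+min-heap; this is only a sketch, not libev's actual timer storage. The
+earliest timer always sits at index C<0>, so finding it is O(1), while
+an insert merely sifts the new entry up past its ancestors, which is
+where the logarithmic cost quoted above comes from.
+
+ /* Sketch of a binary min-heap keyed on expiry time. */
+ typedef struct
+ {
+   double at; /* expiry time; real watcher data omitted */
+ } heap_elem;
+
+ /* O(1): the next timer to expire is always the root of the heap. */
+ static heap_elem *
+ heap_min (heap_elem *heap, int count)
+ {
+   return count ? &heap[0] : 0;
+ }
+
+ /* O(log n): append at the end, then sift the new element up past its
+  * ancestors. Assumes the array already has room for one more element
+  * (see the amortised growth discussion above). */
+ static void
+ heap_insert (heap_elem *heap, int *count, heap_elem elem)
+ {
+   int i = (*count)++;
+
+   while (i && elem.at < heap[(i - 1) / 2].at)
+     {
+       heap[i] = heap[(i - 1) / 2]; /* pull the later parent down */
+       i = (i - 1) / 2;
+     }
+
+   heap[i] = elem;
+ }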
+
+
+=head1 WIN32 PLATFORM LIMITATIONS AND WORKAROUNDS
+
+Win32 doesn't support any of the standards (e.g. POSIX) that libev
+requires, and its I/O model is fundamentally incompatible with the POSIX
+model. Libev still offers limited functionality on this platform in
+the form of the C<EVBACKEND_SELECT> backend, and only supports socket
+descriptors. This only applies when using Win32 natively, not when using
+e.g. cygwin.
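+
+For example, a program targeting native Win32 could request the select
+backend explicitly when creating the default loop. The following is only
+a sketch with minimal error handling; whether a usable backend actually
+comes up still depends on how libev was built:
+
+ #include <ev.h>
+
+ int
+ main (void)
+ {
+   /* On native Win32 only the select backend is usable, and only
+      socket descriptors may be watched with it. */
+   struct ev_loop *loop = ev_default_loop (EVBACKEND_SELECT);
+
+   if (!loop)
+     return 1; /* no usable backend could be initialised */
+
+   /* ... register socket watchers here, then run the event loop ... */
+   return 0;
+ }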
+
+There is no supported compilation method available on windows except
+embedding libev into other applications.
+
+Due to the many, low, and arbitrary limits on the win32 platform and the
+abysmal performance of winsockets, using a large number of sockets is not
+recommended (and not reasonable). If your program needs to use more than
+a hundred or so sockets, then it likely needs a totally different
+implementation for windows, as libev offers the POSIX model, which cannot
+be implemented efficiently on windows (microsoft monopoly games).
+
+=over 4
+
+=item The winsocket select function
+
+The winsocket C<select> function doesn't follow POSIX in that it requires
+socket I<handles> and not socket I<file descriptors>. This makes select
+very inefficient, and also requires a mapping from file descriptors
+to socket handles. See the discussion of the C<EV_SELECT_USE_FD_SET>,
+C<EV_SELECT_IS_WINSOCKET> and C<EV_FD_TO_WIN32_HANDLE> preprocessor
+symbols for more info.
+
+The configuration for a "naked" win32 using the microsoft runtime
+libraries and raw winsocket select is:
+
+ #define EV_USE_SELECT 1
+ #define EV_SELECT_IS_WINSOCKET 1 /* forces EV_SELECT_USE_FD_SET, too */
+
+Note that winsocket's handling of fd sets is O(n), so you can easily get
+a complexity in the O(n²) range when using win32.
+
+=item Limited number of file descriptors
+
+Windows has numerous arbitrary (and low) limits on things. Early versions
+of winsocket's select only supported waiting for a maximum of C<64>
+handles (probably owing to the fact that all windows kernels can only
+wait for C<64> things at the same time internally; microsoft recommends
+spawning a chain of threads, each waiting for 63 handles and the previous
+thread).
+
+Newer versions support more handles, but you need to define C<FD_SETSIZE>
+to some high number (e.g. C<2048>) before compiling the winsocket select
+call (which might be in libev or elsewhere; for example, perl does its
+own select emulation on windows).
+
+Another limit is the number of file descriptors in the microsoft runtime
+libraries, which by default is C<64> (there must be a hidden I<64> fetish
+or something like this inside microsoft). You can raise this limit to
+C<2048> (another arbitrary limit) by calling C<_setmaxstdio>, but that
+call is broken in many versions of the microsoft runtime libraries. See
+the sketch after this list for both C<FD_SETSIZE> and C<_setmaxstdio>.
+
+This might get you to about C<512> or C<2048> sockets (depending on
+windows version and/or the phase of the moon). To get more, you need to
+wrap all I/O functions and provide your own fd management, but the cost of
+calling select (O(n²)) will likely make this unworkable.
+
+=back
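+
+The two workarounds mentioned above can be combined as follows; this is
+only a sketch, the numbers are as arbitrary as the limits themselves,
+and whether C<_setmaxstdio> actually helps depends on the version of the
+runtime libraries:
+
+ /* FD_SETSIZE must be defined before <winsock2.h> is included, and
+  * before compiling whatever code contains the select call. */
+ #define FD_SETSIZE 2048
+ #include <winsock2.h>
+ #include <stdio.h>
+
+ int
+ main (void)
+ {
+   /* Try to raise the runtime's default descriptor limit of 64, as
+      described above - where the call is not broken. */
+   if (_setmaxstdio (2048) == -1)
+     fprintf (stderr, "could not raise the runtime descriptor limit\n");
+
+   /* ... set up winsock, libev and sockets here ... */
+   return 0;
+ }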
+
+