pth(3)                  GNU Portable Threads                  pth(3)

NAME
       pth - GNU Portable Threads

VERSION
       GNU Pth 2.0.7 (08-Jun-2006)

SYNOPSIS
       Global Library Management
           pth_init, pth_kill, pth_ctrl, pth_version.

       Thread Attribute Handling
           pth_attr_of, pth_attr_new, pth_attr_init, pth_attr_set,
           pth_attr_get, pth_attr_destroy.

       Thread Control
           pth_spawn, pth_once, pth_self, pth_suspend, pth_resume,
           pth_yield, pth_nap, pth_wait, pth_cancel, pth_abort,
           pth_raise, pth_join, pth_exit.

       Utilities
           pth_fdmode, pth_time, pth_timeout, pth_sfiodisc.

       Cancellation Management
           pth_cancel_point, pth_cancel_state.

       Event Handling
           pth_event, pth_event_typeof, pth_event_extract,
           pth_event_concat, pth_event_isolate, pth_event_walk,
           pth_event_status, pth_event_free.

       Key-Based Storage
           pth_key_create, pth_key_delete, pth_key_setdata,
           pth_key_getdata.

       Message Port Communication
           pth_msgport_create, pth_msgport_destroy,
           pth_msgport_find, pth_msgport_pending, pth_msgport_put,
           pth_msgport_get, pth_msgport_reply.

       Thread Cleanups
           pth_cleanup_push, pth_cleanup_pop.

       Process Forking
           pth_atfork_push, pth_atfork_pop, pth_fork.

       Synchronization
           pth_mutex_init, pth_mutex_acquire, pth_mutex_release,
           pth_rwlock_init, pth_rwlock_acquire, pth_rwlock_release,
           pth_cond_init, pth_cond_await, pth_cond_notify,
           pth_barrier_init, pth_barrier_reach.

       User-Space Context
           pth_uctx_create, pth_uctx_make, pth_uctx_switch,
           pth_uctx_destroy.

       Generalized POSIX Replacement API
           pth_sigwait_ev, pth_accept_ev, pth_connect_ev,
           pth_select_ev, pth_poll_ev, pth_read_ev, pth_readv_ev,
           pth_write_ev, pth_writev_ev, pth_recv_ev,
           pth_recvfrom_ev, pth_send_ev, pth_sendto_ev.

       Standard POSIX Replacement API
           pth_nanosleep, pth_usleep, pth_sleep, pth_waitpid,
           pth_system, pth_sigmask, pth_sigwait, pth_accept,
           pth_connect, pth_select, pth_pselect, pth_poll, pth_read,
           pth_readv, pth_write, pth_writev, pth_pread, pth_pwrite,
           pth_recv, pth_recvfrom, pth_send, pth_sendto.

          ____  _   _
         |  _ \| |_| |__
         | |_) | __| '_ \         ``Only those who attempt
         |  __/| |_| | | |          the absurd can achieve
         |_|    \__|_| |_|          the impossible.''

DESCRIPTION
       Pth is a very portable POSIX/ANSI-C based library for Unix
       platforms which provides non-preemptive priority-based sched‐
       uling for multiple threads of execution (aka `multithread‐
       ing') inside event-driven applications. All threads run in
       the same address space of the application process, but each
       thread has its own individual program counter, run-time
       stack, signal mask and "errno" variable.

       The thread scheduling itself is done in a cooperative way,
       i.e., the threads are managed and dispatched by a priority-
       and event-driven non-preemptive scheduler. The intention is
       that this way both better portability and run-time perfor‐
       mance is achieved than with preemptive scheduling. The event
       facility allows threads to wait until various types of inter‐
       nal and external events occur, including pending I/O on file
       descriptors, asynchronous signals, elapsed timers, pending
       I/O on message ports, thread and process termination, and
       even results of customized callback functions.

       Pth also provides an optional emulation API for POSIX.1c
       threads (`Pthreads') which can be used for backward compati‐
       bility to existing multithreaded applications. See Pth's
       pthread(3) manual page for details.

       Threading Background

       When programming event-driven applications, usually servers,
       lots of regular jobs and one-shot requests have to be pro‐
       cessed in parallel.  To efficiently simulate this parallel
       processing on uniprocessor machines, we use `multitasking' --
       that is, we have the application ask the operating system to
       spawn multiple instances of itself. On Unix, typically the
       kernel implements multitasking in a preemptive and priority-
       based way through heavy-weight processes spawned with
       fork(2).  These processes usually do not share a common
       address space. Instead they are clearly separated from each
        other, and are created by directly cloning a process address
       space (although modern kernels use memory segment mapping and
       copy-on-write semantics to avoid unnecessary copying of phys‐
       ical memory).

       The drawbacks are obvious: Sharing data between the processes
       is complicated, and can usually only be done efficiently
       through shared memory (but which itself is not very porta‐
       ble). Synchronization is complicated because of the preemp‐
       tive nature of the Unix scheduler (one has to use atomic
       locks, etc). The machine's resources can be exhausted very
       quickly when the server application has to serve too many
       long-running requests (heavy-weight processes cost memory).
       And when each request spawns a sub-process to handle it, the
        server's performance and responsiveness are horrible
       (heavy-weight processes cost time to spawn). Finally, the
       server application doesn't scale very well with the load
       because of these resource problems. In practice, lots of
       tricks are usually used to overcome these problems - ranging
       from pre-forked sub-process pools to semi-serialized process‐
       ing, etc.

       One of the most elegant ways to solve these resource- and
       data-sharing problems is to have multiple light-weight
       threads of execution inside a single (heavy-weight) process,
       i.e., to use multithreading.  Those threads usually improve
       responsiveness and performance of the application, often
       improve and simplify the internal program structure, and most
       important, require less system resources than heavy-weight
       processes. Threads are neither the optimal run-time facility
       for all types of applications, nor can all applications bene‐
       fit from them. But at least event-driven server applications
       usually benefit greatly from using threads.

       The World of Threading

        Even though lots of documents exist which describe and
       define the world of threading, to understand Pth, you need
       only basic knowledge about threading. The following defini‐
       tions of thread-related terms should at least help you under‐
       stand thread programming enough to allow you to use Pth.

       o process vs. thread
         A process on Unix systems consists of at least the follow‐
         ing fundamental ingredients: virtual memory table, program
         code, program counter, heap memory, stack memory, stack
         pointer, file descriptor set, signal table. On every
         process switch, the kernel saves and restores these ingre‐
         dients for the individual processes. On the other hand, a
         thread consists of only a private program counter, stack
         memory, stack pointer and signal table. All other ingredi‐
         ents, in particular the virtual memory, it shares with the
         other threads of the same process.

       o kernel-space vs. user-space threading
         Threads on a Unix platform traditionally can be implemented
         either inside kernel-space or user-space. When threads are
         implemented by the kernel, the thread context switches are
         performed by the kernel without the application's knowl‐
         edge. Similarly, when threads are implemented in
         user-space, the thread context switches are performed by an
         application library, without the kernel's knowledge. There
         also are hybrid threading approaches where, typically, a
         user-space library binds one or more user-space threads to
         one or more kernel-space threads (there usually called
         light-weight processes - or in short LWPs).

         User-space threads are usually more portable and can per‐
         form faster and cheaper context switches (for instance via
         swapcontext(2) or setjmp(3)/longjmp(3)) than kernel based
         threads. On the other hand, kernel-space threads can take
         advantage of multiprocessor machines and don't have any
         inherent I/O blocking problems. Kernel-space threads are
         usually scheduled in preemptive way side-by-side with the
         underlying processes. User-space threads on the other hand
         use either preemptive or non-preemptive scheduling.

       o preemptive vs. non-preemptive thread scheduling
         In preemptive scheduling, the scheduler lets a thread exe‐
         cute until a blocking situation occurs (usually a function
         call which would block) or the assigned timeslice elapses.
          Then it withdraws control from the thread without a chance
         for the thread to object. This is usually realized by
         interrupting the thread through a hardware interrupt signal
         (for kernel-space threads) or a software interrupt signal
         (for user-space threads), like "SIGALRM" or "SIGVTALRM". In
          non-preemptive scheduling, once a thread has received control
         from the scheduler it keeps it until either a blocking sit‐
         uation occurs (again a function call which would block and
         instead switches back to the scheduler) or the thread
         explicitly yields control back to the scheduler in a coop‐
         erative way.

       o concurrency vs. parallelism
         Concurrency exists when at least two threads are in
         progress at the same time. Parallelism arises when at least
         two threads are executing simultaneously. Real parallelism
         can be only achieved on multiprocessor machines, of course.
         But one also usually speaks of parallelism or high concur‐
         rency in the context of preemptive thread scheduling and of
          low concurrency in the context of non-preemptive thread
          scheduling.

       o responsiveness
         The responsiveness of a system can be described by the user
          visible delay until the system responds to an external
         request. When this delay is small enough and the user
         doesn't recognize a noticeable delay, the responsiveness of
         the system is considered good. When the user recognizes or
         is even annoyed by the delay, the responsiveness of the
         system is considered bad.

       o reentrant, thread-safe and asynchronous-safe functions
         A reentrant function is one that behaves correctly if it is
         called simultaneously by several threads and then also exe‐
         cutes simultaneously.  Functions that access global state,
         such as memory or files, of course, need to be carefully
         designed in order to be reentrant. Two traditional
         approaches to solve these problems are caller-supplied
         states and thread-specific data.

         Thread-safety is the avoidance of data races, i.e., situa‐
         tions in which data is set to either correct or incorrect
         value depending upon the (unpredictable) order in which
         multiple threads access and modify the data. So a function
         is thread-safe when it still behaves semantically correct
         when called simultaneously by several threads (it is not
         required that the functions also execute simultaneously).
         The traditional approach to achieve thread-safety is to
         wrap a function body with an internal mutual exclusion lock
         (aka `mutex'). As you should recognize, reentrant is a
         stronger attribute than thread-safe, because it is harder
         to achieve and results especially in no run-time contention
         between threads. So, a reentrant function is always
         thread-safe, but not vice versa.

         Additionally there is a related attribute for functions
         named asynchronous-safe, which comes into play in conjunc‐
         tion with signal handlers. This is very related to the
         problem of reentrant functions. An asynchronous-safe func‐
          tion is one that can be called safely and without side-
         effects from within a signal handler context. Usually very
         few functions are of this type, because an application is
         very restricted in what it can perform from within a signal
         handler (especially what system functions it is allowed to
          call). The main reason is that only a few system
         functions are officially declared by POSIX as guaranteed to
         be asynchronous-safe. Asynchronous-safe functions usually
         have to be already reentrant.

       User-Space Threads

        User-space threads can be implemented in various ways. The two
       traditional approaches are:

        1. Matrix-based explicit dispatching between small units of
           execution

          Here the global procedures of the application are split
          into small execution units (each is required to not run
          for more than a few milliseconds) and those units are
          implemented by separate functions.  Then a global matrix
          is defined which describes the execution (and perhaps even
          dependency) order of these functions. The main server pro‐
          cedure then just dispatches between these units by calling
           one function after another, as controlled by this matrix.
           The threads are realized as multiple jump-trails through
           this matrix, with switches between these jump-trails
           driven by the corresponding events as they occur.

          This approach gives the best possible performance, because
          one can fine-tune the threads of execution by adjusting
          the matrix, and the scheduling is done explicitly by the
          application itself. It is also very portable, because the
          matrix is just an ordinary data structure, and functions
          are a standard feature of ANSI C.

           The disadvantage is that it is complicated to write
           large applications this way,
          because in those applications one quickly gets hundreds(!)
          of execution units and the control flow inside such an
          application is very hard to understand (because it is
          interrupted by function borders and one always has to
          remember the global dispatching matrix to follow it).
          Additionally, all threads operate on the same execution
          stack. Although this saves memory, it is often nasty,
          because one cannot switch between threads in the middle of
           a function. Thus the scheduling borders are the function
           borders.

        2. Context-based implicit scheduling between threads of
           execution

          Here the idea is that one programs the application as with
          forked processes, i.e., one spawns a thread of execution
           and this runs from beginning to end without an
           interrupted control flow. But the control flow can still
           be interrupted - even in the middle of a function. Actually
          in a preemptive way, similar to what the kernel does for
          the heavy-weight processes, i.e., every few milliseconds
          the user-space scheduler switches between the threads of
          execution. But the thread itself doesn't recognize this
          and usually (except for synchronization issues) doesn't
          have to care about this.

          The advantage of this approach is that it's very easy to
          program, because the control flow and context of a thread
          directly follows a procedure without forced interrupts
          through function borders.  Additionally, the programming
          is very similar to a traditional and well understood
          fork(2) based approach.

          The disadvantage is that although the general performance
          is increased, compared to using approaches based on heavy-
           weight processes, it is decreased compared to the matrix
           approach above, because implicit preemptive scheduling
           usually performs a lot more context switches (every
           user-space context switch costs some overhead, even
           though it is a lot cheaper than a kernel-level context
           switch) than explicit cooperative/non-preemptive
           scheduling.  Finally,
          there is no really portable POSIX/ANSI-C based way to
          implement user-space preemptive threading. Either the
          platform already has threads, or one has to hope that some
          semi-portable package exists for it. And even those semi-
          portable packages usually have to deal with assembler code
          and other nasty internals and are not easy to port to
          forthcoming platforms.

       So, in short: the matrix-dispatching approach is portable and
       fast, but nasty to program. The thread scheduling approach is
       easy to program, but suffers from synchronization and porta‐
       bility problems caused by its preemptive nature.

       The Compromise of Pth

       But why not combine the good aspects of both approaches while
       avoiding their bad aspects? That's the goal of Pth. Pth
       implements easy-to-program threads of execution, but avoids
       the problems of preemptive scheduling by using non-preemptive
       scheduling instead.

       This sounds like, and is, a useful approach. Nevertheless,
       one has to keep the implications of non-preemptive thread
       scheduling in mind when working with Pth. The following list
       summarizes a few essential points:

        o Pth provides maximum portability, but NOT the fanciest
          features.

          This is because it uses a nifty and portable POSIX/ANSI-C
         approach for thread creation (and this way doesn't require
         any platform dependent assembler hacks) and schedules the
         threads in non-preemptive way (which doesn't require
         unportable facilities like "SIGVTALRM"). On the other hand,
         this way not all fancy threading features can be imple‐
         mented.  Nevertheless the available facilities are enough
         to provide a robust and full-featured threading system.

       o Pth increases the responsiveness and concurrency of an
         event-driven application, but NOT the concurrency of num‐
         ber-crunching applications.

         The reason is the non-preemptive scheduling. Number-crunch‐
         ing applications usually require preemptive scheduling to
         achieve concurrency because of their long CPU bursts. For
         them, non-preemptive scheduling (even together with
         explicit yielding) provides only the old concept of `corou‐
         tines'. On the other hand, event driven applications bene‐
         fit greatly from non-preemptive scheduling. They have only
         short CPU bursts and lots of events to wait on, and this
         way run faster under non-preemptive scheduling because no
          unnecessary context switching occurs, as is the case for
         preemptive scheduling. That's why Pth is mainly intended
         for server type applications, although there is no techni‐
         cal restriction.

        o Pth requires thread-safe functions, but NOT reentrant
          functions.

         This nice fact exists again because of the nature of non-
         preemptive scheduling, where a function isn't interrupted
         and this way cannot be reentered before it returned. This
         is a great portability benefit, because thread-safety can
          be achieved more easily than reentrancy. In particular,
          this means that under Pth more existing third-party
          libraries can be used without side-effects than is the
         case for other threading systems.

       o Pth doesn't require any kernel support, but can NOT benefit
         from multiprocessor machines.

         This means that Pth runs on almost all Unix kernels,
         because the kernel does not need to be aware of the Pth
         threads (because they are implemented entirely in
         user-space). On the other hand, it cannot benefit from the
         existence of multiprocessors, because for this, kernel sup‐
         port would be needed. In practice, this is no problem,
          because multiprocessor systems are rare, and portability
          is often more important than maximum concurrency.

       The life cycle of a thread

       To understand the Pth Application Programming Interface
       (API), it helps to first understand the life cycle of a
       thread in the Pth threading system. It can be illustrated
       with the following directed graph:

              +---> READY ---+
              |       ^      |
              |       |      V
           WAITING <--+-- RUNNING
              :              V
           SUSPENDED       DEAD

       When a new thread is created, it is moved into the NEW queue
       of the scheduler. On the next dispatching for this thread,
       the scheduler picks it up from there and moves it to the
       READY queue. This is a queue containing all threads which
       want to perform a CPU burst. There they are queued in prior‐
       ity order. On each dispatching step, the scheduler always
       removes the thread with the highest priority only. It then
       increases the priority of all remaining threads by 1, to pre‐
       vent them from `starving'.

       The thread which was removed from the READY queue is the new
       RUNNING thread (there is always just one RUNNING thread, of
       course). The RUNNING thread is assigned execution control.
        After this thread yields execution (either explicitly, by
        yielding, or implicitly, by calling a function which would
        block) there are three possibilities: either it has
        terminated, then it is moved to the DEAD queue, or it has
       events on which it wants to wait, then it is moved into the
       WAITING queue. Else it is assumed it wants to perform more
       CPU bursts and immediately enters the READY queue again.

       Before the next thread is taken out of the READY queue, the
       WAITING queue is checked for pending events. If one or more
       events occurred, the threads that are waiting on them are
       immediately moved to the READY queue.

       The purpose of the NEW queue has to do with the fact that in
       Pth a thread never directly switches to another thread. A
       thread always yields execution to the scheduler and the
       scheduler dispatches to the next thread. So a freshly spawned
       thread has to be kept somewhere until the scheduler gets a
       chance to pick it up for scheduling. That is what the NEW
       queue is for.

       The purpose of the DEAD queue is to support thread joining.
       When a thread is marked to be unjoinable, it is directly
       kicked out of the system after it terminated. But when it is
       joinable, it enters the DEAD queue. There it remains until
       another thread joins it.

       Finally, there is a special separated queue named SUSPENDED,
       to where threads can be manually moved from the NEW, READY or
       WAITING queues by the application. The purpose of this spe‐
       cial queue is to temporarily absorb suspended threads until
       they are again resumed by the application. Suspended threads
       do not cost scheduling or event handling resources, because
       they are temporarily completely out of the scheduler's scope.
       If a thread is resumed, it is moved back to the queue from
        where it originally came and this way again enters the
        scheduler's scope.

APPLICATION PROGRAMMING INTERFACE
       In the following the Pth Application Programming Interface
       (API) is discussed in detail. With the knowledge given above,
       it should now be easy to understand how to program threads
       with this API. In good Unix tradition, Pth functions use spe‐
       cial return values ("NULL" in pointer context, "FALSE" in
       boolean context and "-1" in integer context) to indicate an
       error condition and set (or pass through) the "errno" system
       variable to pass more details about the error to the caller.

       Global Library Management

       The following functions act on the library as a whole.  They
       are used to initialize and shutdown the scheduler and fetch
       information from it.

       int pth_init(void);
           This initializes the Pth library. It has to be the first
           Pth API function call in an application, and is manda‐
            tory. It's usually done at the beginning of the main()
            function of the application. This implicitly spawns the
           internal scheduler thread and transforms the single exe‐
           cution unit of the current process into a thread (the
           `main' thread). It returns "TRUE" on success and "FALSE"
           on error.

       int pth_kill(void);
           This kills the Pth library. It should be the last Pth API
           function call in an application, but is not really
           required. It's usually done at the end of the main func‐
           tion of the application. At least, it has to be called
           from within the main thread. It implicitly kills all
            threads and transforms the calling thread back into the
           single execution unit of the underlying process.  The
           usual way to terminate a Pth application is either a sim‐
           ple `"pth_exit(0);"' in the main thread (which waits for
           all other threads to terminate, kills the threading sys‐
           tem and then terminates the process) or a `"pth_kill();
           exit(0)"' (which immediately kills the threading system
            and terminates the process). pth_kill() returns imme‐
            diately with a return code of "FALSE" if it is not called
           from within the main thread. Else it kills the threading
           system and returns "TRUE".

       long pth_ctrl(unsigned long query, ...);
           This is a generalized query/control function for the Pth
           library.  The argument query is a bitmask formed out of
           one or more "PTH_CTRL_"XXXX queries. Currently the fol‐
           lowing queries are supported:

            "PTH_CTRL_GETTHREADS"
                This returns the total number of threads currently in
               existence.  This query actually is formed out of the
               combination of queries for threads in a particular
               state, i.e., the "PTH_CTRL_GETTHREADS" query is equal
               to the OR-combination of all the following special‐
               ized queries:

               "PTH_CTRL_GETTHREADS_NEW" for the number of threads
               in the new queue (threads created via pth_spawn(3)
               but still not scheduled once), "PTH_CTRL_GET‐
               THREADS_READY" for the number of threads in the ready
               queue (threads who want to do CPU bursts),
               "PTH_CTRL_GETTHREADS_RUNNING" for the number of run‐
               ning threads (always just one thread!),
               "PTH_CTRL_GETTHREADS_WAITING" for the number of
               threads in the waiting queue (threads waiting for
               events), "PTH_CTRL_GETTHREADS_SUSPENDED" for the num‐
               ber of threads in the suspended queue (threads wait‐
               ing to be resumed) and "PTH_CTRL_GETTHREADS_DEAD" for
                the number of threads in the dead queue (terminated
               threads waiting for a join).

            "PTH_CTRL_GETAVLOAD"
                This requires a second argument of type `"float *"'
               (pointer to a floating point variable).  It stores a
               floating point value describing the exponential aver‐
               aged load of the scheduler in this variable. The load
                is a function of the number of threads in the ready
                queue of the scheduler's dispatching unit.  So a load
               around 1.0 means there is only one ready thread (the
               standard situation when the application has no high
                load). A higher load value means there are more threads
               ready who want to do CPU bursts. The average load
               value updates once per second only. The return value
               for this query is always 0.

            "PTH_CTRL_GETPRIO"
                This requires a second argument of type `"pth_t"'
               which identifies a thread.  It returns the priority
               (ranging from "PTH_PRIO_MIN" to "PTH_PRIO_MAX") of
               the given thread.

            "PTH_CTRL_GETNAME"
                This requires a second argument of type `"pth_t"'
               which identifies a thread. It returns the name of the
               given thread, i.e., the return value of pth_ctrl(3)
                should be cast to a `"char *"'.

            "PTH_CTRL_DUMPSTATE"
                This requires a second argument of type `"FILE *"' to
                which a summary of the internal Pth library state is
                written. The main information which is currently
               written out is the current state of the thread pool.

            "PTH_CTRL_FAVOURNEW"
                This requires a second argument of type `"int"' which
                specifies whether the GNU Pth scheduler favours new
               threads on startup, i.e., whether they are moved from
               the new queue to the top (argument is "TRUE") or mid‐
               dle (argument is "FALSE") of the ready queue. The
               default is to favour new threads to make sure they do
               not starve already at startup, although this slightly
               violates the strict priority based scheduling.

           The function returns "-1" on error.

       long pth_version(void);
           This function returns a hex-value `0xVRRTLL' which
           describes the current Pth library version. V is the ver‐
           sion, RR the revisions, LL the level and T the type of
           the level (alphalevel=0, betalevel=1, patchlevel=2, etc).
           For instance Pth version 1.0b1 is encoded as 0x100101.
           The reason for this unusual mapping is that this way the
           version number is steadily increasing. The same value is
            also available at compile time as "PTH_VERSION".

       Thread Attribute Handling

        Attribute objects are used in Pth for two things: first,
        stand-alone/unbound attribute objects are used to store
        attributes for threads which are still to be spawned.
        Bounded attribute objects are used to modify attributes of
        already existing threads. The following attribute fields
        exist in attribute objects:

       "PTH_ATTR_PRIO" (read-write) ["int"]
           Thread Priority between "PTH_PRIO_MIN" and
           "PTH_PRIO_MAX".  The default is "PTH_PRIO_STD".

       "PTH_ATTR_NAME" (read-write) ["char *"]
            Name of thread (only up to 40 characters are stored),
           mainly for debugging purposes.

       "PTH_ATTR_DISPATCHES" (read-write) ["int"]
            In bounded attribute objects, this field is incremented
            every time the context is switched to the associated
            thread.

        "PTH_ATTR_JOINABLE" (read-write) ["int"]
           The thread detachment type, "TRUE" indicates a joinable
           thread, "FALSE" indicates a detached thread. When a
           thread is detached, after termination it is immediately
           kicked out of the system instead of inserted into the
           dead queue.

       "PTH_ATTR_CANCEL_STATE" (read-write) ["unsigned int"]
            The thread cancellation state, i.e., a combination of
            "PTH_CANCEL_ENABLE" or "PTH_CANCEL_DISABLE" and
            "PTH_CANCEL_DEFERRED" or "PTH_CANCEL_ASYNCHRONOUS".

       "PTH_ATTR_STACK_SIZE" (read-write) ["unsigned int"]
           The thread stack size in bytes. Use lower values than 64
           KB with great care!

       "PTH_ATTR_STACK_ADDR" (read-write) ["char *"]
           A pointer to the lower address of a chunk of malloc(3)'ed
           memory for the stack.

       "PTH_ATTR_TIME_SPAWN" (read-only) ["pth_time_t"]
           The time when the thread was spawned.  This can be
            queried only when the attribute object is bound to a
            thread.

       "PTH_ATTR_TIME_LAST" (read-only) ["pth_time_t"]
           The time when the thread was last dispatched.  This can
            be queried only when the attribute object is bound to a
            thread.

       "PTH_ATTR_TIME_RAN" (read-only) ["pth_time_t"]
           The total time the thread was running.  This can be
            queried only when the attribute object is bound to a
            thread.

       "PTH_ATTR_START_FUNC" (read-only) ["void *(*)(void *)"]
           The thread start function.  This can be queried only when
           the attribute object is bound to a thread.

       "PTH_ATTR_START_ARG" (read-only) ["void *"]
           The thread start argument.  This can be queried only when
           the attribute object is bound to a thread.

       "PTH_ATTR_STATE" (read-only) ["pth_state_t"]
            The scheduling state of the thread, i.e., either
            "PTH_STATE_NEW", "PTH_STATE_READY", "PTH_STATE_WAITING"
            or "PTH_STATE_DEAD". This can be queried only when the
           attribute object is bound to a thread.

       "PTH_ATTR_EVENTS" (read-only) ["pth_event_t"]
           The event ring the thread is waiting for.  This can be
            queried only when the attribute object is bound to a
            thread.

       "PTH_ATTR_BOUND" (read-only) ["int"]
           Whether the attribute object is bound ("TRUE") to a
           thread or not ("FALSE").

       The following API functions can be used to handle the
       attribute objects:

       pth_attr_t pth_attr_of(pth_t tid);
           This returns a new attribute object bound to thread tid.
           Any queries on this object directly fetch attributes from
           tid. And attribute modifications directly change tid. Use
           such attribute objects to modify existing threads.

       pth_attr_t pth_attr_new(void);
           This returns a new unbound attribute object. An implicit
           pth_attr_init() is done on it. Any queries on this object
            just fetch stored attributes from it.  And attribute
            modifications just change the stored attributes.  Use
            such attribute objects to pre-configure attributes for
            threads which are still to be spawned.

       int pth_attr_init(pth_attr_t attr);
            This initializes an attribute object attr to the default
            values: "PTH_ATTR_PRIO" := "PTH_PRIO_STD",
            "PTH_ATTR_NAME" := `"unknown"', "PTH_ATTR_DISPATCHES" :=
            0, "PTH_ATTR_JOINABLE" := "TRUE",
            "PTH_ATTR_CANCEL_STATE" := "PTH_CANCEL_DEFAULT",
            "PTH_ATTR_STACK_SIZE" := 64*1024 and
            "PTH_ATTR_STACK_ADDR" := "NULL". All other "PTH_ATTR_*"
            attributes are read-only attributes and don't receive
            default values in attr, because they exist only for
            bounded attribute objects.

       int pth_attr_set(pth_attr_t attr, int field, ...);
           This sets the attribute field field in attr to a value
           specified as an additional argument on the variable argu‐
           ment list. The following attribute fields and argument
           pairs can be used:

            PTH_ATTR_PRIO           int
            PTH_ATTR_NAME           char *
            PTH_ATTR_DISPATCHES     int
            PTH_ATTR_JOINABLE       int
            PTH_ATTR_CANCEL_STATE   unsigned int
            PTH_ATTR_STACK_SIZE     unsigned int
            PTH_ATTR_STACK_ADDR     char *

       int pth_attr_get(pth_attr_t attr, int field, ...);
           This retrieves the attribute field field in attr and
           stores its value in the variable specified through a
            pointer in an additional argument on the variable
            argument list. The following fields and argument pairs
            can be used:

            PTH_ATTR_PRIO           int *
            PTH_ATTR_NAME           char **
            PTH_ATTR_DISPATCHES     int *
            PTH_ATTR_JOINABLE       int *
            PTH_ATTR_CANCEL_STATE   unsigned int *
            PTH_ATTR_STACK_SIZE     unsigned int *
            PTH_ATTR_STACK_ADDR     char **
            PTH_ATTR_TIME_SPAWN     pth_time_t *
            PTH_ATTR_TIME_LAST      pth_time_t *
            PTH_ATTR_TIME_RAN       pth_time_t *
            PTH_ATTR_START_FUNC     void *(**)(void *)
            PTH_ATTR_START_ARG      void **
            PTH_ATTR_STATE          pth_state_t *
            PTH_ATTR_EVENTS         pth_event_t *
            PTH_ATTR_BOUND          int *

       int pth_attr_destroy(pth_attr_t attr);
            This destroys an attribute object attr. Afterwards attr
            is no longer a valid attribute object.

       Thread Control

       The following functions control the threading itself and make
       up the main API of the Pth library.

       pth_t pth_spawn(pth_attr_t attr, void *(*entry)(void *), void
       *arg);
           This spawns a new thread with the attributes given in
           attr (or "PTH_ATTR_DEFAULT" for default attributes -
           which means that thread priority, joinability and cancel
           state are inherited from the current thread) with the
           starting point at routine entry; the dispatch count is
           not inherited from the current thread if attr is not
           specified - rather, it is initialized to zero.  This
           entry routine is called as `pth_exit(entry(arg))' inside
           the new thread unit, i.e., entry's return value is fed to
           an implicit pth_exit(3). So the thread can also exit by
           just returning. Nevertheless the thread can also exit
           explicitly at any time by calling pth_exit(3). But keep
           in mind that calling the POSIX function exit(3) still
            terminates the complete process and not just the current
            thread.

           There is no Pth-internal limit on the number of threads
           one can spawn, except the limit implied by the available
            virtual memory. Pth internally keeps track of threads in
            dynamic data structures. The function returns "NULL" on
            error.

       int pth_once(pth_once_t *ctrlvar, void (*func)(void *), void
       *arg);
           This is a convenience function which uses a control vari‐
           able of type "pth_once_t" to make sure a constructor
           function func is called only once as `func(arg)' in the
           system. In other words: Only the first call to
           pth_once(3) by any thread in the system succeeds. The
           variable referenced via ctrlvar should be declared as
           `"pth_once_t" variable-name = "PTH_ONCE_INIT";' before
           calling this function.

       pth_t pth_self(void);
           This just returns the unique thread handle of the cur‐
           rently running thread.  This handle itself has to be
           treated as an opaque entity by the application.  It's
            usually used as an argument to other functions which
            require an argument of type "pth_t".

       int pth_suspend(pth_t tid);
           This suspends a thread tid until it is manually resumed
           again via pth_resume(3). For this, the thread is moved to
           the SUSPENDED queue and this way is completely out of the
           scheduler's event handling and thread dispatching scope.
           Suspending the current thread is not allowed.  The func‐
           tion returns "TRUE" on success and "FALSE" on errors.

       int pth_resume(pth_t tid);
           This function resumes a previously suspended thread tid,
            i.e., tid has to be on the SUSPENDED queue. The thread
            is moved to the NEW, READY or WAITING queue (depending
            on what its state was when the pth_suspend(3) call was
            made) and this way again enters the event handling and
           thread dispatching scope of the scheduler. The function
           returns "TRUE" on success and "FALSE" on errors.

       int pth_raise(pth_t tid, int sig)
           This function raises a signal for delivery to thread tid
           only.  When one just raises a signal via raise(3) or
            kill(2), it is delivered to an arbitrary thread which
            has not blocked this signal.  With pth_raise(3) one can
            send a signal to a particular thread and it is guaran‐
            teed that only this thread gets the signal delivered.
            But keep in mind that nevertheless the signal's action
            is still configured process-wide.  When sig is 0, plain
            thread checking is performed, i.e., `"pth_raise(tid,
            0)"' returns "TRUE" when thread tid still exists in the
            Pth system but doesn't send any signal to it.

       int pth_yield(pth_t tid);
            This explicitly yields execution control back to the
            scheduler thread.  Usually execution is implicitly
            transferred back to the scheduler when a thread waits
            for an event. But when a thread has to do larger CPU
            bursts, it can be reasonable to interrupt them explic‐
            itly by doing a few pth_yield(3) calls to give other
            threads a chance to execute, too.  This obviously is the
            cooperative part of Pth.  A thread does not have to
            yield execution, of course.  But when you want to pro‐
            gram a server application with good response times, the
            threads should be cooperative, i.e., they should split
            their CPU bursts into smaller units with this call.

           Usually one specifies tid as "NULL" to indicate to the
           scheduler that it can freely decide which thread to dis‐
           patch next.  But if one wants to indicate to the sched‐
           uler that a particular thread should be favored on the
           next dispatching step, one can specify this thread
           explicitly. This allows the usage of the old concept of
           coroutines where a thread/routine switches to a particu‐
           lar cooperating thread. If tid is not "NULL" and points
           to a new or ready thread, it is guaranteed that this
           thread receives execution control on the next dispatching
           step. If tid is in a different state (that is, not in
            "PTH_STATE_NEW" or "PTH_STATE_READY") an error is
            reported.

           The function usually returns "TRUE" for success and only
            "FALSE" (with "errno" set to "EINVAL") if tid specified
            an invalid thread or one which is not new or ready.

       int pth_nap(pth_time_t naptime);
            This function suspends the execution of the current
            thread until naptime has elapsed. naptime is of type
            "pth_time_t" and this way theoretically has a resolution
            of one microsecond. In practice you should neither rely
            on this nor on the thread being awakened exactly after
            naptime has elapsed. It is only guaranteed that the
            thread will sleep at least naptime. But because of the
            non-preemptive nature of Pth it can last longer (when
            another thread keeps the CPU for a long time). Addition‐
            ally the resolution depends on the implementation of
            timers by the operating system and these usually have
            only a resolution of 10 microseconds or larger. But usu‐
            ally this isn't important for an application unless it
            tries to use this facility for real time tasks.

       int pth_wait(pth_event_t ev);
           This is the link between the scheduler and the event
           facility (see below for the various pth_event_xxx() func‐
           tions). It's modeled like select(2), i.e., one gives this
           function one or more events (in the event ring specified
           by ev) on which the current thread wants to wait. The
            scheduler awakes the thread when one or more of them
            occurred or failed after tagging them as such. The ev
           argument is a pointer to an event ring which isn't
           changed except for the tagging. pth_wait(3) returns the
           number of occurred or failed events and the application
           can use pth_event_status(3) to test which events occurred
           or failed.

       int pth_cancel(pth_t tid);
            This cancels a thread tid. How the cancellation is done
            depends on the cancellation state of tid, which the
            thread can configure itself. When its state is
            "PTH_CANCEL_DISABLE", a cancellation request is just
            made pending.  When it is "PTH_CANCEL_ENABLE", it
            depends on the cancellation type what is performed. When
            it is "PTH_CANCEL_DEFERRED", again the cancellation
            request is just made pending. But when it is "PTH_CAN‐
            CEL_ASYNCHRONOUS", the thread is immediately canceled
            before pth_cancel(3) returns. The effect of a thread
            cancellation is equal to implicitly forcing the thread
            to call `"pth_exit(PTH_CANCELED)"' at one of its cancel‐
            lation points.  In Pth a thread enters a cancellation
            point either explicitly via pth_cancel_point(3) or
            implicitly by waiting for an event.

       int pth_abort(pth_t tid);
           This is the cruel way to cancel a thread tid. When it's
           already dead and waits to be joined it just joins it (via
           `"pth_join("tid", NULL)"') and this way kicks it out of
           the system.  Else it forces the thread to be not joinable
           and to allow asynchronous cancellation and then cancels
           it via `"pth_cancel("tid")"'.

       int pth_join(pth_t tid, void **value);
           This joins the current thread with the thread specified
           via tid.  It first suspends the current thread until the
           tid thread has terminated. Then it is awakened and stores
            the value of tid's pth_exit(3) call into *value (if
            value is not "NULL") and returns to the caller. A thread
            can
           be joined only when it has the attribute "PTH_ATTR_JOIN‐
           ABLE" set to "TRUE" (the default). A thread can only be
           joined once, i.e., after the pth_join(3) call the thread
           tid is completely removed from the system.

       void pth_exit(void *value);
           This terminates the current thread. Whether it's immedi‐
           ately removed from the system or inserted into the dead
           queue of the scheduler depends on its join type which was
           specified at spawning time. If it has the attribute
           "PTH_ATTR_JOINABLE" set to "FALSE", it's immediately
           removed and value is ignored. Else the thread is inserted
           into the dead queue and value remembered for a subsequent
           pth_join(3) call by another thread.


        Utility Functions

       int pth_fdmode(int fd, int mode);
           This switches the non-blocking mode flag on file descrip‐
           tor fd.  The argument mode can be "PTH_FDMODE_BLOCK" for
           switching fd into blocking I/O mode, "PTH_FDMODE_NON‐
           BLOCK" for switching fd into non-blocking I/O mode or
           "PTH_FDMODE_POLL" for just polling the current mode. The
           current mode is returned (either "PTH_FDMODE_BLOCK" or
           "PTH_FDMODE_NONBLOCK") or "PTH_FDMODE_ERROR" on error.
           Keep in mind that since Pth 1.1 there is no longer a
           requirement to manually switch a file descriptor into
           non-blocking mode in order to use it. This is automati‐
           cally done temporarily inside Pth.  Instead when you now
           switch a file descriptor explicitly into non-blocking
           mode, pth_read(3) or pth_write(3) will never block the
           current thread.

       pth_time_t pth_time(long sec, long usec);
           This is a constructor for a "pth_time_t" structure which
           is a convenient function to avoid temporary structure
           values. It returns a pth_time_t structure which holds the
           absolute time value specified by sec and usec.

       pth_time_t pth_timeout(long sec, long usec);
           This is a constructor for a "pth_time_t" structure which
           is a convenient function to avoid temporary structure
           values.  It returns a pth_time_t structure which holds
           the absolute time value calculated by adding sec and usec
           to the current time.

       Sfdisc_t *pth_sfiodisc(void);
            This function is always available, but only reasonably
            usable when Pth was built with Sfio support
            ("--with-sfio" option) and "PTH_EXT_SFIO" is then
            defined by "pth.h". It is useful for applications which
            want to use the comprehensive Sfio I/O library together
            with the Pth threading library. This function can then
            be used to get an Sfio discipline structure ("Sfdisc_t")
            which can be pushed onto Sfio streams ("Sfio_t") in
            order to let these streams use pth_read(3)/pth_write(3)
            instead of read(2)/write(2). The benefit is that this
            way I/O on the Sfio stream only blocks the current
            thread instead of the whole process. The application has
            to free(3) the
           "Sfdisc_t" structure when it is no longer needed. The
           Sfio package can be found at

       Cancellation Management

       Pth supports POSIX style thread cancellation via pth_can‐
       cel(3) and the following two related functions:

       void pth_cancel_state(int newstate, int *oldstate);
            This manages the cancellation state of the current
            thread.  When oldstate is not "NULL", the function
            stores the old cancellation state in the variable
            pointed to by oldstate. When newstate is not 0, it sets
            the new cancellation state. oldstate is created before
            newstate is set.  A state is a combination of
            "PTH_CANCEL_ENABLE" or "PTH_CANCEL_DISABLE" and
            "PTH_CANCEL_DEFERRED" or "PTH_CANCEL_ASYNCHRONOUS".
            "PTH_CANCEL_ENABLE|PTH_CANCEL_DEFERRED" (or "PTH_CAN‐
            CEL_DEFAULT") is the default state where cancellation is
            possible, but only at cancellation points.  Use
            "PTH_CANCEL_DISABLE" to completely disable cancellation
            for a thread and "PTH_CANCEL_ASYNCHRONOUS" for allowing
            asynchronous cancellations, i.e., cancellations which
            can happen at any time.

       void pth_cancel_point(void);
            This explicitly enters a cancellation point. When the
            current cancellation state is "PTH_CANCEL_DISABLE" or no
            cancellation request is pending, this has no side-effect
            and returns immediately. Else it calls
            `"pth_exit(PTH_CANCELED)"'.

       Event Handling

       Pth has a very flexible event facility which is linked into
       the scheduler through the pth_wait(3) function. The following
       functions provide the handling of event rings.

       pth_event_t pth_event(unsigned long spec, ...);
           This creates a new event ring consisting of a single ini‐
           tial event.  The type of the generated event is specified
           by spec. The following types are available:

           "PTH_EVENT_FD"
                This is a file descriptor event. One or more of
                "PTH_UNTIL_FD_READABLE", "PTH_UNTIL_FD_WRITEABLE" or
                "PTH_UNTIL_FD_EXCEPTION" have to be OR-ed into spec
                to specify on which state of the file descriptor you
                want to wait.  The file descriptor itself has to be
                given as an additional argument.  Example:
                `"pth_event(PTH_EVENT_FD|PTH_UNTIL_FD_READABLE,
                fd)"'.

           "PTH_EVENT_SELECT"
                This is a multiple file descriptor event modeled
               directly after the select(2) call (actually it is
               also used to implement pth_select(3) internally).
               It's a convenient way to wait for a large set of file
               descriptors at once and at each file descriptor for a
               different type of state. Additionally as a nice side-
               effect one receives the number of file descriptors
               which causes the event to be occurred (using BSD
               semantics, i.e., when a file descriptor occurred in
               two sets it's counted twice). The arguments corre‐
               spond directly to the select(2) function arguments
                except that there is no timeout argument (because
                timeouts can already be handled via "PTH_EVENT_TIME"
                events).

               Example: `"pth_event(PTH_EVENT_SELECT, &rc, nfd,
               rfds, wfds, efds)"' where "rc" has to be of type
               `"int *"', "nfd" has to be of type `"int"' and
               "rfds", "wfds" and "efds" have to be of type `"fd_set
               *"' (see select(2)). The number of occurred file
               descriptors are stored in "rc".

           "PTH_EVENT_SIGS"
                This is a signal set event. The two additional argu‐
               ments have to be a pointer to a signal set (type
               `"sigset_t *"') and a pointer to a signal number
               variable (type `"int *"').  This event waits until
               one of the signals in the signal set occurred.  As a
               result the occurred signal number is stored in the
               second additional argument. Keep in mind that the Pth
               scheduler doesn't block signals automatically.  So
               when you want to wait for a signal with this event
               you've to block it via sigprocmask(2) or it will be
               delivered without your notice. Example: `"sigempty‐
               set(&set); sigaddset(&set, SIGINT);
               pth_event(PTH_EVENT_SIG, &set, &sig);"'.

           "PTH_EVENT_TIME"
                This is a time point event. The additional argument
                has to be of type "pth_time_t" (usually on-the-fly
                generated via pth_time(3)). This event waits until
               the specified time point has elapsed. Keep in mind
               that the value is an absolute time point and not an
               offset. When you want to wait for a specified amount
               of time, you've to add the current time to the offset
               (usually on-the-fly achieved via pth_timeout(3)).
                Example: `"pth_event(PTH_EVENT_TIME, pth_time‐
                out(2,0))"'.

           "PTH_EVENT_MSG"
                This is a message port event. The additional argument
                has to be of type "pth_msgport_t". This event waits
               until one or more messages were received on the spec‐
               ified message port.  Example:
               `"pth_event(PTH_EVENT_MSG, mp)"'.

           "PTH_EVENT_TID"
                This is a thread event. The additional argument has
                to be of type "pth_t".  One of "PTH_UNTIL_TID_NEW",
                "PTH_UNTIL_TID_READY", "PTH_UNTIL_TID_WAITING" or
                "PTH_UNTIL_TID_DEAD" has to be OR-ed into spec to
                specify on which state of the thread you want to
                wait.  Example:
                `"pth_event(PTH_EVENT_TID|PTH_UNTIL_TID_DEAD,
                tid)"'.

           "PTH_EVENT_FUNC"
                This is a custom callback function event. Three addi‐
                tional arguments have to be given with the following
                types: `"int (*)(void *)"', `"void *"' and
                `"pth_time_t"'. The first is a function pointer to a
                check function and the second argument is a user-sup‐
                plied context value which is passed to this function.
                The scheduler calls this function on a regular basis
                (on its own scheduler stack, so be very careful!) and
                the thread is kept sleeping while the function
                returns "FALSE". Once it has returned "TRUE" the
                thread will be awakened. The check interval is
                defined by the third argument, i.e., the check func‐
                tion is not polled again until this amount of time
                has elapsed. Example: `"pth_event(PTH_EVENT_FUNC,
                func, arg, pth_time(0,500000))"'.

       unsigned long pth_event_typeof(pth_event_t ev);
            This returns the type of event ev. It is a combination
            of the describing "PTH_EVENT_XX" and "PTH_UNTIL_XX"
            values.
           This is especially useful to know which arguments have to
           be supplied to the pth_event_extract(3) function.

       int pth_event_extract(pth_event_t ev, ...);
           When pth_event(3) is treated like sprintf(3), then this
           function is sscanf(3), i.e., it is the inverse operation
           of pth_event(3). This means that it can be used to
           extract the ingredients of an event.  The ingredients are
           stored into variables which are given as pointers on the
           variable argument list.  Which pointers have to be
           present depends on the event type and has to be deter‐
           mined by the caller before via pth_event_typeof(3).

           To make it clear, when you constructed ev via `"ev =
           pth_event(PTH_EVENT_FD, fd);"' you have to extract it via
           `"pth_event_extract(ev, &fd)"', etc. For multiple argu‐
           ments of an event the order of the pointer arguments is
            the same as for pth_event(3). But always keep in mind
            that you have to supply pointers to variables and that
            these variables have to be of the same type as the
            arguments pth_event(3) required.

       pth_event_t pth_event_concat(pth_event_t ev, ...);
           This concatenates one or more additional event rings to
           the event ring ev and returns ev. The end of the argument
           list has to be marked with a "NULL" argument. Use this
            function to create real event rings out of the single-
           event rings created by pth_event(3).

       pth_event_t pth_event_isolate(pth_event_t ev);
           This isolates the event ev from possibly appended events
           in the event ring.  When in ev only one event exists,
            this returns "NULL". When remaining events exist, they
           form a new event ring which is returned.

       pth_event_t pth_event_walk(pth_event_t ev, int direction);
           This walks to the next (when direction is
            "PTH_WALK_NEXT") or previous (when direction is
           "PTH_WALK_PREV") event in the event ring ev and returns
           this new reached event. Additionally "PTH_UNTIL_OCCURRED"
           can be OR-ed into direction to walk to the next/previous
           occurred event in the ring ev.

       pth_status_t pth_event_status(pth_event_t ev);
            This returns the status of event ev. This is a fast
            operation, because only a tag on ev is checked which was
            either set or not yet set by the scheduler. In other
           words: This doesn't check the event itself, it just
           checks the last knowledge of the scheduler. The possible
           returned status codes are: "PTH_STATUS_PENDING" (event is
           still pending), "PTH_STATUS_OCCURRED" (event successfully
           occurred), "PTH_STATUS_FAILED" (event failed).

       int pth_event_free(pth_event_t ev, int mode);
           This deallocates the event ev (when mode is
           "PTH_FREE_THIS") or all events appended to the event ring
           under ev (when mode is "PTH_FREE_ALL").

       Key-Based Storage

       The following functions provide thread-local storage through
       unique keys similar to the POSIX Pthread API. Use this for
       thread specific global data.

       int pth_key_create(pth_key_t *key, void (*func)(void *));
            This creates a new unique key and stores it in key.
            Additionally func can specify a destructor function
            which is called on the current thread's termination with
            the key value as argument.
       int pth_key_delete(pth_key_t key);
           This explicitly destroys a key key.

       int pth_key_setdata(pth_key_t key, const void *value);
           This stores value under key.

       void *pth_key_getdata(pth_key_t key);
           This retrieves the value under key.

       Message Port Communication

       The following functions provide message ports which can be
       used for efficient and flexible inter-thread communication.

       pth_msgport_t pth_msgport_create(const char *name);
            This returns a pointer to a new message port. If name
            is not "NULL", the name can be used by other threads
            via pth_msgport_find(3) to find the message port in
            case they do not directly know the pointer to the
            message port.
       void pth_msgport_destroy(pth_msgport_t mp);
            This destroys a message port mp. Beforehand, all pending
            messages on it are replied to their origin message port.

       pth_msgport_t pth_msgport_find(const char *name);
           This finds a message port in the system by name and
           returns the pointer to it.

       int pth_msgport_pending(pth_msgport_t mp);
           This returns the number of pending messages on message
           port mp.

       int pth_msgport_put(pth_msgport_t mp, pth_message_t *m);
           This puts (or sends) a message m to message port mp.

       pth_message_t *pth_msgport_get(pth_msgport_t mp);
           This gets (or receives) the top message from message port
           mp.  Incoming messages are always kept in a queue, so
           there can be more pending messages, of course.

       int pth_msgport_reply(pth_message_t *m);
            This replies a message m to the message port of the
            sender.

       Thread Cleanups

       Per-thread cleanup functions.

       int pth_cleanup_push(void (*handler)(void *), void *arg);
           This pushes the routine handler onto the stack of cleanup
           routines for the current thread.  These routines are
           called in LIFO order when the thread terminates.

       int pth_cleanup_pop(int execute);
           This pops the top-most routine from the stack of cleanup
           routines for the current thread. When execute is "TRUE"
           the routine is additionally called.

       Process Forking

       The following functions provide some special support for
       process forking situations inside the threading environment.

       int pth_atfork_push(void (*prepare)(void *), void (*)(void
       *parent), void (*)(void *child), void *arg);
           This function declares forking handlers to be called
           before and after pth_fork(3), in the context of the
           thread that called pth_fork(3). The prepare handler is
            called before fork(2) processing commences. The parent
            handler is called after fork(2) processing completes in
            the parent process.  The child handler is called after
            fork(2) processing completes in the child process. If no
           handling is desired at one or more of these three points,
           the corresponding handler can be given as "NULL".  Each
           handler is called with arg as the argument.

           The order of calls to pth_atfork_push(3) is significant.
           The parent and child handlers are called in the order in
           which they were established by calls to
           pth_atfork_push(3), i.e., FIFO. The prepare fork handlers
           are called in the opposite order, i.e., LIFO.

       int pth_atfork_pop(void);
           This removes the top-most handlers on the forking handler
           stack which were established with the last
            pth_atfork_push(3) call. It returns "FALSE" when no
            handlers could be removed.