Just Software Solutions

Blog Archive for September 2010

August 2010 C++ Standards Committee Mailing

Wednesday, 08 September 2010

The August 2010 mailing for the C++ Standards Committee was published recently. This is the post-meeting mailing for the August 2010 committee meeting, and contains a new C++0x Working Draft. At the meeting in August, the committee discussed many of the National Body comments on the FCD, and this draft incorporates those changes that the committee approved of. As you can see from the FCD Comment Status document in this mailing, there were 301 technical comments and a further 215 editorial comments. Of these, 98 technical comments have been accepted as-is, 8 have been accepted with changes, and 63 have been rejected, leaving 132 technical comments that have still not been addressed one way or the other.

No significant changes have been accepted to the concurrency-related parts of the working draft, though there are quite a few editorial comments. However, there are several papers in this mailing that address the National Body comments in this area. These papers have by and large been drafted to represent the consensus of those members of the concurrency group in the LWG who were present at the meeting. I have summarised these papers below.

Concurrency-related papers

N3113: Async Launch Policies (CH 36)

This paper provides a clearer basis for implementors to supply additional launch policies for std::async, or for the committee to do so in a later revision of the C++ standard, by making the std::launch enum a bitmask type. It also drops the std::launch::any enumeration value, and renames std::launch::sync to std::launch::deferred, as this better describes what it means.

The use of a bitmask allows new values to be added which are either distinct values, or combinations of the others. The default policy for std::async is thus std::launch::async|std::launch::deferred.
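
As an illustration, here is a minimal sketch (assuming the proposed policy names) of how the bitmask policies might be used with std::async; the function compute is just a placeholder:

    #include <future>

    int compute() { return 42; }

    int main() {
        // Force execution on a new thread
        std::future<int> f1 = std::async(std::launch::async, compute);

        // Defer execution until get() or wait() is called on the future
        std::future<int> f2 = std::async(std::launch::deferred, compute);

        // The proposed default policy: the implementation may pick either
        std::future<int> f3 = std::async(std::launch::async | std::launch::deferred, compute);

        return (f1.get() + f2.get() + f3.get()) == 126 ? 0 : 1;
    }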

N3125: Omnibus Memory Model and Atomics Paper

This paper addresses several National Body comments by updating the wording in the draft standard to better reflect the intent of the committee.

N3128: C++ Timeout Specification

There are several functions in the threading portion of the library that allow timeouts, such as the try_lock_for and try_lock_until member functions of the timed mutex types, and the wait_for and wait_until member functions of the future types. This paper clarifies what it means to wait for a specified duration (with the xxx_for functions), and what it means to wait until a specified time point (with the xxx_until functions). In particular, it clarifies what can be expected of the implementation if the clock is changed during a wait.

This paper also proposes replacing the old std::chrono::monotonic_clock with a new std::chrono::steady_clock. Whereas the only constraint on the monotonic clock was that it never went backwards, the steady clock cannot be adjusted, and always ticks at a uniform rate. This fulfils the original intent of the monotonic clock, but provides a clearer specification and name. It is also tied into the new wait specifications, since waiting for a duration requires a steady clock for use as a basis.
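
A minimal sketch of the distinction, using the proposed std::chrono::steady_clock name: wait_for takes a relative duration, whereas wait_until takes an absolute time point on a particular clock.

    #include <chrono>
    #include <condition_variable>
    #include <mutex>

    std::mutex m;
    std::condition_variable cv;
    bool ready = false;

    void wait_with_timeouts() {
        std::unique_lock<std::mutex> lk(m);

        // Wait for a relative duration: measured against a steady clock,
        // so adjusting the system clock does not stretch or shrink the wait
        cv.wait_for(lk, std::chrono::milliseconds(200), []{ return ready; });

        // Wait until an absolute time point on an explicitly chosen clock
        auto deadline = std::chrono::steady_clock::now() + std::chrono::seconds(1);
        cv.wait_until(lk, deadline, []{ return ready; });
    }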

N3129: Managing C++ Associated Asynchronous State

This paper tidies up the wording of the functions and classes related to the future types, and clarifies the management of the associated asynchronous state which is used to communicate e.g. between a std::promise and a std::future that will receive the result.
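
For example, a std::promise and the std::future obtained from it share a single associated asynchronous state; a minimal sketch:

    #include <future>
    #include <thread>

    int main() {
        std::promise<int> p;
        std::future<int> f = p.get_future();  // p and f now share the associated state

        std::thread producer([&p] {
            p.set_value(42);                  // store the result in the shared state
        });

        int result = f.get();                 // retrieve the result from the shared state
        producer.join();
        return result == 42 ? 0 : 1;
    }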

N3130: Lockable requirements for C++0x

This paper splits out the requirements for general lockable types away from the specific requirements on the standard mutex types. This allows the lockable concepts to be used to specify the requirements on a type to be used with the std::lock_guard and std::unique_lock class templates, as well as for the various overloads of the wait functions on std::condition_variable_any, without imposing the precise behaviour of std::mutex on user-defined mutex types. A hedged sketch of the idea follows.
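
In the sketch below, counting_mutex is a hypothetical user-defined type that satisfies the Lockable requirements simply by providing lock(), try_lock() and unlock(), and can therefore be used with std::lock_guard, std::unique_lock and std::condition_variable_any without behaving exactly like std::mutex.

    #include <condition_variable>
    #include <mutex>

    // Hypothetical user-defined Lockable type: wraps std::mutex and counts acquisitions
    class counting_mutex {
        std::mutex m;
        unsigned long count = 0;
    public:
        void lock()     { m.lock(); ++count; }
        bool try_lock() { if (!m.try_lock()) return false; ++count; return true; }
        void unlock()   { m.unlock(); }
        unsigned long lock_count() const { return count; }
    };

    counting_mutex cm;
    std::condition_variable_any cv;
    bool ready = false;

    void wait_for_ready() {
        std::unique_lock<counting_mutex> lk(cm);    // unique_lock accepts any Lockable
        cv.wait(lk, []{ return ready; });           // condition_variable_any does too
    }

    void set_ready() {
        std::lock_guard<counting_mutex> guard(cm);  // lock_guard needs only BasicLockable
        ready = true;
        cv.notify_one();
    }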

N3132: Mathematizing C++ Concurrency: The Post-Rapperswil Model

This paper provides a mathematical description for the C++0x memory model. A similar description was used to highlight some of the areas that are clarified by the omnibus memory model paper (N3125) described above.

N3136: Coherence Requirements Detailed

This paper introduces some simple coherence requirements to the memory model wording to make it clear that the sequence of values read for a given variable must be consistent across threads. The existence of a single modification order for each variable is a key component of the memory model, and the wording introduced in this paper makes it clear that this is a core requirement.
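
As a hedged illustration of what that guarantee rules out: if two threads store to the same atomic variable, every thread that reads it must observe those stores as part of one and the same modification order.

    #include <atomic>

    std::atomic<int> x{0};

    // The two stores below form a single modification order for x (for example
    // 0,1,2 or 0,2,1), and that order is the same for every thread.
    void writer1() { x.store(1, std::memory_order_relaxed); }
    void writer2() { x.store(2, std::memory_order_relaxed); }

    // If one reader observes 1 and then 2, no reader in the same execution may
    // observe 2 and then 1: successive reads of x never move backwards in its
    // modification order.
    void reader(int& first, int& second) {
        first  = x.load(std::memory_order_relaxed);
        second = x.load(std::memory_order_relaxed);
    }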

N3137: C and C++ Liaison: Compatibility for Atomics

The structure of the atomic types and operations in the FCD was carefully worked out in conjunction with the C standards committee to ensure that the C++0x atomic types were compatible with those being introduced in the upcoming C1x standard. Unfortunately, the C committee introduced a new incompatible syntax for atomic types into the C1x draft earlier this year because they believed it was a better match for the C language.

This paper attempts to address this new incompatibility by removing the atomic_xxx types that were originally added for C compatibility, leaving just the std::atomic<T> class template. Also, a new _Atomic(T) macro is introduced for compatibility with the new C1x _Atomic keyword.

Other papers

As already mentioned, this mailing contains a new C++0x Working Draft, along with the usual post-meeting stuff — editor's notes for the changes in the new draft, new issues lists, minutes of the meeting, etc. It also contains a complete list of the National Body Comments on the FCD, and a few other papers addressing National Body comments.

If you have any opinions on the resolution of any NB comments not yet formally accepted or rejected, please add them to the comments for this post.

Posted by Anthony Williams
[/cplusplus/] permanent link

If you liked this post, why not subscribe to the RSS feed or follow me on Twitter? You can also subscribe to this blog by email using the form on the left.

Definitions of Non-blocking, Lock-free and Wait-free

Tuesday, 07 September 2010

There have repeatedly been posts on comp.programming.threads asking for a definition of these terms. To write good multithreaded code you really need to understand what these mean, and how they affect the behaviour and performance of algorithms with these properties. I thought it would therefore be useful to provide some definitions.

Definition of Blocking

A function is said to be blocking if it calls an operating system function that waits for an event to occur or a time period to elapse. Whilst a blocking call is waiting the operating system can often remove that thread from the scheduler, so it takes no CPU time until the event has occurred or the time has elapsed. Once the event has occurred then the thread is placed back in the scheduler and can run when allocated a time slice. A thread that is running a blocking call is said to be blocked.

Mutex lock functions such as std::mutex::lock(), and EnterCriticalSection() are blocking, as are wait functions such as std::future::wait() and std::condition_variable::wait(). However, blocking functions are not limited to synchronization facilities: the most common blocking functions are I/O facilities such as fread() or WriteFile(). Timing facilities such as Sleep(), or std::this_thread::sleep_until() are also often blocking if the delay period is long enough.

Definition of Non-blocking

Non-blocking functions are just those that aren't blocking. Non-blocking data structures are those on which all operations are non-blocking. All lock-free data structures are inherently non-blocking.

Spin-locks are an example of non-blocking synchronization: if one thread has a lock then waiting threads are not suspended, but must instead loop until the thread that holds the lock has released it. Spin locks and other algorithms with busy-wait loops are not lock-free, because if one thread (the one holding the lock) is suspended then no thread can make progress.
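
A minimal sketch of such a spin-lock, built on std::atomic_flag: waiting threads stay runnable and burn CPU in a loop, so nothing blocks in the operating-system sense, but if the lock holder is suspended nobody else can proceed.

    #include <atomic>

    std::atomic_flag lock_flag = ATOMIC_FLAG_INIT;

    void spin_lock() {
        // Busy-wait until the flag was previously clear (i.e. the lock was free)
        while (lock_flag.test_and_set(std::memory_order_acquire)) {
            // if the thread currently holding the lock is suspended here,
            // every waiter keeps spinning and no progress is made
        }
    }

    void spin_unlock() {
        lock_flag.clear(std::memory_order_release);
    }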

Definition of lock-free

A lock-free data structure is one that doesn't use any mutex locks. The implication is that multiple threads can access the data structure concurrently without race conditions or data corruption, even though there are no locks — people would give you funny looks if you suggested that std::list was a lock-free data structure, even though it is unlikely that there are any locks used in the implementation.

Just because more than one thread can safely access a lock-free data structure concurrently doesn't mean that there are no restrictions on such accesses. For example, a lock-free queue might allow one thread to add values to the back whilst another removes them from the front, whereas multiple threads adding new values concurrently would potentially corrupt the data structure. The data structure description will identify which combinations of operations can safely be called concurrently.
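
As a hedged sketch of such a restriction, here is a hypothetical single-producer, single-consumer ring buffer: push() may run concurrently with pop(), but each of those functions must only ever be called from one thread at a time.

    #include <atomic>
    #include <cstddef>

    // Hypothetical single-producer, single-consumer queue: one thread calls push(),
    // a different thread calls pop(), and those two calls may overlap safely.
    template <typename T, std::size_t Size>
    class spsc_queue {
        T buffer[Size];
        std::atomic<std::size_t> head{0};  // next slot to read (owned by the consumer)
        std::atomic<std::size_t> tail{0};  // next slot to write (owned by the producer)
    public:
        bool push(const T& value) {        // producer thread only
            std::size_t t = tail.load(std::memory_order_relaxed);
            std::size_t next = (t + 1) % Size;
            if (next == head.load(std::memory_order_acquire))
                return false;              // queue full
            buffer[t] = value;
            tail.store(next, std::memory_order_release);
            return true;
        }

        bool pop(T& value) {               // consumer thread only
            std::size_t h = head.load(std::memory_order_relaxed);
            if (h == tail.load(std::memory_order_acquire))
                return false;              // queue empty
            value = buffer[h];
            head.store((h + 1) % Size, std::memory_order_release);
            return true;
        }
    };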

For a data structure to qualify as lock-free, if any thread performing an operation on the data structure is suspended at any point during that operation then the other threads accessing the data structure must still be able to complete their tasks. This is the fundamental restriction which distinguishes it from non-blocking data structures that use spin-locks or other busy-wait mechanisms.

Just because a data structure is lock-free it doesn't mean that threads don't have to wait for each other. If an operation takes more than one step then a thread may be pre-empted by the OS part-way through an operation. When it resumes the state may have changed, and the thread may have to restart the operation.

In some cases, the partially-completed operation would prevent other threads from performing their desired operations on the data structure until the operation is complete. In order for the algorithm to be lock-free, these threads must then either abort or complete the partially-completed operation of the suspended thread. When the suspended thread is woken by the scheduler it can then either retry or accept the completion of its operation as appropriate. In lock-free algorithms, a thread may find that it has to retry its operation an unbounded number of times when there is high contention.
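
For example, pushing a node onto a lock-free stack is typically written as a compare-and-swap loop: if another thread changes the head between the read and the update, the exchange fails and the operation is retried. A minimal sketch:

    #include <atomic>

    struct node {
        int value;
        node* next;
    };

    std::atomic<node*> stack_head{nullptr};

    void push(int value) {
        node* n = new node{value, stack_head.load(std::memory_order_relaxed)};
        // If another thread has updated the head since we read it, the exchange
        // fails, n->next is refreshed with the current head, and we go round again.
        // Under heavy contention a thread may retry an unbounded number of times,
        // but some thread always succeeds, so the stack as a whole makes progress.
        while (!stack_head.compare_exchange_weak(n->next, n,
                                                 std::memory_order_release,
                                                 std::memory_order_relaxed)) {
        }
    }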

If you use a lock-free data structure where multiple threads modify the same pieces of data and thus cause each other to retry then high rates of access from multiple threads can seriously cripple the performance, as the threads hinder each other's progress. This is why wait-free data structures are so important: they don't suffer from the same set-backs.

Definition of wait-free

A wait-free data structure is a lock-free data structure with the additional property that every thread accessing the data structure can complete its operation within a bounded number of steps, regardless of the behaviour of other threads. Algorithms that can involve an unbounded number of retries due to clashes with other threads are thus not wait-free.
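
For example, handing out tickets with fetch_add is wait-free on platforms where that compiles to a single atomic instruction: every caller finishes in a bounded number of steps no matter what the other threads are doing. A minimal sketch:

    #include <atomic>

    std::atomic<unsigned long> counter{0};

    // Each call performs one atomic read-modify-write and returns: there is no
    // retry loop, so (assuming fetch_add is a single atomic instruction on the
    // target architecture) the number of steps is bounded regardless of contention.
    unsigned long next_ticket() {
        return counter.fetch_add(1, std::memory_order_relaxed);
    }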

This property means that high-priority threads accessing the data structure never have to wait for low-priority threads to complete their operations on the data structure, and every thread will always be able to make progress when it is scheduled to run by the OS. For real-time or semi-real-time systems this can be an essential property, as the indefinite wait-periods of blocking or non-wait-free lock-free data structures do not allow their use within time-limited operations.

The downside of wait-free data structures is that they are more complex than their non-wait-free counterparts. This imposes an overhead on each operation, potentially making the average time taken to perform an operation considerably longer than the same operation on an equivalent non-wait-free data structure.

Choices

When choosing a data structure for a given task you need to think about the costs and benefits of each of the options.

A lock-based data structure is probably the easiest to use, reason about and write, but it limits the potential for concurrency. It may also be the fastest in low-load scenarios.

A lock-free (but not wait-free) data structure has the potential to allow more concurrent accesses, but with the possibility of busy-waits under high loads. Lock-free data structures are considerably harder to write, and the additional concurrency can make reasoning about the program behaviour harder. They may be faster than lock-based data structures, but not necessarily.

Finally, a wait-free data structure has the maximum potential for true concurrent access, without the possibility of busy waits. However, these are very much harder to write than other lock-free data structures, and typically impose an additional performance cost on every access.

Posted by Anthony Williams
[/threading/] permanent link

If you liked this post, why not subscribe to the RSS feed or follow me on Twitter? You can also subscribe to this blog by email using the form on the left.

