Blog Archive

Locks, Mutexes, and Semaphores: Types of Synchronization Objects

Tuesday, 21 October 2014

I recently got an email asking about locks and different types of synchronization objects, so I'm posting this entry in case it is of use to others.

Locks

A lock is an abstract concept. The basic premise is that a lock protects access to some kind of shared resource. If you own a lock then you can access the protected shared resource. If you do not own the lock then you cannot access the shared resource.

To own a lock, you first need some kind of lockable object. You then acquire the lock from that object. The precise terminology may vary. For example, if you have a lockable object XYZ you may:

  • acquire the lock on XYZ,
  • take the lock on XYZ,
  • lock XYZ,
  • take ownership of XYZ,
  • or some similar term specific to the type of XYZ

The concept of a lock also implies some kind of exclusion: sometimes you might be unable to take ownership of a lock, and the operation to do so will either fail, or block. In the former case, the operation will return some error code or exception to indicate that the attempt to take ownership failed. In the latter case, the operation will not return until it has taken ownership, which typically requires that another thread in the system does something to permit that to happen.

The most common form of exclusion is a simple numerical count: the lockable object has a maximum number of owners. If that number has been reached, then any further attempt to acquire a lock on it will be unable to succeed. This therefore requires that we have some mechanism of relinquishing ownership when we are done. This is commonly called unlocking, but again the terminology may vary. For example, you may:

  • release the lock on XYZ,
  • drop the lock on XYZ,
  • unlock XYZ,
  • relinquish ownership of XYZ,
  • or some similar term specific to the type of XYZ

When you relinquish ownership in the appropriate fashion, a blocked operation that is trying to acquire the lock may now proceed, provided the required conditions have been met.

For example if a lockable object only allows 3 owners then a 4th attempt to acquire the lock will block. When one of the first 3 owners releases the lock then that 4th attempt to acquire the lock will succeed.

Ownership

What it means to "own" a lock depends on the precise type of the lockable object. For some lockable objects there is a very tight definition of ownership: this specific thread owns the lock, through the use of that specific object, within this particular scope.

In other cases, the definition is more fluid, and the ownership of the lock is more conceptual. In these cases, ownership can be relinquished by a different thread or object than the thread or object that acquired the lock.

Mutexes

Mutex is short for MUTual EXclusion. Unless qualified with additional terms such as shared mutex, recursive mutex or read/write mutex, it refers to a type of lockable object that can be owned by exactly one thread at a time. Only the thread that acquired the lock can release it. While the mutex is locked, any attempt to acquire the lock will fail or block, even if that attempt comes from the same thread.

Recursive Mutexes

A recursive mutex is similar to a plain mutex, but one thread may own multiple locks on it at the same time. If a lock on a recursive mutex has been acquired by thread A, then thread A can acquire further locks on the recursive mutex without releasing the locks already held. However, thread B cannot acquire any locks on the recursive mutex until all the locks held by thread A have been released.

In most cases, a recursive mutex is undesirable, since it makes it harder to reason correctly about the code. With a plain mutex, if you ensure that the invariants on the protected resource are valid before you release ownership, then you know that those invariants will be valid whenever you acquire ownership.

With a recursive mutex this is not the case: being able to acquire the lock does not mean that the lock was not already held by the current thread, and therefore does not imply that the invariants are valid.
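As a minimal sketch of that problem (the function names are mine): when set_value is called from update, the lock is already held, so the fact that set_value can acquire the lock tells it nothing about whether the invariants currently hold.

#include <mutex>

std::recursive_mutex m;
int shared_value=0;

void set_value(int x){
    std::lock_guard<std::recursive_mutex> guard(m); // second lock on the same thread succeeds
    shared_value=x;
}

void update(){
    std::lock_guard<std::recursive_mutex> guard(m); // first lock
    set_value(shared_value+1); // re-locks m recursively while the first lock is still held
}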

Reader/Writer Mutexes

Sometimes called shared mutexes, multiple-reader/single-writer mutexes or just read/write mutexes, these offer two distinct types of ownership:

  • shared ownership, also called read ownership, or a read lock, and
  • exclusive ownership, also called write ownership, or a write lock.

Exclusive ownership works just like ownership of a plain mutex: only one thread may hold an exclusive lock on the mutex, and only that thread can release the lock. No other thread may hold any type of lock on the mutex whilst that thread holds its lock.

Shared ownership is more lax. Any number of threads may take shared ownership of a mutex at the same time. No thread may take an exclusive lock on the mutex while any thread holds a shared lock.

These mutexes are typically used for protecting shared data that is seldom updated, but cannot be safely updated if any thread is reading it. The reading threads thus take shared ownership while they are reading the data. When the data needs to be modified, the modifying thread first takes exclusive ownership of the mutex, thus ensuring that no other thread is reading it, then releases the exclusive lock after the modification has been done.

Spinlocks

A spinlock is a special type of mutex that does not use OS synchronization functions when a lock operation has to wait. Instead, it just loops, repeatedly trying to update the mutex data structure to take the lock.

If the lock is not held very often, and/or is only held for very short periods, then this can be more efficient than calling heavyweight thread synchronization functions. However, if the processor has to loop too many times then it is just wasting time doing nothing, and the system would do better if the OS scheduled another thread with active work to do in place of the thread that is failing to acquire the spinlock.
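As an illustration, a minimal spinlock might be built on std::atomic_flag like this (a sketch only: the class name is mine, and there is no back-off or yielding):

#include <atomic>

class spinlock{
    std::atomic_flag flag;
public:
    spinlock():flag(ATOMIC_FLAG_INIT){}
    void lock(){
        while(flag.test_and_set(std::memory_order_acquire)){
            // spin: keep retrying until the flag was previously clear
        }
    }
    void unlock(){
        flag.clear(std::memory_order_release);
    }
};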

Semaphores

A semaphore is a very relaxed type of lockable object. A given semaphore has a predefined maximum count, and a current count. You take ownership of a semaphore with a wait operation, also referred to as decrementing the semaphore, or even just abstractly called P. You release ownership with a signal operation, also referred to as incrementing the semaphore, a post operation, or abstractly called V. The single-letter operation names are from Dijkstra's original paper on semaphores.

Every time you wait on a semaphore, you decrease the current count. If the count was greater than zero then the decrement just happens, and the wait call returns. If the count was already zero then it cannot be decremented, so the wait call will block until another thread increases the count by signalling the semaphore.

Every time you signal a semaphore, you increase the current count. If the count was zero before you called signal, and there was a thread blocked in wait then that thread will be woken. If multiple threads were waiting, only one will be woken. If the count was already at its maximum value then the signal is typically ignored, although some semaphores may report an error.

Whereas mutex ownership is tied very tightly to a thread, and only the thread that acquired the lock on a mutex can release it, semaphore ownership is far more relaxed and ephemeral. Any thread can signal a semaphore, at any time, whether or not that thread has previously waited for the semaphore.

An analogy

A semaphore is like a public lending library with no late fees. They might have 5 copies of C++ Concurrency in Action available to borrow. The first five people that come to the library looking for a copy will get one, but the sixth person will either have to wait, or go away and come back later.

The library doesn't care who returns the books, since there are no late fees, but when they do get a copy returned, then it will be given to one of the people waiting for it. If no-one is waiting, the book will go on the shelf until someone does want a copy.

Binary semaphores and Mutexes

A binary semaphore is a semaphore with a maximum count of 1. You can use a binary semaphore as a mutex by requiring that a thread only signals the semaphore (to unlock the mutex) if it was the thread that last successfully waited on it (when it locked the mutex). However, this is only a convention; the semaphore itself doesn't care, and won't complain if the "wrong" thread signals the semaphore.

Critical Sections

In synchronization terms, a critical section is that block of code during which a lock is owned. It starts at the point that the lock is acquired, and ends at the point that the lock is released.

Windows CRITICAL_SECTIONs

Windows programmers may well be familiar with CRITICAL_SECTION objects. Despite the name, a CRITICAL_SECTION is a specific type of mutex (a lockable object), not a use of the general term critical section described above.

Mutexes in C++

The C++14 standard has five mutex types:

  • std::mutex,
  • std::timed_mutex,
  • std::recursive_mutex,
  • std::recursive_timed_mutex, and
  • std::shared_timed_mutex

The variants with "timed" in the name are the same as those without, except that the lock operations can have time-outs specified, to limit the maximum wait time. If no time-out is specified (or possible) then the lock operations will block until the lock can be acquired — potentially forever if the thread that holds the lock never releases it.

std::mutex and std::timed_mutex are just plain single-owner mutexes.

std::recursive_mutex and std::recursive_timed_mutex are recursive mutexes, so multiple locks may be held by a single thread.

std::shared_timed_mutex is a read/write mutex.
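As a quick illustration of the timed variants, a function might bound its wait rather than blocking indefinitely (a minimal sketch; the function name is mine):

#include <mutex>
#include <chrono>

std::timed_mutex m;

void update_with_timeout(){
    // wait at most 100ms for the lock rather than blocking indefinitely
    if(m.try_lock_for(std::chrono::milliseconds(100))){
        // got the lock within the time limit: update the protected data
        m.unlock();
    } else {
        // timed out: the lock is still held elsewhere, so do something else
    }
}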

C++ lock objects

To go with the various mutex types, the C++ Standard defines a triplet of class templates for objects that hold a lock. These are:

  • std::lock_guard<>,
  • std::unique_lock<>, and
  • std::shared_lock<>

For basic operations, they all acquire the lock in the constructor, and release it in the destructor, though they can be used in more complex ways if desired.

std::lock_guard<> is the simplest type, and just holds a lock across a critical section in a single block:

std::mutex m;
void f(){
    std::lock_guard<std::mutex> guard(m);
    // do stuff
}

std::unique_lock<> is similar, except it can be returned from a function without releasing the lock, and can have the lock released before the destructor:

std::mutex m;
std::unique_lock<std::mutex> f(){
    std::unique_lock<std::mutex> guard(m);
    // do stuff
    return std::move(guard);
}

void g(){
    std::unique_lock<std::mutex> guard(f());
    // do more stuff
    guard.unlock();
}

See my previous blog post for more about std::unique_lock<> and std::lock_guard<>.

std::shared_lock<> is almost identical to std::unique_lock<> except that it acquires a shared lock on the mutex. If you are using a std::shared_timed_mutex then you can use std::lock_guard<std::shared_timed_mutex> or std::unique_lock<std::shared_timed_mutex> for the exclusive lock, and std::shared_lock<std::shared_timed_mutex> for the shared lock.

std::shared_timed_mutex m;
void reader(){
    std::shared_lock<std::shared_timed_mutex> guard(m);
    // do read-only stuff
}
void writer(){
    std::lock_guard<std::shared_timed_mutex> guard(m);
    // update shared data
}

Semaphores in C++

The C++ standard does not define a semaphore type. You can write your own with an atomic counter, a mutex and a condition variable if you need one, but most uses of semaphores are better replaced with mutexes and/or condition variables anyway.

Unfortunately, for those cases where semaphores really are what you want, using a mutex and a condition variable adds overhead, and there is nothing in the C++ standard to help. Olivier Giroux and Carter Edwards' proposal for a std::synchronic class template (N4195) might allow for an efficient implementation of a semaphore, but this is still just a proposal.
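In the meantime, a hand-rolled counting semaphore built from a mutex and a condition variable might look something like this (a minimal sketch, with no maximum count; the class and member names are mine):

#include <mutex>
#include <condition_variable>

class semaphore{
    std::mutex m;
    std::condition_variable cv;
    unsigned count;
public:
    explicit semaphore(unsigned initial_count):count(initial_count){}
    void wait(){ // the P operation: block until the count can be decremented
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock,[this]{return count>0;});
        --count;
    }
    void signal(){ // the V operation: increment the count and wake one waiter
        std::lock_guard<std::mutex> lock(m);
        ++count;
        cv.notify_one();
    }
};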

Posted by Anthony Williams
[/ threading /] permanent link

Comments on the C++ Concurrency TS

Wednesday, 28 May 2014

It's been a while since I wrote any papers for the C++ committee, but I've written two for the committee mailing prior to the upcoming meeting in Rapperswil:

The first provides comments, and suggestions for improvements on the concurrency TS based on implementing continuations for Just::Thread V2, and executors for an unreleased internal build of Just::Thread.

The second proposes to standardize the synchronized_value class template from Just::Thread Pro, with a couple of modifications.

Let me know if you have any comments.

Posted by Anthony Williams
[/ news /] permanent link

just::thread C++11 and C++14 Thread Library V2.0 released

Monday, 19 May 2014

I am pleased to announce that version 2.0 of just::thread, our C++11 and C++14 Thread Library has just been released with new features and support for new compilers.

This release includes the new std::shared_timed_mutex and std::shared_lock from C++14, which allow for multiple readers to hold a shared lock on a mutex or one writer to hold an exclusive lock.

Also included are extensions to the futures from the upcoming C++ Concurrency Technical Specification in the form of continuations. std::future<> and std::shared_future<> now have an additional member function "then()" which allows a further task to be scheduled when the future becomes "ready". This allows for improved support for asynchronous tasks.
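For example, a continuation might be attached along these lines (a sketch only: the continuation is invoked with the ready future as in the Concurrency TS draft, but the exact signatures may differ):

#include <future>
#include <iostream>

void example(){
    std::future<int> f=std::async([]{return 6*7;});
    auto done=f.then([](std::future<int> result){
        // runs once the original future is ready
        std::cout<<"answer: "<<result.get()<<"\n";
    });
    done.wait();
}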

There are also new functions, jss::when_any and jss::when_all, which encapsulate a set of futures into a single future that becomes ready when one (when_any) or all (when_all) of the provided futures become ready. This can be used with continuations to schedule asynchronous tasks to run when the prerequisites are ready.

Finally, a new lock wrapper jss::generic_lock_guard is provided. This is a concrete type rather than a template, and will lock any type of mutex which provides lock and unlock member functions.

This release also includes support for Microsoft Visual Studio 2013 and gcc 4.8.

Get your copy of Just::Thread

Purchase your copy and get started with the C++11 and C++14 thread library now.

Posted by Anthony Williams
[/ news /] permanent link

ACCU 2014 - slides

Monday, 19 May 2014

This year's ACCU conference was at the Marriott hotel in Bristol again. As ever, the conference itself was enjoyable, educational and exhausting in equal measure, and it was good to meet up with people again.

This year, I was presenting on "The continuing future of C++ concurrency", with an overview of the additions to the Standard C++ concurrency libraries proposed for the C++ concurrency TS and C++17, including: continuations, executors, and parallel algorithms, as well as std::shared_timed_mutex from C++14.

My presentation was well-attended, and there were lots of interesting questions.

The slides are available here.

Posted by Anthony Williams
[/ news /] permanent link

ACCU 2013 and the C++ Standards Meeting

Monday, 06 May 2013

This year's ACCU conference was at a new venue: the Marriott hotel in Bristol. This is a bit closer to home for me than the previous venue in Oxford, which made the trip there and back more comfortable. As ever, the conference itself was enjoyable, educational and exhausting in equal measure.

This year was also BSI's turn to host the Spring ISO C++ committee meeting, which was conveniently arranged to be the week following ACCU, in the same hotel. Having not attended a meeting since the last time the committee met in the UK, I was glad to be able to attend this too.

ACCU 2013

As usual, with 5 tracks running simultaneously, it was hard to choose which sessions to attend. I stuck mostly to C++-centric, or general software development sessions, but there were also sessions on a wide range of other topics, including Ruby, Java, Scala, Git, C#, testing, management and culture, amongst others.

I was invited to contribute to Pete Goodliffe's Becoming a Better Programmer panel session, which was well attended and entertaining, as usual for Pete's sessions. My contribution on "doing things mindfully" seemed well-received, but wasn't the most popular — that honour went to Seb Rose, though Brian Marick called out Anna-Jayne Metcalfe's contribution on "If it ain't broke, do fix it" in the keynote the next morning.

My presentation on "C++11 in the Real World" was also well attended, with some good questions from the audience. A couple of people have asked me for my slides: they are available from the ACCU website.

ISO C++ committee meeting

This was a landmark meeting, for several reasons. Firstly, there were over 100 attendees, making it one of the most well-attended ISO C++ meetings ever. Secondly, this bumper attendee count was complemented by a bumper batch of proposals and position papers to process, which meant that all the working groups were pressed for time, and people met for extra sessions in the evenings to try and get through them all. Finally, the committee voted to publish a new "CD", starting the formal process leading to a C++14 standard.

The meeting was 6 days long, but I was only able to attend for the first 2 days. Unsurprisingly, I spent my time in the Concurrency group (SG1). We had a lot of papers to discuss, and some of the discussions were quite involved. Ultimately, not many papers were forwarded to the full committee, and only one paper other than the basic defect-report fixes was approved.

Lawrence Crowl's paper on Stream Mutexes (N3535) was first up. The problem this paper is trying to address is ensuring that data written to a stream from multiple threads appears in a coherent order — though concurrent writes to e.g. std::cout are guaranteed not to yield undefined behaviour, the output may be interleaved in an arbitrary fashion. This got quite a bit of discussion over the course of the week, and was eventually submitted as a much-modified paper for writing chunks to a stream in an atomic fashion, which was voted down in full committee.

Herb Sutter's late paper on the behaviour of the destructor of std::future (N3630) was up next. This is a highly controversial topic, and it yielded much discussion. The crux of the matter is that, as currently specified, the destructor of std::future blocks if it came from an invocation of std::async, the asynchronous function was run on a separate thread (with the std::launch::async policy), and that thread has not yet finished. This is highly desirable in many circumstances, but Herb argued that there are other circumstances where it is less desirable, and this makes it hard to use std::future in some types of program.

Much of the discussion focused on the potential for breaking existing code, and ways of preventing this. The proposal eventually morphed into a new paper (N3637) which created 2 new types of future: waiting_future and shared_waiting_future. std::async would then be changed to return a waiting_future instead of a future. Existing code that compiled unchanged would then keep the existing behaviour; code that changed behaviour would fail to compile. Though the change required to get the desired behaviour would not be extensive, the feeling in the full committee was that this breakage would be too extensive, and the paper was also voted down in full committee.

Herb's original paper also included a change to the destructor of std::thread, so that the destructor of a joinable thread would call join() rather than std::terminate(). This was put up for vote as N3636, but again was voted down in full committee.

Like I said, there were lots of other papers up for discussion. Some were concrete proposals, whilst others were less complete, asking for feedback on the approach. Only one paper was approved for the C++14 time frame — whilst there was considerable interest in the idea behind some of the others, there was disagreement about the details, and nothing else was deemed ready. I look forward to seeing the revised versions of some of these proposals when they are ready, especially the executors, continuations and parallel algorithms papers.

The paper that did get approved was Howard Hinnant's paper on shared locking (N3568), but even that didn't go through unchanged. I have serious concerns about the upgrade_mutex proposed in the original paper, and while I didn't manage to get my concerns across via email (this was discussed after I left), there was not enough interest in including it in C++14. The approved paper (N3659) therefore included only shared_mutex and shared_lock, not upgrade_mutex, which is good. N3659 was also approved by the vote in full committee, so will be part of C++14.

Wrap up

Having the conference and ISO meeting back-to-back was intense, but I thoroughly enjoyed attending both. C++14 looks set to be a significant improvement over C++11 — though the individual changes are minor, they offer quite a bit in terms of improved usability of the language and library. See the trip reports by Herb Sutter and Michael Wong (part 2, part 3) for more details on the accepted papers.

Posted by Anthony Williams
[/ news /] permanent link

just::thread Pro: Actors Edition released

Tuesday, 23 April 2013

I am pleased to announce that the first release of Just::Thread Pro is here. The Actors Edition provides a framework for creating actors that run on separate threads, and communicate via message passing, as well as jss::synchronized_value for synchronizing access to a single object and jss::concurrent_map, a hash map that is safe for concurrent access from multiple threads.

See the overview for more information, or read the full documentation.

Get your copy of Just::Thread Pro: Actors Edition

Purchase your copy and get started now.

Posted by Anthony Williams
[/ news /] permanent link

ACCU 2013

Saturday, 06 April 2013

I'm presenting on "C++11 features and real-world code" at ACCU 2013 in Bristol on this coming Thursday, 11th April. Here's the abstract:

C++11 has many nifty features, but how do they actually impact developers at the code face? Which C++11 features offer the best bang for the buck?

In this session I'll look at a selection of C++11 language and library features that I've found of real practical benefit in application and library code, with examples of equivalent C++03 code.

The features covered will include the concurrency support (of course), lambdas, and "auto", amongst a variety of others.

Hope to see you there!

Posted by Anthony Williams
[/ news /] permanent link

Duplication in Software

Tuesday, 26 March 2013

Much has been said about the importance of reducing duplication in software. For example, J. B. Rainsberger has "minimizes duplication" as the second of his four "Elements of Simple Design", and lots of the teachings of the Agile community stress the importance of reducing duplication when refactoring code.

Inspired by Kevlin Henney's tweet last week, where he laments that programmers trying to remove duplication often take it literally, I wanted to talk about the different kinds of duplication in software. I've just mentioned "literal" duplication, so let's start with that.

Basic Literal Duplication

This is the most obvious form of duplication: sections of code which are completely identical. This most often arises due to copy-and-paste programming, but can often arise in the form of repetitive patterns — a simple for loop that is repeated in multiple places with the same body, for example.

Removing Literal Duplication

The easiest to create, literal duplication is also the easiest to remove: just extract a function that does the necessary operation.
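For example (a trivial sketch, with the names invented for illustration):

#include <stdexcept>
#include <string>

const std::string::size_type max_name_length=64;

// The duplicated check, previously copy-and-pasted at every call site,
// now lives in a single function that each call site uses instead.
void validate_name(std::string const& name){
    if(name.empty() || name.size()>max_name_length)
        throw std::invalid_argument("bad name");
}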

Sometimes, though the code is identical, the types involved are different. You cannot address this by extracting a simple function, so we have a new class of duplication.

Parametric Literal Duplication

Parametric literal duplication can also arise from copy-and-paste programming. The key feature is that the types of the variables are different so you cannot just reuse the code from one place in another, even if it was a nicely self-contained function. If you eliminate all the basic literal duplication, parametric literal duplication will give you sets of functions with identical structure but different types.

With the lack of a portable is_ready() function for std::future, it is common to test whether a future f is ready by writing f.wait_for(std::chrono::seconds(0))==std::future_status::ready. Since std::future is a class template, the types of the various futures that you may wish to check for readiness may vary, so you cannot extract a simple function. If you write this in multiple places you therefore have parametric literal duplication.

Removing Parametric Literal Duplication

There are various ways to remove parametric literal duplication. In C++, the most straightforward is probably to use a template, e.g.

template<typename T>
inline bool is_ready(std::future<T>& f){
    // take the future by reference: std::future is move-only, and passing it
    // by value would consume the caller's future just to check readiness
    return f.wait_for(std::chrono::seconds(0))==std::future_status::ready;
}

In other languages you might choose to use generics, or rely on duck-typing. You might also do it by extracting an interface and using virtual function calls, but that requires that you can modify the types of the objects, or are willing to write a facade.

Parametric literal duplication is closely related to what I call Structural Duplication.

Structural Duplication

This is where the overall pattern of some code is the same, but the details differ. For example, a for loop that iterates over a container is a common structure, but the loop body varies from loop to loop, e.g.:

std::vector<int> v;

int sum=0;
for(std::vector<int>::iterator it=v.begin();it!=v.end();++it){
    sum+=*it;
}
for(std::vector<int>::iterator it=v.begin();it!=v.end();++it){
    std::cout<<*it<<std::endl;
}

You can't just extract the whole loop into a separate function because the loop body is different, but that doesn't mean you can't do anything about it.

Removing Structural Duplication

One common way to remove such duplication is to extract the commonality with the template method pattern, or create a parameterized function where the details are passed in as a function to call.

For simple loops like the ones above, we have std::for_each, and the new-style C++11 for loops:

std::for_each(v.begin(),v.end(),[&](int x){sum+=x;});
std::for_each(v.begin(),v.end(),[](int x){std::cout<<x<<std::endl;});

for(int x:v){
    sum+=x;
}
for(int x:v){
    std::cout<<x<<std::endl;
}

Obviously, if your repeated structure doesn't match the standard library algorithms then you must write your own, but the idea is the same: take a function parameter which is a callable object and which captures the variable part of the structure. For a loop, this is the loop body. For a sort algorithm it is the comparison, and so forth.
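For example, a repeated "try this up to N times" structure can be captured by a function template that takes the varying operation as a callable (a sketch; the retry name and the usage shown are mine):

template<typename Func>
bool retry(unsigned attempts,Func f){
    // The fixed structure (the retry loop) lives here; the varying part
    // (the operation to attempt) is supplied by the caller as a callable.
    for(unsigned i=0;i<attempts;++i){
        if(f())
            return true;
    }
    return false;
}

// usage, where send_message is some operation that reports success or failure:
// bool ok=retry(3,[&]{return send_message(msg);});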

Temporal Duplication

This is where some code only appears once in the source code, but is executed repeatedly, and the only desired outcome is the computed result, which is the same for each invocation. For example, the call to v.size() or v.end() to find the upper bound of an iteration through a container.

std::vector<int> v;
for(unsigned i=0;i<v.size();++i)
{
    do_stuff(v[i]);
}

It doesn't just happen in loops, though. For example, in a function that inserts data into a database table you might build a query object, run it to insert the data, and then destroy it. If this function is called repeatedly then you are repeatedly building the query object and destroying it. If your database library supports parameterization then you may well be able to avoid this duplication.

Removing Temporal Duplication

The general process for removing temporal duplication is to use some form of caching or memoization — the value is computed once and then stored, and this stored value is used in place of the computation for each subsequent use. For loops, this can be as simple as extracting a variable to hold the value:

for(unsigned i=0,end=v.size();i!=end;++i){
    do_stuff(v[i]);
}

For other things it can be more complex. For example, with the database query example above, you may need to switch to using a parameterized query so that on each invocation you can bind the new values to the query parameters, rather than building the query around the specific parameters to insert.

Duplication of Intent

Sometimes the duplication does not appear in the actual code, but in what the code is trying to achieve. This often occurs in large projects where multiple people have worked on the code base. One person writes some code to do something in one source file, and another writes some code to do the same thing in another source file, but different styles mean that the code is different even though the result is the same. This can also happen with a single developer if the different bits are written with a large enough gap, such that you cannot remember what you did before and your style has changed slightly. To beat the loop iteration example to death, you might have some code that loops through a container by index, and other code that loops through the same container using iterators. The structure is different, but the intent is the same.

Removing Duplication of Intent

This is one of the hardest types of duplication to spot and remove. The way to remove it is to refactor one or both of the pieces of code until they have the same structure, and are thus more obviously duplicates of one-another. You can then treat them either as literal duplication, parametric literal duplication or structural duplication as appropriate.

Incidental Duplication

This is where there is code that looks identical but has a completely different meaning in each place. The most obvious form of this is with "magic numbers" — the constant "3" in one place typically has a completely different meaning to the constant "3" somewhere else.

Removing Incidental Duplication

You can't necessarily eliminate incidental duplication entirely, but you can minimize it with good naming. By using symbolic constants instead of literals, it is clear that different uses are distinct because the name of each constant is distinct. There will still be duplication of the literal in the definitions of the constants, but this is now less problematic.
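For example (a trivial sketch):

// Both constants happen to have the value 3, but the uses are unrelated,
// and the distinct names make that clear at each use site.
const unsigned max_login_attempts=3;    // security policy
const unsigned vertices_per_triangle=3; // geometry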

In the case that this incidental duplication is not just a constant, you can extract separate named functions that encapsulate this duplicate code, and express the intent in each case. The duplication is now just between these function bodies rather than between the uses, and the naming of the functions makes it clear that this is just incidental duplication.

Conclusion

There are quite a few types of duplication that you may get in your code. By eliminating them you will tend to make your code shorter, clearer, and easier to maintain.

If you can think of any types of duplication I've missed, please add a comment.

Posted by Anthony Williams
[/ design /] permanent link

just::thread C++11 Thread Library V1.8.2 released

Tuesday, 06 November 2012

I am pleased to announce that version 1.8.2 of just::thread, our C++11 Thread Library has just been released.

This release adds support for gcc 4.7.2, and consequently official support for Ubuntu Quantal and Fedora 17.

just::thread is now available for the following compilers:

  • Microsoft Visual Studio 2005, 2008, 2010 and 2012 for both 32-bit and 64-bit Windows,
  • TDM gcc 4.5.2 and 4.6.1 for both 32-bit and 64-bit Windows,
  • g++ 4.3, 4.4, 4.5, 4.6 and 4.7 (4.7.2 or later) for both 32-bit and 64-bit Linux (x86/x86_64), and
  • MacPorts g++ 4.3, 4.4, 4.5, 4.6 and 4.7 (4.7.2 or later) for 32-bit and 64-bit MacOSX.

Get your copy of Just::Thread

Purchase your copy and get started with the C++11 thread library now.

As usual, existing customers are entitled to a free upgrade to V1.8.2 from all earlier versions.

Posted by Anthony Williams
[/ news /] permanent link

just::thread C++11 Thread Library V1.8.0 vs Microsoft Visual Studio 2012

Thursday, 06 September 2012

I am pleased to announce that version 1.8.0 of just::thread, our C++11 Thread Library has just been released.

This release adds official support for Microsoft Visual Studio 2012, as well as providing some minor bug fixes and improvements across the board.

Some people have asked how Just::Thread compares to the thread-library support in Microsoft Visual Studio 2012, given that VS2012 now provides the new C++11 concurrency headers, so I ran some of the Just::Thread tests against the VS2012 library. It turns out that there are quite a few places where Just::Thread has better conformance than VS2012, so if you're making heavy use of the C++11 thread library then upgrading to Just::Thread is an essential investment.

VS2012 thread library conformance issues

Here is a list of some of the areas where Just::Thread provides better conformance than VS2012. Some of these can be worked around; others are important for correctly-functioning code. This is just a sample, not a comprehensive list.

  • With the VS2012 library, you cannot use move-only types with std::promise, and std::async doesn't work with functions that return move-only types.
  • With the VS2012 library, std::thread doesn't work with move-only argument types.
  • With the VS2012 library, the wait_for and wait_until functions return incorrect values when used with a std::future that comes from a std::promise.
  • With the VS2012 library, when std::async is used with a launch policy of std::launch::async, the destructor of the returned std::future instance does not wait for the thread to complete.
  • With the VS2012 library, std::unique_lock does not check whether or not it owns the lock before calling operations on the underlying mutex, triggering undefined behaviour rather than throwing an exception in many cases.
  • With the VS2012 library, the std::atomic<> class template cannot be used on types without a default constructor.
  • With the VS2012 library, std::launch and other strongly-typed enums such as std::future_status are emulated with a namespace-scoped enum rather than a strongly-typed enum.

In all these cases (and more), Just::Thread conforms with the standard.

Just::Thread optimizations

Just::Thread also offers various optimizations over the VS2012 thread library such as the following.

  • The return value from a task run with std::async is copied/moved fewer times, and moved where possible.
  • A function object passed to std::thread is copied or moved fewer times.
  • The task passed to std::async is destroyed as soon as the task is completed, even if there are outstanding futures that reference the result.

Again, this is not a comprehensive list. Just::Thread has been carefully optimized to ensure common use cases have the best performance possible whilst remaining conformant to the C++11 standard.

Get your copy of Just::Thread

Purchase your copy and get started with the C++11 thread library now.

As usual, existing customers are entitled to a free upgrade to V1.8.0 from all earlier versions.

Posted by Anthony Williams
[/ news /] permanent link
