CPP2011 » History » Version 112

Kevin Lynch, 12/19/2014 06:22 PM
inline namespaces update

C++ 2011/2014

C++ is a language governed by an international standard (INCITS/ISO/IEC 14882:2012). A large ISO committee (JTC1/SC22/WG21) meets to add new functionality to the language and its Standard Library, resolve defects, and publish the standard. Then the compiler writers scramble to comply with it. A new standard came out in 2011 which includes some very impressive features we all should be using; a minor revision with some very important cleanups issued in 2014. The new features mainly focus on making C++ code faster, safer (more likely to be correct), more self-documenting, and more expressive. The previous version of the standard (ISO/IEC 14882:1998) is known as C++98 (or C++03, as a "bug fix" revision was published then). The current standard is sometimes still referred to as C++0x, as it was targeted for 2008 ratification, but slipped; the Committee has overhauled its procedures, and the next major revision of the standard is currently targeted for 2017 ratification. Standardization activity appears to this outsider to be extremely active and vibrant, across a large number of issues. The committee maintains a very active website where you can follow all the goings-on.

Here, I focus on some relevant features added to the language with the 2011 and 2014 updates. These notes are not intended to teach C++ to a novice; there are many excellent references. If you are new at the whole C++ design and programming thing, you might want to look at Stroustrup's Programming: Principles and Practice using C++. A valuable (although somewhat older, now) text is Accelerated C++ by Andrew Koenig and Barbara Moo.

An overview of all the new C++2011 features may be found at Wikipedia. Bjarne Stroustrup (who designed the original version of C++, and remains extremely active in its ongoing development) maintains a detailed FAQ for the C++11 changes, and Alex Sinyakov has a nice set of slides that describes the new feature set. Excellent C++ references are also available online, as are good short guides to C++11 "best practices".

The definitive guide and reference for the language is The C++ Programming Language, 4th Edition by Bjarne Stroustrup. He has posted a few chapters from the book as a "Tour of C++", which has since been published as an independent volume. The TCPL book is not for beginners; from the Preface: "In this book, I aim for completeness. I describe every language feature and standard-library component that a professional programmer is likely to need. ... The level of detail is chosen for the expert programmer." Stroustrup also has a very nice (if long) Keynote address from the Going Native 2012 conference that you might want to watch when you're ready to start thinking about writing C++ code at a higher level.

The definitive guide to the Standard Library is The C++ Standard Library: A Tutorial and Reference (2nd Edition) by Nicolai M. Josuttis; it is currently being updated for C++11/14. An excellent guide to modern C++ practice is Effective Modern C++ by Scott Meyers.

The Holy Standard itself is available as a 1400 page PDF (the previous version was 768 pages), pretty cheaply, from the ANSI online store; you probably don't need a copy, but can get a copy if you're a masochist or desire to become a language lawyer. It is most definitely not a document where you will learn how to use the language, but it will tell you what the compiler is supposed to be doing, if you can penetrate the "legalese".

If you have questions that your colleagues can not answer, a very active and useful Q&A site is StackOverflow; look for the tags "C++" and "C++11". A good C++ FAQ is also available online.

Take online references that refer to C++0X features with a grain of salt: they were published for many years, and major changes occurred in a number of proposed and modified features over the years. If you have a choice, use only materials referring to C++11/14.

There are many hundreds of changes to the language and library since C++98. Some of the most important for us are gathered below. The headings are annotated as to the nature of the change (language vs library), and the (subjective) importance (huge, major, minor) of the feature. The few items I selected as having Huge importance are very definitely going to have an enormous impact on how we write code in the C++11 era.

While this wiki page is not a solo endeavor, most of it was written by Kevin Lynch for use (originally) by the g-2 Collaboration software group. Contact me with questions, comments, criticisms, etc at klynch at york dot cuny dot edu.

Automatic type deduction (auto and decltype) (Language, Huge)

C++ is a strongly typed language (unlike python) in that each variable has a type 1 and that type cannot change. Strong typing gives the compiler a significant amount of information about the semantics of your program, allowing it to help you write safer code. However, when you write C++, you expend many keystrokes writing that type. For example, if you have a map and you want to declare an iterator to its beginning point, you type in

std::map<std::string, MyDetector*>::const_iterator myMapIter = myMap.cbegin();

That's a lot of typing. And in fact you shouldn't really have to type it because the compiler already knows the return type of myMap.cbegin(). You are doing all that typing just to prove to the compiler that you know it too. 2 But this is silly. Why should we do work that the compiler already has to do for us? You can now type

auto myMapIter = myMap.cbegin();

and myMapIter is declared to have (essentially) the same type as the right hand side of the assignment.

Of course, since this is slightly less informative to the reader (although it does reduce the noise substantially), you should choose your variable names carefully so that the reader can figure out what their purpose is. You should also learn to write idiomatic code so that everyone knows what you mean. Finally, remember that code is meant to be read, so you shouldn't use auto if providing the type makes your code easier to understand:
auto d = 5.;

What's the type of d? Here, it's double ... but knowing that requires that you remember the type rules for floating point literals. Better to just name the type
double d = 5.;

I should point out, however, that Herb Sutter strongly recommends a different approach when you do want to commit to a specific type:
auto d = double{ 5. };

In some sense, this has three major benefits:
  1. By explicitly using double on the right hand side, you have committed to the exact type you want, without having to understand the details of type determination for literals,
  2. By using a brace initializer, you eliminate narrowing conversions (see below), and
  3. By using auto, you ensure that your variables are correctly initialized.

Sutter lays out a detailed argument for this style. Whether this advice catches on to become the idiomatic way to declare variables is yet to be seen.

The detailed semantics of auto are very similar (but not exactly identical) to the rules for template argument type deduction ... which is to say complicated. Generally, though, auto deduces a value type (one that makes copies of the RHS): top level const and volatile qualifiers are stripped from the type of the RHS, along with references. If you don't want that behavior, you need to modify the declaration so it doesn't:

int x = 0;
const int& d = x;
int f();
int& g();
int* h();

auto dd = d; // type of dd is int ... const and & stripped
auto i = f(); // the type of i is int
auto j = g(); // the type of j is int ... & stripped
auto& k = f(); // error: can't bind a non-const lvalue reference to the temporary returned by f
auto& l = g(); // int& ... & is stripped, but then put back
auto const m = f(); // int const
auto ph = h(); // int*
auto *ph2 = h(); // also int* ... the explicit * is redundant but allowed; ph2 has the same type as ph

Many more worked examples are available in the online references.

Sometimes, we need to know the static, or declared type of a variable or expression: enter decltype. 3 For variables, decltype deduces the full type of the variable:

int a;
const int b{0};
int& c = a;
const int& d{b};

decltype(a) aa;     // aa is an int
decltype(b) bb{0};  // bb is a const int ... and this won't compile without an initializer
decltype(c) cc = a; // cc is an int& ... references also need an initializer
decltype(d) dd{b};  // dd is a const int& ... again, this won't compile without an initializer

decltype will also deduce the type of a complex expression (something that's not just a variable declaration/definition)

const int a = 0;

decltype(a) ca = 1;    // ca is a const int
decltype((a)) cra = a; // (a) is a complex expression, so cra is a const int&

Here, the rules get somewhat tricky, and I'm not going to describe them. If you need to worry about them, go read some references.
More detailed discussions of auto and decltype are readily available in the online references.

Why two deduction mechanisms? auto is used to deduce the type of an initializer, while decltype is used to deduce the type of an expression. Expressions being a larger class than initializers, we need two related deduction mechanisms. Don't blame me for this complication - I didn't write the rules...

C++14 brings further changes, with decltype(auto), which does automatic type deduction using the decltype rules rather than the auto rules. This will be mostly useful in conjunction with automatic return type deduction. The details are pretty much beyond the scope of these notes.

Recommendations: Use auto deduction liberally, but (1) choose descriptive variable names, and (2) use idiomatic programming forms so that readers can quickly and easily figure out your intentions. However, avoid auto deduction for literals.

1 Actually, a name has two types: its static type (or declared type) and its dynamic type. In the expression

const std::exception& ex = std::runtime_error{"oops"};

the variable ex has static type const std::exception&, while it has dynamic type std::runtime_error&. (The const is needed to bind a reference to the temporary, and std::runtime_error requires a message argument.)
2 But, it's even worse! Sometimes, there's no way that you could have known the type - lambdas, for instance, or variadic template instantiations inside generic code.
3 Many pre-C++11 compilers had a (generally incompatible) extension, typeof, that performed this task. decltype was chosen to avoid the various incompatibilities between these extensions.
4 Incidentally, auto was chosen because it was already a keyword, albeit one that no one ever used. Its origins date to the early days of C, where it was an optional declaration specifier (along with register) which encouraged the compiler to put an "object" in "automatic" storage (on the current stack frame). But since it was optional, almost no one ever used it, and it was hijacked by the Committee for this much more productive use.

Range based for loops over containers (Language, Minor)

Looping over containers involves a lot of typing. For example,

std::map<std::string, MyDetector*>::const_iterator mapIter;
for ( mapIter = myMap.cbegin(); mapIter != myMap.cend(); ++mapIter ) {
  // the iterator points to the pair; let's do something with the detector part
}

You can replace this with

for ( auto const& entry : myMap ) {
  // entry IS a reference to the std::pair stored in the std::map
}

The online references give good explanations of the finer points.

In this "range based for statement", the C++ runtime performs (essentially) all the steps you would have performed by hand in the first loop, as well as dereferencing the iterator and assigning the result to entry. In fact, the standard (in 6.5.4) defines the semantics of ranged for through an equivalent for statement. 1 In this specific example, the type of entry is a const& to the value_type of the container. 2 As a const&, of course, you can't modify the value stored in the container, and that's almost always what you want.

Unless it isn't. You can also get a non-const reference to the entry, allowing you to mutate a container in place (if it so allows ... don't try this with an associative container like a std::map or std::unordered_map!):

std::vector<double> vd = {1., 2., 3., 4.};
for( auto& entry : vd )
   ++entry;

This increments the values of each element in the vector vd. You can obtain a copy of each element in a container of values (which you probably didn't mean to do...unless you did).
std::vector<double> vd = {1., 2., 3., 4.};
for( auto entry : vd )
   ++entry; // increments the copy, not the element

This loop is a no-op, as it doesn't modify the container. Finally, you can explicitly state the type, if you want or need fine control over details like type conversions (but again, you're probably making a mistake):
std::vector<int> vi = {1, 2, 3, 4};
for( double d : vi ){
   std::cout << d/2 << ' '; // each int element is converted to double
}

The new syntax also works for arrays (but you should probably be using a container instead)
int a[] = {1,2,3,4};
for( int const& i : a )
   std::cout << i << ' ';

While this new syntax is a really nice bit of syntactic sugar for a common use case, you still need the original for loops in many cases. For starters, you don't have a loop index. You also can't iterate simultaneously over multiple containers.

For reasons of performance (optimization and parallelization come to mind) and documentation, you should generally prefer using the Standard Library algorithms (or implementing your own!) to hand-crafted loops whenever possible. Using std::count_if, for example, is self-documenting, expresses intent, and is trivially parallelized, while

std::vector<double> vd = {........};
double limit = ...... ;
int count = 0;
for( auto const& d : vd )
   if( d > limit )
      ++count;
is none of the above. Use of algorithms is significantly easier in C++11 with the introduction of std::bind, std::function and lambdas (see below).
std::vector<double> vd = { ...... };
double limit = ...... ;
int count = std::count_if(begin(vd), end(vd), [limit](double d){ return d>limit; } );

Recommendations: In order: (1) prefer using standard library algorithms if possible, then (2) use the range based for over containers, and finally (3) use a regular for or while loop. In range based for, use auto const& unless you are sure you need a different behavior.

1 ... which itself is defined in 6.5.3 by an equivalent while statement
2 Probably. It's actually defined for a container Cont by auto deduction from the return type of Cont.begin() if it exists, or begin(Cont) (via ADL including the associated namespace std) if it doesn't. If the container is well behaved, then it's almost certainly as if you wrote typename Cont::value_type const& entry ... but it might not be if Cont returns proxy objects or has a mucked up definition of the value_type. However, for the standard containers, this isn't going to be a problem.

Brace initializer syntax (Language, Major)

The initialization rules of C++98 are a hairy mess. Primitive types, arrays of primitive types, and structs of primitive types inherit one set of rules from C, while classes inherit a different set of rules. Toss in implicit conversion rules, or try to initialize a vector with a set of values known at compile time, and you'll be pulling your hair out in no time. If you want to be horrified, google "C++ most vexing parse", and just try to follow the discussion.

C++11 introduces an (almost) completely new, uniform "brace initializer syntax" that provides near complete initializer uniformity. In addition to primitive types:

double d{5.};  // direct initializer syntax
int i{4};

and arrays and structs
double d[] = {1.,2.,3.,4.}; // aggregate initializer syntax
struct {double d; int i;} s {5., 3}; // direct initializer syntax

and classes!
class X {
public:
  X(double d, int i);
};

void f(X x){}

X x1{5., 4};     // direct initializer syntax
X x3 = X{5., 4}; // direct initializer syntax
f( {5., 4} );    // the braces initialize the function argument

Note the function call in the last line in particular. Use this type of initialization with care: code must be read, not just written!

Through the magic of std::initializer_list you can apply array-like initialization syntax to any standard container:

vector<double> v = { 1, 2, 3.456, 99.99 }; // interpreted as std::initializer_list<double> 
list<pair<string,string>> languages = {
    {"Nygaard","Simula"}, {"Richards","BCPL"}, {"Ritchie","C"}
}; // ditto


If you write your own containers, you should provide a constructor taking a std::initializer_list, or your users will come to hate you (the details are beyond the scope of these notes).

The syntax works everywhere, avoids the "vexing parse" and many other surprises. Most importantly, narrowing conversions are not applied!

int i = 5.4;  // i == 5 due to narrowing
int j{5.4};   // error!  narrowing conversion

Recommendations: Prefer the new initializer syntax to the old in nearly all contexts. You're going to be seeing a lot more braces in idiomatic C++.

unique_ptr and shared_ptr (Library, Major)

What's wrong with the following code?

void foo() {
   int* x = new int{5};
   throw 5;
   delete x; // never reached!
}

int main() {
   try {
      foo();
   } catch (...) {
      std::cout << "Oops\n";
   }
}

Syntactically, there is no problem: the compiler will happily generate crash free code that prints out "Oops". Unfortunately, due to the exception thrown between the heap allocation and the corresponding delete, the allocated int will be lost. Memory leak!

To fix this case, C++98 introduced a type called auto_ptr. For example,

std::auto_ptr<MyObject> ptr( new MyObject );

The contents of ptr would be deleted when ptr went out of scope, particularly during stack unwinding following an exception. This is an example of the RAII idiom (Resource Acquisition Is Initialization): let the compiler do the hard work, and rely on guarantees the language gives you on object lifetimes to simplify your code and simultaneously make it more robust. RAII is the core insight enabling modern C++ memory management and exception safety idioms.

If an auto_ptr changes hands then ownership of the pointer is transferred too, for example

int* i = new int;

std::auto_ptr<int> x(i);   // x points to the integer
std::auto_ptr<int> y;    

y = x;    // y now points to the integer, x now holds a null pointer

But the above is not very clear, and it violates the principle of least surprise: what looks like a copy assignment modifies the right hand side without so much as a compiler warning. That's not something you expect.

std::unique_ptr replaces std::auto_ptr in C++2011 (std::auto_ptr is now deprecated and should never be used in new code). unique_ptr relies on the new move semantics: it is the sole owner of an owned resource (usually a pointer, but also potentially an array). Semantically, a sole owner should not be copied (that would result in two owners), but it could explicitly transfer ownership. unique_ptr comes without copy constructor/assignment, but it does have a move constructor/assignment which permits compiler-checked ownership transfer (see below). unique_ptr is an example of the now ubiquitous C++ idiom of the resource handle, which owns and controls the lifetime of an underlying resource.

To hand off a unique_ptr, you must explicitly call std::move:

std::unique_ptr<int> a{ new int{5} };
std::unique_ptr<int> b;

b = a;  // Compiler error. Copy assignment not allowed.

b = std::move(a);   // This is explicit.  Move assignment

The ownership of the pointer has been transferred from a to b; a now holds a nullptr; the only thing you can do is destroy it, or move another unique_ptr into it.

The implicit move rule for returns (again, see below) allows you to return a unique_ptr from a function:

std::unique_ptr<int> ptrforsp(){
  return std::unique_ptr<int>{ new int{5} }; // OK, implicit move
}

By default, when a unique_ptr is destroyed (normally by going out of scope), delete is called on the held pointer. If that's not the right behavior - for example, the held object is allocated from a pool, or by malloc - you can provide a custom deleter.

On some rare occasions, unique ownership is the wrong semantics ... sometimes you need multiple handles that share ownership of the resource, but you still want RAII rather than manual intervention to do all the resource management for you. For this use case we have std::shared_ptr. When the last shared_ptr goes out of scope, the owned resource is deleted.

std::shared_ptr<int> p1{ new int{5} };
auto p2 = p1; // 2 owners (note: auto p2{p1} would deduce std::initializer_list in C++11/14)
auto p3 = p2; // 3 owners
auto p4 = p3; // 4 owners

At the end of the scope, p4 is destroyed, then p3, then p2. Finally, p1 is destroyed and delete is called on the held int. shared_ptr has some overhead compared to unique_ptr, overhead which is rarely needed.

While you can use the syntax above, you're better off not using an explicit new. Instead, prefer to use std::make_shared.

auto ptr = std::make_shared<int>(5); // equivalent to the initialization of p1 above

make_shared is self-documenting, and more efficient. std::make_unique makes its appearance in C++14. 1 You should never need a bare new/delete in your code again (and if you do, you're probably writing a widely useful library, and you don't need these notes to tell you this stuff)!

  1. Strongly prefer to not use pointers in C++11: do everything you can by value. When you really need to deal in pointers (runtime polymorphism and legacy interfaces), wrap them in a smart pointer handle inside the owner: by default, that should be std::unique_ptr. When you must traffic in bare pointers (in legacy code, like Geant4), you should consider using unique_ptr everywhere you can, and call the release member when you have to give up ownership. Use make_shared consistently, and make_unique when available.
  2. Non-owning raw pointers are still perfectly fine and idiomatic in new interfaces and implementations. Raw pointers signify that ownership is not transferred; they should be used in situations where you will observe only.

1 In the meantime, we can just copy this make_unique implementation from Herb Sutter

template<typename T, typename ...Args>
std::unique_ptr<T> make_unique( Args&& ...args ) {
    return std::unique_ptr<T>( new T( std::forward<Args>(args)... ) );
}


Lambdas and closures (Language, Huge)

When discussing the range based for above, I advised you to prefer using the Standard Library to hand crafted loops. In C++98, this advice was extremely hard to pull off, and quite tedious. Let's see why, and then look at what C++11 provides that makes library use nearly child's play.

Suppose you had a container filled with integers, and you want to count how many of them are divisible by 3. The Standard Library provides an algorithm to do just that, std::count_if, which is approximately 1

template <class InputIterator, class UnaryPredicate>
unsigned std::count_if (InputIterator first, InputIterator last, UnaryPredicate pred);

The type UnaryPredicate must be a type that can be applied to the value_type of the InputIterator. It must either be a function or a function object (functor) with a single argument operator()
template<typename T> class UnaryPredicate {
public:
  bool operator()(T t);
};

In our case, we might define a UnaryPredicate that tests divisibility as
class Divisible {
public:
   Divisible(unsigned u) : u_{u} {}
   bool operator()(int i) const { return i % u_ == 0; }
private:
   unsigned u_;
};

Then, we apply this as
int counted = std::count_if(begin(container), end(container), Divisible{3});

What's the problem? There are several: defining the functor class is full of tedious boilerplate, it's pretty opaque (the test is only one line buried in a mass of hundreds of characters), and it has to be defined far from the point of application. 2 All of these significantly reduce the reuse value of the standard library.

So, how do we fix this? Eliminate the boilerplate, reduce the volume of code needed, and define the function at the point of application. That's what lambda expressions provide. In C++11, the code above can be rewritten as

int counted = std::count_if(std::begin(container), std::end(container), [](int i){ return i % 3 == 0; } );

The construct
[](int i){ return i % 3 == 0; }

is the lambda expression. The compiler - not the programmer - writes the boilerplate code, generating a uniquely named type called the closure class with an operator() const member. At runtime, a closure object is instantiated from the closure class.

In the lambda expression, [] is called the lambda introducer, the (int i) is the standard argument list (and yes, you can have more than one argument), and the body appears, as usual, between the braces (and yes, you can have more than one statement). The return type is deduced from the type of the return statement, and you can have multiple returns as long as they all have the same type. If return type deduction is not appropriate, you can explicitly declare the return type using suffix return type syntax (see below). You can also omit the argument list if there are no arguments.

Lambdas generate code that is equivalent to writing a whole functor class. This is almost as simple as it could get! But it doesn't stop there! C++11 lambda expressions are actually much more powerful, because they are closures: they can capture the surrounding context at the site of the lambda expression, and "import" it into the lambda. 3 Some examples:

std::vector<double> vd = {....};
double const add_me = 7.5;
double accum = 0;

// add 7.5 to each element of vd, and output the sum.  This is "capture by value" 
std::for_each(begin(vd), end(vd), [add_me](double d){ std::cout << d+add_me << ' '; } );
// accumulate the sum of the values into  the accum in the calling function.  "Capture by reference" 
std::for_each(begin(vd), end(vd), [&accum](double d){ accum+=d; } );

You can also mix and match: capture all referenced variables by value (using the introducer [=]), by reference ([&]), some one way and some another ([add_me,&accum]), some explicitly and the rest implicitly ([=,&accum]), and even the this pointer within member functions ([this]). In a default capture ([=] or [&]), only those variables from enclosing scopes that are actually used by the call operator are captured. The captured variables are copied into similarly named member variables of the closure object; within the call operator, you aren't actually touching the original variables you captured from (but beware reference captures and pointers!). Since the closure's operator() is const by default, you can not modify the captured variables in the call operator; if you need this ability, mark the lambda expression with mutable:
double d = 5;

auto f = [d](){ d+=1; }; // Error! Assignment of read-only variable d!
auto g = [d]() mutable { d+=1; }; // OK
g(); // enclosing d still 5.
auto h = [&d](){ d+=1; };  // OK
h(); // enclosing d now 6!

If you return a lambda from a function, any variables captured by reference from local variables defined within the function have been destroyed ... you have dangling references!

In C++14, lambda expressions have been significantly enhanced relative to C++11. The operator() can use argument type deduction, essentially templatizing the closure to produce polymorphic lambdas:

std::for_each(begin(vd), end(vd), [add_me](auto d){ std::cout << d+add_me << ' '; } );

The introducer syntax has been significantly extended with the addition of init captures, allowing capture by move, naming of the closure member variables, and complicated capture expressions:
auto pw = std::make_unique<double>(5);
auto f = [pw = std::move(pw)](){ return *pw+2; };

std::string foo{"I am bar"};
auto g = [bar = foo](){ std::cout << bar << '\n'; };

auto h = [baz = f()+4](){ std::cout << baz << '\n'; };

An init capture is a strange beast: names on the left hand side of the assignment are in the scope of the closure object, while names on the right hand side are in the scope enclosing the lambda expression. Thus, in f, the LHS pw is not the same as the RHS pw. Since captures always copy (or move) state from the enclosing scope, the captured names have auto deduced types, and neither need nor are permitted to have their types named.

We've only scratched the surface of these powerful little buggers; lambdas are likely to have as much of an impact on idiomatic C++ as auto declarators and move semantics.

Recommendations: You will use lots of lambdas as algorithm predicates; learn the lambda syntax and use the algorithms in preference to for loops when possible. You will also find that lambda expressions combined with auto will dramatically simplify the definition of small functions.

1 Here's the actual declaration in the libstdc++ shipping with gcc 4.7.2

  template<typename _InputIterator, typename _Predicate>
    typename iterator_traits<_InputIterator>::difference_type
    count_if(_InputIterator __first, _InputIterator __last, _Predicate __pred);

2 Actually, we can define local classes inside the context of a function (did you even know such things existed?), and C++11 even allows them as template arguments. But that doesn't really help much, as it just gums up the control flow of the function.

3 If your eyes just bugged out of your head, you (like me) may be too much of a nerd ....

Function types, binders, and partial function application (Library, Major)

The header <functional> includes a library based implementation of function objects that can hold and execute any type 1 that can be applied with (), along with tools to create them. Consider the following set of free functions and class objects:

void free_func(double d, int i);
class X {
public:
   static void static_func(double d, int i);
   void member_func(double d, int i);
};

std::function< void (double,int) > f;
X x;

f = free_func; // can apply a free function
f = &free_func; // or by function pointer
f = &X::static_func; // a static function

Notice that this doesn't give a direct way to bind f to a member function, like X::member_func, because it needs an instantiated object x of type X to provide the context. For that, we need std::bind, which creates a function object
f = std::bind(&X::member_func, x, std::placeholders::_1, std::placeholders::_2);

We'll come back to bind in a minute. A final method of generating an "applyable" function object is to use lambdas and auto:
auto f = [](double d, int i) -> void {};

f(5.0, 4);

This generates a function object compatible with assignment to a std::function. And again, lambdas are closures, so you can capture local state!

Why does this exist? To make function-like callees into first class objects, which can be passed around like any other object. In C, where only free functions exist, you can pass function pointers with the right signatures to other functions. For instance, qsort takes a pointer to a comparison function:

void qsort(void *base, size_t nmemb, size_t size, int (*compare)(const void*, const void*));

The last argument, compare, has type int (*)(const void*, const void*), which is "pointer to function taking two const void pointers and returning an int". This is a truly horrifying and fragile type-unsafe interface, but it does demonstrate the idea: you pass a function into another function. In the C++ standard library algorithms, we pass in function-like objects all over the place: for comparison, counting, element mutation, etc. In most contexts in C++, you just don't care what the actual type of the "applyable thingy" is: free function, member function, static, whatever. You only care that it matches the signature the interface expects. You can now write a type checked interface in terms of std::function. Here's a dumb, but trivial example:
// define a function that takes as argument another function with signature int (*)(int)
int apply_it( int i, std::function< int(int) > f ){ return f(i); }
// define two free functions
int free_func(int j){ return j+2; }
double free_func2(int j){ return j+3; }
std::vector<double> free_func3(int j){ return {}; }

// in main, we try to use it:

std::cout << apply_it( 1, [](int i){ return i+1; } ) << '\n'; // 2
std::cout << apply_it( 1, [](int i, int j){ return i+1; } ) << '\n'; // error: wrong number of arguments
std::cout << apply_it( 1, free_func) << '\n'; // 3
std::cout << apply_it( 1, free_func2) << '\n'; // 4 (applies double->int conversion on return type of free_func2)
std::cout << apply_it( 1, free_func3) << '\n'; // error: no conversion from std::vector<double> -> int

Back to bind: it's much, much more powerful than just a member function binder. Sometimes, you have a function (or function object) where you know that you will always call it with the first argument being a certain value, or perhaps you have a two argument function, but you need a three argument function, or maybe even you want to reverse the argument order without the ability to rewrite the underlying code. These techniques are variously called partial application or currying:

void three_func(double, double, int);
void backwards_func(int, double);

auto f1 = std::bind(three_func, _1, _1, _2); // f1(4., 2) calls three_func(4., 4., 2)
auto f2 = std::bind(three_func, _1, 4., _2); // f2(5., 4) calls three_func(5., 4., 4)
auto f3 = std::bind(backwards_func, _2, _1); // f3(4., 5) calls backwards_func(5, 4.)

_1, _2, etc are argument placeholders, and are declared in the namespace std::placeholders, which you should import with a using directive before trying to use them. In the definition of f1, the placeholder _1 is bound to the first argument passed when f1 is called, and so on.

With overloads, we need to disambiguate manually:

void h(int);
void h(double);

auto f4 = std::bind(h, _1); // error, ambiguous: which h?
auto f4 = std::bind( static_cast<void(*)(int)>(h), _1 ); // icky function cast

There's one more wrinkle here you have to know about: std::bind takes its arguments by value, even if you specify a reference argument. So the following will not do what you were hoping:

void f(int& i) { ++i; }
int j = 1;
auto f1 = std::bind(f, j);
f1();
std::cout << j << '\n'; // prints "1", not "2" 

If you really intend to send in a reference, you need to tell bind that:
auto f2 = std::bind(f, std::ref(j) );
f2();
std::cout << j << '\n'; // prints "2", as expected

If you really want to pass a const&, you use std::cref. There are a handful of other helper functions to wrap up arguments that don't want to be copied, but they're a bit more esoteric; when you need them, you'll know.

Use these techniques judiciously: they're very powerful, but as with all powerful techniques, you risk making your code very hard to understand. I've used them extensively in call-back interfaces where the function signature is known, but not where the function to be applied will come from.

1 This is accomplished by a spiffy technique called type erasure. See for an introduction

Move semantics and rvalue references (Language/Library, Huge! Huge!!)

Move semantics and rvalue references are well explained in many places. Try the following

Being "well explained" doesn't mean "easy to understand" ... rvalue references in particular are very subtle, and move semantics interacts with various other optimizations.

Motivation: two examples

What happens in the C++98/03 abstract machine with the following code?

std::vector<std::string> fill_names() {
   std::vector<std::string> names;
   // lots of code to populate names with the list { "Adam", "Kevin", "Leah", "Brendan", "Peter" }
   return names;
}

std::vector<std::string> names = fill_names();

In the function fill_names, we construct and fill a vector with a list of names, and then return it by value; this almost certainly requires a bunch of allocations from the heap. On the RHS of the second line, we return an anonymous, temporary vector, which is copy constructed into names; that is, the copy constructor is called for the vector, and then for each of the strings in turn. That's a whole bunch more trips to the heap. Then, as soon as we hit the semicolon at the end of the statement, all of those temporaries are tossed in the garbage: destroyed in reverse order of construction. All we've done with them is to copy them and immediately throw them away. What a waste!

Luckily, in C++98 there are lots of tricks that the compiler can pull out to prevent many - and perhaps all - of those copies from actually happening ("copy elision" ... see below). But those tricks are not always available, and even when they are, to make sure they are used you have to understand the details and write your code so that the compiler can apply them ... and those rules are complex and subtle. When those copies can't be prevented, they can be expensive: they'll often involve multiple round trips to the heap. In addition to the costs of allocation, copying, and deallocation itself, there can be sizeable overhead in multi-threaded environments due to lock contention. All to build temporary objects that you never see and can't interact with.

The second motivation is that sometimes, you don't want types to be copyable: consider a thread class, or a non-reference counted smart pointer. Most of the realistic examples are handle classes that own or manage system resources; we've already talked about one of these, std::unique_ptr, but there are others: std::mutex, std::thread, etc. For these, having multiple copies managing the same resource is a recipe for disaster: double deletes, dangling pointers, crashed thread subsystems, etc. Still, you might want to pass those handles across function boundaries, put them into containers, etc.

C++98 has no good solution to these two major issues, and a counterexample in the futility of trying: std::auto_ptr. C++11 does have a solution, and the solution to both problems (and a number of other more esoteric issues) is the same mechanism: the rvalue reference and move semantics.

lvalues, rvalues, and temporaries

In C++ there are really only two types of values: those you can take the address of, and those you can't. Roughly speaking, to take the address of something, you need to be able to name it. Those you can name and take the address of are called lvalues, 1 and everything else is called an rvalue. Only lvalues can appear on the left side of an assignment:

int n; // n is an lvalue
5; // 5 is an rvalue
n = 5; // OK
5 = n; // error: can't assign to an rvalue
int* p = &n; // OK: p is an lvalue
&n = p; // error: &n is an rvalue, and can't be the target of an assignment.  
        // Of course! The storage for n is fixed and can't be moved.

In particular, all anonymous, temporary objects are rvalues.

Whoa! Remember that our copying problem above was, at base, the requirement that we copy and then destroy anonymous, temporary objects. If the language allowed us to identify those temporaries, and handle them differently than we do lvalues, we could write our code to steal the expensive resources from the temporary and avoid all the extra allocation/deallocation nonsense. Does it? Let's look at the C++98 tools: variables, pointers, and references:
  • variables are names for a value (object!) in an identifiable memory location:
    int n;
    std::string s;
    std::vector<std::vector<double>> vvd;

    These things are all lvalues: they have names, and you can take their address. When the target of an assignment, they copy the referent ... the very thing we're trying to avoid.
  • pointer variables are names for a memory address. They, too, are all lvalues, and they can't be bound to temporaries:
    int* p = &5; // error!  can't apply unary & to an rvalue!
  • C++ introduced references, which are essentially aliases for a memory location. When introduced, they must be bound to a particular memory location, and they can't be released or rebound ... there's no syntax to allow it:
    int n = 5; // OK, n an lvalue, 5 an rvalue
    int& m = n; // OK, m is bound to the lvalue n;
    m = 6; // OK, can change value through a non-const reference -> n=6
    int& p = 6; // error!  can not bind a reference to an rvalue, 
                // because then you'd be able to change the value of a temporary!
    int const& q = 7; // OK!  can bind a const reference to an rvalue, 
                      // because you can't change the value ... it's const!
    int const& r = n; // OK!  can bind a const reference to an lvalue as well.

The only way to bind to an rvalue in C++98 is through a const& ... but since const& also binds to lvalues, there's no way to distinguish between lvalues and rvalues on the RHS of a copy or assignment. The compiler knows the difference, but there's no syntax to allow us to ask the compiler whether a value is an lvalue or rvalue!

rvalue references and move semantics

C++11 solves the rvalue identification problem by introducing a special reference type that binds only to rvalues: the rvalue reference:

int&& r = 6;  // OK
int g = 5;
int&& q = g; // error: cannot bind ‘int’ lvalue to ‘int&&’

There are lots of details that we're going to skip, because you're never actually going to create an rvalue reference variable like this! Essentially the only thing you will ever actually use them for is to identify temporaries, and to then move data from the temporary, rather than copy it. The original style of reference has been restyled as the lvalue reference to distinguish it, but remember: the const version binds to rvalues too!

move constructor/assignment

Consider the following two calls of use_big_object:

void use_big_object( std::vector<std::string> names );

std::vector<std::string> list_of_names = { "Adam", "Kevin", ..... };

use_big_object( list_of_names );
use_big_object( {"Leah", "Brendan", "Peter", .... } );

In the first call, we must copy the vector, because we've told the compiler we're going to (probably) modify the copy within the function call, but we don't want to modify the lvalue object inside the caller (i.e., list_of_names). (But what if we don't need the original anymore? See below.) In the second call, however, the argument is an rvalue, and there's no way to refer to it within the caller after the function use_big_object completes. We can steal all the resources, and avoid the copy: we can move the object into the call. So we can overload the function to use one method for lvalues, and a different one for rvalues:
void use_big_object( std::vector<std::string> const& names ); // binds to everything
void use_big_object( std::vector<std::string>&& names ); // binds to rvalues

The rules have the second overload binding with higher precedence than the first, so rvalue arguments will call the second function, leaving the first one to swallow up everything else.

So, the function binds the way we want it to, but how do we manage the objects themselves? To copy objects, we need a copy constructor and a copy assignment operator:

class something {
   something(something const& rhs); // copy constructor
   something& operator=(something const& rhs); // copy assignment

To these, C++11 now allows for the definition of a move constructor and a move assignment operator.
   something(something&& rhs); // move constructor
   something& operator=(something&& rhs); // move assignment 

Under most circumstances, if you don't define these move enabling functions, the compiler will generate them implicitly for you ... under much the same rules as it implicitly generates copy constructors and assignment. In that case, it will usually be identical to the copy constructor. 2 When the move members are not implicitly generated, in every place where there's a choice, the copy members will be called instead.

There are, of course, a few rules for the move members: they must leave the stolen-from resource in a valid destructible state. The only thing you are guaranteed to be able to do with a moved-from object is to assign to it, or destroy it. Anything else is undefined behavior. Thus, your move members should do something like the following:

class something {
  int* foo; 
   something(something&& rhs) : foo{rhs.foo} { rhs.foo = nullptr; }

No, you don't really need to move an integer (that's likely the same as copying), but the idea is to clear out the value in the RHS after you've moved it to this object. This member is also bad style, of course (raw pointers! ick!). Further, you should endeavor mightily to ensure that your move members are strongly exception safe. They really ought to be unconditionally noexcept if possible:
   something(something&& rhs) noexcept : foo{rhs.foo} { rhs.foo = nullptr; }

Don't let these simple examples fool you ... in the move members, you still have to manage the lifetime of objects you are managing (duh). When you write the copy assignment operator, for instance, you need to properly release the contents of the LHS object before you write over them with the RHS objects. You need to do the same thing for move assignment, as well:

class ick {
   std::vector<double>* vpd;
   ick() : vpd{new std::vector<double>} {}
   ~ick() noexcept { delete vpd; }
   // idiomatic
   ick(const ick& rhs) { /* deep copy vpd */ }
   // idiomatic: copy construct and swap ... may also need to define swap for ick, rather than using std::swap.
   // could be noexcept if swap is non-throwing
   ick& operator=(const ick& rhs) { ick temp(rhs); swap(*this, temp); return *this; }
   // wrong
   ick& operator=(ick&& rhs) { vpd = rhs.vpd; rhs.vpd=0; return *this; } // ick leaks the original pointee!
   // less wrong
   ick& operator=(ick&& rhs) noexcept { delete vpd; vpd = rhs.vpd; rhs.vpd=nullptr; return *this; }
   // even less wrong
   ick& operator=(ick&& rhs) noexcept { std::swap(vpd, rhs.vpd); return *this; }

You also need to worry about exception safety. Constructors are easier, because there isn't a preexisting object to disassemble.

All that said, a more realistic example might look like this:

class something {
  std::vector<double> foo; 
  std::string name;
   something(something&& rhs) noexcept : foo{std::move(rhs.foo)}, name{std::move(rhs.name)} {}

"Whoa! What's up with that std::move thing? What's that, and why do we need it?"

When is an rvalue reference an rvalue?

A named rvalue reference is actually an lvalue.

"Say what?"

No, really!

That has to be the case. Consider the following:

void g(something const& j);
void g(something&& j);
void f(something&& i){
   g(i); // which overload gets called?
}

If i were an rvalue, then the call to g would pick the second overload, and i would be toast. That can't be allowed to happen, or confusion would reign: "Human sacrifice, dogs and cats living together... mass hysteria! "

That means that the members of the object on the RHS of the move constructor we're trying to write are lvalues, and will be copied. That's exactly what we're trying to avoid! How do we fix this? The Committee did it for us: std::move, a cast that says "treat this object as an rvalue reference". It is a do-nothing function that says "I know what I'm doing ... I really want to steal from the RHS!"

void g(something const& j);
void g(something&& j);
void f(something&& i){
   g(i); // calls overload 1
   g( std::move(i) ); // I really know what I'm doing ... steal from i and call overload 2
}

You can use std::move anywhere you want to say "Move if you can, I give you permission" ... it doesn't actually do anything by itself. And it doesn't matter if there's no move constructor ... the copy constructor is called for rvalues anyway.

Returning values: moving vs copy elision

I started this fairly long story by saying "Look at all these copies! Isn't that horrible!" You'd probably think that the compiler writers and standards committee would have figured that out and done something about it a long time ago, right?

Well, they did. Many moons ago. It's called "copy elision": under certain circumstances, you don't have to do the copies, you just build the object in the destination:

std::vector<double> do_something(){
   std::vector<double> d;
   .... // do something that fills d
   return d;
}

std::vector<double> foo = do_something();

In this code, no copy constructors need to be called, because the compiler can figure out how to build d directly at the address of foo. This return value optimization (aka RVO) is explicitly permitted by the standard.

The next thing the standard says is that in return x, if x is a local variable it is first treated as if it were an rvalue (it is implicitly treated as if it were written return std::move(x); don't ever write that explicitly, because it defeats the optimizer!). This is the implicit move return.

So, for functions (generally only with one return) returning local variables by value, like do_something(), the Standard explicitly says:
  • Check if there's an accessible move or copy constructor: make sure there's at least one that's accessible so that we're even allowed to return the variable! Complain loudly if there isn't.
  • If you can figure out how to elide copies, apply the RVO.
  • If not, and there's an accessible move constructor, move (treat the local variable lvalue as an rvalue ... we're about to nuke it anyway).
  • If not, and there's an accessible copy constructor, copy.
  • Else, complain loudly.

Pass-by-value: copy, move, and elision

Consider the following function signature:
int take_pointer( std::unique_ptr<double> upd );

Is that legal? Pass-by-value arguments can bind to both lvalues and rvalues. In C++98, you could pass-by-value anything with a copy constructor, from either rvalues or lvalues. In C++11, the rules for pass-by-value have been updated to account for move construction. So, yes, the above is a valid signature: it will copy lvalues, and (when possible) move rvalues. Except when it can completely avoid doing the work, in which case it will utilize copy elision (sound familiar?), and construct (or use) the argument in place. The rules are similar to those for returns:
  • Check if there's an accessible move or copy constructor. Complain loudly if there isn't.
  • If the copy can be elided, do so.
  • If not, if there's an accessible move constructor, and if the argument is an rvalue, move construct.
  • If not and if there's an accessible copy constructor, copy construct.
  • Else, complain loudly.

In the pass-by-value context, copy elision is usually harder to arrange than in function returns. But, we usually want a copy of an argument in pass-by-value functions, so that we can manipulate the copy inside the function without mangling the original. The point here is to let the compiler do the work for you: it's usually smarter at figuring out how to do things efficiently than you are.

Of course, you might need to "help" the compiler along in identifying rvalues, using std::move. Here's a rather extensive example of when elisions, copies, and moves occur.

#include <iostream>
#include <cstdlib>

class both {
public:
  both() { std::cout << "both::DC\n"; }
  both(const both& b){ std::cout << "both::CC\n"; }
  both& operator=(const both& b){ std::cout << "both::CA\n"; return *this; }
  both(both&& b){ std::cout << "both::MC\n"; }
  both& operator=(both&& b){ std::cout << "both::MA\n"; return *this; }
  int j=std::rand();
};

both retb(){ std::cout << "retb:\n"; return both(); }
void eatb( both b ){ std::cout << "eatb: " << b.j << '\n'; }

class monly {
public:
  monly(){ std::cout << "monly::DC\n"; }
  monly(const monly& b) = delete;
  monly& operator=(const monly& b) = delete;
  monly(monly&& b){ std::cout << "monly::MC\n"; }
  monly& operator=(monly&& b){ std::cout << "monly::MA\n"; return *this; }
  int j=std::rand();
};

monly retm(){ std::cout << "retm:\n"; return monly(); }
void eatm( monly m ){ std::cout << "eatm: " << m.j << '\n'; }

int main(int, char**){

  std::cout << "both\n";
  std::cout << "explicit temporary\n";
  eatb( both() );
  std::cout << "implicit temporary\n";
  eatb( retb() );
  both b;
  std::cout << "Copy Constructor?\n";
  eatb( b );
  std::cout << "Move Constructor?\n";
  eatb( std::move(b) );
  std::cout << '\n';

  std::cout << "monly\n";
  std::cout << "explicit temporary\n";
  eatm( monly() );
  std::cout << "implicit temporary\n";
  eatm( retm() );
  monly m;
  std::cout << "Move Constructor?\n";
  eatm( std::move(m) );
  std::cout << '\n';

  return 0;
}
Here's the output with gcc 4.7.2 ... this is with optimization disabled, and you still see lots of copy elision occurring:
[krlynch@i-m-so-tired 0 c++11]$ ./rvalues-valueargs 
explicit temporary
eatb: 1804289383 // copy elided!
implicit temporary
eatb: 846930886 // copy elided!
Copy Constructor?
eatb: 1714636915
Move Constructor?
eatb: 1957747793

explicit temporary
eatm: 424238335  // copy elided!
implicit temporary
eatm: 719885386  // copy elided!
Move Constructor?
eatm: 596516649

Reference collapsing rules and forwarding references

In the bad ol' days of C++03, you couldn't take an arbitrary type, and tack on a reference qualification, because you couldn't declare an object that was of reference-to-reference type:

typedef int& IntRef;
int f = 6;
// ... lots of intervening code ...
IntRef& ir = f;  // error: cannot declare reference to ‘IntRef {aka int&}’

The problem here is that references in C++03 don't collapse. You would like the type of ir to simply be int&, but that doesn't happen, and there's no good reason for it ... it happens in templates, after all.

In C++11, this is no longer a problem: references do collapse, with well defined semantics:

using IntRef = int&;
int f = 6;
IntRef& ir = f;  // ir is of type int&

Throw in rvalue references, though, and we need to be careful. But, the reference collapsing rules are still pretty straightforward: if we try to mix two references, they collapse thusly:
& & -> & // lvalue + lvalue -> lvalue
& && -> & // lvalue + rvalue -> lvalue
&& & -> & // rvalue + lvalue -> lvalue
&& && -> && // rvalue + rvalue -> rvalue

This is really only one rule: if rvalue-to-rvalue reference, collapse to rvalue reference, else collapse to lvalue reference. So you can now declare typedef reference types with impunity, and not fear that the compiler will get upset with you.

Because of the (complicated) type deduction rules, lvalue-references to deduced types make sense:

template<typename T> T func1(T i);
template<typename T> T func2(T& i);
int test = 6;

func1(test); // ok, i is type int, can copy lvalue
func1(2); // ok, i is type int, can copy rvalue
func2(test); // ok, i is type int&, can bind to lvalue
func2(2); // error: i is type int&, can not bind to rvalue

There is, however, a new twist when we have && in a deduced context:

template<typename T> T func(T&& i);

Here, && is applied to the type variable T, which must be deduced from the type of the initializer at the point of instantiation of the formal argument (a deduced context). In general, you do not end up with i being of rvalue-reference type, but of lvalue-reference type. In this context, T&& (and similarly auto&&) has been dubbed a Universal Reference, because i is assigned exactly the right type depending on the initializer used to deduce the type. The C++ Community is converging on Forwarding Reference for this construct, as that is what it's really for. The formal specification is complicated, and relies on the reference collapsing rules above, but the application turns out to be easy:

  • if the initializer is an lvalue, then i becomes an lvalue-reference.
  • if the initializer is an rvalue, then i becomes an rvalue-reference.
func(test); // ok, test is lvalue, so i is lvalue-reference to type of test, here int&; can be bound to lvalue
func(2); // ok, 2 is rvalue, so i is rvalue-reference to type of 2, here int&&; can be bound to rvalue

We can now understand how std::move does its magic.

  template<typename _Tp>
    constexpr typename std::remove_reference<_Tp>::type&&
    move(_Tp&& __t) noexcept
    { return static_cast<typename std::remove_reference<_Tp>::type&&>(__t); }

Here, the template parameter _Tp is used in a deduced context ... it's deduced from the type of the argument to std::move at the point of use. If the initializer of the formal argument __t is an lvalue, _Tp is deduced to be an lvalue-reference type, so the static_cast casts from whatever type the initializer has to an rvalue-reference to that lvalue initializer. If the initializer is an rvalue, _Tp is deduced as an rvalue-reference type, so the static_cast is a nop. Either way, we end up with an rvalue-reference to the argument, which triggers a call to the move constructor if one exists, which is exactly what we wanted!

So, when do you use Forwarding References? Well, first of all, they are only available in the three places where template type deduction can happen: function templates with deduced parameter types, variables declared with auto, and variables declared with decltype. Use Forwarding References in those cases where preserving the lvalue or rvalue nature of the initializer is important. If you need it, you'll probably know it.

Of course, not every appearance of && in a template denotes a forwarding reference: the template argument must appear in exactly the right form, where the deduced type argument has the && appended:

template<typename T> void func(T&& i); // i is a forwarding reference
template<typename T> void func(std::vector<T>&& v); // v is an rvalue-reference to a std::vector<T>

Don't try to be clever and use a forwarding reference to declare special member functions (copy constructors and the like); it doesn't work the way you want it to. Also, don't try to overload regular functions with function templates taking forwarding references. These also don't work the way you want them to. There be dragons here!

Ref-qualifiers (rvalue references for *this)

What's wrong with this code?

class Matrix {
   // lots of reasonable matrix code
   Matrix& operator=(Matrix const& rhs){ .... }

Matrix operator+(Matrix const& A, Matrix const& B){ .... }

Matrix A, B, C; // and that these get initialized

C = A+B; // ok
(A+B) = C; // ok.  ok?!?!

Syntactically, this is perfectly fine ... both of the lines of code at the bottom will compile. But should they? Of course the first one must compile, or our Matrix class is useless. It would be nice if the last line failed to compile: A+B is an rvalue, and we really don't want it to be assigned to. Of course, the last line of code is converted by the compiler into the following call:

(A+B).operator=(C); // really operator+(A, B).operator=(C)

As written, there is nothing to prevent that second call ... and in general, you wouldn't want to prevent rvalues from calling operators, or any other member or non-member functions:
Matrix A,B,C,D;

if( (A+B) == (C+D) ) // probably operator==(operator+(A,B), operator+(C,D))

The returns of operator+ are rvalues, but we still want the operator== call to succeed. As these examples should make clear, sometimes it would be extremely powerful to be able to prevent rvalues (and maybe even lvalues!) from calling certain functions ... in this case, rvalues shouldn't be assignable. In C++98 there's no way to prevent this, so we are forced into extremely suboptimal workarounds, like declaring return values const. These hacks interfere with optimizers, in particular, with move optimizations. Big problem.

C++11 solves this with the addition of trailing ref-qualifiers, also known as "rvalue references to *this". The implicit this parameter is always an lvalue, so this is a bit of a misnomer, but it's what we're stuck with. We can qualify member functions so that they can only be applied in rvalue or lvalue contexts:

class Matrix {
   // lots of reasonable matrix code

   // can only be applied in lvalue contexts
   Matrix& operator=(Matrix const& rhs) & { .... }
   //    lvalue ref-qualifier ----------^ 

   // can only be applied in rvalue contexts
   void test_matrix() && { ...... }
   //    rvalue r-q --^^

Now, the offending code above will fail to compile

C = A+B; // ok
(A+B) = C; // error!  no operator= in rvalue context

The only real issue with ref-qualifiers is that compiler support in g++ didn't show up until 4.8.1 (very recently, as I write this).

Recommendations 3

  • For classes that manage large objects, make sure you write a correct move constructor and move assignment operator, in addition to copy constructor, copy assignment and destructor. This is the Rule of Five: if you write any one of them, you need to write all five. Make sure the destructor and move members (plus your swap, if any) are unconditionally noexcept so they can be used safely inside standard containers.
  • Use smart handle classes (std::unique_ptr instead of raw pointers, std::vector instead of arrays) to control resource lifetime, and make sure to std::move them in your move constructor. Aim to have the automatically generated special member function (copy/move constructors/assignment operators and destructor) bodies empty.
  • Really important: Return by pointer is not a performance optimization! Only return by pointer if you need a pointer, and even then, return by smart pointer to manage lifetime. Even for very large objects, write a move constructor and return local variables by value. Worst case, the move constructor will be called; best case, no copying will happen, and you won't pay the penalties (cache misses!) for indirecting through a pointer.
  • Really important: When passing parameters into a function, do what you've always done since C++98: for small objects, pass by value. For larger objects, pass by const reference. For in/out parameters, pass by non-const reference. The exception is when you need to handle rvalues and lvalues differently: then, write two overloads or use a templated forwarding reference.

1 Some things can be named, but you can't take their address: enum and enum class constants spring to mind:

enum class color { red, green, blue };

color* pc = &color::red; // error: lvalue required as unary ‘&’ operand

These things are rvalues.
2 Which is to say, probably wrong. If you allocate resources in your constructors, you almost certainly need to define a destructor, and you probably need to define move and copy. This is known as the Rule of Five.
3 The recommendations in this section have been a moving target as the C++ Community adjusts to the new features and compiler optimizations of C++11. The recommendations here are those championed by Herb Sutter in his CPPCON 2014 talk: Back to the Basics! Essentials of Modern C++ Style

Contextual keywords and trailing specifiers (Language, Minor with one Major exception)

Can you find the bug in the code below?

struct Base {
    virtual void some_func(float);
};

struct Derived : Base {
    virtual void some_func(int);
};

It's subtle. The intention is that Derived::some_func overrides the virtual Base::some_func. But it won't, because Derived::some_func has a different signature (int instead of float). If you do

Derived d;
Base& b = d;
b.some_func(2.5f);

This will unexpectedly call Base::some_func: Derived::some_func(int) doesn't override it, so Base's version remains the final overrider. This will be a hard bug to find!

C++2011 now allows you to specify your intention to override a virtual function in the base class.

struct Base {
    virtual void some_func(float);
};

struct Derived : Base {
    virtual void some_func(int) override; // Compiler error because it doesn't override a base class method
};

You should use override everywhere you intend to override a virtual function in the base class, so the compiler can help you fix this one class of mistakes; unfortunately, there's still not much to prevent you from making the error noted above, where you add a new member that isn't an override, without telling the compiler. But override is still useful in minimizing this class of errors.

override is just one of a set of (currently) two trailing "contextual keywords" in C++11. The other is final (which prevents overriding in derived classes). A third new operator can also appear in the trailing position, noexcept (a replacement for throw specifications):

class Base {
   virtual void some_func(float);
   virtual void other_func(float) final; // can not be overridden!
   void last_func(float) noexcept; // will not throw exceptions!

void Base::last_func(float f) noexcept { throw f; } // violation!  calls std::terminate without stack unwinding

class Derived : public Base{
   void some_func(int); // probably mistake ... new signature, not an override
   void some_func(int) override; // error caught!
   void other_func(float) override; // error!  can't override a final function!

class Base2 final {

class Derived2 : public Base2{  // error!  can't derive from a final class!

If you find yourself writing final, think again: you are almost certainly doing the wrong thing. 1 By far, the most important of these keywords is noexcept. 2 override and final, perhaps obviously, only apply in class contexts; noexcept can also be used on free functions.

These are called contextual keywords, because they only have their "keyword" behavior in this particular context, where identifiers of the same name would not be allowed:

class bar;
class foo {
   int somefunc() final; // OK ... final is a contextual keyword only
   int final(); // OK ... contextually allowed use of final, not a keyword
   int otherfunc() bar; // error ... identifier not permitted in this context
   int override; // OK ... override is not a keyword here, allowed as identifier
};
int noexcept; // error ... noexcept is a keyword, can't be used as identifier in any context

In addition to the contextual keywords are the "trailing specifiers" (my term ... they don't seem as a group to have a good name): =0 (pure virtual functions), =default (provide default definition), =delete (don't define, and remove from overload set):

class test {
  test() = default; // provide default implementation of default constructor
  test(test const&) = delete; // remove copy constructor!
  virtual void some_func() = 0; // abstract base class ... can't instantiate test,
                                // must provide implementation in derived classes.

  void f(double);
  void f(int) = delete; // can call f(5.0), but not f(5)
};

Recommendations: Use override everywhere you intend to override a base class member function. Using final is probably wrong. Experience with noexcept is evolving, however ... make sure constructors, assignments, destructors, and swaps are noexcept. Use default liberally. For non-copyable classes, use delete to remove copy constructor and copy assignment.

1 When a class is designed as part of an inheritance hierarchy, it is almost always a mistake to want to cut off that inheritance. That said, there are occasional valid reasons, for example
2 noexcept is a promise from the programmer - mainly to other programmers - that a function won't throw (a subset of) exceptions; it's part of a function's signature. The compiler can make use of that promise to optimize stack frame setup and teardown (although most compilers already use zero overhead exception handlers), removing exception handling code, and even propagating that optimization up the call stack if possible. If you lied, and an exception escapes a noexcept function, your program dies immediately via std::terminate. It's really important that move-enabled classes have non-throwing move constructor/assignment: it enables a large set of significant optimizations inside the standard library; potentially throwing moves lose those options. There's a lot more to noexcept than we've said here ... it's actually an operator that evaluates a compile-time boolean expression. An empty noexcept is equivalent to noexcept(true), while a missing noexcept is equivalent to noexcept(false). The operator can evaluate type traits in making noexcept calculations. See for detailed history, discussion, and examples.

Delegating constructors (Language, Minor)

In C++98 if you have multiple constructor signatures for a given class, validating the inputs (you validate all your constructor inputs, right?) requires making a choice between two unpalatable options:

  1. You can repeat all the validation code in each constructor definition. Of course, repeating code is evil, because you're more likely to mess up when you need to change the code. Never do this.
  2. You can write a initialization or validation method, and call it inside each constructor body:
    class test {
       test(int a) { validate(); }
       test(double b) { validate(); }
       void validate() const;
    };

    This fixes the problem, but is itself verbose and error prone. Why not have the compiler do the work for you?

Delegating constructors to the rescue!

class testmore {
  testmore() : testmore(1,2) { do_something_else(); }
  testmore(int a) : testmore(a, 2) {}
  testmore(int a, int b) {}
};

In this example, the first two constructors "delegate" all the work to the third constructor. As the default constructor shows, a delegating constructor's body doesn't have to be empty.

Recommendations: Prefer delegating constructors to initialization and validation functions.

Inheriting constructors (Language, Minor)

In C++98, all member functions are inherited in derived classes, except the constructors. In C++11, constructors can also be inherited:

class Base {
  Base(int i);
  Base(double d, int i);
};

class Derived : public Base {
  using Base::Base;
};

The line using Base::Base implicitly defines the constructors Derived::Derived(int) and Derived::Derived(double, int) as if you had written

class Derived : public Base {
  Derived(int i) : Base{i} {}
  Derived(double d, int i) : Base{d, i} {}
};

In-class member initializers (Language, Minor)

In C++98, only static const integral data members can be initialized in the class declaration:

int var = 7;

class X {
   static const int m1 = 7;    // ok
   const int m2 = 7;           // error: not static
   static int m3 = 7;          // error: not const
   static const int m4 = var;  // error: initializer not constant expression
   static const string m5 = "odd"; // error: not integral type
};

// examples from

There's not really a good reason for any of these restrictions. In C++11 you can initialize all of these things in the class declaration:

class inclass {
   inclass() = default;
   int foo = 42;
   double bar = 3.14159;
};

This is syntactic sugar for adding these to the initializer lists of the constructors:
//equivalent to
   inclass() : foo{42}, bar{3.14159} {}

A constructor that explicitly initializes a member "overrules" the in-class default; members the constructor doesn't mention keep their defaults:
inclass(double b) : bar{b} {} // foo == 42, bar == b

Scoped enum and strongly typed enumerations (enum class) (Language, Minor)

Enumerations, in principle, allow you to declare a bounded set of names which are given consecutive compiler generated integral values. This is very useful when you need a bunch of unique names. For example

enum color {red, green, blue};

They're stronger than typedefs, but weaker than a real type.

C++98 "inherited" its integral enumerations from C, which means they also inherited all of the major problems of the C enum:
  1. Unlike structs and classes, enums are not scoped. You can't say color::red, only red. These names are "injected" into the enclosing scope: they pollute the declaration namespace, unnecessarily increasing the risk of name collisions.
  2. enums are just thinly veiled integers: they implicitly convert to integers ...
  3. ... unfortunately, the underlying integer type that holds the enum can't be specified.
  4. An enum can't be forward declared.
C++11 fixes all of these problems, with the enum class, a strongly typed enum:
  1. The enum class is scoped, so the names don't pollute the namespace.
  2. There is no implicit conversion from an enum class to integers.
  3. The underlying type can be specified directly.
  4. An enum class can be forward declared.

C++11 also allows scoped access to enums, but doesn't deprecate the namespace pollution.

enum color { red=1, green=2, blue };
enum class strongcolor { scred=4, scgreen=5, scblue };
// optionally, force the underlying type with
// enum class strongcolor : int { scred=4, scgreen=5, scblue };

int c = red; // implicit conversion and namespace injection/pollution
int c2 = color::blue; // weak scoping ... better than nothing; c2 == 3
color c3 = red; // you can declare an object of enum type

strongcolor sc = strongcolor::scred; // OK
int sc2 = strongcolor::scred; // error, no implicit conversion to int
strongcolor sc3 = scred; // error, no namespace injection
int sc4 = static_cast<int>(strongcolor::scblue); // OK, explicit conversion, sc4 == 6

Recommendations: Prefer an enum class to an enum in new code. In legacy code, use scoped enum for documentation purposes.

Function return types: suffix declarations and automatic deduction (Language, Minor)

In C++98, when declaring a function, you write:

return_type function_name(argument_type dummy_variable);

as in
double f(double x);
template<typename T, typename U> void template_f(T t, U u);

But, what if the return type needs to be calculated or deduced from the argument types? The C++98 parsing rules don't provide a solution to that problem.

Enter new style function declarators:

auto function_name(argument_type dummy_variable) -> return_type

The above functions could also be declared as
auto f(double x) -> double;
template<typename T, typename U> auto template_f(T t, U u) -> void;

Trailing return specifications are needed for lambda expressions where return type deduction would fail to deduce the correct type.

This style comes into its own when used with template functions whose return type must be calculated.

template<typename T, typename U> auto addition(T t, U u) -> decltype(t+u);

decltype is a new operator (like sizeof, offsetof) that does type computations without evaluating its argument expression. Here, it yields the type of adding a T to a U.

The other main use case is with embedded types to simplify the boilerplate:

class List {
   class Link { ... };
   Link* Erase(Link*);
};

// C style
List::Link* List::Erase(Link* p) { ... }

// new style
auto List::Erase(Link* p) -> Link* { ... }

Here, we don't have to redundantly qualify the return type, because it will be looked up in the context of class List.

For C++14, the Committee has voted to accept automatic return type deduction for functions, eliminating the need for many trailing return type specifications.

// C++14
auto List::Erase(Link* p) { Link* pl; ....  return pl; }

The return type here is Link*; while convenient when writing code, be careful with this feature, as it can cause readability problems.

Recommendations: For now, prefer the old style declarators unless necessary; you're most likely to see this style in template heavy library code. But you need to use this notation when declaring lambdas (see above), and those are going to become ubiquitous, so you ought to become comfortable with the notation. When auto return type specifications become available in your compilers, feel free to use them, but only with short functions ... if you have to search for the return, using auto is too opaque.

Standard size integer types (Library, Minor)

An import from the C library, this extension provides a standard set of signed and unsigned integer types:

#include <cstdint>

std::int32_t i32t; // signed, 32 bit integer type
std::uint64_t u64t; // unsigned, 64 bit integer type
std::int_fast32_t f32t; // the fastest signed integer with at least 32 bits
std::int_least32_t l32t; // the smallest type with at least 32 bits

There are a large number of these. You are only guaranteed sizes, not endianness.

static_assert (Language, Minor)

The standard assert macro is a runtime boolean check on a condition; if the argument evaluates to false, the program aborts.

#include <cassert>

assert(sizeof(int)==2); // abort if int is not 2 bytes (assert is a macro, so no std:: qualifier)

The static_assert facility fills a similar niche at compile time:

static_assert(sizeof(int)==2, "int is not two bytes!");

If the assert fails, the second argument is printed, and the compilation aborts.

Recommendation: Liberally annotate your code with static_assert where you can make determinations at compile time; it dramatically improves error messages, and makes debugging much easier.

std::begin() and std::end() (Library, Minor)

While every container provides a begin() and end() member returning iterators, C-style arrays don't have member functions. C++11 adds a set of free functions that return the begin and end iterators for all containers, including C-style arrays:

template<typename C> auto std::begin(C& c) -> decltype(c.begin()); // for containers
template<typename A, std::size_t N> A* std::begin( A (&array)[N] ); // for arrays

// cribbed from

which are declared in the <iterator> header.

The Committee forgot to standardize std::cbegin() and std::cend() functions; that's very likely to be fixed in the C++14 patch release, and some standard libraries may start providing them well before then. If you need them, they aren't hard to write.

Recommendations: Prefer to use begin(Cont) and end(Cont) in generic code (and do so via ADL, not via explicit std qualification).

nullptr (Language, Minor)

Have you ever run into this?

void f(int);
void f(double*);

f(0); // which f?

What's supposed to happen? You probably want to call f(int), but in C++98, the call is ambiguous! Why? Well, 0 is both an integer literal, and the null pointer constant literal. There's no way for the compiler to tell what you want to have happen here.

C++11 adds a new keyword for the null pointer constant, nullptr. If you want a null pointer, use nullptr, and the ambiguity is cured:

void f(int);
void f(double*);

f(0); // calls f(int);
f(nullptr); // calls f(double*);

Recommendations: Prefer nullptr to 0 for null pointer constants.

Template aliases (Language, Minor)

Sometimes, you want to "partially apply" a template, where you know in advance some of the template arguments, but not others. For example, given

template<typename T, typename U> class someclass;

you might know that every time you use someclass, T = double. In C++98, you still have to write out someclass<double, whatever> each and every time you declare a new variable. There's simply no way to make life easier.

In C++11, the template alias comes along to ease the burden

template<typename T, typename U> class someclass;
template<typename U> using someclassU = someclass<double, U>;
someclassU<int> scu;

We can also fully apply the template parameters
using someclassInt = someclass<double, int>;
someclassInt sci;

But, this fully applied template alias is nothing more than a good ol' typedef! Unsurprisingly, the template alias syntax can be used as a more readable replacement for the ancient C-style typedef!
using Dtype = double; // equivalent to typedef double Dtype;
using func_type = void (*)(double); // equivalent to typedef void (*func_type)(double)

Recommendations: For consistency and readability, prefer the new alias/typedef syntax in all cases.

Generalized compile-time constant expressions: constexpr (Language, Minor)

In C++98 there are a small number of numerical computations that can be performed at compile time: these computations are restricted to binary operations on integer literals, const int, enum, and the like. And they're only done at compile time in contexts that require integer constant expressions (array bounds, template arguments, etc). The template in the following can only be instantiated with an integer constant expression:

template<int N> class Foo;
int const i = 5;
enum color { red, green, blue };
double const d = 5.4;
int h = 7;

Foo<i*blue+6> f;  // OK
Foo<d>; // nope, not an integer
Foo<h>; // nope, not an ICE

C++11 expands compile time computation to many more contexts, with the addition of constexpr. A few examples:

constexpr int multiply(int a, int b){
  return a*b;
}

When fed with compile time constant values, it can be compile time evaluated (although it's only guaranteed to be evaluated at compile time when used in a context requiring compile time evaluation)
int array[ multiply(4,3) ]; // guaranteed compile time evaluation
int const i = multiply(2,6); // may be evaluated at compile time, but then again, maybe not

We can force a compile time evaluation with constexpr
int constexpr i = multiply(4,3); // guaranteed compile time evaluation

constexpr is not just for integers.

constexpr double multiply(double a, double b){
  return a*b;
}

double constexpr d = multiply(2.0, 2.0); // evaluated at compile time!
int const k = static_cast<int>(d); // need an ICE in the line below, guaranteed evaluation
static_assert(k==4, "Not compile time constant!");

This static assert will not result in a compile time error or assertion, because the constexpr forces compile time evaluation.

constexpr can also be applied to user defined type (UDTs) that are "simple enough".

struct Point {
  int x,y;
  constexpr Point(int xx, int yy) : x{xx}, y{yy} { }
};

constexpr Point origin{0,0};
constexpr int z = origin.x;

constexpr Point a[] = { Point{0,0}, Point{1,1}, Point{2,2} };
constexpr int x = a[1].x;    // x becomes 1

// swiped from

In C++11, constexpr functions must consist of a single return statement of the form { return expression; }; hence, no loops, branches, local variables, etc. Additionally, constexpr member functions are implicitly const.

Many of these restrictions on constexpr functions have been lifted for C++14. In general, constexpr functions can contain loops, branches, local variables, etc. You are also allowed to mutate those local variables, and have multiple return statements. One really interesting wrinkle is that constexpr member functions are no longer implicitly const; this is a breaking change in just 3 years from C++11, as it can lead to visibly different behavior. See for a discussion. If your function should be both constexpr and const, you should mark it so; it is not an error to do so in C++11, so it's just a safer choice overall.

Boolean constexpr functions will likely form the basis for a template constraints language in C++14 ("Concepts Lite") and perhaps full concepts in C++17; see

Recommendations: Use constexpr where it might be useful for small value type classes. Use constexpr functions in preference to macros for compile time calculations.

Variadic templates and std::tuple (Language/library, Minor)

The venerable C output function printf has a simple declaration

int printf(char const* s, ...);

The ..., or ellipsis operator, takes any number of arguments of any type, and pushes them on the stack. They must be popped off the stack, and interpreted correctly by the runtime, via the va_arg mechanism. If interpreted incorrectly, all hell breaks loose (read: crashes, memory corruption, buffer overflows, security holes, etc.). Furthermore, printf is not extensible: it only works on built in types, not user defined (or library, for that matter) types. It would be wonderful if printf, including all of its formatting power, could be brought to bear in a typesafe manner.

Enter variadic templates. In C++98, one can only define templates (functions and classes) with fixed numbers of template arguments

template<typename T, typename U> class A{...}; // two template arguments
template<typename T> class A{...}; // one template argument
template<> class A{}; // usually known as the "base case" 
template<typename T, typename U> void f(T& t, U& u) {...}

When instantiated in class contexts, the arguments must be specified explicitly
A<double, int> a1;
A<std::vector<double>> a2;

while in function contexts, the arguments can (usually) be deduced by the compiler 1
double d; int i;
f(d, i); // template argument deduction finds T==double, U==int

Class templates can also be specialized for specific types
template<typename T> class A{...}; // will be used for T != int
template<> class A<int> {...}; // will be used for int

Templates can also take non-type arguments - usually integer types - but only if they are compile time constant:
template<int n> int fibonacci();

Template instantiations can be generated recursively, but not iteratively
template<int n> int fibonacci() { return fibonacci<n-1>() + fibonacci<n-2>(); }
template<> int fibonacci<1>() { return 1; } // specializations serve as the
template<> int fibonacci<0>() { return 0; } // base cases ending the recursion

The glaring weakness of C++98 is the lack of an equivalent to ... for templates: if the number of arguments is unknown, you have to generate a very large number of templates manually (or via the preprocessor, or generator code, or ...):

template<typename T, typename U, typename V> class foo;
template<typename T, typename U> class foo;
template<typename T> class foo;
template<> class foo;

C++11 introduces the variadic template for both class and function templates (see and
template<typename... Args> class foo;
template<typename... Args> void f(Args... args);

The ellipsis notation here designates variously the template parameter pack and the function argument parameter pack. Typical usage is to define a variadic template thusly
template<typename T> void f(T t){ g(t); } // base case to end the recursion
template<typename Head, typename... Tail> void f(Head head, Tail... tail){
   g(head); // do something with the first argument
   f(tail...); // recurse, unpacking the rest of the parameter pack
}

If this looks like recursion in Lisp to you (car and cdr and cons, anyone? shudder), that's not an accident: both the template mechanism in C++ and Lisp are functional languages, cut from the same cloth.

What good is this stuff? If you're an end user, you'll never write code with this syntax; it's almost strictly the domain of library writers. std::function and std::bind are written with it, as are a number of other Standard Library components. One of the most interesting and useful is std::tuple, which represents a type safe n-tuple of values

std::tuple<double, int, std::string> t{4.2, 2, std::string{"Hello!"}};

There are two ways to access tuples: positionally and via a tie. Positional access is via the get function:
auto d = std::get<0>(t);

std::tie allows us to "unpack" the values from a tuple into individual variables:
double d;
int i;
std::string s;
std::tie(d,std::ignore,s) = t;

where std::ignore does the obvious thing. Building a tuple is usually done indirectly, via function template argument deduction and std::make_tuple
auto t2 = std::make_tuple(4.2, 2, std::string{"Hello!"} );

Here, t2 should be equivalent to t; in fact, we can apply various boolean tests, with (more or less) obvious semantics: t==t2.

std::tuple is most useful for returning multiple values from a function call, immediately unpacking them into variables

std::tuple<double, int> add_em(double a, double b, int c, int d){
   return std::make_tuple(a+b, c+d);
}

double d=5.2;
int i=3;
std::tie(d,i) = add_em(d,d,i,i); // d==10.4, i==6

Recommendations: You're going to be a user of variadic templates via std::tuple, std::function, etc, but you probably won't be implementing any functions or classes. But you probably will find a use for std::tuple, so it's best that you understand how to use it.

1 Templated return types are non-deducible:

template<typename T, typename U> T f(U u);
int i;
double d = f(i); // error!  can't deduce T

The return type T=double can't be deduced, even though U=int can be. You need to specify the return type explicitly, while the trailing argument types can still be deduced:
int i;
auto d = f<double>(i);

Explicitly specified template arguments match parameters left to right, so non-deducible parameters (like the return type) must come first; only trailing parameters can be deduced. This won't work:
template<typename U, typename T> T fbad(U u);
template<typename T, typename U> T fgood(U u);
int i;
auto d = fbad(i); // error: could deduce U, but can't deduce T
auto d = fgood(i); // error: can't deduce return
auto d = fgood<double>(i); // T explicit, but U deduced

Concurrency and the Memory Model (Huge, Language/Library)

It's finally possible to write portable multithreaded code in C++. Yay! I, however, am far from a concurrency expert ... I know just enough to be dangerous. That said, there are a number of really well thought out features in both the language and the library: there is an interface to hardware level atomic types (the std::atomic<> template), low level threads (std::thread), and various timers, locks, mutexes and condition variables. There's a lot of stuff still missing (thread cancellation, reader/writer locks, etc.), but the low level tools are all there, and much of the missing stuff will probably make an appearance in the C++14 bug fix standard. In addition, there are a number of task rather than thread based primitives - std::async, std::promise, and std::future - that allow you to program at the task level with the library runtime handling the gory threading details underneath the covers.

Currently, the best written book reference for C++ concurrency is Anthony Williams's C++ Concurrency in Action. It's written by one of the drivers of the standardization of threads, and the maintainer of a number of C++ threading library implementations. The boost thread library implementation also has significant documentation available:

There is a fabulous video introduction to the gory details of concurrency in C++ from Herb Sutter, the convener of WG21. It's in two parts, fills about three hours, is not targeted at the beginning programmer, and is certainly not for the faint of heart: But it is really interesting and informative.

Regular expression support

A regular expression, or regex, is a text string that specifies a pattern to search for in yet another text string. For example, "any number of letters, digits, underscores, and dots - followed by the '@' symbol - followed by any number of letters, digits, underscores, and dots" is roughly the regular expression that describes a valid internet email address. Formally specified regular expression languages give a compact, detailed, and powerful way to specify patterns to match, extraction of subpatterns, etc.

While I'm not a regex expert, here's a short snippet that I used in a recent project to parse the log file of a G4beamline run, to find the event processing time embedded in the output.

    std::string line;
    std::regex const reg{ "Run complete\\s+(\\d+)\\s+Events\\s+(\\d+)\\s+seconds" };
    while( std::getline(ifs, line) ){
      std::smatch result;
      if( std::regex_search(line,result,reg) ){
        //      std::cout << "Match!\t" << result.size() << '\n';
        //      for( auto const& r : result )
        //        std::cout << '\t' << r << '\n';
        vevent.emplace_back( boost::lexical_cast<int>(result[1]) );
        runtime.emplace_back( boost::lexical_cast<double>(result[2]) );
      }
    }

The regex pattern is specified by the string argument to the "std::regex". It consists of both literal text to match (such as "Run complete") and "character classes" ("\s" matches a whitespace character, "\d" matches a digit, "+" specifies "one or more"), as well as specifying which pieces of the pattern to submatch (the parts inside the parens). We'll talk about the overabundance of backslash characters in a minute...

C++11 introduced a library based regular expression engine based on the boost::regex library; the standard library supports a large number of regular expression dialects, but defaults to the ECMA/JavaScript notation. The library support consists of a small set of classes, the most relevant being:
  • std::regex: holds the regular expression pattern that is used by the matching engine. Much like std::string is a specialization of std::basic_string, std::regex is actually a specialization of std::basic_regex.
  • std::match_results: holds the submatches found by the matching engine. std::smatch is a specialization for matches against std::string.
  • std::regex_match: This algorithm determines whether a given std::regex matches the entire target, which can be a std::string or a sequence delimited by a pair of iterators.
  • std::regex_search: This algorithm is similar to std::regex_match, but searches whether some subsequence of the target matches the std::regex; only the first match is returned.
  • std::regex_iterator: This iterator permits forward iteration through a target for multiple matches of a std::regex. Think of this as the multiple match version of std::regex_search, but returning those results through an iterator interface.
  • std::regex_replace: This mutating algorithm performs search and replace, using std::regex as the search criteria.

Backslashes hold a special place in the regular expression syntax: they're the "escape character". When you want to say "match any whitespace character", you say "[[:space:]]", or more compactly "\s". The backslash tells the engine to treat the next character as "special", and not as a literal to match. This causes a bit of a problem because the backslash is also the escape character for character literals in C++. In C++, then, if you want the regular expression engine to "see" the backslash as a special character, you have to escape the backslash itself! So, to match any number of digits, use the regex "\d*", but use the C++ string literal "\\d*". This can get unwieldy very quickly. If you want to specify a match against a literal backslash (say in a Windows path name like "C:\Windows"), you need to escape the backslash once to get the literal "\" into the regex, and then you need to escape both of the backslashes to get them past the C++ compiler:

std::regex windows{ "C:\\\\Windows" };

I think .... this gets really confusing in a hurry, and is part of the reason that regular expressions are often accused of having a "write only" syntax.

To mitigate this specific problem with regular expression specification, C++11 introduced the "Raw string literal", where the backslash has no special meaning.

std::string windows{ "C:\\\\Windows" };
std::string windows2{ R"(C:\\Windows)" }; // only escape the backslash once, for the regex
windows == windows2; // true

The raw string literal is introduced by R and is delimited (by default) by "( and )". Alternative delimiters can be used, in the form R"delim( ..... )delim". I could have specified my regex up above with a raw string literal as
std::regex const reg{ R"(Run complete\s+(\d+)\s+Events\s+(\d+)\s+seconds)" };

At the time I write this, there is one wrinkle to warn about: gcc has had compile time support for the regex library for a long time now, but no runtime support. That is, the headers are implemented, so compilation succeeds, but there is no runtime (shared object) library. Programs using std::regex and its ilk compile, but will die at runtime (segfaulting or calling std::terminate with an uncaught exception, depending on the compiler version). Use boost::regex instead, and be sure to link against the regex library shared object.

explicit conversion operators (Language, minor)

Single argument constructors can be used to implicitly convert between types:

struct A {
  A(int){}
};

A a = 1;

This can get you into a lot of trouble, as the C++ compiler tries really hard to build an object from whatever arguments you hand it. It does that by applying standard conversions, and up to one user defined conversion in the form of converting constructors or conversion operators:
struct A {
  A(int){}
};

struct B {
  B(A){}
  operator double(){ return 2.6; }
};

int i = B{3.5};  // i == 2

The compiler takes the double literal 3.5, converts it to an int and builds an A object that is fed to the B constructor which builds a B object that is implicitly converted to a double object that is converted to an int. Ick.

C++98 brought the explicit keyword that eliminates use of that constructor in an implicit conversion chain; it can still be used to explicitly construct an object, but never silently as here. C++11 brings explicit to conversion operators. The chain above fails with the inclusion of any of the following uses of explicit; you should always include them all.

struct A {
  explicit A(int){}
};

struct B {
  explicit B(A){}
  explicit operator double(){ return 2.6; }
};

A a = 1;  // error: conversion from ‘int’ to non-scalar type ‘A’ requested
A aa{1};  // ok!  explicit constructor call

B b = aa; // error: conversion from ‘A’ to non-scalar type ‘B’ requested
B bb = static_cast<B>(aa);  // ok!  explicit conversion request

Recommendations: Always mark single argument constructors, and all conversion operators, explicit. While explicit conversion operators are significantly safer than the non-explicit version, the consensus in the C++ community is that writing well named member functions (for example, to_double, to_string, etc) to do type conversion almost always results in clearer and easier to understand code.

inline namespaces (Minor, language)

Versioning of libraries is a difficult issue, which has little language support in most languages. C++11 struck a small blow for versioning with inline namespaces. When a namespace is marked inline, all the names within are transparently "hoisted" into the enclosing namespace. It's as if all the names declared in the inline namespace are also declared in the outer namespace:

namespace outer {
  namespace n2013 {
    int foo(int) { /* original implementation */ }
  }
  inline namespace n2014 {
    int foo(int) { /* current implementation */ }
  }
  namespace n2015 {
    int foo(int) { /* prototype implementation */ }
  }
}

outer::foo(1); // calls outer::n2014::foo

When the new release is ready, move the inline from n2014 to n2015, and all instances of outer::foo will now call outer::n2015::foo.

In the presence of a using directive, all of the following call the same function:

using namespace outer;

foo(1);               // outer::n2014::foo
outer::foo(1);        // ditto
outer::n2014::foo(1); // ditto

Since an inline namespace is still a namespace, it can participate in using directives and declarations like any other. The C++14 Standard Library uses inline namespaces liberally for the definition of user defined literal operators:

namespace std {
  inline namespace literals {
    inline namespace string_literals {
      // operator""s and friends
    }
    inline namespace complex_literals {
      // operator""i and friends
    }
  }
}

using namespace std; // makes all unqualified UDL operator calls available
using namespace std::literals;  // ditto, but not the rest of std names
using namespace std::literals::complex_literals; // only makes complex literals available, but not other literals, nor the rest of std
using namespace std::complex_literals; // ditto


There's a lot more cool stuff we haven't discussed here: user defined literals, unicode literals, time utilities, random numbers, and the rest of the new standard library. We'll add more over time as we gain familiarity with them.

The Future: C++14, TSs up the wazoo, C++17, etc.

As mentioned at the top, C++ is still under active evolution and development. The near-complete C++14 working paper was passed by the Committee in March 2013; while a "minor" update, it contains a significant number of fairly important "tweaks" to the language and library:
  • Return type deduction for normal (non-lambda) functions
  • decltype(auto)
  • Generalized capture (including move capture) and polymorphic argument deduction for lambdas
  • Variable templates
  • User-defined literals for certain library types
  • Relaxed rules for constexpr functions
  • std::make_unique, paralleling std::make_shared (which will nearly eliminate all need for new/delete pairs in any user code!)
  • Binary literals
  • Shared (reader/writer) locks for concurrent code

Many of these are already available in some form as compiler extensions (for gcc, see the C++ standards support page at gcc.gnu.org), and no doubt more of them will be coming quickly to a compiler near you. As of January 2014, in fact, clang is already C++14 (language) feature complete, and gcc is working hard to get there. The compilers could beat the ratification of the standard itself!

The committee also plans to publish a number of "Technical Specifications" standardizing a number of major features before C++17:
  • Concepts Lite
  • Filesystem support
  • Networking (TCP/IP) support
  • Library Fundamentals (std::optional<T>, etc)
  • Array Extensions (runtime and library based)
  • Concurrency Extensions
  • Parallelism Extensions

2017 will likely see the completion of a major revision to the standard, incorporating the results of all the TS work in addition to things not yet dreamed of...