
Monday 13 March 2017

Namespace in C++: explanation with examples



C++ Standard Library:

In the C++ programming language, the C++ Standard Library is a collection of classes and functions which are written in the core language and are part of the C++ ISO Standard itself. The C++ Standard Library provides several generic containers, functions to utilize and manipulate these containers, function objects, generic strings and streams (including interactive and file I/O), support for some language features, and functions for everyday tasks such as finding the square root of a number. The C++ Standard Library also incorporates 18 headers of the ISO C90 C standard library ending with ".h", but their use is deprecated. No other headers in the C++ Standard Library end in ".h". Features of the C++ Standard Library are declared within the std namespace.

The C++ Standard Library is based upon conventions introduced by the Standard Template Library (STL), and has been influenced by research in generic programming and developers of the STL such as Alexander Stepanov and Meng Lee. Although the C++ Standard Library and the STL share many features, neither is a strict superset of the other.
A noteworthy feature of the C++ Standard Library is that it not only specifies the syntax and semantics of generic algorithms, but also places requirements on their performance. These performance requirements often correspond to a well-known algorithm, which is expected but not required to be used. In most cases this means linear time O(n) or linearithmic time O(n log n), but in some cases higher bounds are allowed, such as quasilinear time O(n log² n) for stable sort (to allow in-place merge sort). Previously sorting was only required to take O(n log n) on average, allowing the use of quicksort, which is fast in practice but has poor worst-case performance; introsort was introduced to allow both fast average performance and optimal worst-case complexity, and as of C++11 sorting is guaranteed to be at worst linearithmic. In other cases requirements remain laxer, such as selection, which is only required to be linear on average (as in quickselect), not requiring worst-case linear as in introselect.
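
As a minimal sketch of both points above (everything lives in the std namespace, and different algorithms carry different complexity guarantees), not taken from the original text:

#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    // Standard library names live in the std namespace, so they are
    // qualified with std:: (or imported with a using-declaration).
    std::vector<int> v = { 9, 2, 7, 2, 5, 1 };

    std::vector<int> a = v;
    std::sort(a.begin(), a.end());        // worst-case O(n log n) since C++11

    std::vector<int> b = v;
    std::stable_sort(b.begin(), b.end()); // O(n log^2 n), or O(n log n) with extra memory

    std::vector<int> c = v;
    std::nth_element(c.begin(), c.begin() + c.size() / 2, c.end()); // linear on average

    std::cout << "median candidate: " << c[c.size() / 2] << '\n';
    for (int n : a)
        std::cout << n << ' ';
    std::cout << '\n';
}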
The C++ Standard Library underwent ISO standardization as part of the C++ ISO Standardization effort, and is undergoing further work regarding standardization of expanded functionality.

std::list:

 

std::list is a container that supports constant time insertion and removal of elements from anywhere in the container. Fast random access is not supported. It is usually implemented as a doubly-linked list. Compared to std::forward_list this container provides bidirectional iteration capability while being less space efficient.
Adding, removing and moving elements within the list, or across several lists, does not invalidate the iterators or references. An iterator is invalidated only when the corresponding element is deleted.

std::list meets the requirements of Container, AllocatorAwareContainer, SequenceContainer and ReversibleContainer.
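
A minimal sketch of the iterator-stability property described above (an iterator keeps pointing at its element even while other elements are inserted or erased):

#include <iostream>
#include <iterator>
#include <list>

int main()
{
    std::list<int> l = { 1, 2, 3 };

    // Keep an iterator to the element holding 2.
    auto it = std::next(l.begin());

    // Inserting and erasing other elements does not invalidate 'it'.
    l.push_front(0);
    l.push_back(4);
    l.erase(l.begin());       // removes the 0 that was just added

    std::cout << *it << '\n'; // still prints 2
}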

Template parameters


T - The type of the elements. T must meet the requirements of CopyAssignable and CopyConstructible. The requirements that are imposed on the elements depend on the actual operations performed on the container. Generally, it is required that the element type meets the requirements of Erasable, but many member functions impose stricter requirements. This container (but not its members) can be instantiated with an incomplete element type if the allocator satisfies the allocator completeness requirements.

Allocator - An allocator that is used to acquire/release memory and to construct/destroy the elements in that memory. The type must meet the requirements of Allocator. The behavior is undefined if Allocator::value_type is not the same as T.
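
A brief sketch of how the two template parameters appear in practice (Allocator defaults to std::allocator<T>, so the second argument is normally omitted):

#include <iostream>
#include <list>
#include <memory>
#include <string>

int main()
{
    // These two declarations name exactly the same type.
    std::list<std::string> a;
    std::list<std::string, std::allocator<std::string>> b;

    a.push_back("hello");
    b = a;  // same type, so assignment works

    std::cout << b.front() << '\n'; // hello
}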

Member types

Member type            Definition
value_type             T
allocator_type         Allocator
size_type              Unsigned integer type (usually std::size_t)
difference_type        Signed integer type (usually std::ptrdiff_t)
reference              value_type& (Allocator::reference until C++11)
const_reference        const value_type& (Allocator::const_reference until C++11)
pointer                std::allocator_traits<Allocator>::pointer (Allocator::pointer until C++11)
const_pointer          std::allocator_traits<Allocator>::const_pointer (Allocator::const_pointer until C++11)
iterator               Bidirectional iterator to value_type
const_iterator         Constant bidirectional iterator to value_type
reverse_iterator       std::reverse_iterator<iterator>
const_reverse_iterator std::reverse_iterator<const_iterator>
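
For illustration, a short sketch (not from the original text) that spells out a few of these member types explicitly:

#include <iostream>
#include <list>

int main()
{
    std::list<double> prices = { 1.5, 2.25, 3.0 };

    // size_type and value_type come straight from the table above.
    std::list<double>::size_type count = prices.size();
    std::list<double>::value_type sum = 0.0;

    // const_iterator provides read-only bidirectional traversal.
    for (std::list<double>::const_iterator it = prices.cbegin(); it != prices.cend(); ++it)
        sum += *it;

    std::cout << count << " items, total " << sum << '\n';
}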

Member functions

(constructor) - constructs the list (public member function)
(destructor) - destructs the list (public member function)
operator= - assigns values to the container (public member function)
assign - assigns values to the container (public member function)
get_allocator - returns the associated allocator (public member function)

Element access
front - access the first element (public member function)
back - access the last element (public member function)

Iterators
begin, cbegin - returns an iterator to the beginning (public member function)
end, cend - returns an iterator to the end (public member function)
rbegin, crbegin - returns a reverse iterator to the beginning (public member function)
rend, crend - returns a reverse iterator to the end (public member function)

Capacity
empty - checks whether the container is empty (public member function)
size - returns the number of elements (public member function)
max_size - returns the maximum possible number of elements (public member function)

Modifiers
clear - clears the contents (public member function)
insert - inserts elements (public member function)
emplace - constructs element in-place (public member function)
erase - erases elements (public member function)
push_back - adds an element to the end (public member function)
emplace_back - constructs an element in-place at the end (public member function)
pop_back - removes the last element (public member function)
push_front - inserts an element to the beginning (public member function)
emplace_front - constructs an element in-place at the beginning (public member function)
pop_front - removes the first element (public member function)
resize - changes the number of elements stored (public member function)
swap - swaps the contents (public member function)

Operations (a short sketch combining several of these follows the example below)
merge - merges two sorted lists (public member function)
splice - moves elements from another list (public member function)
remove, remove_if - removes elements satisfying specific criteria (public member function)
reverse - reverses the order of the elements (public member function)
unique - removes consecutive duplicate elements (public member function)
sort - sorts the elements (public member function)

Non-member functions
operator==, operator!=, operator<, operator<=, operator>, operator>= - lexicographically compares the values in the list (function template)
std::swap(std::list) - specializes the std::swap algorithm (function template)
Example
#include <algorithm>
#include <iostream>
#include <list>

int main()
{
    // Create a list containing integers

    std::list<int> l = { 7, 5, 16, 8 };

    // Add an integer to the front of the list

    l.push_front(25);
    // Add an integer to the back of the list

    l.push_back(13);

    // Insert an integer before 16 by searching

    auto it = std::find(l.begin(), l.end(), 16);
    if (it != l.end()) {
        l.insert(it, 42);
    }

    // Iterate and print values of the list

    for (int n : l) {
        std::cout << n << '\n';
    }
}
Output:
25
7
5
42
16
8
13
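
The list-specific operations listed earlier (sort, unique, merge, remove_if) combine in the same spirit; a minimal sketch, not from the original text:

#include <iostream>
#include <list>

int main()
{
    std::list<int> a = { 5, 1, 3, 3, 2 };
    std::list<int> b = { 4, 6, 0 };

    a.sort();       // 1 2 3 3 5
    a.unique();     // 1 2 3 5 (consecutive duplicates removed)

    b.sort();       // 0 4 6
    a.merge(b);     // 0 1 2 3 4 5 6, b is now empty

    // Keep only the odd values.
    a.remove_if([](int n) { return n % 2 == 0; });

    for (int n : a)
        std::cout << n << ' ';
    std::cout << '\n'; // prints: 1 3 5
}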

Saturday 4 March 2017

Quantum Computing Explanation



 

Quantum computing 

Quantum computing studies theoretical computation systems (quantum computers) that make direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from binary digital electronic computers based on transistors. Whereas common digital computing requires that the data be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses quantum bits, which can be in superpositions of states. A quantum Turing machine is a theoretical model of such a computer, and is also known as the universal quantum computer. The field of quantum computing was initiated by the work of Paul Benioff and Yuri Manin in 1980, Richard Feynman in 1982, and David Deutsch in 1985. A quantum computer with spins as quantum bits was also formulated for use as a quantum space–time in 1968.

As of 2017, the development of actual quantum computers is still in its infancy, but experiments have been carried out in which quantum computational operations were executed on a very small number of quantum bits. Both practical and theoretical research continues, and many national governments and military agencies are funding quantum computing research in an effort to develop quantum computers for civilian, business, trade, environmental and national security purposes, such as cryptanalysis.

Large-scale quantum computers would theoretically be able to solve certain problems much quicker than any classical computers that use even the best currently known algorithms, like integer factorization using Shor's algorithm or the simulation of quantum many-body systems. There exist quantum algorithms, such as Simon's algorithm, that run faster than any possible probabilistic classical algorithm. A classical computer could in principle (with exponential resources) simulate a quantum algorithm, as quantum computation does not violate the Church–Turing thesis. On the other hand, quantum computers may be able to efficiently solve problems which are not practically feasible on classical computers.

Basis:

A classical computer has a memory made up of bits, where each bit is represented by either a one or a zero. A quantum computer maintains a sequence of qubits. A single qubit can represent a one, a zero, or any quantum superposition of those two qubit states; a pair of qubits can be in any quantum superposition of 4 states, and three qubits in any superposition of 8 states. In general, a quantum computer with n qubits can be in an arbitrary superposition of up to 2^n different states simultaneously (this compares to a normal computer that can only be in one of these 2^n states at any one time). A quantum computer operates by setting the qubits in a controlled initial state that represents the problem at hand and by manipulating those qubits with a fixed sequence of quantum logic gates. The sequence of gates to be applied is called a quantum algorithm. The calculation ends with a measurement, collapsing the system of qubits into one of the 2^n pure states, where each qubit is zero or one, decomposing into a classical state. The outcome can therefore be at most n classical bits of information. Quantum algorithms are often probabilistic, in that they provide the correct solution only with a certain known probability. Note that the term non-deterministic computing must not be used in that case to mean probabilistic (computing), because the term non-deterministic has a different meaning in computer science.
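
As a worked illustration of the counting above (added here for clarity, using standard Dirac notation rather than wording from the text): a single qubit is a normalized superposition of the two basis states, and n qubits are a normalized superposition over all 2^n basis strings.
[math] |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1 [/math]
[math] |\Psi\rangle = \sum_{x \in \{0,1\}^n} c_x |x\rangle, \qquad \sum_{x} |c_x|^2 = 1 [/math]
Measuring all n qubits yields a single string x with probability |c_x|^2, which is why the computation can return at most n classical bits.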

An example of an implementation of qubits of a quantum computer could start with the use of particles with two spin states: "down" and "up" (typically written |0⟩ and |1⟩). But in fact any system possessing an observable quantity A, which is conserved under time evolution such that A has at least two discrete and sufficiently spaced consecutive eigenvalues, is a suitable candidate for implementing a qubit. This is true because any such system can be mapped onto an effective spin-1/2 system.

Principles of operation:

A quantum computer with a given number of qubits is fundamentally different from a classical computer composed of the same number of classical bits. For example, representing the state of an n-qubit system on a classical computer requires the storage of 2^n complex coefficients, while to characterize the state of a classical n-bit system it is sufficient to provide the values of the n bits, that is, only n numbers. Although this fact may seem to indicate that qubits can hold exponentially more information than their classical counterparts, care must be taken not to overlook the fact that the qubits are only in a probabilistic superposition of all of their states. This means that when the final state of the qubits is measured, they will only be found in one of the possible configurations they were in before the measurement. It is in general incorrect to think of a system of qubits as being in one particular state before the measurement, since the fact that they were in a superposition of states before the measurement was made directly affects the possible outcomes of the computation.

To better understand this point, consider a classical computer that operates on a three-bit register. If the exact state of the register at a given time is not known, it can be described as a probability distribution over the 2^3 = 8 different three-bit strings 000, 001, 010, 011, 100, 101, 110, and 111. If there is no uncertainty over its state, then it is in exactly one of these states with probability 1. However, if it is a probabilistic computer, then there is a possibility of it being in any one of a number of different states.
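
To make the contrast concrete (an illustrative sketch in the same spirit, not text from the source): a probabilistic classical three-bit register is fully described by 8 nonnegative real probabilities, while a three-qubit register is described by 8 complex amplitudes.
[math] p_{000} + p_{001} + \dots + p_{111} = 1, \qquad p_x \ge 0 [/math]
[math] |\Psi\rangle = \sum_{x \in \{0,1\}^3} a_x |x\rangle, \qquad \sum_{x} |a_x|^2 = 1 [/math]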

Quantum Coherence:

There are two formulations of mechanics: classical mechanics and quantum mechanics. Classical mechanics was the formulation of the laws of nature until 1900. After 1900, quantum mechanics was discovered/invented as a way to explain numerous anomalies that couldn't be explained by classical mechanics. It was quickly shown that many aspects of classical mechanics could be derived from quantum mechanics.

Having quantum mechanics supersede classical mechanics is necessary to avoid the contradictions between the two frameworks. 

The primary challenge was the transition from the quantum realm, where there are superpositions of states (i.e. a particle can be "here and there", rather than only "here or there" as classical mechanics dictates), to the classical realm. There were ad hoc rules that were created to make the transition from one realm to the other, and the loose concept of "measurement" was introduced to dictate when a quantum system becomes classical. This led to many conceptual problems such as Schrödinger's cat.

It took a surprisingly long time to construct a non-ad hoc process to describe the change -- if you think about it, you could expect that the quantum realm and classical realm might not be a dichotomy, but instead something that can be interpolated between (i.e. a system might have lost part of its quantumness, but not all of it).  


Quantum decoherence is the physical process that describes this transition from the quantum realm to the classical realm. Somewhat shockingly, quantum decoherence follows directly from taking quantum mechanics itself seriously. No real extension of quantum mechanics was necessary.

The details of quantum decoherence are as follows and are somewhat technical.
In quantum mechanics, a system is described by a state and if you have multiple independent components, you combine the states by concatenating the multiple states together.   

To see quantum mechanical behavior, a system must be separated into two different states and then be brought back together. I'll describe this as a process: start in a state, split that state into two components (a coherent superposition), and bring the states back together:
[math] |\Psi_1\rangle \rightarrow_{\text{split}} (|\Psi_1\rangle + |\Psi_2\rangle)/\sqrt {2} \rightarrow_{\text{combine}} |\Psi_1\rangle[/math]

                                       
The key realization is that you can't ignore the rest of the Universe. If the two parts of the quantum wave function interact with the rest of the Universe while they are apart, they change the Universe. If this happens, then when you try to bring the states back together, the Universe has changed (and you can't simply undo that change; it's like unscrambling an egg).

Mathematically:
[math] |\Psi_1\rangle|U\rangle \rightarrow_{\text{split}} (|\Psi_1\rangle|U\rangle + |\Psi_2\rangle|U'\rangle)/\sqrt{2} \rightarrow_{\text{combine}} (|\Psi_1\rangle|U\rangle + |\Psi_1\rangle|U'\rangle)/\sqrt{2} [/math]

Here the second part of each "state" is the rest of the Universe. Because the split leaves the Universe in two different states, U and U', the two halves of the wave function can no longer interfere when recombined; therefore you can't observe quantum mechanical interference, the critical aspect that differentiates the quantum realm from the classical realm. The degree to which the Universe has changed between the two halves of the wave function is the degree to which the quantumness has been lost.