Data structures
Contents

Introduction: Data structure; Linked data structure; Succinct data structure; Implicit data structure; Compressed data structure; Search data structure; Persistent data structure; Concurrent data structure

Abstract data types: Abstract data type; List; Stack; Queue; Deque; Priority queue; Map; Bidirectional map; Multimap; Set; Tree

Arrays: Array data structure; Row-major order; Dope vector; Iliffe vector; Dynamic array; Hashed array tree; Gap buffer; Circular buffer; Sparse array; Bit array; Bitboard; Parallel array; Lookup table

Lists: Linked list; XOR linked list; Unrolled linked list; VList; Skip list; Self-organizing list

Binary trees: Binary tree; Binary search tree; Self-balancing binary search tree; Tree rotation; Weight-balanced tree; Threaded binary tree; AVL tree; Red-black tree; AA tree; Scapegoat tree; Splay tree; T-tree; Rope; Top Trees; Tango Trees; van Emde Boas tree; Cartesian tree; Treap

B-trees: B-tree; B+ tree; Dancing tree; 2-3 tree

Space-partitioning trees: Space partitioning; Binary space partitioning; Segment tree; Interval tree; Range tree; Bin; k-d tree; Implicit k-d tree; min/max kd-tree; Adaptive k-d tree; Quadtree; Octree; Linear octrees; Z-order; UB-tree; R-tree; R+ tree; R* tree; Hilbert R-tree; X-tree; Metric tree; vP-tree; BK-tree

Hashes: Hash table; Hash function; Open addressing; Lazy deletion; Linear probing; Quadratic probing; Double hashing; Cuckoo hashing; Coalesced hashing; Perfect hash function; Universal hashing; Linear hashing; Extendible hashing; 2-choice hashing; Pearson hashing; Fowler-Noll-Vo hash function; Bitstate hashing; Bloom filter; Locality preserving hashing; Morton number; Zobrist hashing; Rolling hash; Hash list; Hash tree; Prefix hash tree; Hash trie; Hash array mapped trie; Distributed hash table; Consistent hashing; Stable hashing; Koorde

Graphs: Graph; Adjacency list; Adjacency matrix; And-inverter graph; Binary decision diagram; Binary moment diagram; Zero-suppressed decision diagram; Propositional directed acyclic graph; Graph-structured stack; Scene graph

Appendix: Big O notation; Amortized analysis; Locality of reference; Standard Template Library
Different kinds of data structures are suited to different kinds of applications, and some are highly specialized to
specific tasks. For example, B-trees are particularly well-suited for the implementation of databases, while compiler implementations usually use hash tables to look up identifiers. Data structures are used in almost every program or software system. Data structures provide a means to manage huge amounts of data efficiently, such as large databases and internet indexing services. Usually, efficient data structures are a key to designing efficient algorithms. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design. Examples include the following.

-An array stores a number of elements of the same type in a specific order. Elements are accessed using an integer index to specify which element is required (although the elements may be of almost any type). Arrays may be fixed-length or expandable.

-A record (also called a tuple or struct) is among the simplest data structures. A record is a value that contains other values, typically in fixed number and sequence and typically indexed by names. The elements of records are usually called fields or members.
-A hash or dictionary or map is a more flexible variation on a record, in which name-value pairs can be added and deleted freely.

-A union type definition specifies which of a number of permitted primitive types may be stored in its instances, e.g. "float or long integer". Contrast this with a record, which could be defined to contain a float and an integer; in a union, there is only one value at a time.

-A tagged union (also called a variant, variant record, discriminated union, or disjoint union) contains an additional field indicating its current type, for enhanced type safety.

-A set is an abstract data structure that can store certain values, without any particular order and with no repeated values. Values themselves are not retrieved from sets; rather, one tests a value for membership to obtain a Boolean "in" or "not in".

-An object contains a number of data fields, like a record, and also a number of program code fragments for accessing or modifying them. Data structures not containing code, like those above, are called plain old data structures.
Many others are possible, but they tend to be further variations and compounds of the above.

Basic principles
Data structures are generally based on the ability of a computer to fetch and store data at any place in its memory,
specified by an address - a bit string that can itself be stored in memory and manipulated by the program. Thus the record and array data structures are based on computing the addresses of data items with arithmetic operations, while the linked data structures are based on storing addresses of data items within the structure itself. Many data structures use both principles, sometimes combined in non-trivial ways (as in XOR linking).

The implementation of a data structure usually requires writing a set of procedures that create and manipulate instances of that structure. The efficiency of a data structure cannot be analyzed separately from those operations.
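A minimal Java sketch of the two addressing principles just described: an array element is located by arithmetic on an index, while a linked node stores a reference to the next item within the structure itself (the class and names below are illustrative assumptions, not from the original article):

    public class AddressingPrinciples {
        static class Node {      // linked principle: the "address" of the next
            int value;           // item is stored inside the structure itself
            Node next;
        }

        public static void main(String[] args) {
            int[] arr = {10, 20, 30};    // array principle: element i is found by
            System.out.println(arr[2]);  // arithmetic on the array's base address

            Node first = new Node();
            Node second = new Node();
            first.value = 10;
            second.value = 20;
            first.next = second;              // link by stored reference
            System.out.println(first.next.value);
        }
    }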
This observation motivates the theoretical concept of an abstract data type, a data structure that is defined indirectly by the operations that may be performed on it, and the mathematical properties of those operations (including their space and time cost).

Language support

Most programming languages have special syntax or other built-in support for certain data structures, such as vectors (one-dimensional arrays) in the C language or multi-dimensional arrays in Pascal. Most programming languages also feature some sort of library mechanism that allows data structure implementations to be reused by different programs. Modern languages usually come with standard libraries that implement the most common data structures. Examples are the C++ Standard Template Library, the Java Collections Framework, and Microsoft's .NET Framework.
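For instance, the Java Collections Framework mentioned above provides ready-made implementations of several of the structures listed earlier; a small illustrative sketch:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    public class CollectionsDemo {
        public static void main(String[] args) {
            List<Integer> array = new ArrayList<>();           // expandable array
            array.add(42);

            Map<String, Integer> dictionary = new HashMap<>(); // name-value pairs
            dictionary.put("answer", 42);

            Set<String> set = new HashSet<>();                 // no order, no repeats
            set.add("x");

            System.out.println(array + " " + dictionary + " " + set);
        }
    }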
Further reading
-Peter Brass, Advanced Data Structures, Cambridge University Press, 2008.
-Donald Knuth, The Art of Computer Programming, vol. 1. Addison-Wesley, 3rd edition, 1997.
-Dinesh Mehta and Sartaj Sahni, Handbook of Data Structures and Applications, Chapman and Hall/CRC Press, 2007.
-Niklaus Wirth, Algorithms and Data Structures, Prentice Hall, 1985.

External links
-UC Berkeley video course on data structures (http://academicearth.org/courses/data-structures)
-Descriptions from the Dictionary of Algorithms and Data Structures (http://nist.gov/dads/)
-CSE.unr.edu (http://www.cse.unr.edu/~bebis/CS308/)
-Data structures course with animations (http://www.cs.auckland.ac.nz/software/AlgAnim/ds_ToC.html)
-Data structure tutorials with animations (http://courses.cs.vt.edu/~csonline/DataStructures/Lessons/index.html)
-An Examination of Data Structures from a .NET perspective (http://msdn.microsoft.com/en-us/library/aa289148(VS.71).aspx)
-Schaffer, C., Data Structures and Algorithm Analysis (http://people.cs.vt.edu/~shaffer/Book/C++3e20110915.pdf)
Linked data structure

In computer science, a linked data structure is a data structure which consists of a set of data records (nodes) linked together and organized by references (links or pointers).
In linked data structures, the links are usually treated as special data types that can only be dereferenced or compared for equality. Linked data structures are thus contrasted with arrays and other data structures that require performing arithmetic operations on pointers. This distinction holds even when the nodes are actually implemented as elements of a single array, and the references are actually array indices: as long as no arithmetic is done on those indices, the data structure is essentially a linked one. Linking can be done in two ways: using dynamic allocation, or using array-index linking.

Linked lists

A linked list is a collection of structures ordered not by their physical placement in memory but by logical links that are stored as part of the data in the structure itself. The structures need not be stored in adjacent memory locations. Every structure has a data field and an address field; the address field contains the address of its successor. Linked lists can be singly, doubly or multiply linked, and can be either linear or circular.

Basic properties
-Objects, called nodes, are linked in a linear sequence.
-A reference to the first node of the list is always kept. This is called the 'head' or 'front'.[3]
[Figure: a linked list whose three nodes each contain two fields, an integer value and a link to the next node.]
[Figure: a linked list with a single node.]
Example in Java

This is an example of the node class used to store integers in a Java implementation of a linked list:

    public class IntNode {
        public int value;     // the integer stored in this node
        public IntNode link;  // reference to the next node, or null at the end
    }

A structure like this, which contains a member that points to the same structure type, is called a self-referential structure.
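To show how such nodes are linked by dynamic allocation, here is a minimal usage sketch; the LinkedListDemo class is an illustrative assumption of this sketch, not part of the original article:

    public class LinkedListDemo {
        public static void main(String[] args) {
            // Build the list 1 -> 2 -> 3 by allocating nodes and setting links.
            IntNode third = new IntNode();
            third.value = 3;
            IntNode second = new IntNode();
            second.value = 2;
            second.link = third;
            IntNode head = new IntNode();
            head.value = 1;
            head.link = second;

            // Traverse by following links until a null link marks the end.
            for (IntNode n = head; n != null; n = n.link) {
                System.out.println(n.value);
            }
        }
    }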
Search trees

A search tree is a tree whose nodes store data values from some ordered set, in such a way that an in-order traversal of the tree visits the nodes in ascending order of the stored values.
Basic properties
-Objects, called nodes, are stored in an ordered set.
-In-order traversal provides an ascending readout of the data in the tree.
-Subtrees of the tree are themselves trees.

Advantages and disadvantages

Advantages against arrays
Compared to arrays, linked data structures allow more flexibility in organizing the data and in allocating space for it.
In arrays, the size must be specified precisely at the beginning, which can be a potential waste of memory. A linked data structure is built dynamically and never needs to be bigger than the program requires; it demands no guessing about how much space to allocate, which is key to avoiding wasted memory.

In an array, the elements have to occupy a contiguous (connected and sequential) portion of memory. In a linked data structure, the reference stored in each node tells us where to find the next one, so the nodes can be moved individually to different locations without affecting the logical connections between them, unlike arrays. With due care, a process can add or delete nodes in one part of a data structure even while other processes are working on other parts.

On the other hand, access to any particular node in a linked data structure requires following the chain of references stored in it. If the structure has n nodes, and each node contains at most b links, there will be some nodes that cannot be reached in fewer than log_b n steps, and for many structures some nodes may require up to n - 1 steps in the worst case. In contrast, many array data structures allow access to any element with a constant number of operations, independent of the number of entries.
Broadly, these linked data structures are implemented as dynamic data structures, which give us the chance to reuse space: memory is allocated as needed and deallocated when it is no longer needed, so memory can be utilized more efficiently.

On the other hand, linked data structures may also use more total memory (for the link fields) than competing array structures. Because linked data structures are not contiguous, instances of the data can be scattered all over memory, unlike arrays.

In an array, the nth element can be accessed immediately, while in a linked data structure we have to follow multiple pointers, so element access time varies according to where in the structure the element is.

In some theoretical models of computation that enforce the constraints of linked structures, such as the pointer machine, many problems require more steps than in the unconstrained random access machine model.
References
[1] Donald Knuth, The Art of Computer Programming.
[2] Bernard A. Galler and Michael J. Fischer. An improved equivalence algorithm. Communications of the ACM, Volume 7, Issue 5 (May 1964), pages 301-303. The paper originating disjoint-set forests. ACM Digital Library (http://portal.acm.org/citation.cfm?doid=364099.364331)
[3] http://www.cs.toronto.edu/~hojjat/148s07/lectures/week5/07linked.pdf

Succinct data structure
In computer science, a succinct data structure is a data structure which uses an amount of space that is "close" to the information-theoretic lower bound, but (unlike other compressed representations) still allows for efficient query operations. The concept was originally introduced by Jacobson [1] to encode bit vectors, (unlabeled) trees, and planar graphs. Unlike general lossless data compression algorithms, succinct data structures retain the ability to use them in-place, without decompressing them first. A related notion is that of a compressed data structure, in which the size of the data structure depends upon the particular data being represented.

Suppose that Z is the information-theoretically optimal number of bits needed to store some data. A representation of this data is called
-implicit if it takes Z + O(1) bits of space,
-succinct if it takes Z + o(Z) bits of space, and
-compact if it takes O(Z) bits of space.

Implicit structures are thus usually reduced to storing information using some permutation of the input data; the most
well-known example of this is the heap.

Succinct dictionaries

Succinct indexable dictionaries store a set S ⊆ {0, ..., n − 1}, represented as a bit array B of length n where B[i] = 1 iff i ∈ S. An indexable dictionary supports the usual methods on dictionaries (queries, and insertions/deletions in the dynamic case) as well as the following operations:
-rank_q(x) = |{ k ≤ x : B[k] = q }|
-select_q(x) = min{ k : rank_q(k) = x }
for q ∈ {0, 1}.

There is a simple representation [4] which uses n + o(n) bits of storage space (the original bit array and an o(n) auxiliary structure) and supports rank and select in constant time. It uses an idea similar to that for range-minimum queries:
there are a constant number of recursions before stopping at a subproblem of a limited size. The bit array is partitioned into large blocks of size l = log² n bits and small blocks of size s = (log n)/2 bits. For each large block, the rank of its first bit is stored in a separate table R_l; each such entry takes log n bits, for a total of (n/l) log n = n/log n bits of storage. Within a large block, another directory R_s stores the rank of each of the l/s = 2 log n small blocks it contains. The difference here is that each entry needs only log l = 2 log log n bits, since only the differences from the rank of the first bit in the containing large block need to be stored. Thus, this table takes a total of (n/s) log l = O(n log log n / log n) bits. A lookup table R_p can then be used that stores the answer to every possible rank query on a bit string of length s; this requires O(2^s s log s) = O(√n log n log log n) bits of storage space. Since each of these auxiliary tables takes o(n) space, this data structure supports rank queries in O(1) time using n + o(n) bits in total: to answer rank_1(x) in constant time, the algorithm adds the precomputed rank of x's large block (from R_l), the relative rank of x's small block within its large block (from R_s), and the count of 1s in the small block up to position x mod s (from R_p).

In practice, the lookup table R_p can be replaced by bitwise operations and smaller tables that find the number of bits set in the small blocks. This is often beneficial, since succinct data structures find their uses in large data sets, in which case cache misses become much more frequent and the chance of the lookup table being evicted from closer CPU caches becomes higher.[5] Select queries can be easily supported by doing a binary search on the same auxiliary structure used for rank; however, this takes O(log n) time in the worst case. A more complicated structure using o(n) bits of additional storage can be used to support select in constant time.[6] In practice, many of these solutions have hidden constants in the O(·) notation which dominate before any asymptotic advantage becomes apparent; implementations using broadword operations and word-aligned
blocks often perform better in practice.[7]

Entropy-compressed dictionaries

The n + o(n) space approach can be improved by noting that there are C(n, m) distinct m-subsets of {0, ..., n − 1} (or binary strings of length n with exactly m 1s), and thus B(m, n) = ⌈log C(n, m)⌉ is an information-theoretic lower bound on the number of bits needed to store S. There is a succinct (static) dictionary which attains this bound, using B(m, n) + o(B(m, n)) space.[8] This structure can be extended to support rank and select queries at the cost of slightly more space.[2] The storage space can be reduced further in exchange for queries taking O(t) time, for a parameter t.[9]

Examples

When a sequence of variable-length items needs to be encoded, the items can simply be placed one after another,
with no delimiters. A separate binary string consisting of 1s in the positions where an item begins, and 0s everywhere else, is encoded along with it. Given this string, the select function can quickly determine where each item begins, given its index.[10]
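To make the rank operation concrete, here is a minimal Java sketch of a bit vector with constant-time rank. It is deliberately simplified: one directory entry per 64-bit word, so the overhead is O(n) rather than the o(n) of the two-level construction described above, and the class name is an assumption of this sketch.

    public class RankBitVector {
        private final long[] words;      // the bit array, 64 bits per word
        private final int[] rankBefore;  // rankBefore[i] = number of 1s in words[0..i-1]

        public RankBitVector(long[] words) {
            this.words = words;
            this.rankBefore = new int[words.length];
            int running = 0;
            for (int i = 0; i < words.length; i++) {
                rankBefore[i] = running;
                running += Long.bitCount(words[i]);
            }
        }

        // rank1(x): number of 1 bits in positions [0, x), for 0 <= x < 64 * words.length.
        public int rank1(int x) {
            int word = x >>> 6;                                 // which word holds position x
            long below = words[word] & ((1L << (x & 63)) - 1);  // bits strictly below x
            return rankBefore[word] + Long.bitCount(below);
        }
    }

Here Long.bitCount plays the role of the small-block lookup table; real implementations add the second directory level so the index itself stays within o(n) bits.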
Another example is the representation of a binary tree: an arbitrary binary tree on n nodes can be represented in 2n + o(n) bits while supporting a variety of operations on any node, such as finding its parent, its left and right child, and returning the size of its subtree, each in constant time. The number of different binary trees on n nodes is C(2n, n)/(n + 1) (the nth Catalan number). For large n, this is about 4^n; thus we need at least about log₂(4^n) = 2n bits to encode it. A succinct binary tree therefore occupies only about 2 bits per node.

References
[1] Jacobson, G. (1988). Succinct static data structures. PhD thesis, Carnegie Mellon University.
[2] Raman, R.; V. Raman, S. S. Rao (2002). "Succinct indexable dictionaries with applications to encoding k-ary trees and multisets" (http://www.cs.cmu.edu/afs/cs.cmu.edu/project/aladdin/wwwlocal/hash/RaRaRa02.pdf). Proceedings of the thirteenth annual ACM-SIAM symposium on Discrete algorithms. pp. 233-242. ISBN 089871513X.
[3] Sadakane, K.; R. Grossi (2006). "Squeezing succinct data structures into entropy bounds" (http://www.dmi.unisa.it/people/cerulli/www/WSPages/WSFiles/Abs/S3/S33_abs_Grossi.pdf). Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithms. pp. 1230-1239. ISBN 0898716055.
[4] Jacobson, G. (1989). Space-efficient static trees and graphs (http://www.cs.cmu.edu/afs/cs/project/aladdin/wwwlocal/compression/00063533.pdf).
[5] González, R.; S. Grabowski, V. Mäkinen, G. Navarro (2005). "Practical implementation of rank and select queries" (http://www.dcc.uchile.cl/~gnavarro/algoritmos/ps/wea05.pdf). Poster Proceedings Volume of 4th Workshop on Efficient and Experimental Algorithms (WEA). pp. 27-38.
[6] Clark, D. (1998). Compact pat trees (https://uwspace.uwaterloo.ca/bitstream/10012/64/1/nq21335.pdf).
[7] Vigna, S. (2008). "Broadword implementation of rank/select queries" (http://sux.dsi.unimi.it/paper.pdf). Experimental Algorithms: 154-168.
[8] Brodnik, A.; J. I. Munro (1999). "Membership in constant time and almost-minimum space" (http://www.cs.cmu.edu/afs/cs.cmu.edu/project/aladdin/wwwlocal/compression/BM99.pdf). SIAM J. Comput. 28 (5): 1627-1640. doi:10.1137/S0097539795294165.
[9] Patrascu, M. (2008). "Succincter" (http://people.csail.mit.edu/mip/papers/succinct/succinct.pdf). Foundations of Computer Science, 2008. FOCS'08. IEEE 49th Annual IEEE Symposium on. pp. 305-313.
[10] Belazzougui, Djamal. "Hash, displace, and compress" (http://cmph.sourceforge.net/papers/esa09.pdf).

Implicit data structure
In computer science, an implicit data structure is a data structure that uses very little memory besides the actual data elements, i.e. very little information other than the main data is stored. These are storage schemes which retain no pointers and represent a file of k-key records simply as an n-by-k array, and thus retrieve records faster. In implicit data structures, the only structural information needed is that which allows the array to grow and shrink with n; no extra information is required. The structure is called "implicit" because most of the structure of the elements is expressed implicitly by their order. Another term used interchangeably is space efficient. The definition of "very little" is vague, and can mean anything from O(1) to O(log n) extra space. An implicit data structure encodes data efficiently, so that it does not need to be decoded to be used: everything is accessed in-place, by reading bits at various positions in the data. To
achieve optimal coding, we count bits instead of bytes. Implicit data structures are frequently also succinct data structures.

Although one may argue that disk space is no longer a problem and we should not concern ourselves with improving space utilization, the issue that implicit data structures are designed to improve is main memory utilization. Hard disks, or any other means of large data capacity I/O devices, are orders of magnitude slower than main memory. Hence, the higher the percentage of a task that can fit in buffers in main memory, the less the dependence on slow I/O devices. Thus, if a larger chunk of an implicit data structure fits in main memory, the operations performed on it can be faster even if the asymptotic running time is not as good as that of its space-oblivious counterpart. Furthermore, since the CPU cache is usually much smaller than main memory, implicit data structures can improve cache efficiency and thus running speed, especially if the method used improves locality. Keys are scanned very efficiently, and downloading
indexed data on mobile devices becomes easier.

Representing elements with different weights requires several data structures. Such a structure uses one more location besides those required for the values of the elements. The first structure supports worst-case search time in terms of the rank of the weight of an element with respect to the multi-set of weights; if the elements are drawn from a uniform distribution, a variation of this structure takes average time. The same result obtains for the data structures in which the intervals between consecutive values have access probabilities.

Examples

Examples of implicit data structures include:
-Binary heap
-Beap

Further reading
-See publications of Hervé Brönnimann [1], J. Ian Munro [2], and Greg Frederickson [3]

References
[1] http://photon.poly.edu/~hbr/
[2] http://www.cs.uwaterloo.ca/~imunro/
[3] http://www.cs.purdue.edu/people/gnf
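The binary heap listed above is the canonical implicit data structure: the tree shape lives entirely in the order of an array, with the children of index i at 2i+1 and 2i+2 and no pointers stored. A minimal sketch (the class below is an illustrative assumption, not from the article):

    public class ImplicitMinHeap {
        private final int[] heap;
        private int size;

        public ImplicitMinHeap(int capacity) {
            heap = new int[capacity];
        }

        // Insert a value and sift it up; the parent of index i is (i - 1) / 2.
        public void add(int value) {
            int i = size++;
            heap[i] = value;
            while (i > 0 && heap[(i - 1) / 2] > heap[i]) {
                int t = heap[i]; heap[i] = heap[(i - 1) / 2]; heap[(i - 1) / 2] = t;
                i = (i - 1) / 2;
            }
        }

        // Remove the minimum (the root) and sift the last element down;
        // the children of index i are 2i + 1 and 2i + 2.
        public int removeMin() {
            int min = heap[0];
            heap[0] = heap[--size];
            int i = 0;
            while (2 * i + 1 < size) {
                int child = 2 * i + 1;
                if (child + 1 < size && heap[child + 1] < heap[child]) child++;
                if (heap[i] <= heap[child]) break;
                int t = heap[i]; heap[i] = heap[child]; heap[child] = t;
                i = child;
            }
            return min;
        }
    }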
Compressed data structure

A compressed data structure is a data structure whose operations are roughly as fast as those of a conventional data structure for the problem, but whose size can be substantially smaller. The size of the compressed data structure is typically highly dependent upon the entropy of the data being represented. Important examples of compressed data structures include the compressed suffix array [1][2] and the FM-index,[3] both of which can represent an arbitrary text for pattern matching. Conventional inverted indexes, by contrast, can support only word-based searches; in addition, inverted indexes do not have the self-indexing feature.
The size of a succinct data structure approaches the information-theoretic minimum, which is a worst-case notion of the space needed to represent the data. In contrast, the size of a compressed data structure depends upon the particular data being represented. When the data are compressible, as is often the case in practice for natural language text, the compressed data structure can occupy substantially less space than the information-theoretic minimum.

References
[1] R. Grossi and J. S. Vitter, Compressed Suffix Arrays and Suffix Trees with Applications to Text Indexing and String Matching, Proceedings of the 32nd ACM Symposium on Theory of Computing, May 2000, 397-406. Journal version in SIAM Journal on Computing, 35(2), 2005, 378-407.
[2] R. Grossi, A. Gupta, and J. S. Vitter, High-Order Entropy-Compressed Text Indexes, Proceedings of the 14th Annual SIAM/ACM Symposium on Discrete Algorithms, January 2003, 841-850.
[3] P. Ferragina and G. Manzini, Opportunistic Data Structures with Applications, Proceedings of the 41st IEEE Symposium on Foundations of Computer Science, November 2000, 390-398. Journal version in Indexing Compressed Text, Journal of the ACM, 52(4), 2005, 552-581.
Search data structure

In computer science, a search data structure is any data structure that allows the efficient retrieval of specific items from a set of items, such as a specific record from a database.
The simplest, most general, and least efficient search structure is merely an unordered sequential list of all the items; locating an item in such a list takes time proportional to the number n of items. More specialized search structures allow faster retrieval; however, they are limited to queries of some specific kind. Moreover, since the cost of building such structures is at least proportional to n, they only pay off if several queries are to be performed on the same database (or on a database that changes little between queries).

Static search structures answer queries on a fixed database; dynamic structures also allow insertion, deletion, or modification of items between successive queries. In the dynamic case, one must also consider the cost of fixing the search structure to account for the changes in the database.

The simplest kind of query is to locate a record with a specific key value. Other common kinds of query are "find the item with smallest (or largest) key value", "find the item with largest key value not exceeding v", and "find all items with key values between specified bounds v_min and v_max".

In certain databases the key values may be points in some multi-dimensional space. For example, the key may be a geographic position (latitude and longitude) on the Earth. In that case, common kinds of queries are "find the record with a key closest to a given point v", "find all items whose key lies at a given distance from v", or "find all items within a specified region R of the space". A common special case of the latter are simultaneous range queries on two or more simple keys, such as "find all
employee records with salary between 50,000 and 100,000 and hired between 1995 and 2007".

Single ordered keys
-Array, if the key values span a moderately compact interval.
-Priority-sorted list; see linear search.
-Key-sorted array; see binary search (a sketch follows this list).
-Self-balancing binary search tree.
-Hash table.
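As a minimal sketch of the key-sorted array option above (the helper below is illustrative, equivalent in spirit to java.util.Arrays.binarySearch):

    public class Search {
        // Returns the index of key in the sorted array a, or -1 if absent.
        // Each step halves the search interval, so the cost is O(log n).
        public static int binarySearch(int[] a, int key) {
            int lo = 0, hi = a.length - 1;
            while (lo <= hi) {
                int mid = (lo + hi) >>> 1;   // unsigned shift avoids overflow
                if (a[mid] < key) lo = mid + 1;
                else if (a[mid] > key) hi = mid - 1;
                else return mid;
            }
            return -1;
        }
    }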
Finding the smallest element
-Heap

Asymptotic amortized worst-case analysis

In this table, the asymptotic notation O(f(n)) means "not exceeding some fixed multiple of f(n) in the worst case."

                               Insert      Delete       Search     Find maximum   Space usage
  Unsorted array               O(1)        O(1)         O(n)       O(n)           O(n)
  Value-indexed array          O(1)        O(1)         O(1)       O(n)           O(n)
  Sorted array                 O(n)        O(n)         O(log n)   O(1)           O(n)
  Unsorted linked list         O(1)*       O(1)*        O(n)       O(n)           O(n)
  Sorted linked list           O(1)*       O(1)*        O(n)       O(1)           O(n)
  Self-balancing binary tree   O(log n)    O(log n)     O(log n)   O(log n)       O(n)
  Heap                         O(log n)    O(log n)**   O(n)       O(1)           O(n)
  Hash table                   O(1)        O(1)         O(1)       O(n)           O(n)
* The cost to add or delete an element at a known location in the list (i.e., if you have an iterator to the location) is O(1). If you don't know the location, you must traverse the list to the point of deletion/insertion, which takes O(n) time.
** The deletion cost is O(log n) for the minimum or maximum, O(n) for an arbitrary element.

This table is only an approximate summary; for each data structure there are special situations and variants that may lead to different costs. Also, two or more data structures can be combined to obtain lower costs.

Persistent data structure

In computing, a persistent data structure is a data structure that always preserves its previous version when it is modified. Such data structures are effectively immutable, as their operations do not (visibly) update the structure in-place, but instead always yield a new updated structure. (A persistent data structure is
not a data structure committed to persistent storage, such as a disk; this is a different and unrelated sense of the word "persistent.")

A data structure is partially persistent if all versions can be accessed but only the newest version can be modified, and fully persistent if every version can be both accessed and modified. If there is also a merge operation that can create a new version from two previous versions, the data structure is called
confluently persistent. Structures that are not persistent are called ephemeral.[1]

These types of data structures are particularly common in logical and functional programming; in a purely functional program all data is immutable, so all data structures are automatically fully persistent.[1] Persistent data structures can also be created using in-place updating of data, and these may, in general, use less time or storage space than their purely functional counterparts.
While persistence can be achieved by simple copying, this is inefficient in time and space, because most operations
make only small changes to a data structure. A better method is to exploit the similarity between the new and old
versions to share structure between them, such as using the same subtree in a number of tree structures. However, because it rapidly becomes infeasible to determine how many previous versions share which parts of the structure, and because it is often desirable to discard old versions, this necessitates an environment with garbage collection. For example, appending one singly linked list to another can copy only the first list, the second list (the tail of the result) becoming shared between both the old list and the new list. So long as the contents of the tail are immutable, this sharing will be invisible to the program.

Many common reference-based data structures, such as red-black trees,[2] stacks, and queues,[3] can easily be adapted to create a persistent version. Some others need slightly more effort, for example: min-deques (which support an additional operation min returning the minimal element in constant time without incurring additional complexity on the standard operations of queuing and dequeuing on both ends), and random access deques (implemented like a linked list, but with an additional operation of random access with sub-linear, most often logarithmic, complexity).
There also exist persistent data structures that use destructive operations, making them impossible to implement efficiently in purely functional languages (like Haskell), though possible in languages like C or Java. They are, however, generally not needed, as most data structures are now available in pure versions, which are often simpler to implement and often behave better in multi-threaded environments.

Linked lists

Singly linked lists are the bread-and-butter data structure in functional languages. In ML-derived languages and Haskell they are purely functional, because once a node in the list has been allocated it cannot be modified, only
copied or destroyed. (Note that ML itself is not purely functional.) Consider the two lists

    xs = [0, 1, 2]
    ys = [3, 4, 5]

In the original memory diagrams, a circle indicates a node in the list (the arrow out showing the second element of the node, which is a pointer to another node). Now concatenating the two lists

    zs = xs ++ ys

results in a memory structure in which the nodes of xs have been copied, while the nodes of ys are shared between ys and zs; both old lists persist and have not been modified. The reason for the copy is that the last node in xs (the node containing the original value 2) cannot be modified to point to the start of ys, because that would change the value of xs.
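The same sharing can be expressed in Java; the tiny immutable cons-list below is a hedged sketch of the idea (names like Cons and append are assumptions of this sketch, not from the original article):

    public final class Cons {
        public final int head;
        public final Cons tail;   // null marks the end of the list

        public Cons(int head, Cons tail) {
            this.head = head;
            this.tail = tail;
        }

        // Appending copies the nodes of xs but shares ys outright,
        // exactly as in the zs = xs ++ ys example above.
        public static Cons append(Cons xs, Cons ys) {
            if (xs == null) return ys;                      // share ys: no copy needed
            return new Cons(xs.head, append(xs.tail, ys));  // copy one node of xs
        }
    }

After zs = Cons.append(xs, ys), the old lists xs and ys are untouched, and zs physically shares every node of ys.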
Trees

This example is taken from Okasaki (see the bibliography). Consider a binary search tree, in which every node satisfies the invariant that all nodes in its left subtree hold smaller values and all nodes in its right subtree hold larger values. For instance, the set of data

    xs = [a, b, c, d, f, g, h]

might be represented by a binary search tree (shown as a diagram in the original).
A function which inserts data into the binary tree and maintains the invariant is:

    fun insert (x, E) = T (E, x, E)
      | insert (x, s as T (a, y, b)) =
          if x < y then T (insert (x, a), y, b)
          else if x > y then T (a, y, insert (x, b))
          else s

After executing

    ys = insert ("e", xs)

we end up with a new tree ys that shares every node of xs not on the path from the root to the inserted element; only the nodes along that path are copied. Discarding versions that are no longer needed requires garbage collection (GC) to automatically free up nodes which have no live references, and this is why GC is a feature commonly found
in functional programming languages.

Since every value in a purely functional computation is built up out of existing values, it would seem that it is impossible to create a cycle of references; in that case, the reference graph (the graph of the references from object to object) could only be a directed acyclic graph. However, in most functional languages, functions can be defined recursively; this capability allows building recursive structures using functional suspensions, as in lazy languages such as Haskell.

References
[1] Kaplan, Haim (2001). "Persistent data structures" (http://www.math.tau.ac.il/~haimk/papers/persistent-survey.ps). Handbook on Data Structures and Applications (CRC Press).
[2] Neil Sarnak, Robert E. Tarjan (1986). "Planar Point Location Using Persistent Search Trees" (http://www.link.cs.cmu.edu/15859-f07/papers/point-location.pdf). Communications of the ACM 29 (7): 669-679. doi:10.1145/6138.6151.
[3] Chris Okasaki. Purely Functional Data Structures (thesis) (http://www.cs.cmu.edu/~rwh/theses/okasaki.pdf).

Further reading
-Persistent Data Structures and Managed References (http://www.infoq.com/presentations/Value-Identity-State-Rich-Hickey) - video presentation by Rich Hickey on Clojure's use of persistent data structures and how they support concurrency
-Making Data Structures Persistent (http://www.cs.cmu.edu/~sleator/papers/Persistence.htm) by James R. Driscoll, Neil Sarnak, Daniel D. Sleator, Robert E. Tarjan
-Fully persistent arrays for efficient incremental updates and voluminous reads (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.34.1317)
-Real-Time Deques, Multihead Turing Machines, and Purely Functional Programming (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.51.2895)
-Purely Functional Data Structures by Chris Okasaki, Cambridge University Press, 1998, ISBN 0-521-66350-4.
-Purely Functional Data Structures (http://www.cs.cmu.edu/~rwh/theses/okasaki.pdf) thesis by Chris Okasaki (PDF format)
-Fully Persistent Lists with Catenation (http://www.cs.cmu.edu/~sleator/papers/fully-persistent-lists.pdf) by James R. Driscoll, Daniel D. Sleator, Robert E. Tarjan (PDF)
-Persistent Data Structures (http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-854j-advanced-algorithms-fall-2005/lecture-notes/persistent.pdf) from the MIT open course Advanced Algorithms (http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-854j-advanced-algorithms-fall-2005)
-Lightweight Java implementation of Persistent Red-Black Trees (http://wiki.edinburghhacklab.com/PersistentRedBlackTreeSet)
Concurrent data structure

In computer science, a concurrent data structure is a particular way of storing and organizing data for access by multiple computing threads (or processes) on a computer.
Historically, such data structures were used on uniprocessor machines with operating systems that supported multiple computing threads. The term concurrency captured the multiplexing/interleaving of the threads' operations on the data by the operating system, even though the processors never issued two operations that accessed the data simultaneously. Today, the term has come to stand mainly for data structures that can be accessed by multiple threads which may actually access the data simultaneously, because they run on different processors that communicate with one another. The concurrent data structure (sometimes also called a shared data structure) is usually considered to reside in an abstract storage environment called shared memory, though this memory may be physically implemented as either a "tightly coupled" or a distributed collection of storage modules.

Concurrent data structures, intended for use in parallel or distributed computing environments, differ from "sequential" data structures, intended for use on a uniprocessor machine, in several ways.[1] Most notably, in a sequential environment one specifies the data structure's properties and checks that they are implemented correctly, by providing safety properties. In a concurrent environment, the specification must also describe liveness properties which an implementation must provide. Safety properties usually state that something bad never happens,
while liveness properties state that something good keeps happening. These properties can be expressed, for example, using linear temporal logic.

The safety properties of concurrent data structures must capture their behavior given the many possible interleavings
of methods called by different threads. It is quite intuitive to specify how abstract data structures behave in a
sequential setting in which there are no interleavings. Therefore, many mainstream approaches for arguing the safety
properties of a concurrent data structure (such as serializability, linearizability, sequential consistency, and quiescent consistency [1]) specify the structure's properties sequentially, and map its concurrent executions to a collection of
sequential ones.

In order to guarantee the safety and liveness properties, concurrent data structures must typically (though not always)
allow threads to reach consensus as to the results of their simultaneous data access and modification requests. Tosupport such agreement, concurrent data structures are implemented using special primitive synchronization
operations (see synchronization primitives) available on modern multiprocessor machines that allow multiple threads to reach consensus. This consensus can be achieved in a blocking manner by using locks, or without locks, in which case it is non-blocking. There is a wide body of theory on the design of concurrent data structures (see bibliographical references). Concurrent data structures are significantly more difficult to design and to verify as being correct than their
sequential counterparts.

The primary source of this additional difficulty is concurrency, exacerbated by the fact that threads must be thought
of as being completely asynchronous: they are subject to operating system preemption, page faults, interrupts, and so
on.

On today's machines, the layout of processors and memory, the layout of data in memory, and the communication load on the various elements of the multiprocessor architecture all influence performance. Furthermore, there is a tension between correctness and performance: algorithmic enhancements that seek to improve performance often make it more difficult to design and verify a correct data structure implementation.

A key measure of performance is scalability, captured by the speedup of the implementation. Speedup is a measure of how effectively the application is utilizing the machine it is running on. On a machine with P processors, the speedup is the ratio of the structure's execution time on a single processor to its execution time on P processors. Ideally, we want linear speedup: we would like to achieve a speedup of P when using P processors. Data structures whose speedup grows with P are called scalable. The extent to which one can scale the performance of a concurrent data structure is captured by a formula known as Amdahl's law, and more refined versions of it such as Gustafson's law.

A key issue with the performance of concurrent data structures is the level of memory contention: the overhead in traffic to and from memory as a result of multiple threads concurrently attempting to access the same locations in memory. This issue is most acute with blocking implementations, in which locks control access to memory. In order to acquire a lock, a thread must repeatedly attempt to modify that location. On a cache-coherent multiprocessor (one in which processors have local caches that are updated by hardware in order to keep them consistent with the latest values stored), this results in long waiting times for each attempt to modify the location, and is exacerbated by the additional memory traffic associated with unsuccessful attempts to acquire the lock.
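As a minimal sketch of the blocking/non-blocking distinction discussed above (a toy counter, not a structure from the article): the first variant reaches consensus on each increment by holding a lock, the second by a lock-free compare-and-set loop.

    import java.util.concurrent.atomic.AtomicLong;

    public class Counters {
        // Blocking: threads serialize on the object's intrinsic lock.
        static class LockedCounter {
            private long value;
            public synchronized void increment() { value++; }
            public synchronized long get() { return value; }
        }

        // Non-blocking: each thread retries a compare-and-set until it wins;
        // no thread ever holds a lock, so no thread can block the others.
        static class CasCounter {
            private final AtomicLong value = new AtomicLong();
            public void increment() {
                long current;
                do {
                    current = value.get();
                } while (!value.compareAndSet(current, current + 1));
            }
            public long get() { return value.get(); }
        }
    }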
References
[1] Mark Moir and Nir Shavit (2007). "Concurrent Data Structures" (http://www.cs.tau.ac.il/~shanir/concurrent-data-structures.pdf). In Dinesh Metha and Sartaj Sahni, Handbook of Data Structures and Applications (1st ed.). Chapman and Hall/CRC Press. pp. 47-14 - 47-30.
Further reading
-Nancy Lynch, "Distributed Computing"
-Hagit Attiya and Jennifer Welch, "Distributed Computing: Fundamentals, Simulations And Advanced Topics, 2nd Ed"
-Doug Lea, "Concurrent Programming in Java: Design Principles and Patterns"
-Maurice Herlihy and Nir Shavit, "The Art of Multiprocessor Programming"
-Mattson, Sanders, and Massingill, "Patterns for Parallel Programming"

External links
-Multithreaded data structures for parallel computing, Part 1 (http://www.ibm.com/developerworks/aix/library/au-multithreaded_structures1/index.html) (Designing concurrent data structures) by Arpan Sen
-Multithreaded data structures for parallel computing, Part 2 (http://www.ibm.com/developerworks/aix/library/au-multithreaded_structures2/index.html) (Designing concurrent data structures without mutexes) by Arpan Sen
Abstract data type

In computer science, an abstract data type (ADT) is a mathematical model for a certain class of data structures that have similar behavior. An abstract data type is defined indirectly, only by the operations that may be performed on it and by mathematical constraints on the effects (and possibly cost) of those operations.[1] For example, when analyzing the efficiency of algorithms that use stacks, one may specify that all operations take the same time no matter how many items have been pushed into the stack, and that the stack uses
a constant amount of storage for each element.

Abstract data types are purely theoretical entities, used (among other things) to simplify the description of abstract algorithms, to classify and evaluate data structures, and to formally describe the type systems of programming languages. However, an ADT may be implemented by specific data types or data structures, in many ways and in many programming languages, or described in a formal specification language. ADTs are often implemented as modules: the module's interface declares procedures that correspond to the ADT operations, sometimes with comments that describe the constraints. This information-hiding strategy allows the implementation of the module to be changed without disturbing the client programs.

An abstract data type is defined as a mathematical model of the data objects that make up a data type, as well as the functions that operate on these objects. There are no standard conventions for defining them. A broad division may
be drawn between "imperative" and "functional" definition styles.Some operations may change the state of the ADT; therefore, the order in which operations are evaluated is
important, and the same operation on the same entities may have different effects if executed at different times
-just like the instructions of a computer, or the commands and procedures of an imperative language. To underscore
this view, it is customary to say that the operations are executed or applied , rather than evaluated . The imperativestyle is often used when describing abstract algorithms. This is described by Donald E. Knuth and can be referenced
from here-store(V,x) where x is a value of unspecified nature; and-fetch(V), that yields a value;with the constraint that-fetch(V) always returns the value x used in the most recent store(V,x) operation on the same variable V.
As in many programming languages, the operation store(V,x) is often written V ← x (or some similar notation), and fetch(V) is implied whenever a variable V is used in a context where a value is required. Thus, for example, V ← V + 1 is commonly understood to be a shorthand for store(V, fetch(V) + 1). In this definition, it is implicitly assumed that storing a value into a variable U has no effect on the state of a distinct variable V. To make this assumption explicit, one could add the constraint that
-if U and V are distinct variables, the sequence { store(U,x); store(V,y) } is equivalent to { store(V,y); store(U,x) }.
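A minimal Java sketch of these two operations (the AbstractVariable interface and Cell class are illustrative assumptions of this sketch):

    // The ADT: a variable is anything supporting store and fetch,
    // with the axiom that fetch() returns the most recently stored value.
    interface AbstractVariable<T> {
        void store(T x);
        T fetch();
    }

    // One concrete implementation of the ADT.
    class Cell<T> implements AbstractVariable<T> {
        private T value;
        public void store(T x) { value = x; }
        public T fetch() { return value; }
    }

With T = Integer, v.store(v.fetch() + 1) is exactly the V ← V + 1 shorthand of the text.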
More generally, ADT definitions often assume that any operation that changes the state of one ADT instance has no
effect on the state of any other instance (including other instances of the same ADT) - unless the ADT axioms imply that the two instances are connected (aliased) in that sense. For example, when extending the definition of abstract variable to include abstract records, the operation that selects a field from a record variable R must yield a variable V that is aliased to that part of R. A definition may also leave some operations undefined in some states; for instance, fetching a variable before anything has been stored into it may be ruled out because its effect is not defined. (However, there are some important algorithms whose efficiency strongly depends on the assumption that such a fetch is legal, and returns some arbitrary value in the variable's range.)

Some algorithms need to create new instances of some ADT (such as new variables, or new stacks). To describe such algorithms, one usually includes in the ADT definition a create() operation that yields an instance of the ADT, typically with an axiom stating that a newly created instance is distinct from any instance already in use by the algorithm. This axiom may be strengthened to exclude also partial aliasing with other instances. On the other hand, this axiom still allows implementations of create() to yield a previously created instance that has become inaccessible to the program.

Example: abstract stack (imperative)

As another example, an imperative definition of an abstract stack could specify that the state of a stack
S can be modified only by the operations
-push(S,x), where x is some value of unspecified nature; and
-pop(S), that yields a value as a result;
with the constraint that
-For any value x and any abstract variable V, the sequence of operations { push(S,x); V ← pop(S) } is equivalent to { V ← x }.

Since the assignment { V ← x }, by definition, cannot change the state of S, this condition implies that { V ← pop(S) } restores S to the state it had before the { push(S,x) }. From this condition, and from the properties of abstract variables, it follows, for example, that the sequence

{ push(S,x); push(S,y); U ← pop(S); push(S,z); V ← pop(S); W ← pop(S); }

where x, y, and z are any values, and U, V, W are pairwise distinct variables, is equivalent to

{ U ← y; V ← z; W ← x }
Here it is implicitly assumed that operations on a stack instance do not modify the state of any other ADT instance, including other stacks; that is,
-For any values x, y, and any distinct stacks S and T, the sequence { push(S,x); push(T,y) } is equivalent to { push(T,y); push(S,x) }.

A stack ADT definition usually also includes a Boolean-valued function empty(S) and a create() operation that returns a stack instance, with axioms equivalent to
-create() ≠ S for any prior stack S (a newly created stack is distinct from all previous stacks);
-empty(create()) (a newly created stack is empty);
-not empty(push(S,x)) (pushing something into a stack makes it non-empty).
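To connect the axioms to running code, here is a hedged sketch of one possible Java implementation of this stack ADT as a linked list (the names are assumptions; any implementation satisfying the axioms would do):

    public class LinkedStack<T> {
        private static final class Node<T> {
            final T value;
            final Node<T> next;
            Node(T value, Node<T> next) { this.value = value; this.next = next; }
        }

        private Node<T> top;   // null means the stack is empty

        // create() corresponds to: new LinkedStack<T>()
        public void push(T x) { top = new Node<>(x, top); }

        // pop yields the most recently pushed value, satisfying
        // { push(S,x); V <- pop(S) } == { V <- x }. As in the ADT,
        // pop on an empty stack is left undefined (here it throws).
        public T pop() {
            T x = top.value;
            top = top.next;
            return x;
        }

        public boolean empty() { return top == null; }
    }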
Single-instance style

Sometimes an ADT is defined as if only one instance of it existed during the execution of the algorithm, and all operations were applied to that instance, which is not explicitly notated. For example, the abstract stack above could have been defined with operations push(x) and pop(), that operate on "the" only existing stack. ADT definitions in this style can be easily rewritten to admit multiple coexisting instances of the ADT, by adding an explicit instance parameter (like S in the previous example) to every operation that uses or modifies the implicit instance.

On the other hand, some ADTs cannot be meaningfully defined without assuming multiple instances. This is the case
when a single operation takes two distinct instances of the ADT as parameters. For an example, consider augmenting the definition of the stack ADT with an operation compare(S,T) that checks whether the stacks S and T contain the same items in the same order.

Functional-style definition

Another way to define an ADT, closer to the spirit of functional programming, is to consider each state of the structure as a separate entity. In this view, any operation that modifies the ADT is modeled as a mathematical function that takes the old state as an argument and returns the new state as part of the result. Unlike imperative operations, these functions have no side effects; therefore, the order in which they are evaluated is immaterial, and the same operation applied to the same arguments (including the same input states) will always return the same results (and output states).

In the functional view, in particular, there is no way (or need) to define an "abstract variable" with the semantics of imperative variables (namely, with fetch and store operations). Instead of storing values into variables, one passes them as arguments to functions.

Example: abstract stack (functional)

For example, a complete functional-style definition of a stack ADT could use the three operations:
-push: takes a stack state and an arbitrary value, returns a stack state;
-top: takes a stack state, returns a value;
-pop: takes a stack state, returns a stack state;
with the following axioms:
-top(push(s,x)) = x (pushing an item onto a stack leaves it at the top)
-pop(push(s,x)) = s (pop undoes the effect of push)
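A hedged Java sketch of this functional view, where each operation returns a new stack state and never mutates the old one (the class name FStack is an assumption of this sketch):

    public final class FStack<T> {
        private final T head;          // top of this stack state
        private final FStack<T> rest;  // the state before the last push

        private FStack(T head, FStack<T> rest) { this.head = head; this.rest = rest; }

        public static <T> FStack<T> empty() { return new FStack<>(null, null); }

        // push returns a NEW state; the old state is untouched.
        public FStack<T> push(T x) { return new FStack<>(x, this); }

        public T top() { return head; }          // axiom: push(s,x).top() == x
        public FStack<T> pop() { return rest; }  // axiom: push(s,x).pop() == s
        // As with the axioms, top/pop on the empty state are left undefined
        // (here they return null).
    }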
In a functional-style definition there is no need for a create operation. Indeed, there is no notion of "stack instance": the stack states can be thought of as being potential states of a single stack structure, and two stack states that contain the same values in the same order are considered to be identical states. This view actually mirrors the behavior of some concrete implementations, such as linked lists with hash consing.

As in some other branches of mathematics, it is customary to assume also that the stack states are only those whose
existence can be proved from the axioms in a finite number of steps. In the stack ADT example above, this rule
means that every stack is a finite sequence of values, that becomes the empty stack () after a finite number of pops. By themselves, the axioms above do not exclude the existence of infinite stacks (that can be popped forever, each time yielding a different state) or circular stacks (that return to the same state after a finite number of pops). In particular, they do not exclude states s such that pop(s) = s or push(s,x) = s for some x. However, since one cannot obtain such stack states with the given operations, they are assumed "not to exist".

Advantages of abstract data typing

-Encapsulation: Abstraction provides a promise that any implementation of the ADT has certain properties and abilities; knowing these is all that is required to make use of an ADT object. The user does not need any technical knowledge of how
the implementation works to use the ADT. In this way, the implementation may be complex but will be encapsulated in a simple interface when it is actually used.

-Localization of change: Code that uses an ADT object will not need to be edited if the implementation of the ADT is changed. Since any
changes to the implementation must still comply with the interface, and since code using an ADT may only refer to
properties and abilities specified in the interface, changes may be made to the implementation without requiring any
changes in code where the ADT is used.

-Flexibility: Different implementations of an ADT, having all the same properties and abilities, are equivalent and may be used
somewhat interchangeably in code that uses the ADT. This gives a great deal of flexibility when using ADT objects
in different situations. For example, different implementations of an ADT may be more efficient in different
situations; it is possible to use each in the situation where they are preferable, thus increasing overall efficiency.
Typical operations

Some operations that are often specified for ADTs (possibly under other names) are:
-compare(s,t), that tests whether two structures are equivalent in some sense;
-hash(s), that computes some standard hash function from the instance's state;
-print(s) or show(s), that produces a human-readable representation of the structure's state.
In imperative-styl