CAP Theorem & KVS
Martijn de Vos, Week 10, CS-460

Modern web workloads
• Web-based applications cause spikes
• Data: large and unstructured
• Random reads and writes; sometimes write-heavy (e.g., finance apps)
• Joins are infrequent

Challenges with RDBMS
• Not designed for distributed environments
• Scaling SQL is expensive and inefficient

The shift in workload demands gave rise to NoSQL
NoSQL = Not Only SQL
Avoids:
• Strict ACID compliance
• Complex joins and relational schemas
Provides:
• Scalability
• Easy and frequent changes to the DB
• Large data volumes
No free lunch: weaker consistency guarantees and limited query expressiveness.

Availability
• Data replication improves availability in case of failures
• By storing the same data on more than one site or node

The CAP theorem*
In a distributed system you can satisfy at most 2 out of 3 guarantees:
1. Consistency: every read receives the most recent write or an error
2. Availability: every request received by a non-failing node in the system must result in a (timely) response
3. Partition tolerance: the system continues to operate despite an arbitrary number of messages being dropped (or delayed) by the network
*Proposed by Eric Brewer (Berkeley) in 2000, proved by Gilbert (NUS) and Lynch (MIT) in 2002.
Why does consistency matter?
Consistency: every read receives the most recent write or an error.
Use case | What you expect (consistency) | What could go wrong (inconsistency)
Banking app | You transfer €500 via your phone and it instantly shows up on your desktop app too. | Your balance looks updated on your phone but not on your desktop.
Booking a flight | A seat is shown as unavailable right after someone else books it. | Two users book the same seat at once.
Shopping | You remove an item from your shopping cart and it's instantly reflected everywhere. | You get charged for the same item because a device has stale cart data.

Why does availability matter?
Availability: every request received by a non-failing node in the system must result in a (timely) response.
• Reliability: users expect services to work 24/7. A 500 ms delay on Amazon → 20% revenue loss; if checkout fails, users can abandon their purchase.
• Speed = money: latency kills engagement. Amazon: every extra 100 ms → millions lost; Google: longer load times → fewer searches → lost revenue.
• Cognitive drift: humans are impatient. One second of delay and users mentally move on; responsiveness is key to user flow and retention.

Why does partition tolerance matter?
Partition tolerance: the system continues to operate despite an arbitrary number of messages being dropped (or delayed) by the network.
Event | Example | Impact
Internet router outage | Data center ISP failure | Servers are no longer reachable
Undersea cable cut | SEA-ME-WE 5 cable incident (2024) | Connectivity loss between regions
DNS outage | Dyn DDoS attack (2016) | Users can't resolve hostnames
BGP configuration error | Facebook outage (2021) | Outage of Facebook and its subsidiaries
Take-away: partitions actually happen in real-world settings.

CAP combinations
• CA: consistency + availability. Consistent and available, but not under partitions; does not exist in practical distributed settings.
• AP: availability + partition tolerance. Available under partitions; on a partition, may return stale data (e.g., Cassandra).
• CP: consistency + partition tolerance. Consistent under partitions; on a partition, denies some requests (e.g., ZooKeeper).

CAP in practice
• "2 out of 3" is somewhat misleading
• Partition tolerance is non-negotiable: in real systems we need it
• So the real choice is between consistency and availability
• Traditional RDBMSs → consistency + partition tolerance (prioritizes correctness)
• NoSQL → availability + partition tolerance (prioritizes user experience)

ACID vs. BASE: the trade-off in modern systems
• You can't have ACID properties and high availability under network partitions
• BASE systems embrace this, trading strict consistency for availability and scalability
• ACID is like a strict accountant, BASE is like a bar tab

BASE properties
• Basically available: faults are possible, but not a fault of the whole system
• Soft state: copies of a data item may be inconsistent
• Eventually consistent: copies become consistent at some later time if there are no more updates to that data item
[https://www.guru99.com/sql-vs-nosql.html]

Key takeaways
1. Choose the right guarantee for the right task (CP vs. AP)
2. Partition tolerance is non-negotiable in the CAP theorem
3. ACID for RDBMS, BASE for NoSQL systems
4. Different applications might need different consistency guarantees

References
• The theorem was first presented as a conjecture by Brewer at the 2000 Symposium on Principles of Distributed Computing (PODC).
• Seth Gilbert and Nancy Lynch, "Brewer's conjecture and the feasibility of consistent, available, partition-tolerant web services", ACM SIGACT News, Volume 33, Issue 2 (2002), pp. 51-59.
• Eric Brewer, "CAP twelve years later: how the 'rules' have changed", Computer, Volume 45, Issue 2 (2012), pp. 23-29.

Key-Value Stores

Serving today's workloads
• Speed (req/s)
• Scale out, not up
• Avoid single points of failure
• Low total cost of operation
• Fewer system administrators
• Need to serve many users

The key-value abstraction (1/2)
Key | Value
post_id (x.com, facebook.com) | post content, author, timestamp
item_id (amazon.com) | name, price, stock info
flight_no (expedia.com) | route, availability, price
account_no (bank.com) | balance, transactions, owner
Key-value is a powerful abstraction powering the modern web.

The key-value abstraction (2/2)
• A dictionary-like data structure
• Supports insert, lookup, and delete by key
• Example: a local hash table
• But now, distributed across many machines
• Designed to handle web-scale workloads
• Like distributed hash tables (DHTs) in P2P systems
• Key-value solutions reuse many techniques from DHTs: consistent hashing, replication, partitioning, ...
• Can we effectively locate and retrieve a key in a large, distributed database?

Key-value / NoSQL data model
• Core operations: get(key) and put(key, value)
• Storage model: tables, but more flexible
• Called column families (Cassandra), tables (HBase), collections (MongoDB)
• Unlike traditional RDBMS tables:
• May be schema-less: each row can have different columns
• Does not always support joins or foreign keys
(A small sketch of the get/put abstraction follows below.)
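As a minimal illustration of the get/put abstraction above, here is a hedged Python sketch of a node-local key-value table with timestamped values; the class and method names are invented for the example and are not any store's real API.

import time

class LocalKVStore:
    """Dictionary-like key-value table: insert, lookup, and delete by key."""
    def __init__(self):
        self._table = {}  # key -> (value, write timestamp)

    def put(self, key, value):
        # Store the value with a write timestamp, as key-value stores
        # commonly do to resolve conflicts between replicas later.
        self._table[key] = (value, time.time())

    def get(self, key):
        entry = self._table.get(key)
        return None if entry is None else entry[0]

    def delete(self, key):
        self._table.pop(key, None)

store = LocalKVStore()
store.put("post_id:42", {"author": "alice", "content": "hello"})
print(store.get("post_id:42"))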
Design of a real key-value store: Cassandra. Released in 2008, after Dynamo (2007) and Bigtable (2006).

Cassandra
• A distributed key-value store
• Many companies use Cassandra in their production clusters: IBM, Adobe, HP, eBay, Ericsson, Symantec, Twitter, Spotify, Netflix
• Scalable data model: data split across nodes
• CAP: availability and partition tolerance

Objectives
• Distributed storage system
• Targets large amounts of unstructured data
• Intended to run in a datacenter (and also across DCs) on many commodity servers
• No single point of failure
• Originally designed at Facebook
• Open-sourced later, today an Apache project (2010)
• But: does not support joins, limited support for transactions and aggregation

Data model (1/4)
• Table in Cassandra: a distributed map indexed by a key (can be nested)
• Row: identified by a unique key (primary key)
• Keyspace: a logical container for column families that defines the replication strategy and other configuration options
• Column family: a logical grouping of columns with a shared key; contains supercolumns or columns
• Column: basic data structure with a name, type, value, and timestamp
• Supercolumn: stores a map of sub-columns
• Columns that are likely to be queried together should be placed in the same column family
(A nested-dictionary sketch of this model follows below.)
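To make the keyspace / column family / column hierarchy above concrete, here is a hedged Python sketch that models it with nested dictionaries; the structure follows the slide's description, but the keyspace name and helper functions are assumptions for illustration.

import time

# keyspace -> column family -> row key -> column name -> (value, timestamp)
keyspace = {"messaging": {"terms": {}}}

def insert_column(ks, cf, row_key, column, value):
    """Insert a column (name, value, timestamp) into a column family row."""
    row = keyspace[ks][cf].setdefault(row_key, {})
    row[column] = (value, time.time())

def insert_subcolumn(ks, cf, row_key, supercolumn, subcolumn, value):
    """A supercolumn stores a map of sub-columns, each with its own value."""
    row = keyspace[ks][cf].setdefault(row_key, {})
    row.setdefault(supercolumn, {})[subcolumn] = (value, time.time())

# Facebook-style term search (see the next slides): row key = user id,
# supercolumn = term, sub-columns = ids of the messages containing the term.
insert_subcolumn("messaging", "terms", "user:alice", "hello", "msg:ab", "")
print(keyspace["messaging"]["terms"]["user:alice"])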
Data model (2/4)
(Figure: a keyspace contains column families; a column family contains columns; each column carries a name, value, and timestamp.)

Data model (3/4)
Feature | RDBMS | Cassandra
Organization | database → table → row | keyspace → column family → column
Row structure | fixed schema | dynamic columns
Column data | name, type, value | name, type, value, timestamp
Schema changes | typically require downtime | possible at runtime
Data model | normalized, with joins | denormalized

Data model (4/4)
(Figure: a simple column family maps each row key to a set of columns holding row data; a super column family maps each row key to supercolumns, each containing its own columns.)

Facebook example
• Facebook maintains a per-user index of all messages exchanged between senders and receivers
• Two kinds of search features, enabled in 2008:
• Search by term
• Search by user: given a user's name, return all the messages sent or received by that user

Facebook term search
• Primary key: userId
• The words of the messages are supercolumns
• The columns within each supercolumn are the individual message identifiers (messageIds) of the messages containing that word
(Figure: each user's row has supercolumn 1: term1 through supercolumn k: termk, each listing messageIds.)
Facebook inbox search
• Primary key: userId
• The recipients' ids are supercolumns
• The columns within each supercolumn are messageIds
(Figure: row key <user id>; supercolumn 1: userId1 through supercolumn r: userIdr, each listing messageIds.)

Schema
(Figure: both column families are keyed by <user id>; column family 1 indexes terms and column family 2 indexes recipients, each storing messageIds inside supercolumns.)

Term search interaction example
(Figure: in Alice's row, the terms column family has supercolumns such as "hello" and "world" listing message ids, and the interactions column family has supercolumns such as "bob" and "jack" listing message ids.)
Alice sends "hello" to Bob (msgId: ab): the id ab is added under the supercolumn "hello" in the terms column family and under the supercolumn "bob" in the interactions column family.
Cassandra architecture
• Decentralized, peer-to-peer architecture
• Easy to scale: add or remove nodes
• Read/write requests can go to any replica node
• Reads and writes have a configurable consistency level

Cassandra architecture topics:
1. Partitioning
2. Load balancing
3. Replication
4. Writes and reads
5. Data structures
6. Membership management
7. Consistency
(The terms node and replica are used interchangeably.)

Cassandra: partitioning
• Nodes are conceptually ordered on a clockwise ring
• Each node is responsible for the region of the ring between itself and its predecessor
• Uses a ring-based DHT, but without finger tables or routing
• Example of a write without replication, with token range [0-128] and four nodes owning [0-32], [32-64], [64-96], and [96-128]:
1. Write "user123"
2. h("user123") = 68
3. Route the write to the responsible node (the owner of [64-96])

Cassandra: load balancing
• Random partitioning leads to non-uniform data and load distribution (a popular key range can overload one node)
• Cassandra assumes homogeneous node performance
• How is this addressed?
• Lightly loaded nodes move on the ring to alleviate loaded ones
• Virtual nodes
(A ring-partitioning sketch follows below.)
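Here is a hedged Python sketch of the ring partitioning described above: four nodes own contiguous ranges of the [0, 128) token space, a key is hashed onto the ring, and the request is routed to the owning node. For illustration it also returns the next N-1 clockwise nodes as replicas, roughly in the spirit of SimpleStrategy; the toy hash function and the token layout are assumptions for the example.

import bisect
import hashlib

TOKEN_SPACE = 128
# Each entry is (upper bound of the owned range, node name), as in the slide:
# node1 owns [0-32), node2 [32-64), node3 [64-96), node4 [96-128).
ring = [(32, "node1"), (64, "node2"), (96, "node3"), (128, "node4")]

def token(key):
    """Toy hash mapping a key onto the [0, 128) token space."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % TOKEN_SPACE

def replicas(key, n=2):
    """Owner of the key's token range, plus the next n-1 nodes clockwise."""
    t = token(key)
    tokens = [tok for tok, _ in ring]
    i = bisect.bisect_right(tokens, t) % len(ring)
    return [ring[(i + k) % len(ring)][1] for k in range(n)]

print(token("user123"), replicas("user123", n=2))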
Cassandra: replication
• The replication factor N determines how many copies of the data exist
• Each data item is replicated at N nodes
• Various replication strategies exist
• Example with N = 2: write "user123", h("user123") = 35, route the write to the responsible nodes

Cassandra: replication strategies
SimpleStrategy | NetworkTopologyStrategy
used for a single DC and rack | for deployment across different DCs
easy setup | tunable replication factor per DC
Partitioner: random partitioner or byte-ordered (the latter is ideal for range queries).

CREATE KEYSPACE cluster1 WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 2};
CREATE KEYSPACE cluster1 WITH replication = {'class': 'NetworkTopologyStrategy', 'east': 2, 'west': 3};

Cassandra: writes (1/2)
• Coordinator: acts as a proxy between clients and replicas
• Writes need to be lock-free and fast (no reads or disk seeks)
• The client sends the write to one coordinator node in the Cassandra cluster
• The coordinator may be per-key, per-client, or per-query
• A per-key coordinator ensures that writes for that key are serialized
• When X replicas respond, the coordinator returns an acknowledgement to the client
Cassandra: writes (2/2)
• Always writable: hinted handoff mechanism
• If any replica is down, the coordinator writes to all other replicas and keeps the write locally until the down replica comes back up
• When all replicas are down, the coordinator (front end) buffers the writes (for up to a few hours)
• Real-world analogy: accepting parcels for neighbors who are not at home
• Example: a write request destined for R3 while R3 is down; the coordinator stores the hint and reconciles later

Cassandra: lightweight transactions
• Ensure sequential transaction execution
• Implemented using Paxos consensus
• At the cost of performance

Cassandra: data structures
A write at a replica node goes through three structures:
1. Commit log: append-only data structure (fast); a transactional log used for recovery in case of failures
2. Memtable (in memory): a write-back cache of data partitions that can be searched by key; an in-memory representation of multiple key-value pairs, asynchronously flushed to disk
3. SSTables (sorted string tables, on disk): persistent, ordered, immutable maps from keys to values, where both keys and values are arbitrary byte strings; use Bloom filters
(A write-path sketch follows below.)
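The write path just described (append to the commit log, update the in-memory memtable, flush to an immutable SSTable) can be sketched in Python as below. This is a simplified model under assumptions such as the flush threshold, not Cassandra's actual storage engine.

class ReplicaStorage:
    """Toy write path: commit log + memtable + immutable SSTables."""
    def __init__(self, memtable_limit=3):
        self.commit_log = []          # append-only log, used for recovery
        self.memtable = {}            # in memory, searched first on reads
        self.sstables = []            # immutable sorted maps on "disk"
        self.memtable_limit = memtable_limit

    def write(self, key, value, ts):
        self.commit_log.append((key, value, ts))   # 1. durable append
        self.memtable[key] = (value, ts)           # 2. in-memory update
        if len(self.memtable) >= self.memtable_limit:
            self.flush()                           # 3. background flush

    def flush(self):
        # An SSTable is a persistent, ordered, immutable map from keys to values.
        self.sstables.append(dict(sorted(self.memtable.items())))
        self.memtable = {}            # commit-log segments could now be recycled

    def read(self, key):
        # Reads look at the memtable first, then SSTables (newest first).
        if key in self.memtable:
            return self.memtable[key]
        for table in reversed(self.sstables):
            if key in table:
                return table[key]
        return None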
Cassandra: memtable flushes
• A background thread keeps checking the size of all memtables
• A memtable is marked for flushing when a new memtable is created, when the node's global memory threshold has been reached, or when the commit log is full
• Another thread flushes all the marked memtables
• The commit-log segments of a flushed memtable are marked for recycling
• A Bloom filter and an index are created

Bloom filters
• A compact way of representing a set of items
• Checking for existence (membership) in the set is cheap
• Probability of false positives: an item not in the set may be reported as being in the set
• Never false negatives
• Example FP rate: m = 4 hash functions, 100 items in the filter, 3200 bits → FP rate = 0.02%
(A Bloom-filter sketch follows below, after the reads slide.)

Cassandra: reads
• Example: read "user123", h("user123") = 35, route the read to X replicas; the replicas respond to the coordinator, which returns the latest-timestamped value to the client
• The coordinator can contact X replicas
• It checks consistency in the background, initiating a read repair if any two values differ
• This mechanism seeks to eventually bring all replicas up to date
• At a replica: a read looks at the memtables first, and then at the SSTables
• A row may be split across multiple SSTables
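Referring back to the Bloom filter slide above, here is a hedged Python sketch of a Bloom filter with m bits and k hash functions; it also evaluates the standard false-positive estimate (1 - e^(-kn/m))^k for the slide's numbers (k = 4, n = 100, m = 3200 bits), which comes out around 0.02%. Deriving the k positions from one SHA-256 digest is an illustrative choice, not how Cassandra implements it.

import hashlib
import math

class BloomFilter:
    def __init__(self, m_bits=3200, k_hashes=4):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits)      # one byte per bit, for simplicity

    def _positions(self, item):
        # Derive k bit positions from a single digest of the item.
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.k):
            chunk = digest[4 * i: 4 * i + 4]
            yield int.from_bytes(chunk, "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def might_contain(self, item):
        # False positives are possible, false negatives never.
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
for i in range(100):
    bf.add(f"key{i}")
fp = (1 - math.exp(-4 * 100 / 3200)) ** 4   # about 0.0002, i.e. 0.02%
print(bf.might_contain("key5"), bf.might_contain("absent"), round(fp, 5))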
Cassandra: membership management (1/2)
• Any server in the cluster could be the coordinator
• So every server needs to maintain a list of all the other servers currently in the cluster: full membership
• Membership needs to be updated automatically as servers join, leave, and fail
• Membership protocol: an efficient anti-entropy, gossip-based protocol
• A P2P protocol to discover and share location and state information about the other nodes in a Cassandra cluster

Cassandra: membership management (2/2)
Cassandra uses gossip-based cluster membership. Each node keeps a membership list of entries (address, heartbeat counter, local time). Protocol:
• Nodes periodically gossip their membership list
• On receipt, the local membership list is updated: for each address, the entry with the higher heartbeat counter wins
• If any heartbeat is older than Δfail, the node is marked as failed
(Example from the slide, at node 2 with current local time 70 and asynchronous clocks: the received list {1: 10120, 2: 10103, 3: 10098, 4: 10111} is merged with the old local list {1: 10118, 2: 10110, 3: 10090, 4: 10111}, giving {1: 10120, 2: 10110, 3: 10098, 4: 10111}.)
(A membership-merge sketch follows below.)
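A hedged Python sketch of the gossip merge rule described above: per address, keep the entry with the higher heartbeat counter, stamp it with the local receive time, and flag entries whose heartbeat has not been refreshed within Δfail. The data layout and the failure threshold are illustrative assumptions.

def merge_membership(local, received, now):
    """local/received: dict address -> (heartbeat, last_updated_local_time)."""
    for addr, (hb, _) in received.items():
        local_hb, local_t = local.get(addr, (-1, now))
        if hb > local_hb:
            local[addr] = (hb, now)     # fresher heartbeat learned via gossip
        else:
            local[addr] = (local_hb, local_t)
    return local

def failed_nodes(local, now, delta_fail):
    """Nodes whose heartbeat has not been refreshed within delta_fail."""
    return [a for a, (_, t) in local.items() if now - t > delta_fail]

node2 = {1: (10118, 64), 2: (10110, 64), 3: (10090, 58), 4: (10111, 65)}
from_node1 = {1: (10120, 66), 2: (10103, 62), 3: (10098, 63), 4: (10111, 65)}
print(merge_membership(node2, from_node1, now=70))
print(failed_nodes(node2, now=70, delta_fail=10))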
Cassandra: consistency
• Cassandra has tunable consistency levels
• The client chooses a consistency level for each read/write operation
Level | Behavior | Remarks
ANY | contact any node | fast; low consistency
ALL | contact all replicas | slow; strong consistency
ONE | contact at least one replica | faster than ALL
QUORUM | contact a quorum of replicas across DCs |
LOCAL_QUORUM | wait for a quorum in the first DC the client contacts | faster than QUORUM
Quorum-based protocols
In Cassandra, the coordinator must contact a quorum of replicas to read or write data.
Let:
N = number of replicas
R = number of nodes in a read quorum
W = number of nodes in a write quorum
Constraint (for strong consistency): R + W > N, so the most recent write is always read.
A quorum is like getting agreement from a committee: you don't need everyone, just a majority.

Quorums: example
• N = 5, R = 3, W = 3
• Read quorum {1, 2, 3} and write quorum {2, 4, 5} overlap (at node 2), so a read will return the latest value
• Quorums trade off consistency and availability

Quorums: write-write conflicts
• N = 5, W = 3
• Constraint to detect write-write conflicts: W > N/2
• Write quorum 1 {1, 2, 3} and write quorum 2 {2, 4, 5} overlap, so write-write conflicts can be detected and resolved (the older write can be ignored)

Quorum trade-offs
• In Cassandra, the values of R and W are configurable per query
• Strong consistency is sometimes not needed → eventual consistency
Goal | Choose | Why?
Consistency | high R and W | ensures quorum overlap
Write availability | lower W | fewer nodes need to acknowledge a write
Low read latency | lower R | faster reply collection by the coordinator
(A quorum-read sketch follows below.)
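A hedged Python sketch of the quorum rules above: it checks R + W > N and W > N/2, and resolves a read by returning the latest-timestamped value among the R replicas contacted, which is how the quorum overlap guarantees that the most recent write is seen. The replica states are made up for the example.

def check_quorum(n, r, w):
    """Strong consistency needs R + W > N; conflict detection needs W > N/2."""
    return {"reads_see_latest_write": r + w > n,
            "detect_write_conflicts": w > n / 2}

def quorum_read(replica_values, read_quorum):
    """replica_values: node -> (value, timestamp); return the latest among the quorum."""
    contacted = [replica_values[node] for node in read_quorum]
    return max(contacted, key=lambda vt: vt[1])[0]

replicas = {1: ("old", 10), 2: ("new", 20), 3: ("old", 10),
            4: ("new", 20), 5: ("new", 20)}
print(check_quorum(n=5, r=3, w=3))                    # both constraints hold
print(quorum_read(replicas, read_quorum=[1, 2, 3]))   # returns "new"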
Key features of Cassandra
• Distributed and decentralized
• Always available, with tunable consistency
• Fault-tolerant
• High write throughput
• Fast and linear scalability
• Multiple data center support
• NoSQL: appropriate data structures for many big data applications
• Distributed key-value stores are widely used in production
• Uses many algorithms from P2P systems and distributed computing

Key takeaways
1. Designing distributed systems is all about trade-offs
2. Designing for scale requires rethinking consistency
3. Key-value abstractions power modern web applications

References
• Giuseppe DeCandia, Deniz Hastorun, Madan Jampani, Gunavardhan Kakulapati, Avinash Lakshman, Alex Pilchin, Swaminathan Sivasubramanian, Peter Vosshall, Werner Vogels: Dynamo: Amazon's highly available key-value store. SOSP 2007.
• Avinash Lakshman, Prashant Malik: Cassandra: a decentralized structured storage system. ACM SIGOPS Oper. Syst. Rev. 44(2): 35-40 (2010).

DHTs
Anne-Marie Kermarrec, CS-460

"You know you have a distributed system when the crash of a computer you have never heard of stops you from getting any work done." - Leslie Lamport

Where are we?
(Course roadmap figure: distributed/decentralized systems, weeks 7-12, within the data science software stack of data processing, resource management and optimization, and data storage. Gossip protocols (week 7), DHTs and consistency models (week 8), key-value stores and the CAP theorem (week 9), scheduling (week 10), distributed messaging with Kafka (week 11), distributed learning (week 12). Storage covers distributed file systems (GFS) and NoSQL DBs such as Dynamo, Bigtable, and Cassandra; processing covers MapReduce, Dryad, Spark, Spark SQL, streaming systems (Storm, Naiad, Flink, Spark Streaming, Google Dataflow), graph systems (Pregel, GraphLab, X-Stream, Chaos), and scheduling (Mesos, YARN).)
They had a dream
• Share resources among multiple machines: the Internet
• The Internet: a collaborative, decentralized system

He had a similar dream
• The web: a decentralized system
• An address book for websites (DNS) and a common language to communicate (HTTP), e.g. https://www.lemonde.fr/
• The web turned extremely centralized, now in the hands of a few giants

An increasingly popular alternative
• A citizen-friendly alternative
• Decentralized infrastructure
• Privacy-aware
• (Fully) distributed architectures, aka P2P / decentralized

Distributed systems
• Use several machines
• Yet appear to the users as a single computer: your FB wall, your Netflix interface, etc.
• Examples: the web, the Internet, a wireless network, Bitcoin, a cloud (Amazon EC2/S3 or Microsoft Azure), a datacenter
Characteristics
• Aggregate resources
• Scalability
• Speed
• Reliability
• At the price of complexity and cost of maintenance

Why are they more complex?
• No global clock; no single global notion of the correct time (asynchrony)
• Unpredictable failures of components: lack of response may be due to the failure of a network component, a network path being down, or a computer crash
• Highly variable bandwidth: from 16 kbps (slow modems or Google Balloon) to Gbps (Internet2) to Tbps (between DCs of the same big company)
• Possibly large and variable latency: a few ms to several seconds
• Large numbers of hosts: 2 to several millions

P2P applications
• A large contributor of Internet traffic (~50% of Internet traffic)
• Applications: Bitcoin and blockchains, file-sharing applications (Gnutella, Kazaa, eDonkey, BitTorrent, ...), archival systems, application-level multicast, streaming protocols, telco applications (Skype), recommenders, decentralized AI

Why do I tell you about P2P systems?
• The first distributed systems that seriously focused on scalability
• P2P techniques are widely used in cloud computing systems
• Key-value stores (e.g., Cassandra, Riak, Voldemort) use P2P hashing
What makes P2P interesting?
• End-nodes are promoted to active components
• Nodes participate, interact, and contribute to the services they use
• Harness the huge pools of resources accumulated in millions of end-nodes
• Avoid a central / master entity
• Irregularities and dynamicity are treated as the norm

Overlay networks
• Unstructured overlays
• Structured overlays

Hash table
• A structured overlay network provides a hash table abstraction: insert, lookup, and delete objects by key
• key = hash(name); put(key, value); get(key) -> value
• A table of containers: efficient access to a value given a key; the key-value mapping is ensured by the table of containers

Distributed hash table
• A DHT does the same in a distributed setting, across millions of hosts on the Internet
• key = hash(data)
• lookup(key) -> IP address (DHT lookup service)
• send-RPC(IP address, put, key, data)
• send-RPC(IP address, get, key) -> data
• The key-value pairs are spread over the nodes of a P2P overlay network; messages are sent to keys (operation send(m, k)), which is the implementation of a DHT
• The P2P infrastructure ensures the mapping between keys and physical nodes
• Fully decentralized: peer-to-peer communication paradigm
• A distributed application (e.g., storage, multicast, pub-sub) runs on top of the DHT layer: put(key, data), get(key) -> data, lookup(key)

Pastry
Designed by A. Rowstron (MSR) and P. Druschel (Rice Univ.)

P2P routing infrastructure
• Overlay: a network abstraction on top of IP
• Basic functionality: a distributed hash table, with key = SHA-1(data)
• An identifier is associated with each node: nodeId = SHA-1(IP address)
• Large identifier space (keys and nodeIds)
• A node is responsible for a range of keys
• Routing: search efficiently for keys

Object distribution
• Consistent hashing [Karger et al. '97]
• 128-bit circular id space [0, 2^128 - 1]
• nodeIds chosen uniformly at random
• objIds chosen uniformly at random
• Invariant: the node with the numerically closest nodeId maintains the object
Pastry
• Naming space: a ring of 128-bit integers; nodeIds chosen at random
• Key/node mapping: a key is associated with the node whose nodeId is numerically closest
• State: a routing table plus a leaf set of the 8 or 16 numerically closest neighbors in the naming space

Pastry routing table
• Routing tables are based on prefix matching
• Identifiers are treated as a sequence of digits in base 16
• The routing table is a matrix of 128/4 rows and 16 columns
• routeTable(i, j): a nodeId matching the current node's identifier up to level i, with the next digit equal to j

Simple example
• Consider a peer with (binary) id 01110100101
• It maintains a neighbor peer in each of the following prefix regions: 1, 00, 010, 0110, ...
• At each routing step, forward to a neighbor with the largest matching prefix

Pastry: routing properties
• log16 N hops
• Size of the state maintained (routing table): O(log N)
• Example: node 65a1fc routes a message for key d46a1c via d13da3, d4213f, d462ba, and d467c4 (d471f1 is another nearby node)
• Intuition for the O(log N) search time: at each step, the distance between the query and the peer holding the key shrinks by a factor of at least 2 (one more matching digit per hop)
(A prefix-routing sketch follows below.)
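A hedged Python sketch of prefix routing as described above: each hop forwards to a node whose hex id shares at least one more digit with the key, and the last hop goes to the numerically closest node, as the leaf set would. The node ids come from the slide's figure; the flat "every node is known" shortcut and the non-wrapping distance are simplifications, since a real Pastry node only keeps O(log N) routing-table entries.

def shared_prefix_len(a, b):
    """Length of the common hex-digit prefix of two ids (shl in Pastry notation)."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def route(key, start, nodes):
    """Hop by hop: each hop resolves at least one more digit, as routeTable(l, D_l) would."""
    path, current = [start], start
    while True:
        l = shared_prefix_len(current, key)
        candidates = [n for n in nodes if shared_prefix_len(n, key) >= l + 1]
        if not candidates:
            # No longer-matching prefix known: finish via the numerically closest node,
            # which is what the leaf set provides (ring wraparound ignored here).
            closest = min(nodes, key=lambda n: abs(int(n, 16) - int(key, 16)))
            if closest != current:
                path.append(closest)
            return path
        current = min(candidates, key=lambda n: shared_prefix_len(n, key))
        path.append(current)

nodes = ["65a1fc", "d13da3", "d4213f", "d462ba", "d467c4", "d471f1"]
print(route("d46a1c", "65a1fc", nodes))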
Pastry: routing table of node 65a1fc
(Figure: the routing table has log16 N rows. Row 0 holds entries 0x, 1x, 2x, ..., fx (every first digit except 6); row 1 holds 60x, 61x, ..., 6fx (except 65); row 2 holds 650x, 651x, ..., 65fx (except 65a); row 3 holds 65a0x, 65a2x, ..., 65afx (except 65a1); and so on.)

Routing algorithm: notation
• R_l^i: the entry of the routing table at row l (0 ≤ l < 128/b) and column i (0 ≤ i < 2^b)
• L_i: the i-th closest nodeId in the leaf set
• D_l: the value of the l-th digit of the key D
• shl(A, B): the length of the prefix shared by A and B
Routing algorithm (on node A, for key D)
• If D falls within the range of A's leaf set, forward to the leaf-set node numerically closest to D.
• Otherwise, let l = shl(D, A); if the routing table entry R_l^{D_l} exists, forward to it (it shares at least one more digit with D).
• Otherwise, forward to any known node that shares at least as long a prefix with D as A does and is numerically closer to D.

Node departure
• Explicit departure or failure
• Graceful replacement of a node: the leaf set of the closest node in the leaf set contains the closest new node not yet in the leaf set
• Update from the leaf-set information
• Update the application

Failure detection
• Detected when immediate neighbors in the name space (leaf set) can no longer communicate
• Detected when a contact fails during routing; routing then uses an alternative route

State maintenance
• Leaf set: aggressively monitored and fixed; eventual guarantee unless more than L/2 nodes with adjacent nodeIds fail simultaneously
• Routing table: lazily repaired, when a hole is detected during routing, plus periodic gossip-based maintenance

Reducing latency
• Random assignment of nodeIds: nodes that are numerically close are geographically (topologically) distant
• Objective: fill the routing table with nodes so that routing hops are as short (latency-wise) as possible
• Topological metric: latency
Exploiting locality in Pastry
• Neighbors are selected based on a network proximity metric: the topologically closest node satisfying the constraints of the routing-table entry routeTable(i, j) (nodeId matching the current node up to level i, with next digit j)
• Entries in the top rows of the routing table point to topologically close nodes; entries in the bottom rows are essentially random nodes

Proximity routing in Pastry
(Figure: the route from 65a1fc towards key d46a1c, through d13da3, d4213f, d462ba, and d467c4, shown both in the naming space and in the topological space, where early hops are short and later hops get longer.)

Joining the network
• Node X joins through a nearby node A
• Node X routes a join message to its own id via A; the path is A, B, ..., Z, where Z is numerically closest to X
• Row i of X's routing table is initialized with the contents of row i of the routing table of the i-th node encountered on the path
• Improving the quality of the routing table:
• X asks each node in its routing table for its own routing state and compares distances
• Gossip-based update for each row (every 20 minutes): periodically an entry is chosen at random in the routing table, the corresponding row of that entry is sent over, potential candidates are evaluated, and better candidates replace existing ones
• New nodes are gradually integrated
Performance
(Figure: per-hop distance versus hop number (1 to 5), comparing normal routing tables, perfect routing tables, and no locality; overall, routes are about 1.59 times slower than IP on average.)

References
• Antony I. T. Rowstron, Peter Druschel: Pastry: scalable, decentralized object location, and routing for large-scale peer-to-peer systems. Middleware 2001: 329-350.
• Ion Stoica, Robert Tappan Morris, David R. Karger, M. Frans Kaashoek, Hari Balakrishnan: Chord: a scalable peer-to-peer lookup service for internet applications. SIGCOMM 2001: 149-160.
• Sylvia Ratnasamy, Paul Francis, Mark Handley, Richard M. Karp, Scott Shenker: A scalable content-addressable network. SIGCOMM 2001: 161-172.
• Ben Y. Zhao, Ling Huang, Jeremy Stribling, Sean C. Rhea, Anthony D. Joseph, John Kubiatowicz: Tapestry: a resilient global-scale overlay for service deployment. IEEE J. Sel. Areas Commun. 22(1): 41-53 (2004).

Consistency Models
Anne-Marie Kermarrec, CS-460

Where are we?
(Course roadmap figure, as before: consistency models and the CAP theorem, weeks 8-9, within the data storage / data processing stack.)
Replication
• Replication is key to availability (low latency, failure resilience, load balancing)
• But it creates inconsistencies due to concurrent accesses

What is a consistency model?
• Describes a contract between a client application and the data store
• States how the memory behaves
• States what the application can expect from the underlying storage system, and the associated rules

When is it needed?
• Whenever objects are replicated, replicas must be kept consistent in some way
• Modifications have to be carried out on all copies
• In the presence of concurrent updates and reads

Different consistency models
• A consistency model is a set of rules that a process obeys while accessing data
• There is a large spectrum of consistency models, from strong consistency to eventual consistency: the weaker the consistency, the faster the reads and writes

Examples of consistency guarantees
• Strong consistency: see all previous writes
• Eventual consistency: see a subset of previous writes
• Consistent prefix: see an initial sequence of writes
• Monotonic freshness: see an increasing sequence of writes
• Read my writes: see all writes performed by the reader
• Bounded staleness: see all "old" writes

Consistency requirements in a volleyball game
• The first team to reach 25 points, and by at least two points, wins a set (for the first 4 sets)
• The first team to reach 15 points, and by at least two points, wins the 5th set
• The first team to win 3 sets wins the game
• Imagine the score is stored and replicated in the cloud
• Inspired by "Replicated data consistency explained through baseball", Doug Terry, MSR technical report, October 2011
(Running example: with sets written home:visitors and points home-visitors, the score progresses through 2:1 20-22, 2:1 20-23, 2:1 21-23, 2:1 22-23, 2:1 23-23, 2:1 24-23, 2:1 25-23, and finally 3:1.)

Strong consistency (aka linearizability, one-copy serializability)
• The responses to the operations invoked in an execution are the same as if all operations were executed in a sequential order, and this order respects the one specified by each process
• Guarantee: see all previous writes; all reads at time t reflect all the writes that happened before t
• Strong consistency is impossible to achieve in the presence of partitions (CAP, next lecture)
• Strong consistency is impossible to achieve in an asynchronous system without assumptions on message delivery latencies (FLP)
• Example: both readers see the latest score, 3:1
Eventual consistency
• Eventually, in the absence of operations, replicas will be consistent
• Guarantee: see some previous writes; eventually (in the absence of new writes), all reads return the correct and most recent state
• Example: while the final score is 3:1, one reader may still see 2:1 20-23 and another 2:1 22-23

Consistent prefix (snapshot isolation, ordered delivery)
• Guarantee: see an initial sequence of writes that existed at some point in time
• If a reader issues a read request at time t, it reads the result of some prefix of the sequence of writes
• Example: a reader may see 2:1 20-23 (a valid earlier score) or 3:1, but never a score that never existed

Monotonic freshness
• If a process reads the value of a data item x, any successive operation on x by that process will always return the same or a more recent value
• Guarantee: see an increasing subset of previous writes (a local guarantee, from the point of view of a given reader)
• Example: if reader #1 sees 2:1 20-23 at time t1, then at a later time t2 it may see 2:1 22-23 or 3:1, but never an older score

Bounded staleness (periodic snapshot, continuous consistency)
• Guarantee: see all "old" writes; the staleness parameter denotes the allowed staleness of the system
• Example: a reader is guaranteed to see at least the score as of the staleness bound ago, e.g. 2:1 22-23 even if the latest write is 3:1

Read my writes
• The effect of a write operation by a process on data item x will always be seen by a successive read operation on x by the same process
• Guarantee: see all writes performed by the reader (a local guarantee); any read by client c reflects all of c's past writes
• This can amount to local strong consistency for c's own writes, while reads of other clients' writes can be eventually consistent
(A small sketch of these guarantees over the replicated score follows below.)
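To make these guarantees concrete, here is a hedged Python sketch that models the replicated scoreboard as a totally ordered log of score writes, with each replica holding some prefix of it; the read functions show which values a reader can legitimately observe under strong consistency, consistent prefix, and bounded staleness. This is an illustrative model with invented replica states, not a real protocol.

# Totally ordered sequence of writes to the replicated score (home-visitors).
writes = ["2:1 20-22", "2:1 20-23", "2:1 21-23", "2:1 22-23",
          "2:1 23-23", "2:1 24-23", "2:1 25-23", "3:1"]

# Each replica has applied some prefix of the write log.
replica_prefix_len = {"r1": 8, "r2": 4, "r3": 6}

def strong_read():
    """Strong consistency: see all previous writes, i.e. always the latest value."""
    return writes[-1]

def consistent_prefix_read(replica):
    """Consistent prefix: the result of some prefix of the writes."""
    return writes[replica_prefix_len[replica] - 1]

def bounded_staleness_read(replica, max_staleness):
    """Bounded staleness: miss at most max_staleness of the most recent writes."""
    assert len(writes) - replica_prefix_len[replica] <= max_staleness, "replica too stale"
    return consistent_prefix_read(replica)

print(strong_read())                      # 3:1
print(consistent_prefix_read("r2"))       # 2:1 22-23
print(bounded_staleness_read("r3", 2))    # 2:1 24-23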
Which guarantee does each client need? (volleyball personas)
• Official score keeper: suppose the visitors score; score = read(visitor_score); write(visitor_score, updated score). Needs read-my-writes (with a single score keeper), strong consistency otherwise.
• Referee (4th set, home team at 24): vs = read(visitor_score); hs = read(home_score); if (hs == 25) and (vs < 24) then end the game. Needs strong consistency.
• Radio reporter: every 30 minutes, vs = read(visitor_score); hs = read(home_score); report vs and hs. Consistent prefix (if the reads come from the same replica), monotonic freshness, or bounded staleness.
• Sportswriter: while the game is not over, drink beer; go out to dinner; then vs = read(visitor_score); hs = read(home_score); write the article. Eventual consistency or bounded staleness.
• Statistician: wait for the end of the game; score = read(home_stats); stat = read("season-runs"); write("season-runs", stat + score). Strong consistency for the first read, read-my-writes afterwards.
• Supporter: read the score; discuss it with friends. Eventual consistency or strong consistency.

A wide range of models
• Score keeper: read my writes (or strong); referee: strong; radio reporter: consistent prefix, monotonic freshness, or bounded staleness; sportswriter: eventual or bounded staleness; statistician: strong plus read my writes; supporter: eventual (or strong)
Conclusions
• Different clients want different guarantees
• One client might want different guarantees for different reads
• Several models can be applied
• Strong consistency would always do, but is prohibitive performance-wise
• Use the weakest (leftmost in the spectrum) consistency model that is still "correct" for your application

Gossip-Based Computing
Anne-Marie Kermarrec, CS-460

The Scalable Computing Systems lab (SaCS)
• System support for machine learning
• Federated / decentralized learning systems
• Large-scale recommenders
• Privacy-aware learning systems
• Collaborative computing

Schedule
Week 7 (07/04): gossip protocols
Week 8 (14/04): distributed hash tables + consistency models
Week 9 (28/04): key-value stores + CAP theorem
Week 10 (05/05): scheduling
Week 11 (12/05): stream processing
Week 12 (19/05): distributed learning systems
Week 13 (26/05): invited industry lecture

Where are we?
(Course roadmap figure, as before.)
Dissemination / multicast
• A key feature in distributed computing: consistency protocols, event dissemination, fault-tolerant dissemination
• Atomicity: 100% of the nodes receive the message
• Trade-off: latency / load balancing / failure resilience

Approaches
• Centralized: star topology
• Tree-based multicast
• A third approach: epidemic / gossip-based dissemination, where each node forwards to f others; simple, reliable, exponential spreading

Gossip / epidemics in distributed computing
• Replace people by computers (nodes or peers) and words by data
• Gossip: pairwise exchange of information
• Epidemic: wide and exponential spread
• Two approaches: anti-entropy (pairwise exchange) and gossiping (update f neighbors)

Principle
• Information is spread to allow for local-only decision making
• Nodes exchange information with their neighbors: peer-to-peer communication paradigm
• Data is disseminated efficiently
• No centralized control
• Eventual convergence, probabilistic in nature

Mathematics of epidemics
• n processes
• Each individual contaminates, with some probability, f other members chosen at random
• Number of rounds an individual remains infectious: from "infect and die" to "infect forever"
• Metrics of success of an epidemic dissemination: the proportion of infected processes after r rounds, and the probability of atomic "infection"
• In the infect-forever model, let z_r be the number of infected processes prior to round r and y_r = z_r / n the infected fraction; the quantity of interest is the probability that z_r reaches n.
• Reference: N. T. J. Bailey, The Mathematical Theory of Infectious Diseases and its Applications, 2nd ed., Hafner Press, 1975.

Probability of "atomic" infection
• Examine the final system state: the system is represented as a graph in which each node is a process, and there is an edge from i to j if i is infected and chooses j.
• An epidemic starting at i is successful if there is a path from i to all members.
• Erdős-Rényi result: if the fanout is log(n) + c, the probability that this random graph is connected tends to P(connect) = e^(-e^(-c)).

The log(n) magic
• A simple dissemination algorithm with probabilistic guarantees of delivery
• Each node forwards the message to f nodes chosen uniformly at random
• If f = O(log(n)), the broadcast is "atomic" with high probability in O(log(n)) hops
• The result holds as long as the fanout for each peer is on average log(n) + c, regardless of the degree distribution
• Relate the probability of reliable dissemination to the proportion of failures, and set the parameters accordingly
(A small push-gossip simulation follows below.)
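A hedged Python simulation of push gossip in the infect-forever model sketched above: in each round, every infected node forwards to f = log(n) + c peers chosen uniformly at random, and we count the rounds until everyone is infected. The parameters and the use of the natural logarithm are illustrative choices.

import math
import random

def push_gossip(n=10_000, c=2, seed=0):
    random.seed(seed)
    fanout = int(math.log(n)) + c          # f = log(n) + c
    infected = {0}                          # node 0 starts the epidemic
    rounds = 0
    while len(infected) < n:
        newly = set()
        for node in infected:
            # Each infected node pushes the message to `fanout` random peers.
            newly.update(random.randrange(n) for _ in range(fanout))
        infected |= newly
        rounds += 1
    return rounds

print(push_gossip())   # typically completes in O(log n) rounds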
• log(n) is a very slowly growing number: in base 2, log(1000) ≈ 10, log(1M) ≈ 20, log(1B) ≈ 30

Performance (100,000 peers)
(Figure: as the fanout f grows from 1 to about 17, both the proportion of "atomic" broadcasts (runs in which all nodes received the message) and the proportion of peers reached in non-atomic broadcasts rise towards 1.)

Failure resilience (100,000 peers)
(Figure: with up to 50% faulty peers, the proportion of "atomic" broadcasts and the proportion of peers reached in non-atomic runs remain very high, around 99.94-99.98%.)

Push versus pull protocols
• Push: once a node receives a multicast message, it forwards it to f nodes
• Pull: periodically, a node sends a request to f randomly selected processes for new multicast messages it has not yet received
• Hybrid variant: push-pull, as the name suggests

The relevance of gossip
• Introduces implicit redundancy
• Flexible, scalable, and simple protocols
• Low overhead, small messages
• Applications to maintenance, monitoring, etc.
• Protocols differ in the choice of gossip targets and the information exchanged

Basic functionality
• Requires a uniform random sample
• How can we obtain one in a decentralized way?
Achieving random topologies: the peer sampling service
• How to create a graph upon which to apply gossip-based dissemination? ... by gossiping
• Goal: create an overlay network and provide each peer with a random sample of the network, in a decentralized way
• Means: gossip-based protocols. What data should be gossiped? To whom? How should the exchanged data be processed?
• The resulting "who knows whom" graphs form the overlay: properties (degree, clustering, diameter, etc.), resilience to network dynamics, closeness to random graphs

Objective
• Provide nodes with a peer drawn uniformly at random from the complete set of nodes
• Sampling is accurate: it reflects the current set of nodes
• Independent views
• A scalable service

Example: gossip-based generic protocol
(Figure: a 10-node overlay in which node 1, with view size c = 3, exchanges part of its view with a neighbor and both peers update their views.)

System model
• A system of n peers
• Peers join, leave, and fail dynamically, and are identified uniquely (IP address)
• Epidemic interaction model: peers periodically exchange some membership information to update their own membership information, reflect the dynamics of the system, and ensure connectivity
• Each peer maintains a local view (membership table) of c entries: network address (IP) and age (freshness of the descriptor)
• Each entry is unique; the view is an ordered list
• Active and passive threads run on each node

Operations on partial views (membership)
• selectPeer(): returns an item at random
• permute(): randomly shuffles the items
• increaseAge(): adds 1 to the age of every item
• append(...): appends a number of items
• removeDuplicates(): removes duplicates (same address), keeping the youngest
• removeOldItems(n): removes the n descriptors with the highest age
• removeHead(n): removes the first n descriptors
• removeRandom(n): removes n random descriptors

Active thread
wait(T time units)                         // T is the cycle length
p <- selectPeer()                          // sample a live peer from the current view
if push then                               // this peer takes the initiative
    myDescriptor <- (my address, 0)
    buffer <- merge(view, {myDescriptor})  // temporary list
    view.permute()                         // shuffle the items in the view
    move the oldest H items to the end of the view   // to get rid of old nodes
    buffer.append(view.head(c/2))          // copy the first half of the items
    send buffer to p
else
    send {} to p                           // triggers a response
if pull then
    receive buffer_p from p
    view.selectView(c, H, S, buffer_p)
view.increaseAge()

Passive thread
do forever
    receive buffer_p from p
    if pull then
        myDescriptor <- (my address, 0)
        buffer <- merge(view, {myDescriptor})
        view.permute()
        move the oldest H items to the end of the view
        buffer.append(view.head(c/2))
        send buffer to p
    view.selectView(c, H, S, buffer_p)
    view.increaseAge()

Design space
• Periodically, each peer initiates communication with another peer
• Peer selection
• Data exchange (view propagation): how do peers exchange their membership information?
• Data processing (view selection): select(c, buffer), where c is the size of the resulting view and buffer is the information exchanged

Design space: peer selection
selectPeer() returns a live peer from the current view:
• rand: pick a peer uniformly at random
• head: pick the "youngest" peer
• tail: pick the "oldest" peer
Note that head leads to correlated views.

View propagation
• Push: the node sends descriptors to the selected peer
• Pull: the node only pulls in descriptors from the selected peer
• Push-pull: the node and the selected peer exchange descriptors
Pulling alone is pretty bad: a node has no opportunity to insert information about itself, with a potential loss of all of its incoming connections.

Design space: data exchange
• The buffer is initialized with the descriptor of the gossiper, contains c/2 elements, and ignores the H "oldest" ones
• Communication model: push (buffer sent one way) or push-pull (buffers sent both ways); pull alone is left out, since the gossiper cannot inject information about itself, which harms connectivity

Design space: data processing
• select(c, H, S, buffer), where c is the size of the resulting view, H the self-healing parameter, S the shuffle parameter, and buffer the information exchanged:
1. The buffer is appended to the view
2. Keep the freshest entry for each node
3. The H oldest items are removed
4. The S first items are removed (the ones sent over)
5. Random nodes are removed (down to c entries)
• Merge strategies: blind (H = 0, S = 0) selects a random subset; healer (H = c/2) selects the "freshest" entries; shuffler (H = 0, S = c/2) minimizes loss
(A view-selection sketch follows below.)

Example
(Figure: a worked example of the view-selection steps on a small view, showing the buffer being appended, duplicates resolved in favor of the freshest entries, and, with H = 0 and S = 1, the oldest and sent-over items being dropped before random removal down to c.)
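A hedged Python sketch of the select(c, H, S, buffer) steps just illustrated: append the received buffer, keep the freshest descriptor per address, drop the H oldest and the S first entries, then trim randomly down to c. Descriptors are (address, age) pairs; the ordering details and the "only remove down to c" guards are filled in as assumptions.

import random

def select_view(view, buffer, c, h, s):
    """view/buffer: ordered lists of (address, age). Simplified select(c, H, S, buffer)."""
    merged = view + buffer                           # 1. buffer appended to the view
    best = {}
    for addr, age in merged:                         # 2. keep the freshest entry per address
        best[addr] = min(age, best.get(addr, age))
    seen, entries = set(), []
    for addr, _ in merged:                           #    ...preserving first-occurrence order
        if addr not in seen:
            seen.add(addr)
            entries.append((addr, best[addr]))
    for _ in range(h):                               # 3. remove the H oldest items
        if len(entries) > c:
            entries.remove(max(entries, key=lambda e: e[1]))
    del entries[:min(s, max(len(entries) - c, 0))]   # 4. remove the S first items (those sent over)
    while len(entries) > c:                          # 5. remove random items down to c entries
        entries.pop(random.randrange(len(entries)))
    return entries

old_view = [("10.0.0.2", 3), ("10.0.0.3", 1), ("10.0.0.4", 5)]
received = [("10.0.0.5", 0), ("10.0.0.3", 0)]
print(select_view(old_view, received, c=3, h=1, s=1))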
2008 40 example b x d l i j a v x g 1. buffer appended to view 2. keep the freshest entry for each node b d l i j a v x g nov. 2008 41 example b d l i j a v x g 1. buffer appended to view 2. keep the freshest entry for each node 3. h ( = 0 ) oldest items removed b d l i j a v x g nov. 2008 42 example b d l i j a v g 1. buffer appended to view 2. keep the freshest entry for each node 3. h ( = 0 ) oldest items removed 4. s ( = 1 ) first items removed ( the one sent over ) d l i j a v x g x nov. 2008 43 example a 1. buffer appended to view 2. keep the freshest entry for each node 3. h ( = 0 ) oldest items removed 4. s ( = 1 ) first items removed ( the one sent over ) 5. random nodes removed a d l i a v x g d l i j v x g existing systems β€’ lpbcast [ eugster & al, dsn 2001, acm tocs 2003 ] β€’ node selection : random β€’ data exchange : push β€’ data processing : random β€’ newscast [ jelasity & van steen, 2002 ] β€’ node selection : head β€’ data exchange : pushpull β€’ data processing : head β€’ cyclon [ voulgaris
EPFL CS 460 Moodle
No question: Moodle
• Cyclon [Voulgaris et al., JNSM 2005]: node selection random, data exchange push-pull, data processing shuffle

(Figure: degree distribution with f = 30 in a 10,000-node system.)
Metrics: degree distribution, average path length, clustering coefficient.

A generic gossip-based substrate
• Each node maintains a set of neighbors (a view of c entries)
• Periodic pairwise exchange of information
• Each process runs an active and a passive thread
• Parameter space: peer selection, data exchange, data processing

Active thread (peer p):
(1) selectPeer(&q);
(2) selectToSend(&bufs);
(3) sendTo(q, bufs);
(4) -
(5) receiveFrom(q, &bufr);
(6) selectToKeep(view, bufr);
(7) processData(view)

Passive thread (peer q):
(1) -
(2) -
(3) receiveFrom(&p, &bufr);
(4) selectToSend(&bufs);
(5) sendTo(p, bufs);
(6) selectToKeep(view, bufr);
(7) processData(view)

selectPeer: (randomly) select a neighbor
selectToSend: select some entries from the local view
selectToKeep: add received entries to the local view

Gossip-based dissemination
• Data exchanged: the message to broadcast; data processing: each process gossips a message once; peer selection: k random peers
• How can we achieve random sampling? With the peer sampling service described above

Topology maintenance
• lpbcast [Eugster et al., DSN 2001, ACM TOCS 2003]: exchanges the list of neighbours, push, random peer selection, random merging
• Newscast [Jelasity & van Steen, 2002]: exchanges half the list of neighbours, push-pull, head peer selection, age-based merging (head)
• Cyclon [Voulgaris et al., JNSM 2005]: exchanges half the list of neighbours, push-pull, oldest peer selection, shuffle

Decentralized computations
• Average aggregation [Jelasity et al., ACM TOCS 2005]: exchanges a value, random peer selection, aggregation as processing
• System size estimation: same scheme
• Slicing: exchanges an attribute/value, random peer selection, ordering/matching as processing

Gossip-based aggregation
• Each node holds a numeric value s
• Aggregation function: the average over the set of nodes
• Assume getNeighbor() returns a uniform random sample
• update(s_p, s_q) returns (s_p + s_q)/2 for both peers
• The operation does not change the global average, but redistributes the variance over the set of all estimates in the system
• It is proven that the variance tends to zero, with exponential convergence
(A small aggregation sketch follows below.)
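A hedged Python sketch of the averaging protocol above: in each step a random pair of nodes replaces both of their estimates with the mean (s_p + s_q)/2, so the global average is preserved while the variance shrinks. Setting one node to 1 and the rest to 0 gives the counting variant described on the next slide, with the network size estimated locally as 1/estimate. The number of steps and node count are illustrative.

import random
import statistics

def gossip_average(values, steps=20_000, seed=0):
    random.seed(seed)
    values = list(values)
    n = len(values)
    for _ in range(steps):
        p, q = random.randrange(n), random.randrange(n)
        mean = (values[p] + values[q]) / 2    # update(s_p, s_q) = (s_p + s_q) / 2
        values[p] = values[q] = mean          # global average unchanged, variance shrinks
    return values

# Counting: one initiator holds 1, everyone else 0; the true average is 1/n.
n = 1000
estimates = gossip_average([1.0] + [0.0] * (n - 1))
local_sizes = [1 / v for v in estimates if v > 0]
print(round(statistics.pvariance(estimates), 12))   # close to zero after convergence
print(min(local_sizes), max(local_sizes))           # local size estimates cluster around n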
Counting with gossip
• Initialize all nodes with value 0, except the initiator (value 1)
• The global average is 1/n, so the size of the network can easily be deduced
• Robust implementation: multiple nodes start with their identifier, each concurrent instance is led by a node, and the messages and data of an instance are tagged with a unique id

Ordered slicing
• Create and maintain a partitioning of the network; each node belongs to one slice
• Example: the 20% of nodes with the largest bandwidth
• Network of size n; each node i has an attribute x_i, and we assume the values (x_1, ..., x_n) can be ordered
• Problem: automatically assign a slice (e.g., top 20%) to each node

Where is gossip used in practice?
• Clearinghouse and Bayou projects: email and database transactions [PODC '87]
• refdbms system [USENIX '94]
• Bimodal multicast [ACM TOCS '99]
• Sensor networks [Li Li et al., INFOCOM '02, and PBBF, ICDCS '05]
• AWS EC2 and S3 cloud (rumored) ['00s]
• The Cassandra key-value store (and others) uses gossip for maintaining membership lists
• Bitcoin and other cryptocurrencies use gossip for all communications, pre- and post-mining ('10s)
• Federated and decentralized learning, for model averaging ('20s)

References
• "The peer sampling service: experimental evaluation of unstructured gossip-based implementations", M. Jelasity, R. Guerraoui, A.-M. Kermarrec, and M. van Steen. Middleware 2004, ACM TOCS 2007.
• "Newscast computing", M. Jelasity, W. Kowalczyk, M. van Steen. Internal report IR-CS-006, Vrije Universiteit, Department of Computer Science, November 2003.
• "Lightweight probabilistic broadcast", P. Eugster, S. Handurukande, R. Guerraoui, A.-M. Kermarrec, and P. Kouznetsov. ACM Transactions on Computer Systems, 21(4), November 2003.
• "Peer-to-peer membership management for gossip-based protocols", A. J. Ganesh, A.-M. Kermarrec, and L. Massoulié. IEEE Transactions on Computers, 52(2), February 2003.
• "Gossip-based aggregation in large dynamic networks", M. Jelasity, A. Montresor, O. Babaoglu. ACM TOCS 23(3), 2005.
• "Differentiated consistency for worldwide gossips", D. Frey, A. Mostefaoui, M. Perrin, P.-L. Roman, F. Taiani. IEEE TPDS 34(1), 2023.

Scheduling
Anne-Marie Kermarrec, CS-460
##rmarrec cs - 460 1 where are we? cs - 460 2 consistency protocols cap theorem week 9 gossip protocols week 7 distributed / decentralized systems week 8 - 12 data science software stack data processing ressource management & optimization data storage distributed file systems ( gfs ) nosql db dynamo big table cassandra week 9 distributed messaging systems kafka – week 11 structured data spark sql graph data pregel, graphlab, x - streem, chaos machine learning week 12 batch data map reduce, dryad, spark streaming data storm, naiad, flink, spark streaming google data flow scheduling - week 10 query optimization storage hierarchies & layouts transaction management query execution scheduling β€’ multiple β€œ tasks ” to schedule β€’ the processes on a single - core os β€’ the tasks of a hadoop job β€’ the tasks of multiple hadoop jobs β€’ the tasks of multiple frameworks β€’ limited resources that these tasks require β€’ processor ( s ) β€’ memory β€’ ( less contentious ) disk, network β€’ scheduling goals 1. good throughput or response time for tasks ( or jobs ) 2. high utilization of resources 3. share resources cs - 460 3 single processor scheduling cs - 460 4 task 1 10 task 2 5 task 3 3 arrival times β†’ 0 6 8 processor task length arrival 1 10 0 2 5 6 3 3 8 which tasks run when? cs - 460 5 task 1 task 2 task 3 time β†’ 0 6 8 10 15 18 processor task length arrival 1 10 0 2
5 6 3 3 8 β€’ maintain tasks in a queue in order of arrival β€’ when processor free, dequeue head and schedule it fifo scheduling ( first in first out ) fifo / fcfs performance β€’ average completion time may be high β€’ for our example on previous slides, β€’ average completion time of fifo / fcfs = ( task 1 + task 2 + task 3 ) / 3 = ( 10 + 15 + 18 ) / 3 = 43 / 3 = 14. 33 cs - 460 6 stf scheduling ( shortest task first ) task 1 task 2 task 3 time β†’ 0 3 8 18 processor task length arrival 1 10 0 2 5 0 3 3 0 β€’ maintain all tasks in a queue, in increasing order of running time β€’ when processor free, dequeue head and schedule cs - 460 7 stf is optimal β€’ average completion of stf is the shortest among all scheduling approaches β€’ average completion time of stf = ( task 1 + task 2 + task 3 ) / 3 = ( 18 + 8 + 3 ) / 3 = 29 / 3 = 9. 66 ( versus 14. 33 for fifo / fcfs ) β€’ in general, stf is a special case of priority scheduling β€’ instead of using time as priority, scheduler could use user - provided priority cs - 460 8 round - robin scheduling time β†’ 0 6 8 processor task length arrival 1 10 0 2 5 6 3 3 8 β€’ use a quantum ( say 1 time unit ) to run portion of
task at queue head β€’ pre - empts processes by saving their state, and resuming later β€’ after pre - empting, add to end of queue task 1 15 ( task 3 done ) … cs - 460 9 round - robin vs. stf / fifo β€’ round - robin preferable for β€’ interactive applications β€’ user needs quick responses from system β€’ fifo / stf preferable for batch applications β€’ user submits jobs, goes away, comes back to get result cs - 460 10 summary β€’ single processor scheduling algorithms β€’ fifo / fcfs β€’ shortest task first ( optimal ) β€’ priority β€’ round - robin β€’ what about cloud scheduling? cs - 460 11 goals of cloud computing scheduling β€’ running multiple frameworks on a single cluster. β€’ maximize utilization and share data between frameworks. β€’ two main resource management systems : β€’ yarn : cluster management system designed for hadoop workloads β€’ mesos : manage a variety of different workloads, including hadoop, spark, and containerized applications cs - 460 12 schedule frameworks : global scheduler β€’ job requirements β€’ response time β€’ throughput β€’ availability β€’ job execution plan β€’ task dag β€’ inputs / outputs β€’ estimates β€’ task duration β€’ input sizes β€’ transfer sizes cs - 460 13 global scheduler advantages β€’ can achieve optimal schedule disadvantages β€’ complexity : hard to scale and ensure resilience β€’ hard to anticipate future frameworks requirements. β€’ need to refactor existing frameworks. cs - 460 14
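Before moving on to cluster scheduling, a small Python sketch that reproduces the average completion times quoted above for fifo / fcfs and stf (task lengths and arrival times are taken from the example slides; function names are mine):

tasks = [(10, 0), (5, 6), (3, 8)]          # (length, arrival time)

def fifo_avg_completion(tasks):
    # run tasks in arrival order; completion time = absolute finish time
    clock, completions = 0, []
    for length, arrival in tasks:
        clock = max(clock, arrival) + length
        completions.append(clock)
    return sum(completions) / len(completions)

def stf_avg_completion(lengths):
    # shortest task first, all tasks assumed available at time 0 (as on the stf slide)
    clock, completions = 0, []
    for length in sorted(lengths):
        clock += length
        completions.append(clock)
    return sum(completions) / len(completions)

print(fifo_avg_completion(tasks))                 # 14.33  (completions 10, 15, 18)
print(stf_avg_completion([l for l, _ in tasks]))  # 9.66   (completions 3, 8, 18)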
mesos β€œ a platform for fine - ‐ grained resource sharing in the data center β€œ benjamin hindman, andy konwinski, matei zaharia, ali ghodsi, anthony joseph, randy katz, scott shenker, ion stoica university of california, berkeley usenix 2011 cs - 460 15 mesos cs - 460 16 coexistence of multiple applications β€’ ex : fb - > business intelligence, spam detection, ad optimization β€’ production job, machine learning ranging from multi - hour computation to 1 mn ad - hoc query platform for sharing resources of commodity clusters between multiple diverse frameworks mesos model β€’ a framework ( e. g., hadoop, spark ) manages and runs one or more jobs. β€’ a job consists of one or more tasks. β€’ a task ( e. g., map, reduce ) consists of one or more processes running on same machine. β€’ short duration of tasks : exploit data locality cs - 460 17 cdf of job and task durations in facebook ’ s hadoop data warehouse challenges β€’ various scheduling needs of frameworks β€’ programming model, scheduling needs, task dependencies, data placement, etc. β€’ fault - tolerant & high availability β€’ avoids the complexity of a central scheduler cs - 460 18 cs - 460 19 β€œ a platform for fine - ‐ grained resource sharing in the data center β€œ benjamin hindman, andy konwinski, matei zaharia, ali ghodsi
, anthony joseph, randy katz, scott shenker, ion stoica. usenix 2011 ressource offers β€’ delegates control over scheduling to the frameworks β€’ offer available resources to frameworks, let them pick which resources to use and which tasks to launch β€’ keeps mesos simple, lets it support future frameworks β€’ high utilization of resources β€’ support diverse frameworks ( current & future ) β€’ scalability to 10, 000 ’ s of nodes β€’ reliability in face of failures resulting design : small microkernel - like core that pushes scheduling logic to frameworks cs - 460 20 distributed scheduler cs - 460 21 distributed scheduler β€’ master sends resource offers to frameworks β€’ frameworks select which offers to accept and which tasks to run β€’ unit of allocation : resource offer β€’ vector of available resources on a node β€’ for example, node1 : ( 1cpu ; 1gb ), node2 : ( 4cpu ; 16gb ) cs - 460 22 distributed scheduler advantages β€’ simple : easier to scale and make resilient β€’ easy to port existing frameworks, support new ones disadvantages β€’ may not always lead to optimal β€’ in practice meet goals such as data locality almost perfectly cs - 460 23 mesos architecture cs - 460 24 slaves continuously send status updates about resources to the master framework scheduler selects resources and provides tasks. pluggable scheduler picks framework to send an offer to. framework executors launch tasks. mesos vs static partitioning β€’
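To make the offer mechanism described above concrete, a minimal sketch of the two-level loop; the data structures and names below are illustrative, not the actual Mesos API:

# a resource offer is a vector of free resources on one node
offers = {"node1": {"cpu": 1, "gb": 1}, "node2": {"cpu": 4, "gb": 16}}

def framework_scheduler(offers, task_demand):
    # the framework, not the master, decides which offers to accept
    accepted = []
    for node, free in offers.items():
        if all(free[r] >= need for r, need in task_demand.items()):
            accepted.append(node)          # launch one task on this node
    return accepted

# the master sends offers to one framework at a time (picked by a pluggable policy);
# the framework answers with the tasks it wants to launch on the offered nodes
print(framework_scheduler(offers, {"cpu": 2, "gb": 4}))   # ['node2']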
compared performance with a statically partitioned cluster where each framework gets 25 % of the nodes. speedup on mesos per framework : facebook hadoop mix 1.14×, large hadoop mix 2.10×, spark 1.26×, torque / mpi 0.96× ( from arka bhattacharya ) cs - 460 25 • ran 16 instances of hadoop on a shared hdfs cluster • used delay scheduling in hadoop to get locality ( wait a short time to acquire data - local nodes ) data locality with resource offers : 1.7× speedup ( from arka bhattacharya ) cs - 460 26 scalability • mesos only performs inter - framework scheduling ( e. g. fair sharing ), which is easier than intra - framework scheduling [ plot : task start overhead ( s ) vs number of slaves ] result : scaled to 50, 000 emulated slaves, 200 frameworks, 100k tasks ( from arka bhattacharya ) cs - 460 27 who is using mesos • apple uses it to power the back end of siri • netflix uses it for batch and stream processing, anomaly detection, machine learning • twitter uses it for analytics and ads cs - 460 28 resource allocation in mesos how to allocate resources of different types? cs - 460 29 single resource : fair sharing n users want to share a resource, e. g., cpu. • solution : allocate each 1
/ n of the shared resource. generalized by max - min fairness. β€’ handles if a user wants less than its fair share. β€’ e. g., user a wants no more than 20 %. generalized by weighted max - min fairness β€’ give weights to users according to importance. β€’ e. g., user a gets weight 1, user b weight 2. cs - 460 30 max - min fairness : example β€’ 1 resource : cpu β€’ total resources : 20 cpu β€’ user a has x tasks and wants ( 1cpu ) per task β€’ user b has y tasks and wants ( 2cpu ) per task max ( x ; y ) ( maximize allocation ) subject to x + 2y = 20 ( cpu constraint ) x = 2y so x = 10, y = 5 cs - 460 31 properties of max - min fairness share guarantee β€’ each user can get at least 1 / n of the resource. β€’ but will get less if her demand is less. strategy proof β€’ users are not better off by asking for more than they need. β€’ users have no reason to lie. max - min fairness is the only reasonable mechanism with these two properties. widely used : os, networking, datacenters, can be used in mesos cs - 460 32 when is max - min fairness not enough? need to schedule multiple, heterogeneous resources, e. g., cpu, memory, etc. cs - 460 33 problem β€’ single resource example β€’ 1 resource : cpu
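As a quick aside, a sketch of max-min fair allocation via progressive filling, checked against the single-resource example above; the helper and its names are mine, not part of the lecture material:

def max_min_fair(capacity, demands):
    # progressive filling: serve users in increasing order of demand,
    # each gets min(its demand, an equal share of what is still left)
    alloc = {}
    for i, user in enumerate(sorted(demands, key=demands.get)):
        share = capacity / (len(demands) - i)
        alloc[user] = min(demands[user], share)
        capacity -= alloc[user]
    return alloc

# share guarantee: a user asking for less than 1/n keeps only what it needs
print(max_min_fair(20, {"a": 4, "b": 100}))        # {'a': 4, 'b': 16.0}
# the slide example: both users can use any amount of cpu -> 10 cpus each,
# i.e. x = 10 tasks for user a (1 cpu/task) and y = 5 tasks for user b (2 cpu/task)
print(max_min_fair(20, {"a": float("inf"), "b": float("inf")}))   # {'a': 10.0, 'b': 10.0}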
β€’ user a wants 1cpu per task β€’ user b wants 2cpu per task β€’ multi - resource example β€’ 2 resources : cpus and mem β€’ user a wants 1cpu ; 2gb per task β€’ user b wants 2cpu ; 4gb per task cs - 460 34 a natural policy ( 1 / 2 ) fairness : give weights to resources ( e. g., 1 cpu = 1 gb ) and equalize total value given to each user. β€’ total resources : 28 cpu and 56 gb ram ( e. g., 1 cpu = 2 gb = 1 $ ) β€’ user a has x tasks and wants 1cpu ; 2gb per task β€’ user b has y tasks and wants 1cpu ; 4gb per task β€’ asset fairness yields max ( x ; y ) x + y < = 28 ( cpu constraints ) 2x + 4y < = 56 ( memory constraint ) 2x = 3y ( every user spends the same 1 cpu = 2 gb ) cs - 460 35 user a : x = 12 : ( 43 % cpu ; 43 % gb ( 86 % ) ) user b : y = 8 : ( 28 % cpu ; 57 % gb ( 85 % ) ) a natural policy ( 2 / 2 ) β€’ problem : violates share guarantee. β€’ user a : x = 12 : ( 43 % cpu ; 43 % gb ( 86 % ) ) β€’ user b : y = 8 : ( 28 % cpu ; 57 % gb (
85 % ) ) β€’ user a gets less than 50 % of both cpu and ram. β€’ better off in a separate cluster with half the resources cs - 460 36 challenge : can we find a fair sharing policy that provides share guarantee & strategy - proofness can we generalize max - min fairness to multiple resources? dominant - resource fair scheduling cs - 460 37 β€’ proposed by researchers from u. california berkeley β€’ proposes notion of fairness across jobs with multi - resource requirements β€’ they showed that drf is β€’ fair for multi - tenant systems β€’ strategy - proof : tenant cannot benefit by lying β€’ envy - free : tenant cannot envy another tenant ’ s allocations dominant resource fairness ( drf ) cs - 460 38 β€’ drf is β€’ usable in scheduling vms in a cluster β€’ usable in scheduling hadoop in a cluster β€’ drf used in mesos β€’ drf - like strategies also used some cloud computing company ’ s distributed os ’ s where is drf useful? cs - 460 39 dominant resource fairness ( drf ) ( 1 / 2 ) β€’ dominant resource of a user : the resource that user has the biggest share of. β€’ total resources : 8cpu ; 5gb β€’ user a allocation : 2cpu ; 1gb β€’ 2 / 8 = 25 % cpu and 1 / 5 = 20 % ram β€’ dominant resource of user a is cpu ( 25 % > 20 % ) β€’ dominant share of a user : the fraction of the dominant resource she is allocated. β€’ user a
dominant share is 25 %. cs - 460 40 dominant resource fairness ( drf ) ( 2 / 2 ) β€’ apply max - min fairness to dominant shares : give every user an equal share of her dominant resource. β€’ equalize the dominant share of the users. β€’ total resources : ( 9cpu ; 18gb ) β€’ user a wants ( 1cpu ; 4gb ) for each task ; dominant resource : ram ( 1 / 9 < 4 / 18 ) 22 % ram β€’ user b wants ( 3cpu ; 1gb ) for each task ; dominant resource : cpu ( 3 / 9 > 1 / 18 ) 33 % cpu β€’ x is the number of tasks allocated to user a, y to user b cs - 460 41 max ( x ; y ) subject to x + 3y < = 9 ( cpu constraints ) 4x + y < = 18 ( memory constraints ) 4x / 18 = 3y / 9 ( equalize dominant shares ) user a : x = 3 : ( 33 % cpu ; 66 % gb ) user b : y = 2 : ( 66 % cpu ; 16 % gb ) user a user b algorithm cs - 460 42 cs - 460 43 step 0 : no tasks assigned. β€’ dominant shares : a = 0 %, b = 0 % step 1 : assign 1 task to user a ( lowest dominant share ) β€’ a : 1 cpu, 4 gb β†’ dominant share = 4 / 18 = 22. 2 % β€’ b : 0 β†’ 0 %
β€’ next : assign to user b step 2 : assign 1 task to user b β€’ a : 1 cpu, 4 gb β†’ 22. 2 % β€’ b : 3 cpu, 1 gb β†’ 3 / 9 = 33. 3 % β€’ next : a ( smaller dominant share ) step 3 : a gets 2nd task β€’ a : 2 cpu, 8 gb β†’ 8 / 18 = 44. 4 % β€’ b : 3 cpu, 1 gb β†’ 33. 3 % β€’ next : b user a wants ( 1cpu ; 4gb ) user b wants ( 3cpu ; 1gb ) total resources : ( 9cpu ; 18gb ) step 4 : b gets 2nd task β€’ a : 2 cpu, 8 gb β†’ 44. 4 % β€’ b : 6 cpu, 2 gb β†’ 6 / 9 = 66. 6 % β€’ next : a step 5 : a gets 3rd task β€’ a : 3 cpu, 12 gb β†’ 12 / 18 = 66. 6 % β€’ b : 6 cpu, 2 gb β†’ 66. 6 % β€’ equal! can ’ t go further without exceeding total resources. β€’ at the end of the schedule β€’ user a gets ( 3cpu, 12gb ) β€’ user b gets ( 6cpu, 2gb ) β€’ corresponds to the solution β€’ user a : x = 3 : ( 33 % cpu ; 66 % gb ) β€’ user b : y = 2 : ( 66 % cpu ; 16 % gb ) example cs - 460 44 β€’ for a given job
, the % of its dominant resource type that it gets cluster - wide, is the same for all jobs β€’ job 1 ’ s % of ram = job 2 ’ s % of cpu β€’ can be written as linear equations, and solved drf fairness cs - 460 45 β€’ drf generalizes to multiple jobs β€’ drf also generalizes to more than 2 resource types β€’ cpu, ram, network, disk, etc. β€’ drf ensures that each job gets a fair share of that type of resource which the job desires the most β€’ hence fairness other drf details cs - 460 46 β€’ scheduling very important problem in cloud computing β€’ limited resources, lots of jobs requiring access to these resources β€’ single - processor scheduling β€’ fifo / fcfs, stf, priority, round - robin β€’ centralized scheduler ( hadoop ) β€’ two - level scheduler ( mesos, yarn ) β€’ distributed scheduler ( sparrow ) β€’ hybrid scheduling ( omega, hawk ) summary : scheduling cs - 460 47 references β€’ b. hindman et al., β€œ mesos : a platform for fine - grained resource sharing in the data center ", usenix 2011 β€’ a. ghodsi, m. zaharia, b. hindman, a. konwinski, s. shenker, i. stoica. β€œ dominant resource fairness : fair allocation of multiple resource types ”. nsdi 2011 β€’ v. vavilapalli et al., β€œ apache had
##oop yarn : yet another resource negotiator ", acm cloud computing 2013 β€’ p delgado, f dinu, am kermarrec, w zwaenepoel, β€œ hawk : hybrid datacenter scheduling ”, usenix atc, 2015 cs - 460 48 hawk : hybrid datacenter scheduling usenix, atc 2015 cs - 460 49 centralized schedulers cs - 460 50 centralized schedulers cs - 460 51 good placement high scheduling latency distributed scheduling cs - 460 52 distributed scheduling cs - 460 53 good scheduling latency sub - optimal placement hybrid scheduling cs - 460 54 hawk : hybrid scheduling β€’ long jobs - > centralized β€’ short jobs - > distributed cs - 460 55 hawk : hybrid scheduling cs - 460 56 hawk : rationale cs - 460 57 cs - 460 58 cs - 460 59 long jobs : minority but take most of the resources cs - 460 60 cs - 460 61 long jobs : good placement short jobs : good scheduling latency hawk β€’ sparrow : random placement [ sparrow : distributed, low latency scheduling. kay ousterhout, patrick wendell, matei zaharia, ion stoica, university of california, berkeley, sosp 2013 ] β€’ randomized work stealing β€’ cluster partitioning cs - 460 62 sparrow cs - 460 63 sparrow cs - 460 64 high load + heterogeneity - > head - of - line blocking hawk : work stealing cs - 460 65 hawk : work stealing cs - 460 66 hawk :
work stealing cs - 460 67 under high load - > high probablity of contacting high - loaded nodes steal from them hawk : cluster partitioning cs - 460 68 hawk : cluster partitioning cs - 460 69 references β€’ b. hindman et al., β€œ mesos : a platform for fine - grained resource sharing in the data center ", usenix 2011 β€’ a. ghodsi, m. zaharia, b. hindman, a. konwinski, s. shenker, i. stoica. β€œ dominant resource fairness : fair allocation of multiple resource types ”. nsdi 2011 β€’ v. vavilapalli et al., β€œ apache hadoop yarn : yet another resource negotiator ", acm cloud computing 2013 β€’ p delgado, f dinu, am kermarrec, w zwaenepoel, β€œ hawk : hybrid datacenter scheduling ”, usenix atc, 2015 thanks to indranil gupta and to amir h. payberah cs - 460 70 stream processing anne - marie kermarrec cs - 460 1 where are we? cs - 460 2 consistency protocols cap theorem week 9 gossip protocols week 7 distributed / decentralized systems week 8 - 12 data science software stack data processing ressource management & optimization data storage distributed file systems ( gfs ) nosql db dynamo big table cassandra week 9 distributed messaging systems kafka – week 11 structured
data spark sql graph data pregel, graphlab, x - streem, chaos machine learning week 12 batch data map reduce, dryad, spark streaming data storm, naiad, flink, spark streaming google data flow scheduling ( mesos ) - week 10 query optimization storage hierarchies & layouts transaction management query execution stream processing β€’ stream processing continuously incorporates new data to compute a result. β€’ the input data is unbounded. β€’ a series of events, no predetermined beginning or end. β€’ user applications can compute various queries over this stream of events. cs - 460 3 use cases β€’ fraud detection systems β€’ trading systems ( examine price changes and execute trades ) β€’ military and intelligence systems β€’ advertizement systems and recommenders β€’ data analytics β€’ recommenders β€’ … β€’ real - time analytics β€’ rate of certain events e. g. tracking a running count of each type of event, or aggregating them into hourly windows β€’ compute rolling average of a value β€’ compare current statistics cs - 460 4 stream processing versus dbms β€’ database management systems ( dbms ) : data - at - rest analytics β€’ store and index data before processing it. β€’ process data only when explicitly asked by the users β€’ stream processing systems ( sps ) : data - in - motion analytics β€’ processing information as it flows, without ( or with ) storing them persistently. cs - 460 5 stream processing versus batch processing β€’ advantages of stream processing β€’ near real - time
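As an illustration of the windowed aggregations mentioned above, a toy tumbling-window counter over an unbounded event stream; the event format and helper names are assumptions for the example, not tied to any particular system:

from collections import Counter, defaultdict

def hourly_counts(events):
    # events: iterable of (timestamp_seconds, event_type), arriving in order
    # yields (window_start, counts) every time an hourly window closes
    windows = defaultdict(Counter)
    current = None
    for ts, kind in events:
        window = ts - ts % 3600            # start of the hour this event falls in
        if current is not None and window != current:
            yield current, windows.pop(current)
        current = window
        windows[window][kind] += 1
    if current is not None:
        yield current, windows.pop(current)

stream = [(10, "view"), (70, "click"), (3605, "view"), (3700, "view")]
for start, counts in hourly_counts(stream):
    print(start, dict(counts))             # 0 {'view': 1, 'click': 1} then 3600 {'view': 2}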
results β€’ do not need to accumulate data for processing β€’ streaming operators typically require less memory β€’ disadvantages of stream processing β€’ some operators are harder to implement with streaming β€’ stream algorithms are often approximations cs - 460 6 example β€’ recommender system β€’ every time someone loads a page ; a viewed page event generates several events. β€’ that may lead to any of the following : β€’ store the message in cassandra / mongodb for future analysis β€’ count page views and update a dashboard β€’ trigger an alert if a page view fails β€’ send an email notification to another user β€’ compute analytics β€’ compute recommendations cs - 460 7 messaging system β€’ disseminate streams of events from various producers to various consumers. β€’ messaging system is an approach to notify consumers about new events. β€’ messaging systems β€’ direct messaging β€’ pub / sub systems cs - 460 8 gossip protocols direct messaging β€’ necessary in latency critical applications ( e. g., remote surgery ). β€’ a producer sends a message containing the event, which is pushed to consumers. β€’ both consumers and producers have to be online at the same time. cs - 460 9 direct messaging β€’ issues when consumer crashes or temporarily goes offline. β€’ producers may send messages faster than the consumers can process. β€’ dropping messages β€’ backpressure β€’ message brokers can log events to process at a later time. cs - 460 10 publish - subscribe systems β€’ asynchronous ( loosely coupled ) event notification system β€’ a set of subscribers / consumers register their interest
( subscriptions ) β€’ a set of publishers / producers issue some events ( events ) β€’ publish - subscribe system 1. manages users subscriptions 2. matches published events against subscriptions 3. disseminate events to matching subscribers ( and no others ) β€’ flexible and seamless messaging substrate for applications subscribers publishers pub - sub system 11 cs - 460 pub - sub system storage and management of subscriptions ( subscribe / uns ubscribe ) event dissemination example cs - 460 pub - sub system publisher storage and management of subscriptions ( subscribe / uns ubscribe ) event dissemination analytics service profile management service recommender update profile in cassandra compute recommendation update dashboard prominent way of disseminating information β€’ social networks β€’ rss feeds β€’ recommendation systems 12 decoupling in time, space and synchronization cs - 460 13 publisher pub - sub system publisher publisher publisher storage and management of subscriptions ( subscribe / uns ubscribe ) event dissemination subscriber subscriber subscriber subscriber the interacting parties do not need to know each other. decoupling in time, space and synchronization cs - 460 14 publisher pub - sub system publisher publisher publisher storage and management of subscriptions ( subscribe / uns ubscribe ) event dissemination subscriber subscriber subscriber subscriber decoupling in time, space and synchronization cs - 460 15 pub - sub system publisher storage and management of
subscriptions ( subscribe / uns ubscribe ) event dissemination subscriber publish ( ) notify ( ) synchronization decoupling : producers are not blocked upon publication, subscribers are asynchronously notified pub - sub systems : expressiveness β€’ differences in subscription expressiveness β€’ topic - based ~ application - level multicast topic = houses _ sales β€’ content - based β€’ attribute - based s1 = ( city = rennes ) ( capacity = 2 _ bedrooms ) β€’ range queries s1 = ( city = rennes | | saint malo ) & & ( capacity = 3 _ bedrooms | | price < 500, 000 eur ) 16 cs - 460 pub / sub architecture β€’ centralized broker model β€’ consists of multiple publishers and multiple subscribers and centralized broker β€’ subscribers / publishers will contact 1 broker, and do not need to have knowledge about others. β€’ e. g. corba event services, jms, jedi etc … cs - 460 17 distributed / decentralized architectures β€’ distributed model β€’ a set of nodes act as brokers ( siena, kafka ) β€’ decentralized model β€’ each node can be publisher, subscriber or broker. β€’ dht are employed to locate nodes in the network. β€’ e. g. java distributed event service / tera β€’ topic - based pub - sub : equivalent / related to application - level multicast ( alm ) cs - 460 18 19 alm on structured overlay networks β€’
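Returning to the subscription matching described earlier, a toy content-based matcher; the broker structure and attribute names are illustrative, loosely following the housing example above:

subscriptions = {
    "s1": lambda e: e["city"] == "rennes" and e["capacity"] == 2,
    "s2": lambda e: e["city"] in ("rennes", "saint malo")
                    and (e["capacity"] == 3 or e["price"] < 500_000),
}

def publish(event):
    # match the event against every stored subscription and
    # notify only the subscribers whose predicate holds
    return [sub for sub, pred in subscriptions.items() if pred(event)]

event = {"city": "rennes", "capacity": 2, "price": 450_000}
print(publish(event))   # ['s1', 's2']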
overlay network used for group naming and group localization β€’ flooding - based multicast [ can multicast ] : β€’ creation of a specific network for each group β€’ message flooded along the overlay links β€’ tree - based multicast [ bayeux, scribe ] β€’ creation of a tree per group β€’ flooding along the tree branches cs - 460 20 scribe tcp / ip internet scribe multicast protocol membership management pastry p2p infrastructure cs - 460 21 scribe : design β€’ goals β€’ group creation β€’ membership maintenance β€’ messages dissemination within a group β€’ construction of a multicast tree on top of a pastry - like infrastructure β€’ creation of a tree per group β€’ the tree root is the peer hosting the key associated to that group β€’ the tree is formed as the union of routes from every member to the root β€’ reverse path forwarding β€’ messages flooded along the tree branches cs - 460 22 scribe : group ( topic ) creation β€’ each group is assigned an identifier groupid = hash ( name ) β€’ multicast tree root : node which nodeid is the numerically closest to the groupid β€’ create ( group ) : p2p routing using the groupeid as the key # g create ( # g ) root cs - 460 23 one tree per topic β€’ join ( group ) : message sent through pastry using groupeid as the key β€’ multicast tree : union of pastry routes from the root to each group β€’ low latency : leverages pastry proximity routing β€’ low network link stress : most
packets are replicated low in the tree cs - 460 24 scribe : join ( group ) 1100 1101 1001 0100 0111 1011 1111 1100 0111 0100 1000 1111 1000 1101 1001 1011 cs - 460 25 scribe : message dissemination multicast ( group, m ) β€’ routing through pastry to the root key = groupeid β€’ flooding along the tree branches from the root to the leaves 1100 1101 1001 0100 0111 1011 e cs - 460 26 reliability β€’ Β« best effort Β» reliability guarantee β€’ tree maintenance when failures are detected β€’ stronger guarantee may also be implemented β€’ node failure β€’ parents periodically send heartbeat messages to their descendants in the tree β€’ when such messages are missed, nodes join the group again β€’ local reconfiguration β€’ pastry routes around failures cs - 460 27 tree maintenance 1100 1101 1011 0100 0111 1011 1000 1001 1111 root cs - 460 28 tree maintenance 1100 1101 0100 0111 1011 1000 1001 1111 faulty root new root cs - 460 29 load balancing β€’ specific algorithm to limit the load on each node β€’ size of forwarding tables β€’ specific algorithm to remove the forwarders - only peers from the tree β€’ small - size groups cs - 460 30 scribe performance β€’ discrete event simulator β€’ evaluation metrics β€’ relative delay penalty β€’ rmd : max delayapp - mcast / max delayip - mcast β€’ rad : avg delayapp - mcast / avg delay
ip - mcast • stress on each network link • load on each node • number of entries in the routing table • number of entries in the forwarding tables • experimental set - up • georgia tech transit - stub model ( 5050 core routers ) • 100 000 nodes chosen at random among 500 000 • zipf distribution for 1500 groups • bandwidth not modeled cs - 460 31 group distribution [ plot : group size vs group rank ; markers for instant messaging, windows update, stock alert ] cs - 460 32 delay / ip [ plot : cdf of groups vs delay penalty, rmd and rad curves ] mean = 1.81, median = 1.65 cs - 460 33 load balancing [ plot : number of nodes vs number of forwarding tables ] cs - 460 34 load balancing [ plots : number of nodes vs total number of entries in forwarding tables ] cs - 460 35 network load [ plot : number of network links vs stress ; curves for scribe, ip multicast, maximum ] cs - 460 36 summary • generic p2p infrastructures
β€’ good support for large - scale distributed applications β€’ alm infrastructure β€’ scribe exhibits good performances / ip multicast β€’ large size groups β€’ large number of groups β€’ good load - balancing properties cs - 460 kafka kafka : a distributed messaging system for log processing developed by linkedin, now apache, written in scala and java cs - 460 37 kafka β€’ kafka is a distributed, topic - based, partitioned, replicated commit log service. β€’ fast β€’ scalable β€’ durable β€’ distributed β€’ a log - based message broker β€’ distributed stream processing software cs - 460 38 kafka adoption and use cases β€’ widely spread across industries such as healthcare, finance, retail, and manufacturing ( 000 companies in 2025 ). β€’ linkedin : activity streams, operational metrics, data bus β€’ 400 nodes, 18k topics, 220b msg / day ( peak 3. 2m msg / s ) β€’ tesla : stream processing to handle trillions of iot events daily, uses kafka to ingest, process, and analyze data from its vehicle fleet in real time β€’ netflix : real - time monitoring and event processing β€’ x : as part of their storm real - time data pipelines β€’ spotify : log delivery ( from 4h down to 10s ), hadoop β€’ airbnb, cisco, gnip, infochimps, ooyala, square, uber, jpmorgan, … β€’ mediegoj cs -
460 39 kafka in a nutshell β€’ producers write data to brokers. β€’ consumers read data from brokers ( pull model ) β€’ distributed, run in a cluster β€’ the data β€’ data is stored in topics. β€’ topics are split into partitions, which are replicated. 40 producer consumer producer broker broker broker broker consumer zk cs - 460 partitioned logs β€’ in typical message brokers, once a message is consumed, it is deleted. β€’ log - based message brokers durably store all events in a sequential log. β€’ a log is an append - only sequence of records on disk. β€’ a producer sends a message by appending it to the end of the log. β€’ a consumer receives messages by reading the log sequentially. cs - 460 41 partitioned logs β€’ to scale up the system, logs can be partitioned hosted on different machines. β€’ each partition can be read and written independently of others. β€’ a topic is a group of partitions that all carry messages of the same type. β€’ within each partition, the broker assigns a monotonically increasing sequence number ( offset ) to every message β€’ no ordering guarantee across partitions. cs - 460 42 topics β€’ topics are queues : a stream of messages of a particular type β€’ each message is assigned a sequential id called an offset ( no overhead related to maintaining explicit message id ) β€’ topics are logical collections of partitions ( the physical files ). β€’ ordered β€’ append only β€’ immut
##able cs - 460 43 kafka producer cs - 460 44 producer kafka broker consumer alice users topic publish β€œ alice ” to users topic topic, position append only ana jeanne consume β€œ alice ” to users topic kafka partitions cs - 460 45 producer kafka broker consumer alice users topic ana jeanne malo medhi partition 1 ( a - k ) partition 2 ( l - z ) 0 1 2 0 1 2 partitions β€’ partitions of a topic are replicated : fault - tolerance β€’ a broker contains some of the partitions for a topic : load - balancing β€’ one broker is the leader of a partition : all writes and reads must go to the leader. cs - 460 46 kafka architecture cs - 460 47 consumer groups β€’ a consumer group : one or more consumers that jointly consume a set of subscribed topics β€’ each message is delivered to only one of the consumers within the group. β€’ at any given time, all messages from one partition are consumed only by a single consumer within each consumer group β€’ avoids synchronization cs - 460 48 partitioned logs cs - 460 49 state and guarantees β€’ state β€’ brokers are stateless : no metadata for consumers - producers in brokers. β€’ consumers are responsible for keeping track of offsets. β€’ messages in queues expire based on pre - configured time periods ( e. g., once a day ). β€’ side benefit : a consumer can deliberately rewind back to an old offset and
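As a toy model of the log abstraction described here (a sketch, not Kafka's actual API): a partition is just an append-only list and an offset is an index into it; the key-range routing below is purely illustrative:

class Partition:
    def __init__(self):
        self.log = []                      # append-only sequence of records

    def append(self, message):
        self.log.append(message)
        return len(self.log) - 1           # offset assigned by the broker

    def read(self, offset, max_messages=10):
        return self.log[offset:offset + max_messages]

# a topic = a group of partitions; here messages are routed by key range
topic = {"a-k": Partition(), "l-z": Partition()}

def produce(key, value):
    part = "a-k" if key[0].lower() <= "k" else "l-z"
    return part, topic[part].append((key, value))

produce("alice", "signup"); produce("malo", "signup")
print(produce("ana", "login"))             # ('a-k', 1)

# a consumer tracks its own offset per partition and can rewind it at will
offset = 0
print(topic["a-k"].read(offset))           # [('alice', 'signup'), ('ana', 'login')]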
re - consume data. • delivery guarantees • kafka guarantees that messages from a single partition are delivered to a consumer in order. • there is no guarantee on the ordering of messages coming from different partitions. • kafka only guarantees at - least - once delivery ( the client needs to check for duplicates ) • kafka uses zookeeper ( up to 2025 ) for the following tasks : • detecting the addition and the removal of brokers and consumers. • keeping track of the consumed offset of each partition. cs - 460 50 scalability [ plot : throughput in mb / s for 1 - 4 brokers ( 10 topics, broker flush interval 100k ) : roughly 101, 190, 293, 381 mb / s ] cs - 460 51 kafka • simple and efficient ( high throughput ) • persistent storage • pull - based pub - sub system • widely used in industry cs - 460 52 references • the many faces of publish / subscribe. patrick th. eugster, pascal a. felber, rachid guerraoui, anne - marie kermarrec. acm computing surveys, june 2003. • xl peer - to - peer pub / sub systems. anne - marie kermarrec & peter triantafillou. acm computing surveys, nov. 2013. • kafka : a distributed messaging system for log processing. j. kreps et al. netdb, 2011. • spark : the definitive guide. m
. zaharia et al., o'reilly media, 2018 - chapter 20 fundamentals of stream processing : application design, systems and analytics. h. andrade et al., cambridge university press, 2014 - chapter 1 - 5, 7, 9 high - availability algorithms for distributed stream processing. j. hwang et al., icde 2005 cs - 460 53 references ( alm ) β€’ m. castro, p. druschel, a - m. kermarrec and a. rowstron, " scribe : a large - scale and decentralised application - level multicast infrastructure ", ieee journal on selected areas in communication ( jsac ), vol. 20, no, 8, october 2002. β€’ m. castro, p. druschel, a - m. kermarrec, a. nandi, a. rowstron and a. singh, " splitstream : high - bandwidth multicast in a cooperative environment ", sosp'03, lake bolton, new york, october, 2003. β€’ shelley q. zhuang, ben y. zhao, anthony d. joseph, randy katz john kubiatowicz Β« bayeux : an architecture for scalable and fault - tolerant wide - area data dissemination Β» eleventh international workshop on network and operating systems support for digital audio and video ( nossdav 2001 ) β€’ sylvia ratnasamy, mark handley, richard karp, scott shenker Β« application - level multicast using
content - addressable networks Β» ( 2001 ) lecture notes in computer science, ngc 2001 london. β€’ d. kostic, a. rodriguez, j. albrecht, and a. vahdat. Β« bullet : high bandwidth data dissemination using an overlay mesh Β». in 19th acm symposium on operating systems principles, october 2003. cs - 460 54 cs460 systems for data management and data science prof. anastasia ailamaki prof. anne - marie kermarrec introduction and storage management february 17, 2025 cs460 1 data : an extremely valuable resource cs460 2 database what is data? β€’ facts β€’ basis for reasoning / discussion / calculation β€’ useful or irrelevant or redundant β€’ must be processed to be meaningful data information β€’ has meaning β€’ relevant to the problem β€’ actionable – leads to a solution organized, processed knowledge wisdom organized, processed 3 β€’ a large, integrated, structured collection of data β€’ usually intended to model some real - world enterprise β€’ example : university – courses – students – professors – enrollment – teaching entities relationships 4 what is a database? relationships what is a database management system ( dbms )? β€’ a software system designed to store, manage, and facilitate access to databases β€’ dbms = interrelated data ( database ) + set of programs to access it ( software ) 5 concurrency control protects data from failures : h / w, s / w, power ; malicious users physical data independence, declarative high - level query languages provides efficient, reliable
, convenient, and safe multi - user storage of and access to massive amounts of persistent data. reliable convenient safe multi - user massive persistent efficient extremely large ( often exabytes every day ) data outlives the programs that operate on it thousands of queries / updates per second 24x7 availability 6 what does a dbms do? data - intensive applications & systems β€’ data - intensive vs compute - intensive β€’ volume, complexity, velocity, volatility, variety … β€’ hardware / software codesign – optimizing for memory hierarchy, and hardware accelerators β€’ scientific applications cs460 7 data science cs460 8 a data - driven approach to problem solving by analyzing and exploring large volumes of possibly multi - modal data. it involves collecting, preparing, managing, processing, analyzing, and explaining the data and analysis results. data science is interdisciplinary ( statistics, computer science, information science, mathematics, social science, visualization, etc. ). debunking some myths β€’ data science < > big data β€’ data science < > machine learning cs460 9 related but not the same! data science ’ s raison d ’ existence : its applications cs460 10 fraud and risk detection recommendation systems digital advertisement image and speech recognition gaming healthcare smart cities sustainability real time accuracy scalability latency security / privacy failure resilience finance augmented reality the many faces of data science cs460 11 data analytics data exploration ( mining ) models and algorithms ( ml ) visualizations data security
/ privacy data integrity differential privacy cryptography data ethics biases ( data and algorithms ) impact on society regulations data engineering big data management data preparation large - scale deployment cs460 cs460 landscape 12 consistency protocols cap theorem gossip protocols distributed / decentralized systems data science software stack data processing ressource management & optimization data storage distributed file systems ( gfs ) nosql db ( dynamo big table cassandra ) distributed messaging systems ( kafka ) structured data ( spark sql ) graph data ( pregel, graphlab, x - streem, chaos ) machine learning batch data ( map reduce, dryad, spark ) streaming data ( storm, naiad, flink, spark streaming google data flow ) scheduling ( mesos, yarn ) query optimization storage hierarchies & data layout transaction management query execution cs460 cs460 13 week date topic 1 17 / 02 introduction and storage hierarchy 2 24 / 02 query execution 3 03 / 03 query optimization 4 10 / 03 transactions 5 17 / 03 distributed transactions 6 24 / 03 distributed query execution 7 31 / 03 midterm exam ( not graded ) 8 7 / 04 gossip protocols 9 14 / 04 distributed hash tables + consistency models 10 28 / 04 key - value stores + cap theorem 11 5 / 05 scheduling 12 12 / 05 stream processing 13 12 / 05 distributed learning systems 14 19 / 05 invited industry lecture cs460 cs460 learning experience 14 lecture learn the internals of a ( distributed ) platform for data
science breadth coverage exercises put the course in practice programming skills exam preparation background for the project project acquaintance with a real platform going in depth intended as a practical work may not be related to every part of the course cs460 ta / ae team 15 hamish ( ta ) martijn ( head ta ) yi ( ta ) mathis ( ta ) diana ( ta ) milos ( ta ) mathis ( ta ) elif ( ae ) jakob ( ae ) alex ( ae ) rishi ( ta ) course logistics β€’ cs460 moodle : all the material, updated every week β€’ schedule β€’ lecture ( monday 2 : 15 - 4 pm ) – ce12 β€’ exercises ( monday 4 : 15 - 6 pm ) – ce12 ( week 1 : project overview ) β€’ individual project ( tuesday 11 - 1pm : indicative time slot to work on the project ) β€’ grading scheme β€’ project ( 40 % ) ( presentation at 16 : 15 today ) β€’ midterm exam ( not graded, highly recommended, covers weeks 1 - 5 ) β€’ final exam ( 60 % ) cs460 16 data moves to the cloud cs460 19 in 2021, cloud workloads represent 94 % of all it workload worldwide fb spent 16bn $ on datacenter in 2019 google spent 13bn $ datacenter in 2019 hyperscale datacenters ( fitting more it in less space, scale hugely and quickly to increasing demand ( elasticity, computing ability, memory, networking infrastructure, disa
##ggregated storage ) have been growing at a historic rate over the past 10 years 0 20 40 60 80 100 120 140 160 180 200 data size ( zb ) explore data efficiently data age 2025, data from idc global datasphere, nov 2018 33 % of data is inaccurate in some way processing technology grows much slower than data how much data are we talking about? scalability β€’ what if your system grows from 50, 000 concurrent users to 10m β€’ scalability : ability to cope with increasing load β€’ load : number of requests / second, ratio of reads / writes in a database, number of simultaneously active users … β€’ performance metrics β€’ latency / response time : duration for a request to be handled β€’ average versus percentiles β€’ the 95th : response time at which 95 % of requests are faster than that threshold β€’ tail latency : refers to high latencies that clients see fairly infrequently cs460 21 consistency protocols cap theorem gossip protocols distributed / decentralized systems data science software stack data processing ressource management & optimization data storage distributed file systems ( gfs ) nosql db dynamo big table cassandra distributed messging systems kafka structured data spark sql graph data pregel, graphlab, x - streem, chaos machine learning batch data map reduce, dryad, spark streaming data storm, naiad, flink, spark streaming google data flow scheduling ( mesos, yarn ) query optimization storage hierarchies
& data layout transaction management query execution 22 today ’ s topic ( simplified ) dbms architecture 23 recovery manager transaction manager files and access methods buffer management parser + optimizer + plan execution web forms application front ends sql interface sql commands storage management data today ’ s topic 24 buffer management storage management data storage management : outline – storage technologies – file storage – buffer management ( refresher ) – page layout β€’ nsm, aka row store β€’ dsm, aka column store β€’ pax, hybrid 25 storage hierarchy 26 network storage hdd ssd dram cpu caches cpu registers faster smaller slower larger storage layer access times 27 network storage hdd ssd dram cpu caches cpu registers faster smaller slower larger ~ 30, 000, 000 ns 100 ns l2 : 7 ns l1 : 0. 5 ns 150, 000 ns 10, 000, 000 ns layers often disaggregated β†’ access times vary! a surprisingly simple model for cache organization 28 multi - level cache main memory l3 cache ( unified ) l2 cache ( unified ) cpu core registers l1 cache average access time ( aat ) = hit time + ( ( miss rate ) x ( miss penalty ) ) hit timecache < hit timememory aatcache < < aatmemory non - volatile memory vs solid - state drive 29 dram nvm ssd β€’ goals : data persists after power - cycle + reduce random / sequential access gap β€’ no seek / rotational delays β€’
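Plugging illustrative numbers into the aat formula above (the hit time and miss penalty reuse the round access-time figures from the hierarchy slide; the 5 % miss rate is an assumption for the example):

hit_time = 7          # ns, e.g. an L2 hit
miss_rate = 0.05      # assumed: 5 % of accesses miss in the cache
miss_penalty = 100    # ns, e.g. a DRAM access

aat = hit_time + miss_rate * miss_penalty
print(aat)            # 12.0 ns on average, far below the 100 ns of always going to memory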
like dram, low - latency loads and stores β€’ like ssd, persistent writes and high density β€’ byte - addressable dram nvm ssd β€’ ssd technology uses non - volatile flash chips package multiple flash chips into a single closure β€’ ssd controller embedded processor that executes firmware - level software bridges flash chips to the ssd input / output interfaces β€’ block - addressable storage management : outline – storage technologies – file storage – buffer management ( refresher ) – page layout β€’ nsm, aka row store β€’ dsm, aka column store β€’ pax, hybrid 30 from tables / rows to files / pages 31 students sid name login age gpa 50000 dave dave @ cs 19 3. 3 53666 jones jones @ cs 18 3. 4 53688 smith smit @ ee 18 3. 2 … … … … … storage … files … … pages records / tuples file page fields goal : dbms must efficiently manage datasets larger than available memory file storage the dbms stores a database as one or more files on disk. the storage manager is responsible for maintaining a database ’ s files and organizes them as a collection of pages. – tracks data read / written to pages – tracks available space 32 alternative file organizations the storage manager is responsible for maintaining a database ’ s files and organizes them as a collection of pages. many alternatives exist, each good for some situations, and not so good in others. indicatively : β€’
heap files : best when typical access is a full file scan β€’ sorted files : best for retrieval in an order, or for retrieving a β€˜ range ’ β€’ log - structured files : best for very fast insertions / deletions / updates 33 heap ( unordered ) files β€’ simplest file structure – contains records in no particular order – need to be able to scan, search based on rid β€’ as file grows and shrinks, disk pages are allocated and de - allocated. – need to manage free space heap file implemented using lists β€’ < heap file name, header page id > stored somewhere β€’ each page contains 2 β€˜ pointers ’ plus data. β€’ manage free pages using free list – what if most pages have some space? header page data page data page data page data page data page data page pages with free space full pages … … heap file using a page directory β€’ the directory is a collection of pages – linked list implementation is just one alternative. β€’ the entry for a page can include the number of free bytes on the page. – much smaller than linked list of all hf pages! data page 1 header page directory data page 2 data page n log - structured files instead of storing tuples in pages, the dbms only appends log records. blocks are never modified. for workloads with many small files, a traditional file system needs many small synchronous random writes, whereas a log - structured file system does few large asynchronous
sequential transfers. 37 writing to log - structured files – inserts : store the entire tuple – deletes : mark tuple as deleted – updates : store delta of just the attributes that were modified 38 log file insert id = 1, val = a insert id = 2, val = b delete id = 4 insert id = 3, val = c update val = x ( id = 3 ) … reading from log - structured files – dbms scans log backwards, and β€œ recreates ” the tuple 39 log file insert id = 1, val = a insert id = 2, val = b delete id = 4 insert id = 3, val = c update val = x ( id = 3 ) … reading from log - structured files – dbms scans log backwards, and β€œ recreates ” the tuple – build indexes to allow jumps in the log – periodically compact the log 40 id = 1 id = 2 id = 3 id = 4 log file insert id = 1, val = a insert id = 2, val = b delete id = 4 insert id = 3, val = c update val = x ( id = 3 ) … net - net of log - structured files β€’ advantages – high performance for inserts, deletes and updates – ultra - fast recovery from failures – good for ssd as writes are naturally leveled β€’ disadvantages – unpredictable performance in sequential reads – need a lot of free space – affects garbage collection ( need for
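A toy log-structured key-value store illustrating the write path and the backward scan described above; this is a sketch only, with inserts, updates and deletes represented as log records (all names are mine):

log = []   # append-only list of records, oldest first

def insert(key, value):  log.append(("insert", key, value))
def update(key, delta):  log.append(("update", key, delta))   # delta: changed fields only
def delete(key):         log.append(("delete", key, None))

def read(key):
    # scan the log backwards and recreate the tuple from the most recent records
    deltas = []
    for op, k, payload in reversed(log):
        if k != key:
            continue
        if op == "delete":
            return None
        if op == "update":
            deltas.append(payload)
            continue
        tuple_ = dict(payload)             # the full tuple stored by the insert
        for d in reversed(deltas):         # replay the deltas, oldest to newest
            tuple_.update(d)
        return tuple_
    return None

insert("id1", {"val": "a", "qty": 1})
update("id1", {"val": "x"})
print(read("id1"))    # {'val': 'x', 'qty': 1}
delete("id1")
print(read("id1"))    # None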
compaction ) – data can be lost if written but not checkpointed β€’ dbms needs to address two issues – how to reconstruct tuples from logs efficiently – how to manage disk space with ever - growing logs 41 storage management : outline – storage technologies – file storage – buffer management ( refresher ) – page layout β€’ nsm, aka row store β€’ dsm, aka column store β€’ pax, hybrid 42 can ’ t we just use the os buffering? β€’ layers of abstraction are good … but : – unfortunately, os often gets in the way of dbms β€’ dbms needs to do things β€œ its own way ” – specialized prefetching – control over buffer replacement policy β€’ lru not always best ( sometimes worst!! ) – control over thread / process scheduling β€’ β€œ convoy problem ” – arises when os scheduling conflicts with dbms locking – control over flushing data to disk β€’ wal protocol requires flushing log entries to disk 43 buffer management in a dbms β€’ data must be in ram for dbms to operate on it! β€’ buffer manager hides the fact that not all data is in ram ( just like hardware cache policies hide the fact that not all data is in the caches ) 44 db main memory disk disk page free frame page requests from higher levels buffer pool choice of frame dictated by replacement policy when a page is requested... β€’ buffer pool information table contains : < frame #, pageid, pin _ count, dirty > β€’ if requested page
is not in pool : – choose a frame for replacement ( only un - pinned pages are candidates ) – if frame is β€œ dirty ”, write it to disk – read requested page into chosen frame β€’ pin the page and return its address. 45 * if requests can be predicted ( e. g., sequential scans ) pages can be pre - fetched several pages at a time! more on buffer management β€’ requester of page must unpin it, and indicate whether page has been modified : – dirty bit is used for this. β€’ page in pool may be requested many times, – a pin count is used. a page is a candidate for replacement iff pin count = 0 ( β€œ unpinned ” ) β€’ cc & recovery may entail additional i / o when a frame is chosen for replacement 46 buffer replacement policy β€’ frame is chosen for replacement by a replacement policy : – least - recently - used ( lru ), mru, clock, etc. β€’ policy can have big impact on # of i / o ’ s ; depends on the access pattern. 47 lru replacement policy β€’ least recently used ( lru ) – for each page in buffer pool, keep track of time last unpinned – replace the frame which has the oldest ( earliest ) time – very common policy : intuitive and simple β€’ problem : sequential flooding – lru + repeated sequential scans. – # buffer frames < # pages in file means each page request causes an i / o. mru much
better in this situation ( but not in all situations, of course ). 48 sequential flooding – illustration 49 [ figure : buffer pool contents under lru and mru during a repeated scan of a file with more pages than buffer frames ] " clock " replacement policy • an approximation of lru. • arrange frames into a cycle, store one " reference bit " per frame • when pin count goes to 0, the reference bit is set on. • when replacement is necessary : do { if ( pin count == 0 && ref bit is off ) choose current page for replacement ; else if ( pin count == 0 && ref bit is on ) turn off ref bit ; advance current frame ; } until a page is chosen for replacement ; 50 [ figure : frames arranged in a clock with their reference bits ] storage management : outline – storage technologies – file storage – buffer management ( refresher, slides on moodle ) – page layout • nsm, aka row store • dsm, aka column store • pax, hybrid 51 the n - ary storage model • page = collection of slots • each slot stores one record – record identifier : < page _ id, slot _ number > – option 2 : <
uniq > - > < page _ id, slot _ number > β€’ page format should support – fast searching, inserting, deleting β€’ page format depends on record format – fixed - length – variable - length 52 record formats : fixed - length β€’ schema is stored in system catalog – number of fields is fixed for all records of a table – domain is fixed for all records of a table β€’ each field has fixed length β€’ finding ith field is done via arithmetic. 53 base address ( b ) l1 l2 l3 l4 f1 f2 f3 f4 address = b + l1 + l2 page format : fixed - length records β€’ record id = < page id, slot # > β€’ in the packed case, moving records for free space management changes rid ; maybe unacceptable. 54 record formats : variable - length β€’ array of field offsets is typically superior – direct access to fields – clean way of handling null values 55 $ $ $ $ fields delimited by special symbols f1 f2 f3 f4 f1 f2 f3 f4 array of field offsets page format : variable - length records 56 β€’ need to move records in a page β€’ allocation / deletion must find / release free space β€’ maintain slot directory with < record offset, record length > pairs β€’ records can move on page without changing rid β€’ useful for freely moving fixed - length records ( ex : sorting ) slot array data variable - length records : issues β€’ if a field grows
and no longer fits? – shift all subsequent fields β€’ if record no longer fits in page? – move a record to another page after modification β€’ what if record size > page size? – limit allowed record size 57 storage management : outline – storage technologies – file storage – buffer management ( refresher ) – page layout β€’ nsm, aka row store β€’ dsm, aka column store β€’ pax, hybrid 58 decomposition storage model ( dsm ) 59 dsm page format 60 decompose a relational table to sub - tables per attribute column store ( dsm ) : example β€’ columns stored in pages – denoted with different colors β€’ each column can be accessed individually – pages loaded only for the desired attributes 61 name john jack jane george wolf maria andy ross jack age 22 19 37 43 51 23 56 22 63 dept hr hr it fin it hr fin sales fin three different files : tbl1. name tbl1. age tbl1. dept tbl1 column store ( dsm ) properties pros β€’ saves io by bringing only the relevant attributes β€’ ( very ) memory - compressing columns is typically easier cons β€’ writes more expensive β€’ need tuple stitching at some point ( tuple reconstruction ) β€’ indexed selection with low selectivities β€’ queries that require all or most of the attributes 62 β€’ lossless compression β€’ io reduction implies less cpu wait time – introduces small additional cpu load on otherwise idle cpu β€’ run - length encoding ( rle ) : a lossless
compression algorithm – sequences of redundant data are stored as a single data value compression 63 dept hr hr sales it it cdept ( 2 x hr ) ( 1 x sales ) ( 2 x it ) compression ( 2 ) β€’ bit - vector encoding : compact and constant - time test – useful when we have categorical data & useful when a few distinct values – one bit vector for each distinct value – vector length = # distinct elements 64 dept hr hr sales it it hr sales it 1 1 0 0 0 0 0 1 0 0 0 0 0 1 1 compression ( 3 ) β€’ dictionary encoding – replace long values ( e. g., strings ) with integers 65 dept hr it hr sales hr finance finance it dictionary 1 hr 2 it 3 sales 4 finance cdept 1 2 1 3 1 4 4 2 compression ( 4 ) β€’ frequency partitioning – reorganize each column to reduce entropy at each page 66 dept hr it fin fin hr hr fin hr sales dept 1 hr 5 hr 6 hr 8 hr 2 it 3 fin 4 fin 7 fin 9 sales cdept 1 1 5 1 6 1 8 1 2 1 3 2 4 2 7 2 9 1 column reorganization dictionary - based compression with per - page dictionaries smaller dictionaries improve - memory requirements - cache utilization - effectiveness of run - length encoding operators over compressed data no need to decompress for most query operators β€’ dictionary encoding = > integer comparisons faster than string comparisons select name from tbl where dept = β€œ hr ” vs select
name from tbl where cdept = 1 – per - page dictionaries? β€’ bit - vector encoding = > find the 1 ’ s directly from the bit vectors select count ( * ) from tbl where cdept = β€œ hr ” β€’ run - length encoding = > batch processing ( aggregation ) dsm : writes β€’ row insertions / deletions – affects all columns – multiple i / os – complicated transactions β€’ deletes / updates : implicit – mark record as deleted! β€’ massive data loading : write - optimized storage ( wos ) 68 name john jack jane age 22 19 37 dept hr hr it tbl1 write - optimized storage batch - loading : β€’ < jill, 24, it > β€’ < james, 56, fin > β€’ < jessica, 34, it > 69 name john jack jane jake age 22 19 37 43 dept hr hr it fin name jill james jessica age 24 56 34 dept it fin it in - memory buffer ( fixed - size ) filesystem storage : 3 different files, possibly compressed! jill james jessica 24 56 34 it fin it flush out write rows in - memory, flush columns to disk storage management : outline – storage technologies – file storage – buffer management ( refresher ) – page layout β€’ nsm, aka row store β€’ dsm, aka column store β€’ pax, hybrid 70 partition attributes across ( pax ) 71 decompose a slotted - page internally in mini - pages per attribute -
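Looking back at the column encodings above, a small sketch of run-length and dictionary encoding for a dept column; the code is illustrative, not tied to a specific system:

from itertools import groupby

dept = ["hr", "hr", "sales", "it", "it"]

# run-length encoding: store (value, run length) pairs
rle = [(v, len(list(run))) for v, run in groupby(dept)]
print(rle)                    # [('hr', 2), ('sales', 1), ('it', 2)]

# dictionary encoding: replace strings by small integer codes
dictionary = {v: i + 1 for i, v in enumerate(dict.fromkeys(dept))}
encoded = [dictionary[v] for v in dept]
print(dictionary, encoded)    # {'hr': 1, 'sales': 2, 'it': 3} [1, 1, 2, 3, 3]

# predicates can run directly on the codes: WHERE dept = 'hr' becomes cdept = 1
print(sum(1 for c in encoded if c == dictionary["hr"]))   # 2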
friendly with slotted - pages nsm i / o pattern column β€œ stitching ” delay per - column tuple ids only relevant attributes to cache pax americana β€’ dsm most suitable for analytical queries, but required major rewrites of existing dbms, and penalized transactions a lot. β€’ pax replaces nsm in - place – monetdb / x100 ( vectorwise ) – oracle exadata, snowflake, google spanner, etc. – data lake - oriented file formats β€’ parquet β€’ arrow β€’ … 72 conclusion β€’ one size does not fit all each storage technology favors a different storage layout different workloads require different storage layouts and data access methods β€’ to optimize use of resources and algorithms, we need to know the workload ( unrealistic ) new way of building systems : jit / code generation / virtualization 73 next week principal engineer at oracle zurich formerly professor at ecole polytechnique paris 74 dr. angelos anadiotis will lecture on query processing reading material β€’ row stores ( material of cs300 ). read one of : – cow book. chapters 7. 3 - 7 & 8 ( 2nd ed ) or chapters 8 & 9. 7 - 7 ( 3rd ed ) – database system concepts, sixth edition. ( chapters 13. 1 - 3, 13. 5 + 14. 1 - 9 ) β€’ d. abadi et al. : the design and implementation of modern
column store database systems. foundations and trends in databases, vol. 5, no. 3, pp. 227 - 263 only, 2013. available online at : stratos. seas. harvard. edu / files / stratos / files / columnstoresfntdbs. pdf β€’ a. ailamaki et al. : weaving relations for cache performance. vldb 2001 β€’ https : / / blog. twitter. com / engineering / en _ us / a / 2013 / dremel - made - simple - with - parquet. html optional readings β€’ the remainder of : β€œ the design and implementation of modern column store database systems ” β€’ i. alagiannis, s. idreos, a. ailamaki : h2o : a hands - free adaptive store. sigmod ’ 14. available online at : http : / / dl. acm. org / citation. cfm? doid = 2588555. 2610502 β€’ joy arulraj, andrew pavlo : how to build a non - volatile memory database management system. sigmod 2017, tutorial 75 cs460 systems for data management and data science prof. anastasia ailamaki data - intensive applications and systems ( dias ) lab query execution β€œ it is better to have 100 functions operate on one data structure than to have 10 functions operate on 10 data structures ” – alan perlis consistency protocols cap theorem gossip protocols distributed / decentraliz ed
(course map: consistency protocols, cap theorem, gossip protocols, distributed/decentralized systems; data science software stack – data processing, resource management & optimization, data storage; distributed file systems (gfs), nosql dbs (dynamo, bigtable, cassandra), distributed messaging systems (kafka), structured data (spark sql), graph data (pregel, graphlab, x-stream, chaos), machine learning, batch data (mapreduce, dryad, spark), streaming data (storm, naiad, flink, spark streaming, google dataflow), scheduling (mesos, yarn), query optimization, storage hierarchies & layouts, transaction management, query execution – today's topic) 2

(simplified) dbms architecture 3
(diagram: web forms, application front ends and the sql interface issue sql commands; the parser + optimizer + plan execution layer sits on top of files and access methods, buffer management and storage management; the recovery manager and transaction manager run alongside; data moves between disk and memory)

zoom in: query planner / optimizer / executor 4

how the dbms executes a query (plan)
parser β†’ planner β†’ (cost-based) optimizer β†’ plan executor
abstract syntax tree β†’ logical query plan β†’ physical query plan (the optimizer uses cost estimates)
query plan: operators are arranged in a tree. data flows from leaves to root. output of root = query result. 5
composable algebra => composable execution

processing model
the processing model of a dbms defines how the system executes a query plan – different trade-offs for different workloads
β€’ extreme i: tuple-at-a-time via the iterator model
β€’ extreme ii: block-oriented model (typically column-at-a-time) 6
iterator model (volcano model)
each query operator implements a next function.
β€’ on each invocation, the operator returns either a single tuple or a marker that there are no more tuples
β€’ next calls next on the operator's children to retrieve and process their tuples 7

common operator interface => composability

    class operator:
        optional<tuple> next()

    class project: operator input, expression proj
        optional<tuple> next():
            t = input.next()
            if (t empty) return empty
            return proj(t)

    class filter: operator input, expression pred
        optional<tuple> next():
            while (true):
                t = input.next()
                if (t empty or pred(t)) return t

notation 8 – the same operators, written with a generator shorthand:

    class operator:
        generator<tuple> next()

    class project: operator input, expression proj
        generator<tuple> next():
            for t in input.next():
                emit proj(t)

    class filter: operator input, expression pred
        generator<tuple> next():
            for t in input.next():
                if pred(t): emit t
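to make the composition concrete, here is a small runnable python version of the iterator model sketched above; the scan/filter/project classes, the toy table and the driver loop are illustrative assumptions rather than the lecture's code:

    # minimal volcano-style iterator model: each operator exposes next(),
    # pulling one tuple at a time from its child
    class Scan:
        def __init__(self, rows):
            self.rows = rows
            self.pos = 0
        def next(self):
            if self.pos >= len(self.rows):
                return None            # "no more tuples" marker
            t = self.rows[self.pos]
            self.pos += 1
            return t

    class Filter:
        def __init__(self, child, pred):
            self.child, self.pred = child, pred
        def next(self):
            while True:
                t = self.child.next()
                if t is None or self.pred(t):
                    return t

    class Project:
        def __init__(self, child, proj):
            self.child, self.proj = child, proj
        def next(self):
            t = self.child.next()
            return None if t is None else self.proj(t)

    # select name from tbl where age > 20 and dept = 'hr'
    tbl = [("john", 22, "hr"), ("jack", 19, "hr"), ("jane", 37, "it")]
    plan = Project(Filter(Scan(tbl), lambda t: t[1] > 20 and t[2] == "hr"),
                   lambda t: (t[0],))
    while (t := plan.next()) is not None:
        print(t)                        # ('john',)

whoever holds the root operator drives execution by calling next() repeatedly, which is exactly how the plan executor consumes a query plan.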
example: iterator model 9
example: iterator model (cont) 10
(diagrams stepping through the numbered next() calls down the plan tree and the tuples returned back up)

(interpreted) expression evaluation
nodes in the tree represent different expression types:
β€’ comparisons (=, <, >, !=)
β€’ conjunction (and), disjunction (or)
β€’ arithmetic operators (+, -, *, /, %)
β€’ constant values
β€’ tuple attribute references 11

select a.id, b.value from a, b where a.id = b.id and b.c = b.d and b.value > 100

expression tree for the filter: and( =(attribute(b.c), attribute(b.d)), >(attribute(b.value), constant(100)) )

    class filter: operator input
        expression pred = (b.c = b.d) and (b.value > 100)
        generator<tuple> next(): …

interpreted, tuple-at-a-time processing
the dbms traverses the tree. for each node that it visits, it has to figure out what the operator needs to do. same for expressions. this happens for every … single … tuple … 12

many function calls
β€’ save / restore contents of cpu registers
β€’ force a new instruction stream into the pipeline β†’ bad for the instruction cache
generic code
β€’ has to cover every table, datatype, query
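a small python sketch of the interpreted expression evaluation just described – an expression tree object walked once per tuple; the node classes and the sample tuples are assumptions for illustration:

    # each node type decides at runtime what to do, for every single tuple
    class Const:
        def __init__(self, v): self.v = v
        def eval(self, tup): return self.v

    class Attr:
        def __init__(self, name): self.name = name
        def eval(self, tup): return tup[self.name]

    class Cmp:
        def __init__(self, op, left, right):
            self.op, self.left, self.right = op, left, right
        def eval(self, tup):
            l, r = self.left.eval(tup), self.right.eval(tup)
            return {"=": l == r, ">": l > r, "<": l < r, "!=": l != r}[self.op]

    class And:
        def __init__(self, left, right): self.left, self.right = left, right
        def eval(self, tup): return self.left.eval(tup) and self.right.eval(tup)

    # pred = (b.c = b.d) and (b.value > 100)
    pred = And(Cmp("=", Attr("c"), Attr("d")),
               Cmp(">", Attr("value"), Const(100)))

    print(pred.eval({"c": 7, "d": 7, "value": 150}))   # True
    print(pred.eval({"c": 7, "d": 8, "value": 150}))   # False

every eval() call here is a dynamic dispatch plus a lookup, repeated for every single tuple – precisely the per-tuple overhead listed above.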
processing model
the processing model of a dbms defines how the system executes a query plan – different trade-offs for different workloads
β€’ extreme i: tuple-at-a-time via the iterator model
β€’ extreme ii: block-oriented model (typically column-at-a-time) 13

block-oriented (aka materialization) model
each operator processes its input all at once and emits its output all at once
β€’ the operator "materializes" its output as a single result.
β€’ often bottom-up plan processing. 14

    class operator:
        tuples output()

    class project: operator input, expression proj
        tuples output():
            out = {}
            for t in input.output():
                out.append(proj(t))
            return out

    class filter: operator input, expression pred
        tuples output():
            out = {}
            for t in input.output():
                if pred(t): out.append(t)
            return out

block-oriented model 15
(diagram: the same plan, with each operator producing its whole output before its parent runs)
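a runnable python counterpart of the materialization-model pseudocode above, under the same assumptions as the earlier iterator sketch (toy table, illustrative class names):

    # block-oriented (materialization) model: output() returns the whole result at once
    class Scan:
        def __init__(self, rows): self.rows = rows
        def output(self): return list(self.rows)

    class Filter:
        def __init__(self, child, pred): self.child, self.pred = child, pred
        def output(self): return [t for t in self.child.output() if self.pred(t)]

    class Project:
        def __init__(self, child, proj): self.child, self.proj = child, proj
        def output(self): return [self.proj(t) for t in self.child.output()]

    # select name from tbl where age > 20 and dept = 'hr'
    tbl = [("john", 22, "hr"), ("jack", 19, "hr"), ("jane", 37, "it")]
    plan = Project(Filter(Scan(tbl), lambda t: t[1] > 20 and t[2] == "hr"),
                   lambda t: (t[0],))
    print(plan.output())   # [('john',)]

each operator fully materializes its intermediate result before the parent starts, which is exactly the cost the next slides attack with tuple ids and selection vectors.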
the (output) materialization problem – naive version
select name from tbl where age > 20 and dept = "hr" 16
(base table tbl, tids 1-3: john/22/hr, jack/19/hr, jane/37/it; the dept = "hr" selection materializes tids 1, 2 with their dept values, the age > 20 selection materializes tids 1, 3 with their age values, and their intersection leaves tid 1, whose name is john)

the (output) materialization problem – version 2
select name from tbl where age > 20 and dept = "hr" 17
(the operators now exchange only tid lists: age > 20 yields tids 1, 3, dept = "hr" narrows this to tid 1, and the final projection fetches the name john)
tid as an extra filter to reduce output! can we reduce it further?

the (output) materialization problem – selection vector
select name from tbl where age > 20 and dept = "hr" 18
(age > 20 produces bitmap 1 0 1; combined with dept = "hr" this gives bitmap 1 0 0, which selects the name john)
β€’ only materialize the bitmap
β€’ perform calculations only for the relevant tuples
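a short python sketch of the selection-vector idea on the same toy table – predicates produce bitmaps over a column, bitmaps are and-ed, and values are materialized only for the surviving positions; the column layout and helper names are assumptions:

    # columnar toy table: one python list per column, positions are implicit tuple ids
    name = ["john", "jack", "jane"]
    age  = [22, 19, 37]
    dept = ["hr", "hr", "it"]

    def select_gt(col, const):
        # selection produces a bitmap (selection vector), not a copy of the data
        return [1 if v > const else 0 for v in col]

    def select_eq(col, const):
        return [1 if v == const else 0 for v in col]

    def bit_and(a, b):
        return [x & y for x, y in zip(a, b)]

    def materialize(col, bitmap):
        # touch only the positions whose bit is set
        return [col[i] for i, bit in enumerate(bitmap) if bit]

    # select name from tbl where age > 20 and dept = 'hr'
    bm = bit_and(select_gt(age, 20), select_eq(dept, "hr"))
    print(bm)                        # [1, 0, 0]
    print(materialize(name, bm))     # ['john']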
the (tuple) materialization problem
β€’ when joining tables, columns can get shuffled => cannot use virtual ids => stitching causes random accesses 19
(example: tbl2 is joined with tbl1 on tbl2.name = tbl1.name; in the join result the tbl1 tids appear in order 1, 3, 2 – john, jane, jack – while tbl1.age is still stored in tid order 1, 2, 3, so fetching tbl1.age afterwards needs random accesses: the order of tbl1.name entries can change after the join!!!)
solution 1: stitch columns before the join
solution 2: sort the list of tids before the projection
solution 3: use an order-preserving join algorithm (e.g. jive-join) – but not always applicable

block-oriented model 20
+ no next() calls β†’ no per-tuple overhead
+ combined with columnar storage: β–ͺ cache-friendly β–ͺ simd-friendly β–ͺ "run the same operation over consecutive data"
+ no interpretation when evaluating expressions (in most cases) – typically use macros to produce 1000s of micro-operators (!!!): selection_gt_int32(int* in, int pred, int* out), selection_lt_int32(int* in, int pred, int* out), …
– output materialization is costly (in terms of memory bandwidth)

the beer analogy (by marcin zukowski): how to get 100 beers
tuple-at-a-time execution: β€’ go to the store β€’ pick a beer bottle β€’ pay at the register β€’ walk home β€’ put the beer in the fridge β€’ repeat till you have 100 beers β†’ many unnecessary steps 21
column-at-a-time execution: β€’ go to the store β€’ take 100 beers β€’ pay at the register β€’ walk home β†’ 100 beers are not easy to carry

processing model
the processing model of a dbms defines how the system executes a query plan – different trade-offs for different workloads
β€’ extreme i: tuple-at-a-time via the iterator model
β€’ middle ground: vectorization model
β€’ extreme ii: block-oriented model (typically column-at-a-time) 22
the middle ground: vectorization model
β€’ like the iterator model, each operator implements a next function
β€’ each operator emits a vector of tuples instead of a single tuple
– vector-at-a-time, aka "carry a crate of beers at a time"!
– the operator's internal loop processes multiple tuples at a time.
– vector size varies based on hardware or query properties β€’ general idea: the vector must fit in the cpu cache 23

the middle ground: vectorization model
β€’ like the iterator model, each operator implements a next function
β€’ each operator emits a vector of tuples instead of a single tuple 24

    class operator:
        optional<vector<tuple>> next()

    class project: operator input, expression proj
        optional<vector<tuple>> next():
            vec = input.next()
            if (vec empty) return empty
            out = {}
            for t in vec:
                out.add(proj(t))
            return out

    class filter: operator input, expression pred
        optional<vector<tuple>> next():
            while (true):
                vec = input.next()
                if (vec empty) return vec
                out = {}
                for t in vec:
                    if pred(t): out.add(t)
                return out

vectorization model: ideal for olap queries
β€’ greatly reduces the number of invocations per operator
β€’ allows operators to use vectorized (simd) instructions to process batches of tuples
β€’ basic model commercialized by … 25
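a compact python rendering of the vectorized model above – next() hands back a batch (vector) of tuples instead of a single tuple; the batch size, class names and toy data are illustrative assumptions:

    # vector-at-a-time: operators exchange small batches that fit in cache
    VECTOR_SIZE = 1024

    class Scan:
        def __init__(self, rows):
            self.rows, self.pos = rows, 0
        def next(self):
            if self.pos >= len(self.rows):
                return None                       # end of stream
            vec = self.rows[self.pos:self.pos + VECTOR_SIZE]
            self.pos += len(vec)
            return vec

    class Filter:
        def __init__(self, child, pred):
            self.child, self.pred = child, pred
        def next(self):
            while True:
                vec = self.child.next()
                if vec is None:
                    return None
                out = [t for t in vec if self.pred(t)]
                if out:                           # skip batches that filtered to nothing
                    return out

    class Project:
        def __init__(self, child, proj):
            self.child, self.proj = child, proj
        def next(self):
            vec = self.child.next()
            return None if vec is None else [self.proj(t) for t in vec]

    # select name from tbl where age > 20 and dept = 'hr'
    tbl = [("john", 22, "hr"), ("jack", 19, "hr"), ("jane", 37, "it")] * 1000
    plan = Project(Filter(Scan(tbl), lambda t: t[1] > 20 and t[2] == "hr"),
                   lambda t: t[0])
    count = 0
    while (vec := plan.next()) is not None:
        count += len(vec)
    print(count)   # 1000 qualifying tuples, fetched in a handful of next() calls

the per-next() overhead is now paid once per batch instead of once per tuple, and the tight inner loops are where simd can be applied.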
processing model
the processing model of a dbms defines how the system executes a query plan – different trade-offs for different workloads
β€’ extreme i: tuple-at-a-time via the iterator model
β€’ query compilation
β€’ vectorization model
β€’ extreme ii: block-oriented model (typically column-at-a-time) 26

remark from microsoft hekaton
after switching to an in-memory dbms, the only way to increase throughput is to reduce the number of instructions executed.
– to go 10x faster, the dbms must execute 90% fewer instructions
– to go 100x faster, the dbms must execute 99% fewer instructions
the only way to achieve such a reduction in the number of instructions is through code specialization.
– generate code that is specific to a particular task in the dbms.
– (currently, most code is written to be understandable) 27

move from general to specialized code
β€’ cpu-intensive code parts can be natively compiled if they have a similar execution pattern on different inputs – access methods – operator execution – predicate evaluation
β€’ goal: avoid runtime decisions! decide once, when you see the query plan! 28
β€’ attribute types => (inline) pointer casting instead of data-access (virtual) function calls
β€’ query predicate types => data comparisons

query compiler 29
(pipeline: parser β†’ plan rewriter β†’ plan optimizer β†’ code generator, turning the abstract syntax tree into a logical query plan, a physical query plan, and finally native code)
two approaches for code generation
transpilation
β€’ the dbms converts a query plan into imperative source code
β€’ compile the produced code with a conventional compiler to generate native code
jit compilation
β€’ generate an intermediate representation (ir) of the query that can be quickly compiled into native code. 30

transpilation use case: the hique system
β€’ hique: holistic integrated query engine
β€’ for a given query plan, create a c program that implements that query's execution plan. β†’ bake in all the predicates and type conversions.
β€’ advantages: – fewer function calls during query evaluation – generated code uses cache-resident data more efficiently – compiler optimization techniques come for free
β€’ an off-the-shelf compiler converts the code into a shared object, links it to the dbms process, and then invokes the exec function. 31

operator templates 32
select * from a where a.val = ? + 1

interpreted plan:

    for t in range(table.num_tuples):
        tuple = get_tuple(table, t)
        if eval(predicate, tuple, params):
            emit(tuple)

get_tuple: 1. get the schema for the table from the catalog 2. calculate the offset based on the tuple size 3. return a pointer to the tuple
eval: 1. traverse the predicate tree – pull values up 2. for tuple values, calculate the offset of the target attribute 3. resolve the datatype (switch / virtual call)
templated plan (everything marked ### is known at query compile time):

    tuple_size = ###
    predicate_offset = ###
    parameter_value = ###
    for t in range(table.num_tuples):
        tuple = table.data + t * tuple_size
        val = *(tuple + predicate_offset)
        if (val == parameter_value + 1):
            emit(tuple)

integrating with the rest of the dbms
β€’ the generated query code can invoke any other function in the dbms β†’ no need to generate code for the whole db!
β€’ re-use the same components as interpreted queries: – concurrency control – logging and checkpoints – indexes 33

indicative performance [krikellas, icde 2010] 34
up to 2 orders of magnitude reported improvement when compared to interpreted dbs (e.g., postgresql)

the catch [krikellas, icde 2010] 35
compilation takes time! in practice, ~1 second is not a big issue for olap queries
β€’ an olap query may take tens to hundreds of seconds
β€’ how about oltp queries? hint: in oltp, we know the typical queries β†’ pre-compile and cache

hique take-home message
β€’ reduce function calls
β€’ specialized code β†’ avoid type-checking, smaller code, promote cache reuse
but
β€’ compilation takes time
β€’ sticks to the "legacy" operator abstraction 36
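a toy python illustration of the transpilation idea – specialized source text is generated for one predicate at plan time, compiled once, and then run without any per-tuple interpretation; hique itself generates c and links a shared object, so everything below (function names, the exec-based compilation) is only a sketch of the concept:

    # "transpile" the filter a.val == ? + 1 into specialized python source,
    # compile it once, and reuse the compiled function for every tuple batch
    def generate_filter(attr_index, parameter_value):
        src = (
            "def compiled_filter(rows):\n"
            "    out = []\n"
            "    for tuple_ in rows:\n"
            f"        if tuple_[{attr_index}] == {parameter_value} + 1:\n"
            "            out.append(tuple_)\n"
            "    return out\n"
        )
        namespace = {}
        exec(compile(src, "<generated>", "exec"), namespace)   # pay compilation cost once
        return namespace["compiled_filter"]

    # query compile time: attribute offset and parameter are baked into the code
    flt = generate_filter(attr_index=1, parameter_value=41)

    # execution time: no predicate tree, no type resolution, just the loop
    rows = [("a", 42), ("b", 7), ("c", 42)]
    print(flt(rows))    # [('a', 42), ('c', 42)]

the one-off call to compile/exec is where the "compilation takes time" catch shows up; for oltp the generated function would be pre-compiled and cached, as the slide suggests.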
two approaches for code generation
transpilation
β€’ the dbms converts a query plan into imperative source code
β€’ compile the produced code with a conventional compiler to generate native code
jit compilation
β€’ generate an intermediate representation (ir) of the query that can be quickly compiled into native code. 43

reminder: operator templates 44
select * from a where a.val = ? + 1
templated plan:

    tuple_size = ###
    predicate_offset = ###
    parameter_value = ###
    for t in range(table.num_tuples):
        tuple = table.data + t * tuple_size
        val = *(tuple + predicate_offset)
        if (val == parameter_value + 1):
            emit(tuple)

interpreted plan:

    for t in range(table.num_tuples):
        tuple = get_tuple(table, t)
        if eval(predicate, tuple, params):
            emit(tuple)

operator boundaries 45
one specialized loop is generated per operator, and a driver calls them in sequence:

    # op-1: scan + filter on a
    tuple_size = ###
    predicate_offset = ###   (val_offset)
    for t in range(table.num_tuples):
        tuple = table.data + t * tuple_size
        val = *(tuple + predicate_offset)
        if (val == parameter_value + 1):
            emit(tuple)

    # op-2: hash the join key of the emitted tuples and probe
    tuple2_size = ###
    key_offset = ###   (d_offset)
    for t in range(emitted.num_tuples):
        t2 = emitted.data + t * tuple2_size
        k = hash(*(t2 + key_offset))
        while (probe ht(k)):
            ...

    main:
        execute(op-1)
        execute(op-2)
        …
        execute(op-n)
select a.a + b.b from a, b where a.val = ? + 1 and b.c = a.d
(plan diagram: a selection Οƒ over a feeding the join with b)

generated code: more specialization 46
instead of one loop per operator, all the plan-time constants are declared together so the operators can share specialized code:

    tuple_size = ###
    predicate_offset = ###   (val_offset)
    parameter_value = ###
    key_offset = ###         (d_offset)
    for t in range(table.num_tuples):
        tuple = table.data + t * tuple_size
        …
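to make the contrast with the per-operator loops concrete, here is a small hypothetical python sketch of the fused version – the filter on a and the hash probe against b run inside one loop, with no materialized intermediate in between; the tables, build/probe roles and column order are assumptions for illustration:

    # select a.a + b.b from a, b where a.val = ? + 1 and b.c = a.d
    # operator-at-a-time would: (1) filter a into a temp, (2) build/probe a hash table.
    # the fused ("more specialized") loop does both per tuple of a, with no temp table.
    def fused_query(a_rows, b_rows, parameter_value):
        # build side: hash b on b.c (one pass over b)
        ht = {}
        for b_a, b_b, b_c in b_rows:                 # b = (a, b, c)
            ht.setdefault(b_c, []).append(b_b)

        results = []
        for a_a, a_val, a_d in a_rows:               # a = (a, val, d)
            if a_val == parameter_value + 1:         # filter, inlined
                for b_b in ht.get(a_d, []):          # probe on a.d, inlined
                    results.append(a_a + b_b)        # project a.a + b.b
        return results

    a_rows = [(1, 42, "x"), (2, 7, "y"), (3, 42, "y")]
    b_rows = [(10, 100, "x"), (20, 200, "y"), (30, 300, "y")]
    print(fused_query(a_rows, b_rows, parameter_value=41))   # [101, 203, 303]

this is the effect the generated code above is after: operator boundaries disappear and the compiler sees one tight, fully specialized loop.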