WhatsApp Technical Architecture
2020-02-27
- 1. Scaling to Millions of Simultaneous Connections. Rick Reed, WhatsApp. Erlang Factory SF, March 30, 2012.
- 2. About: Joined WhatsApp in 2011. New to Erlang. Background in performance of C-based systems on FreeBSD and Linux. Prior work at Yahoo! and SGI.
- 3. Overview: The “good problem to have”; performance goals; tools and techniques; results; general findings; specific scalability fixes.
- 4. The Problem: A good problem, but a problem nonetheless. Growth, earthquakes, and soccer! [chart: message rates over the past four weeks, with spikes annotated: soccer goals, HT/FT, Mexican earthquake]
- 5. The Problem: Initial server loading was ~200k connections, a discouraging prognosis for growth, and the cluster was brittle in the face of failures and overloads.
- 6. Performance Goals: 1 million connections per server, and resilience against disruptions under load: software failures, hardware failures (servers, network gear), and world events (sports, earthquakes, etc.).
- 7. Performance Goals: Our standard configuration is dual Westmere hex-core (24 logical CPUs), 100GB RAM, and SSD, with dual NICs (user-facing and back-end/distribution), running FreeBSD 8.3 and OTP R14B03.
- 8. Tools and Techniques: System activity monitoring (wsar), at both the OS level and the BEAM level.
- 9. Tools and Techniques: Processor hardware performance counters (pmcstat); dtrace, kernel lock-counting, gprof.
- 10. Tools and Techniques: fprof (with and without cpu_timestamp); BEAM lock-counting (invaluable!).
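A minimal fprof session of the kind this slide alludes to, as a sketch; the `cpu_time` option asks fprof for CPU timestamps rather than wall-clock time (the cpu_timestamp fix for FreeBSD on slide 22 is what makes that usable there), and `my_server:handle_burst/0` is a hypothetical stand-in for the code under test.

```erlang
%% Minimal fprof sketch; my_server:handle_burst/0 is hypothetical.
fprof:trace([start, {procs, all}, cpu_time]),  % cpu_time: CPU timestamps, not wall clock
my_server:handle_burst(),
fprof:trace(stop),
fprof:profile(),                               % build the call graph from the trace file
fprof:analyse([{dest, "fprof.analysis"}]).     % write the human-readable report
```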
- 11. Tools and Techniques: Synthetic workload. Good for subsystems with simple interfaces; limited value for user-facing systems.
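To make the idea concrete, a minimal synthetic-load sketch of the kind this slide describes: open N TCP connections against a test server and hold them. The target address and the `<<"ping">>` payload are illustrative assumptions, not WhatsApp's actual tool.

```erlang
%% Hold N mostly-idle TCP connections against a test server.
connect_many(Host, Port, N) ->
    [spawn(fun() -> hold(Host, Port) end) || _ <- lists:seq(1, N)],
    ok.

hold(Host, Port) ->
    {ok, Sock} = gen_tcp:connect(Host, Port, [binary, {active, false}]),
    keepalive(Sock).

keepalive(Sock) ->
    timer:sleep(30000),                  % idle like a real client
    ok = gen_tcp:send(Sock, <<"ping">>), % assumed keepalive payload
    keepalive(Sock).
```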
- 12. Tools and Techniques: Teed workload (production traffic duplicated to a test server), usable where side effects can be contained. Extremely useful for tuning.
- 13. Tools and Techniques: Diverted workload: add extra production load to a server, either via DNS (extra IP aliases, though with TTL issues) or via IPFW forwarding. Ran into a few kernel panics at high connection counts.
- 14. Results: Initial bottlenecks appeared around 425k connections. The first round of fixes got us to 1M connections; the fruit was hanging pretty low.
- 15. Results: Continued attacking similar bottlenecks and achieved 2M connections about a month later, then put further optimizations on the back burner.
- 16. Results: Began optimizing app code after New Year's. An unintentional record attempt in February peaked at 2.8M connections before we intervened: 571k packets/sec, >200k dist msgs/sec.
- 17. Results: Still trying to reach the elusive 3M connections; St. Patrick's Day wasn't as lucky as hoped.
- 18. General Findings: Erlang has awesome SMP scalability (>85% CPU utilization across 24 logical CPUs), and FreeBSD shines as well.
- 19. General Findings: [chart: CPU% vs. number of connections]
- 20. General Findings: Contention, contention, contention.
  - From 200k to 2M connections, the fixes were all contention fixes.
  - Some issues are internal to BEAM; some are addressable with app changes.
  - Most required BEAM patches; some required app changes, especially partitioning the workload correctly.
  - Some common Erlang idioms come at a price.
- 21. Specific Scalability Fixes (FreeBSD):
  - Backported the TSC-based kernel timecounter, making gettimeofday(2) calls much less expensive.
  - Backported the igb network driver; had issues with MSI-X queue stalls.
  - sysctl tuning: the obvious limits (e.g., kern.ipc.maxsockets) plus net.inet.tcp.tcphashsize=524288 (see the sketch below).
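For reference, both knobs named on the slide are boot-time tunables on FreeBSD of that era, so a sketch belongs in /boot/loader.conf rather than /etc/sysctl.conf. The tcphashsize value is from the slide; the maxsockets value is purely an illustrative assumption sized for the 1M+ connection goal.

```
# /boot/loader.conf sketch (FreeBSD 8.x era)
kern.ipc.maxsockets=2400000        # assumed value; the slide names only the knob
net.inet.tcp.tcphashsize=524288    # value from the slide
```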
- 22. Specific Scalability Fixes (BEAM metrics):
  - Scheduler metrics: %util, context switches, waits, sleeps, ...
  - statistics(message_queues): messages queued, number of non-empty queues, longest queue (approximated with stock OTP in the sketch below).
  - process_info(message_queue_stats): enqueue/dequeue/send counts and rates over 1s, 10s, and 100s windows.
  - statistics(message_counts): an aggregation of message_queue_stats.
  - Enabled fprof cpu_timestamp support for FreeBSD.
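Note that statistics(message_queues) and process_info(message_queue_stats) here are WhatsApp's BEAM patches, not stock OTP. Stock OTP does expose per-process message_queue_len, so the aggregate metric can be roughly approximated like this (coarse and racy, but it computes the same triple the slide lists):

```erlang
%% Approximate the patched statistics(message_queues) using stock OTP's
%% per-process message_queue_len: {MsgsQueued, NonEmptyQueues, Longest}.
queue_stats() ->
    Lens = [L || P <- erlang:processes(),
                 {message_queue_len, L} <- [process_info(P, message_queue_len)]],
    NonEmpty = [L || L <- Lens, L > 0],
    {lists:sum(Lens), length(NonEmpty), lists:max([0 | Lens])}.
```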
- 23. Specific Scalability Fixes (BEAM metrics, cont.):
  - Made lock-counting work with larger async thread counts (e.g., +A 1024); see the lcnt session below.
  - Added suspend, location, and port_locks options to erts_debug:lock_counters; process/port lock counting can now be enabled and disabled at runtime.
  - Fixed missing accounting for outbound dist bytes.
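The stock lock-counting workflow these patches extend lives in OTP's lcnt tool, which requires an emulator built with lock counting. A minimal session sketch, with `my_load:run/0` as a hypothetical workload:

```erlang
%% Minimal lcnt session (emulator must be built with lock counting,
%% e.g., configured with --enable-lock-counter).
lcnt:rt_opt({copy_save, true}),   % keep stats for locks that get destroyed
lcnt:clear(),                     % zero the counters
my_load:run(),                    % hypothetical workload
lcnt:collect(),                   % pull counters from the runtime
lcnt:conflicts().                 % print the most contended locks
```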
- 24. Specific Scalability Fixes (BEAM tuning):
  - +swt low: avoid scheduler perma-sleep.
  - +Mummc/mmmbc/mmsbc 99999: prefer mseg over malloc.
  - +Mut 24: want an allocator instance per scheduler.
- 25. Specific Scalability Fixes (BEAM tuning, cont.):
  - +Mulmbcs 32767, +Mumbcgs 1, +Musmbcs 2047: want large 2M-aligned mseg allocations to maximize superpage promotions.
  - Run with real-time scheduling priority.
  - +ssct 1 (scheduler spin count, via patch).
  - A combined invocation sketch follows below.
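Pulling the two tuning slides together into one invocation sketch. Flag spellings are as given on the slides (OTP R14-era allocator flags), +ssct exists only with their scheduler-spin-count patch, and FreeBSD's rtprio wrapper is my assumption for how "real-time scheduling priority" was applied.

```
# Sketch only: R14-era flags from the slides; +ssct requires their patch.
rtprio 0 erl \
    +swt low \
    +Mummc 99999 +Mummmbc 99999 +Mummsbc 99999 \
    +Mut 24 \
    +Mulmbcs 32767 +Mumbcgs 1 +Musmbcs 2047 \
    +ssct 1
```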
- 26. Specific Scalability Fixes (BEAM contention):
  - timeofday lock (especially timeofday delivery).
  - Reduced slot traversals on the timer wheel.
  - Widened the bif timer hash table; ultimately moved bif timers to receive timeouts (see the sketch below).
  - Improved check_io allocation scalability.
  - Added prim_file:write_file/3 and /4 (port reuse).
  - Disabled the mseg max check.
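The "moved bif timers to receive timeouts" item is an app-side idiom worth making concrete: instead of arming a BIF timer per idle connection (e.g., erlang:send_after/3, which goes through the shared timer structures that contended at these connection counts), each process sits in its own receive timeout. A minimal sketch, with handle/2 and on_idle/1 hypothetical:

```erlang
%% Per-process receive timeout instead of a BIF timer: the timeout
%% lives in the process itself, avoiding the shared timer wheel.
loop(State) ->
    receive
        {msg, M} ->
            loop(handle(M, State))
    after 30000 ->                  % fires only if nothing arrives in 30s
        loop(on_idle(State))
    end.
```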
- 27. Specific Scalability Fixes (BEAM contention, cont.): Reduced setopts calls in prim_inet:accept and inet:tcp_controlling_process.
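For context, this is the accept-and-hand-off path that drives those internal setopts calls: gen_tcp:accept wraps prim_inet:accept, and gen_tcp:controlling_process wraps inet:tcp_controlling_process. A minimal acceptor loop using the public API, with conn_loop/1 hypothetical:

```erlang
%% Accept loop: each accept plus ownership hand-off triggers the
%% internal setopts traffic the slide's patch reduces.
acceptor(LSock) ->
    {ok, Sock} = gen_tcp:accept(LSock),
    Pid = spawn(fun() -> conn_loop(Sock) end),
    ok = gen_tcp:controlling_process(Sock, Pid),
    acceptor(LSock).
```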
- 28. Specific Scalability Fixes (OTP throughput):
  - Added GC throttling when the message queue is long.
  - Increased the default dist receive buffer from 4k to 256k (and made it configurable; see the config sketch below).
  - Patched mnesia_tm to dispatch async_dirty transactions to separate per-table processes for concurrency.
  - Added denormalized group member lists to pg2 to improve lookup throughput.
  - Increased the maximum configurable mseg cache size.
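Their dist-buffer change was a BEAM patch, but stock OTP can at least raise the TCP socket buffers on distribution connections through kernel application parameters. A sys.config sketch, as an approximation of the slide's fix rather than the same internal buffer:

```erlang
%% sys.config sketch: larger socket buffers on dist connections.
[{kernel,
  [{inet_dist_listen_options,  [{recbuf, 262144}, {sndbuf, 262144}]},
   {inet_dist_connect_options, [{recbuf, 262144}, {sndbuf, 262144}]}]}].
```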
- 29. Specific Scalability Fixes (Erlang usage):
  - Prefer os:timestamp to erlang:now.
  - Implemented cross-node gen_server calls without using monitors (reduces dist traffic and proc link lock contention).
  - Partitioned ets and mnesia tables and localized access to a smaller number of processes (see the sketch below).
  - Keep mnesia clusters small.
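The table-partitioning item is the most reusable app-level trick here. A minimal sketch of a hash-partitioned ets table, with the table names, partition count, and function names all my own assumptions: writers spread across independent tables (and their locks) instead of piling onto a single one.

```erlang
-define(PARTS, 16).

%% Create ?PARTS tables instead of one.
init() ->
    [ets:new(part_name(I), [named_table, public, set,
                            {write_concurrency, true}])
     || I <- lists:seq(0, ?PARTS - 1)],
    ok.

part_name(I) -> list_to_atom("sess_part_" ++ integer_to_list(I)).

%% Route each key to its partition by hash.
table_for(Key) -> part_name(erlang:phash2(Key, ?PARTS)).

store(Key, Val) -> ets:insert(table_for(Key), {Key, Val}).

fetch(Key) ->
    case ets:lookup(table_for(Key), Key) of
        [{_K, V}] -> {ok, V};
        []        -> not_found
    end.
```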
- 30. Specific Scalability Fixes (operability):
  - Added a [prepend] option to erlang:send.
  - Added process_flag(flush_message_queue).
- 31. Questions? Comments? rr@whatsapp.com