DataTorrent Senior Engineer Siyuan Hua: Introduction and Applications of Apache Apex, a Next-Generation Real-Time Data Processing Engine

2020-02-27

  • 1. Introduction to Apache Apex. Siyuan Hua (@hsy541), Apache Apex PMC, Senior Engineer at DataTorrent. Big Data Technology Conference, Beijing, Dec 10, 2016
  • 2. Stream Data Processing (architecture diagram): data sources (events, logs, sensor data, social, databases/CDC) feed a transform/analytics layer built from operators (Oper1 → Oper2 → Oper3) and exposed through the Declarative API, DAG API, SQL, the Operator Library, Apache Beam, and SAMOA (roadmap); results flow on to data delivery and real-time visualization
  • 3. Industries & Use Cases
    ᵒ Financial Services: fraud and risk monitoring; credit risk assessment; improve turnaround time of trade settlement processes
    ᵒ Ad-Tech: real-time customer-facing dashboards on key performance indicators; click fraud detection; billing optimization
    ᵒ Telecom: call detail record (CDR) & extended data record (XDR) analysis; understanding customer behavior and context; packaging and selling anonymous customer data
    ᵒ Manufacturing: supply chain planning & optimization; preventive maintenance; product quality & defect tracking
    ᵒ Energy: smart meter analytics; reduce outages & improve resource utilization; asset & workforce management
    ᵒ IoT: data ingestion and processing; predictive analytics; data governance
    ᵒ Horizontal: large-scale ingest and distribution; real-time ELTA (Extract Load Transform Analyze); dimensional computation & aggregation; enforcing data quality and data governance requirements; real-time data enrichment with reference data; real-time machine learning model scoring
  • 4. Apache Apex
    ᵒ In-memory, distributed, parallel stream processing
    ᵒ Application logic broken into components (operators) that execute distributed across a cluster
    ᵒ Unobtrusive Java API to express (custom) logic; state and metrics are maintained in member variables
    ᵒ Windowing and event-time processing
    ᵒ Scalable, high throughput, low latency: operators can be scaled up or down at runtime according to load and SLA; dynamic scaling (elasticity), compute locality
    ᵒ Fault tolerance & correctness: automatic recovery from node outages without reprocessing from the beginning; state is preserved via checkpointing with incremental recovery; end-to-end exactly-once
    ᵒ Operability: system and application metrics, record/visualize data; dynamic changes and resource allocation, elasticity
  • 5. Native Hadoop Integration: YARN is the resource manager; HDFS is used for storing persistent state
  • 6. Application Development Model: a stream is a sequence of data tuples; a typical operator takes one or more input streams, performs computations, and emits one or more output streams. Each operator is either your custom business logic in Java or a built-in operator from the open source library; an operator has many instances that run in parallel, and each instance is single-threaded. A Directed Acyclic Graph (DAG) is made up of operators and streams. (Diagram: operators connected by tuple, filtered, enriched, and output streams.)
  • 7. Development Process (example pipeline: Kafka Input → Parser → Filter → Word Counter → JDBC Output to a database): use operators from the library or develop custom logic; connect operators to form an application; configure operator properties; configure scaling and other platform attributes; test functionality and performance, and iterate
  • 8. Application Specification: the DAG API (compositional) and the Java Stream API (declarative); a sketch of the compositional style is shown below
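To make the compositional DAG API concrete, here is a minimal sketch of the Kafka-to-JDBC word count pipeline from slide 7. The operator classes (KafkaLinesInput, LineParser, WordFilter, WordCounter, JdbcWordCountOutput), their port names, and their setters are hypothetical stand-ins for library or custom operators; only the DAG wiring and configuration pattern is the point here.

```java
import com.datatorrent.api.DAG;
import com.datatorrent.api.StreamingApplication;
import com.datatorrent.api.annotation.ApplicationAnnotation;
import org.apache.hadoop.conf.Configuration;

// A minimal sketch of an Apex application assembled with the DAG API.
// All operator classes referenced here are hypothetical placeholders.
@ApplicationAnnotation(name = "WordCountDemo")
public class WordCountApplication implements StreamingApplication
{
  @Override
  public void populateDAG(DAG dag, Configuration conf)
  {
    // Operators from the library or custom logic (hypothetical classes)
    KafkaLinesInput kafkaIn = dag.addOperator("kafkaInput", new KafkaLinesInput());
    LineParser parser = dag.addOperator("parser", new LineParser());
    WordFilter filter = dag.addOperator("filter", new WordFilter());
    WordCounter counter = dag.addOperator("counter", new WordCounter());
    JdbcWordCountOutput jdbcOut = dag.addOperator("jdbcOutput", new JdbcWordCountOutput());

    // Configure operator properties (setters are hypothetical)
    kafkaIn.setTopics("lines");
    jdbcOut.setTableName("word_counts");

    // Connect operators with streams to form the application
    dag.addStream("lines", kafkaIn.output, parser.input);
    dag.addStream("words", parser.output, filter.input);
    dag.addStream("filteredWords", filter.output, counter.input);
    dag.addStream("counts", counter.output, jdbcOut.input);
  }
}
```

Scaling and platform attributes would be set on the same DAG object (examples follow with the partitioning and locality slides).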
  • 9. Developing Operators (see the operator sketch below)
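Since this slide carries only the title, here is a rough sketch of what a custom operator looks like, assuming the BaseOperator, DefaultInputPort, and DefaultOutputPort classes from the Apex API (package locations vary slightly across Apex versions). State such as the running count lives in ordinary member variables and is checkpointed by the platform; ports are declared transient because they are re-wired by the engine.

```java
import com.datatorrent.api.DefaultInputPort;
import com.datatorrent.api.DefaultOutputPort;
import com.datatorrent.common.util.BaseOperator;

// Sketch of a custom operator: counts incoming words and emits the running
// total for each tuple. The count is plain member state, checkpointed
// automatically by the engine.
public class WordCounter extends BaseOperator
{
  private long count; // checkpointed member state

  public final transient DefaultOutputPort<Long> output = new DefaultOutputPort<>();

  public final transient DefaultInputPort<String> input = new DefaultInputPort<String>()
  {
    @Override
    public void process(String word)
    {
      count++;
      output.emit(count);
    }
  };
}
```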
  • 10. Operator Library
    ᵒ Messaging: Kafka; JMS (ActiveMQ, …); Kinesis, SQS; Flume, NiFi
    ᵒ File Systems: HDFS/Hive; NFS; S3
    ᵒ Analytics: dimensional aggregations (with state management for historical data + query)
    ᵒ NoSQL: Cassandra, HBase; Aerospike, Accumulo; Couchbase/CouchDB; Redis, MongoDB; Geode
    ᵒ Parsers: XML; JSON; CSV; Avro; Parquet
    ᵒ Protocols: HTTP; FTP; WebSocket; MQTT; SMTP
    ᵒ RDBMS: JDBC; MySQL; Oracle; MemSQL
    ᵒ Transformations: filter, expression, enrich; windowing, aggregation; join; dedup
    ᵒ Other: Elasticsearch; script (JavaScript, Python, R); Solr; Twitter
  • 11. Stateful Processing with Event Time (diagram): an event stream of tuples keyed by k with event times between t=4:00 and t=5:59 is consumed over processing-time intervals (+30s, +60s, +90s); the operator state accumulates counts per key and per event-time window, e.g. after +90s the state holds (All): 5, t=4:00: 2, t=5:00: 3, k=A t=4:00: 2, k=A t=5:00: 1, k=B t=5:00: 2
  • 12. Windowing - Apache Beam Model: event-time and session windows, watermarks, triggers, accumulation mode, allowed lateness, keyed or non-keyed streams, merging streams. Example (high-level Java Stream API):
    ApexStream stream = StreamFactory
        .fromFolder(localFolder)
        .flatMap(new Split())
        .window(new WindowOption.GlobalWindow(),
                new TriggerOption().withEarlyFiringsAtEvery(Duration.millis(1000)).accumulatingFiredPanes())
        .countByKey(new ConvertToKeyVal())
        .print();
  • 13. Fault Tolerance
    ᵒ Operator state is checkpointed to a persistent store: performed automatically by the engine, no additional coding needed; asynchronous and distributed; in case of failure, operators are restarted from checkpointed state
    ᵒ Automatic detection and recovery of failed containers: heartbeat mechanism; YARN process status notification
    ᵒ Buffering to enable replay of data from the recovery point: fast, incremental recovery; spike handling
    ᵒ Application master state is checkpointed: snapshot of the physical (and logical) plan; execution layer change log
  • 14. Checkpointing State: distributed, asynchronous; periodic callbacks; no artificial latency; pluggable storage (a sketch of the checkpoint callbacks follows below)
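The "periodic callbacks" mentioned above are exposed through listener interfaces on the operator. A minimal sketch, assuming the Operator.CheckpointNotificationListener interface from the Apex API (method names as in recent Apex releases; verify against the version in use):

```java
import com.datatorrent.api.Operator;
import com.datatorrent.common.util.BaseOperator;

// Sketch of an operator hooking into the checkpoint lifecycle, e.g. to react
// right before its state is snapshotted or after a checkpoint is committed.
public class CheckpointAwareOperator extends BaseOperator
    implements Operator.CheckpointNotificationListener
{
  private StringBuilder buffer = new StringBuilder(); // checkpointed state

  @Override
  public void beforeCheckpoint(long windowId)
  {
    // called just before the operator state is serialized for windowId
  }

  @Override
  public void checkpointed(long windowId)
  {
    // the checkpoint for windowId has been written to the persistent store
  }

  @Override
  public void committed(long windowId)
  {
    // all operators have checkpointed past windowId; it is safe to purge
    // anything kept only for replay up to this window
    buffer.setLength(0);
  }
}
```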
  • 15. Buffer Server & Recovery: an in-memory pub/sub that stores results until committed, provides backpressure with spillover to disk, and guarantees ordering and idempotency. (Diagram: Operator 1 publishes through a buffer server in Container 1 on Node 1 to Operator 2 in Container 2 on Node 2; on failure, downstream operators are reset and replay from the buffer.) Pipelines are independent and can be used for speculative execution
  • 16. Recovery Scenario (diagram): a stream of tuples with begin/end window markers (… EW2, 1, 3, BW2, EW1, 4, 2, 1, BW1) is consumed by a summing operator; its state progresses sum 0 → sum 7 (checkpoint at the end of window 1) → sum 10, and after a failure it is restored to sum 7 and the buffered window 2 tuples are replayed
  • 17. Processing Guarantees
    ᵒ At-least-once: on recovery, data is replayed from a previous checkpoint; no messages are lost; the default, suitable for most applications. Can be used to ensure data is written only once to a store, via transactions with meta information, rewinding output, feedback from an external entity, or idempotent operations
    ᵒ At-most-once: on recovery, the latest data is made available to the operator; useful where some data loss is acceptable and the latest data is sufficient
    ᵒ Exactly-once: at-least-once processing + idempotency + transactional mechanisms (operator logic) to achieve end-to-end exactly-once behavior
    (A sketch of selecting the processing mode per operator follows below.)
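At-least-once is the default; the at-most-once behavior described above maps to a per-operator attribute. A minimal sketch, assuming the OperatorContext.PROCESSING_MODE attribute and the Operator.ProcessingMode enum from the Apex API, and reusing the WordCounter sketch from the earlier slide:

```java
import com.datatorrent.api.Context.OperatorContext;
import com.datatorrent.api.DAG;
import com.datatorrent.api.Operator.ProcessingMode;
import com.datatorrent.api.StreamingApplication;
import org.apache.hadoop.conf.Configuration;

public class ProcessingModeExample implements StreamingApplication
{
  @Override
  public void populateDAG(DAG dag, Configuration conf)
  {
    // WordCounter is the sketch operator from the "Developing Operators" slide.
    WordCounter counter = dag.addOperator("counter", new WordCounter());

    // Relax the guarantee where some loss is acceptable and only the latest
    // data matters; AT_LEAST_ONCE is the default.
    dag.setAttribute(counter, OperatorContext.PROCESSING_MODE, ProcessingMode.AT_MOST_ONCE);
  }
}
```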
  • 18. End-to-End Exactly Once
    ᵒ Important when writing to external systems: data should not be duplicated or lost in the external system in case of application failures
    ᵒ Common external systems: databases, files, message queues
    ᵒ Exactly-once results = at-least-once + idempotency + consistent state
    ᵒ Data duplication must be avoided when data is replayed from a checkpoint: operators implement the logic, which depends on the external system; the platform provides checkpointing and repeatable windowing (see the sketch below)
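One common way to get the "transactions with meta information" pattern is to store the last fully written streaming window ID in the target database in the same transaction as the data, and skip any window already covered by it when windows are replayed after recovery. This is a simplified sketch of that idea, not the Malhar transactional output operator itself; the JDBC URL, table names, and SQL are illustrative.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

import com.datatorrent.api.Context.OperatorContext;
import com.datatorrent.api.DefaultInputPort;
import com.datatorrent.common.util.BaseOperator;

// Sketch of an idempotent JDBC writer: the tuples of a window are written in
// one transaction together with the window id, so a window replayed after
// recovery is detected and skipped instead of written twice.
public class IdempotentJdbcOutput extends BaseOperator
{
  private String jdbcUrl = "jdbc:..."; // illustrative
  private transient Connection connection;
  private transient List<String> windowTuples;
  private transient long currentWindowId;
  private transient long lastCommittedWindowId;

  public final transient DefaultInputPort<String> input = new DefaultInputPort<String>()
  {
    @Override
    public void process(String tuple)
    {
      windowTuples.add(tuple);
    }
  };

  @Override
  public void setup(OperatorContext context)
  {
    windowTuples = new ArrayList<>();
    try {
      connection = DriverManager.getConnection(jdbcUrl);
      connection.setAutoCommit(false);
      // Recover the id of the last window whose data made it into the store.
      try (PreparedStatement ps = connection.prepareStatement(
          "SELECT MAX(window_id) FROM committed_windows"); // illustrative meta table
          ResultSet rs = ps.executeQuery()) {
        lastCommittedWindowId = rs.next() ? rs.getLong(1) : -1L;
      }
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

  @Override
  public void beginWindow(long windowId)
  {
    currentWindowId = windowId;
    windowTuples.clear();
  }

  @Override
  public void endWindow()
  {
    if (currentWindowId <= lastCommittedWindowId) {
      return; // replayed window: already in the database, do not write again
    }
    try {
      try (PreparedStatement insert = connection.prepareStatement(
          "INSERT INTO words (word) VALUES (?)")) {            // illustrative
        for (String t : windowTuples) {
          insert.setString(1, t);
          insert.addBatch();
        }
        insert.executeBatch();
      }
      try (PreparedStatement meta = connection.prepareStatement(
          "INSERT INTO committed_windows (window_id) VALUES (?)")) {
        meta.setLong(1, currentWindowId);
        meta.executeUpdate();
      }
      connection.commit(); // data + window id are committed atomically
      lastCommittedWindowId = currentWindowId;
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }
}
```

Because replay from a checkpoint is idempotent at the window level, this combination of at-least-once replay and a transactional window marker is what yields end-to-end exactly-once output.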
  • 19. Scalability (diagrams): a logical DAG 0 → 1 → 2 maps to a physical DAG in which operator 1 runs as three partitions whose outputs are merged by a unifier before operator 2. With NxM partitioning of a DAG 0 → 1 → 2 → 3, e.g. operator 1 as (1a, 1b, 1c) and operator 2 as (2a, 2b), a single intermediate unifier becomes a bottleneck; placing a unifier in front of each downstream partition removes the bottleneck
  • 20. Advanced Partitioning (diagrams): with parallel partitioning, a pipeline 0 → 1 → 2 → 3 → 4 can run as independent parallel chains (1a → 2a → 3a and 1b → 2b → 3b) that are unified only before operator 4. Cascading unifiers: for an upstream operator with N = 4 partitions (uopr1–uopr4) feeding a single downstream operator dopr (M = 1), one unifier container must read from every partition's NIC; with K = 2 cascading unifiers, the merge is done in stages across containers. (A partitioning configuration sketch follows below.)
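The partition counts and parallel-partition layout sketched above are typically configured through DAG attributes. A minimal sketch, assuming the StatelessPartitioner class and the PARTITIONER / PARTITION_PARALLEL attributes from the Apex API (names may differ slightly by version), and reusing the hypothetical word-count operators:

```java
import com.datatorrent.api.Context.OperatorContext;
import com.datatorrent.api.Context.PortContext;
import com.datatorrent.api.DAG;
import com.datatorrent.api.StreamingApplication;
import com.datatorrent.common.partitioner.StatelessPartitioner;
import org.apache.hadoop.conf.Configuration;

public class PartitioningExample implements StreamingApplication
{
  @Override
  public void populateDAG(DAG dag, Configuration conf)
  {
    // LineParser is hypothetical; WordCounter is the earlier sketch operator.
    LineParser parser = dag.addOperator("parser", new LineParser());
    WordCounter counter = dag.addOperator("counter", new WordCounter());

    // Run three partitions of the parser (static partitioning).
    dag.setAttribute(parser, OperatorContext.PARTITIONER,
        new StatelessPartitioner<LineParser>(3));

    // Parallel partitioning: give each parser partition its own counter
    // instance instead of funneling everything through a single unifier.
    dag.setInputPortAttribute(counter.input, PortContext.PARTITION_PARALLEL, true);

    dag.addStream("words", parser.output, counter.input);
  }
}
```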
  • 21. Dynamic Partitioning (diagram: operator 2 grows from two partitions (2a, 2b) to four (2a–2d) while the application runs, and operator 3 is split into 3a and 3b; unifiers not shown)
    ᵒ Partitioning can change while the application is running: change the number of partitions at runtime based on stats; determine the initial number of partitions dynamically
    ᵒ Kafka operators scale according to the number of Kafka partitions
    ᵒ Supports redistribution of state when the number of partitions changes
    ᵒ API for a custom scaler or partitioner
  • 22. How dynamic partitioning works
    ᵒ Partitioning decision (yes/no) is made by a trigger (StatsListener): a pluggable component that can use any system or custom metric; partitioning can also be driven externally (example: KafkaInputOperator)
    ᵒ Stateful: uses checkpointed state; ability to transfer state from old to new partitions (partitioner, customizable)
    ᵒ Steps: call the partitioner; modify the physical plan and rewrite checkpoints as needed; undeploy old partitions from the execution layer; release/request container resources; deploy new partitions (from the rewritten checkpoints)
    ᵒ No loss of data (buffered); incremental operation, partitions that don't change continue processing
    ᵒ API: the Partitioner interface (a sketch of a StatsListener trigger follows below)
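A rough sketch of the trigger side, assuming the StatsListener interface from the Apex API; the latency threshold and scaling policy are invented for illustration, the exact stats accessor and Response field names should be checked against the StatsListener javadoc for your Apex version, and a real implementation would also adjust the partition set through the Partitioner interface.

```java
import java.io.Serializable;

import com.datatorrent.api.StatsListener;

// Sketch of a pluggable trigger: ask the engine to repartition when the
// observed latency of this logical operator crosses a (made-up) threshold.
public class LatencyBasedScaler implements StatsListener, Serializable
{
  private static final long LATENCY_THRESHOLD_MS = 500; // illustrative

  @Override
  public Response processStats(BatchedOperatorStats stats)
  {
    Response response = new Response();
    response.repartitionRequired = stats.getLatencyMA() > LATENCY_THRESHOLD_MS;
    return response;
  }
}
```

Such a listener would typically be attached to a logical operator via the STATS_LISTENERS operator attribute when the DAG is populated.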
  • 23. Compute Locality
    ᵒ By default operators are distributed on different nodes in the cluster; they can be collocated on a host, container, or thread basis for efficiency
    ᵒ Locality levels: default (serialization + IPC), HOST (serialization, loopback), CONTAINER (in-process queue), THREAD (call stack)
    ᵒ Host locality: operators can be deployed on specific hosts
    ᵒ (Anti-)affinity: ability to express relative deployment without specifying a host
    (A sketch of setting stream locality follows below.)
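Locality is typically expressed on the stream when the DAG is wired up. A minimal sketch, assuming the DAG.Locality enum and the setLocality method on the stream returned by addStream, and reusing the hypothetical word-count operators from the earlier sketches:

```java
import com.datatorrent.api.DAG;
import com.datatorrent.api.DAG.Locality;
import com.datatorrent.api.StreamingApplication;
import org.apache.hadoop.conf.Configuration;

public class LocalityExample implements StreamingApplication
{
  @Override
  public void populateDAG(DAG dag, Configuration conf)
  {
    LineParser parser = dag.addOperator("parser", new LineParser()); // hypothetical
    WordCounter counter = dag.addOperator("counter", new WordCounter());

    // Default is no locality constraint (separate containers, serialization + IPC).
    // CONTAINER_LOCAL keeps both operators in one process (in-process queue);
    // THREAD_LOCAL goes further and runs them on the same call stack.
    dag.addStream("words", parser.output, counter.input)
        .setLocality(Locality.CONTAINER_LOCAL);
  }
}
```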
  • 24. Compute Locality benchmark (throughput in bytes/s by message size):
    Message size (bytes)   default locality   CONTAINER_LOCAL   THREAD_LOCAL
    64                     59,176,512         204,748,032       2,480,432,448
    128                    89,803,904         395,023,360       3,662,684,672
    256                    137,019,648        671,409,664       5,218,227,968
    512                    156,255,744        1,255,749,632     4,416,738,304
    1024                   167,139,328        2,022,868,992     3,423,519,744
    2048                   182,349,824        3,508,013,056     4,050,688,000
    4096                   255,229,952        3,732,725,760     3,884,101,632
    Source: https://www.datatorrent.com/blog/blog-apex-performancebenchmark/
  • 25. Performance: Throughput vs. Latency? References: https://yahooeng.tumblr.com/post/135321837876/benchmarking-streaming-computation-engines-at and http://data-artisans.com/extending-the-yahoo-streaming-benchmark/
  • 26. High Throughput and Low Latency: Apex and Flink with 4 Kafka brokers reach 2.7 million events/second (the Kafka latency limit); Apex without Kafka and Redis reaches 43 million events/second, with more than 90 percent of events processed at a latency below 0.5 seconds. https://www.datatorrent.com/blog/throughput-latency-and-yahoo/
  • 27. Recent Additions & Roadmap: declarative Java API; windowing semantics following the Beam model; scalable state management; SQL support using Apache Calcite; Apache Beam runner and SAMOA integration; enhanced support for batch processing; support for Mesos; encrypted streams; Python support for operator logic and API; replacing operator code at runtime; dynamic attribute changes; named checkpoints
  • 28. DataTorrent Product
  • 29. Monitoring Console: logical view and physical view (screenshots)
  • 30. Real-Time Dashboards (screenshot)
  • 31. Who is using Apex?
    ᵒ Powered by Apex: http://apex.apache.org/powered-by-apex.html
    ᵒ Also using Apex? Let us know to be added: users@apex.apache.org or @ApacheApex
    ᵒ PubMatic: https://www.youtube.com/watch?v=JSXpgfQFcU8
    ᵒ GE: https://www.youtube.com/watch?v=hmaSkXhHNu0 and http://www.slideshare.net/ApacheApex/ge-iot-predix-time-series-data-ingestion-service-usingapache-apex-hadoop
    ᵒ Silver Spring Networks: https://www.youtube.com/watch?v=8VORISKeSjI and http://www.slideshare.net/ApacheApex/iot-big-data-ingestion-and-processing-in-hadoop-by-silver-spring-networks
  • 32. Maximize Revenue with Real-Time Insights
    PubMatic is the leading marketing automation software company for publishers. Through real-time analytics, yield management, and workflow automation, PubMatic enables publishers to make smarter inventory decisions and improve revenue performance.
    ᵒ Business Need: ingest and analyze a high volume of clicks & views in real time to help customers improve revenue (200K events/second data flow); report critical metrics for campaign monetization from auction and client logs (22 TB/day of data generated); handle ever-increasing traffic with efficient resource utilization; always-on ad network with a feedback loop for the ad server
    ᵒ Apex-based Solution: DataTorrent Enterprise platform, powered by Apache Apex; in-memory stream processing; comprehensive library of pre-built operators including connectors; built-in fault tolerance; dynamically scalable; real-time query from in-memory state; management UI & data visualization console
    ᵒ Client Outcome: helps PubMatic deliver ad performance insights to publishers and advertisers in real time instead of 5+ hours later; helps publishers visualize campaign performance and adjust ad inventory in real time to maximize revenue; enables PubMatic to reduce OPEX with efficient compute resource utilization; built-in fault tolerance ensures customers can always access the ad network
  • 33. Industrial IoT Applications
    GE is dedicated to providing advanced IoT analytics solutions to thousands of customers who are using its devices and sensors across different verticals. GE has built a sophisticated analytics platform, Predix, to help its customers develop and execute Industrial IoT applications and gain real-time insights as well as actions.
    ᵒ Business Need: ingest and analyze high-volume, high-speed data from thousands of devices and sensors per customer in real time without data loss; predictive analytics to reduce costly maintenance and improve customer service; unified monitoring of all connected sensors and devices to minimize disruptions; fast application development cycle; high scalability to meet changing business and application workloads
    ᵒ Apex-based Solution: ingestion application using the DataTorrent Enterprise platform, powered by Apache Apex; in-memory stream processing; built-in fault tolerance; dynamic scalability; comprehensive library of pre-built operators; management UI console
    ᵒ Client Outcome: helps GE improve performance and lower cost by enabling real-time Big Data analytics; helps GE detect possible failures and minimize unplanned downtime with centralized management & monitoring of devices; enables faster innovation with a short application development cycle; no data loss and 24x7 availability of applications; helps GE adjust to scalability needs with auto-scaling
  • 34. Smart Energy Applications
    Silver Spring Networks helps global utilities and cities connect, optimize, and manage smart energy and smart city infrastructure. Silver Spring Networks receives data from over 22 million connected devices and conducts 2 million remote operations per year.
    ᵒ Business Need: ingest high-volume, high-speed data from millions of devices & sensors in real time without data loss; make data accessible to applications without delay to improve customer service; capture & analyze historical data to understand & improve grid operations; reduce the cost, time, and pain of integrating with 3rd-party apps; centralized management of software & operations
    ᵒ Apex-based Solution: DataTorrent Enterprise platform, powered by Apache Apex; in-memory stream processing; pre-built operators/connectors; built-in fault tolerance; dynamically scalable; management UI console
    ᵒ Client Outcome: helps Silver Spring Networks ingest & analyze data in real time for effective load management & customer service; helps detect possible failures and reduce outages with centralized management & monitoring of devices; enables fast application development for faster time to market; helps scale with easy-to-partition operators; automatic recovery from failures
  • 35. Q&A
  • 36. Curious?
    ᵒ http://apex.apache.org/
    ᵒ Learn more: http://apex.apache.org/docs.html
    ᵒ Getting involved: http://apex.apache.org/community.html
    ᵒ Download: http://apex.apache.org/downloads.html
    ᵒ Follow @ApacheApex: https://twitter.com/apacheapex
    ᵒ Meetups: https://www.meetup.com/topics/apache-apex/
    ᵒ Examples: https://github.com/DataTorrent/examples
    ᵒ Slideshare: http://www.slideshare.net/ApacheApex/presentations
    ᵒ Videos: https://www.youtube.com/results?search_query=apache+apex
    ᵒ Free Enterprise License for Startups: https://www.datatorrent.com/product/startup-accelerator/