
Key Features
Integration with a centralized monitoring system
Node discovery and tracking
Auto configuration of the elasticsearch.yml file

Server Count | Precision | Recall | F-score
1960 | 93% | 87% | 90%
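As a quick consistency check, the F-score is the harmonic mean of the precision and recall above: 2 × (0.93 × 0.87) / (0.93 + 0.87) ≈ 0.90, i.e. 90%.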
--- openjdk8clean/hotspot/src/cpu/x86/vm/x86_64.ad 2014-03-04 02:52:11.000000000 +0000
+++ openjdk8/hotspot/src/cpu/x86/vm/x86_64.ad 2014-11-08 01:10:49.686044933 +0000
@@ -166,10 +166,9 @@
// 3) reg_class stack_slots( /* one chunk of stack-based "registers" */ )
//
-// Class for all pointer registers (including RSP)
+// Class for all pointer registers (including RSP, excluding RBP)
reg_class any_reg(RAX, RAX_H,
RDX, RDX_H,
- RBP, RBP_H,
RDI, RDI_H,
RSI, RSI_H,
RCX, RCX_H,
... and then fixing the function prologues to store the stack pointer (rsp) into the frame pointer (base pointer) register (rbp):
--- openjdk8clean/hotspot/src/cpu/x86/vm/macroAssembler_x86.cpp 2014-03-04 02:52:11.000000000 +0000
+++ openjdk8/hotspot/src/cpu/x86/vm/macroAssembler_x86.cpp 2014-11-07 23:57:11.589593723 +0000
@@ -5236,6 +5236,7 @@
// We always push rbp, so that on return to interpreter rbp, will be
// restored correctly and we can correct the stack.
push(rbp);
+ mov(rbp, rsp);
// Remove word for ebp
framesize -= wordSize;
It worked. Here are the before and after flame graphs. Brendan posted it, with example flame graphs, to the hotspot compiler devs mailing list. This feature request became JDK-8068945 for JDK9 and JDK-8072465 for JDK8.

The current version of perf-map-agent can be loaded on demand, after Java is running.
apt-get install cmake
export JAVA_HOME=/path-to-your-new-jdk8
git clone --depth=1 https://github.com/jrudolph/perf-map-agent
cd perf-map-agent
cmake .
make
git clone --depth=1 https://github.com/brendangregg/FlameGraph
This contains stackcollapse-perf.pl, for processing perf_events profiles, and flamegraph.pl, for generating the SVG flame graph.
sudo perf record -F 99 -a -g -- sleep 30
java -cp attach-main.jar:$JAVA_HOME/lib/tools.jar net.virtualvoid.perf.AttachOnce 1690 # run as same user as java
sudo chown root /tmp/perf-*.map
sudo perf script | stackcollapse-perf.pl | \
flamegraph.pl --color=java --hash > flamegraph.svg
The attach-main.jar file is from perf-map-agent, and stackcollapse-perf.pl and flamegraph.pl are from FlameGraph. Specify their full paths unless they are in the current directory.
sudo perf record -F 99 -a -g -- sleep 30; sudo jmaps
sudo perf script | stackcollapse-perf.pl | \
flamegraph.pl --color=java --hash > flamegraph.svg
jmaps creates symbol files for all Java processes, with root ownership. You may want to write a similar “jmaps” helper for your environment (our jmaps example is unsupported). Remember to clean up the /tmp symbol files when you no longer need them!
The output of stackcollapse-perf.pl formats each stack as a single line, and is great food for grep/sed/awk. For the flamegraph at the top of this post, we used the above procedure, and added “| grep java-339” before the “| flamegraph.pl”, to isolate that one process. You could also use a “| grep -v cpu_idle”, to exclude the kernel idle threads.
sudo perf record -F 99 -a -g -- sleep 30; sudo jmaps
sudo perf script -f comm,pid,tid,cpu,time,event,ip,sym,dso,trace | \
stackcollapse-perf.pl --pid | \
flamegraph.pl --color=java --hash > flamegraph.svg
Simply put, performance matters. We know members want to immediately start browsing or watching their favorite content and have found that faster startup leads to more satisfying usage. So, when building the long-awaited update to netflix.com, the Website UI Engineering team made startup performance a first tier priority.
This effort netted a 70% reduction in startup time and focused on three key areas:
The netflix.com legacy website stack had a hard separation between server markup and client enhancement. This was primarily due to the different programming languages used in each part of our application. On the server, there was Java with Tomcat, Struts and Tiles. On the browser client, we enhanced server-generated markup with JavaScript, primarily via jQuery.
This separation led to undesirable results in our startup time. Every time a visitor came to any page on netflix.com our Java tier would generate the majority of the response needed for the entire page's lifetime and deliver it as HTML markup. Often, users would be waiting for the generation of markup for large parts of the page they would never visit.
Our new architecture renders only a small amount of the page's markup, bootstrapping the client view. We can easily change the amount of the total view the server generates, making it easy to see the positive or negative impact. The server requires less data to deliver a response and spends less time converting data into DOM elements. Once the client JavaScript has taken over, it can retrieve all additional data for the remainder of the current and future views of a session on demand. The large wins here were the reduction of processing time in the server, and the consolidation of the rendering into one language.
We find the flexibility afforded by server and client rendering allows us to make intelligent choices of what to request and render in the server and the client, leading to a faster startup and a smoother transition between views.
In order to support identical rendering on the client and server, we needed to rethink our rendering pipeline. Our previous architecture's separation between the generation of markup on the server and the enhancement of it on the client had to be dropped.
Three large pain points shaped our new Node.js architecture:
There are many solutions to this problem that don't require Universal JavaScript, but we found this lesson most compelling: when there are two copies of the same thing, it's fairly easy for one to drift out of sync with the other. Using Universal JavaScript means the rendering logic is simply passed down to the client.
Node.js and React.js are natural fits for this style of application. With Node.js and React.js, we can render from the server and subsequently render changes entirely on the client after the initial markup and React.js components have been transmitted to the browser. This flexibility allows the application to render exactly the same output regardless of where the rendering happens. The hard separation is no longer present, and it's far less likely for the server and client to diverge from one another.
Without shared rendering logic we couldn't have realized the potential of rendering only what was necessary on startup and everything else as data became available.
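As a rough sketch of this shared-rendering idea (the Home component, its greeting prop, and the render helpers below are hypothetical, not Netflix's actual code), the same React component can be rendered to a string on the Node.js server and then hydrated in the browser:

import * as React from "react";
import { renderToString } from "react-dom/server";
import { hydrate } from "react-dom";

// A hypothetical view component; it renders identically on server and client.
function Home({ greeting }: { greeting: string }) {
  return <p>{greeting}</p>;
}

// Server (Node.js): emit only the small bootstrap markup for the initial view.
export function renderOnServer(): string {
  return renderToString(<Home greeting="Welcome back" />);
}

// Client: the same component takes over the server-rendered markup and then
// handles all subsequent rendering and data fetching in the browser.
export function renderOnClient(container: Element): void {
  hydrate(<Home greeting="Welcome back" />, container);
}

Because the output is identical in both places, the client can safely pick up exactly where the server left off.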
Building rich interactive experiences on the web often translates into a large JavaScript payload for users. In our new architecture, we placed significant emphasis on pruning large dependencies we could knowingly replace with smaller modules and on delivering only the JavaScript applicable to the current visitor.
Many of the large dependencies we relied on in the legacy architecture didn't apply in the new one. We replaced them with newer, more efficient libraries, resulting in a much smaller JavaScript payload, meaning members need less JavaScript to start browsing. We know there is significant work remaining here, and we're actively working to trim our JavaScript payload down further.
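One way to deliver JavaScript only to the visitors who need it is to load optional views on demand. This is a sketch only; the "./views/member-profile" module and renderMemberProfile function are made-up names for illustration:

// Load an optional view on demand instead of bundling it into the startup payload.
export async function showMemberProfile(container: Element): Promise<void> {
  const { renderMemberProfile } = await import("./views/member-profile");
  renderMemberProfile(container);
}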
In order to test and understand the impact of our choices, we monitor a metric we call time to interactive (tti).
This is the amount of time between the first known startup of the application platform and the point at which the UI is interactive, regardless of view. Note that the UI does not need to be done loading; this is simply the first point at which the customer can interact with the UI using an input device.
For applications running inside a web browser, this data is easily retrievable from the Navigation Timing API (where supported).
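A minimal sketch of that measurement, assuming the application records its own "interactive" moment and that send is a placeholder for whatever beaconing mechanism is in use (this is not Netflix's actual instrumentation):

// Report a TTI-style measurement using the Navigation Timing API.
// navigationStart marks the first known startup of the page load; the app
// calls this as soon as its UI can respond to input, which may be well
// before the page has finished loading.
export function reportTimeToInteractive(send: (ttiMs: number) => void): void {
  const start = performance.timing.navigationStart; // ms since epoch
  const tti = Date.now() - start;                    // elapsed ms
  send(tti);
}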
We firmly believe high performance is not an optional engineering goal – it's a requirement for creating great user experiences. We have made significant strides in startup performance, and are committed to challenging our industry's best practices in the pursuit of a better experience for our members.
Over the coming months we'll be investigating Service Workers, asm.js, WebAssembly, and other emerging web standards to see if we can leverage them for a more performant website experience. If you're interested in helping create and shape the next generation of performant web user experiences, apply here.
# Required dependencies
require(randomForest)
require(gbm)
require(pmml)
require(XML)

data(iris)

# Column names must NOT contain periods
names(iris) <- gsub("\\.", "_", tolower(names(iris)))

# Build models (tolower() above renamed the response column to "species")
iris.rf  <- randomForest(species ~ ., data=iris, ntree=5)
iris.gbm <- gbm(species ~ ., data=iris, n.trees=5)

# Convert each model to PMML and write it to a file
saveXML(pmml(iris.rf), file="~/iris.rf.xml")
saveXML(pmml(iris.gbm, n.trees=5), file="~/iris.gbm.xml")
REGISTER '~/scoring.jar';
DEFINE pmmlRF  com.netflix.pmml.ScorePMML('~/iris.rf.xml');
DEFINE pmmlGBM com.netflix.pmml.ScorePMML('~/iris.gbm.xml');

-- Load data
iris = load '~/iris.csv' using PigStorage(',')
       as (sepal_length, sepal_width, petal_length, petal_width, species);

-- Score two models in one pass over the data
scored = foreach iris generate pmmlRF(*) as RF, pmmlGBM(*) as GBM;
dump scored;