Friday, March 8, 2013

Why I joined Arista Networks

Over the past few months, many people have asked me why I jumped from the "web world" to the "network industry" to work at Arista Networks.  I asked myself this question more than once, and it was a bit of a leap of faith, but here's why I did it, and why I'm happy I did it.

Choosing a company to work for

There is a negative unemployment rate in Silicon Valley, provided you know how to type on a keyboard.  It's ridiculous: all the tech companies are hiring like there's no tomorrow.  So needless to say, when the time came to make a move, I had too many options available to me.  It's not easy to decide where you want to spend the next X years of your life.

My #1 requirement for my next job was to work with great people.  That ranked above salary, likelihood of company success, and possibly even location (although I really wanted to try to stay in SF).  I wanted to feel like I did at Google, where I could look around and assume that all the engineers I didn't know were smarter than me, because most of them were.  I could have returned to Google too, but I was after something new.

I quickly wound up with 3 really good offers: one from CloudFlare, which is out to kick the butt of the big CDNs; one from Twitter, which you know already; and one from this datacenter networking company called Arista.  The first two were to work on interesting, large-scale distributed systems.  But the last one was different.

Why did I interview with Arista?

So why did I decide to interview with Arista in the first place?  In November 2010, I was shopping for datacenter networking gear to rebuild an entire network from scratch.  I heard about Arista and quickly realized that their switches and software architecture were exactly what I'd already been looking for over the previous year (basically since I left Google).  We ended up buying Arista, and I was a happy customer for about 2 years, until I joined them.

I don't like interacting with most vendors.  Most of them want to take you out to lunch or ball games, or invite you to useless events to brainwash you with sales pitches.  But my relationship with Arista was good; the people we interacted with on the sales and SE side were absolutely stellar.  In April 2011, they invited me to an event they regularly hold, a "Customer Exchange", at their HQ.  I wasn't convinced it would be a good use of my time, but I decided to give it a shot and RSVPed yes.

I remember coming home that April evening and telling my wife "wow, if I were looking for a job, I'd definitely consider Arista".  The event was entirely bullshit-free, and I got to meet the exec team, who blew me away.  If you know me, you know I'm not easily impressed, but that day I was really unsettled by what I'd seen.  I didn't want to change jobs then, so I tried to get over it.

Over the following year, I went to their 2 subsequent Customer Exchanges, and each time I came back with that same feeling of "darn, these guys are awesome".  I mean, I already knew the product, why it was good, and what its problems, limitations, and areas for improvement were, because I used it daily.  I knew the roadmap, so it was clear to me where the company was headed (unfortunately, I couldn't say the same about Twitter).  Everybody – mark my words – everybody I had met so far at Arista, without exception, was stellar: support (TAC), sales, a handful of engineers, all their execs and virtually all VPs, marketing, bizdev, etc.

So I decided to give it a shot and interview with them, and see where that would take me.

What's the deal with Arista's people?

Arista isn't your typical Silicon Valley company.  First of all, it doesn't have any outside investors: the company was entirely funded by its founders, something quite unusual around the Valley, doubly so for a company that sells hardware.  And by the way, Arista isn't a hardware company; there are 3 times more software engineers than hardware engineers.  Sure, we do some really cool stuff on the hardware side, and our hardware engineers are really pushing the envelope, allowing us to build switches that run faster and in a smaller footprint than competitors using the same chips.  But most of the effort and investment, and ultimately what really makes the difference, is in the software.

Let's take a look at the three founders, maybe you'll start to get a sense of why I speak so highly of Arista's people.

Andy Bechtolsheim, co-founder of Sun Microsystems, is one of the legends of Silicon Valley.  He's one of those brains who put together hardware designs, except he seems to do so one or two years ahead of everybody else.  I always loved his talks at the Arista Customer Exchange, as they gave me a glimpse of how technology was going to evolve over the next few years, a glimpse into the future.  Generally he was right, although some of his predictions took more time than anticipated to materialize.
Andy is truly passionate about this stuff, and he seems to have a special interest in optical technologies (e.g. 100Gbps transceivers and such).  He brings the German touch to our hardware engineering: efficiency.  :)

Then there is David Cheriton, professor at Stanford, who isn't on his first venture with Andy.  The two had founded Granite Systems in '95, which was acquired by Cisco just about a year later, for over $200M.  This apparently made David a bit of a celebrity at Stanford, and in '98 two students called Larry & Sergey sought his advice to start their company, a search engine for the web.  David invited them over to talk about their project, and also invited Andy.  They liked the idea so much that they each wrote a $100k check to start Google.  That 2x$100k investment alone yielded a 10000x return, so now you know why Arista didn't need to raise any money :)
David is passionate about software engineering & distributed systems, and it should be no surprise that virtually all of Arista's software is built upon a framework that came out of David's work.

Last but not least, Ken Duda, who isn't new to the Arista gang either, as he was the first employee at Granite in '95.  Ken is one of the most brilliant software engineers I've ever met.  Other traits he shares with Andy and David: super low key, very pragmatic, visionary, incredibly intelligent, truly passionate about what he's doing.  So passionate, in fact, that when Arista hosted a 24h-long hackathon (Hack-a-Switch), he was eager to stay with us all night long to hack on some code (to be fair, I think he slept about 2 hours on a beanbag).  I will always remember the WTF moment we had around 5am with some JavaScript idiosyncrasy in the web interface we were building; that was epic (when you're tired...).
Not only is Ken one of those extraordinary software engineers, he's also one of the best leaders I've met, and I'm glad he's our CTO, as he's pushing things in the right direction.

Of course, it's not all about those three guys.  What's even more amazing about Arista is that our VPs of engineering are like that too.  The "management layer" is fairly thin, with only a handful of VPs in engineering and a handful of managers who were promoted on merit, and that "management layer", if I dare call it that, is one of the most technically competent and apt to drive a tech company that I've ever seen.

I would also like to point out that our CEO is a woman, which, unfortunately, is still unusual for a tech company.  It's a coincidence that today is International Women's Day, but let me just say that there is a reason why Jayshree Ullal frequently ranks high in lists such as "Top X most influential executives", "Top X most powerful people in technology", etc.  Like everybody else at Arista, she has a very deep understanding of the industry, our technology, what we're building, how we're building it, and where we should be going next.

Heck, even our VP of marketing, Doug Gourlay, could be VP of engineering or CTO at other tech companies.  I remember the first time I saw him, at the first Arista Customer Exchange: I couldn't help but think "here comes the marketing guy".  But not only did his talk make a lot of sense, he also explained why the way we configure networks today sucks and how it could be done better, and he was spot on.  For a moment I thought he was just really good at talking about something he didn't genuinely understand, a common trait of alluring VPs of marketing, but as he kept talking and correctly answering questions, no matter how technical, it became obvious that he knew exactly what he was talking about.  Mind=blown.

Company culture

Hack-a-switch
So we have a bunch of tech leaders, some of the sharpest minds in this industry, who are all passionate, low-key, and want to build the best datacenter networking gear out there.  This has a profound impact on company culture, and Doug made something click in my mind not so long ago: company culture is a lasting competitive advantage.  Company culture is what enables you to hire, design, build, drive, and ship a product in one way vs another.  It's incredibly important.

Arista's culture is open: "do the right thing", "if you see something wrong/broken, fix it because you can".  A lot like Google.  No office drama – and yes, Silicon Valley startups tend to have a fair bit of office drama.  Ken is particularly sensitive to all the bullshit things you typically see in management: ridiculous processes (e.g. Cisco's infamous "manage out the bottom 10% performers in your organization"), red tape, and other stupid, unproductive things.  So this simply doesn't exist at Arista.

One of the striking peculiarities of the engineering culture at Arista, one I haven't seen anywhere else (not saying it doesn't exist elsewhere, just that I personally never came across it), is that teams aren't well-defined groups.  Teams form and dissolve as projects come and go.  People gravitate toward the projects they're interested in, and those who end up working together on a particular project make up its de facto team for the duration of that project.  Then they move along and go do something else with other people.  It's incredibly flexible.

So all in all, I'm very happy I joined Arista, although I'm sure I would have had a lot of fun with my friends over at Twitter or CloudFlare too.  There are a lot of very exciting things happening right now, and a lot of cool challenges to be tackled ahead of us.

Additional disclaimer for this post: the views expressed in this blog are my own, and Arista didn't review/approve/endorse anything I wrote here.

Wednesday, February 6, 2013

Google uses captcha to improve StreetView image recognition

I just stumbled on one of these for the first time:
[captcha image]
Here's another one:
[captcha image]
These were on some Blogger blogs. It looks like Google is using captchas to help improve StreetView's address extraction quality.

Sunday, January 27, 2013

Using debootstrap with grsec

If you attempt to use debootstrap with grsec (more specifically with a kernel compiled with CONFIG_GRKERNSEC_CHROOT_MOUNT=y), you may see it bail out because of this error:
W: Failure trying to run: chroot path/to/root mount -t proc proc /proc
One way to work around this is to bind-mount procfs into the new chroot.  Just apply the following patch before running debootstrap:
--- /usr/share/debootstrap/functions.orig       2013-01-27 02:05:55.000000000 -0800
+++ /usr/share/debootstrap/functions    2013-01-27 02:06:39.000000000 -0800
@@ -975,12 +975,12 @@
                umount_on_exit /proc/bus/usb
                umount_on_exit /proc
                umount "$TARGET/proc" 2>/dev/null || true
-               in_target mount -t proc proc /proc
+               sudo mount -o bind /proc "$TARGET/proc"
                if [ -d "$TARGET/sys" ] && \
                   grep -q '[[:space:]]sysfs' /proc/filesystems 2>/dev/null; then
                        umount_on_exit /sys
                        umount "$TARGET/sys" 2>/dev/null || true
-                       in_target mount -t sysfs sysfs /sys
+                       sudo mount -o bind /sys "$TARGET/sys"
                fi
                on_exit clear_mtab
                ;;
As a side note, a minbase chroot of Precise (12.04 LTS) takes only 142MB of disk space.

Friday, November 9, 2012

Sudden large increases in MySQL slave lag caused by clock drift

Just in case this ever helps anyone else, I had a machine where slave lag (as reported by Seconds_Behind_Master in SHOW SLAVE STATUS) would sometimes suddenly jump to 7 hours and then come back, and jump again, and come back.


Turns out, the machine's clock was off by 7 hours and nobody had noticed!  Even after fixing NTP synchronization the issue remained; I suspect that MySQL keeps a base timestamp in memory that was still off by 7 hours.

The fix was to STOP SLAVE; START SLAVE;
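
For what it's worth, here's a minimal sketch of the kind of check that would have caught this: it compares the slave's clock against the local one and reports the slave's lag.  This is just a sketch; it assumes the third-party pymysql package, and the host/credentials below are made up.

#!/usr/bin/env python
# Sketch: report slave lag and clock drift on a MySQL slave.
# Assumes `pip install pymysql`; connection parameters are hypothetical.
import datetime

import pymysql

def check_slave(host="slave1.example.com", user="monitor", password="secret"):
  conn = pymysql.connect(host=host, user=user, password=password)
  try:
    with conn.cursor(pymysql.cursors.DictCursor) as cursor:
      # A large delta between the server's clock and ours is the kind of
      # drift that produced the bogus 7-hour lag described above.
      cursor.execute("SELECT NOW() AS now")
      drift = (datetime.datetime.now() - cursor.fetchone()["now"]).total_seconds()
      cursor.execute("SHOW SLAVE STATUS")
      status = cursor.fetchone()
      lag = status["Seconds_Behind_Master"] if status else None
      print("clock drift: %.1fs, slave lag: %s" % (drift, lag))
  finally:
    conn.close()

if __name__ == "__main__":
  check_slave()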

Thursday, October 18, 2012

Python's screwed up exception hierarchy

Doing this in Python is bad bad bad:
try:
  # some code
except Exception, e:  # Bad
  log.error("Uncaught exception!", e)
Yet you need to do something like that, typically in the event loop of an application server, or when one library is calling into another library and needs to make sure that no exception escapes from the call, or that all exceptions are re-packaged in another type of exception.

The reason the above is bad is that Python badly screwed up their standard exception hierarchy.
    __builtin__.object
        BaseException
            Exception
                StandardError
                    ArithmeticError
                    AssertionError
                    AttributeError
                    BufferError
                    EOFError
                    EnvironmentError
                    ImportError
                    LookupError
                    MemoryError
                    NameError
                        UnboundLocalError
                    ReferenceError
                    RuntimeError
                        NotImplementedError
                    SyntaxError
                        IndentationError
                            TabError
                    SystemError
                    TypeError
                    ValueError
Meaning, if you try to catch all Exceptions, you're also hiding real problems like syntax errors (!!), typoed imports, etc.  But then what are you gonna do?  Even if you wrote something silly such as:
try:
  # some code
except (ArithmeticError, ..., ValueError), e:
  log.error("Uncaught exception!", e)
You still wouldn't catch the many cases where people define new types of exceptions that inherit directly from Exception. So it looks like your only option is to catch Exception and then filter out things you really don't want to catch, e.g.:
try:
  # some code
except Exception, e:
  if isinstance(e, (AssertionError, ImportError, NameError, SyntaxError, SystemError)):
    raise
  log.error("Uncaught exception!", e)
But then nobody does this. And pylint still complains.

Unfortunately it looks like Python 3.0 didn't fix the problem :( – they only moved SystemExit, KeyboardInterrupt, and GeneratorExit out from under Exception, making them direct subclasses of BaseException, but that's all.

They should have introduced another separate level of hierarchy for those errors that you generally don't want to catch because they are programming errors or internal errors (i.e. bugs) in the underlying Python runtime.
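
For illustration, here's a minimal sketch of what such a split could look like, combined with the filtering idiom above.  All the names here (PROGRAMMING_ERRORS, call_logging_exceptions) are made up for this example; nothing like this exists in the stdlib:

# Hypothetical: one tuple for "these are bugs, never swallow them",
# plus a helper for the event-loop use case described earlier.
PROGRAMMING_ERRORS = (AssertionError, AttributeError, ImportError,
                      NameError, SyntaxError, SystemError, TypeError)

def call_logging_exceptions(log, func, *args, **kwargs):
  try:
    return func(*args, **kwargs)
  except PROGRAMMING_ERRORS:
    raise  # A bug in the code: don't pretend we can recover from it.
  except Exception, e:
    log.exception("Uncaught exception: %s", e)  # Logs the traceback too.

An application server's event loop could then wrap every callback in call_logging_exceptions without silencing genuine bugs.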

Saturday, October 6, 2012

Perforce killed my productivity. Again.

I used Perforce for 2 years at Google.  Google got a lot of things right, but Perforce has always been a pain in the ass to deal with, despite the huge amount of tooling Google built on top.  I miss a lot of things from my days at Google, but Perforce is definitely not on the list.  Isn't it ironic that, for a company that builds large distributed systems on commodity machines, the P4 server had to be by far the beefiest, most expensive server?  Oh, and guess what ended up happening to P4 at Google?

Anyways, after a 3 year break during which I happily forgot my struggle with Perforce, I am now back to using it.  Sigh.  Now what's 'funny' is that Arista has the same problem as Google: they locked themselves in through tools.  When you have a large code base of tools built on top of an SCM, it's really, really hard to migrate to something else.

Arista, like Google, literally has tens of thousands of lines of tooling built around Perforce.  It's kind of ironic that Perforce, the company, doesn't appear to have done anything actively evil to lock its customers in; the customers locked themselves in.  Also note that both companies started quite a few years ago, back when Git didn't exist (or barely existed, in Arista's case), so Perforce was a reasonable choice at the time (provided you had the $$$, that is), given that the only other options then were quite brain damaging.

Now I could go on and repeat all the things that have been written many times all over the web about why Perforce sucks.  Yes it's slow, yes you can't work offline, yes you can't do anything that doesn't make it wanna talk to the server, yes it makes all your freaking files read-only and it forces you to tell the server that you're going to edit a file, etc.

But Perforce has its own advantages too.  It has quasi-decent branching / merging capabilities (merging is often more painful than with Git IMO).  It gives you a flexible way to compose your working copy, what's in it, where it comes from.  It's more forgiving for organizations that like to dump a lot of random crap in their SCM.  This seems fairly common, people just find it convenient to commit binaries and such.  It is convenient indeed if you lack better tools, but that doesn't mean it's right.
Used to be a productive software engineer, took a P4 arrow in the knee
So what's my gripe with Perforce?  It totally ruins my workflow, and that makes my life as a software engineer utterly miserable.  I always work on multiple things at the same time, and most of the time they're related.  I may be working on a big change that I want to break down into many small incremental steps.  And I often like to revisit these steps.  Or I just wanna go back and forth between a few somewhat related things as I work on an idea and sort of wander into connected ideas.  And I want to get my code reviewed.  Before it goes upstream.

This means that I use git rebase very, very extensively.  And git stash.  I find this is the hardest thing to explain to people who don't know Git.  But once it clicks in your mind, and you understand how powerful git rebase is, you realize it's the best Swiss army knife for manipulating your changes and their history.  When it comes to writing code, it's literally my best friend after vim.

Git, as a tool to manipulate changes made to files, is several orders of magnitude better and more convenient.  It's so simple to select what goes into what commit, undo, redo, squash, split, swap, drop, amend changes.  I always feel like I can manipulate my code and commits effortlessly, that it's malleable, flexible.  I'm removing some lint around some code I'm refactoring?  No problem, git commit -p to select hunk-by-hunk what goes into the refactoring commit and what goes into the "small clean up" commit.  Perforce on the other hand doesn't offer anything but "mark this file for add/edit/delete" and "put these files in a change" and "commit the change".  This isn't the 1990s anymore, but it sure feels like it.

With Perforce you have to serialize your workflow, you have to accept committing things that will require subsequent "fix previous commit" commits, and thus you tend to commit fewer, bigger changes, because breaking a change up into smaller chunks is a pain in the ass.  And when you realize you got it wrong, you can't go back; you just have to fix it up with another change.  And your project history is all fugly.  I've used the patch command more over the past 2 months than in the previous 3 years combined.  I'm back to the stone age.

Oh, and you can't switch back and forth between branches.  At all.  Like, you just can't.  Period.  This means you have to maintain multiple workspaces and try to parallelize your work across them.  I already have 8 workspaces across 2 servers at Arista, each of which contains a mostly-identical copy of several GB of code.  The overhead of going back and forth between them is significant, so I end up switching a lot less than when I could just do git checkout somebranch.  And of course creating a new branch/workspace is extremely time consuming, as in we're talking minutes, so you really don't wanna do it unless you know you're going to amortize the cost over the next several days.

I think the fact that P4 coerces you into a workflow that sucks shows in Perforce's marketing material and product strategy too.  They're now rolling out a Git integration, dubbed Perforce Git Fusion, that essentially makes the P4 server speak Git, so that you can work with Git but still use P4 on the server.  They sell it as "improving the Git experience".  That must be the best joke of the year.  I think the reality is that engineers don't want to deal with the bullshit way of doing things Perforce imposes, and they want to work with Git.  Anyways, this integration sounds great, and I would love to use it to stop the pain; only you have to be on a recent enough version of Perforce to be able to use it, and if you're not, you "just" need to pay an arm and a fucking leg to upgrade.

My lame workaround: overlay a Git repo on top of my P4 workspace, p4 edit the files I want to work on, and maintain the changes in Git until I'm ready to push them upstream.  Still a royal PITA, but at least I can manipulate the files in my workspace.

And then, of course, there is the problem that I'm impatient.  I can't stand waiting more than 500ms at a prompt, and it's quite rare for a p4 edit to complete in less than a second or two.  At 1:30am on a Saturday, after a dozen p4 edits in a row, I was able to get the latency down to 300-500ms (yes, it really took a dozen edits/reverts in a row to reliably get lower latency).  It often takes several minutes to trace the history of a file or a branch, or to blame a file ... when that's useful at all with Perforce.

We're in 2012, soon 2013, running on 32-core, 128GB-RAM machines hooked to 10G/40G networks with an RTT of less than 60µs.  Why would I ever need to wait more than a handful of milliseconds for any of these mundane things to happen?

So, you know what Perforce, (╯°□°)╯︵ ┻━┻

Edit: despite the fact that Arista uses Perforce, which is a bummer, I love that place, love the people I work with and what we're building.  So you should join!

Saturday, April 14, 2012

How Apache Hadoop is molesting IOException all day

Today I'd like to rant about one thing that's been bugging me for the last couple of years with Apache Hadoop (and all its derived projects). It's a big issue that concerns us all. We have to admit it: each time we write code for the Apache Hadoop stack, we feel bad about it, but we try hard to ignore what's happening right before our eyes. I'm talking, of course, about the constant abuse and molestation of IOException.
I'm not even going to debate how checked exceptions are like communism (good idea in theory, totally fails in practice). Even if people don't get that, I wish they at least stopped the madness with this poor little IOException.
Let's review again what IOException is for:
"Signals that an I/O exception of some sort has occurred. This class is the general class of exceptions produced by failed or interrupted I/O operations."
In Hadoop everything is an IOException. Everything. Some assertion fails, IOException. A number exceeds the maximum allowed by the config, IOException. Some protocol versions don't match, IOException. Hadoop needs to fart, IOException.
How are you supposed to handle these exceptions? Everything is declared as throws IOException, and everything is catching, wrapping, re-throwing, logging, eating, and ignoring IOExceptions. Handling them sanely is impossible: no matter what goes wrong, you're left clueless. And it's not as if there were a nice exception hierarchy to help you handle them. No, virtually everything is just a bare IOException.
Because of this, it's not uncommon to see code that inspects the message of the exception (a bare String) to try to figure out what's wrong and what to do about it. A friend of mine was recently explaining to me how Apache Kafka was "stringly typed" (a new cutting-edge paradigm whereby you show the middle finger to the type system and stuff everything into Strings). Well, Hadoop has invented something better than checked exceptions: stringed exceptions. Unfortunately, half of the time you can't even leverage this awesome new idiom, because the message of the exception itself is useless. For example, when a MapReduce job chokes on a corrupted file, it will just throw an IOException without telling you the path of the problematic file. It's more fun this way: once you nail it down (with a binary search, of course), you feel like you accomplished something. Or you'll get messages like "IOException: Split metadata size exceeded 10000000.". Figuring out what the actual value was is left as an exercise to the reader.
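The resulting anti-pattern looks something like this, sketched in Python for brevity (Hadoop's version is Java, but the shape is the same); every function name below is invented for illustration:
# Anti-pattern sketch: dispatching on an exception's message because the
# exception's type carries no information. All handlers here are made up.
try:
  run_job()
except IOError, e:
  if "Split metadata size exceeded" in str(e):
    retry_with_bigger_limit()
  elif "version mismatch" in str(e):
    upgrade_client()
  else:
    raise  # No idea what this one is; good luck.
One rewording of an error message upstream and the handler silently stops matching. That's what stringed exceptions buy you.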
So, seriously Apache folks...
Stop Abusing IOException!
Leave this poor little IOException alone!
Hadoop (0.20.2) currently has a whopping 1300+ lines of code creating bare IOExceptions. HBase (0.92.1) has over 400. Apache committers should consider every single one of these lines as a code smell that needs to be fixed, that's begging to be fixed. Please introduce a new base exception type, and create a sound exception hierarchy.

Monday, February 6, 2012

Devirtualizing method calls in Java

If you've read code I wrote, chances are you've seen that I'm a strong proponent of const correctness (WP). Naturally, when I started writing Java code (to my despair), I became an equally strong proponent of "final correctness". This is mostly because the keywords const (C/C++) and final (Java/Scala) are truly there to help the compiler help you. Many things aren't supposed to change: references in a given scope are often never re-pointed to another object, various methods aren't supposed to be overridden, most classes aren't designed to be subclassed, etc. In C/C++, const also helps avoid unintentional pointer arithmetic. So when something isn't supposed to happen, stating it explicitly allows the compiler to catch and report any violation of this otherwise implicit assumption.

The other aspect of const correctness is that you also help the compiler itself: often the extra bit of information enables it to produce more efficient code. In Java especially, final plays an important role in thread safety, and when used on Strings and built-in types it enables compile-time constant folding. Here's an example of the latter:

     1 final class concat {
     2   public static void main(final String[] _) {
     3     String a = "a";
     4     String b = "b";
     5     System.out.println(a + b);
     6     final String X = "X";
     7     final String Y = "Y";
     8     System.out.println(X + Y);
     9   }
    10 }
Which gets compiled to:
public static void main(java.lang.String[]);
  Code:
   0: ldc #2; //String a
   2: astore_1
   3: ldc #3; //String b
   5: astore_2
   6: getstatic #4; //Field java/lang/System.out:Ljava/io/PrintStream;
   9: new #5; //class java/lang/StringBuilder
   12: dup
   13: invokespecial #6; //Method java/lang/StringBuilder."<init>":()V
   16: aload_1
   17: invokevirtual #7; //Method java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
   20: aload_2
   21: invokevirtual #7; //Method java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
   24: invokevirtual #8; //Method java/lang/StringBuilder.toString:()Ljava/lang/String;
   27: invokevirtual #9; //Method java/io/PrintStream.println:(Ljava/lang/String;)V
   30: getstatic #4; //Field java/lang/System.out:Ljava/io/PrintStream;
   33: ldc #10; //String XY
   35: invokevirtual #9; //Method java/io/PrintStream.println:(Ljava/lang/String;)V
   38: return
}
In the original code, lines 3-4-5 are identical to lines 6-7-8 modulo the presence of two final keywords. Yet lines 3-4-5 get compiled to 14 byte code instructions (offsets 0 through 27), whereas 6-7-8 turn into only 3 (offsets 30 through 35). I find it kind of amazing that the compiler doesn't even bother optimizing such a simple piece of code, even when given the -O flag, which, most people say, has been almost a no-op since Java 1.3 – at least I checked OpenJDK6, and it's truly a no-op there; the flag is only accepted for backwards compatibility. OpenJDK6 has a -XO flag instead, but the Sun Java install that comes with Mac OS X doesn't recognize it...

There was another thing that I thought was a side effect of final. I thought any method marked final, or any method in a class marked final, would allow the compiler to devirtualize method calls. Well, it turns out I was wrong. Not only does the compiler not do this, the JVM considers this compile-time optimization downright illegal! Only the JIT compiler is allowed to do it.

All method calls in Java are compiled to an invokevirtual byte code instruction, except:

  • Constructors, private methods, and super calls use invokespecial.
  • Static methods use invokestatic.
  • Virtual method calls on objects with a static type that is an interface use invokeinterface.
The last one is weird: why special-case virtual method calls when the static type is an interface? The reason essentially boils down to the fact that if the static type is a class, the compiler knows at compile time which entry in the vtable the method occupies, so all the runtime has to do is read that entry. If the static type is an interface, the compiler can't know which entry will be used, because that depends on where in the class hierarchy each implementing class picks up the interface.

Anyway, I always imagined that marking a method final meant the compiler would compile all calls to it using invokespecial instead of invokevirtual, to "devirtualize" the calls, since it already knows for sure at compile time where to transfer execution. Doing this at compile time seems like a trivial optimization, whereas leaving it up to the JIT is far more complex. But no, the compiler doesn't do this. It's not even legal to do it!

interface iface {
  int foo();
}

class base implements iface {
  public int foo() {
    return (int) System.nanoTime();
  }
}

final class sealed extends base {  // Implies that foo is final
}

final class sealedfinal extends base {
  public final int foo() {  // Redefine it to be sure / help the compiler.
    return super.foo();
  }
}

public final class devirt {
  public static void main(String[] a) {
    int n = 0;
    final iface i = new base();
    n ^= i.foo();              // invokeinterface
    final base b = new base();
    n ^= b.foo();              // invokevirtual
    final sealed s = new sealed();
    n ^= s.foo();              // invokevirtual
    final sealedfinal sf = new sealedfinal();
    n ^= sf.foo();             // invokevirtual
  }
}
A simple Caliper benchmark also shows that, in practice, all 4 calls above have exactly the same performance characteristics (see full microbenchmark). This seems to indicate that the JIT compiler is able to devirtualize the method calls in all these cases.

To try to manually devirtualize one of the last two calls, I applied a binary patch (courtesy of xxd) to the .class generated by javac. After doing this, javap correctly shows an invokespecial instruction. To my dismay, the JVM then rejects the byte code: Exception in thread "main" java.lang.VerifyError: (class: devirt, method: timeInvokeFinalFinal signature: (I)I) Illegal use of nonvirtual function call

I find the wording of the JLS slightly ambiguous as to whether or not this is truly illegal, but in any case the Sun JVM rejects it, so it can't be used anyway.

The moral of the story is that javac really only translates Java code into pre-parsed Java code. Nothing interesting happens at all in the "compiler", which should really be called the pre-parser; it doesn't even bother with any kind of trivial optimization. Everything is left up to the JIT compiler. Also, Java byte code is bloated, but then that's normal, it's Java :)

Saturday, October 8, 2011

Hardware Growler for Mac OS X Lion

Just in case this could be of any use to someone else, I compiled Growl 1.2.2 for Lion with the fix for the HardwareGrowler crash on Lion that happens when disconnecting from a wireless network or waking up the Mac. You can download it here. The binary should work on Snow Leopard too. It's only compiled for x86_64 CPUs.

Tuesday, September 13, 2011

ext4 2x faster than XFS?

For a lot of people, the conventional wisdom is that XFS outperforms ext4. I'm not sure whether this is just because XFS used to be a lot faster than ext2 or ext3 or what. I don't have anything against XFS, and actually I would like to see it outperform ext4, unfortunately my benchmarks show otherwise. I'm wondering whether I'm doing something wrong.

In the benchmark below, the same machine and same HDDs were tested with 2 different RAID controllers. In most tests, ext4 has better results than XFS. In some tests, the difference is as much as 2x. Here are the details of the config:

Both RAID controllers are equipped with 512MB of RAM and are in their respective default factory config, except that WriteBack mode was enabled on the LSI because it's disabled by default (!). One other notable difference between the default configurations is that the Adaptec uses a strip size of 256k whereas the LSI uses 64k – this was left unchanged. Both arrays were created as RAID10 (6 pairs of 2 disks, so no spares). One controller was tested at a time, in the same machine and with the same disks. The OS (Linux 2.6.32) was on a separate RAID1 of 2 drives. The IO scheduler in use was "deadline". SysBench was using O_DIRECT on 64 files, for a total of 100GB of data.

Some observations:

  • Formatting XFS with the optimal values for sunit and swidth doesn't lead to much better performance (see the sketch after this list for how those values are derived). The gain is about 2%, except for sequential writes, where it actually makes things worse. Yes, there was no partition table; the whole array was formatted directly as one single big filesystem.
  • Creating more allocation groups in XFS than physical threads doesn't lead to better performance.
  • XFS has much better random write throughput at low concurrency levels, but quickly degrades to the same performance level as ext4 with more than 8 threads.
  • ext4 has consistently better random read/write throughput and latency, even at high concurrency levels.
  • Similarly, for random reads ext4 also has much better throughput and latency.
  • By default XFS creates too few allocation groups, which artificially limits its performance at high concurrency levels. It's important to create as many AGs as hardware threads. ext4, on the other hand, doesn't really need any tuning as it performs well out of the box.
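
For reference, here's a quick sketch of how the sunit/swidth values mentioned above can be derived for the Adaptec config (256k strips, 12 disks in RAID10, hence 6 data spindles). mkfs.xfs expects both values in 512-byte sectors; the thread count below is a made-up example:

# Sketch: derive mkfs.xfs stripe settings for the array described above.
strip_size = 256 * 1024      # bytes per disk per stripe pass (Adaptec default)
data_disks = 6               # 12-disk RAID10 = 6 mirrored pairs
sunit = strip_size // 512    # stripe unit in 512-byte sectors: 512
swidth = sunit * data_disks  # full stripe width in sectors: 3072
hw_threads = 24              # hypothetical: one allocation group per thread
print("mkfs.xfs -d sunit=%d,swidth=%d,agcount=%d /dev/sdX"
      % (sunit, swidth, hw_threads))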

See the benchmark results in full screen or look at the raw outputs of SysBench.