Monday, December 07, 2009

Groovy Performance Update - Long running programs are as fast as Java

It's been almost a year and a half since I last posted anything about Groovy performance. As I mentioned previously, I am still working on this topic, so it's time for a small update.

The technique is to hook into the callsite caching process. Each callsite to be called is profiled, its runtime information is analysed, and faster code is then generated for it. The callsite is replaced by an equivalent direct call. Replacement is still done through JVMTI, similar to the trick implemented in the previous version of GJIT. However, the concept in this version of GJIT has changed from fully automatic to manually controlled. This means that only selected callsites are optimised, and you, as a developer, can control this optimisation via the mechanism provided. This concept is not new; I posted about it a while ago.
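
To give a feel for the idea (this is a purely illustrative sketch with made-up class names, not the actual GJIT code), a profiling callsite records how it is actually being called and, once hot, hands itself over to an optimiser that would generate and install the direct call:

// Purely illustrative - not the real GJIT classes.
class Optimiser {
  static void submit(ProfilingCallSite site) {
    // in the real system a background thread analyses the profile and
    // replaces the callsite with generated direct-call code (via JVMTI);
    // this stub just reports what it would do
    println "would optimise '${site.methodName}' for ${site.receiverType?.name}"
  }
}

class ProfilingCallSite {
  String methodName
  int hits = 0
  Class receiverType, argType

  def call(receiver, arg) {
    hits++
    receiverType = receiver.getClass()      // record observed runtime types
    argType = arg?.getClass()
    if (hits == 10000) {                    // hot enough: ask for a direct-call replacement
      Optimiser.submit(this)
    }
    receiver."$methodName"(arg)             // plain dynamic dispatch in the meantime
  }
}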

The latest results I have are micro-benchmarks. In the long run, after the related callsites are optimised, all of them run at the same speed as the equivalent Java programs.
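
For reference, the benchmark is essentially the classic doubly recursive Fibonacci - a minimal sketch of the Groovy side (the real harness loops and measures more carefully, so details differ):

int fib(int n) {
  if (n < 2) return n
  return fib(n - 1) + fib(n - 2)
}

def start = System.nanoTime()
println fib(35)
println "took ${(System.nanoTime() - start) / 1000000} ms"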

Let's examine the graph of the Fibonacci benchmark:


You can see that there are two interesting regions of the graph before the Groovy program reaches the same speed as Java:
1. At the beginning, there is the overhead of profiling hooked into Groovy's callsite instantiation system.
2. Because optimisation runs in the background, the optimiser thread needs time for analysis and code generation. (I also note the overhead of the HotSpot VM, just for reference.)

After the optimisation phase is completed, the speed of the Groovy program overlaps with Java's. This approach works fine for a long-running program, but not for scripting. Why? I leave this as an exercise for the reader ;-)

Saturday, November 14, 2009

Analysis of static Groovy claims

Continuing from my last post, I think I can be neutral enough to analyse these claims. All of this analysis is solely my opinion, so discussion is always welcome.
It is possible to write a statically typed compiler for Groovy keeping all goodies of the beautiful language.
This is partially true. I do not think I can create a static compiler to cover all Groovy features at once.
It is possible to implement very powerful type inference of local variable and closures, so we don't need to provide verbose type information more than necessary and/or useful for documentation purposes.
This is partially true. A language like this cannot be fully type-inferred. Type inference has its own limits, and it is a known P-complete or NP-complete problem, depending on the implementation [see this].
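
For instance, a local-variable inference pass could handle the first two declarations below easily, while the third already shows where inference gets harder (plain Groovy code, with the inferred types only imagined):

def n = 42                     // literal: an Integer, trivially inferable
def names = ["a", "b", "c"]    // a list of Strings, inferable from the literal
def flag = new Random().nextBoolean()
def v = flag ? "text" : 123    // branches disagree: the best static answer is Object (or a union type)
println v
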
It is possible to have compile-time metaprogramming (properties, categories, default Groovy methods, mixins, AST transformations - all but runtime meta-programming).
This is also true. But adding more phases to the compiler will make it slower. When slow compilation is combined with the startup time of the JVM, you might not be happy using the language as a scripting tool. A pluggable type system could help with this issue.
It is possible to support Scala-like traits (also known as interfaces with default implementations), which are an extremely strong meta-programming tool.
This is true. No doubt.
It is even possible to compile a piece of code in so-called mixed mode (resolve and call statically what we can, and call the rest dynamically). This mode is great, for example, for mixing computation with building markup or populating UI elements with the results of a computation.
This is true. There are also the ideas of soft typing and gradual typing, which could help one achieve both static and dynamic calls in the same language. However, building markup can be handled with a simpler trick; I showed one in my last post.
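
To illustrate what such mixing looks like in today's (fully dynamic) Groovy, here is a small example with the standard groovy.xml.MarkupBuilder; in mixed mode, the arithmetic part could be resolved statically while every markup node would stay a dynamic call:

import groovy.xml.MarkupBuilder

// the computation part - easy for a static compiler to resolve
def total = (1..10).sum()

// the markup part - every node name below is a dynamic call
def writer = new StringWriter()
new MarkupBuilder(writer).report {
  summary(total: total)
  (1..3).each { n -> item("line $n") }
}
println writer
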
It is even possible that, because of type inference, statically compiled code could be a little less verbose compared to Groovy (almost no need for 'as' conversions, for example).
This is probably false. In general, I do not think there is a better way than a dynamic language to achieve this.



Thursday, November 12, 2009

Static Groovy, is it feasible?

Static Groovy seems to be a hot topic. There have been debates about it in the past, and recently Alex's call for an implementation was posted on Groovy's DZone. Yesterday, Jochen gave his view on it here.

As for me, I have been (and still am) working on something to boost Groovy performance. It is based on the current callsite implementation of Groovy 1.6, but that's another approach (as people have said, performance is not the only concern for static Groovy; early detection of errors is also what they want).

One of the biggest challenges of implementing static Groovy is how we could implement a framework like Grails with it. This is a really interesting question to me, as Grails requires a lot of Groovy's dynamic features. So I will try to summarise what static Groovy should be capable of to make a Grails-like framework happen.

Firstly, GORM dynamic finders:
This kind of finder mainly uses methodMissing to perform its actions.

Considering this (posted as a comment in Jochen's blog):

class C {
  def findBy/s/(Object ...) { .. }
}


You may notice "s" in the method name. It is of type String and can be referenced in the method body.
So this is basically, findBy* of Grails. Missing method could be modelled in the same way. For example,

class C2 {
  def /s/(Object ...) { /* doing something with s */ }
}
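
For comparison, this is roughly how a dynamic finder is handled in plain dynamic Groovy today, via methodMissing on the static metaClass (the Book class and the fake "query" are just for illustration):

class Book { }

// a GORM-style dynamic finder via methodMissing on the static metaClass
Book.metaClass.'static'.methodMissing = { String name, args ->
  if (name.startsWith("findBy")) {
    def property = name - "findBy"                       // e.g. findByTitle -> "Title"
    return "querying Book by ${property} = ${args[0]}"   // a real finder would build and run a query here
  }
  throw new MissingMethodException(name, Book, args)
}

println Book.findByTitle("Groovy in Action")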


Secondly, adding dynamic methods:
This is mainly used in Grails plugins to extend the capability of the framework.
The idea of categories would work perfectly for it.

main() {
  A.test()
}

class A {
}

class Plugin {
  static test(A self) { .. }
}


Resolving the test method would be a bit of a problem, as resolution goes not only through the class hierarchy of A, but also through other classes containing methods that crosscut A. It would be a bit easier for a compiler to resolve them if we had a marker like @extension static test(A self) { ... } to help it.
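
For comparison, the runtime counterpart already exists in dynamic Groovy as a category - roughly the same shape as the example above, but activated and resolved dynamically inside a use block (a minimal sketch):

class A { }

class Plugin {
  static String test(A self) { "extended behaviour for ${self}" }
}

// the category is only active inside the use block, and resolution happens at runtime
use(Plugin) {
  println new A().test()
}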

Thirdly, interception for implementing invokeMethod:
This probably involves the concept of AOP, but I will not use its terms here. Actually, the example comes from work in the AOP area.

When you want to intercept calls to existing methods, something like this could work:

class C {
  intercept *(X x) {
    proceed(x); // intercepting methodForX only
  }

  intercept *(Y y) {
    proceed(y); // intercepting methodForY only
  }

  intercept *(Object ...) { ... }

  def methodForX(X x) { .. }
 
  def methodForY(Y y) { .. }
}


This concept could be called local interception, where an interceptor is only capable of intercepting methods in the same class. System-wide interception would work the same way. Anyway, it would eventually end up being something like AspectJ.
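
For comparison, here is a minimal sketch of what dynamic Groovy offers today for this kind of local interception, via GroovyInterceptable and invokeMethod (the class and method names are just for illustration):

class C implements GroovyInterceptable {
  def methodForX(x) { "X: $x" }
  def methodForY(y) { "Y: $y" }

  // every method call on a GroovyInterceptable goes through invokeMethod
  def invokeMethod(String name, args) {
    System.out.println("before $name")        // plain Java call, so it is not intercepted again
    def metaMethod = metaClass.getMetaMethod(name, args)
    def result = metaMethod?.invoke(this, args)   // the "proceed"
    System.out.println("after $name")
    result
  }
}

println new C().methodForX(42)   // prints before/after around the real call, then "X: 42"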

To sum up, there are real challenges in implementing a static counterpart of Groovy. As you can see, having enough features to implement a Grails-like framework with such a language is not easy. A lot of things would need to be properly implemented. Anyway, it is a really interesting thing to do.


Friday, September 04, 2009

Using AsmNodeBuilder to Test Bytecodes with Pleasure

Groovy DSLs never bore me. I am working on some bytecode transformations and have a number of tests to run.
Testing low-level transformations means dealing with JVM instructions. For example:

ILOAD 0
INVOKESTATIC java/lang/Integer.valueOf(I)Ljava/lang/Integer;
LDC 0
INVOKESTATIC java/lang/Integer.valueOf(I)Ljava/lang/Integer;
INVOKESTATIC org/codehaus/groovy/runtime/ScriptBytecodeAdapter
             .compareLessThan(Ljava/lang/Object;Ljava/lang/Object;)Z
IFEQ L2


There is a class MethodNode in ASM's tree package. It has a field instructions of type InsnList, which contains the method body. In the following, I have InsnList insn = methodNode.instructions; and perform some optimisation over it.

Before making AsmNodeBuilder, I had to create test data using ASM's tree nodes. It looked something like this:

insn.add(new VarInsnNode(ILOAD, 0))

With Groovy and AsmNodeBuilder, now I can write:

insn.append {
  iload 0
}


When comparing results, I had to do:

assert insn.get(0).opcode == ILOAD
assert insn.get(0).var    == 0


Now, it becomes:

assertEquals asm { iload 0 }, insn[0]

Of course, I can also do:

assertEquals asm {
  iload 0
  invokestatic Integer,"valueOf",[int],Integer
}, insn


to assert all instructions in the list.

Looks fun? Convinced? There are two classes you may use:

- InsnListHelper.groovy
- AsmNodeBuilder.java (generated from gen_asm_builder.groovy)

Here's how to use them:
  1. Call InsnListHelper.install() at the very beginning of your Groovy test case.
  2. Write a test case.
  3. Use <InsnList>.append { ... } to create a method body.
  4. Do your transformation.
  5. Assert the result against an asm { ... } block. Note that Groovy's power assertion won't work with the block for some reason; use assertEquals instead. (A combined sketch follows below.)
You can also look at an example here.
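
Putting the steps together, a minimal test could look roughly like this (the transformation call optimise() is just a hypothetical placeholder for step 4; InsnListHelper, append and asm are the helpers described above):

import org.objectweb.asm.tree.InsnList

class MyTransformTest extends GroovyTestCase {

  void setUp() {
    InsnListHelper.install()        // step 1: adds append { ... } to InsnList
  }

  void testBoxedLoadIsKept() {
    def insn = new InsnList()
    insn.append {                   // step 3: build a method body with the DSL
      iload 0
      invokestatic Integer, "valueOf", [int], Integer
    }

    optimise(insn)                  // step 4: your transformation (hypothetical name)

    assertEquals asm {              // step 5: assert with an asm { ... } block
      iload 0
      invokestatic Integer, "valueOf", [int], Integer
    }, insn
  }
}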


Sunday, July 26, 2009

Improved Complex Number in Groovy

Thanks for the comment on the previous post. Here's the improved version of the complex number code.

Number.metaClass.plus = { n ->
  if(n instanceof Imaginary) {
    new Complex(r:delegate, i:n.v)
  }
}
Number.metaClass.getI = { ->
  new Imaginary(v: delegate)
}

class Complex {
  def r
  def i
  String toString() {
    "$r + ${i}i"
  }
}
class Imaginary {
  def v
  def plus(n) {
    if(n instanceof Number) new Complex(r:n, i:this.v)
  }
}

def b = 2 + 2.4.i
def c = 4.2.i + 2
println b
println b.class
println c
println c.class
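
If I ran it correctly, this prints 2 + 2.4i and class Complex for b, and 2 + 4.2i and class Complex for c - so both orders of the addition now produce a Complex.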


Thursday, June 11, 2009

Complex Number in Groovy

Here's a quick note on making Groovy support complex numbers:

1. override number operations via metaclass:
Number.metaClass.plus = { n ->
  if(n instanceof Imaginary) {
    return new Complex(r:delegate, i:n.v)
  }
}

2. define class Complex to hold real, r, and imaginary, i, parts
class Complex {
  def r
  def i
  String toString() {
    "$r + ${i}i"
  }
}


3. create class Imaginary to be a wrapper class (I borrowed this idea from Scala's implicit conversions).
class Imaginary {
  def v
}


4. define i as a global closure (it would really belong in DefaultGroovyMethods):
def i = { v ->
  new Imaginary(v:v)
}


Then when you write:
def b = 2 + i(2.4)
println b
println b.class


This prints "2 + 2.4i" and the type of b is class Complex.

Saturday, June 06, 2009

Fortress - Hello World

Here's my early experiment with running a Fortress Hello World program, listed below (taken from the tutorial given at MIT by the Fortress team):

component hello
export Executable

run() = println("Hello, World !")

end


I'm a bit surprised that its compilation time is "really" slow. My guess is that this is caused by type inference in the Fortress compiler. JVM 1.6.0_12 was used in the experiment. One quick conclusion is that Fortress is not suitable for scripting. But I don't think its compiled programs will be slow (as I said, I suspect it's the type inference engine that makes scripting slow).

Here are 5 runs:

chanwit@lb /cygdrive/c/fortress/fortress_3625
$ time `bin/fortress chanwit/hello.fss`
bash: Hello,: command not found

real    0m11.393s
user    0m0.244s
sys     0m0.199s

chanwit@lb /cygdrive/c/fortress/fortress_3625
$ time `bin/fortress chanwit/hello.fss`
bash: Hello,: command not found

real    0m11.486s
user    0m0.181s
sys     0m0.214s

chanwit@lb /cygdrive/c/fortress/fortress_3625
$ time `bin/fortress chanwit/hello.fss`
bash: Hello,: command not found

real    0m11.412s
user    0m0.213s
sys     0m0.215s

chanwit@lb /cygdrive/c/fortress/fortress_3625
$ time `bin/fortress chanwit/hello.fss`
bash: Hello,: command not found

real    0m11.377s
user    0m0.197s
sys     0m0.198s

chanwit@lb /cygdrive/c/fortress/fortress_3625
$ time `bin/fortress chanwit/hello.fss`
bash: Hello,: command not found

real    0m11.432s
user    0m0.229s
sys     0m0.198s
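
(A side note on the output above: the "bash: Hello,: command not found" lines appear because the backticks make bash try to execute the program's output, "Hello, World !", as a command. Timing the command directly, e.g. time bin/fortress chanwit/hello.fss, avoids those messages; the measured ~11 seconds come almost entirely from the Fortress run either way.)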


Friday, February 27, 2009

Hybrid call site model based on Groovy

I'm trying to model a class structure that supports the following features:
  • a callsite abstraction, separating call and execution semantics, to
    • support before/after advice of AspectJ's pointcut-advice model (and possibly around advice)
    • further support my typing concerns
  • support statically resolved callsites (resolution is done at compile time, but is also changeable at runtime)
  • support dynamically resolved callsites (as done in Groovy, JRuby, etc.)
  • must be compatible with Java, i.e., generated classes can be used as normal Java classes.
My POC below is based on Alex T.'s implementation. But instead of showing all kinds of callsites, I show only the statically resolved callsite here. This callsite is surely fast (I hope it will be at the same level as Java) because it is a compile-time thing. The interesting point is that I have a special class loader for these callsites, namely StaticCallSiteClassLoader. What does it do? This class loader allows re-definition of these callsites.

What I have observed so far is that none of my callsite classes (_0_println, _1_ctor, _2_render and _3_index) are loaded while there is no "direct" reference to them. So they can be loaded fully lazily by the StaticCallSiteClassLoader. (I've checked this with java -verbose:class.)

How can these callsites be changed at runtime?
You may notice acallsite, a CallSite array that holds all callsites of the example class. It is intentionally public and can be accessed by, for example, a virtual meta-class.
So what's a virtual meta-class? It's a kind of meta-class, but in this case it's not for controlling the behaviour of the class (because all dispatch is static); it's for changing the behaviour of callsites by reassigning new callsite objects into the callsite array.

Notice that I use a WeakReference to hold the callsite class loader. This makes the class loader GC-able. That is, after dereferencing all the old callsites out of the callsite array, the class loader can be unloaded, and therefore all the callsite class definitions are gone as well. Then we can create a new set of callsites for the current class. This technique would be flexible enough for:
  • changing a static call to be a dynamic call
  • changing a dynamic call back to a static call
  • installing a woven callsite with before/after advice (i.e., dynamic AOP)
  • inserting type advice, which can lead to bytecode inlining
  • designing a hybrid type system to support both static and dynamic typing in the same system.

import java.lang.ref.WeakReference;

import runtime.CallSite;
import classloader.StaticCallSiteClassLoader;
import dm.Render;

public class TestController {

    /* synthetic */
    private static WeakReference<StaticCallSiteClassLoader> cl;

    /* synthetic normal callsite */
    public static class _0_println extends CallSite {...}

    /* synthetic constructor */
    public static class _1_ctor extends CallSite {...}

    /* synthetic inter-type declaration callsite. statically resolved */
    public static class _2_render extends CallSite {...}

    /* synthetic property callsite */
    public static class _3_index extends CallSite {...}

    /* synthetic */
    public static CallSite[] acallsite;

    static {
        cl = new WeakReference<StaticCallSiteClassLoader>(
                    new StaticCallSiteClassLoader(TestController.class)
        );
        acallsite = new CallSite[4];
        acallsite[0] = cl.get().get("_0_println");
        acallsite[1] = cl.get().get("_1_ctor");
        acallsite[2] = cl.get().get("_2_render");
        acallsite[3] = cl.get().get("_3_index");
    }

    public TestController(){
        super();
    }

    public int getIndex() {
        return 10;
    }

    public static void main(String[] args) {
        acallsite[0].callStatic("hello world");
        TestController t = (TestController)(acallsite[1].callConstructor());
        acallsite[2].call(t, "hello world");
        acallsite[0].callStatic(acallsite[3].callGetProperty(t, "index"));
    }

}
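
For illustration only, here is a rough sketch of what a class loader along these lines could look like. The constructor and the get(String) method follow the usage in the static initialiser above; everything else (where the callsite bytecode comes from, the naming scheme) is my assumption, not the actual implementation:

import runtime.CallSite   // the CallSite base class from the example above

// A rough, assumed sketch - not the real StaticCallSiteClassLoader.
class StaticCallSiteClassLoader extends ClassLoader {

    private Class owner

    StaticCallSiteClassLoader(Class owner) {
        super(owner.classLoader)
        this.owner = owner
    }

    /** Defines the callsite class named 'name' in this loader and returns a fresh instance. */
    CallSite get(String name) {
        byte[] bytecode = loadCallSiteBytes(name)      // assumption: generated or read from disk
        Class cls = defineClass(owner.name + '$' + name, bytecode, 0, bytecode.length)
        return (CallSite) cls.newInstance()
    }

    private byte[] loadCallSiteBytes(String name) {
        // assumption: the generated callsite classes sit next to the owner class
        owner.getResourceAsStream(owner.simpleName + '$' + name + '.class').bytes
    }
}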


Wednesday, February 25, 2009

Using AspectJ to find a Grails bug

I've got constrained resources (and time), and I've found a Grails bug (4060) that is really hard to spot. No exception was thrown, and I had no idea how to trace this bug because my knowledge of Spring is really limited. I also do not have a good IDE to help debug Grails well (calling grails-debug and attaching Eclipse remotely won't help much in this situation).

After I enabled Grails logging to understand some behaviour of Grails' spring package, I saw a point in the log where the bug was likely happening, but the log message came from inside Spring, not Grails. (Yep, I needed a stack trace.)

I then got the idea of using AspectJ to insert conditional code to "dump" the stack from that point. What I used is an execution pointcut (PCD) and a call to Thread.dumpStack().

Here are the steps:
  • Enable trace log in grails-app/Config.groovy
  • Find a message
  • Grep to locate the point that emits the message. I found it in Spring code, not Grails.
  • Instead of rebuilding Spring, I wrote this AspectJ code:
    import org.springframework.beans.factory.ObjectFactory;
    import org.springframework.beans.factory.support.DefaultSingletonBeanRegistry;

    public aspect Dump {

        pointcut pc():
            execution(
               public Object DefaultSingletonBeanRegistry
                   .getSingleton(String,ObjectFactory)
            );

        before(String name): pc() && args(name,..) {
            if(name.equals("localeResolver")) {
                Thread.dumpStack();
            }
        }
    }

Note that my target Spring bean is localeResolver, so I put a condition here to dump the current stack and find out which part of the Grails code is touching this bean at that point.
  • I then wove the above aspect into spring.jar
$ ajc -cp .:aspectjrt.jar:spring.jar\
   -inpath spring.jar\       // I was going to weave spring.jar
   -outjar spring-2.5.6.jar\ // output would be spring-2.5.6.jar
   -showWeaveInfo\           // I wanted to see some weaving info
   -Xlint:ignore\            // ignore errors when resolving some missing classes
   Dump.aj                   // the above aspect


And gotcha: I could really see the context in which this Spring bean was loaded, and I reported it to the Grails JIRA (and Graeme, you fixed it really quickly - you're super cool!).

Thus, AspectJ code + Thread.dumpStack() acts as a kind of conditional breakpoint that can help you debug Grails without an IDE.

Friday, February 20, 2009

Simulate Object with Groovy's Closures and Map

After reading Chapter 18 of Types and Programming Languages by Benjamin C. Pierce, I got the idea to simulate how OOP works using closures and a map in Groovy. Just for fun, anyway :)

def func_01 = { self, props ->
  self.name = props["name"]
}

def func_02 = { self ->
  println "${self.name} is running"
}

def a = [init: func_01, run: func_02]
a.self = a

a.init(a.self, [name:"mr. a"])
a.run(a.self)


Tuesday, February 17, 2009

Explaining Typesheet

I've been designing a language for typesheets, a technique for speeding up Groovy by enforcing types for specific calls.

Currently, I can enforce the type of a call by writing:

method(my.package.MyClass#main(String[])) {
  call(println(*)) && args(s) {
    s ~> String;
  }
}


This typesheet reads: within the main method of class MyClass, match every call to the meta-method "println" (regardless of input arguments). For matched calls,
  1. let the MOP select the real method by assuming the type of the input argument to be "String", and
  2. convert the values of those arguments to "String" before passing them to the calls.
You may notice that I emphasised two places; there are two different phases involved in making the above code work. Firstly, in dynamically typed languages like Groovy, method binding is performed at runtime, so the matched call messages are not the real methods. A call to "println" there may resolve to anything, depending on what the MOP engine selects.

Secondly, the declaration in { s ~> String; } (I call it a type intervention block) forces the MOP to select the method according to the type of "s", which is bound to the only argument of the call ("s" has been bound using an args predicate).

So, what happens next? You might have guessed correctly :)
The MOP is fully bypassed for those matched calls (because the related types are enforced). Therefore, speed increases to the level of normal Java calls.

Monday, February 09, 2009

Playing around XUL, JavaScript and jQuery

<?xml version="1.0"?>
<?xml-stylesheet href="chrome://global/skin/" type="text/css"?>
<x:window
    xmlns:x="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul"
    xmlns="http://www.w3.org/1999/xhtml"
>
    <x:script type="application/x-javascript"
        src="file:///c:/xul/jquery-1.2.6.js"
    />
 
<x:script type="application/x-javascript; e4x=1">
<![CDATA[
  function test(){
    $('#b').attr("label", "3");
    var frag =
        <listitem value="4"
            label="new item"
            xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul"
        />;
    $('#l').get(0)
        .appendChild(
            (new DOMParser()).parseFromString(
                frag.toXMLString(), 'text/xml'
            ).documentElement
        );
  }
]]></x:script>
     
    <x:button label="click me" oncommand="test();"/>
    <x:button id="b" label="2"/>
    <x:listbox id="l">
        <x:listitem value="1" label="item label"/>
        <x:listitem value="2" label="item label"/>
        <x:listitem value="3" label="item label"/>
    </x:listbox>
 
</x:window>



The above code dynamically adds a new <listitem/> into the listbox #l. It also works fine when we fetch a XUL fragment remotely from the server side. This is Gecko 1.9.0.5.

What I've learned:
  • $("#id").get(0) returns a single element, while .get() returns a list.
  • e4x=1 tells the JavaScript parser to use the newer language spec.
  • jQuery works fine with XUL, but an XML namespace is needed to avoid some gotchas, because jQuery has been designed to work with HTML, not XUL.
  • DOMParser works as expected
  • Gecko 1.9 solved a lot of bugs I was encountering in 1.8.
  • With Grails as a back-end, we can now create a dynamic half-Web/half-desktop app.

Monday, February 02, 2009

What if Groovy has got Structural Typing

Here's a quick mock class after reading this entry.

class G7Greeter {

  def greet(String who, {def setOutput(String text)} model) {
    println "Hello from G7 ${who}"
    model.output = "Hello from G7 ${who}"
  }

}


It would be great if Groovy had Scala's structural typing feature. Structural typing is good for static "duck typing"-like behaviour. But it seems we would need a fork, as Groovy's dynamic typing already allows multi-dispatch.

Monday, January 12, 2009

Fortress Summary

This is a summary from Fortress Language Specification 1.0.

Fortress "secure Fortran" in a nutshell.

  • General-purpose
  • Statically typed
  • Component-based
  • Designed for producing robust high-performance software with "high programmability"
  • Intended to be a "growable language"
    • be gracefully extended
    • scale to "unprecedented" levels of parallelism and of addressable memory
  • Supports modular and extensible parsing
    • allow new notations and static analyses to be added
  • High-performance computation with abstraction and type safety
  • Support type inference
    • although it's statically and nominally typed
    • types can be omitted if they are clear from the context
    • types can be parametric
  • Functions are first-class values
  • Components in Fortress are defined as entities with export and import APIs (not to be confused with the same term in Scala)
  • Built-in support for parallel computation over large-scale data structures, e.g. the for loop is parallel by default
Modular Concerns
  • Object and Trait
    • an object consists of fields and methods
    • traits are named constructs that declare sets of methods. This concept is from Self.
      • methods in traits may be abstract or concrete
      • a trait may extend other traits
      • a trait provides inherited methods as well as those declared in its own declaration.

Sunday, January 11, 2009

Scala Summary

I have been reading Scala papers for a couple of days. Here's a quick summary from the first section of "An overview of the Scala programming language":

  • Scala fuses OOP and functional programming together.
  • Scala has been designed for the construction of components and component systems, which is a goal of the software industry.
  • Components can come in any form and any size. Classes, libraries, frameworks, or processes can all be considered components.
  • Most existing languages offer only a limited level of abstraction and composition.
  • The same programming concepts should scale from small programs to large ones.
  • Scala aims to be scalable by fusing OO and functional features.
  • Every value is an object, and every operation is a method call in Scala (uniform object model).
  • Functions are first-class values in Scala.
  • Scala has a unified abstraction for both types and values.
  • Scala provides constructs for composing classes and traits.
  • Scala provides object decomposition using pattern matching.