It's been a very long time since my last post, and I am still learning how to properly prove type soundness of a programming language. Here's a list of lectures from Cornell that I've been reading; they are quite good for building understanding.
Saturday, December 24, 2011
Getting to Know Featherweight Java - An Easy Way
Posted by chanwit at 9:41 PM 0 comments
Labels: Language, type system
Tuesday, February 09, 2010
Type system primer
Given an expression e, a type T, and a dictionary TT
that maps variables to types, the notation
TT |- e : T
roughly means
T = typecheck(TT, e)
and the horizontal bar notation:
P1 P2 P3
----------------
Q
means
Q if P1 and P2 and P3
from: http://wiki.jvmlangsummit.com/pdf/28_Siek_gradual.pdf (slide 6)
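Read operationally, the judgment TT |- e : T is just a recursive typecheck function, and each horizontal-bar rule becomes one case of it. Here is a minimal sketch in Java; the tiny expression language and all names are mine, for illustration only:

```java
import java.util.HashMap;
import java.util.Map;

// A tiny expression language: integer literals, variables, and addition.
// typecheck(env, e) plays the role of the judgment  TT |- e : T,
// where env is the dictionary TT mapping variables to types.
public class Primer {
    interface Expr {}
    record Lit(int value) implements Expr {}
    record Var(String name) implements Expr {}
    record Add(Expr left, Expr right) implements Expr {}

    static String typecheck(Map<String, String> env, Expr e) {
        if (e instanceof Lit) return "int";      // axiom: literals are int
        if (e instanceof Var v) {                // rule: look the variable up in TT
            String t = env.get(v.name());
            if (t == null) throw new IllegalStateException("unbound variable: " + v.name());
            return t;
        }
        // Rule with two premises above the bar:
        //   TT |- e1 : int    TT |- e2 : int
        //   --------------------------------
        //         TT |- e1 + e2 : int
        Add a = (Add) e;
        if (!typecheck(env, a.left()).equals("int") || !typecheck(env, a.right()).equals("int"))
            throw new IllegalStateException("operands of + must be int");
        return "int";
    }

    public static void main(String[] args) {
        Map<String, String> env = new HashMap<>();
        env.put("x", "int");
        System.out.println(typecheck(env, new Add(new Lit(1), new Var("x")))); // int
    }
}
```

Note how the premises above the bar turn into the recursive calls, and "Q if P1 and P2" turns into "return Q's type only after the premises' checks succeed".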
Posted by chanwit at 8:45 PM 0 comments
Labels: type system
Monday, December 07, 2009
Groovy Performance Update - Long running programs are as fast as Java
It's been almost a year and a half since I last posted anything about Groovy performance. As I mentioned previously, I am still working on this topic, so it's time for a small update.
The technique is to hook into Groovy's callsite caching process. Each callsite about to be called is profiled, its runtime information is analysed, and faster code is then generated for it: the callsite is replaced by its equivalent direct call. The replacement is done through JVMTI, similar to the trick implemented in the previous version of GJIT. However, the concept used in this version of GJIT has changed from fully automatic to manually controlled. This means that only selected callsites will be optimised, and you, as a developer, can control this optimisation via the mechanism provided. This concept is not new; I posted about it a while ago.
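The idea can be sketched roughly as follows (the names here are mine for illustration, not GJIT's real API): a call site starts out profiling itself, counting invocations, and once it is hot enough it replaces itself with a direct call.

```java
import java.util.function.IntUnaryOperator;

// Hypothetical sketch of a self-optimising call site (these names are mine,
// not GJIT's actual API). The real implementation rewrites bytecode via JVMTI;
// here the "replacement" is just swapping an object reference.
public class CallSiteDemo {
    interface CallSite { int call(int arg); }

    // Fast path: behaves like a direct, already-optimised call.
    static final class DirectCallSite implements CallSite {
        final IntUnaryOperator target;
        DirectCallSite(IntUnaryOperator target) { this.target = target; }
        public int call(int arg) { return target.applyAsInt(arg); }
    }

    // Slow path: counts invocations, then swaps in the direct version.
    static final class ProfilingCallSite implements CallSite {
        final IntUnaryOperator target;
        final int threshold;
        int count;
        CallSite current = this;            // the currently installed call site
        ProfilingCallSite(IntUnaryOperator target, int threshold) {
            this.target = target;
            this.threshold = threshold;
        }
        public int call(int arg) {
            if (++count >= threshold) {
                current = new DirectCallSite(target);   // "replace" the call site
            }
            return target.applyAsInt(arg);
        }
    }

    public static void main(String[] args) {
        ProfilingCallSite site = new ProfilingCallSite(x -> x + 1, 3);
        for (int i = 0; i < 5; i++) site.current.call(i);
        System.out.println(site.current.getClass().getSimpleName()); // DirectCallSite
    }
}
```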
The latest results I have are from micro-benchmarks. In the long run, after the related call sites are optimised, all of them run at the same speed as the equivalent Java programs.
Let's examine the graph of the Fibonacci benchmark:
You can see that there are two interesting regions in the graph before the Groovy program reaches the same speed as Java:
1. At the beginning, there is overhead from profiling and from Groovy's callsite instantiation system.
2. Because optimisation runs in the background, the optimiser thread needs time for analysis and code generation. (I have also noted the overhead of the HotSpot VM, for reference.)
After the optimisation phase is completed, the Groovy program's speed overlaps with Java's. This approach works fine for a long-running program, but not for scripting. Why? I leave this as an exercise for the reader ;-)
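For context, the benchmark shape is the usual recursive Fibonacci, timed over repeated iterations so that the warm-up region and the steady state are both visible. This is a minimal harness of my own, not the exact one behind the graph:

```java
public class FibBench {
    // The classic exponential-time recursive Fibonacci.
    static int fib(int n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

    public static void main(String[] args) {
        // Early iterations include profiling and optimisation overhead
        // (regions 1 and 2 above); later ones show the steady-state speed.
        for (int iter = 0; iter < 10; iter++) {
            long start = System.nanoTime();
            int result = fib(25);
            long micros = (System.nanoTime() - start) / 1_000;
            System.out.println("iter " + iter + ": fib(25) = " + result + " in " + micros + " us");
        }
    }
}
```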
Posted by chanwit at 6:55 PM 0 comments
Labels: bytecode, GJIT, Groovy, Optimisation, Productivity
Saturday, November 14, 2009
Analysis of static Groovy claims
"It is possible to write a statically typed compiler for Groovy, keeping all the goodies of this beautiful language." This is partially true. I do not think I can create a static compiler that covers all Groovy features at once.
"It is possible to implement very powerful type inference for local variables and closures, so we don't need to provide more type information than is necessary and/or useful for documentation purposes." This is partially true. A language cannot fully support type inference; type inference has its own limits, and it is known to be P-complete or NP-complete, depending on the implementation [see this].
"It is possible to have compile-time metaprogramming (properties, categories, default Groovy methods, mixins, AST transformations - everything but runtime metaprogramming)." This is also true, but adding more phases to the compiler will make it slower. When slow compilation is combined with the JVM's startup time, you may not be happy using the language as a scripting tool. Pluggable type systems might help with this issue.
"It is possible to support Scala-like traits (also known as interfaces with default implementations), which are an extremely strong metaprogramming tool." This is true. No doubt.
"It is even possible to compile a piece of code in so-called mixed mode (resolve and call statically what we can, and call the rest dynamically). This mode is great, for example, for mixing computation with building markup, or for populating UI elements with the results of a computation." This is true. There are also approaches called soft typing and gradual typing, which could help one achieve both static and dynamic calls in the same language. However, building markup can be done with a simpler trick; I showed one in my last post.
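A minimal illustration of the mixed-mode idea in plain Java (reflection stands in for the dynamic path; all names here are mine): resolve the call statically when the static type is known, and fall back to a runtime lookup when it is not.

```java
import java.lang.reflect.Method;

// Sketch of "mixed mode" dispatch: a direct call when the target is known
// at compile time, a reflective (dynamic) call otherwise. Illustrative only.
public class MixedMode {
    public static String greet(String name) { return "Hello, " + name; }

    // Static path: the compiler resolved the target, so this is a direct call.
    static String callStatically(String name) {
        return greet(name);
    }

    // Dynamic path: the target is resolved at runtime, like a dynamic-language call.
    static Object callDynamically(Class<?> receiver, String methodName, Object arg)
            throws Exception {
        Method m = receiver.getMethod(methodName, arg.getClass());
        return m.invoke(null, arg);  // null receiver: greet is static
    }

    public static void main(String[] args) throws Exception {
        System.out.println(callStatically("static world"));
        System.out.println(callDynamically(MixedMode.class, "greet", "dynamic world"));
    }
}
```

A mixed-mode compiler would emit the first form wherever types are known and the second form (or an equivalent callsite mechanism) everywhere else.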
"It is even possible that, because of type inference, statically compiled code can be a little less verbose compared to Groovy (almost no need for 'as' conversions, for example)." This is probably false. In general, I do not think a static language will do better than a dynamic one here.
Posted by chanwit at 3:52 PM 0 comments