Tejus Parikh

I'm a software engineer that writes occasionally about building software, software culture, and tech adjacent hobbies. If you want to get in touch, send me an email at [my_first_name]@tejusparikh.com.

New Year, New Blog Layout, New Plugins

Posted by Tejus Parikh on January 02, 2009

Some people have noticed, but since most of you actually just read the RSS feed, I guess most didn't notice: the blog has a new theme. I had a few major reasons for changing it. I liked the custom theme that I used to have, but it was just one more thing to maintain, and it got somewhat tiring keeping it up to date with newer Wordpress features. It was just not something I wanted to deal with anymore.

Along with this change, I'm also moving my photo galleries off of Gallery2. The reasons were the same: it was yet another system to maintain and another set of themes to keep updated, and just not worth the time and effort. I'd rather spend time uploading pictures and blogging than making CSS and PHP changes. I get enough of that kind of stuff in the day job. I half-way debated paying for a Flickr Pro account. I might still go that way eventually, but I've been hosting my own photos since before Flickr existed and I didn't want to give up that geek cred.

My criteria for new photo software were that it had to integrate with Wordpress, so that I'll only need one theme, and that it had to be incredibly simple to keep up to date. Eventually I settled on NextGEN Gallery for Wordpress. It appeared to be the most feature-rich and complete gallery for Wordpress, and having migrated a few of my galleries, I've found it easy to use. Of course, it has some limitations that weren't present in Gallery2: it does not have an interface for rotating images, and making the interface support both the original and scaled images requires some hacking. These limitations are small compared to the work required to make two pieces of software look the same.

Of course, since I was in a plugin-happy mood, I added a few more. Just in case you're interested, this is the full run-down of all the software I'm now running on this blog: Interface:

Read the full post »

Now an employee of Premiere Global

Posted by Tejus Parikh on January 15, 2009

I can’t believe that it’s already been 2 months since I joined Premiere Global’s UI team for their email marketing platform. Joining Premiere is a bit of a shift for me. First, PGI is a public company; it has more employees than I can remember; and for the first time in years, I actually have a title (well, two if you count Chief Spacial Officer).

Read the full post »

Cloud Camp Atlanta

Posted by Tejus Parikh on January 21, 2009

Cloud Camp was a topical mini BarCamp that was about, of all things, Cloud Computing. While I’m glad I attended, I felt it could have been even better, and I had some reservations about some of the sessions.

Read the full post »

iFart as a Benchmark for Brilliance

Posted by Tejus Parikh on January 25, 2009

On Friday I had lunch with the founder and CEO of Return7, Amro Mousa. The discussion of course covered the state of iPhone development and what makes money on the App Store. Sure, there are some folks putting deposits on Ferraris because they built super useful or fun apps. But there are others in the Ferrari dealership because they made something like iFart or SoundGrenade: apps that don’t really do anything, but are cheap and appeal to sophomoric sensibilities.

Read the full post »

Nikon D60 First Shots at Kennesaw Mountain

Posted by Tejus Parikh on February 02, 2009

[flickr-gallery mode="photoset" photoset="72157616749046365"]

Read the full post »

Moving On Up

Posted by Tejus Parikh on February 12, 2009

Wordpress is now the CMS for vijedi.net. Everything that I do on the site is recorded here in some way, so it just made sense to have Wordpress actually be at the top-level directory.

Read the full post »

Chaining Mootools Events Explained

Posted by Tejus Parikh on February 14, 2009

There’s plenty of documentation out there explaining how to chain Mootools events, but very little of it explains what’s actually going on, why the code looks the way it does, and, most importantly, how to get it to do what you want. This is my attempt to fill in the gap with Mootools 1.2.1.

Chaining

The basic problem is the way transitions are done in javascript. Every transition library effectively just does something very fancy around setTimeout. This works well if you’re just sliding in an element, but if you want to slide out one element and then slide in another, you can get something that looks very weird if you’re not careful. Transitions aren’t atomic functions: when the javascript interpreter hits a setTimeout in Transition A, it moves on to Transition B, and vice versa. Effectively, they happen at the same time. To get what you want, Mootools includes the Chain class. From the doc:
A Utility Class which executes functions one after another, with each function firing after completion of the previous. Its methods can be implemented with Class:implement into any Class, and it is currently implemented in Fx and Request. In Fx, for example, it is used to create custom, complex animations.
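Before getting to Chain itself, the overlap is easy to see without any framework. Here is a standalone sketch (not Mootools code; the little timer queue just simulates what setTimeout-driven effect libraries do):

```javascript
// Framework-free simulation: each "transition" advances one frame per timer
// callback, the way setTimeout-based effect libraries work.
var queue = [];   // pending {time, fn} jobs, standing in for the browser's timer queue
var clock = 0;
var log = [];

function fakeSetTimeout(fn, delay) { queue.push({ time: clock + delay, fn: fn }); }

function runTimers() {
  while (queue.length) {
    queue.sort(function (a, b) { return a.time - b.time; });  // earliest job first
    var job = queue.shift();
    clock = job.time;
    job.fn();
  }
}

function transition(name, frames) {
  var i = 0;
  function step() {
    log.push(name + ':' + i);
    if (++i < frames) fakeSetTimeout(step, 10);  // schedule the next frame
  }
  fakeSetTimeout(step, 10);
}

// Start "A", then "B" right after it -- without chaining, their frames interleave:
transition('A', 3);
transition('B', 3);
runTimers();
console.log(log.join(' '));  // A:0 B:0 A:1 B:1 A:2 B:2
```

Every frame of A is interleaved with a frame of B; chaining fixes this by only scheduling B's first frame after A's last one completes.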
Chain is implemented by the Fx.Tween class, the class that allows you to transition between various style properties. This super simple example shows how to switch between background colors for the element with id="example1" (download mootools_chaining):

    var fx = new Fx.Tween($('example1'));
    fx.start('backgroundColor', 'red').chain(function() {
        this.start('backgroundColor', 'blue');
    }).chain(function() {
        this.start('backgroundColor', 'green');
    });

All I’m doing here is creating a tween on the element example1, then turning the background red, blue, and green (in that order). Ok, that looks easy enough. But what if I want to do something more complex, such as fading out one element (example2_header1) and fading in another (example2_header2)? The first thought would be to do something like:

    // THE WRONG WAY TO DO IT!
    var fx = new Fx.Tween($('example2_header1'));
    fx.start('opacity', 0).chain(function() {
        this.set('display', 'none');
    }).chain(function() {
        $('example2_header2').setStyle('display', 'block');
    }).chain(function() {
        $('example2_header2').tween('opacity', 1);
    });

This example will never show the element example2_header2. To understand why, we need to look at the difference between the Fx.Tween.set and Fx.Tween.start methods.

Fx.Tween.set

set: function(property, now){
	if (arguments.length == 1){
		now = property;
		property = this.property || this.options.property;
	}
	this.render(this.element, property, now, this.options.unit);
	return this;
}

Fx.Tween.start

start: function(property, from, to){
	if (!this.check(arguments.callee, property, from, to)) return this;
	var args = Array.flatten(arguments);
	this.property = this.options.property || args.shift();
	var parsed = this.prepare(this.element, this.property, args);
	return this.parent(parsed.from, parsed.to);
}

Notice the difference in the return path: Fx.Tween.start calls into the Fx parent class, but Fx.Tween.set does not. The Fx.start method creates a timer, and when that completes, it calls the Fx.onComplete method:

onComplete: function(){
	this.fireEvent('complete', this.subject);
	if (!this.callChain()) this.fireEvent('chainComplete', this.subject);
},

Aha! This is why the second example fails: for non-fx function calls, this.callChain() must be called manually. Thus, the code to get it all working is:

    var fx = new Fx.Tween($('example2_header1'));
    fx.start('opacity', 0).chain(function() {
        this.set('display', 'none');
        this.callChain();
    }).chain(function() {
        $('example2_header2').setStyle('display', 'block');
        this.callChain();
    }).chain(function() {
        $('example2_header2').tween('opacity', 1);
    });

You can download the full working example (including the HTML around it).

Read the full post »

StartupRiot 2009

Posted by Tejus Parikh on February 19, 2009

Sonali and I have been working on a project to make a lightweight, hosted project management tool. We’re calling it SCMPLE, and yesterday was our first pitch. You can check out Our Slides.

Read the full post »

Evaluating Javascript Performance

Posted by Tejus Parikh on February 25, 2009

There are a lot of factors to take into account when you pick a javascript framework. Picking the right tool is a balance of developer familiarity, the task at hand, and performance. Measuring a framework’s performance is tricky: no benchmark will account for what your app will do in real life, and each framework has different features, making a straight head-to-head comparison impossible. However, that doesn’t mean we don’t need to know the relative speed of different frameworks. I found myself in that position recently, and attempted to gauge the relative speed of Mootools, jQuery, Prototype, and Appcelerator. The rationale behind picking these four frameworks is that I understood them well enough to create a simple test.
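For flavor, a simple test along these lines boils down to a timing harness like the sketch below. This is framework-agnostic and illustrative only; the task and iteration count are stand-ins, not the post's actual benchmark:

```javascript
// Run a task many times and report total and per-iteration cost.
function benchmark(name, task, iterations) {
  var start = Date.now();
  for (var i = 0; i < iterations; i++) {
    task();
  }
  var elapsed = Date.now() - start;
  return { name: name, totalMs: elapsed, perCallMs: elapsed / iterations };
}

// Example task standing in for framework work (selector queries, DOM writes, etc.):
var result = benchmark('string concat', function () {
  var s = '';
  for (var i = 0; i < 100; i++) s += 'x';
}, 1000);

console.log(result.name + ': ' + result.perCallMs + 'ms per call');
```

The usual caveats apply: warm up the interpreter first, use enough iterations to get past timer resolution, and compare frameworks only on equivalent tasks.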

Read the full post »

Snow in Atlanta

Posted by Tejus Parikh on March 02, 2009

Real snow, in Atlanta, at the beginning of March.

Read the full post »

Reasons for using the GIT SVN bridge

Posted by Tejus Parikh on March 12, 2009

I’ve already posted about why I like Git better than Subversion. This motivated me to point the git-svn bridge at my work SVN repository. Using the bridge is a bit confusing at first; this tweet by Calvin Yu is right on the money. Git’s interface isn’t great, but it is powerful. There are two major resources that I use to work with the bridge effectively: this blog post and the git-svn man page, which has a lot of examples. Between the two, I’ve been able to do almost everything I want. So why go through the effort of cloning your SVN repository (which took half a day for me)? What follows are my reasons.

Read the full post »

Cruising - Jacksonville, Key West, Nassau

Posted by Tejus Parikh on March 22, 2009

[flickr-gallery mode="photoset" photoset="72157616693268038"]

Read the full post »

Acer Aspire One Clock Adjust

Posted by Tejus Parikh on March 26, 2009

Since the time change, I’ve been having an issue with the clock on my Aspire One. Every time it sleeps, the clock ends up exactly one hour in the past. It wasn’t a timezone problem, since running date correctly showed the time as EDT. The fix is to run the following two commands:


$ sudo ntpdate us.pool.ntp.org

$ sudo hwclock --systohc

This will sync your hardware clock to the time from the network time service.
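If you’d rather not re-run these by hand after every sleep, the same two commands can be attached to resume. This is a hedged sketch assuming a pm-utils based setup; the script path and hook arguments may differ on your distro:

```shell
#!/bin/sh
# Hypothetical hook at /etc/pm/sleep.d/95fixclock: resync the clock on resume.
# pm-utils invokes sleep.d scripts with "suspend"/"resume" (or "hibernate"/"thaw").
case "$1" in
    resume|thaw)
        ntpdate us.pool.ntp.org && hwclock --systohc
        ;;
esac
```

Make the script executable and pm-utils will run it after each resume, so the clock never drifts an hour behind again.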

Read the full post »

Integration testing Spring MVC Annotated Controllers

Posted by Tejus Parikh on April 03, 2009

Annotations and POJO controllers make it dead simple to unit test the web layer and ensure that the logic within it is correct. What’s not as clear is how to quickly (and automatically) test the configuration of your controllers and ensure that the correct controller method is called, with the correct parameters, for a given request. After looking through the Spring MVC tests, it becomes apparent that you want to create a DispatcherServlet and send it requests. If the DispatcherServlet is initialized with the correct context, it will behave just as it does in your web container: it will look at the request, find the correct handler, and make the appropriate controller call. I created three classes to help set up the environment.


public class MockWebContextLoader extends AbstractContextLoader {

    public static final ServletContext SERVLET_CONTEXT = new MockServletContext("/WebContent", new FileSystemResourceLoader());

    private final static GenericWebApplicationContext webContext = new GenericWebApplicationContext();

    protected BeanDefinitionReader createBeanDefinitionReader(final GenericApplicationContext context) {
        return new XmlBeanDefinitionReader(context);
    }

    public final ConfigurableApplicationContext loadContext(final String... locations) throws Exception {
        SERVLET_CONTEXT.setAttribute(WebApplicationContext.ROOT_WEB_APPLICATION_CONTEXT_ATTRIBUTE, webContext);
        webContext.setServletContext(SERVLET_CONTEXT);
        createBeanDefinitionReader(webContext).loadBeanDefinitions(locations);
        AnnotationConfigUtils.registerAnnotationConfigProcessors(webContext);
        webContext.refresh();
        webContext.registerShutdownHook();
        return webContext;
    }

    public static WebApplicationContext getInstance() {
        return webContext;
    }

    protected String getResourceSuffix() {
        return "-context.xml";
    }
}

The MockWebContextLoader loads the spring config locations and creates a WebContext. In order for this environment to mimic the one in your web container, you will need to pass in the same configs; I will show you how to do that later. To help validate the success of a test, I’ve created a ViewResolver that just echoes the view name into the response. You could have the ViewResolver return the correct view, but parsing that to gauge success seemed like too much of a headache.

public class TestViewResolver implements ViewResolver {

    public View resolveViewName(final String viewName, Locale locale) throws Exception {
        return new View() {
            public String getContentType() {
                return null;
            }

            @SuppressWarnings({"unchecked"})
            public void render(Map model, HttpServletRequest request, HttpServletResponse response) throws Exception {
                response.getWriter().write(viewName);
            }
        };
    }
}

Finally, I created an abstract class that handles the creation of the DispatcherServlet for all tests that extend it. I’ve put my spring configuration files on the classpath that my tests run in; if your configs are elsewhere, you will need to modify MockWebContextLoader to look in a different path.

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(loader=MockWebContextLoader.class, locations={"/classes/spring.xml", "/springmvc-servlet.xml"})
public abstract class AbstractControllerTestSupport {

    private static DispatcherServlet dispatcherServlet;

    @SuppressWarnings("serial")
    public static DispatcherServlet getServletInstance() {
        try {
            if(null == dispatcherServlet) {
                dispatcherServlet = new DispatcherServlet() {
                    protected WebApplicationContext createWebApplicationContext(WebApplicationContext parent) {
                        GenericWebApplicationContext wac = new GenericWebApplicationContext();
                        wac.setParent(MockWebContextLoader.getInstance());
                        wac.registerBeanDefinition("viewResolver", new RootBeanDefinition(TestViewResolver.class));
                        wac.refresh();
                        return wac;
                    }
                };

                dispatcherServlet.init(new MockServletConfig());
            }
        } catch(Throwable t) {
            Assert.fail("Unable to create a dispatcher servlet: " + t.getMessage());
        }
        return dispatcherServlet;
    }

    protected MockHttpServletRequest mockRequest(String method, String uri, Map<String, String> params) {
        MockHttpServletRequest req = new MockHttpServletRequest(method, uri);
        for(String key : params.keySet()) {
            req.addParameter(key, params.get(key));
        }
        return req;
    }

    protected MockHttpServletResponse mockResponse() {
        return new MockHttpServletResponse();
    }
}

With this harness, it’s trivial to automatically check the configuration of the web controllers. Since the functionality is tested separately from the configuration, problems can be isolated more quickly, leaving more time to fix them.
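As an illustration, a concrete test built on this harness could look like the sketch below. The controller mapping (/user/view.do), the id parameter, and the expected view name "userView" are hypothetical stand-ins, not from the original post:

```java
// Hedged sketch: drives a hypothetical /user/view.do mapping through the real
// DispatcherServlet that the harness above configures.
public class UserControllerMappingTest extends AbstractControllerTestSupport {

    @Test
    public void viewRequestRoutesToUserController() throws Exception {
        Map<String, String> params = new HashMap<String, String>();
        params.put("id", "42");

        MockHttpServletRequest request = mockRequest("GET", "/user/view.do", params);
        MockHttpServletResponse response = mockResponse();

        getServletInstance().service(request, response);

        // TestViewResolver echoed the resolved view name into the response body,
        // so this asserts the request reached the expected controller method.
        Assert.assertEquals("userView", response.getContentAsString());
    }
}
```

If the mapping or parameter binding is wrong, the echoed view name (or an error status on the mock response) makes the failure obvious without deploying to a container.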

Read the full post »

Colorbox: A customizable JQuery Lightbox

Posted by Tejus Parikh on June 07, 2009

As soon as someone mentions Web 2.0, you know that you’re going to need a modal dialog. If a designer is involved, then you also know that it’s going to have to look nothing like any of the standard Lightbox clones.

Read the full post »

Hiking in Fort Mountain

Posted by Tejus Parikh on June 21, 2009

Sonali and I hadn’t been to North Georgia in forever, so we decided to brave the heat and head someplace new. When we lived in Alpharetta, we frequented the northeast side of the state. Lately we’ve been trying to see what’s in the eastern parts.

Read the full post »

Customizing Spring Security with Legacy Transactions and Authorization

Posted by Tejus Parikh on June 22, 2009

A few months ago at work I got stuck with a rather daunting assignment: make Spring Security work alongside our legacy security model. The rationale was sound. We have a legacy UI and we want a smooth transition to the new one, which means that as much of the users’ information as possible, including their credentials, needs to carry over. Furthermore, our application runs load-balanced in the production environment and we can’t make use of sticky sessions, so the solution needs to integrate with our database-backed sessions. If that were not complicated enough, there was also a lot of hidden authorization code that relied on specific properties being set in ThreadLocal. After a few months of trial and error, I think I finally have a solution that both works and doesn’t lock the database. There are quite a few steps and the process is somewhat lengthy, so the rest of this tutorial is under the fold.

Important Things to Understand Before You Start

Spring Security works as a servlet filter and will execute before the interceptors or controllers. This is crucial if you are relying on one of those mechanisms for your transaction management. The Spring Security filter chain can be configured with an arbitrary number of filters, but subsequent filters will only execute if the previous filter calls:

filterChain.doFilter(request, response);

One filter that does not call this method is the DefaultAuthenticationFilter. When the DefaultAuthenticationFilter handles the authentication, it terminates execution of the filter chain and replays the original request, separately from the authentication request. This second request could occur on an entirely different thread, or even a different server in a load-balanced environment, which has implications for both transaction management and ThreadLocal-based legacy authorization. I’ve drawn up a diagram to try and explain what’s happening: [flickr]3651548939[/flickr]

What You Need to Do

Once that’s understood, it’s pretty clear what needs to happen.
  1. Authenticate the user
  2. Create the Session
  3. Commit the Transaction
  4. Create a PreAuthenticationFilter to load the session and set token in ThreadLocal
  5. Open a Transaction
  6. Let Spring do its thing
  7. Commit the transaction and remove token from ThreadLocal
It’s easy in theory, but somewhat difficult to sift through all the documentation and find exactly what you need to do.

Creating a Custom AuthenticationProcessingFilter

You need to extend AuthenticationProcessingFilter in order to configure your own backing sessions and perform your own cleanup. Mine looks like this:

package net.vijedi.spring;

public class CustomAuthenticationProcessingFilter extends AuthenticationProcessingFilter {

    @Override
    protected void onSuccessfulAuthentication(HttpServletRequest request,
            HttpServletResponse response, Authentication authResult) throws IOException {
        createSession(request, response, authResult);
        commit();
        super.onSuccessfulAuthentication(request, response, authResult);
    }

    @Override
    protected void onUnsuccessfulAuthentication(HttpServletRequest request,
            HttpServletResponse response, AuthenticationException failed) throws IOException {
        commit();
        super.onUnsuccessfulAuthentication(request, response, failed);
    }
}

It’s important to implement custom logic in both overridden methods, since both will short-circuit the rest of the stack. In order to use your own filter, you must disable auto-configuration and add the following lines to the spring security configuration file:

The line

makes the authenticationManager available to inject into other beans. The authenticationProcessingFilterEntryPoint configures which page is shown when the user is asked to log in; this is configured for you if you use Spring’s auto-configuration.

Creating the PreAuthenticationFilter to Re-Authenticate

Since you’ve set allowSessionCreation to false, you’ll need to re-authenticate with every request. My scheme uses a cookie set in the createSession method mentioned above. Whatever your scheme is, you’ll need to create another filter and set it to execute before the AuthenticationProcessingFilter.

package net.vijedi.spring;

public class PreAuthenticationFilter extends AbstractPreAuthenticatedProcessingFilter {

    /**
     * Try to authenticate a pre-authenticated user with Spring Security if the user has not yet been authenticated.
     */
    @Override
    public void doFilterHttp(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain) throws IOException, ServletException {
        try {
            loadSession(request);
            if (SecurityContextHolder.getContext().getAuthentication() == null) {
                doAuthenticate(request, response);
            }
        } catch (SecurityException se) {
            LOG.warn("The cookie is valid, but there is no corresponding session in the database");
        }
        filterChain.doFilter(request, response);
    }

    @Override
    protected Object getPreAuthenticatedPrincipal(HttpServletRequest request) {
        return SessionUtil.getUsername();
    }

    @Override
    protected Object getPreAuthenticatedCredentials(HttpServletRequest request) {
        return SessionUtil.getPassword();
    }
}

First, I load the session from the cookie, which puts the credentials in ThreadLocal. I then use those credentials to re-authenticate with Spring. Of course, this requires some corresponding xml:

Clean Up After Yourself

Now that I’ve stuck something into ThreadLocal, I need to make sure it gets removed at the end of request processing. This is where I need a HandlerInterceptor. I implement the afterCompletion method, since I want this to be the last thing run before the request finishes processing.

package net.vijedi.spring;

public class SecurityInterceptor implements HandlerInterceptor {

    public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception e) throws Exception {
        SessionUtil.clear();
        commit();
    }

    // Other methods removed for brevity
}

This removes the authentication from ThreadLocal and commits any open transaction. This bit of xml needs to go in the *-servlet.xml file, not in the security file:

Logging Out

Almost done, but you also need a custom filter for logging out. It follows the same pattern as the PreAuthenticationFilter, so there’s no need to repeat it here.

Conclusion

It’s a lot of typing, but these are all the entry points you’ll need to make Spring Security work with any legacy cruft. Corrections and comments are always welcome.

Read the full post »

ViJedi is in the Cloud

Posted by Tejus Parikh on July 01, 2009

A few months ago the power supply on my server totally tanked. The cost to replace it was somewhere around $99 and some elbow grease, but I was beyond the point where I wanted to maintain my own hardware. This would be the third hardware failure in as many years, and I was frustrated.

Read the full post »

Ruby Enterprise Edition, Phusion Passenger, and Nginx on Ubuntu Jaunty

Posted by Tejus Parikh on July 26, 2009

I’ve got a 512MB Slicehost instance on which I needed to launch a new Rails app (details coming soon). Since 512MB isn’t a whole lot of memory anymore, I wanted to optimize the services on this machine as much as possible. The most efficient setup appeared to be nginx with the Passenger nginx module and Ruby Enterprise Edition. It was actually pretty easy to get all this running, and it would have been even easier had I followed the correct order of operations. Nginx doesn’t support dynamically loaded runtime modules; everything has to be built in at compile time. Because of this, running:


$ sudo aptitude install nginx

will get you nothing usable besides the init script. Instead, the best approach is:
  1. Install Ruby Enterprise Edition
  2. Install nginx-passenger
  3. Adjust paths
  4. Modify the /etc/init.d/nginx script
  5. Reinstall all gems for REE
  6. Tweak the nginx config
  7. Add nginxensite and nginxdissite scripts to make it more ubuntu-y
The first two steps are pretty self-explanatory and start with the directions found here. Of course, since you are installing nginx, you want to run the following instead of the command for Apache:

$ /opt/ruby-enterprise/bin/passenger-install-nginx-module
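Step 5 from the list, reinstalling your gems for REE, deserves a note here: gems installed for the system Ruby are invisible to Ruby Enterprise Edition, so they have to be installed again with REE's own gem binary. A sketch (the gem names are illustrative, not a required list):

```shell
# Reinstall the gems your app needs against REE's gem command (example gems only):
sudo /opt/ruby-enterprise/bin/gem install rails rake mysql
```

Putting REE's bin directory first on your path (covered below) lets you drop the full path in day-to-day use.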

Next you want to tweak the Ubuntu /etc/init.d/nginx script to reflect the new paths. You need to change the two lines that read:

PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

DAEMON=/usr/sbin/nginx

to

PATH=/opt/nginx/sbin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

DAEMON=/opt/nginx/sbin/nginx

Next, you need to add the Ruby Enterprise Edition interpreter to your path: open /etc/login.defs and update the path settings with the location of Ruby Enterprise. Installing gems is pretty straightforward. There are a few small tweaks to make to the nginx config file so that it works more like the Ubuntu package. At the top of nginx.conf, add or change these settings:

user www-data;

worker_processes  4;



error_log  /var/log/nginx/error.log;

pid        /var/run/nginx.pid;

In the http section, add:

    include /etc/nginx/sites-enabled/*;

Finally, it’s time to create the nginxensite and nginxdissite scripts:

#!/bin/bash

# nginxensite

if [ -z "$1" ]; then
        echo
        echo "You must specify a site name"
        exit 0
fi

NGINX_CONF=/etc/nginx
CONF_FILE="$1"
AVAILABLE_PATH="$NGINX_CONF/sites-available/$CONF_FILE"
ENABLED_PATH="$NGINX_CONF/sites-enabled/$CONF_FILE"

echo
if [ -e "$AVAILABLE_PATH" ]; then
        ln -s "$AVAILABLE_PATH" "$ENABLED_PATH"

        echo "$1 has been enabled"
        echo "run /etc/init.d/nginx reload to apply the changes"
else
        echo "$AVAILABLE_PATH does not exist"
        exit 1
fi

and

#!/bin/bash

# nginxdissite

if [ -z "$1" ]; then
        echo
        echo "You must specify a site name"
        exit 0
fi

NGINX_CONF=/etc/nginx
CONF_FILE="$1"
AVAILABLE_PATH="$NGINX_CONF/sites-available/$CONF_FILE"
ENABLED_PATH="$NGINX_CONF/sites-enabled/$CONF_FILE"

echo
if [ -e "$ENABLED_PATH" ]; then
        rm "$ENABLED_PATH"

        echo "$1 has been disabled"
        echo "run /etc/init.d/nginx reload to apply the changes"
else
        echo "$ENABLED_PATH does not exist, ignoring"
fi

There you go, a working nginx install for your rails apps in less than an hour.

Read the full post »

Grafting Spring Transactions on Legacy Transaction Models

Posted by Tejus Parikh on July 30, 2009

In another part of the continuing series on getting Spring to work properly with our legacy components, I recently had to revisit the way we were handling transactions. In the previous post on this topic, I demonstrated how you can use a HandlerInterceptor to control your transactions. In this post, I demonstrate how you can use Spring transactions even if you can’t use the Spring way of configuring your Hibernate SessionFactory and need to support legacy code. The end result is much like putting a Ferrari body on a Pontiac Fiero, but it will accomplish the job.

The earlier solution has a significant drawback: the transaction commits after the controller finishes processing the request. Since Hibernate will not flush its session until a commit, all database errors occur in a place where you can’t respond to them easily. Pushing the transaction boundary to the level below the controller (where it belongs anyway) allows for correct error handling at the UI level. This can be accomplished with Spring Transactions. Spring needs two things for @Transactional-annotated code to do what you expect: the session factory and the transaction manager. Our legacy code used a statically initialized class to create a session factory, so my first step was to manually create the HibernateTransactionManager with the correct session factory.


	sessionHolder = new HibernateUtilSessionHolder();
	SessionFactory sessionFactory = configuration.buildSessionFactory();
	sessionHolder.setSessionFactory(sessionFactory);

	HibernateTransactionManager htm = new HibernateTransactionManager(sessionFactory);
	sessionHolder.setTxManager(htm);

The sessionHolder is just a class I use to hold the current SessionFactory and TransactionManager; it is also the interface between the legacy code and the TransactionManager. Here is an example method from HibernateUtilSessionHolder:

    public Session getSession() {
        SessionHolder sessionHolder = (SessionHolder) TransactionSynchronizationManager.getResource(getSessionFactory());
        if (sessionHolder == null) {
            if (LOG.isDebugEnabled()) {
                LOG.debug("opening session");
            }
            Session s = sessionFactory.openSession();
            sessionHolder = new SessionHolder(s);
            TransactionSynchronizationManager.bindResource(getSessionFactory(), sessionHolder);
        }
        if (LOG.isDebugEnabled()) {
            LOG.debug("GETTING session");
        }
        return sessionHolder.getSession();
    }

Since I hide all the functionality of the TransactionManager behind the previous API, none of the legacy code is aware that the underlying transaction model has changed, and manual commits work as they have before. The next trick is to get annotations working within the context of new code. Exposing the SessionFactory and TransactionManager to Spring is pretty easy. First, create two static methods to access what was created earlier:

    public PlatformTransactionManager getTransactionManagerInstance() {
        return sessionHolder.getTxManager();
    }

    public SessionFactory getSessionFactoryInstance() {
        return sessionHolder.getSessionFactory();
    }

Now expose them to the context by using Spring’s factory bean creation mechanism:

HibernateUtil is the static class that created the SessionFactory and TransactionManager; the other two lines show how to use it as a factory. Finally, all that’s left to do is to add

to the spring.xml file and mark up a bunch of code with @Transactional. It might be a Fiero underneath, but it still feels like a Ferrari.
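To show the payoff, here is a hedged sketch of what such annotated code could look like; AccountService, Account, and the HibernateUtil.getSession() accessor are hypothetical stand-ins, not from the original code:

```java
// Hypothetical service: with the SessionFactory and HibernateTransactionManager
// exposed as beans, Spring opens and commits the transaction around this method.
@Service
public class AccountService {

    @Transactional
    public void renameAccount(Long id, String newName) {
        // The legacy accessor returns the same session Spring manages, because
        // both resolve it through TransactionSynchronizationManager.
        Session session = HibernateUtil.getSession();
        Account account = (Account) session.get(Account.class, id);
        account.setName(newName);
        // No manual commit: the flush happens at method exit, so a database
        // error surfaces here, below the controller, where it can be handled.
    }
}
```

Because old and new code share one transaction, legacy manual commits and @Transactional methods can coexist during the migration.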

Read the full post »

Migrating TestTrack to JIRA

Posted by Tejus Parikh on August 05, 2009

In another long-fought battle that eventually led to victory, we finally moved off TestTrack to JIRA Studio. Of course, I got stuck with the task of figuring out how to get the open defects from one system to the other. Thankfully this ended up being relatively painless: the Jira4R gem allows for easy creation of issues with just a few lines of Ruby. There are a few caveats to using this approach, though. The documentation is almost non-existent and very out of date, and the gem is not available on any standard gem repository.

Read the full post »

Standalone Rails Migrations

Posted by Tejus Parikh on August 17, 2009

One of the most annoying things about deployments is dealing with databases. In the Java world, using Hibernate to generate your schema is pretty common. It works well in development, where you generally re-create and re-seed your database after each model change. However, it can be disastrous when deploying. It felt like each sprint needed a dedicated day for diff-ing the database schemas and figuring out what had changed and what needed to be applied. Often, indexes would be forgotten, resulting in unforeseen slowness.

Read the full post »

Startup Compensation and Fortune 500 Extraction

Posted by Tejus Parikh on September 09, 2009

This topic starts with a conversation I had a while back about how small the developer pool available to startups in Atlanta is, and how difficult it is to pry development talent away from jobs in large companies. While never explicitly stated, the baseline for comparison is The Valley. I think the situation is not as bad as many make it out to be.

Home Depot, UPS, Delta, AFLAC, and SunTrust conjure up very different images than Google, Yahoo!, Apple, eBay, and Oracle. Yet all are Fortune 500 companies and none of them count as startups. Still, we think of individuals working at the latter set of companies as more startup friendly. While not as terrible as the perception, I do believe this phenomenon is real, and it has little to do with the makeup or mentality of the workforce. It's a result of simple economics.

A standard assumption of working for a startup is that the employee will give up some base salary in return for performance-based compensation in the form of stock options. An Atlanta company puts itself at a hiring disadvantage by using the same compensation trade-off as its Valley counterparts. Let's put some numbers to this to see why.

I'm going to assume that the company has issued 1 million shares at $1/share, and will never dilute. Implicit in this assumption is that pre-money valuations and eventual dilution are similar between the average Atlanta and Valley companies. Differences here will skew the outcomes, but the goal isn't to figure out specifics; it's to compare the relative reward between regions.

Now the real numbers. According to Salary.com, the salary for a Senior Software Engineer is approximately $84,000 in Atlanta and $100,000 in Palo Alto. For exit numbers, I used the charts in this post from Scott Burkett's blog. The average exit in Atlanta is $259 million; in the Bay Area, it's $356 million. Generally, in my experience, the tacit assumption is that some dollar value of salary is forgone for equity. For this example, I'm going to use $2 of salary equating to one share. Percentage trade-offs make the final outcome look better, but there will still be a difference. The final large assumption is that everything goes to plan, and the company sells right at the end of the four-year vest for the average sale price in the region. Here are the numbers after 4 years:
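The table itself is in the full post, but the mechanics can be sketched in a few lines; the $20,000/year salary give-up below is a hypothetical figure for illustration, not one from the post:

```java
// Relative payoff of the salary-for-equity trade described above.
// The forgone-salary figure is hypothetical; everything else follows the
// post's assumptions (1M shares, $2 of salary per share, 4-year vest).
public class StartupComp {
    static final double SHARES_OUTSTANDING = 1_000_000;
    static final double SALARY_PER_SHARE = 2.0;   // $2 of salary buys one share
    static final int VEST_YEARS = 4;

    static double outcome(double exitValueUsd, double forgoneSalaryPerYear) {
        double shares = forgoneSalaryPerYear / SALARY_PER_SHARE * VEST_YEARS;
        double pricePerShare = exitValueUsd / SHARES_OUTSTANDING;
        return shares * pricePerShare;
    }

    public static void main(String[] args) {
        // Average exits: Atlanta $259M, Bay Area $356M; assume $20k/yr forgone.
        System.out.printf("Atlanta:  $%,.0f%n", outcome(259_000_000, 20_000));
        System.out.printf("Bay Area: $%,.0f%n", outcome(356_000_000, 20_000));
    }
}
```

With identical terms, the lower average exit means the same trade is worth roughly 27% less in Atlanta, on top of the lower base salary.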

Read the full post »

LASIK vs PRK: Differences and benefits with each

Posted by Tejus Parikh on September 17, 2009

Disclaimers first. This post is not medical advice and I'm not a medical professional. Always consult with your eye-care professional about anything involving your vision. Always. With that out of the way, I want to answer a question that I've gotten a lot recently. Almost everyone has heard of LASIK (laser-assisted in situ keratomileusis). Most companies and doctors offering corrective surgery advertise this procedure. Not as many people have heard of PRK (photorefractive keratectomy). PRK is the procedure that Sonali had done a few weeks ago. LASIK is what I will be having done tomorrow. Both are corrective outpatient eye surgeries that eliminate the need for the patient to wear corrective equipment, such as glasses or contacts. The recommended procedure depends on many factors. This post seeks to explain some of these differences.

Read the full post »

Are you there to ship, or write code?

Posted by Tejus Parikh on September 24, 2009

Amro Mousa tweeted about a great post by Joel Spolsky about Duct Tape Programmers. Also known as hackers, rockstars, and problem-solvers, duct tape programmers are the ones who just get it done. The line that resonated the most with me is the following quote from Jamie Zawinski:

Read the full post »

Hackintosh

Posted by Tejus Parikh on November 19, 2009

There's a sizeable gap between Apple's low-end standalone desktop (the Mac mini) and the next tier (the $2,500 Mac Pro). I found that my current mini wasn't keeping up with what I wanted to do. TextMate, Passenger, a couple of virtual machines, and Photoshop were enough to bring the machine, and its 3GB of RAM, to its knees.

Read the full post »

Zimbra Disaster Recovery

Posted by Tejus Parikh on November 22, 2009

I had queued up a post about the improvements (and deprovements) in Zimbra 6, but a comedy of errors led the power supply in my mail server to die before I had the chance. This is the second hardware failure this year, so we decided to move our mail and calendars to Google Apps.

Read the full post »

Is better photo printing worth it? (and fotoflot review)

Posted by Tejus Parikh on December 24, 2009

The holiday season is in full swing, which means shopping, lots of eating, some traveling, and lots of pictures. If you're not 80 years old, every photo you take is now digital. Of course, there are always a few each year that you really want a physical copy of, and maybe one that you would love to see blown up and hanging on your wall.

Read the full post »