Tejus Parikh

I'm the CTO and co-founder of WideAngle, and I write weekly about building startups and the technology that powers them from Atlanta, GA, the startup capital of the South. Follow me on Twitter or subscribe to stay in touch.

iPhone (but not my phone)

Posted by Tejus Parikh on January 10, 2007

Yesterday, Apple made a big splash with its announcement of the iPhone. Despite my generally favorable feelings towards Apple, I wasn't really all that impressed. Frequent readers of this blog might relate this to my recent purchase of the Sony Mylo, which has many of the features of the iPhone but lacks a touch-screen interface, Bluetooth, and cellphone capabilities. Granted, that might have a little to do with it, but the vast majority is that the iPhone suffers from the major problem that faces all convergence devices on the market: it's too expensive ($500-$600 with a 2-year service agreement) and it doesn't reduce the number of devices I have to carry in my pocket.

Read the full post »

Nokia N800

Posted by Tejus Parikh on January 11, 2007

Nokia has finally released the N800, the successor to the somewhat underpowered 770. As far as specs go, it looks like a minor upgrade; I'm sure the software boasts improvements too. I would give a more complete technical analysis, but the N800 page does not seem to render correctly in Opera on my underpowered laptop. That fact is either very ironic or very telling.

Read the full post »

Return of the Fujitsu

Posted by Tejus Parikh on January 14, 2007

One thing I've noticed is that in 2007 our laptops are as big as they were in 2003. It's been almost 4 years since I got my Fujitsu, and now that I've taken it back, I still haven't come across a laptop that I like more. I'm very impressed with the build of this little machine. Even after a tour of duty with my technology-destroying younger brother, it still works like a charm (once I cleaned the toothpaste off the keyboard).

Read the full post »

Now, Learn how to Write

Posted by Tejus Parikh on January 15, 2007

In a karma-based approach to making up for all the work I managed to get out of while younger, I’ve decided to start writing book reviews on this blog. This is good timing, since it coincides with my renewed interest in reading books.

Read the full post »

Xfce 4.4.0 Review

Posted by Tejus Parikh on January 25, 2007

The Xfce project has just announced the release of Xfce 4.4.0. For those unfamiliar with the project, Xfce is known for two things. First, it's built on the GTK toolkit but is extremely lightweight, taking a fraction of the memory that a full Gnome install requires; hence the Slackware folks have dubbed it the "Cholesterol Free Desktop Environment." Second, it was the first desktop environment to have a solid, non-buggy compositing manager. Since Gnome runs dog slow on this laptop, I decided to give it a spin.

Read the full post »

The Secret is don't watch "The Secret"

Posted by Tejus Parikh on January 28, 2007

In the last few months, I've mostly thought about the things I normally think about: being a competent and interesting writer, the Porsche Cayman, Java development, and an OS X sub-notebook. At no point did thoughts of self-help babble float across my cerebral cortex. This makes me highly skeptical of The Secret's premise, the "Law" of Attraction. This law holds that human thought is so powerful that it attracts the objects and states of being that one thinks about. If you have a lot of debt, or are stricken with cancer, you can change this simply by thinking of yourself as debt-free or cured.

Read the full post »

Using Spring Web Controllers with Dojo Dialogs

Posted by Tejus Parikh on January 28, 2007

Now getting back to some real blog content, and to clear out some of the backlog. Not too long ago, at work, I ran into the problem of using a URL backed by a Spring SimpleFormController inside a Dojo modal dialog widget. The modal dialog was intended to modify the properties of some object on the parent page. The goal was to have the Dojo dialog open, populate itself with the contents of the URL, submit the form and, if successful, close the dialog and update the parent; otherwise, show the errors. Our solution was to use Prototype's Ajax.Updater to populate the div and submit the form. The first step was to import the two JavaScript libraries we need, the aforementioned Dojo and Prototype:

<script type="text/javascript" src="dojo.js"></script>
<script type="text/javascript" src="prototype.js"></script>

After that, we needed to write the script to actually open the dialog, populate it, and submit the results.


<script type="text/javascript">
    dojo.require("dojo.widget.Dialog");

    var popupDialog;
    var dialogFormUrl;
    var parentFormId;

    dojo.addOnLoad(function() {
        popupDialog = dojo.widget.byId("dialog_id");
    });

    function hideDialog() {
        popupDialog.hide();
    }

    function _showDialog() {
        popupDialog.show();
    }

    function showDialog(formUrl, outerFormId, modelId) {
        var params = {};
        if(typeof modelId != 'undefined') {
            params = {id: modelId};
        }
        parentFormId = outerFormId;
        dialogFormUrl = formUrl;
        new Ajax.Updater('dialogForm',
                dialogFormUrl,
                {method: 'get',
                parameters: $H(params).toQueryString(),
                asynchronous: true,
                onSuccess: _showDialog});
    }

    function submit() {
        /* Submit the dialog form via Ajax so only the dialog updates */
        new Ajax.Updater('dialogForm',
                dialogFormUrl,
                {method: 'post',
                parameters: Form.serialize('dialogForm'),
                onComplete: processResult,
                asynchronous: true});
    }

    function processResult() {
        var elems = document.getElementsByClassName("errors");
        if(elems.length == 0) {
            refreshWindow();
        }
    }

    function refreshWindow() {
        if(typeof refreshParentPanel != 'undefined') {
            refreshParentPanel();
        } else {
            $(parentFormId).submit();
        }
    }
</script>



The method showDialog() is the entry point into the widget. Its function is to initialize some global variables and then call Ajax.Updater. The parameters for the function are formUrl (the URL for Ajax.Updater to retrieve), outerFormId (the form id of the parent), and modelId (the id of the model object we are attempting to modify). Calling Ajax.Updater with the 'get' method triggers the initial form view in the Spring controller, exactly as if the user had navigated to the URL manually. However, instead of replacing the contents of the browser window, the response is placed into a previously hidden div on the parent page.

Our dialog has two buttons: cancel and save. The cancel button calls the hideDialog function, which simply re-hides the dialog. The save button presents a few more challenges. Simply submitting the form will not work, since it would refresh the whole page, not only the dialog. Therefore, the submit action is wrapped in a call to Ajax.Updater. 'post' is specified here so that the SimpleFormController invokes its onSubmit methods.

The one remaining hurdle is that the controller returns an HTTP success status code regardless of validation errors, so we must add a JavaScript check. By our convention, all error messages have the element class "errors." Initially, this was so that we could easily make them all red, but it serves the dual purpose of signaling to our method whether the popup should be hidden or the validation errors displayed. It took a little work, but it should be clear how you can adapt almost any existing Spring controller for use inside a modal Dojo dialog.
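For reference, the script above assumes markup roughly along these lines. The ids dialog_id and dialogForm come from the script itself; everything else here (the form URL, the button labels, the parent form id, the model id) is a hypothetical placeholder, and the exact widget attributes vary across Dojo 0.x releases:

```html
<!-- Hidden dialog; dojo.widget.byId("dialog_id") looks this up on load -->
<div dojoType="dialog" id="dialog_id">
    <!-- Ajax.Updater replaces the contents of this div with the form
         rendered by the Spring controller -->
    <div id="dialogForm"></div>
    <button onclick="submit()">Save</button>
    <button onclick="hideDialog()">Cancel</button>
</div>

<!-- Somewhere on the parent page: open the dialog for model object 42 -->
<a href="#" onclick="showDialog('/editUser.htm', 'parentForm', 42); return false;">Edit</a>
```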

Read the full post »

Greek God Vs Sumerian Spirit

Posted by Tejus Parikh on January 30, 2007

Yesterday, TechCrunch ran a PR fluff piece on Adobe's new framework, Apollo. I felt somewhat compelled to comment on it because of my well-documented views on Flex. The gist of the article is that Apollo, Adobe's pre-alpha but soon-to-be-released web container, will change the Rich Internet Application space and make every early adopter extremely rich.

Read the full post »

OSX Ruby Rails Postgresql Quickstart

Posted by Tejus Parikh on February 01, 2007

My goal is pretty simple: I want to use my favorite database (PostgreSQL) with my favorite environment (Ruby on Rails) on my favorite OS (Mac OS X). Unfortunately, this wasn't as straightforward as I would have hoped. After sifting through various conflicting and inconclusive sources of information, including the occasionally cryptic Ruby wiki and the PostgreSQL docs, I decided I'd try to condense this information down for those of us who just want to write some code. I found the easiest and most trouble-free way to do this is to install MacPorts. Their instructions are good and make sense, so there is no need to elaborate here; just make sure you have Xcode installed. With that out of the way, to install PostgreSQL type:


sudo port install postgresql82 postgresql82-server

Along with installing Postgres, it'll print a bunch of instructions on how to run Postgres as a service and create a new database. Unlike normal Unix installs, the default Postgres user is postgres82. The next step is to update Ruby, since the Ruby that ships with Tiger is broken. The simplest way is to use ports again:

sudo port install ruby rb-rubygems

You might need to add /opt/local/bin to your PATH environment variable in order to use your updated Ruby instead of the system one. Next, install Rails with:

sudo gem install rails -y

Now for the final and most complicated step: installing a Postgres Ruby driver. There are two available drivers, a native one and a pure-Ruby implementation. For performance reasons, the native driver is preferred; however, it is a little unintuitive to install. You need to tell the build where MacPorts put the Postgres headers and libraries by setting the environment variable shown in the command below (these examples assume the bash shell); if you miss that step, you'll receive numerous errors from gcc. The final step is (as root):

POSTGRES_INCLUDE=/opt/local/include/postgresql82 gem install postgres -- --with-pgsql-lib-dir=/opt/local/lib/postgresql82

At this point, you should be ready to start developing your Rails apps against Postgres.
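To actually point a Rails app at the new database, config/database.yml needs a postgresql adapter entry. This is a hypothetical sketch: the database name and password are placeholders, and the username matches the postgres82 default user mentioned above:

```yaml
# config/database.yml (sketch; names are placeholders)
development:
  adapter: postgresql
  database: myapp_development
  username: postgres82
  password: ""
  host: localhost
```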

Read the full post »

Sometimes the answer is just too obvious

Posted by Tejus Parikh on February 01, 2007

I knew that my Fujitsu was slow, but since I got it back, it has seemed excessively so. I initially attributed this to being used to beefier computers; after all, the other machines I use regularly are dual-cores with 2 gigs of RAM. I was okay for a while, but it just started getting to me.

Read the full post »

Outage info

Posted by Tejus Parikh on February 02, 2007

At some point late last night, the Cisco router that we use to handle our phones and data entered a really funky state and became unresponsive. This, of course, led to vijedi.net and choicesquilts.com being down. Of course, the Choicesquilts IT staff, aka my wife, was out of the office all day, and the problem didn't get fixed until a few hours ago. IT catastrophes always seem to happen when there's nobody around to fix them.

Read the full post »

10920217044

Posted by Tejus Parikh on February 03, 2007

I was originally going to leave this as a comment on Jeff's What's up Atlanta? post, but my thoughts aren't really clear on the subject, and I felt it was a little impolite to ramble on somebody else's blog.

Read the full post »

Super Bowl Prediction

Posted by Tejus Parikh on February 04, 2007

41-2 Colts. The Bears' only score will come from an odd interception with a fumbled return a shade before halftime. Things are going to go so badly that Lovie Smith's halftime adjustments will include putting Kyle Orton in to provide a spark. Unfortunately, the game will be nothing compared to the post-game aftermath, where we will learn in the press conference that the McCaskeys have been at it again: not only did they give a 10-year contract extension to Rex Grossman, but they've decided to let Lovie become Dallas's new head coach, thereby ensuring that the Bears will have another two decades of mediocrity.

Read the full post »

Linux Woes

Posted by Tejus Parikh on February 07, 2007

Sometimes I feel like I'm in an abusive relationship. I'm with someone who keeps beating me up, keeps giving me headaches, but for some reason, I just can't seem to leave it behind. That someone is Linux. After an update this weekend, the fonts in Opera  a l l   s t a r t e d  t o  l o o k  l i k e   t h i s. Since Opera is 90% of what I do on my Linux box, this was simply unacceptable. A lot of googling, some tweaking, much cursing, and a good bit of "this would never happen on a Mac" later, it still looked the same. Somehow, I had completely screwed up my fonts, and I had no idea how to fix them.

Read the full post »

Our Houses have Basements

Posted by Tejus Parikh on February 12, 2007

At SoCon 07 it became pretty clear that Atlanta doesn't have a close-knit startup community, free-spending VCs, or enough crazy people. Unfortunately, by the time we finished talking about what we don't have, we didn't have time to talk about what we do have: basements.

Read the full post »

Atlanta actually is hard to get around

Posted by Tejus Parikh on February 21, 2007

There's a lot of stuff in the news about how our current societal structure, especially in the US, is, simply put, bad for us. The distances between where we work, eat, live, and shop put undue strain on our relationships with our families and on our bodies. Numerous studies have shown a correlation between your weight and how far you have to commute. Clearly, spending 1-2 hours a day sitting in traffic eats valuable time that could be spent doing better things.

Read the full post »

Lullaby

Posted by Tejus Parikh on February 21, 2007

I've actually started working on my little web-based MP3 streamer project again. Some people will probably claim that iTunes does everything I'm trying to do anyway, but I'd like to point out that iTunes doesn't run on any hand-held internet devices or on the Wii. Of course, it's possible that development will grind to a halt again, since it has to be squeezed in between all the other stuff that's going on. Hopefully, now that I've picked a technology and gotten seriously started, I'll stay motivated and actually build something useful.

Read the full post »

Lullaby Progressing

Posted by Tejus Parikh on March 04, 2007

I've finally started getting my Lullaby project moving forward. The fact is, since I moved my CDs to MP3s, I haven't had a solid way to control what's playing on my music system without walking over to the Mac in the other room. Hopefully, Lullaby will solve that problem.

Read the full post »

Junits for Log Messages

Posted by Tejus Parikh on March 20, 2007

I'm slowly becoming a convert to the idea that unit testing is actually a good thing, especially when you're building new features. After all, do I really want to click around on a bunch of different screens trying to test things? Hell no. I'm a developer and not a QA guy for a reason.

Something I've decided is not a bad thing to test is verifying that error messages are logged on error conditions. Previously, I've been a fan of bubbling errors up to the user level so that they can file meaningful bug reports. Unfortunately, end users don't seem to be as fond of stack traces as developer types. Therefore, I've started moving toward doing something sorta right in an error condition and logging that something went horribly wrong. In that case, it's probably a good idea to make sure the system is actually going to tell you that tidbit of information. Thankfully, this is extremely easy to do with log4j and JUnit.

The first step is to follow the log4j instructions on how to get logging working in your system. In my case, I'm running integration tests in a Spring context, so the logging plumbing was already up and running. The next step is building a mock appender object that extends org.apache.log4j.AppenderSkeleton. You need to override append, close, and requiresLayout. My append method simply stores the last item logged in an instance variable; this could be as complicated as needed. I've also added a few more functions for management. The final class looks like:


public class TestAppender extends AppenderSkeleton {

    private LoggingEvent lastEvent;

    @Override
    protected void append(LoggingEvent event) {
        lastEvent = event;
    }

    @Override
    public void close() {
    }

    @Override
    public boolean requiresLayout() {
        return false;
    }

    public LoggingEvent getLastEvent() {
        return lastEvent;
    }

    public void clearAppender() {
        lastEvent = null;
    }
}

Next, you need to configure log4j to use this appender in addition to or instead of whatever it’s currently using. If you have a working log4j.properties file, you’ll need to add something like the following:

log4j.appender.test=net.vijedi.logging.test.TestAppender

# Output to stdout and the file with WARN being the default
log4j.rootLogger=warn, stdout, test

This code snippet assumes that "stdout" is your default appender. Finally, it comes time to put this into action. I found the easiest way to do this is to build an abstract test class that all my other tests extend. Let's take a look at the relevant bits:

    public void setUp() throws Exception {
        ....
        ((TestAppender) Logger.getRootLogger()
                .getAppender("test")).clearAppender();
    }

    ....

    protected void testLogMessageAbsence(Level level, Class clazz, String message) {
        LoggingEvent event = getLastEventFromLogger();
        if(null != event) {
            assertFalse(
                    clazz.getCanonicalName().equals(
                         event.getLocationInformation().getClassName() ) &&
                         level.equals(event.getLevel()) &&
                         ((String) event.getMessage()).contains(message)
            );
        }
    }

    protected void testLogMessage(Level level, Class clazz, String message) {
        LoggingEvent event = getLastEventFromLogger();
        assertNotNull(event);
        assertEquals(clazz.getCanonicalName(),
               event.getLocationInformation().getClassName());
        assertEquals(level, event.getLevel());
        assertTrue(((String) event.getMessage()).contains(message));
    }

    private LoggingEvent getLastEventFromLogger() {
        TestAppender mockAppender = (TestAppender)
                Logger.getRootLogger().getAppender("test");
        return mockAppender.getLastEvent();
    }

setUp gets the "test" appender from the root logger and clears its instance variable, which ensures that previous tests don't affect the current one. Note that the name used here is the same as the one specified in the log4j.properties file. getLastEventFromLogger retrieves the last logged event from the appender, and testLogMessage and testLogMessageAbsence check whether the last log message matches what was or was not expected.
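The last-event-capture idea above is easy to try outside a Spring/log4j setup. As a self-contained illustration of the same pattern (not the post's actual log4j code), here it is with nothing but the JDK's java.util.logging, where a custom Handler plays the role of the mock appender; the class and message names are made up for the demo:

```java
import java.util.logging.Handler;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// JDK equivalent of the TestAppender: remember the last record published.
class LastRecordHandler extends Handler {
    private LogRecord last;

    @Override public void publish(LogRecord record) { last = record; }
    @Override public void flush() { }
    @Override public void close() { }

    public LogRecord getLast() { return last; }
    public void clear() { last = null; }
}

public class LogCaptureDemo {
    public static void main(String[] args) {
        Logger logger = Logger.getLogger("demo");
        logger.setUseParentHandlers(false); // keep the demo quiet on the console

        LastRecordHandler handler = new LastRecordHandler();
        logger.addHandler(handler);

        // The "error condition" we want the system to tell us about.
        logger.warning("disk is full");

        // The test-side check: did the expected message get logged?
        LogRecord last = handler.getLast();
        if (last == null || !last.getMessage().contains("disk")) {
            throw new AssertionError("expected a warning about the disk");
        }
        System.out.println("captured: " + last.getMessage());
    }
}
```

The structure mirrors the log4j version exactly: publish() stands in for append(), getLast() for getLastEvent(), and clear() for clearAppender().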

Read the full post »

The worst time of year

Posted by Tejus Parikh on March 30, 2007

Pollen season in Atlanta. It sucks. No one can build a successful startup here because they are too busy sneezing.

Read the full post »

Changing Network Providers Never Goes Smoothly

Posted by Tejus Parikh on April 04, 2007

We recently switched our bandwidth provider from XO to FDN Communications. The complete switchover was supposed to be a few weeks away still, but of course something went wrong, and we lost connectivity early Monday afternoon. After a scramble to get wires moved and DNS entries updated, we still couldn't connect to any of our servers, by IP or name, even after DNS propagated.

Read the full post »

Happy Hacking Keyboard

Posted by Tejus Parikh on April 04, 2007

I've gone ahead and done the unthinkable: installing only non-Apple peripherals on my Mac mini. The crazy idea started a few weeks ago when I noticed that after extended periods at my desktop, my right elbow started to hurt a little bit. However, when using the Macbook at work, I never had that problem. At both locations I have equally un-ergonomic desks and use really poor posture. The only real difference is the size of the keyboard. The full-sized Mac keyboard is w i d e. The mouse was just too far off to the right.

Read the full post »

Hotel Dusk

Posted by Tejus Parikh on April 04, 2007

If you're a fan of the adventure game genre and mystery novels, I'd recommend checking out Hotel Dusk. Billed as an interactive mystery novel, it plays like a Dashiell Hammett novel reads, and looks like a very well drawn graphic novel.

Read the full post »

New Wii Browser

Posted by Tejus Parikh on April 11, 2007

It's finally here: the final version of Opera for the Wii. I just grabbed it, and it's a solid improvement over the trial version. I'm not too sure about stability yet, since I haven't used it much. The best new feature is the option to hide the menu bar by pressing 1. The browser feels a little faster as well. Another little tweak: holding down B to scroll now puts a little vector arrow on the screen that shows how and where you're scrolling. I think they've added single-column mode (it might have been there before), which makes extremely wide pages fit within the width of your TV. In other words, no more left-and-right scrolling when attempting to read Slashdot comments.

Read the full post »

Collaboa Issue Tracker

Posted by Tejus Parikh on April 29, 2007

A little while back, I described my search for an issue-tracking tool to use on my personal projects. I think I've finally found one that I like in Collaboa. Collaboa is a simple, lightweight, and relatively easy-to-set-up issue tracker and Subversion repository viewer written with Ruby on Rails.

Read the full post »

Test-driven Programming

Posted by Tejus Parikh on May 15, 2007

I'm definitely one of those programmers who believes that a programmer's true calling is to create bugs. After all, if we don't create them, we'll have nothing to fix, and fixing a lot of issues looks really good when performance reviews come around. However, writing bugs comes with the added hassles of irate customers and frustrated QA people. Those two normally don't look so good when performance reviews come around. But committing buggy code saves you from the annoyance of making a change, compiling, restarting JBoss, and clicking through a bunch of screens just to discover that you wrote 'LIKE' instead of 'ILIKE' in your query, then repeating ad infinitum. On one hand you have irate customers; on the other, mind-numbing clicking. Either way, you're likely to lose your mind.

Read the full post »

OpenSUSE NFS Configuration Tutorial

Posted by Tejus Parikh on May 16, 2007

In a heterogeneous OS environment, the network file system of choice is clearly SMB/CIFS, mostly because heterogeneous implies the existence of Windows machines. Lately, though, I've been getting a little annoyed with some of the quirks of a file system from the single-user Windows world, and I don't have any Windows machines left anyway. Therefore, when it came time to share one resource with multiple machines, I decided to give NFS a try.

Read the full post »

Installing Freevo on OpenSUSE

Posted by Tejus Parikh on May 22, 2007

In my last post, I detailed the process of getting Freevo installed on Ubuntu. It actually wasn't too bad, unless you wanted your multimedia PC to respond to an IR remote. This post is a quick overview of how to install Freevo on an OpenSUSE box. The details are once again left out, since each step is much better explained by the pages found in a Google search.

Read the full post »

Installing Freevo On Ubuntu

Posted by Tejus Parikh on May 22, 2007

We were pretty impressed by the demos of LinuxMCE, so we decided to give it a try (more details on that when my wife decides to stop being lazy). The end result was pretty disastrous, and we decided to switch back to Freevo. Since I keep hearing good things about Ubuntu, I decided to use it as the base of my Freevo install. This is by no means a detailed walk-through; it just describes the steps I took to try to get it working.

Read the full post »

Business Logic Layer Security and Transactions with Annotations

Posted by Tejus Parikh on May 30, 2007

Not too long ago, my place of employment switched to the Acegi framework for security. The major reasons for the switch were Acegi's large user community and its status as the official "Security System for Spring." One of the loose ends we didn't tie up as part of the initial migration was method-level security declared with Java 1.5 annotations. Our old security system had it, but it was practically unused (since the web layer performed the same checks), so we just chucked it. Unfortunately, adding the functionality back was not as easy as we would have liked. This post is an overview of how we got Acegi to use Spring AOP for method-level security while still playing nice with our transactional support.

We already had transactional support configured before we started implementing method-level security. Like our security, it was configured with annotations. Our pattern is to annotate our business-logic interfaces; the corresponding implementation remains unmarked. This is how a business interface looked before security:


@Transactional
public interface UserManager {

    User createUser(String login, String password);

    @Transactional(readOnly=true)
    User getUser(String userId);

    User deleteUser(String userId);
}

Spring's AOP support reads the @Transactional annotation on these interfaces and intercepts method invocations to wrap them in a transaction. Getting Spring to perform this dark magic is as simple as adding a few lines to your Spring context XML file. First, you need to tell Spring to do its AOP magic:

    <bean class="org.springframework.aop.framework.autoproxy.DefaultAdvisorAutoProxyCreator"/>

This bean definition tells Spring to check for potential AOP advice (i.e., method interceptors) to run on a class's methods at bean-creation time. Any time your bean is injected into another class, you will get an AOP-aware, proxied version instead of your vanilla concrete implementation. To add transactional "advice" to the proxied version, the following lines need to go into your Spring context configuration:

    <bean class="org.springframework.transaction.interceptor.TransactionAttributeSourceAdvisor">
        <property name="transactionInterceptor" ref="txInterceptor"/>
    </bean>

    <bean id="txInterceptor" class="org.springframework.transaction.interceptor.TransactionInterceptor">
        <property name="transactionManager" ref="transactionManager"/>
        <property name="transactionAttributeSource">
            <bean class="org.springframework.transaction.annotation.AnnotationTransactionAttributeSource"/>
        </property>
    </bean>

    <bean id="transactionManager"
        class="org.springframework.orm.hibernate3.HibernateTransactionManager">
        <property name="sessionFactory" ref="sessionFactory" />
    </bean>
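The proxying that the DefaultAdvisorAutoProxyCreator sets up is, for interface-backed beans like these, built on JDK dynamic proxies. As a self-contained illustration of that mechanism (not Spring's actual code; the Greeter interface and the println "advice" are made up for the demo), here is how an interceptor wraps every call through an interface:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface Greeter {
    String greet(String name);
}

public class ProxyDemo {
    public static void main(String[] args) {
        // The "concrete implementation" that would normally be injected.
        final Greeter target = new Greeter() {
            public String greet(String name) { return "hello " + name; }
        };

        // "Advice" that runs around every method call, the way a
        // transaction or security interceptor wraps business methods.
        InvocationHandler advice = new InvocationHandler() {
            public Object invoke(Object proxy, Method method, Object[] methodArgs)
                    throws Throwable {
                System.out.println("before " + method.getName());
                Object result = method.invoke(target, methodArgs);
                System.out.println("after " + method.getName());
                return result;
            }
        };

        // Callers receive this proxy instead of the target itself.
        Greeter proxied = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class[] { Greeter.class },
                advice);

        System.out.println(proxied.greet("world")); // prints before/after lines, then "hello world"
    }
}
```

This is also why the ordering problem described at the end of this post bites: a bean created before the advisors exist is handed out un-proxied, exactly like using target directly here.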

Since we use Hibernate, we use the HibernateTransactionManager; if you don't, you have a smorgasbord to choose from. This is what we had before we tried to add method-level security.

To add method-level security, the first step is to realize that you can only have one AOP proxy creator, so blindly following the tutorials on the Spring website won't help too much. I'm also assuming that you have a working Acegi setup that does authentication at the URL level and that you have an authentication-manager bean configured; configuring that aspect of Acegi is outside the scope of this post. We decided to use role-based authorization for our methods, so we configured a RoleVoter and an AccessDecisionManager to make decisions based on the user's GrantedAuthorities. How users get granted authorities is covered in the Acegi documentation.

    <bean id='accessDecisionManager' class='org.acegisecurity.vote.AffirmativeBased'>
        <property name='decisionVoters'>
            <list><ref bean='roleVoter'/></list>
        </property>
    </bean>

    <bean id='roleVoter' class='org.acegisecurity.vote.RoleVoter'>
        <property name="rolePrefix" value="PERMISSION_" />
    </bean>

The standard rolePrefix is 'ROLE_'; we used 'PERMISSION_' since it maps to our legacy model more appropriately. Next, we need to configure something to read the security annotations:

    <bean id="objectDefinitionSource" class="org.acegisecurity.intercept.method.MethodDefinitionAttributes">
      <property name="attributes">
        <bean class="org.acegisecurity.annotation.SecurityAnnotationAttributes" />
      </property>
    </bean>

Then we put it all together by creating a security interceptor and adding it to the chain of advisors that the proxy will call before the method invocation.

    <bean class="org.acegisecurity.intercept.method.aopalliance.MethodDefinitionSourceAdvisor">
        <constructor-arg>
            <ref bean="methodSecurityInterceptor" />
        </constructor-arg>
    </bean>

    <bean id="methodSecurityInterceptor" class="org.acegisecurity.intercept.method.aopalliance.MethodSecurityInterceptor">
        <property name="validateConfigAttributes"><value>true</value></property>
        <property name="authenticationManager"><ref bean="authenticationManager"/></property>
        <property name="accessDecisionManager"><ref bean="accessDecisionManager"/></property>
        <property name="objectDefinitionSource"><ref bean="objectDefinitionSource"/></property>
    </bean>

Finally, we can annotate our interfaces. If we want to ensure that only users with the permission to create a user (PERMISSION_UserCreate) can do so, we need to add @Secured({"PERMISSION_UserCreate"}) to the appropriate method on the interface. Our final interface looks like:

@Transactional
public interface UserManager {

    @Secured({"PERMISSION_UserCreate"})
    User createUser(String login, String password);

    @Transactional(readOnly=true)
    User getUser(String userId);

    User deleteUser(String userId);
}

Voila! Users without the correct permission will no longer be allowed to create new users. Unfortunately, things are never that simple, and we ran into one pretty major issue: a good chunk of the beans in our business-logic layer didn't get proxied at all, so we got neither transactions nor security on them. Adding to the confusion, the business-logic calls failed in a very odd way. Whenever we did an update or create on a persisted model object, we got:

org.springframework.dao.InvalidDataAccessApiUsageException: Write operations are not allowed in read-only mode (FlushMode.NEVER/MANUAL): Turn your Session into FlushMode.COMMIT/AUTO or remove 'readOnly' marker from transaction definition.

This, of course, was totally unhelpful, since we didn't have a readOnly marker on our transaction definition, and we were pretty sure we were setting the FlushMode in our transaction interceptor. Removing the security annotations made the problem go away again. Eventually, we realized that instead of a proxy with transaction advice and security advice, we were receiving the plain configured implementation of the interface: our interceptors hadn't finished initializing by the time some beans were created. The culprit turned out to be the order of our Spring context files. The structure was:

   <!--Transaction Stuff -->

   <!-- a lot of bean defs -->

   <!-- Security stuff -->

   <!-- Some more beans -->

Changing that to:

   <!-- Security stuff -->

   <!--Transaction Stuff -->

   <!-- a lot of bean defs -->

   <!-- Some more beans -->

made the problem go away for good. If you’re having weird issues with transactional annotations and security annotations, this would probably be a good place to look.

Read the full post »

Vijedi is a disaster

Posted by Tejus Parikh on June 12, 2007

It might not look it on the surface, but vijedi.net is a disaster. It’s come a long way from the days it was hosted on an abandoned Dell with 48 MB of RAM and a PII-233 processor. I’ve got a blog, a photo gallery, virtual hosts, source repositories, and, until 2 days ago, email services all running on this box. Which is fine, unless you only have 18GB of storage. Right now, vijedi.net needs 70GB, most of which resides on an external USB drive sitting on top of the server box. Adding to the infrastructure WTF is that I wanted to do more with the box, and Patrick Volkerding doesn’t publish Slackware releases often enough. Therefore, I have a bunch of core web libraries compiled from source, which makes them very hard to upgrade. Also, since I ran out of disk space, they’re all in very strange locations, which makes it hard, well, to do anything at all really.

Read the full post »

Zimbra Collaboration Suite

Posted by Tejus Parikh on June 12, 2007

I finally bit the bullet, bought another box, and started running mail functions for vijedi.net through Zimbra. Our reasons for this were pretty standard. Nobody wanted to use the old webclient because it wasn’t responsive and felt very clunky. It was a webmail client that felt like a webmail client, which made the client and its shared functions almost useless, since nobody used it except in case of emergency.

Read the full post »

Return of the Jedi

Posted by Tejus Parikh on July 19, 2007

Ok, so sorry for the bad pun. It only took a month longer than I anticipated, but Vijedi is back. The most noticeable change is the new skins, but the major work was behind the scenes. The box now has one sufficiently large hard drive that’s actually inside the server case, a standard directory structure, and it’s running OpenSUSE with all sorts of goodies like LVM. The goal of having a server that isn’t a nightmare to maintain appears to be accomplished. Unfortunately, some stuff still isn’t working, such as the virtual hosts and the Subversion repositories. That, and there’s probably a lot of old cruft hanging around that I neglected to get rid of.

Read the full post »

Eclipse and Subversion on Mac OS X

Posted by Tejus Parikh on August 09, 2007

I don’t think there’s anything more frustrating for a developer than having your development tools slow you down instead of helping you get your work done. I’m still a fan of Eclipse, mostly because I’ve used it so much that everything else feels foreign. Trying to develop Java code without it is a lot like trying to find your way out of the middle of a dark forest with neither map nor flashlight.

Read the full post »

There is a Spoon: Integrating ETL into your application

Posted by Tejus Parikh on August 30, 2007

A not-too-uncommon occurrence when building a data-driven application is the realization that you need an honest-to-goodness data warehouse. Having one-off statistical tables scattered throughout your schema just doesn’t cut it after a while. Thankfully, there’s plenty of information on the web on how to build a data warehouse. Once you create your denormalized, summarized, and dimensioned schema, you face the challenge of getting the data out of your transaction-processing database and into the warehouse. That’s the domain of ETL (Extract, Transform, and Load) tools. This post is about how to get Pentaho Data Integration (formerly known as Kettle) embedded within your application. Since I like ‘Kettle’ much better than PDI, I’m going to use Kettle throughout the post.

Step 0: Create your transformation

The transformation will be very specific to the tables you are trying to transform. However, there are a few general guidelines that will make the process easier. Unless your production environment is your workstation, it’s a good practice to use Kettle variables for all the database fields. To make the Spoon GUI work, you will need to add a kettle.properties file to your userhome/.kettle/ directory. I’ll show you how to set the properties within your application a little further down.

Step 1: Getting your project environment configured

The first step is to identify which jars you’ll need to add to your project. Kettle depends on a lot of the usual Java libraries, so the new stuff you need to add is probably minimal. From the lib directory of the Kettle distribution, you’ll need kettle.jar. Given that I work on a server application, I was somewhat chagrined that I had to import common.jar from the libswt directory. You’ll probably need to grab a few other things from libext before the Kettle stuff will compile. What specifically you need will depend on what you already have; you won’t need everything, since you are only going to embed a small portion of the total Kettle toolkit.

Step 2: Create the skeleton

There are three steps to executing a Kettle transformation: initializing the environment, loading the transformation, and executing it. Therefore, I created a method that currently calls stubs to handle each of these steps as well as repackage any exceptions that are thrown.


    private void executeKettleTransform(String transformLocation) throws WarehouseDataEtlException {

        try {

            initializeEnvironment();

            TransMeta meta = loadTransform(transformLocation);

            executeTransform(meta);

        } catch(Throwable t) {

            throw new WarehouseDataEtlException("Unable to run transform at " + transformLocation, t);

        }

    }

Step 3: Implement the initialization functions

The initialization method needs to do a few things. First, it has to initialize the transformation’s environment, followed by loading the steps, then setting all the variables for the JDBC properties.

    private void initializeEnvironment() {

        EnvUtil.environmentInit();

        StepLoader steploader = StepLoader.getInstance();

        if (!steploader.read()) {

            throw new IllegalStateException("Spoon broke for some reason");

        }

        

        KettleVariables kettleVariables =  KettleVariables.getInstance();

        kettleVariables.setVariable("jdbc.server.name", jdbcServerName);

        kettleVariables.setVariable("jdbc.olap.server.name", jdbcOlapServerName);

        kettleVariables.setVariable("jdbc.dbname", jdbcDBName);

        kettleVariables.setVariable("jdbc.olap.dbname", jdbcOlapDBName);

        kettleVariables.setVariable("jdbc.username", jdbcUsername);

        kettleVariables.setVariable("jdbc.password", jdbcPassword);

    }
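For reference, the kettle.properties file from step 0 maps those same variable names to real values so the transformation can also run inside Spoon. A minimal sketch — every value below is a placeholder:

```properties
# ~/.kettle/kettle.properties -- placeholder values, matching the
# variable names set in initializeEnvironment() above
jdbc.server.name=localhost
jdbc.olap.server.name=localhost
jdbc.dbname=appdb
jdbc.olap.dbname=warehouse
jdbc.username=etl_user
jdbc.password=secret
```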

One of the more annoying things about working with the Kettle API is its use of returning false for failures instead of throwing exceptions; hence the boolean check around steploader.read(). Also note that this is where we set the variables for our application runtime. This approach lets the developer run the transformation in the application as well as from the tools provided with the Kettle project, which can be extremely helpful while debugging.

Step 4: Load the transformation

A Kettle transformation is just an XML file, which provides some flexibility in how you load it within your server app. I chose to put the transformation on the classpath, then use the class loader to get access to it. Since the 2.5.1 release of Kettle does not support loading a transformation directly from a stream, you have to parse the XML and then feed it to the transformation metadata object.

    private TransMeta loadTransform(String transformName) throws Exception {

        String transformLocation = getTransformPath(transformName);

        TransMeta transMeta = null;

        InputStream transformStream = this.getClass().getResourceAsStream(transformLocation);

        

        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();

        DocumentBuilder builder = factory.newDocumentBuilder();

        Document doc = builder.parse(transformStream);



        transMeta = new TransMeta(doc.getFirstChild());

        return transMeta;

    }

A crucial realization is that you need to pass in the first child of your XML document, and not the root of the document itself. Otherwise, Kettle will throw exceptions about being unable to find anything to load. The call to getTransformPath is a call to an internal method that maps a string to a classpath location.

Step 5: Execute the Spoon

Executing the transformation is just a matter of getting an instance of the transformation from the transformation metadata object, then executing it. If you are logging transformation executions to a table, you need to wait for the transformation to finish, then manually call end processing. Failure to do these steps means you will never see a row with the status ‘end’ in the log table. This is especially bad if you are using transformation times in your where clauses for data-input steps.

    private void executeTransform(TransMeta transMeta) throws Exception {

        Trans trans = new Trans(LogWriter.getInstance(), transMeta);

        if(!trans.execute(transMeta.getArguments())) {

            throw new RuntimeException("Transformation failed");

        }

        

        trans.waitUntilFinished();

        trans.endProcessing("end");

    }

That’s all there is to it, and you can easily test it by wrapping a JUnit test around the execute method call. In the next few weeks, I’ll post something about doing near-real-time warehousing using Quartz, Spring, and the code that I’ve shown here.

Read the full post »

rsync and SVN: A poor man's distributed version control

Posted by Tejus Parikh on September 07, 2007

I probably have too many computers. Often, this leads me to need some version of some source code on one box that only exists on another. I used to check into a temporary branch, then check out from that branch, but that often leads to unpleasant merge problems. Another thing I’ve tried is just using scp to transfer files, but that can be time-consuming with a slow connection or a lot of source. I can’t believe it took me this long to think of it, but the most sensible solution is to use rsync.

Read the full post »

Zimbra aquired by Yahoo!

Posted by Tejus Parikh on September 17, 2007

Obviously a company that’s taken in $30.5 million needs an exit, and for the Zimbra group, $350 million from Yahoo! is not a bad chunk of change. However, as a user of Zimbra, I can’t help but feel a little apprehensive about a great open-source product going under the wing of a company without a lot of OSS cred. More troubling is the phrasing of the post on the Zimbra Blog stating that “We are committed to keeping the current source open and available for use.” No mention is made of the continued existence of the open-source edition. Nor is open source mentioned in Yahoo!’s press release or on their blog.

Read the full post »

Xine screw-up

Posted by Tejus Parikh on October 02, 2007

For some bizarre and unknown reason, my Xine config on the multimedia PC just completely vanished. I blame the crappier power grid in the city; it’s nowhere near as reliable as Alpharetta’s. Anyway, this event resulted in a multi-night quest to get audio working again with digital out. Like most things of this nature, it wasn’t a lot of configuration, but tons of googling to find the magic settings. The first bit wasn’t too hard. Modifying the Xine config file in /home/user/.xine/config to add:


audio.output.speaker_arrangement:Pass Through

audio.device.alsa_default_device:plug:spdif:0

restored audio output for both music files (mp3, AAC, etc.) and movies with an AC-3 audio track. The remaining bummer was that audio files stuttered like crazy. Launching xine from the command line with xine --verbose=1 audio-file.mp3 displayed a lot of this on the console:

fixing sound card drift by -1998 pts

fixing sound card drift by -2545 pts

fixing sound card drift by -2999 pts

fixing sound card drift by -3306 pts

fixing sound card drift by -3538 pts

fixing sound card drift by -3709 pts

According to the official Xine FAQ, my sound card (an on-board VIA 8237) mustn’t be very good, since it’s not keeping track of sampling frequencies very well. While I don’t recall ever having this problem at my previous house, I did always think that audio was playing back faster than it should. Either way, adding

audio.synchronization.force_rate:48000

resulted in a clean audio stream that’s at the same speed as my iPod.

Read the full post »

The Quartz Kettle

Posted by Tejus Parikh on October 08, 2007

A few posts ago, I talked about how to use Kettle to transform data from your normalized structures into something useful for data warehousing. In order to have a near-real-time data warehouse, we need to run our jobs frequently. This could have the unintended side-effect of a small hiccup causing a snowballing slowdown. Therefore, ensuring that two jobs do not run concurrently is a major concern. You could write a shell script to launch pan, record the pid, then check for the pid if cron starts the job again before the previous instance has had the opportunity to complete. Or you could just use Quartz. If you’re familiar with Quartz, setting up a job for a Kettle transform is trivial. The key thing is to implement StatefulJob. Persisting the state of the job across invocations has the side-effect of forcing the job to complete before it can be started again, which means that the downward spiral of resource consumption is not possible. Another safeguard is logging when the time to execute a transformation is longer than the time allocated. This can be calculated from properties of the JobExecutionContext (context).


long start = System.currentTimeMillis();



runJob();



long runtime = System.currentTimeMillis() - start;

long startTime = context.getFireTime().getTime();

long nextFireTime = context.getNextFireTime().getTime();



if(runtime > (nextFireTime - startTime)) {

  //log error message

}

The call context.getFireTime() returns the time the job started (when the trigger was fired). getNextFireTime() returns the next time the job is supposed to start. If the gap between these two points is shorter than the time it took to run, well, then you’ve got a problem. Thankfully, I haven’t run into this yet, so no ideas on how to fix it!
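Putting it together, the job class is little more than a marker-interface implementation. A sketch against the Quartz 1.x style of API — the class name is made up, and the Quartz interfaces are re-declared as minimal stand-ins here so the snippet is self-contained (in a real app you’d import org.quartz.* instead):

```java
import java.util.Date;

// Minimal stand-ins for the Quartz 1.x types used below, so the sketch
// compiles on its own. In the real application these come from org.quartz.*.
interface JobExecutionContext {
    Date getFireTime();
    Date getNextFireTime();
}

interface Job {
    void execute(JobExecutionContext context) throws Exception;
}

// Marker interface: the scheduler will not run two instances concurrently.
interface StatefulJob extends Job {}

// Hypothetical job class wrapping the transform runner from the earlier post.
class KettleTransformJob implements StatefulJob {

    public void execute(JobExecutionContext context) throws Exception {
        long start = System.currentTimeMillis();
        runTransform(); // would call executeKettleTransform(...) in practice
        long runtime = System.currentTimeMillis() - start;

        if (overran(runtime,
                    context.getFireTime().getTime(),
                    context.getNextFireTime().getTime())) {
            System.err.println("Transform overran its schedule window");
        }
    }

    // True when the run took longer than the gap between this fire time and
    // the next one -- the snowball condition described in the post.
    static boolean overran(long runtimeMs, long fireTime, long nextFireTime) {
        return runtimeMs > (nextFireTime - fireTime);
    }

    protected void runTransform() throws Exception {
        // invoke the Kettle transform here
    }
}
```

Because the class implements StatefulJob rather than plain Job, the scheduler waits for one execution to finish before firing the next, which is exactly the concurrency guard we want.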

Read the full post »

New House!

Posted by Tejus Parikh on October 11, 2007

Almost two months ago, Sonali and I took the big step of buying our own place. So far, we have to say that we absolutely love it! We’re in the city, near places with great food, and with things to do. The best part: our commutes are half as long. I’ve put some pictures up in the photo gallery. We’ll have a house warming party as soon as some of those empty rooms actually have things in them or we have more than a handful of seats. Whichever comes first.

Read the full post »

Wordpress comment response time

Posted by Tejus Parikh on October 16, 2007

Because spammers suck, and SpamAssassin is a product, not a person, comments only get posted after they go through moderation. Of course, that means I have to know when someone’s posted a comment. This used to work great, but when I switched to running mail on a separate machine, Wordpress stopped telling me that a comment had been posted. Until now.

Read the full post »

Spring like Validation in DWR

Posted by Tejus Parikh on October 19, 2007

One challenge of a hybrid AJAX and Spring MVC approach is consistent error handling. Since DWR does not have standard validation support, ad-hoc solutions tend to proliferate throughout the source code, leaving both the developer and the end-user confused. This is in contrast to Spring, which includes validation as part of its form-processing workflow. Our solution was to graft Spring-like validation on top of our DWR controllers. We’ve implemented this approach on DWR 2.0. Our methods, exposed via DWR, now look like this:


    @RemoteMethod

    @AjaxValidators(PersonalInfoValidator.class)

    public void updatePersonalInfo(PersonalInfo pi) {

Now let’s look at the magic that makes this possible. The first step is to add a filter in the allow section of the dwr.xml configuration file.

    <!-- package name assumed; the filter class itself is defined below -->
    <filter class="net.vijedi.ajax.validation.AjaxValidationFilter" />

This line allows you to intercept a call on a DWR method and process any annotations that you might have added to the method (such as @AjaxValidators). The next step is to implement an AjaxFilter. This class has to process the annotations, find the appropriate bean, then perform the validation.

public class AjaxValidationFilter implements AjaxFilter {



    private ApplicationContext applicationContext;



    public AjaxValidationFilter() {

        applicationContext = (WebApplicationContext) ThreadContext.get(ThreadLocalSpringCreator.BEAN_FACTORY_KEY);

    }



    public Object doFilter(Object obj, Method method, Object[] params, AjaxFilterChain chain) throws Exception {

        AjaxValidators ann = method.getAnnotation(AjaxValidators.class);

        boolean throwErrors = false;

        if(ann != null) {



            int lastParamToValidate = Math.min(params.length, ann.value().length);

            BindException[] errors = new BindException[lastParamToValidate];



            // Loop body reconstructed -- the original was mangled by HTML
            // escaping; this sketch validates each argument with its validator.
            for(int i = 0; i < lastParamToValidate; i++) {
                Validator validator = getValidator(ann.value()[i]);
                if(validator == null || params[i] == null) continue;

                errors[i] = new BindException(params[i], params[i].getClass().getName());
                ValidationUtils.invokeValidator(validator, params[i], errors[i]);
                if(errors[i].hasErrors()) {
                    throwErrors = true;
                }
            }

            if(throwErrors) {
                throw new AjaxValidationException(errors);
            }
        }
        return chain.doFilter(obj, method, params);
    }

    private Validator getValidator(Class<? extends Validator> validatorClass) throws Exception {

        if(validatorClass == null) return null;

        String[] possibleValidators = applicationContext.getBeanNamesForType(validatorClass);

        return (Validator) applicationContext.getBean(possibleValidators[0]);

    }

}

There are a few things of note. First, the value of the annotation is an array; therefore, you can have one validator per argument to your DWR method. Second, we need access to the context where the DWR beans are defined, so that we can find the correct instance of the validator. We use an instantiated bean so that we can inject Managers and other properties into the validator to validate business-layer requirements (such as searching the database for name conflicts).

The AjaxValidationException class is a normal runtime exception that contains an array of org.springframework.validation.Errors objects. It then internally translates this information into a list of name-value pairs for easy access in JavaScript. This is our approach, but there are other ways to do this. All errors will now be marshaled into a class that has the property javaClassName set to net.vijedi.ajax.validation.AjaxValidationException, which allows you to do whatever you need in your error handler. The simplest thing to do is update the contents of a placeholder div by looping through the errors array. However, if your exception can give you name-value pairs, then it’s entirely possible to highlight specific form fields.

Read the full post »

Old Friends, New Opportunity

Posted by Tejus Parikh on October 25, 2007

A little late (since the vast majority of readers of this blog are personally affected), but for historical purposes I’m blogging that I’m joining Appcelerator, Inc. (formerly Hakano) as a software developer. I’m extremely excited about this opportunity for a few reasons. One, Appcelerator is an open-source RIA platform that, of all things, uses internet technologies; you can actually run Appcelerator apps in a web browser, no fancy plugins required. Add to that, the team’s full of people I know and respect, which is a big plus. Oh, and did I mention that I’ll get paid to work on open-source stuff?

Read the full post »

Mozilla Prism

Posted by Tejus Parikh on November 05, 2007

So I rolled on over to my blog reader and came across Mark Finkle’s post about the release of Prism for something other than windows.

Read the full post »

WTF is an RIA?

Posted by Tejus Parikh on November 07, 2007

RIA is becoming a hot buzzword in the web-development world. RIA vendors promise a lot: RIAs are meant to be as easy to develop as web pages, but have all the functionality of a desktop app. However, when you really think about it, an RIA can pretty much be anything that uses the internet for communication and has a graphical widget that can be modified in some way. Everyone stuck at a mega-corp writing a Swing application that talks to something via SOAP can rejoice about being part of the modern wave of application development.

Read the full post »

Drug Laws Suck

Posted by Tejus Parikh on November 29, 2007

I spent the better part of today feeling miserable. I’ve got a head cold, and there is little less convenient than having a sneeze interrupt your train of thought. Of course, I went to the convenience store to get some medication. I bought the Sudafed, hopeful that I’d soon be on the road to feeling better. But of course not, because they only sell the fake stuff. I bought it anyway and took two doses to no avail.

Read the full post »

Ning for Local Communities

Posted by Tejus Parikh on December 29, 2007

When I discovered that my HOA did not have a website, I considered volunteering to create one. After all, building websites is pretty close to what I do for a living. Then I realized that even if there was a website, keeping it updated in a timely manner would still be problematic. Most websites for small organizations tend to be hopelessly out of date. So my thoughts shifted towards Wordpress or some CMS-type software. That still has its problems, though, since blogs aren’t fantastic for static content and giving people editor status might be a bit of a problem. I’d be doing this for a community, so I needed something community-driven. I needed a social network.

Read the full post »

Upgrading Zimbra

Posted by Tejus Parikh on December 30, 2007

It’s been a while since I installed Zimbra on my mail server, so I took the opportunity of some free time to perform the update. At first, I was a little apprehensive, since there wasn’t much documentation available. However, I forged ahead, downloaded the new version, and ran the install script. When it asked me to update, I said ‘Y’ and walked away from the computer. When I came back, I had an upgraded Zimbra. Apparently, there wasn’t much documentation because there just isn’t much to document.

Read the full post »

Slackware 12 + E17 = A lot of Bling on a Slow Machine

Posted by Tejus Parikh on December 31, 2007

OpenSUSE never was practical for my little Fujitsu P2120. Little things, like taking 10 minutes to load the software update screen, had started to get on my nerves. So I pulled Slackware 12 down, stuck it in the DVD drive, and gave it a whirl.

Read the full post »