Thursday, October 10, 2013

Bumper-Sticker Agile

Years ago I was impressed by an article Josh Bloch published entitled Bumper-Sticker API Design, full of "pithy maxims" for creating an API.  I was thinking about that approach as I reflected on my years of attempts at following agile methodologies in their various and sometimes grotesque forms.

There have been a lot of words spoken both in print and on the Internet about how to be more agile, and it can be hard to filter.  In the spirit of Bloch's post, I distilled my experiences down into my own hopefully-pithy maxims.  The ideas are not original to me, but this list captures what I think is important and maybe it will help you deliver just a little more value, faster.

Call it Bumper-Sticker Agile.

Excel at shipping software first.  Shipping software early and often is the first competency.  Master this before you focus on anything else.  Continuously deliver.

Don't start with the elimination of waste.  Don't require a homogeneous tech stack across all teams.  Don't worry about whether teams might be producing overlapping features.  Address efficiency after you have learned how to ship to your customers.

Work tiny.  Favor smaller iterations over longer ones.  Solicit early feedback and immediately apply it to the next cycle of development.

Treat iterations as experiments.  Don't be afraid of failure.  If your iterations are small, little will have been lost.  Learn from the result and improve.

If a customer can't see it, it provides the same value as if you didn't build it.  Iteration without delivery is just one step removed from big up-front design.  You'll get the information you need at the last possible moment rather than the earliest.

Respect the principle of the Last Responsible Moment.  Delay unnecessary decisions to avoid building unnecessary features.  Remember YAGNI.

Accept your ignorance.  Stop assuming you know what the best technology is, or what the market wants, or what your customer actually needs.  Instead deliver working code and validate assumptions with a real customer experience.

Optimize around the team.  Your tools and processes should be in place to support those who do the work, i.e., those typically on the leaf nodes of the org chart.  Optimizing around any form of centralized management, including project or product management, takes power away from the team.

Empower teams with cross-functionality.  An organization siloed by function (dev, test, ops, DBAs, product, PMO, etc.) will defeat collective code ownership.  Factions will be concerned with only their particular function.  Instead, group roles together on a single team and give it sole authority.

Be transparent.  Every member of the team should be able to know what everyone else is working on.  The team should have nothing to hide from stakeholders or each other.

Radiate the right information.  Make it easy to see what the team values, whether it's the status of tasks, test results, velocity, or whatever they choose.  Physical radiators are often more effective than electronic radiators.

Turn POCs into MVPs.  Proofs of Concept don't deliver value and often unintentionally end up in production.  Use real deliverables to test out new technology.

Make testing and refactoring first-class citizens.  One implies the other.  Do both in tandem to combat software entropy.

Agree on a definition of done.  All stakeholders should understand what done means.  Don't change the definition mid-iteration.  Automate it if possible.

Pair program rather than holding KT sessions.  "Knowledge transfer" done by Big Group Meeting has terrible retention and results in little sense of ownership.  Work with someone side-by-side if you want them to understand what you're teaching them.

Don't "hope" for good outcomes.  Make rational decisions based on real data.  Measure that which you do not understand, or about which you have insufficient data to reason.  Prefer taking action to lengthy deliberation.

Stand-ups are for the team.  Don't allow the meetings to become status updates to a single person.  The team should be talking to each other about a shared commitment.

Keep stand-ups short.  Stand up.  Don't allow one task or story to monopolize the discussion.  Don't bring laptops.  Don't go longer than 15 minutes.

Accept that being agile is hard.  Pair programming is hard.  TDD is hard.  Continuous improvement is hard.  As you transform into an agile organization, expect pain and be ready to stay committed despite failure.




Monday, September 23, 2013

SSO with CAS, Part 2: The Service Provider

In my last post, I explained how to set up an Identity Provider with CAS.  In this post, we'll configure our first Service Provider to authenticate against it.

1. Set up the Service Provider app

With our CAS server configured, we now need to configure our clients to use it.  These clients are called Service Providers (SPs) in SSO parlance, because they depend on the Identity Provider (IdP) to authenticate the end user before providing the service.  Although it is possible to run both the IdP and SPs from the same server, for the most accurate simulation of SSO the Service Providers should be installed on a completely different box.  For my example, I will assume we have a new machine with a fresh install of Tomcat.

For our first client (I'll call it cas-app1), I will just gate access to a simple Hello World app (just pretend it provides a valuable service).  Tomcat actually provides a Hello World app for us already in its tomcat7-examples package, so we'll just use that.  This examples package is included by default in the direct download from the Apache Tomcat website; on Ubuntu you can install it using a tool like apt-get.

(*Note for Ubuntu: apt-get puts the Tomcat webapps folder in /var/lib/tomcat7, but installs tomcat7-examples in /usr/share/tomcat7-examples)

2. Add CAS client JARs to Tomcat's default examples app

As with the CAS server, we'll need to download the CAS client source (latest as of this writing: 3.2.1) and build the binaries using Maven.  We will plug these into our SP Hello World app to force it to talk to the IdP.
  mvn -Dmaven.test.skip=true package install  

Once the build finishes, there are three subpackages we're interested in: cas-client-core, cas-client-integration-tomcat-common, cas-client-integration-tomcat-v7, located in the modules subdirectory.

We simply copy those jar files into the tomcat7-examples/WEB-INF/lib directory.  The CAS client also requires us to use Apache Commons Logging, so we download that library and drop it into tomcat7-examples/WEB-INF/lib as well.
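
For example, on the Ubuntu layout noted above (the version numbers and paths here are assumptions; adjust them to your build and install):

  # from the cas-client source root, after the Maven build
  cp modules/cas-client-core-3.2.1.jar \
     modules/cas-client-integration-tomcat-common-3.2.1.jar \
     modules/cas-client-integration-tomcat-v7-3.2.1.jar \
     /usr/share/tomcat7-examples/examples/WEB-INF/lib/
  cp commons-logging-1.1.1.jar /usr/share/tomcat7-examples/examples/WEB-INF/lib/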

3. Create a self-signed certificate, and turn on SSL in Tomcat.

This step is identical to steps #2 and #3 for the CAS server.  Again, this is required because CAS requires all parties to communicate using SSL.

4. Add CAS certificate to trusted keystore

In step #5 of the CAS server setup, we exported the certificate for our private key for use in our SPs.  But because this is a self-signed certificate the client SPs consider it untrusted, and will therefore not actually accept data signed with it.  When a browser encounters a self-signed cert it warns the user and then allows him to proceed at his own risk.  But our SSO data is interpreted server-side rather than in the browser, so we need to add it to a trusted store.

There are a couple of ways to do this.  The easiest is to simply add it to Java's own trusted CA file.  Again, we use keytool:
  keytool -import -file cas.crt -alias cas -keystore $JAVA_HOME/jre/lib/security/cacerts  

Answer "yes" to the question of whether the certificate should be trusted.  Note: if multiple versions of Java are installed, the certificate must be imported into the version Tomcat is running with.

If modifying the Java CA file is not an option, an alternative is to import it into its own keystore and tell Tomcat where to find it.
  keytool -import -file cas.crt -alias cas -keystore /path/to/special/keystore  

Then modify the connector as defined in the Tomcat connector documentation:
   <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"  
        maxThreads="150" scheme="https" secure="true"  
        clientAuth="false" sslProtocol="TLS"  
        truststoreFile="/path/to/special/keystore"  
        truststorePass="SPECIAL_KEYSTORE_PASSWORD"  
        />  

5. Add servlet filters to SP app

The last thing we need to do is tell our SP app how to talk to the IdP.  This is done by configuring servlet filters in tomcat7-examples/WEB-INF/web.xml, as shown below.  These filters will intercept incoming calls to our SP, redirect to CAS for authentication if necessary, and redirect the user back to the page they wanted.

Note: replace the example URLs with your own IdP and SP URLs.
   <filter>  
    <filter-name>CAS Authentication Filter</filter-name>  
    <filter-class>org.jasig.cas.client.authentication.AuthenticationFilter</filter-class>  
    <init-param>  
     <param-name>casServerLoginUrl</param-name>  
     <param-value>https://cas.example.com:8443/cas/login</param-value>  
    </init-param>  
    <init-param>  
     <param-name>serverName</param-name>  
     <param-value>https://cas-app1.example.com</param-value>  
    </init-param>  
   </filter>  
   
   <filter>  
    <filter-name>CAS Validation Filter</filter-name>  
    <filter-class>org.jasig.cas.client.validation.Cas10TicketValidationFilter</filter-class>  
    <init-param>  
     <param-name>casServerUrlPrefix</param-name>  
     <param-value>https://cas.example.com:8443/cas</param-value>  
    </init-param>  
    <init-param>  
     <param-name>serverName</param-name>  
     <param-value>https://cas-app1.example.com</param-value>  
    </init-param>  
   </filter>  
   
   <filter>  
    <filter-name>CAS HttpServletRequest Wrapper Filter</filter-name>  
    <filter-class>org.jasig.cas.client.util.HttpServletRequestWrapperFilter</filter-class>  
   </filter>  
   
   <filter>  
    <filter-name>CAS Assertion Thread Local Filter</filter-name>  
    <filter-class>org.jasig.cas.client.util.AssertionThreadLocalFilter</filter-class>  
   </filter>  
   
   <filter-mapping>  
    <filter-name>CAS Authentication Filter</filter-name>  
    <url-pattern>/*</url-pattern>  
   </filter-mapping>  
   
   <filter-mapping>  
    <filter-name>CAS Validation Filter</filter-name>  
    <url-pattern>/*</url-pattern>  
   </filter-mapping>  
   
   <filter-mapping>  
    <filter-name>CAS HttpServletRequest Wrapper Filter</filter-name>  
    <url-pattern>/*</url-pattern>  
   </filter-mapping>  
   
   <filter-mapping>  
    <filter-name>CAS Assertion Thread Local Filter</filter-name>  
    <url-pattern>/*</url-pattern>  
   </filter-mapping>  

6. Test it

The last thing we do is test our app by navigating a browser to our SP: https://cas-app1.example.com/examples/servlets/servlet/HelloWorldExample.

If we've successfully configured everything, we should be redirected to our IdP server where it prompts for a login.  After authenticating (remember any identical user/password combo will work), we should then be immediately routed back to the Hello World app.

If we're unsuccessful, our SP will give us a 404.  In order to see the real problem, we'll need to check the Tomcat server logs.
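
On Ubuntu's tomcat7 package, for example, the relevant errors usually land in catalina.out (the path is an assumption; check your install):

  tail -n 100 /var/log/tomcat7/catalina.out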

Conclusion

We now have a working Identity Provider, and have validated it via a very basic Service Provider.  But for us to consider this true SSO, we really need to add more SPs and verify that when we log into one, we are also logged into the others.

To do this, we could set up multiple clones of this Hello World SP, but that wouldn't be very interesting.  In my next post, we will configure yet another Hello World SP, but this time integrate with some real-world frameworks we may encounter in the wild, such as Spring and Shiro.

Friday, September 20, 2013

SSO with CAS, Part 1: The Identity Provider



I recently had the opportunity to experiment with SSO using the CAS project (Central Authentication Service).  Unfortunately while the documentation and examples on the CAS website are sometimes helpful, they are also often lacking or out-of-date.  In this and subsequent posts, I'll relate my own experience setting up these servers.

For the purposes of these examples, I provisioned virtual machines on Amazon Web Services EC2 configured with Ubuntu 12, Tomcat 7, and Java 6.  In principle, the steps should not change much with a different OS or servlet container.  However, I assume in this post that you know how to install these things yourself on whatever platform you choose.

Also, CAS requires SSL.  For testing I did not want to purchase a real certificate, so I generated and installed self-signed certificates instead, which creates extra steps that you would not need in a production system.  Obviously, you may skip those steps if you have a real certificate.

1. Build the CAS war

In order to set up SSO we'll first need an Identity Provider (IdP).  In our case, a simple CAS server will do (latest as of this writing: 3.5.2).  The CAS war needs to be built manually, so we download the source and compile it ourselves (which requires Maven).

  mvn -Dmaven.test.skip=true package install  

This will generate a modules subdirectory where the cas-server-webapp.war is located.  Copy that file into the Tomcat webapps folder, and rename it to cas.war.
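
For example (the version number and paths are assumptions; adjust them to your build and Tomcat install):

  cp modules/cas-server-webapp-3.5.2.war /var/lib/tomcat7/webapps/cas.war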

Note: there is also a cas-server-uber-webapp.war file, which packages all the CAS support jars into the war by default.  Either war will work, but I prefer a leaner webapp to start with, and then add modules as I need them, so I chose the former.

2. Create a self-signed certificate

As mentioned, CAS requires SSL.  So we need to generate a self-signed certificate to get started.  The JDK comes with a very handy tool called keytool that can help us.

  keytool -genkey -alias tomcat -keyalg RSA -keystore /usr/share/tomcat7/.keystore  

This generates a new key pair and adds it to the keystore in /usr/share/tomcat7, which is the home directory for the user tomcat.  The location of the keystore is important: by default, Tomcat will look for certificates in the home directory of the user it's running as.  On Ubuntu, Tomcat runs as the user tomcat, and so we put the keystore in its home directory (note: keytool will create the keystore if it doesn't exist).

As part of creating the keystore and the certificate, the keytool will ask for a password.  The default password used by Tomcat is "changeit" (all lower case), and I recommend following that convention even though it's possible to choose a different password.  Choosing a custom password can lead to some unexpected behavior unless you configure Tomcat correctly.

3. Turn SSL on in Tomcat

The final step to activating SSL is to tell Tomcat to use it.  This is as simple as opening Tomcat's conf/server.xml and uncommenting the appropriate line:

   <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"  
        maxThreads="150" scheme="https" secure="true"  
        clientAuth="false" sslProtocol="TLS"  
        />  

This assumes a lot of defaults.  There are many potential things to customize, and reviewing them is outside the scope of this blog post.  See the Tomcat SSL documentation if you need to do more than this.
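
For example, if you chose a custom keystore password in step #2, you would need to point Tomcat at the keystore and password explicitly (a sketch; the attribute values are assumptions):

   <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"  
        maxThreads="150" scheme="https" secure="true"  
        clientAuth="false" sslProtocol="TLS"  
        keystoreFile="/usr/share/tomcat7/.keystore"  
        keystorePass="YOUR_CUSTOM_PASSWORD"  
        />  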

4. (Optional) Fix CAS log location

The default logs directory for CAS is the local web directory, and this causes permission problems on Linux.  In order to work around it, we can modify the CAS webapp's log4j settings (located at /WEB-INF/classes/log4j.xml), and point cas.log and perfStats.log to a writable directory.
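
A sketch of the change (the appender name and target path are assumptions based on the 3.5.x defaults):

   <!-- in WEB-INF/classes/log4j.xml -->  
   <appender name="cas" class="org.apache.log4j.RollingFileAppender">  
        <param name="File" value="/var/log/tomcat7/cas.log" />  
        ...  
   </appender>  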

If you don't do this, you may see log write errors in the Tomcat logs, but CAS will still continue to function as expected.

5. Export CAS certificate

Because we're using a self-signed certificate, we need to export that cert and take it with us to any of the client Service Providers that will use SSO.  That certificate will need to be installed as a trusted source in the certificate chain on those boxes.  We'll get to that in a later post, but for now, export the certificate like so:

  keytool -export -alias tomcat -file cas.crt -keystore /usr/share/tomcat7/.keystore  

Hang on to this file for later.

6. Test it (cas.example.com)

The last thing we do is test our server by navigating a browser to the IdP main URL: https://cas.example.com:8443/cas.  We should be challenged with a login screen.

The default CAS server configuration uses an authentication scheme called SimpleTestUsernamePasswordAuthenticationHandler, which accepts and authenticates any user/password combination that is identical.  That is, admin/admin, jacksmith/jacksmith, etc. will all authenticate.  In the Tomcat logs, CAS warns us this authentication scheme ought never to be used in a production environment, but for now it's okay.  We'll cover user provisioning in a subsequent post.
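
For reference, this default handler is declared in the webapp's WEB-INF/deployerConfigContext.xml (a sketch of the relevant bean; the surrounding configuration is omitted):

   <bean class="org.jasig.cas.authentication.handler.support.SimpleTestUsernamePasswordAuthenticationHandler" />  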

Conclusion

We now have a fully functional IdP that can respond to CAS requests from any Service Provider (SP) trying to authenticate a user.  In my next post, we'll set up a very simple SP to do just this.

Friday, May 31, 2013

How to Scope Scenarios with JBehave and Guice

If you use JBehave with a dependency injection framework such as Spring or Guice, you may quickly realize that scoping the Step classes gets a little tricky.  By default, both of those frameworks provide two basic scopes: instance (or prototype) scope, and singleton scope.

Neither one of these options is great for use with JBehave.  Instance scope is a problem because JBehave then creates a new instance of the Step class for every step in a scenario, making it impossible to share state between steps without using static (global) state.  Singleton scope is a different side of the same problem: state ends up shared among all scenarios.  In either case, to make things work you must remember to clean up the global state after each scenario.

A simpler solution would be to implement a custom "scenario" scope.  I will show you how to do this for Guice below.

First, we need to define a new custom scope by implementing Guice's Scope interface.  This class will be a container that adds and manages our dependencies when the scope is entered, and removes them when the scope is exited.  This could be daunting and error-prone, but fortunately the Guice developers have provided us with a default called SimpleScope, which does all this for us.  This class is sufficient for our needs, and you can copy it as-is straight into your source code.

Second, we need to tell JBehave when to actually create a new scope, and when to close the scope out.  Since we want to scope our dependencies to scenarios, we use JBehave's @BeforeScenario and @AfterScenario annotations to enter and exit each scope.  Note that we must inject our copy of SimpleScope, which is what actually manages the scoped dependencies.

 import org.jbehave.core.annotations.AfterScenario;  
 import org.jbehave.core.annotations.BeforeScenario;  
   
 import com.google.inject.Inject;  
 import com.google.inject.name.Named;  
   
 public class ScenarioContext {  
   
   private SimpleScope scope;  
   
   @Inject  
   public ScenarioContext( @Named ( "scenarioScope" ) SimpleScope scope ) {  
     this.scope = scope;
   }  
   
   @BeforeScenario  
   public void beforeScenario() {  
     scope.enter();  
   }  
   
   @AfterScenario  
   public void afterScenario() {  
     scope.exit();  
   }  
 }  

Third, much like Guice's Singleton annotation, we need a scope annotation to inform Guice about how we'd like our step classes scoped.  We will use this new annotation to bind instances to our new SimpleScope class.  We create it like so:

  import static java.lang.annotation.ElementType.METHOD;   
  import static java.lang.annotation.ElementType.TYPE;   
  import static java.lang.annotation.RetentionPolicy.RUNTIME;   
     
  import java.lang.annotation.Retention;   
  import java.lang.annotation.Target;   
     
  import com.google.inject.ScopeAnnotation;   
     
  @Target ( { TYPE, METHOD } )   
  @Retention ( RUNTIME )   
  @ScopeAnnotation   
  public @interface ScenarioScope {}   

There are two parts to the final step.  The first is to actually bind our step classes to our new scope, which is accomplished simply by passing our annotation class to the binder using .in().  However, we also need to inform Guice about how to manage the SimpleScope container.

 import com.google.inject.AbstractModule;  
 import com.google.inject.Singleton;  
 import com.google.inject.name.Names;  
   
 public class AppModule extends AbstractModule {  
   @Override  
   protected void configure() {  
     setUpScenarioScope();  
   
     bind( MySteps.class ).in( ScenarioScope.class );  
   }  
   
   private void setUpScenarioScope() {  
   
     // create the single SimpleScope instance that manages scenario-scoped objects,  
     // then register it before binding anything to it  
     SimpleScope scenarioScope = new SimpleScope();  
     bindScope( ScenarioScope.class, scenarioScope );  
   
     bind( SimpleScope.class ).annotatedWith( Names.named( "scenarioScope" ) ).toInstance( scenarioScope );  
     bind( ScenarioContext.class ).in( Singleton.class );  
   }  
 }  

The setUpScenarioScope() method above does a couple of things:
  • creates an instance of our SimpleScope class for managing dependencies (we only need one)
  • informs Guice of our new scope, using bindScope()
  • ensures that instance can be injected into our JBehave-annotated context class
  • binds that context in the singleton scope
That's it!  All step classes bound in scenario scope will be able to share data between their steps, while guaranteeing a fresh set of step instances for every new scenario.
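
As a usage sketch, the MySteps class bound in the module above can now keep ordinary instance state between the steps of one scenario (the step methods and story text here are hypothetical):

 import static org.junit.Assert.assertEquals;  
   
 import org.jbehave.core.annotations.Then;  
 import org.jbehave.core.annotations.When;  
   
 public class MySteps {  
   
   private String result;  // recreated for every scenario, shared across its steps  
   
   @When("the user enters $value")  
   public void whenTheUserEnters( String value ) {  
     result = value;  
   }  
   
   @Then("the stored value is $value")  
   public void thenTheStoredValueIs( String value ) {  
     assertEquals( value, result );  
   }  
 }  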

Known issue: This approach is not currently compatible with the jbehave-junit-runner library.  That library creates a special JBehave runner which formats the test results in standard JUnit output, and it relies on an older copy of JBehave that causes a chicken-and-egg problem with Step creation.  A patch has been submitted to fix this, but to date it has not been incorporated into a release.  A workaround is to build the library from source with the patch applied, and make sure you are using JBehave 3.8+.

Wednesday, February 27, 2013

Gradle: Configuration vs Execution

People new to Gradle are often tripped up early by the difference between the configuration and execution phases of the Gradle build lifecycle.  Understanding that difference is critical to using Gradle successfully as your automation tool.

Gradle has three build phases: initialization, configuration, and execution.  Initialization is pretty straightforward, but if you come from a more declarative build tool background (like Ant), differentiating the latter two can cause headaches.  The key is to understand the difference between configuring properties and adding actions.

Let's take a simple example:

 task copyFiles (type: Copy) {  
      from "srcDir"  
      into "destDir"  
 }  

Pretty straightforward, right?  I'm using an existing task type--Copy--to copy some files.  When I run this in Gradle, I see the following:

 C:\temp\myProject>gradle  
 :help  
 Welcome to Gradle 1.3.  
 To run a build, run gradle <task> ...  
 To see a list of available tasks, run gradle tasks  
 To see a list of command-line options, run gradle --help  
 BUILD SUCCESSFUL  
 Total time: 2.582 secs  

Which means, of course, that nothing actually executed.  In order to actually run the task, I'd need to explicitly ask Gradle to run it.

 C:\temp\myProject>gradle copyFiles  
 :copyFiles  
 BUILD SUCCESSFUL  
 Total time: 2.04 secs  

So what's really going on here?  Let's add a print statement.

 task copyFiles (type: Copy) {  
      println "copying files"  
      from "srcDir"  
      into "destDir"  
 }  

And the output:

 C:\temp\myProject>gradle copyFiles  
 copying files  
 :copyFiles
 BUILD SUCCESSFUL  
 Total time: 2.736 secs  

Notice something strange?  Our println statement is happening before Gradle executes the task.  What if I run Gradle without any tasks?

 C:\temp\myProject>gradle  
 copying files  
 :help  
 ...
 BUILD SUCCESSFUL  
 Total time: 1.791 secs  

Again, the output still prints, even though we didn't actually execute the task.  Why?

The answer is that all task configurations get executed on every Gradle build, no matter whether the tasks actually execute or not.  Think about it this way: all code inside the main body of a task is setup.  So when we invoke the methods "from" and "into" on the Copy task, Gradle is not actually copying the files ... yet.  It's simply instructing the task: "when it's your turn to execute, here's what I want you to do."

Because our println statement is in the configuration section of the task, it always runs, whether or not the task actually executes.

So the question now becomes: how do we write code that only executes when the task executes?  This is handled by what Gradle calls actions.  Actions are simply chunks of code attached to a task that run--in succession--when the task executes.  Actually performing the copy operation is the Copy task's default action.  We can add actions to any task by invoking doLast().

 task copyFiles (type: Copy) {  
      from "srcDir"  
      into "destDir"  
      doLast {  
           println "copying files"  
      }  
 }  

Now when we run the task, we get the expected behavior.  Our println happens during the execution phase.

 C:\temp\myProject>gradle copyFiles  
 :copyFiles  
 copying files  
 BUILD SUCCESSFUL  
 Total time: 1.932 secs  

Likewise if we omit the task, we no longer see the text.

 C:\temp\myProject>gradle  
 :help  
 ...  
 BUILD SUCCESSFUL  
 Total time: 1.932 secs  

The method doLast() (and its counterpart doFirst()) simply manipulates the list of actions attached to a task.  In order to make task definitions a little simpler, Gradle introduces a little syntactic sugar for doLast():

 task newTask << {  
      print "You will only see this in the execution phase"  
 }  

Notice that this is not the same thing as:

 task newTask  {  
      print "You will see this in the configuration phase"  
 }  

So remember these little rules:
  • Task configuration runs every build (during the configuration phase)
  • Task actions run only when a task is actually run (during the execution phase)
  • Code in the main body of a task declaration is configuration code
  • Add actions to a task using doFirst(), doLast() and the << shortcut
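
Here is a minimal sketch that puts these rules together (the task name is arbitrary):

 task demo {  
      println "configuring demo"               // configuration: runs on every build  
      doFirst { println "starting the work" }  // action: runs only when demo executes  
      doLast { println "finishing the work" }  // action: runs after all doFirst actions  
 }  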
One concluding note: beginning with version 1.4, the Gradle team has begun experimenting with "configuration on demand".  This feature is just what it sounds like: Gradle will try to determine which tasks will actually be executed, and only configure those tasks.  This mitigates an excessively long configuration phase when only a small number of tasks actually execute.




Monday, February 25, 2013

Gradle "extra" properties

Extra properties in Gradle are very versatile--so versatile in fact, that sometimes they can induce a little confusion.  Here's a quick entry to sort them out.

Let's start with the ways to get and set them within a build script.  We'll use project properties as an example.

Here are the myriad ways to set a project property:

 project.ext.myprop1 = 'a'  
 project.ext.set('myprop1', 'a')  
 project.ext['myprop1'] = 'a'  
 project.ext {  
    myprop1 = 'a'  
 }  

And here's how we get one back:

 assert myprop1 == 'a'  
 assert project.myprop1 == 'a'  
 assert project.ext.myprop1 == 'a'  
 assert project.ext.get('myprop1') == 'a'  
 assert project.ext['myprop1'] == 'a'  

These are essentially straight out of the Gradle DSL, but ordered, I think, a little more clearly.  Notice that specifying the 'ext' property is necessary in all cases when setting a property, but it's optional when getting it.

Project properties can be specified in a gradle.properties file, or on the command line using the -P option.  They end up in the same 'ext' bucket with all other extra properties.
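
For example, these are equivalent ways to supply the same property (the name and value are hypothetical):

  # in gradle.properties, at the project root:  
  myprop1=a  
   
  # or equivalently, on the command line:  
  # gradle -Pmyprop1=a build  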

Properties are most commonly set on the project, but can be set on other objects as well.  Any object that implements ExtensionAware has this 'ext' property and can follow these rules.  The only other object type that implements this by default is the task.

 task myTask {  
      ext.newprop = 'a'  
 }  
 assert myTask.newprop == 'a'  

The existence of properties can generally be tested using the hasProperty() method as described in the Gradle docs, but be a little careful with its use.  Due to a bug, it will not work inside allprojects{} or subprojects{} blocks without being qualified.

 project.ext.testProp = 'a'  
 if ( hasProperty('testProp') ) {  
      println "Can see testProp!"  
 }  
 allprojects {  
      if ( project.hasProperty('testProp')) {  
           println "Can see testProp!"  
      }  
      if ( hasProperty('testProp')) {  
           println "Can see testProp!"  
      } else {  
           println "CANNOT see testProp!"  
      }  
 }  

Outputs:

 Can see testProp!  
 Can see testProp!  
 CANNOT see testProp!  

Also, hasProperty() will not work within a settings.gradle file.  You must use a try-catch block to test for the existence of properties.

 try {  
      println myProp // if this doesn't exist, an exception will be thrown  
 } catch (MissingPropertyException e) {  
      ...  
 }  

Using extra properties in Gradle is pretty straightforward, but there are a few hitches to be aware of.  Hopefully this helps the new Gradle user sort them out.

Tuesday, September 18, 2012

To paren or not to paren

Groovy's policy on parentheses can leave you scratching your head when using Gradle, if you're not careful.  In Groovy, parentheses are optional when the method being invoked has at least one parameter.  Also, a closure can be specified outside the parentheses, provided it's the final argument.

In more concrete terms, given this Gradle config file:

 def myFunc (val, Closure c) { println val; c.call() }  
   
 task hello << {  
      myFunc( "1", { println "All parameters in parentheses" } )  
      myFunc "2", { println "No parameters parentheses" }  
      myFunc( "3" ) { println "Only the closure outside parentheses" }  
 }  

All three of these lines are equivalent.  In each case, the first parameter is passed to myFunc and printed, and the closure is passed and called.

But what if we decide to leave the parentheses out entirely?

 task hello << {  
      myFunc( "1", { println "All parameters in parentheses" } )  
      myFunc "2", { println "No parameters parentheses" }  
      myFunc( "3" ) { println "Only the closure outside parentheses" }  
      myFunc "4" { println "No parenthesis at all" }  
 }  

Gradle throws an error:

 > Could not find method 4() for arguments [build_1cuekq1blf8f476ss7e8ms7254$_run_closure1_closure7@1f52125]  

This is because Groovy actually parses this line as if it looked like this:

 myFunc ("4" ({ println "No parenthesis at all" }) )  

In other words, Groovy thinks you're trying to chain calls: invoking the string "4" as if it were a method taking the closure as its argument, and then passing the result of that operation to myFunc().  And it rightly objects.  You might be wondering why anyone would do this, and probably no one would.  But when using Gradle, you can accidentally fall into this trap.

Take, for example, configuring a JAR file using the Java plugin.  Let's say we want to copy all our runtime dependency JARs into the artifact's internal "/runtime" directory.  (Never mind why we want to do this; it's just an example!)

 apply plugin: 'java'  
   
 dependencies {  
      runtime (  
           // specify runtime dependencies here  
      )  
 }  
   
 jar {  
      from ( configurations.runtime ) { into "/runtime" }  
 }  

This works as expected.  You end up with a normally constructed JAR, with a special runtime directory that contains all runtime dependencies.

But what if you neglect the parentheses?

 jar {  
      from configurations.runtime { into "/runtime" }  
 }  

Now, rather than copying only the runtime JARs into the runtime directory, it copies all of the JAR's contents into that directory.  In effect, it's the same as specifying this:

 jar {  
      into "/runtime"  
 }  

This is because the ConfigurationContainer (which wraps the "runtime" Configuration) accepts and executes closures.  So Gradle doesn't give you an error as it did with the "4" example above, and you're left very confused as to why your JAR's directory structure is wrong.

The bottom line is: when in doubt, keep the parentheses there.