Beware of Maven resource filtering – AGAIN!

I recently blogged about problems I’d encountered with Maven filtering resource files that I didn’t actually want filtered, resulting in corrupted resources in my target artifact. So you’d think I’d be more careful from that point on, right?

Well, it’s just happened again! In the first situation I blogged about, the resource files in question were TrueType font files. In this latest occurrence I couldn’t understand why some native DLLs which I’m packaging with my app appeared not to be loading correctly. After much head scratching, it finally dawned on me that they could be getting corrupted during the Maven build. When I checked the POM I found that I’d inadvertently switched on filtering for all resources, with the result that the DLLs were being filtered and ending up corrupted. Once I’d corrected the filtering configuration, everything started working again.
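If you do need filtering for some resources, the safer pattern (this is a sketch; the .dll pattern and directory are just illustrative) is to declare the resource directory twice, so that text resources are filtered but binary files are copied across verbatim:

<build>
  <resources>
    <!-- filter text resources, keeping binaries out of the filtered set -->
    <resource>
      <directory>src/main/resources</directory>
      <filtering>true</filtering>
      <excludes>
        <exclude>**/*.dll</exclude>
      </excludes>
    </resource>
    <!-- copy the native DLLs with no filtering at all -->
    <resource>
      <directory>src/main/resources</directory>
      <filtering>false</filtering>
      <includes>
        <include>**/*.dll</include>
      </includes>
    </resource>
  </resources>
</build>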

So the moral is: always be aware of the implications of switching on Maven resource filtering!

Passing arguments to surefire when using the Maven release plugin

I’ve recently been using the Maven release plugin more and more at work to simplify the process of releasing the various Maven artifacts we produce. I’ll not go into detail about the release plugin as you can read more about it here, but what I will say is that it does a lot of the manual grunt work associated with releasing an artifact for you, e.g. checking for unresolved SNAPSHOT dependencies, updating POM versions, committing to your SCM, creating SCM tags, etc. There are a few gotchas and quirks to getting it working reliably (hey, this is Maven we’re talking about!) but once it’s working it makes life a little easier.

We use Hudson extensively as our Continuous Integration server to build and test our Maven projects, and we’ve got several jobs configured to allow releases to be performed using the M2 release Hudson plugin. This was all working just fine until we attempted to release something whose unit tests required certain properties to be set, defining the environment the tests should be executed in. Doing this from the command line involves passing a couple of properties to the surefire plugin using the argLine plugin parameter, as discussed here. However, when the tests were executed as part of the release plugin lifecycle, these properties just weren’t being recognised.
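For reference, the sort of invocation that works outside the release plugin looks something like this (the property names match the release example below and are obviously specific to our tests):

mvn test -DargLine="-Denv=dev -Dsite=london"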

Eventually, after some Googling (how often is that the case!), we came across a blog post which discussed a little-documented feature of the release plugin that allows arguments to be passed to the surefire plugin using the -Darguments option. With a bit of careful nesting of single and double quotes we were finally able to get the required properties into the surefire plugin as part of the release plugin lifecycle, as follows:

-Darguments="-DargLine='-Denv=dev -Dsite=london'"
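In other words, the full release invocation from the command line (or the equivalent arguments entered in the Hudson M2 release job) ends up looking something along the lines of:

mvn release:prepare release:perform -Darguments="-DargLine='-Denv=dev -Dsite=london'"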

Uploading files using scp and the Maven Wagon plugin

I’ve been struggling with a little Maven problem for a while but only just managed to find time to look into it in any detail. What I’ve been trying to do is copy an artifact to a remote server using scp. The artifact in question is a WAR which I want to copy to a server hosting Tomcat, so this is not a typical deploy-artifact-to-repository type of requirement.

(As an aside, I know all about the Cargo plugin for deploying web apps to servlet containers, but in this instance I’m interested in the more general issue of copying any artifact, be it a WAR, a JAR or something else, using scp.)

In principle this sounds like a very simple thing to do. The Maven Wagon plugin is the tool for the job but the documentation is woefully inadequate and I just could not get it to do what I wanted.

Anyway, after a lot of Googling and, crucially, inspecting Maven debug output from failed attempts at using the plugin I’ve finally cracked it.

Everything I’d seen written about this involved the following aspects…

Configuring details about the server to be copied to (typically in your main Maven settings.xml configuration):

<server>
  <id>my-server-id</id>
  <username>my-user-name</username>
  <password>my-password</password>
</server>

Using the Wagon plugin to perform the actual copy:

<build>
  <extensions>
    <extension>
      <groupId>org.apache.maven.wagon</groupId>
      <artifactId>wagon-ssh</artifactId>
      <version>1.0-beta-6</version>
    </extension>
  </extensions>

  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>wagon-maven-plugin</artifactId>
      <version>1.0-beta-3</version>
      <configuration>
        <fromFile>${project.build.directory}/${project.build.finalName}.war</fromFile>
        <url>scp://my-server-id.fully.qualified.domain/path/to/destination</url>
      </configuration>
      <executions>
        <execution>
          <id>upload-war-to-server</id>
          <phase>deploy</phase>
          <goals>
            <goal>upload-single</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

All looks logical… but it simply refused to work, complaining about authentication failures. I knew the corresponding <server> configuration block was using the correct username and password, so the symptoms suggested that it wasn’t finding the <server> configuration. I’d made sure the host part of the server domain in the scp:// URL matched the server id element but it just wouldn’t match them up.

And then I noticed something in the Wagon plugin’s debug output – mention of a serverId property in the configuration. I’d not seen this documented anywhere before, but I thought I’d try adding it to my Wagon plugin configuration all the same…

      <configuration>
        <serverId>my-server-id</serverId>
        <fromFile>${project.build.directory}/${project.build.finalName}.war</fromFile>
        <url>scp://my-server-id.fully.qualified.domain/path/to/destination</url>
      </configuration>

…and all of a sudden it started working! So, in my situation that appears to have been the missing link between my Wagon plugin and the server configuration details.
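For completeness, because the execution above is bound to the deploy phase, the upload happens as part of a normal deploy build, i.e. something like:

mvn clean deploy

(assuming the rest of your deploy configuration, such as distributionManagement, is already in place).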

Beware when filtering TrueType font resources in Maven

I’ve just come up against an interesting problem while loading custom TrueType fonts bundled with a Java Swing GUI built using Maven.

Initially, for simplicity and because I’d never actually tried creating custom fonts from TTF files before, I loaded the TTF file from an absolute location, i.e. not relative to the JAR classpath. Everything worked fine, so I proceeded to package the TTF file into my JAR as a regular resource from the standard Maven src/main/resources directory, and changed the font loading code to load it relative to the JAR classpath. That’s when strange things started to happen…

I noticed that certain glyphs in the font were very subtly wrong. For example, the top of all “S” glyphs was slightly squashed. I switched my code back to using an absolute file path (referencing the /src/main/resources location) and everything looked fine again. That got me wondering whether the font file could be getting corrupted somehow when being packaged into the JAR.

And then it dawned on me… I’m filtering other resources so could that be the problem?

You bet it was!

It turns out that the TTF file was being filtered when packaged into the JAR. As soon as I excluded TTF files from this filtering, everything worked as expected again.
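For anyone hitting the same problem, one way of expressing that exclusion (a sketch rather than my exact POM) is the maven-resources-plugin’s nonFilteredFileExtensions setting, which tells the plugin to copy files with the listed extensions verbatim even when filtering is switched on for the resource set they live in:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-resources-plugin</artifactId>
  <configuration>
    <!-- never run TrueType fonts through the filter, even if their resource directory is filtered -->
    <nonFilteredFileExtensions>
      <nonFilteredFileExtension>ttf</nonFilteredFileExtension>
    </nonFilteredFileExtensions>
  </configuration>
</plugin>

Alternatively, the resource directory can be declared twice, once filtered with *.ttf excluded and once unfiltered with only *.ttf included.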

So, the moral of this is: watch out for resource filtering when using TrueType font files (or any other file that could be damaged by unwanted filtering).