Sunday, December 18, 2016

Upgrading Grails-2.2.1 to Grails-3: Static Assets take a BIG move . . .

I recently upgraded a Grails 2.2.1 web-app to Grails 3.2.1. It was a BIG move forward. Moving static assets (JavaScripts, CSS and images) to their new assets directory, organizing & setting up directives/manifests in their new home directory, making all required changes to views & templates, testing all views for styles & images, and testing views with AJAX functionality involving JavaScripts... overall, it took a considerable amount of my time during the whole upgrade effort.

Following are some key points from my efforts:
  • Grails 3 doesn't come with the Resources plugin, and hence you will not have ResourceTagLib in your classpath. You can probably still use the Resources plugin in a Grails-3 app, but the Asset-Pipeline plugin seems to be the viable option.
  • Static assets which can be handled by the Asset-Pipeline plugin need to move from the web-app directory to grails-app/assets, their new home. Files under web-app/css, web-app/js and web-app/images now go into grails-app/assets/stylesheets, grails-app/assets/javascripts and grails-app/assets/images respectively.
  • Grails-3 comes with jquery-2.2.0.min.js and bootstrap.js (3.3.6). If your Grails 2.2.1 app depended on these, you probably had jquery-1.7.2.min.js and bootstrap.js (2.2.2). If so, you will be better off retaining the older versions to start the upgrade process, eliminating this extra variable from the upgrade equation.
  • The recommended approach to upgrade a Grails 2.x app to Grails 3.x is to first create a new Grails-3 application and start copying all artifacts from old to new locations. The Upgrading section of the Grails-3 documentation documents the old and new locations very well. With respect to static assets: when a new Grails-3 app is created, you will notice images, javascripts and stylesheets sub-directories under grails-app/assets, with a bunch of static assets already sitting in there. You may have to do some cleanup of these files.
  • Modularize your assets
    1. AppResources.groovy is the Resources plugin's way of modularizing JavaScript and CSS files by grouping these static assets into modules. But the Asset-Pipeline plugin minifies and compresses all JavaScript & CSS files and also enables browser caching, so static assets are only served once for all pages. It may therefore not be required to group/modularize static assets. If truly needed, for every module (e.g. module1) in AppResources.groovy, an equivalent manifest/directive file named module-name.js (e.g. module1.js) can be created listing all its dependencies.
    2. The directive files application.js and application.css are main manifest files for JavaScripts and CSS.
    3. If you have modularized static assets in your Grails-2 app, AppResources.groovy should be your reference for re-organizing static assets in the Grails-3 app, to minimize changes in views and view templates.
    4. You can modularize your static assets in Grails-3 the same way as in the Grails-2 app, with no need for the AppResources.groovy file, but with equivalent module manifests/directives created.
    5. Create a one-to-one asset-pipeline directive/manifest file for each module defined in AppResources.groovy. For instance, if you have, let's say, a common module defined listing all its dependency resources (both JavaScript and CSS files), create common.js and common.css asset-pipeline directives that list the required JavaScript and CSS dependencies respectively in the Grails-3 app (see the manifest sketch after this list).
    6. If you have many modules in your application, your grails-app/assets/javascripts and grails-app/assets/stylesheets will get cluttered, with manifest files mixed in with actual assets. You will be better off keeping directives separate from actual assets: keep the actual JavaScript and CSS assets under grails-app/assets/javascripts/lib and grails-app/assets/stylesheets/shared sub-directories respectively, so that the asset-pipeline manifest files stay in the main grails-app/assets/javascripts and grails-app/assets/stylesheets directories.
  • Modify views & view templates and change Resources tags to equivalent asset-pipeline tags (see the layout sketch after this list)
    1. Replace all <r:script> </r:script> with <asset:script type="text/javascript"> </asset:script>
    2. Replace <r:external file="/static/images/favicon.ico"/> with <asset:link rel='shortcut icon' href="favicon.ico" type="image/x-icon"/>
    3. Remove all <r:layoutResources/> in <head></head> and replace the <r:layoutResources/> at the very bottom of layout pages with <asset:deferredScripts/> (this is the Asset-Pipeline plugin's equivalent of the Resources plugin's deferring of scripts to the bottom of the page)
    4. If you have modularized assets in the Grails 2.2.1 app, then for a module module1, replace all <r:require module="module1"/> with <asset:stylesheet src="module1"/> and <asset:javascript src="module1"/> if module1 has both JavaScripts and CSS in it.
  • If there are other static assets like PDF files referenced in views by the grails resource tag or its equivalent method call, they can safely be moved from Grails-2's web-app/pdf to Grails-3's grails-app/assets/pdf and be served by the Asset-Pipeline plugin. These assets, like images, need no manifest/directive files; it simply works.
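To make the module mapping concrete, here is a sketch of a hypothetical common module as it might appear in a Grails-2 AppResources.groovy, followed by the equivalent Grails-3 asset-pipeline manifests. File names and dependencies are made up for illustration, and the actual assets are assumed to live under the lib/ and shared/ sub-directories suggested above:

//Grails-2: grails-app/conf/AppResources.groovy
modules = {
    common {
        resource url: 'js/jquery-1.7.2.min.js'
        resource url: 'js/app-common.js'
        resource url: 'css/app-common.css'
    }
}

//Grails-3: grails-app/assets/javascripts/common.js
//= require lib/jquery-1.7.2.min
//= require lib/app-common

/*
 * Grails-3: grails-app/assets/stylesheets/common.css
 *= require shared/app-common
 */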
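And a minimal layout sketch (a hypothetical grails-app/views/layouts/main.gsp) showing where the asset tags land after the migration; all tags below appear in the replacement rules above:

<html>
<head>
    <g:layoutTitle default="My App"/>
    <asset:link rel="shortcut icon" href="favicon.ico" type="image/x-icon"/>
    <asset:stylesheet src="application.css"/>
    <asset:javascript src="application.js"/>
    <g:layoutHead/>
</head>
<body>
    <g:layoutBody/>
    <%-- deferred scripts render here, replacing the bottom <r:layoutResources/> --%>
    <asset:deferredScripts/>
</body>
</html>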

Summary

Grails moved away from the Resources plugin in favor of the Asset-Pipeline plugin starting with version 2.4. Upgrading apps on versions prior to Grails 2.4 to 3.x certainly requires considerable development and testing effort with respect to static assets. So, just be prepared for this BIG move.

References

Resources Plugin Docs
Asset-Pipeline Plugin Docs
Asset-Pipeline Plugin - GitHub source code
Grails Team Blog Post on Migrate from Resources Plugin to Asset-Pipeline Plugin
Very nice Introduction to Asset Pipeline Plugin


Sunday, December 11, 2016

Know test-case run order in a Grails-3 app to help solve issues with "test pollution" . . .

In a Grails 3 application that I recently upgraded from Grails 2.2.1 to Grails 3.2.1, it was very puzzling when an integration specification (a test case upgraded from Grails 2.2.1) with a few feature methods (test-case methods) passed locally but failed on the Bamboo CI server. From the kind of failure, it was evident that this particular specification failed due to "data pollution": some unwanted data hanging around in the database by the time it ran.

The failed specification was an integration test, and I was under the impression that when the grails test-app command is run, test specifications run in the order of test categories: unit tests, followed by integration tests, followed by functional tests. If that were the case, the integration test that failed should never have failed: all integration specifications were properly annotated with @Integration and @Rollback, and there was no setup() method creating data that couldn't be rolled back by the @Rollback annotation. Grails 3 doesn't distinguish integration tests from functional tests (at least when they are run as part of the integrationTest gradle task), though they are distinguished in code: typically integration tests extend Spock's Specification and are annotated with @Integration and @Rollback, whereas functional tests simply extend GebSpec.

Both local and the CI server ran the test-app task against an empty database. The only difference: local was Mac OS X and the Bamboo CI server was Linux. There was a functional test (a GebSpec) among the integration tests under the integration-test/groovy/myapp dir which had a bunch of feature methods. It obviously did not have any data-cleanup methods like cleanup() or cleanupSpec(); I didn't even want to add any cleanup at the end of that functional spec, because it ran perfectly fine locally. That led me to suspect test-data pollution as the cause of this issue. But the most puzzling question was: why was this functional test running in between and polluting integration test cases? The only way to find out was to know the order in which test cases run and compare the local run with the Bamboo CI run.

I read through some documentation of Spock and JUnit but didn't find an easy way to know the order of tests. Spock supports the @Stepwise annotation to specify order, but only within a specification. I then read through some Gradle documentation about the test task and found a way to tap into the lifecycle of test cases and print the description of each test case as it runs. This helped me find the test-case run order and compare the local run with the CI run to nail down the issue. Locally on Mac OS X, test cases ran in alphabetical order, whereas on the Bamboo CI server (Linux) they seemed to run in random order. The culprit was the one GebSpec functional test that ran along with all integration tests during the integrationTest task of grails test-app. It was a coincidence that this functional specification's name was alphabetically last in the set of integration tests: on Mac OS X it ran as the very last test case, but on Linux it ran in between, leaving data in the database, causing the following test to fail, and thus leading to "test data pollution".

Following is the code snippet I added in build.gradle that prints each test before it's run:

/**
 * Configure test and integrationTest tasks.
 * Added beforeTest closure to get notified before a test is run. The closure simply logs the
 * test descriptor which indicates the test method that is being executed. Added to help find
 * the test execution order differences between two test runs or even differences between two
 * systems like local, ci etc.
 */
test {
    beforeTest { descriptor ->
        logger.lifecycle("Running test: " + descriptor)
    }
}
integrationTest {
    beforeTest { descriptor ->
        logger.lifecycle("Running test: " + descriptor)
    }
}

This will print as shown below, for instance when unit tests are run with the command: grails test-app -unit
:compileJava UP-TO-DATE
:compileGroovy
:buildProperties
:processResources
:classes
:compileTestJava UP-TO-DATE
:compileTestGroovy
:processTestResources UP-TO-DATE
:testClasses
:test
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=2048m; support was removed in 8.0
Running test: Test testIndex(myapp.FirstTestSpec)
Running test: Test testList(myapp.FirstTestSpec)
Running test: Test testCreate(myapp.FirstTestSpec)
Running test: Test testList(myapp.SecondTestSpec)
Running test: Test testCreate(myapp.SecondTestSpec)
Running test: Test testIndex(myapp.ThirdTestSpec)
. . .

Ideally the order in which test cases run should not be a concern at all. But when functional tests and integration tests run together as integration tests, there is a high chance of the two getting mixed up in the run sequence, with functional tests polluting integration tests. Knowing the order in which test cases run helps solve this problem.

After finding the issue, I refactored the test cases under the integration-test/groovy/myapp directory, separated functional tests from integration tests into two different folders/packages (myapp.integration and myapp.functional), and even separated their executions by running unit, integration and functional tests in 3 steps instead of all in one step (grails test-app), as follows:
grails -Dgrails.env=development test-app -unit
grails -Dgrails.env=development test-app myapp.integration.* -integration
grails -Dgrails.env=development test-app myapp.functional.* -integration

This guarantees that my functional tests, which are expected to leave data in the database after a run (as they were written), run as the last group of tests, in a more controlled manner.

References

Gradle Test task documentation

Sunday, November 20, 2016

Upgrading Grails-2 application to Grails-3: Spring Security Core Plugin differences . . .

I recently upgraded one of our applications from Grails 2.2.1 with Spring Security Core plugin 1.2.7.3 on Java 1.6 to Grails 3.2.1 with Spring Security Core plugin 3.1.1 on Java 1.8. By following the recommended path, detailed well enough in the Grails 3 documentation, I got the following done before I got to the point of successfully running the application:
  • Upgraded one of our in-house plugins: ZipCityState
  • Reorganized Grails artifacts and other files as per Grails-3 app directory structure
  • Rewrote build and other configurations
  • Fixed several code compilation errors and issues resulting from the changed package names of several Grails framework classes and from classes that were deprecated and removed
  • Upgraded static resources like images, javascripts and stylesheets from the Resources plugin to the Asset-Pipeline plugin by re-organizing those files and creating appropriate asset-pipeline directives to mimic the Resources plugin's modules defined in AppResources.groovy
Once all of the above was done, I had to make the following security-related changes for the application to successfully run, display and log in:

Static Rules

Static rules are now a List of Maps, not just a Map. I covered this in my previous post; check it out. A condensed before/after follows.
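As a quick reminder of the shape change, here is a condensed before/after (trimmed to two rules; the full rules appear in the earlier post):

//Grails-2: staticRules as a Map
grails.plugins.springsecurity.controllerAnnotations.staticRules = [
    '/login/**': ['IS_AUTHENTICATED_ANONYMOUSLY'],
    '/**'      : ['IS_AUTHENTICATED_FULLY']
]

//Grails-3: staticRules as a List of Maps (note the singular 'plugin' in the config key)
grails.plugin.springsecurity.controllerAnnotations.staticRules = [
    [pattern: '/login/**', access: ['IS_AUTHENTICATED_ANONYMOUSLY']],
    [pattern: '/**',       access: ['IS_AUTHENTICATED_FULLY']]
]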

Authentication

Change the username and password form fields in the login page (auth.gsp) from j_username and j_password to username and password.

If you have used UsernamePasswordAuthenticationFilter.SPRING_SECURITY_LAST_USERNAME_KEY somewhere in your code, you need to change it to SpringSecurityUtils.SPRING_SECURITY_LAST_USERNAME_KEY, as sketched below.
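The change itself is a simple constant swap; a sketch (the session-attribute lookup is a hypothetical usage, not code from the app):

//before (Grails-2):
//import org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter
//def lastUsername = session[UsernamePasswordAuthenticationFilter.SPRING_SECURITY_LAST_USERNAME_KEY]

//after (Grails-3 plugin):
import grails.plugin.springsecurity.SpringSecurityUtils

def lastUsername = session[SpringSecurityUtils.SPRING_SECURITY_LAST_USERNAME_KEY]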

If you have any pre-authentication checks written by extending DefaultPreAuthenticationChecks, note that the Hibernate session does not seem to be created and attached to the current thread at this point.

If you run into an exception like the following, you may need to use either the withTransaction or withSession method on the domain object to get around it (see the sketch below the stack trace):

org.springframework.dao.DataAccessResourceFailureException: Could not obtain current Hibernate Session; nested exception is org.hibernate.HibernateException: No Session found for current thread.
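A minimal sketch of the withTransaction idea, assuming a hypothetical User domain class and a custom checker (the exact base class and wiring may differ in your app; withTransaction binds a Hibernate session to the current thread for the duration of the closure):

import org.springframework.security.core.userdetails.UserDetails
import org.springframework.security.core.userdetails.UserDetailsChecker

//hypothetical custom pre-authentication checker
class MyPreAuthenticationChecks implements UserDetailsChecker {
    @Override
    void check(UserDetails userDetails) {
        //wrap domain access in withTransaction so a Hibernate session
        //is bound to the current thread at this early point of authentication
        User.withTransaction {
            def user = User.findByUsername(userDetails.username)
            //... custom checks against the domain object go here ...
        }
    }
}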

Password encryption algorithm differences

The application has an admin account that is created in the database only once (with an exists check) from Bootstrap. After the upgrade to Grails 3.2.1, login failed for the admin user that had been created by the Grails 2.2.1 app, with the following exception:

ERROR org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/].[grailsDispatcherServlet] -
Servlet.service() for servlet [grailsDispatcherServlet] in context with path [] threw exception
[Filter execution threw an exception] with root cause
java.lang.AssertionError: Salt value must be null when used with crypto module PasswordEncoder. Expression: salt. Values: salt = admin
    at org.codehaus.groovy.runtime.InvokerHelper.assertFailed(InvokerHelper.java:404)
    at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.assertFailed(ScriptBytecodeAdapter.java:650)
    at grails.plugin.springsecurity.authentication.encoding.BCryptPasswordEncoder.checkSalt(BCryptPasswordEncoder.groovy:49)

The error was a bit puzzling and made me comment out the following Spring Security Core plugin configuration property set in application.groovy:

//grails.plugin.springsecurity.dao.reflectionSaltSourceProperty = 'username'

Commenting out that property revealed the issue with the following error:
WARN org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder - Encoded password does not look like BCrypt

After quickly reading through the documentation of both the Grails-2 Spring Security Core plugin and the Grails-3 Spring Security Core plugin, I found a special mention of the bcrypt algorithm in the version 3 documentation. It was also specified up front in the Configuration Settings section of the doc that the plugin's default security settings are maintained in the DefaultSecurityConfig.groovy file. I checked both plugin versions and found the following differences:

Grails-3 plugin
password {
    algorithm = 'bcrypt'
    encodeHashAsBase64 = false
    bcrypt {
        logrounds = 10
    }
    hash {
        iterations = 10000
    }
}

Grails-2 plugin
password.algorithm = 'SHA-256'
password.encodeHashAsBase64 = false
password.bcrypt.logrounds = 10

The key differences: the default algorithm changed from 'SHA-256' to 'bcrypt', and the hash.iterations property is set to 10000 in the Grails-3 plugin but is not set explicitly in the Grails-2 plugin. I had to set algorithm and hash.iterations explicitly to match the Grails-2 plugin and retain reflectionSaltSourceProperty in application.groovy. The following are the changes:

grails.plugin.springsecurity.password.algorithm = 'SHA-256'
grails.plugin.springsecurity.password.hash.iterations = 1
grails.plugin.springsecurity.dao.reflectionSaltSourceProperty = 'username'

Summary

With the static rule configuration changes, the password encryption property changes, and code changes to auth.gsp and some security-related classes, I was able to get the application successfully migrated from Grails 2.2.1 to Grails 3.2.1 along with the upgraded Spring Security Core plugin.

References

Grails 3.2.1 documentation
Grails Spring Security 2 documentation
Grails Spring Security 3 documentation


Sunday, November 13, 2016

Upgrading Grails-2 application to Grails-3 - Make static assets available . . .

Recently, I started upgrading a Grails web application from Grails-2 to Grails-3, specifically from version 2.2.1 to 3.2.1. The matching minor.patch (2.1) of both versions was just a timing coincidence: Grails was at 2.2.1 when this app was developed, and the latest Grails was at 3.2.1 when I started the upgrade.

The process of upgrading was not bad, but it required quite a few careful changes. I simply followed the very well documented Grails 3 documentation and got going on this effort after successfully migrating an in-house plugin, ZipCityState, from Grails-2 to Grails-3. That deserves another blog post.

Static Assets

One of the web aspects that needed a bit more effort in this upgrade was static assets.

Grails-3 comes with the asset-pipeline plugin. Static assets like images, javascripts and stylesheets need an upgrade by moving from the Resources plugin to the Asset-Pipeline plugin. There is a nice blog post on this from the Grails team; according to it, the viable option in Grails-3 is asset-pipeline, as the Resources plugin is not available. I will write a more detailed post on this later.

After upgrading the application and taking care of several compilation and startup issues along the way, I was finally able to get the application started and running. The login page showed up, but all the static assets (stylesheets, javascripts and images) were totally missing. This made me look into all the asset-pipeline directives and related changes I had made in migrating static assets to asset-pipeline. Everything looked good, but the issue was just puzzling. After spending some time, I finally realized that the static-asset upgrade to asset-pipeline from Grails-2 to Grails-3 needs some attention from the security settings as well.

The Grails Spring Security plugin takes a pessimistic lockdown approach and locks down all URLs that do not have an applicable URL mapping.

Usually, Grails-2 security static rules look like:
grails.plugins.springsecurity.controllerAnnotations.staticRules = [
    '/login/**' : ['IS_AUTHENTICATED_ANONYMOUSLY'],
    '/js/**'    : ['IS_AUTHENTICATED_ANONYMOUSLY'],
    '/css/**'   : ['IS_AUTHENTICATED_ANONYMOUSLY'],
    '/images/**': ['IS_AUTHENTICATED_ANONYMOUSLY'],
    '/**'       : ['IS_AUTHENTICATED_FULLY']
]

In Grails-3, add the pattern '/assets/**' for static assets, as they now live under grails-app/assets and get served by the asset-pipeline plugin through request URLs like http://my-app-url/assets/. Make the static rules look like the following (note the new /assets/** rule):
grails.plugin.springsecurity.controllerAnnotations.staticRules = [
    [pattern: '/login/**',  access: ['IS_AUTHENTICATED_ANONYMOUSLY']],
    [pattern: '/assets/**', access: ['IS_AUTHENTICATED_ANONYMOUSLY']],
    [pattern: '/**',        access: ['IS_AUTHENTICATED_FULLY']]
]

There you go: the asset-pipeline plugin works, and all static assets are available, get served and become visible!


Sunday, October 16, 2016

Migrate Grails 2.2.1 application data from Oracle to PostgreSQL (Part-2 of 2) . . .

Phase-1 Database Migration

Part -2: Migrate Data from Oracle to PostgreSQL

In Part-1, I covered migrating Grails Application from Oracle to PostgreSQL.

In this Part-2, I cover Data Migration to conclude this two-part series. This is not a comprehensive post by any means. I only cover what I experienced in this process.

After migrating the Grails application, my next step was to migrate the data. I knew this is where things would get a bit tricky. I always feel that software development is made interesting by its many competing technologies and products, but messy by their few similarities and many differences. Competing technologies emerge with little or no standards and deviate widely. Databases are no exception.

During application development, the debate of application logic getting buried in stored procedures, as opposed to opening up in the application's business/service layer, gets to the question of "What if we change the database?". The answer, "We won't change our database", often wins the debate and takes the roots of business logic deeper and deeper into the database. Lately, PostgreSQL has been making many rethink investing in expensive databases. Change is inevitable. The time to "change your database" has come- to some, at least!

My first effort

My first effort was to turn to Google and find any tool(s) that could help migrate data from Oracle to PostgreSQL. After checking several commercial tools and an open-source tool (Ora2Pg), and trying out one tool (Navicat), it was an easy decision: I decided not to use any tool.

Most of the tools migrate both schema & data, often resulting in several datatype mismatches between the Grails application's domain model and the database schema, especially with fields like boolean, which has a close-matching native boolean type in PostgreSQL. It's not easy for tools to infer these details from database schema/metadata.

The Approach

I took the well-known approach in Software Engineering: ETL - a 3-Step process of migrating data.
  1. Extraction- I leveraged the SQL Developer tool to extract data into delimited files.
  2. Transformation- I wrote Groovy scripts to transform the data, handling only those data types that needed special care.
  3. Load- I wrote Groovy scripts to load the transformed data files into PostgreSQL using its COPY command, which is very cool for this task.

Data Migration Step-by-Step

Step-0: Create PostgreSQL database and generate schema

Create PostgreSQL database and generate schema by leveraging Grails and dbm-plugin. I covered this in Part-1.

Step-1: Extract data from Oracle

There are many possible ways to extract/export data from Oracle. I used the SQL Developer tool to extract data into multiple delimited files. Each file contains the exported data of a specific table.

Certain data types can be transformed during data export by setting preferences. The default date format is one such setting that differs from database to database. With this ETL approach, dates certainly need to go through transformation due to default-format differences between databases. This transformation can easily be avoided by pushing it into the Extraction/Export step: SQL Developer can set data preferences for a session, and with this feature we can set preferences for Date/Timestamp fields and change the format to the default PostgreSQL date format.

Set Data Preferences

Open SQL Developer and connect to the database. To check current format preferences, go to Preferences > Database > NLS from the SQL Developer main menu.

Execute the following SQL to alter date format for that particular session:

alter SESSION set NLS_DATE_FORMAT = 'MM/DD/YYYY HH24:MI:SS';
alter SESSION set NLS_TIMESTAMP_FORMAT = 'MM/DD/YYYY HH24:MI:SSXFF';

Export Data

Go to Tools > Database Export and follow these steps:
Select Connection
Uncheck Export DDL
Format: delimited
Delimiter: % (your preferred char; I chose % because many special characters appear in the actual data in one column or another)
Left Enclosure: "
Right Enclosure: "
Save as: Separate Files
File: enter dir name (where the exported data to be saved)

Simply go through the next steps to finish. You will have data exported into multiple files in the specified directory, with each file named <TABLE_NAME>_DATA_TABLE.dsv. For instance, if you have a table MY_TABLE, its exported delimited data file would be MY_TABLE_DATA_TABLE.dsv.

Step-2: Transform Data

This step requires finding all data types that need data transformation. With the Date type taken care of during extraction, I was left to deal only with the Boolean/boolean data type, transforming 1/0 to true/false. This requires identifying all such data columns in the entire schema. There are two choices for finding those columns: 1) database metadata, or 2) the Grails domain model. I chose the Grails domain model and wrote a simple script to find the list of tables and columns that are mapped to Boolean/boolean properties of the domain model. I covered this in my previous post: The power of Grails Metadata is there when you need it. Once I found those details, I had everything I needed to transform the data.

I wrote a Groovy script for transforming the data. Following is the script:

import groovy.json.JsonSlurper

/**
 * oracle-to-postgres-data-migrator.groovy
 * This groovy script migrates/transforms delimited data-files exported from Oracle to data-files that are
 * PostgreSQL compliant. It takes care of special data migration needed for data-column types like boolean,
 * and nullable FK columns that get outputted as "" for nulls when left enclosure (") and right enclosure (")
 * characters are used during data export.
 *
 * Certain dependencies like: base directory the data-files are located in, data-file extension etc. are
 * externalized into postgres-data-migration-config.groovy file for making this script a little flexible.
 *
 * Prerequisites
 *   1. Run script: boolean-columns-finder.groovy from the application's home dir.
 *   2. Export all data from the Oracle database into delimited files from a tool like SQL Developer.
 *   3. Adjust any configurations needed in postgres-data-migration-config.groovy file,
 *      especially config properties like: delimiter and baseDir.
 *
 * Dependencies (input)
 *   1. Requires myAppTablesBooleanColumns.json to exist in the application home directory. This file is
 *      the result of executing boolean-columns-finder.groovy and contains a map of table names and
 *      corresponding lists of boolean columns.
 *   2. A set of delimited data files exported using a tool like SQL Developer.
 *
 * Assumptions for Data Export (based on the tool used, SQL Developer, to export Oracle data):
 *   . Each table's data is exported into its own data file with a filename like: <TABLE_NAME>_DATA_TABLE.dsv
 *   . To simplify parsing, % is used as the delimiter because it's one unused character found in the
 *     application's data.
 *   . Each data file contains a header as the first line, which is nothing but column names in capital letters.
 *   . Left enclosure (") and right enclosure (") characters are used during data export, which enclose
 *     column data like long text that contains embedded newlines, and nullable FKs that are null.
 *   . Also, the default date format is different between Oracle and PostgreSQL. The data migration of date
 *     columns has been eliminated by leveraging SQL Developer to export data in a format that PostgreSQL
 *     can import. This can be achieved by running the following SQLs for the session in SQL Developer
 *     before data export:
 *       alter SESSION set NLS_DATE_FORMAT = 'MM/DD/YYYY HH24:MI:SS';
 *       alter SESSION set NLS_TIMESTAMP_FORMAT = 'MM/DD/YYYY HH24:MI:SSXFF';
 *
 * Result (output)
 *   1. Migrated data files. Each data file (<TABLE_NAME>_DATA_TABLE.dsv) will have a corresponding migrated
 *      data file (<TABLE_NAME>_DATA_TABLE.dsv.migrated).
 *   2. Only data that needs to be migrated (boolean data) goes through data migration.
 *      The input json file myAppTablesBooleanColumns.json contains all tables that need data migration.
 *      For every table that needs data migration, the corresponding input data-file name is derived by the
 *      above mentioned filename assumptions.
 *      e.g. table: my_table, data-file: MY_TABLE_DATA_TABLE.dsv, migrated data-file: MY_TABLE_DATA_TABLE.dsv.migrated
 *   3. The rest of the tables' data that does not need data migration (no boolean columns) is a simple
 *      passthrough with just the header migration. The header migration simply converts column names to
 *      lowercase.
 *
 * @see postgres-data-migration-config.groovy
 * @see boolean-columns-finder.groovy
 *
 * @author Giri Pottepalem
 */
def config = new ConfigSlurper().parse(new File('postgres-data-migration-config.groovy').toURI().toURL())
def dataDelimiter = config.data.delimiter
def datafileBaseDir = config.data.file.baseDir
def datafileExtension = config.data.file.extension
def migratedDatafileExtension = config.data.migrated.file.extension

def tablesBooleanColumnsFile = config.tables.boolean.columns.file
if (!new File(tablesBooleanColumnsFile).exists()) {
    println "Error: Missing required file ${tablesBooleanColumnsFile} for data migration."
    System.exit(1)
}

Map tablesBooleanColumnsMap = new JsonSlurper().parseText(
    new FileReader(tablesBooleanColumnsFile).text
)
def dataFilesNeedDataMigration = tablesBooleanColumnsMap.keySet().collect {
    "${it.toUpperCase()}${datafileExtension}"
}
def allDataFiles = new File(datafileBaseDir).listFiles().name.findAll { it.endsWith(datafileExtension) }
List dataFilesJustNeedHeaderMigration = allDataFiles - dataFilesNeedDataMigration

println "Migrating just header for all data files that don't need data migration..."
dataFilesJustNeedHeaderMigration.each { dataFilename ->
    String absoluteDataFilename = "${datafileBaseDir}/$dataFilename"
    print "Reading data file: $absoluteDataFilename..."
    File dataFile = new File(absoluteDataFilename)
    File migratedDataFile = new File("${absoluteDataFilename}${migratedDatafileExtension}")
    migratedDataFile.withWriter { fileWriter ->
        dataFile.eachLine { line, lineNumber ->
            if (lineNumber == 1) {
                fileWriter.writeLine(line.toLowerCase()) //migrate header
            } else {
                fileWriter.writeLine(line) //take data as is
            }
        }
    }
    println "Migrated"
}

println "Migrating data..."
tablesBooleanColumnsMap.each { String tableName, List booleanColumns ->
    String dataFilename = "${datafileBaseDir}/${tableName.toUpperCase()}${datafileExtension}"
    print "Reading data file: $dataFilename..."
    File dataFile = new File(dataFilename)
    File migratedDataFile = new File("${dataFilename}${migratedDatafileExtension}")
    String header
    def columnNames
    migratedDataFile.withWriter { fileWriter ->
        dataFile.eachLine { String line, int lineNumber ->
            if (lineNumber == 1) {
                header = line.toLowerCase() //migrate header
                columnNames = header.split(dataDelimiter)
                fileWriter.writeLine(header)
            } else {
                //migrate data
                String[] columnData = line.split(dataDelimiter, -1)
                def booleanColumnIndexes = booleanColumns.collect { columnName ->
                    columnNames.findIndexOf { it.equalsIgnoreCase(columnName) }
                }
                booleanColumnIndexes.each { booleanColumnIndex ->
                    if (booleanColumnIndex < columnData.length) {
                        switch (columnData[booleanColumnIndex]) {
                            case null: columnData[booleanColumnIndex] = ''; break
                            case '1' : columnData[booleanColumnIndex] = 't'; break
                            case '0' : columnData[booleanColumnIndex] = 'f'; break
                        }
                    }
                }
                //update data that ended up as "" due to blanks in data export
                columnData.eachWithIndex { data, i ->
                    if (data == '""') {
                        columnData[i] = ''
                    }
                }
                fileWriter.writeLine(columnData.join(dataDelimiter))
            }
        }
    }
    println "Migrated"
}
println "Done"

Following is a sample of the externalized configuration properties, kept as a Groovy script to make the above script a bit flexible:
/**
 * postgres-data-migration-config.groovy
 * Configuration properties as groovy script
 * @author Giri Pottepalem
 */
data {
    delimiter = '%'
    file {
        baseDir = './export' //dir in which delimited data files exist
        extension = '_DATA_TABLE.dsv'
    }
    migrated {
        file {
            extension = '.migrated'
        }
        //need to be loaded in specific order due to foreign key constraints
        files = [
            'MY_TABLE1_DATA_TABLE.dsv.migrated',
            'MY_TABLE2_DATA_TABLE.dsv.migrated',
            ...
        ]
    }
}
appHome = '/gateway' //myApp application home dir
//file that contains list of tables and boolean columns for each table. Result of running boolean-columns-finder.groovy
tables.boolean.columns.file = "${appHome}/myAppTablesBooleanColumns.json"
postgres {
    dataSource {
        url = 'jdbc:postgresql://localhost:5432/myApp'
        user = 'postgres' //superuser for running COPY
        password = 'password'
        driverClassName = 'org.postgresql.Driver'
    }
    //Adjust sequences with values taken from oracle db by: select sequence_name, last_number from user_sequences;
    sequences = [
        'hibernate_sequence': 150461,
        'table1_sequence'   : 4041
    ]
}

Step-3: Load Data

Once the data is migrated, work out the data-dependency order and load the data in the order that satisfies foreign-key constraints. I covered this in my previous post: Groovy Script to load CSV data files into PostgreSQL database onto Amazon RDS.



Sunday, October 09, 2016

The Power of Grails Metadata is there when you need it . . .

Grails is underpinned by the frameworks and technologies, like Spring and Hibernate, that it was built on top of. With Convention-over-Configuration, sensible defaults, GORM, and many architectural & design decisions and solutions to known issues taken into the framework, it multiplies a developer's productivity to levels that are otherwise only imaginable.

In the Java space, as frameworks keep getting better over the years, they take developers to higher levels of coding with no need to deal with low-level details. It is a myth, not a fact, if one argues that "frameworks hide details". A good framework does not do that; it gives full access to the underlying technologies. Grails is one such framework.

Recently I had a need to find the complete list of database tables and columns that are mapped to Boolean/boolean domain-object properties in a Grails 2.2.1 project. There are two possible ways to achieve this in a Grails project: 1) by examining metadata of the database, or 2) by examining metadata of the Grails application execution environment itself. The second option, the Grails way, seemed more reliable to me. I started looking into the Grails API and could get the details pretty easily from the metadata it exposes.

Following is the code snippet, with the parts of interest noted in comments:

/**
 * boolean-columns-finder.groovy
 * Grails groovy script which finds persistable Boolean/boolean properties of all domain
 * objects and finds corresponding database mappings of columns and tables. Generates a
 * JSON file that contains a list of Boolean/boolean column-names for each table-name
 * mapped to a domain object that contains Boolean/boolean properties.
 *
 * @author Giri Pottepalem
 */
import groovy.json.JsonBuilder

def sessionFactory = ctx.getBean("sessionFactory")
Map tablesBooleanColumnsMap = [:]

//get table names and column names of all domain objects that have Boolean/boolean properties
grailsApplication.getArtefacts("Domain").each { domainObject ->
    def dbTableName = sessionFactory.getClassMetadata(domainObject.clazz).tableName
    def booleanProperties = domainObject.properties.findAll {
        it.isPersistent() && it.type.name.contains('oolean')
    }*.name
    if (booleanProperties) {
        def dbBooleanColumnNames = booleanProperties.collect {
            sessionFactory
                .getClassMetadata(domainObject.clazz)
                .propertyMapping
                .getColumnNames(it)
        }.flatten()
        tablesBooleanColumnsMap << [(dbTableName): dbBooleanColumnNames]
    }
}

//write the map built to a file as JSON
def jsonFile = "${grails.util.Metadata.current.getApplicationName()}TablesBooleanColumns.json"
new File(jsonFile).write(
    new JsonBuilder(tablesBooleanColumnsMap).toPrettyString()
)
println "Successfully extracted mappings for Boolean/boolean properties found" +
        " and created a JSON file: $jsonFile"

Note

Grails makes the main interface to a running application available through the grailsApplication variable and its application context through the ctx variable. This script can be run from the grails console and also from the command line.

To run from command line, run it from the project home directory (assuming that the script is located under src/groovy dir of the grails project) like:
grails run-script src/groovy/boolean-columns-finder.groovy

Summary

Grails puts you in the power-steering seat. It builds on all of the technologies & frameworks underneath it and does not hide any details from you. When you really need low-level details, they are all available to you.

Friday, October 07, 2016

Groovy script to load CSV data files into PostgreSQL database onto Amazon RDS . . .

Recently, I worked on a task to migrate a Java 6, Grails 2.2.1 application from Oracle to PostgreSQL. In my recent post (Part-1 of 2), I covered details on migrating the Grails application as such. I have yet to cover details on data migration in the next part. However, in this post, I just want to share some of my findings and learnings on loading migrated data into a PostgreSQL database hosted on Amazon RDS.

As always, for any problem there are multiple solutions. After quickly looking at a couple of commercial tools and an open-source tool that offer migrating a database from Oracle to PostgreSQL, and trying out the trial version of a commercial tool, I decided not to use any such tool for data migration. Many of these tools simply migrate the schema, and attempt to migrate data & other database objects to the closest possible match in the target database. This often results in a fully migrated database, but leaves you to deal with the problem of object-relational impedance mismatch.

Since I had already leveraged the Grails database-migration plugin to create the new schema, constraints and sequences for PostgreSQL, all I needed was to get the data migrated and loaded into the newly created schema. I took the well-known approach in software engineering: ETL, a 3-step process of migrating data. For step 1) Extraction, I leveraged the SQL Developer tool to extract data into delimited files. For step 2) Transformation, I wrote Groovy scripts to transform the data of those data types that needed it. For step 3) Load, I wrote Groovy scripts to load the transformed data files into PostgreSQL using its COPY command, which is very cool for this task.

All looked great with my local PostgreSQL database; I was able to script both the data transformation and the loading. But when I ran my loader script against the Amazon RDS-hosted PostgreSQL database, the COPY command simply failed with a misleading message saying "must be superuser for running COPY".

After some investigation and reading of Amazon docs, I learned that Amazon RDS doesn't allow the PostgreSQL COPY command for security reasons. However, it does allow the \COPY command, which has to be run through psql, the interactive command-line tool that comes with PostgreSQL. After playing with the \COPY command in psql, I was able to manually load one migrated data file into a specific database table on Amazon RDS. Cool!

My next step was to take this into my loader Groovy script. As Java allows executing an external program/process concurrently from an application, and Groovy enhances the java.lang.Process class further, it was easy to execute the psql tool from my script and get all the migrated data files loaded into the Amazon RDS-hosted PostgreSQL database.

Following is the snippet of Groovy script code that is of particular interest in this post (certain details are omitted for brevity):
@GrabResolver(name='postgres', root='https://mvnrepository.com/artifact/org.postgresql/postgresql')
@GrabConfig(systemClassLoader=true)
@Grab(group='org.postgresql', module='postgresql', version='9.4.1211.jre6')

import groovy.sql.Sql

//read config groovy file that contains all necessary configurations
//including data-source, list of migrated files in the order to be loaded etc.
def config = new ConfigSlurper().parse(
    new File('postgres-data-migration-config.groovy').toURI().toURL()
)
...
def sql = Sql.newInstance(
    dataSource.url, dataSource.user, dataSource.password, dataSource.driverClassName
)
boolean isLoadingToAmazonAws = dataSource.url.contains('amazonaws.com')

//load each migrated data file into its table
dataFiles.each { String dataFileName ->
    println "loading data: $dataFileName"
    String header = new File("${datafileBaseDir}/${dataFileName}").withReader {
        return it.readLine()
    }
    //transform header: "h1"%"h2"%"h3" into: h1,h2,h3 to form
    //column names needed for COPY or \COPY command
    String columnNames = header.replaceAll(dataDelimiter, ',').replaceAll('"', '')
    ...
    if (isLoadingToAmazonAws) {
        //load data onto amazon aws database
        //build psql command with required arguments to invoke it as an external process from this script
        def copyCommand = [
            'psql',
            "--host=${parsedDataSourceUrl.host}",
            "--port=${parsedDataSourceUrl.port}",
            "--dbname=${parsedDataSourceUrl.dbname}",
            "--username=${dataSource.user}",
            "--no-password", //suppress password prompt
            "--command=\\COPY ${tableName}(${columnNames}) FROM '${datafileBaseDir}/${dataFileName}' DELIMITER '${dataDelimiter}' CSV HEADER"
        ]
        println copyCommand.join(' ') //this is how it would look if it were run from the command line

        //run psql tool as an external process
        def process = copyCommand.execute()
        def out = new StringBuffer()
        def err = new StringBuffer()
        process.waitForProcessOutput(out, err) //wait for the process to finish
        println "exit code: ${process.exitValue()}"
        if (out.size()) {
            println "Success: $out"
        }
        if (err.size()) {
            println "Error: $err"
        }
    } else {
        //load data to a local database by simply running COPY command as an sql statement
        String copyStatement = """
            COPY ${tableName}(${columnNames})
            FROM '${datafileBaseDir}/${dataFileName}'
            DELIMITER '${dataDelimiter}' CSV HEADER
        """
        println copyStatement
        boolean status = sql.execute(copyStatement)
        println status ? "Error loading..." : "Successfully loaded..."
    }
}

println "Altering sequence values..."
postgresSequences.each { sequenceName, value ->
    String alterStatement = "ALTER SEQUENCE $sequenceName RESTART WITH $value"
    println alterStatement
    boolean status = sql.execute(alterStatement)
    println status ? "Error altering sequence $sequenceName"
                   : "Successfully altered sequence: $sequenceName to start with: $value"
}
println "Done"

A sample psql \COPY command for running from a terminal command prompt looks like:
psql \
  --host=myhost.rds.amazonaws.com --port=5432 --dbname=my-db \
  --username=my-user --no-password \
  --command="\COPY my_table(col1, col2) FROM '/my-file.dsv.migrated' DELIMITER ',' CSV HEADER"

A sample result of the above psql execution that also gives number of records loaded looks like:
exit code: 0
Success: COPY 43628

Note

No password is passed when executing the \COPY command through the psql tool. The password can be stored in ~/.pgpass, a file under the user's home dir, or in any other file referenced by the env variable PGPASSFILE, in the format: hostname:port:database:username:password
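For instance, a ~/.pgpass entry matching the sample command above might look like the following (hypothetical values; the file must be readable only by its owner, e.g. chmod 600 ~/.pgpass):

# hostname:port:database:username:password
myhost.rds.amazonaws.com:5432:my-db:my-user:my-password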

Sunday, October 02, 2016

Migrate Grails 2.2.1 application from Oracle to PostgreSQL (Part-1 of 2) . . .

Lately, I had to take up the task of migrating one of our Grails 2.2.1 applications. The migration was twofold: 1) from Oracle to PostgreSQL and 2) from Grails 2.2.1 to a newer 3.x version.

We took a two-phase approach, attempting the database migration first. In this first part of a two-part post, I will briefly cover the kinds of changes I had to make to migrate the application from Oracle to PostgreSQL.

Phase-1 Database Migration

Part -1: Migrate Grails Application from Oracle to PostgreSQL

The application was a straightforward Java 6, Grails 2.2.1 application with the database-migration plugin used for tracking database changes the application went through during development and afterwards.

My first decision
There were about a dozen database-migration change scripts in the application. It was a no-brainer to decide not to look into that history of changelogs: who knows what kinds of database changes the app had gone through over time, and there is no point in migrating all that history. I decided to regenerate a base changelog script that gets me started with the current state of the model as the base.

Following are the steps:

Step-1 Change dependencies - remove Oracle and add PostgreSQL
BuildConfig.groovy
dependencies {
    ...
    //runtime 'oracle:ojdbc6:11.2.0.1.0'
    runtime 'org.postgresql:postgresql:9.4.1211.jre6' //Java 6 PostgreSQL driver
    ...
}

Step-2 Change Datasource
DataSource.groovy
dataSource {
    pooled = true
    driverClassName = "org.postgresql.Driver"
    dialect = "org.hibernate.dialect.PostgreSQLDialect"
    configClass = HibernateFilterDomainConfiguration.class
}
...
environments {
    development {
        dataSource {
            dbCreate = "none" //one of 'create', 'create-drop', 'update', 'none'
            driverClassName = "org.postgresql.Driver"
            dialect = "org.hibernate.dialect.PostgreSQLDialect"
            url = "jdbc:postgresql://localhost:5432/myapp"
            username = "myapp"
            password = "myapp"
        }
    }
    ...
}

Make sure you have PostgreSQL installed, the myapp database created, and the database accessible by the user myapp (see the sketch below).
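A minimal sketch of the SQL for that setup, assuming the myapp names used in DataSource.groovy above (run as a superuser, e.g. via psql):

-- create the application role (pick a real password)
CREATE ROLE myapp WITH LOGIN PASSWORD 'myapp';

-- create the application database owned by that role
CREATE DATABASE myapp OWNER myapp;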

Step 3 Regenerate Database changelog and get new schema created
Run the following command from the application home dir to generate baseline changelog:
grails dev dbm-generate-gorm-changelog baseline.groovy
This will generate a baseline.groovy file under grails-app/migrations.

Step 4 Make necessary changes to take the generated file into the app
Depending on how the previous Oracle database-migration changes were organized, take baseline.groovy as the new base changelog file in the application.

I took it as the base changelog by creating the main changelog.groovy file that the plugin looks for under the grails-app/migrations dir. I also created a releases/postgresql sub-dir under grails-app/migrations and moved the generated baseline.groovy file there. I then modified changelog.groovy to reference it as shown below:
changelog.groovy
databaseChangeLog = {
    include file: 'releases/postgresql/baseline.groovy'
}

Step 5 Get database objects created
Run the following database-migration command from the application home directory to get database objects like tables, constraints, sequences etc. created in the database from the baseline.groovy changelog. With this, your database schema should be in line with the Grails domain model.
grails -Dgrails.env=development dbm-update

Step 6 Run the application
Run the app and see if it comes up. If it doesn't, there could be issues with data-type differences between Oracle and PostgreSQL. Check Bootstrap.groovy to see if any data initialization is causing issues with data types. Setting the root log level to debug in Config.groovy will also be helpful.

Step 7 Code changes (if any needed)
Make necessary code changes. In our case, we had the Grails Hibernate Filters plugin used in the app, and boolean/Boolean was the only data type that needed special care. With Oracle, it was mapped to the database column type NUMBER; PostgreSQL provides a standard SQL boolean type. I had to change some of the filters that used boolean properties of domain objects. The following is an example of one such filter before and after the migration change:

//before change
static hibernateFilters = {
    active(condition: 'deleted=0', default: true)
}

//after change
static hibernateFilters = {
    active(condition: "deleted='false'", default: true)
}

Once I made the above code changes for boolean/Boolean data types, the application worked like a charm. I quickly poked around the application to see if anything would break; everything seemed to work. The application also had a bunch of test cases written, and all of them passed.

Step 8 Cleanup (if any needed)
Once everything looks good, and given the approach taken (not migrating the old database-migration scripts, regenerating the baseline changelog instead), the old changelogs can safely be deleted, and the Oracle driver, if it was added to the lib folder, can go away as well.

That was not bad at all ;)

In my next part (Part-2), I will write about data migration from Oracle to PostgreSQL, which will conclude Phase-1: Database Migration from Oracle to PostgreSQL.

Friday, August 26, 2016

Grails JMS plugin 1.2 hands off messages too early during the plugin relay of application startup . . .

Rio 2016 Olympics concluded recently with the traditional last event of track and field: the 4x100 meter relay. In a relay, the teams that qualify are the ones that gracefully complete the race. This involves great coordination among the team's runners. One important rule of this particular race is handing off the baton within a changeover box. Dropping the baton or failing to transfer it within the box disqualifies the team. The baton has to be carried to the finish line. It is a synchronous race.

Messaging in software applications is like a relay, except that it is asynchronous. It is a great way to integrate modern applications asynchronously, achieving great performance and scalability with guaranteed delivery of messages. But it comes with certain complexities in handling messages gracefully. Java-based applications using the JMS API are no exception. Spring framework's JMS support greatly simplifies the use of JMS, and the Grails JMS plugin, which builds on Spring JMS, makes it even better.

Recently I had to investigate and fix a known issue of the Grails JMS plugin that we use in our Grails applications. The known issue: messages get queued up when the application is down (expected behavior, of course), but when the app comes up, listeners grab messages from the queues too early, before the application is fully started. If message processing involves accessing the domain model or database, it errors out, forcing you to deal with this situation.

We have two Grails applications integrated asynchronously through JMS messaging, with ActiveMQ as the underlying JMS implementation. Messages flow back and forth between the two applications. The more recent app of the two is a RESTful API app developed using Grails 3.1.6 (rest-api profile) and JMS plugin 2.0.0.M1, whereas the other is a somewhat older web application using Grails 2.2.1 and Grails JMS plugin 1.2. The Grails 3.1.6 RESTful API app is a gateway for client web/mobile apps: it exposes resources in a RESTful way and hands off requests to the Grails 2.2.1 web app for further processing. The communication channel between these two apps is JMS through queues, so it was important to make sure we don't lose any messages on either side due to application downtime. During testing, everything on the Grails 3.1.6 side looked good: after it came up, it successfully processed messages that had been waiting in the queues during its downtime. The Grails 2.2.1 app, in a similar situation, simply errored out in its message processing when it came back up after being down while messages arrived and waited in the queues. The following is a high-level architectural diagram depicting this.
Applications integrated through JMS

Known Issue with Grails JMS plugin 1.2

As the application starts up, listener containers get started a bit early in the plugin race, before dataSource and Hibernate are fully up and ready. Hence, if message processing involves domain-model or db access, it errors out.

Options tried

Upgrading the plugin to ver 1.3

The plugin version 1.3 seemed to have a fix for this issue when I looked into the plugin's GitHub source code repository. The fix seemed convincing and pretty simple from the source-code change: three more plugin dependencies were added to the loadAfter list of plugins: 'dataSource', 'hibernate', and 'hibernate4'.

Steps to upgrade plugin:

1. Change the dependency in BuildConfig.groovy from compile ":jms:1.2" to compile ":jms:1.3"

plugins {
    ...
    compile ":jms:1.3"
    ...
}

2. Refresh dependencies using the following command:
grails -Dgrails.env=<your-env> refresh-dependencies

3. Grails build system prompts as below for a confirmation:
> You currently already have a version of the plugin installed [jms-1.2]. Do you want to update to [jms-1.3]? [y,n]

Press y to upgrade.

However, with this upgrade the issue became a bit worse: all listeners completely stopped listening for messages. Not only the messages that were waiting in the queue while the app was down, but also any new messages arriving after the application came up were not picked up by the application's listeners. After looking at the source code, JmsGrailsPlugin.groovy, and comparing it with that of the 1.2 plugin, I noticed that the method startListenerContainer, which starts up all listener container beans, was totally missing.

Upgrading the plugin to 1.3-SNAPSHOT

Then I tried the 1.3-SNAPSHOT version by following the above steps to update the plugin and its dependencies. This ran into a Spring class-loading error (java.lang.NoClassDefFoundError: org/springframework/core/type/classreading/AnnotationMetadataReadingVisitor) and the application wouldn't even come up. When I looked into the plugin source code, it had the three additional plugin dependencies added to the loadAfter list, and the method startListenerContainer was also in place. When I compared this source with version 1.2, the only change I noticed was the three additional plugins in the loadAfter list.

That was puzzling, so I read the Grails documentation carefully, word by word, to understand how loadAfter works. The sentence with an example, "Here the plugin will be loaded after the controllers plugin if it exists, otherwise it will just be loaded", caught my attention. Then I went back to check whether we had the hibernate4 plugin. Apparently we didn't, as the hibernate4 plugin is for Grails 2.5.0 or higher. That gave me a clue.

The fix

Having looked at 1.3, 1.3-SNAPSHOT and 1.2, and since we had the 1.2 plugin checked into our source repository along with the application's source code, we decided to edit the plugin code by adding just two dependencies, 'dataSource' and 'hibernate', to the loadAfter list. That worked beautifully as expected, and we decided to go with that fix.

Here is the final fix. The code in JmsGrailsPlugin.groovy changed from:
def loadAfter = ['services', 'controllers']

to:
// Load jms plugin after the following plugins.
// BEWARE: if any of the listed plugins don't exist, the list is ignored
// and this jms plugin gets loaded and executed.
// Ref: http://docs.grails.org/latest/guide/plugins.html#understandingPluginStructure
def loadAfter = ['services', 'controllers', 'dataSource', 'hibernate']

With this fix, once the application came up it gracefully handled all pending messages that had been waiting while it was down. It completed the message relay by handing off messages gracefully, without dropping them or erring out of the plugin execution race.

TIP

If for any reason it doesn't work, simply check the application log file (or find out by other means) which plugin is the last one in the loading process, and set loadAfter to a list containing that plugin. That should push this plugin to the end of the plugin loading process ;)

There seems to be no easy way to know all the plugin dependencies of a Grails project.

Running the command grails dependency-report will list the dependency graph.

If using IntelliJ, either Project Settings > Modules > Dependencies, or Project View > External Libraries, or even trying to open a file (Mac: CMD + Shift + O) for *GrailsPlugin.groovy should give the list of plugins for the project.


Sunday, August 14, 2016

Create a Secured Restful API App with Grails 3 and PostgreSQL- Step by Step: Part 3 of 5

Part 3: Add PostgreSQL and start Coding

In Part 2 of this series, I secured my REST API application with two Grails 3 plugins: the Grails Spring Security Core plugin and the Grails Spring Security REST plugin. With this, if I start my application by running grails run-app and check http://localhost:8080, I get a JSON response containing standard Grails application information. When I point my browser at http://localhost:8080/dbconsole, I see the embedded H2 db console that Grails provides. Grails comes with an in-memory H2 database. That is all good for getting started.

Now, let's add PostgreSQL to the application and do some hands-on coding.

PostgreSQL

PostgreSQL has been gaining momentum lately in the developer community and is evolving as a preferred database over Oracle or MySQL. It has been steadily evolving to offer features of both relational and NoSQL databases, and the best part is: it's open source.

Download PostgreSQL and install it, going with all the default installation steps. On Mac OS, it gets installed under /Library/PostgreSQL/9.5. In addition, it's good to have a GUI tool to access the PostgreSQL db. pgAdmin is one such administration and development GUI tool for PostgreSQL, and it's easy to use.

Now, with PostgreSQL installed, we are good to do some hands-on work.

Environment: Grails 3.1.6, Java 1.8, IntelliJ 15 on Mac OS X 10.9.5

Step 1 Add PostgreSQL support - Add the following runtime dependency to the build.gradle file:

dependencies {
    ...
    runtime "org.postgresql:postgresql:9.4.1208.jre7" // Java JDBC 4.1 (JRE 7+) driver for PostgreSQL
    ...
}

Step 2 Edit application.yml and change the driverClassName, username, password and url properties of dataSource as shown below:

dataSource:
    pooled: true
    jmxExport: true
    driverClassName: org.postgresql.Driver
    username: giriapi
    password: giriapi
    #dialect = org.hibernate.dialect.PostgreSQLDialect

environments:
    development:
        dataSource:
            dbCreate: create-drop
            url: jdbc:postgresql://localhost:5432/giriapi
    test:
        dataSource:
            dbCreate: update
            url: jdbc:postgresql://localhost:5432/giriapi
    production:
        dataSource:
            dbCreate: update
            url: jdbc:postgresql://localhost:5432/giriapi

With the above changes, we have switched from the Grails default H2 database to PostgreSQL.

Now, prepare the PostgreSQL database for our application. The following two steps create the minimum needed: 1) a user role and 2) a database.

Step 3 Create a user role (e.g. giriapi). I used the pgAdmin III client tool to do this: create a new Login Role, click the Definition tab, and enter a password for the user.

Step 4 Create a database (e.g. giriapi) and select the user created in the above step (giriapi) as its owner.
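
If you'd rather script Steps 3 and 4 than click through pgAdmin, a small Groovy script can do the same. This is just a sketch, assuming the default postgres superuser; the password is whatever you chose at install time:

// create-db.groovy : creates the user role (Step 3) and the database (Step 4)
@GrabConfig(systemClassLoader = true)
@Grab('org.postgresql:postgresql:9.4.1208.jre7')
import groovy.sql.Sql

// connect to the default 'postgres' database as the superuser (password here is an assumption)
def sql = Sql.newInstance('jdbc:postgresql://localhost:5432/postgres',
        'postgres', 'your-postgres-password', 'org.postgresql.Driver')
sql.execute("CREATE ROLE giriapi WITH LOGIN PASSWORD 'giriapi'") // Step 3: user role
sql.execute("CREATE DATABASE giriapi OWNER giriapi")             // Step 4: database owned by that role
sql.close()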

Step 5 Run the app (command: grails run-app) and check the database. Now I have 4 tables and two users created under the public schema in the PostgreSQL 9.5 (localhost:5432) database.

With that, we are all set to get going with PostgreSQL.

Leverage Additional PostgreSQL capabilities

As this is a RESTful application mostly dealing with resources' state represented and transferred as JSON data, it makes sense to leverage the JSON capabilities offered by the PostgreSQL database. Also, out of the box, Grails uses surrogate keys for all tables and the id generation strategy is sequence-based. PostgreSQL has good support for UUIDs and I would like to leverage that capability as well. For PostgreSQL's native UUID datatype we don't need any extra plugins, but for JSON and many other useful PostgreSQL native types, the Grails Postgresql Extensions plugin provides Hibernate user types. Let's add this plugin to the application now.
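
As a quick taste of what the plugin enables (not used further in this post), a domain class can map a Map property to a native jsonb column. This sketch follows the plugin's documented usage; the Preference class name is just an illustration:

import net.kaleidos.hibernate.usertype.JsonbMapType

class Preference {
    Map data // persisted as a native PostgreSQL jsonb column

    static mapping = {
        data type: JsonbMapType
    }
}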

Step 6 Add Grails Postgresql Extensions plugin. Make the following changes in build.gradle

Add jcenter repository and plugin dependency
repositories {
    ...
    jcenter() // Bintray JCenter repo for grails postgresql-extensions plugin
    ...
}

dependencies {
    ...
    // grails postgresql extensions plugin for JSON type
    compile 'org.grails.plugins:postgresql-extensions:4.6.6'
    ...
}
*You can delete the runtime "com.h2database:h2" dependency as we no longer use the H2 in-memory database.
Also, make the following change in application.yml to change the Hibernate dialect (these steps are well documented in the PostgreSQL extensions plugin documentation):

hibernate:
    dialect: net.kaleidos.hibernate.PostgresqlExtensionsDialect # Grails Postgresql Extensions plugin

Step 7 We are now all set with PostgreSQL and its extensions plugin. Next, let's add the domain model to the application.

This is the Grails approach that I have liked most right from the beginning. Typically, in web applications using Spring MVC and Hibernate ORM, many projects start either top-down from the front-end, creating views and controllers, or bottom-up from the back-end, designing database tables and mapping them to domain objects. Grails makes you start with the domain model, from the middle.

The domain model is the core model behind your application and is generally mapped to the database. It is an important aspect of any application and needs a bit more design consideration than other aspects. Properties, relationships, data validations etc. must all be considered while designing the domain model. Once you get the model right and in place, it becomes the middle ground for your Grails app. Grails takes it further and expands its reach to both sides: the front-end (the view part of MVC) and the back-end (the model part, with persistence support).

For this application, I chose my primary hobby, Painting, as the domain. Like good developers, artists also like simplicity. So, I kept this model very simple; simple enough to cover all the Grails features that I wanted to explore. The interests of an artist's small world can be represented in a simple domain model as follows:

Domain Objects
  • Artist, ArtWork, Specification
  • AppUser, Role, AppUserRole, AuthenticationToken - domain objects already in place for security
Relationships
  • An Artist can own many ArtWorks (one-to-many)
  • Each ArtWork has a Specification (one-to-one), as sketched below
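
Before generating anything, here is a minimal GORM sketch of how these relationships can be expressed; the real classes, with properties and constraints, are built step by step below:

class Artist {
    static hasMany = [artWorks: ArtWork]  // one-to-many: an Artist owns many ArtWorks
}

class ArtWork {
    static belongsTo = [artist: Artist]   // each ArtWork belongs to its owning Artist
    Specification specification           // one-to-one: each ArtWork has a Specification
}

class Specification {
}
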
Step 8 Generate domain classes

Grails comes with pretty good command-line support for both build tasks and code generation. The command line works in both interactive and non-interactive modes. To start Grails in interactive mode, simply run the grails command from the project home dir. The interactive mode supports TAB completion for all commands. For instance, if you are not sure which command to use for creating a domain class, simply type cr + TAB, which lists all available create commands.

I prefer the command line in interactive mode for generating all my Grails artifacts. You don't have to use the command line; you can simply create Groovy classes in the respective folders like domain, controllers, services etc. But I prefer creating my Grails artifacts through the command-line create* commands, as they also generate the associated unit/integration test cases. Of course, they generate just skeleton classes, but that's better than creating them manually, and it makes sure test cases are not left behind.

For creating a domain class there are two commands: create-domain-class and create-domain-resource. The difference between the two is subtle: the create-domain-resource command generates a domain object annotated with @Resource, making the domain object a RESTful resource.

When generating domain classes using these commands, you typically need to specify the name of your domain class including the package name. But if you have the grails.codegen.default.package property set to a base package name (I have it set to com.giri) in either application.yml or application.groovy, you can omit the package name and Grails uses the configured default.
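
In application.groovy, for instance, that setting is a one-liner:

// default package prepended to artifacts generated by the create-* commands
grails.codegen.default.package = 'com.giri'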

That's enough insight into Grails domain object creation. Let's create the domain objects as shown below with Grails running in interactive mode.

$ grails
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=2048m; support was removed in 8.0
BUILD SUCCESSFUL
| Enter a command name to run. Use TAB for completion:
grails>
grails> create-domain-resource Artist
| Created grails-app/domain/com/giri/Artist.groovy
| Created src/test/groovy/com/giri/ArtistSpec.groovy
grails> create-domain-resource ArtWork
| Created grails-app/domain/com/giri/ArtWork.groovy
| Created src/test/groovy/com/giri/ArtWorkSpec.groovy
grails> create-domain-resource Specification
| Created grails-app/domain/com/giri/Specification.groovy
| Created src/test/groovy/com/giri/SpecificationSpec.groovy
grails>
The commands entered at the grails> prompts create the domain classes. Also, notice that for every domain class an associated unit test case is generated.

For instance, the generated domain class Artist looks like:
package com.giri

import grails.rest.*

@Resource(readOnly = false, formats = ['json', 'xml'])
class Artist {
}

We can start adding properties, constraints and relationships to these objects. Grails implicitly provides the id and version properties. If two Date properties named dateCreated and lastUpdated are added, Grails takes care of populating them appropriately when the object is created and updated.

The @Resource annotation is the easiest way to expose a domain object as a REST resource. Once you annotate your domain object as a resource, you don't need to do anything else; Grails provides the needed controller and URL mappings in a RESTful way.

Gotcha
The generated domain class annotated with @Resource is missing the uri attribute. Without it, Grails will not expose your domain object as a RESTful resource, and when you run the grails url-mappings-report command, no RESTful end-points will be listed for the domain object. So, don't forget to define the end-point path through the uri property of the @Resource annotation, like:
@Resource(uri = '/api/artists', readOnly = false, formats = ['json', 'xml'])

I have also added a few basic properties to the Artist domain object, including an id of type UUID. Now, the Artist domain object looks like:
package com.giri

import grails.rest.Resource

/**
 * @author Gpottepalem
 * Created on Aug 13, 2016
 */
@Resource(uri = '/api/artists', readOnly = false, formats = ['json', 'xml'])
class Artist {
    UUID id
    String firstName
    String lastName
    Date dateCreated
    Date lastUpdated

    static mapping = {
        id type: 'pg-uuid', generator: 'uuid2'
    }

    static constraints = {
        firstName blank: false
        lastName blank: false
    }
}
Note the explicitly defined id property and its UUID mapping in the static mapping block. If you don't need the id to be a UUID, you can simply omit both and Grails provides an implicit long id mapped to an appropriate db type.

If you run grails url-mappings-report, the output looks like:
Dynamic Mappings
 |    *    | ERROR: 500                                    | View: /error             |
 |    *    | ERROR: 404                                    | View: /notFound          |
 |    *    | /${controller}/${action}?/${id}?(.${format})? | Action: (default action) |

Controller: application
 |    *    | /                                             | Action: index            |

Controller: artist
 |   GET   | /api/artists/create                           | Action: create           |
 |   GET   | /api/artists/${id}/edit                       | Action: edit             |
 |   POST  | /api/artists                                  | Action: save             |
 |   GET   | /api/artists                                  | Action: index            |
 |  DELETE | /api/artists/${id}                            | Action: delete           |
 |  PATCH  | /api/artists/${id}                            | Action: patch            |
 |   PUT   | /api/artists/${id}                            | Action: update           |
 |   GET   | /api/artists/${id}                            | Action: show             |

Controller: restOauth
 |    *    | /oauth/access_token                           | Action: accessTok        |
 |    *    | /oauth/${action}/${provider}                  | Action: (default action) |

BUILD SUCCESSFUL

Now, let's start our application and try to access the end-point /api/artists to get the list of Artists.
Since we have secured all our /api/** end-points, when we try to get the list of Artists by issuing a curl command like this:
$ curl -i -X GET 'http://localhost:8080/api/artists'

We get a Forbidden response.
HTTP/1.1 403 Forbidden
Server: Apache-Coyote/1.1
Content-Type: application/json;charset=UTF-8
Transfer-Encoding: chunked
Date: Sun, 14 Aug 2016 16:13:49 GMT

{"timestamp":1471191229379,"status":403,"error":"Forbidden","message":"Access Denied","path":"/api/artists"}

Step 9 The reason for the Forbidden response in the above step is that we have secured all /api/** end-points. We need to fix this. To start simple, we allow everyone to access the /api/artists end-point. For this, all we need to do is:
  • Annotate our Artist domain object with @Secured(['permitAll']) (see the sketch below) and
  • add the anonymousAuthenticationFilter back
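
For the first bullet, the annotated domain class would look something like this sketch (only the @Secured import and annotation are new; everything else is as before):

import grails.plugin.springsecurity.annotation.Secured
import grails.rest.Resource

@Secured(['permitAll'])
@Resource(uri = '/api/artists', readOnly = false, formats = ['json', 'xml'])
class Artist {
    // ... properties, mapping and constraints as before
}
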
After adding the anonymousAuthenticationFilter, application.groovy looks like:
grails.plugin.springsecurity.filterChain.chainMap = [
    //Stateless chain
    //[ pattern: '/api/**',
    //  filters: 'JOINED_FILTERS,-anonymousAuthenticationFilter,-exceptionTranslationFilter,-authenticationProcessingFilter,-securityContextPersistenceFilter,-rememberMeAuthenticationFilter'],
    [ pattern: '/api/**',
      filters: 'JOINED_FILTERS,-exceptionTranslationFilter,-authenticationProcessingFilter,-securityContextPersistenceFilter,-rememberMeAuthenticationFilter'],

All I did was remove -anonymousAuthenticationFilter from the stateless filter chain (the commented-out entry shows the old configuration), which adds the filter back to the chain.

Now, if I try the curl command to list artists:
$ curl -i -X GET 'http://localhost:8080/api/artists'

I get the following response (notice that the list is empty):
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
X-Application-Context: application:development
Content-Type: application/json;charset=UTF-8
Transfer-Encoding: chunked
Date: Sun, 14 Aug 2016 16:37:33 GMT

[]

Since I allowed everyone to access this end-point, I don't need to log in. Let's try a POST request to the end-point /api/artists to create an Artist:
curl -i -X POST -H "Content-Type:application/json" -d '{ "firstName": "Giri", "lastName": "Pottepalem" }' 'http://localhost:8080/api/artists'

I get the following response:
HTTP/1.1 201 Created
Server: Apache-Coyote/1.1
X-Application-Context: application:development
Location: http://localhost:8080/api/artists/ae5ff974-12e9-418a-b397-3f7cd104ea60
Content-Type: application/json;charset=UTF-8
Transfer-Encoding: chunked
Date: Sun, 14 Aug 2016 16:53:20 GMT

{"id":{"class":"java.util.UUID","leastSignificantBits":-5505862214121887136,"mostSignificantBits":-5881708311393779318},"dateCreated":"2016-08-14T16:53:20Z","firstName":"Giri","lastName":"Pottepalem","lastUpdated":"2016-08-14T16:53:20Z"}

Now, let's try a GET request to fetch the list:
curl -i -X GET 'http://localhost:8080/api/artists'

I get the following response:
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
X-Application-Context: application:development
Content-Type: application/json;charset=UTF-8
Transfer-Encoding: chunked
Date: Sun, 14 Aug 2016 16:55:07 GMT

[{"id":{"class":"java.util.UUID","leastSignificantBits":-8933569886755140856,"mostSignificantBits":2853479693881265650},"dateCreated":"2016-08-14T16:44:36Z","firstName":"Giri","lastName":"Pottepalem","lastUpdated":"2016-08-14T16:44:36Z"}]

Notice that the id, being of type UUID, is rendered more elaborately than expected, version is not listed, and the Date fields dateCreated and lastUpdated are in a format that isn't human friendly.

In the next post, I will customize the response and secure some of the end-points.
