2016/10/01

How to easily avoid switch-case statement with function mapping in JavaScript

I won't advertise why it's good to avoid switch-case. While the matter may still be open to dispute, I think there are usually better ways to implement what a switch-case does.

My vote would go to polymorphism. Unfortunately, much of the JavaScript code out there is not written following good OOP principles. So if you just need to work with a given code base without staging a revolution, there is a neat way to go:
const actions = {
    value1: actionX,
    value2: actionY,
    value3: actionZ
};

actions[value]();
where actionX, actionY and actionZ are functions.

Of course it's good to add a check for an undefined value, like:
actions[value] && actions[value]();

Or an error might be thrown if no mapping is found, as follows:
const action = actions[value]
        ? actions[value]
        : (value) => { throw new Error('No action mapped for value: ' + value); };

action(value);
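
Putting the snippets together, a minimal runnable sketch might look like this (actionX, actionY and actionZ are placeholder functions invented for the example):

```javascript
const actionX = () => 'X';
const actionY = () => 'Y';
const actionZ = () => 'Z';

const actions = {
    value1: actionX,
    value2: actionY,
    value3: actionZ
};

// dispatch replaces the switch-case: look the function up, fail fast if missing
function dispatch(value) {
    const action = actions[value];
    if (!action) {
        throw new Error('No action mapped for value: ' + value);
    }
    return action();
}

console.log(dispatch('value1')); // 'X'
```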
Well, that's it. I believe it helps to reduce clutter in code. Just compare it to a typical switch:
switch (value) {
    case 'value1':
        actionX();
        break;
    case 'value2':
        actionY();
        break;
    case 'value3':
        actionZ();
        break;
    default: throw new Error('No action mapped for value: ' + value)
}


Exactly the same idea works in Groovy, Java and other languages if you are not up for polymorphism in a given case. Although in languages without functional constructs, like Java before version 8, you may need to create some sort of Action class definition.
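
For illustration, a pre-Java-8 sketch of such an Action definition might look like this (the Action interface and the value names are assumptions for the example):

```java
import java.util.HashMap;
import java.util.Map;

public class ActionMapExample {

    // pre-Java-8 substitute for a function reference: a single-method interface
    interface Action {
        String run();
    }

    static final Map<String, Action> ACTIONS = new HashMap<>();
    static {
        ACTIONS.put("value1", new Action() {
            public String run() { return "actionX executed"; }
        });
        ACTIONS.put("value2", new Action() {
            public String run() { return "actionY executed"; }
        });
    }

    static String dispatch(String value) {
        Action action = ACTIONS.get(value);
        if (action == null) {
            throw new IllegalArgumentException("No action mapped for value: " + value);
        }
        return action.run();
    }

    public static void main(String[] args) {
        System.out.println(dispatch("value1")); // actionX executed
    }
}
```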

2015/10/12

Static vs dynamic method overloading with Java and Groovy

Groovy code may look quite similar to Java at first glance. This can sometimes lead to pitfalls. A good example is method overloading by argument type: the code may look the same in both languages, but it is going to work differently.

Example

Consider the following class:

public class Foo {

   public String bar(Object value) {
      return "Object: " + value;
   }

   public String bar(String value) {
      return "String: " + value;
   }

   public String bar(Integer value) {
      return "Integer: " + value;
   }
}

Although it's a Java class, at this point it doesn't really matter whether it's Groovy or Java. The caller class is what matters.

JAVA: static binding for overloaded methods

Caller class written in Java:
public class StaticBindingExample {

   public static void main(String[] args) {
      Object number = 44;
      Object text = "plop!";

      Foo foo = new Foo();

      foo.bar(number);   // returns "Object: 44"
      foo.bar(text);     // returns "Object: plop!"
   }

}
Due to Java's static nature, calling the bar() method always invokes the overload whose signature matches the declared type of the argument.

Groovy: dynamic binding for method overloading

Here is the exact same caller code as in the previous snippet, just written in Groovy:
class DynamicBindingExample {

   static void main(String[] args) {
      Object number = 44
      Object text = 'plop!'

      Foo foo = new Foo()

      foo.bar(number)  // returns "Integer: 44"
      foo.bar(text)    // returns "String: plop!"
   }
}
Here we can see the difference. Although the arguments were declared as Object, Groovy's dynamic type evaluation tries to match the closest method at runtime. Thus the methods relevant to the actual argument types were executed.

Advantage of dynamic binding

Please consider an example of some payment service written in Java:
public class PaymentService {

   private final CustomerRepository customerRepository;
   private final AccountService accountService;

   public PaymentService(CustomerRepository customerRepository, AccountService accountService) {
      this.customerRepository = customerRepository;
      this.accountService = accountService;
   }

   public void pay(Integer customerId, BigDecimal amount) {
      Customer customer = customerRepository.findById(customerId);   // may throw UnknownCustomerException
      accountService.substract(customer, amount);                    // may throw InsufficientFundsException
   }
}
How is exception handling going to look if we add a try-catch block around the business logic within the pay() method? Well, quite typical:
public void pay(Integer customerId, BigDecimal amount) {
   try {
      Customer customer = customerRepository.findById(customerId);   // may throw UnknownCustomerException
      accountService.substract(customer, amount);                    // may throw InsufficientFundsException
   } catch (UnknownCustomerException ex) {
      handle(ex);
   } catch (InsufficientFundsException ex) {
      handle(ex);
   } catch (Exception ex) {
      handle(ex);
   }
}

private void handle(UnknownCustomerException ex) {
   // relevant logic for handling unknown customer
}

private void handle(InsufficientFundsException ex) {
   // relevant logic for handling insufficient funds
}

private void handle(Exception ex) {
   // relevant logic for handling unexpected exception
}
Obviously, if the handling logic isn't complex, delegation to separate methods may be skipped in favour of in-line handling inside each catch block. Still, splitting the logic into methods or encapsulating exception handling in an injected collaborator is usually the better way and a cleaner separation of concerns.

One way or the other, we can see straight away that the catch blocks are rather redundant in this situation. How would it look with Groovy's dynamic overloading? A single, generic catch block is enough. The invocation is routed to the appropriate handle() method by the argument type anyway:
void pay(Integer customerId, BigDecimal amount) {
   try {
      Customer customer = customerRepository.findById(customerId)   // may throw UnknownCustomerException
      accountService.substract(customer, amount)                    // may throw InsufficientFundsException
   } catch (Exception ex) {
      handle(ex)
   }
}

private void handle(UnknownCustomerException ex) {
   // relevant logic for handling unknown customer
}

private void handle(InsufficientFundsException ex) {
   // relevant logic for handling insufficient funds
}

private void handle(Exception ex) {
   // relevant logic for handling unexpected exception
}
The same behaviour implemented with less and cleaner code? That's what a craftsman appreciates.

The same behaviour with Java static overloading

There is a way to achieve "the same" with pure Java: the Match Maker design pattern. I'm not sure about the name of the pattern itself, though. I've got the feeling that Martin Fowler, the Gang of Four or some other guru might have come up with a better definition for such a case. I can't find it at the moment, so let's get back to the code (you're welcome to comment if you know it, though).

Simply put, we may have a "routing" map from the class type to be handled to its handler. In our case it can be done by adding the mentioned map as a PaymentService field and then using it in the catch block as follows. Let's say the handlers map is injected via the constructor; then we simply have a few more lines of code:
public class PaymentService {
   // …
   private final Map<Class<? extends Exception>, ExceptionHandler> handlers;

   public PaymentService(CustomerRepository customerRepository, AccountService accountService,
                         Map<Class<? extends Exception>, ExceptionHandler> handlers) {
      // … 
      this.handlers = handlers;
   }

   public void pay(Integer customerId, BigDecimal amount) {
      try {
         // …
      } catch (Exception ex) {
         ExceptionHandler handler = handlers.get(ex.getClass());
         handler.handle(ex);
      }
   }
}
Obviously we need the handler interface as well, nothing surprising here:
interface ExceptionHandler {
   void handle(Exception ex);
}
That's it, isn't it? Well, not quite, to be honest. To get a proper impression of how much more code is actually necessary, it's best to show the complete example. If we were to encapsulate the same, full logic within a single class, it would look like:
public class PaymentService {

   private final CustomerRepository customerRepository;
   private final AccountService accountService;
   private final Map<Class<? extends Exception>, ExceptionHandler> handlers;

   public PaymentService(CustomerRepository customerRepository, AccountService accountService) {
      this.customerRepository = customerRepository;
      this.accountService = accountService;
      this.handlers = createExceptionHandlers();
   }

   public void pay(Integer customerId, BigDecimal amount) {
      try {
         Customer customer = customerRepository.findById(customerId);   // may throw UnknownCustomerException
         accountService.substract(customer, amount);                    // may throw InsufficientFundsException
      } catch (Exception ex) {
         ExceptionHandler handler = handlers.get(ex.getClass());
         handler.handle(ex);
      }
   }

   private Map<Class<? extends Exception>, ExceptionHandler> createExceptionHandlers() {
      HashMap<Class<? extends Exception>, ExceptionHandler> handlers = new HashMap<>();
      handlers.put(UnknownCustomerException.class, createUnknownCustomerExceptionHandler());
      handlers.put(InsufficientFundsException.class, createInsufficientFundsExceptionHandler());
      handlers.put(Exception.class, createUnexpectedExceptionHandler());
      return handlers;
   }

   private ExceptionHandler createUnknownCustomerExceptionHandler() {
      return new ExceptionHandler() {
         @Override
         public void handle(Exception ex) {
            // relevant logic for handling unknown customer
         }
      };
   }

   private ExceptionHandler createInsufficientFundsExceptionHandler() {
      return new ExceptionHandler() {
         @Override
         public void handle(Exception ex) {
            // relevant logic for handling insufficient funds
         }
      };
   }

   private ExceptionHandler createUnexpectedExceptionHandler() {
      return new ExceptionHandler() {
         @Override
         public void handle(Exception ex) {
            // relevant logic for handling unexpected exception
         }
      };
   }

}
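
For the record: since Java 8, the anonymous classes above collapse into lambdas and the boilerplate shrinks considerably. A minimal sketch, using java.util.function.Function and hypothetical exception types standing in for the domain ones:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class LambdaHandlerSketch {

    // class-to-handler routing map, each handler now a one-line lambda
    static final Map<Class<? extends Exception>, Function<Exception, String>> HANDLERS = new HashMap<>();
    static {
        HANDLERS.put(IllegalArgumentException.class, ex -> "unknown customer: " + ex.getMessage());
        HANDLERS.put(IllegalStateException.class, ex -> "insufficient funds: " + ex.getMessage());
        HANDLERS.put(Exception.class, ex -> "unexpected: " + ex.getMessage());
    }

    static String handle(Exception ex) {
        // fall back to the generic handler when the exact class has no mapping
        return HANDLERS.getOrDefault(ex.getClass(), HANDLERS.get(Exception.class)).apply(ex);
    }

    public static void main(String[] args) {
        System.out.println(handle(new IllegalArgumentException("id 42")));
    }
}
```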

Summary

Clearly, the solution's complexity may grow really fast if one wants to mimic dynamic binding in a language which, by its nature, does it the static way. Dynamic method overloading then comes as a really helpful feature which avoids unnecessary clutter in the code.

On the other hand, it fits rather simple scenarios of dealing with objects from the same inheritance tree, and it doesn't always have to be the best approach. Too much logic placed within a single class is almost never a good idea. As usual, the trick is to choose the proper solution for the job, as well as the programming language itself.

2014/12/31

Merge multiple log files preserving entries order

Problem

When debugging issues in bigger applications, especially multi-threaded or even multi-process ones, it often happens that one needs to work with multiple log files written at the same time.
To get a full view of what happened in such an app, the most convenient option is a single log file combining all of them, with the chronological order of log entries preserved.

This article shows how to achieve this with just the Linux sort command, without breaking multi-line entries (e.g. Java stack traces).

Example

As my case was almost the same, I took the samples below from a Stack Overflow question (which features my answer as well).
Additionally, I prepended each entry with a date to match my case better. This doesn't change anything, though.

Input

To simplify the example, all log entries are from the same date, hour and minute, but the solution works for log entries with any date/time.

file1.log
2014-12-31 11:48:18.825 [main] INFO  org.hibernate.cfg.Environment - HHH000206: hibernate.properties not found
2014-12-31 11:48:55.784 [main] INFO  o.h.tool.hbm2ddl.SchemaUpdate - HHH000396: Updating schema

file2.log
2014-12-31 11:48:35.377 [qtp1484319352-19] ERROR c.w.b.c.ControllerErrorHandler -
org.springframework.beans.TypeMismatchException: Failed to convert value of type   'java.lang.String' to required type 'org.joda.time.LocalDate'; nested exception is    org.springframework.core.convert.ConversionFailedException: Failed to convert from type     java.lang.String to type @org.springframework.web.bind.annotation.RequestParam   @org.springframework.format.annotation.DateTimeFormat org.joda.time.LocalDate for value    '[2013-03-26]'; nested exception is java.lang.IllegalArgumentException: Invalid format: "    [2013-03-26]"
    at org.springframework.beans.TypeConverterSupport.doConvert(TypeConverterSupport.java:68) ~[spring-beans-3.2.1.RELEASE.jar:3.2.1.RELEASE]
at org.springframework.beans.TypeConverterSupport.convertIfNecessary(TypeConverterSupport.java:45) ~[spring-beans-3.2.1.RELEASE.jar:3.2.1.RELEASE]
at org.springframework.validation.DataBinder.convertIfNecessary(DataBinder.java:595) ~[spring-context-3.2.1.RELEASE.jar:3.2.1.RELEASE]
at org.springframework.web.method.annotation.AbstractNamedValueMethodArgumentResolver.resolveArgument(AbstractNamedValueMethodArgumentResolver.java:98) ~[spring-web-3.2.1.RELEASE.jar:3.2.1.RELEASE]
at org.springframework.web.method.support.HandlerMethodArgumentResolverComposite.resolveArgument(HandlerMethodArgumentResolverComposite.java:77) ~[spring-web-3.2.1.RELEASE.jar:3.2.1.RELEASE]
at org.springframework.web.method.support.InvocableHandlerMethod.getMethodArgumentValues(InvocableHandlerMethod.java:162) ~[spring-web-3.2.1.RELEASE]

Expected output

As you can see, to preserve chronological order upon the merge, the entry from file2.log has to end up between the two entries from file1.log. We also want the stack trace to stick with its entry.

2014-12-31 11:48:18.825 [main] INFO  org.hibernate.cfg.Environment - HHH000206: hibernate.properties not found
2014-12-31 11:48:35.377 [qtp1484319352-19] ERROR c.w.b.c.ControllerErrorHandler -
org.springframework.beans.TypeMismatchException: Failed to convert value of type   'java.lang.String' to required type 'org.joda.time.LocalDate'; nested exception is    org.springframework.core.convert.ConversionFailedException: Failed to convert from type     java.lang.String to type @org.springframework.web.bind.annotation.RequestParam   @org.springframework.format.annotation.DateTimeFormat org.joda.time.LocalDate for value    '[2013-03-26]'; nested exception is java.lang.IllegalArgumentException: Invalid format: "    [2013-03-26]"
    at org.springframework.beans.TypeConverterSupport.doConvert(TypeConverterSupport.java:68) ~[spring-beans-3.2.1.RELEASE.jar:3.2.1.RELEASE]
at org.springframework.beans.TypeConverterSupport.convertIfNecessary(TypeConverterSupport.java:45) ~[spring-beans-3.2.1.RELEASE.jar:3.2.1.RELEASE]
at org.springframework.validation.DataBinder.convertIfNecessary(DataBinder.java:595) ~[spring-context-3.2.1.RELEASE.jar:3.2.1.RELEASE]
at org.springframework.web.method.annotation.AbstractNamedValueMethodArgumentResolver.resolveArgument(AbstractNamedValueMethodArgumentResolver.java:98) ~[spring-web-3.2.1.RELEASE.jar:3.2.1.RELEASE]
at org.springframework.web.method.support.HandlerMethodArgumentResolverComposite.resolveArgument(HandlerMethodArgumentResolverComposite.java:77) ~[spring-web-3.2.1.RELEASE.jar:3.2.1.RELEASE]
at org.springframework.web.method.support.InvocableHandlerMethod.getMethodArgumentValues(InvocableHandlerMethod.java:162) ~[spring-web-3.2.1.RELEASE]
2014-12-31 11:48:55.784 [main] INFO  o.h.tool.hbm2ddl.SchemaUpdate - HHH000396: Updating schema

Solution

sort -nmbs -k1.1,1.4 -k1.6,1.7 -k1.9,1.10 -k2.1,2.2 -k2.4,2.5 -k2.7,2.8 -k2.10,2.12 file1.log file2.log > merged.log
The above command does the trick. Of course, the input may be provided with wildcards as well, like file?.log or *.log, instead of specifying each file separately.
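
The whole thing can be reproduced end to end on tiny sample files (the entries below are made up but follow the same format):

```shell
# two already-sorted input files, the second containing a multi-line entry
printf '%s\n' \
  '2014-12-31 11:48:18.825 [main] INFO first entry' \
  '2014-12-31 11:48:55.784 [main] INFO third entry' > file1.log

printf '%s\n' \
  '2014-12-31 11:48:35.377 [qtp-19] ERROR second entry' \
  '    at some.Stacktrace.line(Foo.java:1)' > file2.log

# merge by year, month, day, hour, minute, second and millisecond keys
sort -nmbs -k1.1,1.4 -k1.6,1.7 -k1.9,1.10 \
     -k2.1,2.2 -k2.4,2.5 -k2.7,2.8 -k2.10,2.12 \
     file1.log file2.log > merged.log

cat merged.log
```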

Explanation

According to the man pages, the switches used mean:
-n, --numeric-sort - compare according to string numerical value
-b, --ignore-leading-blanks - ignore leading blanks
-s, --stable - stabilize sort by disabling last-resort comparison
-m, --merge - merge already sorted files; do not sort
-k, --key=POS1[,POS2] - start a key at POS1 (origin 1), end it at POS2 (default end of line)
It's not easy to find comprehensive info about the sort command. However, while experimenting with it I got some insight, and I'll try to give an explanation. If you find any inaccuracies or mistakes, please leave a comment.

Compare only numeric values

The -n switch is supposed to speed up numeric comparison. We want that.

Apart from this, however, it apparently stops comparing a key at the first non-numeric character in it. That's crucial for keeping multi-line entries like stack traces in place (see below).

Merge, don't sort

Log files are already ordered, so we don't want to sort them again, only determine which line goes first upon merging. That's why the -m switch is used.

When the merge switch is combined with numeric sort, lines which don't have numeric values for the specified keys are "preferred" in comparison over lines with proper keys (those containing date/time). This way, stack trace lines are copied to the output file until the next line with a date/time key is spotted.

I think that's a bit accidental behaviour but really crucial for our case.

Specify proper keys

The most important part is the keys for the sort (merge) comparison. A key is specified with the -k switch followed by a column position in a file line. Let's say -k1,5 means that the sort (merge) comparison is done by all columns from column 1 to column 5. By default, columns are delimited by blanks, like spaces or tabs.
Specifying only a single column, like -k2, results in a comparison by that column and everything after it until the end of the line. That's why it's important to specify at least two columns, or the same column twice, like -k1,1, if there is only one column we want to order by.

As we are concerned with preserving the date-time order of log entries, it seems that a key like -k1,2 should be sufficient in our case. The first column is the date, the second column is the time, voila! There is a catch, though. As mentioned above, key comparison with the -n switch stops at any non-numeric character, which is the dash after the year in the sample log entries. This means that only the year is going to be compared as a sort key upon merging, and we'd end up with the same result as if cat *.log > merged.log had been used (assuming all log entries in the input files are from the same year, which is usually the case).
On the other hand, not using the -n switch sorts the input files, which results in all the stack trace lines (for all the stack traces) clustered in alphabetical order at the top of the file. Not good...
That's why the keys need to be specified in a more granular way, by pointing to specific character positions in each column. It can be done with a dot and an in-column character position, like -k1.1,1.4 for the four digits of the year, then -k1.6,1.7 for the first and second digit of the month, -k1.9,1.10 for the day digits, -k2.1,2.2 for the hour digits, and so on.

The keys themselves may be provided to the command in a different order if your log format is different (all input files need to share the same format, though). Let's say each entry starts with a date written as 12/31/2014. Just go with the following key switches then:
-k1.7,1.10 -k1.1,1.2 -k1.4,1.5  (year, month, day).
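
A quick check of the reordered keys on two made-up, single-entry files:

```shell
# one entry per file, US-style dates
printf '%s\n' '12/31/2014 old entry' > a.log
printf '%s\n' '01/02/2015 new entry' > b.log

# compare by year (chars 7-10), then month (1-2), then day (4-5)
sort -nmbs -k1.7,1.10 -k1.1,1.2 -k1.4,1.5 a.log b.log
```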

Ignore leading blanks

This probably doesn't change anything for our particular case, but I left -b in just in case, as most of the stack trace lines begin with spaces.

Stabilize sort

The -s switch disables the last-resort comparison. That's the default behaviour which, whenever the keys by which two lines are compared are the same, falls back to a full string comparison of the whole lines.
We'd like to preserve the original order of log entries, even for lines logged in the exact same millisecond. That's why this switch is helpful in our case. Moreover, it may slightly speed up the command as well.

2013/06/29

Grails enum custom database value mapping

About

How to map a custom value and type of enum constants into the database with a Grails domain.

TL;DR

Just add an id field to the enum class and set its value for each enum constant.

Example case

When modelling a domain, there is often some enum domain class introduced, such as WhateverType or SomethingsStatus. Let's say we want to use ordinal mapping instead of GORM's default text mapping.
class SomeDomainElement {
    Level level

    static mapping = {
        level enumType: 'ordinal'
    }
}
Our enum presents itself as follows:
enum Level {
    EASY,
    MEDIUM,
    HARD
}
Later on, the introduction of a new level may happen. Let's say ADVANCED. It would be really tempting to place it between MEDIUM and HARD. Stop! This would change the position number of HARD, so the database mapping would change as well. What about the HARD levels already stored?

Solution

Remove the mapping block from SomeDomainElement; it won't be needed any more. Just add the id field to the enum constants.
enum Level {
    EASY(1),
    MEDIUM(2),
    ADVANCED(4),
    HARD(3)

    final int id
    private Level(int id) { this.id = id }
}
The field must be named id so that Grails maps it automatically as the DB value.

Any serializable type known to Hibernate can be used instead of int, like char, String, BigDecimal, Date and so on.
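
Outside of Grails, the same id-per-constant idea can be sketched in plain Java, with an explicit reverse lookup for reading a stored value back (the byId() helper is an addition for the example):

```java
public enum Level {
    EASY(1), MEDIUM(2), ADVANCED(4), HARD(3);

    private final int id;

    Level(int id) { this.id = id; }

    public int getId() { return id; }

    // reverse lookup, e.g. when mapping a stored DB value back to a constant
    public static Level byId(int id) {
        for (Level level : values()) {
            if (level.id == id) {
                return level;
            }
        }
        throw new IllegalArgumentException("No Level for id: " + id);
    }
}
```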

2013/04/06

Thought static methods can't be easily mocked, stubbed or tracked? Wrong!

No matter why, and no matter whether it's a good idea. Sometimes one just wants to check something, or it simply has to be done. Mock a static method, woot? Impossibru!

In the pure Java world it is still a struggle, but Groovy allows you to do it really simply. Well, not Groovy alone, but with the great support of Spock.

Let's move straight on to the example. To give it some context, we have an abstract domain for the example's needs: a marketing project with a set of offers. One to many.

import spock.lang.Specification

class OfferFacadeSpec extends Specification {

    OfferFacade facade = new OfferFacade()

    def setup() {
        GroovyMock(Project, global: true)
    }

    def 'delegates an add offer call to the domain with proper params'() {
        given:
            Map params = [projId: projectId, name: offerName]

        when:
            Offer returnedOffer = facade.add(params)

        then:
            1 * Project.addOffer(projectId, _) >> { projId, offer -> offer }
            returnedOffer.name == params.name

        where:
            projectId | offerName
            1         | 'an Offer'
            15        | 'whasup!?'
            123       | 'doskonała oferta - kup teraz!'
    }
}
So we test a facade responsible for handling an "add offer to the project" call triggered somewhere in a GUI.
We want to ensure that the static method Project.addOffer(long, Offer) receives the correct params when a java.util.Map with user form input comes to facade.add(params).
This is a unit test, so how Project.addOffer() works is out of scope. Thus we want to stub it.

The most important part is the GroovyMock(Project, global: true) statement.
What it does is modify the Project class to behave like a Spock mock.
GroovyMock() itself is a method inherited from Specification. The global flag is necessary to enable mocking static methods.
However, when one comes to the point of needing to mock a static method, the author of the Spock Framework advises considering a redesign of the implementation. It's not bad advice, I must say.

Other important things are the assertions in the then: block. The first one checks the interaction: whether the Project.addOffer() method was called exactly once, with the 1st argument equal to projectId and some other param (we don't have an object instance yet to assert anything about it).
The right shift operator leads us to the stub which replaces the original method implementation with the given closure.
As a good stub, it does almost nothing. The original method's return type is Offer, and the stub needs to match it, so the offer passed as the 2nd argument is simply returned.
Thanks to this, we can assert that the name property equals the value from params. If no return value were designed, the name could be checked inside the stub closure, prefixed with the assert keyword.

Worth mentioning is that if you want to track interactions with the original static method implementation without replacing it, you should try using GroovySpy instead of GroovyMock.

Unfortunately, static methods declared in a Java class can't be treated this way. Still, regular mocks and the whole goodness of Spock can be used to test pure Java code, which is awesome anyway :)

2013/02/21

Grails session timeout without XML

This article shows a clean, non-hacky way of configuring featureful event listeners for the Grails application servlet context, featuring an HttpSessionListener as a Spring bean, with a session timeout depending on whether the user account is premium or not.

Common approaches

Speaking of session timeout config in Grails, the default approach is to install the templates with a command. This way we get direct access to the web.xml file, but more unnecessary files are created as well. Apart from the fact that unnecessary files are, well, unnecessary, we should also remember some other common knowledge: XML is not for humans.

Another, a bit more hacky, way is to create the mysterious scripts/_Events.groovy file. Inside it, using the no less enigmatic closure eventWebXmlEnd = { filename -> ... }, we can parse and hack into web.xml with the help of XmlSlurper.
Even though a lot of Grails plugins do it in a similar way, it's still not really straightforward, is it? Besides, where's the IDE support? Hello!?

Examples of both above ways can be seen on StackOverflow.

Simpler and cleaner way

By adding just a single line to the already-generated init closure, we have it done:
class BootStrap {

    def init = { servletContext ->    
        servletContext.addListener(OurListenerClass)    
    }    
}

Alrighty, this is enough to avoid XML. Sweets are served after the main course, though :)

Listener as a Spring bean

Let us assume we have a requirement: set a longer session timeout for premium user accounts.
Users are authenticated upon session creation through SSO.

To easily meet the requirement, just instantiate the CustomTimeoutSessionListener as a Spring bean in resources.groovy. We'll also need some source of the custom session timeout, let's say a ConfigService.
beans = {    
    customTimeoutSessionListener(CustomTimeoutSessionListener) {    
        configService = ref('configService')    
    }    
}

With such an approach, BootStrap.groovy has to be slightly modified. To keep control over listener instantiation, instead of passing the listener class type, the Spring bean is injected by Grails and the instance passed:
class BootStrap {

    def customTimeoutSessionListener

    def init = { servletContext ->    
        servletContext.addListener(customTimeoutSessionListener)
    }    
}

An example CustomTimeoutSessionListener implementation can look like:
import javax.servlet.http.HttpSessionEvent    
import javax.servlet.http.HttpSessionListener    
import your.app.ConfigService    
    
class CustomTimeoutSessionListener implements HttpSessionListener {    
    
    ConfigService configService
    
    @Override    
    void sessionCreated(HttpSessionEvent httpSessionEvent) {    
        httpSessionEvent.session.maxInactiveInterval = configService.sessionTimeoutSeconds
    }    
    
    @Override    
    void sessionDestroyed(HttpSessionEvent httpSessionEvent) { /* nothing to implement */ }    
}
With all the power of the Spring IoC at hand, this is surely a good place to load some persisted user account data into the session, or to notify any other relevant bean about the user's presence.

Wait, what about the user context?

The honest answer is: that depends on your case. Still, here's an example of a getSessionTimeoutSeconds() implementation using Spring Security:
import org.springframework.security.core.context.SecurityContextHolder    
    
class ConfigService {

    static final int THREE_HOURS = 3 * 60 * 60
    static final int QUARTER = 15 * 60

    int getSessionTimeoutSeconds() {

        String username = SecurityContextHolder.context?.authentication?.principal
        def account = Account.findByUsername(username)

        return account?.premium ? THREE_HOURS : QUARTER
    }    
}
This example is simplified and doesn't contain much defensive programming, just the assumption that the principal is already set and is a String: a unique username. Thanks to Grails conventions, our ConfigService is transactional, so the Account domain class can use a GORM dynamic finder.
OK, the config fetching implementation details are out of scope here anyway. You can get, load, fetch or obtain the value from wherever you like: domain persistence, the principal object, role config, an external file and so on...

Any gotchas?

There is one. When running the grails test command, servletContext comes as some mocked class instance without the addListener method. Thus we're going to get a MissingMethodException when running tests :(

Solution is typical:
def init = { servletContext ->
    if (Environment.current != Environment.TEST) {    
        servletContext.addListener(customTimeoutSessionListener)    
    }    
}
An unnecessary obstacle if you ask me. Should I submit a Jira issue about that?

TL;DR

Just implement a HttpSessionListener. Create a Spring bean of the listener. Inject it into BootStrap.groovy and call servletContext.addListener(injectedListener).