Thursday, November 21, 2013

Most of my team's applications authenticate against application-specific user data stored in a good old relational database. However, we have a single internal operations application that uses the company-wide Active Directory (AD) server for user authentication. When I first developed the application, there were thoughts of many more applications performing authentication against AD, and since LDAP directory access via Java has always been a little awkward, we decided to deploy an Atlassian Crowd instance. Crowd works well and exposes a simple REST interface for user authentication, but it was one extra server and application to maintain and monitor. Given that we only had one application using it and we were hitting one AD instance on the backend, it became a bit of unnecessary overhead.
I've never been much of an AD or even LDAP expert, but I decided to look into authenticating against AD directly from my Java application. The best LDAP library I found is Spring LDAP, and we already use Spring for all of our dependency injection, so it was a natural fit. I expected to spend a day or two wading through distinguished names (DNs), properties, AD hierarchies, etc., but to my surprise I was able to get it up and running in a few lines of code.
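The basic shape of it is below. This is a minimal sketch: the URL, base DN, and account names are placeholders for your own environment, and it assumes Spring LDAP's LdapContextSource and LdapTemplate.

```java
import org.springframework.ldap.core.DistinguishedName;
import org.springframework.ldap.core.LdapTemplate;
import org.springframework.ldap.core.support.LdapContextSource;

public class ActiveDirectoryAuthenticator {

  private final LdapTemplate ldapTemplate;

  public ActiveDirectoryAuthenticator() throws Exception {
    // Connection details for the AD server. In the real application these
    // values are injected via Spring rather than hard coded.
    LdapContextSource contextSource = new LdapContextSource();
    contextSource.setUrl("ldap://ad.example.com:389");
    contextSource.setBase("DC=example,DC=com");

    // AD requires an authenticated user to browse the directory, so a
    // service account is configured as the user DN and password.
    contextSource.setUserDn(
        "CN=svc-ldap,OU=Service Accounts,DC=example,DC=com");
    contextSource.setPassword("secret");
    contextSource.afterPropertiesSet();

    ldapTemplate = new LdapTemplate(contextSource);
  }

  /**
   * Returns true if the given user authenticates successfully against AD.
   */
  public boolean authenticate(String username, String password) {
    // Spring LDAP searches from the base DN for the matching user and
    // then attempts a bind with the supplied password.
    return ldapTemplate.authenticate(DistinguishedName.EMPTY_PATH,
        "(sAMAccountName=" + username + ")", password);
  }
}
```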
To get all the various configuration values I needed, I simply looked at my Jenkins LDAP setup as well as the original Crowd configuration. Between the two, and with a little poking around in JXplorer, I was able to find all the information I needed.
The Active Directory at my company requires an authenticated user in order to browse the directory, so I specify this information as the user DN and password. I can then issue authentication requests against any user in AD. Spring LDAP hides a lot of the gory details of crawling the directory, finding the user, and performing the authentication check. You'll obviously have to tweak the base DN for your own configuration, and in a heavily used application you'll probably also want to look into Spring LDAP's pooling support.
In the long run I'd like to get Crowd more fully configured and supported so I can point a bunch of my internal tools at it (Git, SVN, Jenkins, etc.), but for now I can shut it down and let my one application hit AD directly.
Monday, October 28, 2013
Helmsman 1.0 Available
I'm happy to announce the first release of Helmsman, a simple, easy to configure, easy to deploy, service control tool. My current project is a micro-services architecture which requires starting, stopping, and checking the status of 12 to 15 daemons on a couple of machines during deployments and maintenance sessions. Helmsman makes this process simple, reliable, and quick. You tell Helmsman where to steer the ship and it politely abides.
History
Helmsman's legacy is a collection of SystemV style init scripts that were copied to different machines and manually maintained or symlinked all over the place. Needless to say, that didn't scale very well and the scripts started to drift between the machines. We also ran into issues with permissions: we didn't want the entire development team or QA team having root access, but the scripts needed to be maintained, linked to the appropriate run level, and services controlled.

This led to a rewrite in Python, which was chosen because it was quick to put together and has pretty good subprocess execution and monitoring capabilities. Unfortunately the implementation tried to be a little too fancy and do some nice ASCII art to show service status, which would cause the interpreter to crash in various term configurations. We also ran into issues keeping the Python install (and supporting modules) consistent across the various Solaris Sparc, Solaris x86, RedHat x86, OSX, and Windows machines in use throughout different environments (a discussion for another time).
I then spent some time looking for alternatives. Unfortunately I didn't find anything that was simple to install, cross platform, and maintainable by our (mostly Java) development team. I looked at monit, Chef, Puppet, RunDeck, Jenkins, Upstartd, etc., but they all felt way too heavyweight or got us back into the issue of needing another runtime across all of our machines. We're not a huge shop, so having to build out Puppet scripts just to consistently install a runtime to start and stop services doesn't seem like time well spent.
Given that our main applications are written in Java, we already maintain JVM installs on all machines, and our developers know Java well, it seemed like an obvious choice. I spent a few hours playing around with commons-exec and with formatting the output and debugging information to be readable on all terminals, and I was able to rewrite the Python scripts in a day. Helmsman was born.
My Process
I deploy Helmsman with our application. So our deployment scripts (automated via Jenkins) push a copy of Helmsman out with our deployment, stop all the services using the existing version, move everything out of the way, install the new deployment, and then use the new Helmsman to start everything back up. This makes it super simple to make sure that the same version is on every machine and that all changes are getting pushed out reliably just like the rest of our build.
In test/stage environments, I have versions set up for QA to start and stop key services during testing to exercise failover, redundancy, etc. We also use the groups feature to define services that need to stay up even when in maintenance mode or services that should run at the warm standby site.
Features
Some of the features include:
- Simple Java properties configuration format
- One jar deployment
- Base configuration shared across all environments/machines
- Per machine configuration overrides
- Simple service start/stop ordering (no dependency model)
- Parallel or serial service execution
Configuration is done via Java properties files which list the names of the services and then a few basic properties for each service. The "services" are simply commands to be executed which follow the SystemV init script style of taking an argument of start, stop, or status. These scripts can be custom written, but in most cases they will be provided by frameworks like Java Service Wrapper (JSW) or Yet Another Java Service Wrapper (YAJSW) or by your container.
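As a rough illustration, a configuration might look something like the following. Note that the property names here are hypothetical, invented for this sketch; see the README for the actual format.

```properties
# Services to manage, in start order. Stop order is the reverse.
services=broker,appserver,webserver

# Each service maps to a SystemV style control script that accepts a
# single argument of start, stop, or status.
service.broker.command=/opt/broker/bin/broker-service
service.appserver.command=/opt/app/bin/appserver-service
service.webserver.command=/opt/web/bin/webserver-service

# Optional groups, e.g. services that must stay up in maintenance mode.
service.broker.groups=maintenance
```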
Get It
You can grab the source from my GitHub repo or grab a precompiled version from my GitHub-hosted Maven repo. Check out the GitHub page for more details on how to use the tool.
Let me know if you find a use for Helmsman in your process. Hopefully it makes your life a bit easier.
Wednesday, July 31, 2013
Vaadin, Shiro, and Push
I've been using Vaadin for the past few months on a large project and I've been really impressed with it. I've also been using Apache Shiro for all of the project's authentication, authorization, and crypto needs. Again, very impressed.
Up until Vaadin 7.1, I'd just been relying on my old ShiroFilter based configuration of Shiro using the DefaultWebSecurityManager. While this configuration wasn't an exact fit for a Vaadin rich internet application (RIA), it worked well enough that I never changed it. The filter would initialize the security manager and the Subject, which was then available via SecurityUtils as expected.
Then Vaadin 7.1 came along with push support via Atmosphere. Depending on the transport used, Shiro's SecurityUtils can no longer be used because it depends on the filter to bind the Subject to the current thread; a WebSocket transport, for example, won't use the normal servlet thread mechanism, and a long-standing connection may be suspended and resumed on different threads.
There is a helpful tip for using Shiro with Atmosphere where the basic idea is to not use SecurityUtils and to simply bind the Subject to the Atmosphere request. Vaadin does a good job of abstracting away the underlying transport, which means there is little direct access to the Atmosphere request; however, Vaadin does provide a VaadinSession, which is the obvious place to stash the Shiro Subject.
First things first, I switched from using the DefaultWebSecurityManager to just using the DefaultSecurityManager. I also removed the ShiroFilter from my web.xml. With the modular design of Shiro, I was still able to use my existing Realm implementation and just rely on implementing authc/authz in the application logic itself. The Vaadin wiki has some good, general examples of how to do this. Essentially this changes the security model from web security, where you apply authc/authz on each incoming HTTP request, to native/application security, where you implement authc/authz in the application and assume a persistent connection to the client.
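The core of that wiring ends up tiny. Here's a minimal sketch assuming Spring Java configuration and an existing Realm implementation (MyAppRealm is a stand-in for your own realm):

```java
import org.apache.shiro.mgt.DefaultSecurityManager;
import org.apache.shiro.mgt.SecurityManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ShiroConfig {

  // A plain DefaultSecurityManager instead of DefaultWebSecurityManager:
  // no ShiroFilter and no per-request web security.
  @Bean
  public SecurityManager securityManager(MyAppRealm realm) {
    return new DefaultSecurityManager(realm);
  }
}
```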
Next up, I needed a way to locate the Subject without relying on SecurityUtils due to the thread limitations mentioned above. Following the general idea of using Shiro with Atmosphere, I wrote a simple VaadinSecurityContext class that provides similar functionality but binds the Subject to the VaadinSession rather than to a thread. Now that I don't have the SecurityUtils singleton anymore, I rely on Spring to inject the context into my views (and view-models) as needed using the elegant spring-vaadin plugin.
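The gist of the class is shown below. This is a simplified sketch of my implementation, not an official API; it just stashes the Subject as a VaadinSession attribute and builds one on first access:

```java
import com.vaadin.server.VaadinSession;
import org.apache.shiro.mgt.SecurityManager;
import org.apache.shiro.subject.Subject;

public class VaadinSecurityContext {

  private final SecurityManager securityManager;

  public VaadinSecurityContext(SecurityManager securityManager) {
    this.securityManager = securityManager;
  }

  /**
   * Returns the Subject bound to the current VaadinSession, creating and
   * binding a new one if needed. Unlike SecurityUtils, this works no
   * matter which thread is servicing the (possibly push) request.
   */
  public Subject getSubject() {
    VaadinSession session = VaadinSession.getCurrent();
    Subject subject = session.getAttribute(Subject.class);

    if (subject == null) {
      subject = new Subject.Builder(securityManager).buildSubject();
      session.setAttribute(Subject.class, subject);
    }
    return subject;
  }
}
```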
At this point everything was working and I had full authc/authz with Shiro and Vaadin push support. But the Shiro DefaultSecurityManager uses a DefaultSessionManager internally to manage the security Session for the Subject. While you could leave it like this, I didn't like the fact that my security sessions were being managed separately from my Vaadin/UI sessions. This was going to be a problem when it came to session expiration, because Vaadin already has UI expiration times and VaadinSession expiration times, and I was now introducing security Session expiration times. The odds of getting them all to work together nicely were slim, and I could imagine users getting randomly logged out while still having valid UIs or VaadinSessions.
My solution was to write a custom Shiro SessionManager and inject it into the DefaultSecurityManager. My implementation is very simple, with the assumption that whenever a Shiro Session is needed, a user specific VaadinSession is available. The VaadinSessionManager creates a new session (using Shiro's SimpleSessionFactory) and stashes it in the user specific VaadinSession. Expiration of the Shiro Session (and Subject) is now tied to the expiration of the VaadinSession. While I could have used the DefaultSessionManager and implemented a custom Shiro SessionDAO, I didn't see that the DefaultSessionManager offered me much given that I did not want Session expiration/validation support.
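Here's a simplified sketch of that SessionManager. Real code needs a little more care (for example, session ID handling if you serialize Subjects), but it shows the idea:

```java
import com.vaadin.server.VaadinSession;
import org.apache.shiro.session.Session;
import org.apache.shiro.session.SessionException;
import org.apache.shiro.session.mgt.SessionContext;
import org.apache.shiro.session.mgt.SessionKey;
import org.apache.shiro.session.mgt.SessionManager;
import org.apache.shiro.session.mgt.SimpleSessionFactory;

public class VaadinSessionManager implements SessionManager {

  private final SimpleSessionFactory sessionFactory =
      new SimpleSessionFactory();

  @Override
  public Session start(SessionContext context) {
    // Create a plain Shiro session and stash it in the user specific
    // VaadinSession. Its lifetime is now tied to the VaadinSession, so
    // there is no separate Shiro expiration/validation to manage.
    Session session = sessionFactory.createSession(context);
    VaadinSession.getCurrent().setAttribute(Session.class, session);
    return session;
  }

  @Override
  public Session getSession(SessionKey key) throws SessionException {
    // Assumes a user specific VaadinSession is available whenever a
    // Shiro Session is requested.
    return VaadinSession.getCurrent().getAttribute(Session.class);
  }
}
```

Injecting it is then a single call on the security manager: securityManager.setSessionManager(new VaadinSessionManager()).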
So that's it. I wire it all up with Spring and I now have Shiro working happily with Vaadin. The best part is that none of my existing authc/authz code changed because it all simply works with the Shiro Subject obtained via the VaadinSecurityContext. In the future if I need to change up this configuration, I expect that my authc/authz code will remain exactly the same and all the changes will be under the hood with some Spring context updates.
I'm interested to hear if anyone else has found a good way to link up these two great frameworks or if you see any holes in my approach. I'm no expert on Atmosphere, and Vaadin does a good bit of magic to dynamically kick off server push, but so far things have been working well. Best of luck!
Thursday, February 7, 2013
HazelcastMQ Stompee Now Available
After implementing Stomper, the Java STOMP server, I wanted an easy way to test it and demonstrate its functionality. As of today, Stompee, the Java STOMP client, is available at GitHub. Stompee is a generic STOMP client and therefore should work with any STOMP server, but it was designed and tested against Stomper.
In most cases, you're probably better off just using the HazelcastMQ JMS APIs rather than using Stompee, but there may be some cases where you want all your components, regardless of language, using STOMP for message passing.
While working on Stompee, I found and fixed a couple of bugs in HazelcastMQ. Most importantly, I switched the queue MessageConsumer implementation from listening to Hazelcast events to using a separate polling thread. This should result in better performance and ensure that messages queued before the consumer started are properly consumed immediately.
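The polling approach is straightforward because a Hazelcast IQueue is a java.util.concurrent.BlockingQueue. The following is a rough sketch of the idea, not HazelcastMQ's actual consumer code:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class QueuePoller implements Runnable {

  private final BlockingQueue<byte[]> queue; // e.g. a Hazelcast IQueue
  private volatile boolean running = true;

  public QueuePoller(BlockingQueue<byte[]> queue) {
    this.queue = queue;
  }

  @Override
  public void run() {
    while (running) {
      try {
        // A blocking poll picks up messages that were already queued
        // when the consumer started, which an item-added event would
        // miss.
        byte[] msgData = queue.poll(1, TimeUnit.SECONDS);
        if (msgData != null) {
          dispatch(msgData);
        }
      }
      catch (InterruptedException ex) {
        Thread.currentThread().interrupt();
        running = false;
      }
    }
  }

  public void shutdown() {
    running = false;
  }

  private void dispatch(byte[] msgData) {
    // Deserialize and hand the message to the JMS consumer/listener.
  }
}
```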
I added a few new examples in the hazelcastmq-examples module that demonstrate how to use Stomper, Stompee, and the JMS APIs together. At this point I'm ready to begin some serious production testing and push for a 1.0 release. Let me know what you think.
Sunday, February 3, 2013
Patio Bench Project Complete
I took a small break from coding this weekend and spent some time in the workshop. I completed my first large project, a patio bench, using the plans from Woodworking for Mere Mortals. I ran into a few issues with incorrect measurements in the published cut list but overall it was a fun and pretty easy project. Now I just need to wait for the temperature to get above freezing for a few days so I can drag it outside and get it painted.
I'm already on to building a table saw sled to fix the fact that my cheap fence is always out of square. I also owe a few people some children's toys for gifts but after that I'll be looking for my next large project. I'm thinking maybe a name puzzle step stool for my daughter.
HazelcastMQ Stomper Now Available
As I mentioned in a previous post, to completely move off of ActiveMQ as my message broker, I would need a messaging solution for some of the C and C++ components in my system. Currently these components use the STOMP interface exposed by the ActiveMQ broker. After getting the initial version of HazelcastMQ pushed to GitHub, I started working on a STOMP server implementation. Today I'm happy to announce that the first cut at the server is now also available on GitHub as HazelcastMQ Stomper.
Stomper is implemented as a layer on HazelcastMQ, so it uses all the standard JMS components for message sending and receiving. Using the JMS components allows a STOMP client to easily interact with JMS clients, creating a language-agnostic messaging solution on Hazelcast. While Stomper was written and tested with the HazelcastMQ JMS components, it should work with any JMS provider if you want to use it as a generic STOMP server implementation. You can also use Stomper's STOMP server without dealing with any of the JMS APIs other than creating the initial connection factory.
With Stomper and HazelcastMQ you can now send a message to a queue or topic via the STOMP API and receive it via the JMS API (all on a distributed cluster of Hazelcast nodes). The same works in the opposite direction. This is extremely useful in a mixed language environment where Python, Ruby, C, or C++ components need to interact with Java components without having to implement a custom message passing solution.
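As a concrete illustration, a C or Python component might push work onto a queue with a plain STOMP SEND frame while a Java component consumes it through JMS. This sketch assumes a ConnectionFactory already wired up for HazelcastMQ, and the queue name is a placeholder:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class WorkRequestConsumer {

  public void consumeOne(ConnectionFactory connectionFactory)
      throws Exception {
    // A non-Java producer sends the STOMP frame:
    //
    //   SEND
    //   destination:/queue/work.requests
    //
    //   process order 4711^@
    //
    // Stomper turns the frame into a JMS message on the same queue.
    Connection connection = connectionFactory.createConnection();
    connection.start();
    try {
      Session session =
          connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
      Queue queue = session.createQueue("work.requests");
      MessageConsumer consumer = session.createConsumer(queue);

      TextMessage msg = (TextMessage) consumer.receive(5000);
      if (msg != null) {
        System.out.println("Got work request: " + msg.getText());
      }
    }
    finally {
      connection.close();
    }
  }
}
```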
Check out the examples in the hazelcastmq-examples module and give it a try. The next step is to write a simple STOMP client (to be named HazelcastMQ Stompee) which will allow me to write some simple producer/consumer examples using just the STOMP API (i.e. no direct JMS interaction).
Thursday, January 24, 2013
HazelcastMQ Code Available
I set up a GitHub repository for my HazelcastMQ implementation. The README contains a list of what I have working and what still needs to be done. I plan on throwing a few more hours at it this week to try to get a few more features in there soon.
Tuesday, January 22, 2013
Hazelcast JMS Provider
I'm currently using Apache ActiveMQ on a project in a "network of brokers" configuration, with Apache Camel used to implement the enterprise integration patterns. From a development point of view, you work with a Camel generated proxy and set up routes to transform, dispatch, and receive messages. This hides all the Java Message Service (JMS) complexities behind Camel's JMS (or ActiveMQ) components. My system primarily uses this setup for low volume, asynchronous work requests or RPC request/reply operations.
For the most part things work well, but it has been a constant fight to get a stable system. The issues range from configuration complexity to network instability, deployment requirements, and just plain old bugs. Both ActiveMQ and Camel have a rather large set of dependencies, and properly configuring a JMS broker and clients is rather tricky. A small configuration issue can cause your system to slowly grind to a halt (usually only under production loads)! While ActiveMQ is one of the easier brokers to configure and use (it has Spring namespace bindings and can be used without JNDI), I could still imagine a full time position for broker management, configuration, monitoring, etc.
On another part of the system I've been prototyping some distributed locking using Hazelcast, an in-memory data grid that implements Java's Map, List, Queue, and Lock interfaces. I've been extremely impressed with how easy it is to get a Hazelcast cluster up and running, and the fact that it has few, if any, dependencies is another plus. As a data grid, clustering is a first class citizen in Hazelcast, unlike a lot of JMS brokers, which use a hub and spoke pattern with a "network of brokers" as a bit of an afterthought for high availability.
While working on the project, I got to thinking about switching out the ActiveMQ broker with Hazelcast using queues and topics. There are a number of pros and cons to consider:
Pros
- I'm looking to use it for distributed locks so I could remove a dependency on ActiveMQ/JMS by using it for both locks and messages
- It is designed from the ground up for clustering and data distribution while clustering is a bit of an add-on for ActiveMQ (and other MQs)
- It is much simpler to configure and support than a JMS server
- It has built in support for distributed queues
- I could leverage it in the future as a distributed cache
- I could continue to hide it behind Camel so there would be little to no code change in my code
Cons
- The distributed queues are in memory in Hazelcast so messages would be lost if all nodes went down (it should survive any single node failure). That being said, my queues are empty 95% of the time because the messages are consumed quickly. Hazelcast also has persistence support for maps, which I believe back the queue implementation, so some form of persistence may be possible.
- It exposes a custom REST API but not a generic STOMP API so I would need to either implement a STOMP server or rewrite some of our existing non-Java code that uses the STOMP interface on ActiveMQ
- Camel has basic Hazelcast support (as a cache) but I would need to write some non-trivial code to support message marshaling and transaction management
- Performance is a bit of an unknown. There are performance numbers for raw distributed map read/writes but not so much as a message passing system. That being said, I have very low performance requirements for the bus (10s - 100s of messages a second but not much more than that).
After a day or so of work I have a simple, working implementation of a JMS provider on top of Hazelcast which supports message producers and consumers on topics, queues, and temporary queues. I was even able to swap out the ActiveMQ ConnectionFactory for my HazelcastMQ ConnectionFactory with no code changes in my application. So far things are looking promising. The big advantage of implementing the JMS APIs is that it "just works" with Apache Camel, the Spring JmsTemplate API, or any other JMS consumer.
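The swap amounts to changing a single bean. Here's a minimal sketch using Spring's JmsTemplate; the HazelcastMQConnectionFactory class name is illustrative, not necessarily the final API:

```java
import javax.jms.ConnectionFactory;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import org.springframework.jms.core.JmsTemplate;

public class MessagingConfig {

  public JmsTemplate jmsTemplate() {
    // Before:
    // ConnectionFactory connectionFactory =
    //     new ActiveMQConnectionFactory("tcp://broker:61616");

    // After: back the same JMS API with a Hazelcast cluster. The class
    // name below is illustrative of the HazelcastMQ ConnectionFactory.
    HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance(null);
    ConnectionFactory connectionFactory =
        new HazelcastMQConnectionFactory(hazelcast);

    // Everything downstream (JmsTemplate, Camel routes, plain JMS
    // consumers) stays exactly the same.
    return new JmsTemplate(connectionFactory);
  }
}
```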
There are some limitations that I don't have a good solution for yet. For example, implementing consumer message selectors isn't an option with Hazelcast queues (although they do support them with maps). Also, some of the JMS options like message persistence can't be honored because persistence would be all or nothing configured within Hazelcast. I'm hoping I can continue to work around these issues to support the majority of common usage patterns.
I'll post again soon with some more examples and hopefully I'll have the code available shortly if there is interest.