Wednesday, 7 September 2011

Crashing Extreme WM-200

We've had a couple of instances where an Extreme WM-200 wireless controller will lock up and become unresponsive (it keeps passing traffic, but we can't log on to it), and in each case we've found it's due to the internal hard disk filling up with log messages (specifically the /var/log/messages file, which should rotate but doesn't). This is pretty easy to fix, but working out a) the cause and b) how to fix it can be very fiddly, so hopefully this can save someone a frustrating evening.

You'll need to console onto the failed unit and reboot it. We need to get it into single user mode, so when the GRUB screen comes up, interrupt it by pressing a key, then hit 'e' to edit the GRUB loader. Select line 1, hit 'e' again to edit it, and add '1' to the end of the line. Then press Enter and hit 'b' to boot the edited loader. This should go through a basic boot cycle, dropping you into a bash shell with just a hash (#) prompt.

From here you can verify whether a full disk really is the culprit: just type df and see if any of the mounts show 100% use.

So far so good. Now we need to remount the filesystems so we can clear some space. After much digging in the /etc directory, the following lines should get a usable filesystem mounted for you:



mount -n -t proc none /proc
mount -a -t nonfs
mount -o defaults,noatime,nodiratime /dev/ide/host0/bus1/target0/lun0/part1 /mnt/flash &> /dev/null
mount -o defaults,noatime,nodiratime /dev/ide/host0/bus0/target0/lun0/part2 /original_root/
mount -o defaults,noatime,nodiratime /dev/ide/host0/bus0/target0/lun0/part5 /original_root/var/controller/images
mount -o defaults,noatime,nodiratime /dev/ide/host0/bus0/target0/lun0/part7 /original_root/var/controller/log
mount -o defaults,noatime,nodiratime /dev/ide/host0/bus0/target0/lun0/part6 /original_root/var/controller/log/cdr




From here it's just a simple matter of browsing to the /var/log directory and issuing:


rm -f messages
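
If you'd rather keep a recent chunk of the log for post-mortem before clearing it, truncating the file works just as well - a quick sketch (the tail size and temporary filename are just examples):


tail -n 1000 messages > /tmp/messages.tail   # keep the last 1000 lines for later analysis
> messages                                   # truncate the file in place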


Type reboot, and you should be good to go.

Friday, 19 August 2011

JavaDoc Inheritance

Every so often you come across something so staggeringly simple and useful, yet relatively unknown. In the interests of cutting down the amount of class/method documentation we had to produce, I was researching whether people tended to document methods in interfaces, classes or both. It turns out there is a very elegant solution to give you comprehensive documentation without repeating yourself:

Document the methods in the interface, then inherit that documentation into the class, and write dedicated documentation only for any class-specific methods.

To let javadoc handle the inheritance from the interface, you just use the {@inheritDoc} tag inside a Javadoc comment. For example:


interface foo {

    /**
     * documentation!
     */
    public void x();
}


class bar implements foo {

    /** {@inheritDoc} */
    public void x() {
        // do stuff
    }
}


One thing to note - this doesn't apply to class documentation, only to method documentation.
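
As a related note, if the implementing method has no Javadoc comment at all, javadoc will inherit the interface documentation automatically; {@inheritDoc} really earns its keep when you want to append class-specific detail to the inherited text. A hypothetical example:


class bar implements foo {

    /**
     * {@inheritDoc}
     * <p>
     * Any class-specific notes can follow the inherited text here.
     */
    public void x() {
        // do stuff
    }
}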

Tuesday, 7 June 2011

OpenSessionInView - Outside the View Pt. II

This update is a follow-up to my previous article about extending Hibernate Sessions without using OpenSessionInView - Outside the View. In that article, I introduced the idea of using Spring's HibernateInterceptor and Aspect Oriented Programming to wrap objects that need extended Hibernate session support.

That worked fantastically for us, for about 10 minutes, until we realised that the HibernateInterceptor approach could only advise beans defined in the AppContext. We had a definite requirement to cover objects loaded and instantiated at run-time - for instance, objects instantiated as part of a Quartz scheduled job.

The solution we came up with was a mix of AspectJ and annotations. We defined a simple annotation called SessionManaged:


// Retained in the compiled classes so the AspectJ pointcut can match on it
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
public @interface SessionManaged {
}


We will use this annotation to mark the classes or methods we want to have Hibernate session scope. We then define our own AspectJ aspect, HibernateInterceptorAdvice:


@Aspect
@Configurable
public class HibernateInterceptorAdvice {

    private static Logger logger = Logger.getLogger(HibernateInterceptorAdvice.class);

    @Autowired
    private SessionFactory sessionFactory;

    HibernateInterceptorAdvice() {
    }

    // Only execute around @SessionManaged annotated methods/objects
    @Around("execution(@com.essensys.bluefin.annotations.SessionManaged * *(..)) && !within(HibernateInterceptorAdvice)")
    public Object interceptCall(ProceedingJoinPoint joinPoint) throws Throwable {

        /** Perform the pre-execution logic **/
        // Fetch new Session
        Session session = SessionFactoryUtils.getSession(sessionFactory, true);

        // Get Session Holder
        SessionHolder sessionHolder = (SessionHolder) TransactionSynchronizationManager
                .getResource(sessionFactory);

        // Check for existing session
        boolean existingTransaction = (sessionHolder != null && sessionHolder
                .containsSession(session));

        if (logger.isDebugEnabled())
            logger.debug("Existing Session: " + existingTransaction);

        // If we have no existing session, create a new one and bind it
        if (!existingTransaction) {
            if (sessionHolder != null) {
                sessionHolder.addSession(session);
            } else {
                TransactionSynchronizationManager.bindResource(sessionFactory,
                        new SessionHolder(session));
            }
        }

        /** Perform the business logic call and return its value **/
        // Now we have an opened session, proceed with the execution
        try {

            Object retVal = joinPoint.proceed();
            return retVal;

        // Re-throw any exceptions, but make a note in the Aspect
        } catch (Exception e) {

            if (logger.isDebugEnabled())
                logger.debug("Exception Encountered in Aspect", e);

            throw e;

        /** Perform the post-execution logic **/
        } finally {

            // Check to see if we used an existing session; if so do nothing,
            // if we used a new session, unbind it.
            if (existingTransaction) {

                if (logger.isDebugEnabled())
                    logger.debug("Not Unbinding Existing Session");

            } else {

                // Close Session
                SessionFactoryUtils.closeSession(session);

                // Check Session is still bound
                if (sessionHolder == null
                        || sessionHolder.doesNotHoldNonDefaultSession()) {

                    if (logger.isDebugEnabled())
                        logger.debug("Unbinding Session");

                    // Unbind Session
                    TransactionSynchronizationManager.unbindResource(sessionFactory);
                }
            }

        }

    }

}


The reason we use AspectJ for this is that we can weave the target classes at load time, rather than relying on run-time proxies as Spring AOP does. To make sure this happens, you may need to set up an instrumented classloader for your application server (see the Spring documentation under Load-Time Weaving), and you'll need the following in your AppContext:


<context:component-scan base-package="pkg" />
<context:annotation-config/>

<context:load-time-weaver/>
<context:spring-configured/>
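
If your application server doesn't ship an instrumentation-capable classloader, the usual alternative is to start the JVM with Spring's instrumentation agent - something along these lines (the jar path and version are purely illustrative):


# added to the server's startup options (JAVA_OPTS, CATALINA_OPTS, etc.)
-javaagent:/path/to/spring-instrument-3.0.5.RELEASE.jar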


And you'll need to create an aop.xml file in your META-INF directory that maps out your aspects and defines which classes in the package are to be woven:


<!DOCTYPE aspectj PUBLIC "-//AspectJ//DTD//EN" "http://www.eclipse.org/aspectj/dtd/aspectj.dtd">
<aspectj>
    <weaver options="-verbose">
        <!-- only weave classes in our application-specific packages -->
        <include within="pkg.console..*"/>
    </weaver>

    <aspects>
        <!-- weave in just this aspect -->
        <aspect name="pkg.HibernateInterceptorAdvice"/>
    </aspects>
</aspectj>



Whenever a woven class has one of its annotated methods called - whether the object came from the AppContext or was instantiated at run-time - this setup should take care of the Hibernate session. If there is an existing one it will use that; if not, it will create a new one and close it after the execution of the annotated method.
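
As an illustration, a class instantiated at run-time - say a Quartz job - now only needs the annotation on the method it wants session coverage for. The job, DAO and entity names below are hypothetical:


import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

import com.essensys.bluefin.annotations.SessionManaged;

public class NightlyReportJob implements Job {

    // Hypothetical DAO, wired in via @Configurable/@Autowired or a service locator
    private AccountDao accountDao;

    @SessionManaged
    public void execute(JobExecutionContext context) throws JobExecutionException {
        for (Account account : accountDao.findAll()) {
            // Lazy relationships can be navigated safely here - the aspect has
            // bound a Hibernate Session around this method call
            account.getInvoices().size();
        }
    }
}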

Thursday, 6 January 2011

OpenSessionInView - Outside the View

OpenSessionInView is a well known session management pattern, used to open and close Hibernate sessions at the start and end of each HTTP request in a web application. This assumes that the HTTP request actually represents a viable unit of work, and as such it allows things such as Object Relational Mapping managers to work within a single session for the duration of that request. Why would this be necessary? The key driver in my work for using OpenSessionInView has been the use of lazy initialization for Hibernate relationships. For instance, Object A has a relation to a set of Object B. Loading Object A through a Hibernate DAO is fine. But passing Object A to Object C, which then tries to call A.getB(), will throw a LazyInitializationException with a 'no session' error.

Using OpenSessionInView allows the same session to be used for the duration of the HTTP request that triggered this set of work, meaning that you can lazily load any depth of relationships (A.getB().getD().getE()...) without worrying. Of course, you could always just tell all of your relationships to eagerly load, but if you have a tightly related data model you may well end up loading your whole database into memory every time you pull an object from a DAO.
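
To make the failure mode concrete, here's a minimal sketch with hypothetical entities and DAO (the mappings are illustrative, not from our codebase):


import java.util.Set;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.OneToMany;

@Entity
public class A {

    @Id
    private Long id;

    // Collection relationships are lazy: the B rows are only fetched
    // the first time getB() is actually navigated
    @OneToMany(fetch = FetchType.LAZY)
    private Set<B> b;

    public Set<B> getB() {
        return b;
    }
}

// Later, in Object C, after the Session that loaded 'a' has been closed:
//     a.getB().size();   // throws LazyInitializationException - no Session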

So OpenSessionInView works very nicely with servlet/HTTP based systems, and it is very easy to set up in Spring - there's loads of documentation online, and at its core it just needs you to add a servlet filter to your web.xml:


<filter>
    <filter-name>openSessionInViewFilter</filter-name>
    <filter-class>
        org.springframework.orm.hibernate3.support.OpenSessionInViewFilter
    </filter-class>
    <init-param>
        <param-name>singleSession</param-name>
        <param-value>true</param-value>
    </init-param>
    <init-param>
        <param-name>flushMode</param-name>
        <param-value>AUTO</param-value>
    </init-param>
    <init-param>
        <param-name>sessionFactoryBeanName</param-name>
        <param-value>sessionFactory</param-value>
    </init-param>
</filter>

<filter-mapping>
    <filter-name>openSessionInViewFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>



But what if the actions you want to take are not initiated by a request from the view? In that case, a servlet filter can't help - it will never be fired by the filter chain, as no servlet is ever called! What we need is a way to wrap our session around a particular chain of execution. Some areas where I have come across this requirement are calling functionality at load time (usually during development), and calling functionality through a scheduling API like Quartz. In these cases, we will get session errors due to lazy loading even though our OpenSessionInView filter is working fine.

Thankfully Spring provides a variation on the servlet filter that can be applied as an aspect to any given object, without needing any re-coding, and without needing to roll our own session handling code into every business object that needs this functionality. This is the HibernateInterceptor, and it is functionally the same as the servlet filter. To set this up, you'll need to add two new beans to your app context:


<bean id="hibernateInterceptor"
class="org.springframework.orm.hibernate3.HibernateInterceptor">
<property name="sessionFactory">
<ref bean="sessionFactory" />
</property>
</bean>

<bean id="autoProxyCreator"
class="org.springframework.aop.framework.autoproxy.BeanNameAutoProxyCreator">
<property name="interceptorNames">
<list>
<idref local="hibernateInterceptor" />
</list>
</property>
<property name="beanNames">
<list> <!--this proxies every bean with the specified pattern -->
<value>testbean</value>
</list>
</property>
</bean>


The first bean, HibernateInterceptor, provides the advice itself (this does assume you are using Hibernate 3). The second is a Spring BeanNameAutoProxyCreator. Essentially, this bean takes a list of bean names to proxy, which can be explicit names or patterns to match. For each one, when it is instantiated the proxy creator wraps the bean in a proxy object, to which it applies the HibernateInterceptor. This effectively binds a Hibernate Session around each call into that proxied bean, allowing it to operate as though it had been invoked through an OpenSessionInView-handled servlet.
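
For example, a bean matching the name testbean (hypothetical class below) can now lazily navigate relationships from any of its public methods, because every call from another bean goes through the proxy:


public class TestBean {

    // Hypothetical DAO and entity, injected by Spring
    private WidgetDao widgetDao;

    public void summarise(Long id) {
        Widget widget = widgetDao.load(id);
        // Safe even with no HTTP request in play: the HibernateInterceptor
        // has opened and bound a Session around this method call
        System.out.println("Widget " + id + " has " + widget.getParts().size() + " parts");
    }

    public void setWidgetDao(WidgetDao widgetDao) {
        this.widgetDao = widgetDao;
    }
}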

Friday, 24 September 2010

Collaborative Coding

Just a quick post this time, to spread the word about Ethercodes, which is essentially multiplayer notepad, with syntax highlighting. Very handy for those using agile methods, or just getting a bit of help when two (or more!) heads are better than one. Check it out, free and easy.

Friday, 6 August 2010

Paravirtualising an HVM CentOS VM

If you've installed CentOS (or, for that matter, any other Linux distro) on XenServer without using one of the templates, you may have found that you cannot install XenTools, meaning that you can't perform live migrations and the VM may not perform as well as it could. This is because a VM installed this way is typically not paravirtualised, but installed using hardware virtualisation (HVM) instead. Paravirtualising an HVM DomU is a pain, but it is definitely doable, and here's how. There's a fair bit of documentation around on the net, but I found none of the techniques worked fully for me, so here's my step-by-step guide. YMMV.

The first step is to install a Xen-Aware kernel onto the new host. Assuming the server has Internet connectivity, you can do this simply with the following command (as root):

yum install kernel-xen

This will find, and prompt you to download, the latest version of CentOS' Xen aware kernel. Now we need to generate a new initrd file without some of this kernel's SCSI drivers. To do this, execute the following:


cd /boot/
mkinitrd --omit-scsi-modules --with=xennet --with=xenblk \
    --preload=xenblk initrd-$(uname -r)xen-no-scsi.img $(uname -r)xen


This will make an initrd file without the SCSI drivers, but including the Xen networking and block device modules. If you get an error, uname may be returning a version that doesn't match the newly installed Xen kernel, so in place of $(uname -r) just insert the desired version string (e.g. 2.6.18-194.8.1.el5). You can verify the result by running:

ls /boot/

and looking for a new file called something like 'initrd-2.6.18-194.8.1.el5xen-no-scsi.img'.
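
If you want to double-check that the Xen modules made it into the new image (and the SCSI ones didn't), you can list its contents - a quick sketch, assuming the gzipped-cpio initrd format CentOS 5 uses (adjust the filename to match yours):


zcat /boot/initrd-2.6.18-194.8.1.el5xen-no-scsi.img | cpio -t | grep -E 'xennet|xenblk|scsi'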

Now we need to make some changes to the GRUB bootloader. If you look at the file /boot/grub/menu.lst, you will see the Xen kernel install has added a new option for us (as option 0). It should look something like this:


# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
# all kernel and initrd paths are relative to /boot/, eg.
# root (hd0,0)
# kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
# initrd /initrd-version.img
#boot=/dev/hda
default=1
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.18-194.8.1.el5xen)
    root (hd0,0)
    kernel /xen.gz-2.6.18-194.8.1.el5
    module /vmlinuz-2.6.18-194.8.1.el5xen ro root=/dev/VolGroup00/LogVol00
    module /initrd-2.6.18-194.8.1.el5xen.img
title CentOS (2.6.18-164.15.1.el5)
    root (hd0,0)
    kernel /vmlinuz-2.6.18-164.15.1.el5 ro root=/dev/VolGroup00/LogVol00
    initrd /initrd-2.6.18-164.15.1.el5.img
title CentOS (2.6.18-128.el5)
    root (hd0,0)
    kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00
    initrd /initrd-2.6.18-128.el5.img


You can see we have three kernel options here, and the first is the Xen aware one. There are a few changes we need to make to this file. The first is to change the line:

default=1


to:

default=0


This tells the bootloader to default to the first option (option 0), which is our Xen kernel. There is a further slight change to be made to the Xen kernel's entry to allow it to work with the XenServer pygrub bootloader. We need to remove the following line:

kernel /xen.gz-2.6.18-194.8.1.el5


Then edit the line below it to replace the word 'module' with 'kernel', and the line below that to replace 'module' with 'initrd'. On that initrd line you will also need to change the file name so that it reads 'initrd-2.6.18-194.8.1.el5xen-no-scsi.img' - the name of the initrd file we created earlier (version numbers, of course, might vary). The file should now look like the following:


# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
# all kernel and initrd paths are relative to /boot/, eg.
# root (hd0,0)
# kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
# initrd /initrd-version.img
#boot=/dev/hda
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.18-194.8.1.el5xen)
    root (hd0,0)
    kernel /vmlinuz-2.6.18-194.8.1.el5xen ro root=/dev/VolGroup00/LogVol00
    initrd /initrd-2.6.18-194.8.1.el5xen-no-scsi.img
title CentOS (2.6.18-164.15.1.el5)
    root (hd0,0)
    kernel /vmlinuz-2.6.18-164.15.1.el5 ro root=/dev/VolGroup00/LogVol00
    initrd /initrd-2.6.18-164.15.1.el5.img
title CentOS (2.6.18-128.el5)
    root (hd0,0)
    kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00
    initrd /initrd-2.6.18-128.el5.img


We now need to make some changes to the XenServer configuration for this DomU.

Firstly, you will need to shutdown the VM.

Once this is done, click on the Dom0 server in XenCentre (or connect via SSH), and navigate to the Console pane. You will need to gather a chain of information as follows:

First, we need to get the UUID of the DomU. This is the Xen internal unique ID. You can get it with the following command:

xe vm-list name-label=<name of your VM> --minimal


The output will be a single line containing the UUID, which will look something like this: c58fcc8a-4f5d-c695-d6a1-29c6063b9296

Once we have this, copy it to the clipboard and save it in a notepad window, as we'll use it in the next command. We need to find the VBD UUID for our DomU's disk. We can do this with the following:

xe vm-disk-list vm=<vm uuid>


This will return something like the following:


Disk 0 VBD:
uuid ( RO)             : 55f9199b-a5ca-f6ff-67c2-6e1830547f0b
    vm-name-label ( RO): backup-01.egh
       userdevice ( RW): 0


Disk 0 VDI:
uuid ( RO)             : 998d344a-5ec6-4bf9-8f34-190b7ad12fb6
       name-label ( RW): 0
    sr-name-label ( RO): backup-01.egh
     virtual-size ( RO): 7945689497


The one we're interested in is the VBD, whose UUID is 55f9199b-a5ca-f6ff-67c2-6e1830547f0b in the above. Like before, save this to a notepad file for later.

The next step is to set this VBD as bootable (needs the VBD UUID from the last step):

xe vbd-param-set uuid=<vbd uuid> bootable=true


We need to turn off the HVM boot policy for this VM now that we're paravirtualising it (needs the VM UUID from the first step):

xe vm-param-set uuid=<vm uuid> HVM-boot-policy=


The value after the final '=' is intentionally blank - this has the effect of 'unsetting' the parameter. Next, set the PV bootloader to pygrub (needs the VM UUID from the first step):

xe vm-param-set uuid=<vm uuid> PV-bootloader=pygrub


Finally, we set the PV kernel arguments so that console output is directed to the Xen virtual console:

xe vm-param-set uuid=<vm uuid> PV-args="console=xvc0"


You should be able to start the VM up again now. You may need to restart XenCentre if keyboard input to the VM's console is not working - this can be a side effect of the conversion from HVM to PV.
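
For reference, the whole CLI portion can be strung together with shell variables instead of copy-and-paste - a rough sketch, assuming the VM has just the one VBD (i.e. no virtual CD drive attached; otherwise pick the disk VBD by hand as above):


VM_NAME="<name of your VM>"
VM_UUID=$(xe vm-list name-label="$VM_NAME" --minimal)
VBD_UUID=$(xe vbd-list vm-uuid="$VM_UUID" --minimal)

xe vbd-param-set uuid="$VBD_UUID" bootable=true
xe vm-param-set uuid="$VM_UUID" HVM-boot-policy=
xe vm-param-set uuid="$VM_UUID" PV-bootloader=pygrub
xe vm-param-set uuid="$VM_UUID" PV-args="console=xvc0"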

Now that we're back up and Paravirtualised, we can install XenTools (recommended). To do this, log into the VM as root. From XenCentre, right click on the running VM and select 'Install XenTools' from the menu. You will see this loads a disk image called 'xs-tools.iso' to /dev/xvdd. We can now install XenTools with the following set of commands:


mkdir /media/cdrom
mount /dev/xvdd /media/cdrom
cd /media/cdrom/Linux
./install.sh


This will run the installer. Once complete, reboot the VM, and the install should be done. You can verify this is complete by clicking on the running VM in XenCentre, and going to the 'General' tab. Next to 'Virtualization State' you will see it says Optimized, showing that we're now fully paravirtualised.

One last thing. The Xen network drivers for some reason cause the newly paravirtualised VM to lose its network connectivity; however, it is very easy to restore. Log onto the machine as root and issue the following commands:


cd /etc/sysconfig/network-scripts/
mv ifcfg-eth0.bak ifcfg-eth0
ifup eth0


Test with a couple of pings, and all should be well. If something does go horribly wrong, most of the time you can revert to booting in HVM with the following XenServer CLI command:

xe vm-param-set uuid=<vm uuid> HVM-boot-policy="BIOS Order"


Once you've executed this, before you start the VM up again you'll need to go into the properties for the VM and tell it to boot from the Hard Disk, otherwise you'll just get a message in the console saying no bootable media found.

If you get a pygrub boot error, this may be down to omitting the command which sets the VBD parameter 'bootable' to true.

If during boot you get an error such as:


Kernel panic - not syncing VFS: Cannot open root device "VolGroup00/LogVol00"


This typically means that something went wrong when you generated your initrd file - the VM is booting the Xen kernel but with an initrd that still contains the original SCSI drivers rather than the Xen block drivers. Try re-creating it, and make sure your GRUB menu.lst file points to the correct initrd file.

Thursday, 22 July 2010

Extreme Networks, BGP and Route Policies

Setting up simple BGP topologies using Extreme Networks' XOS based switches is pretty straightforward, but I've found there to be a slight lack of documentation around anything simpler than route-reflectors and extended communities.

The basic topology I had was a SummitStack of two X480 switches, and a pair of gigabit links connecting the stack to an MPLS VRF operated by another company. We wanted a simple active/passive setup, with one link acting as a backup and the other carrying all traffic unless an event occurs to take it offline. Here's the basic BGP config on the SummitStack (AS numbers and IP addresses have been changed to protect the innocent).


configure bgp AS-number 65001
configure bgp routerid 242.242.242.242
enable bgp fast-external-fallover
configure bgp add network 190.190.190.0/23
create bgp neighbor 194.194.194.193 remote-AS-number 65002
enable bgp neighbor 194.194.194.193
create bgp neighbor 242.242.242.241 remote-AS-number 65002
enable bgp neighbor 242.242.242.241
enable bgp


This should be sufficient to bring the BGP sessions up and to start advertising our 190.190.190.0/23 network, and learning whatever is sent back. Note that in order to advertise the 190.190.190.0/23 network there needs to be a route for this in the switch's route table - you may need to route it to a local loopback at first.
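
One way to seed that route while you're getting started - a sketch using a loopback VLAN (the VLAN name and address are illustrative, and a static blackhole route would do the job too):


create vlan loop
enable loopback-mode vlan loop
configure vlan loop ipaddress 190.190.190.1/23
enable ipforwarding vlan loop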

However, we need a way to enforce the active/passive nature of these links, which can be done using local preference (as long as the other party will reciprocate). Local preference values for BGP routes default to 100, so all we need to do is increase the local preference for routes learned through our preferred primary circuit, and they will take precedence in the route table. We can do this using route policies.

We can also use route policies to have a bit more control over what routes we send to our peers, and what routes we'll accept in return. In this specific case we're only interested in learning a default route (0.0.0.0/0) and advertising our 190.190.190.0/23 network, and we can build route policies that will enforce this for us. First, let's build the policies that will restrict what we learn and set the local preference.

We'll create two policy files, one for the primary circuit called bgp_pri_policy.pol, and one for the backup, called bgp_sec_policy.pol.
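
If you don't want to TFTP the .pol files onto the switch, you can create and syntax-check them directly on it - a sketch (double-check the commands against your XOS version):


edit policy bgp_pri_policy
check policy bgp_pri_policy
edit policy bgp_sec_policy
check policy bgp_sec_policy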

Here's bgp_pri_policy.pol:


entry default_pri {
    if match all {
        nlri 0.0.0.0/0 exact;
        next-hop 194.194.194.193;
    } then {
        origin egp;
        local-preference 500;
        permit;
    }
}

entry deny_other {
    if {
    } then {
        deny;
    }
}


Note that it matches on the next-hop (this confirms the route has come from our primary peer) and on the NLRI - the route we've learned. The first entry accepts 0.0.0.0/0 from 194.194.194.193 only, and sets its local preference to 500. This is higher than the default 100, so it will be preferred. bgp_sec_policy.pol is very similar, only with a different next-hop and a lower local preference:


entry default_sec {
    if match all {
        nlri 0.0.0.0/0 exact;
        next-hop 242.242.242.241;
    } then {
        origin egp;
        local-preference 100;
        permit;
    }
}

entry deny_other {
    if {
    } then {
        deny;
    }
}


Note that local preference is being set to 100 here (we could have left it out, as 100 is the default, but it is included for completeness' sake). We can now create a final policy to limit what we send out to our peers - good practice to prevent any unexpected route leaks. We will create a policy called bgp_exp_policy.pol with a single entry:


entry export_policy {
    if match all {
        nlri 190.190.190.0/23;
    } then {
        permit;
    }
}


This uses an implicit deny to block anything that is not permitted. We now just need to apply these policies to our peers:


configure bgp neighbor 194.194.194.193 route-policy in bgp_pri_policy
configure bgp neighbor 194.194.194.193 route-policy out bgp_exp_policy

configure bgp neighbor 242.242.242.241 route-policy in bgp_sec_policy
configure bgp neighbor 242.242.242.241 route-policy out bgp_exp_policy


If you refresh the BGP sessions now, you should see the routes behaving as expected.
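
A couple of checks worth running at this point - confirm the sessions are established, that only the default route has been accepted, and that the route table prefers the primary peer (exact command output varies by XOS version):


show bgp
show bgp neighbor
show iproute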