WLC School for Network Admin’s Who Can Read Real Good: Part 1

November 30, 2009

So the day started out normally: servers were humming along, projects were running on schedule, no one was complaining about the network speeds… ok, so it wasn’t that normal after all, but I digress. I had a meeting with some folks and received a set of specifications for some software we were to utilize for medical records. Lo and behold, in the requirements was the necessity for a network connection whenever the application was to be used. We had nothing in place for that but thick concrete and broken dreams.

After reviewing the requirements and workflow a little deeper, our team began to see a light at the end of the tunnel. We only needed connectivity in the medical service areas, not everywhere in between. We could do this, and not only deliver for the medical folks but begin building the foundation of an empire… or at least of a wireless infrastructure.

A few key factors drove us to our final conclusion. One, we knew that to cover the necessary areas with a strong signal we would need quite a few access points. That single nugget of knowledge led us to a second criterion: central management. Fortunately I have had several installations utilizing the Cisco 4400 series WLCs, and between personal experience and the need for centrally managed wireless, Cisco was an easy starting point.

As I said before, I had planned on using my previous experience with the 2100 and 4400 WLCs to create a smooth introduction to the wonders of wireless. First I took a look at the new 5500 series WLC and was very happy with a few key features.

  • Licensing is software driven, not hardware driven; the controller can be expanded to support more APs with a license key.
  • There is a single management interface (in the past there was a Management interface as well as an AP-manager interface, using two IPs, normally on the same network).
  • OfficeExtend, which creates a secure tunnel between an AP, presumably at a teleworker’s site, and the main controller.
  • ***REMEMBER TO ORDER THE GBICs*** There are no usable Ethernet ports available on any of the enterprise WLCs.

So I said sign me up.

One of the things to consider once you have decided on wireless (any wireless) is what your network needs will be; that will determine the method of security and the availability of that network throughout your enterprise. In my scenario I had the need for the following:

Department   Terminal Type      Employee   Security Type   DB Location
Agency       PC Domain Member   Yes        PEAP            RADIUS (AD)
Medical      Embedded           Yes        WPA2 PSK        None (PSK)
Guest        PC Off Domain      No         Webauth         Local
Employee     PC Off Domain      Yes        Webauth         RADIUS (AD)

Each of these four scenarios represents a different approach, and we will visit each of them in this series. Part 1 will be the initial setup of the WLC box, RADIUS, and the core switching; Part 2 will be PEAP authentication; Part 3, WPA2 with PSK; Part 4, Webauth with a local authentication DB; and finally Part 5, Webauth with RADIUS authentication.

Part 1

In this scenario we are using the above network information to build out our initial wireless infrastructure. The setup will be similar regardless of what elements you have in the mix, but just to cover my bases: I am working on a Windows Server 2003 R2 Enterprise 32-bit Domain Controller, a Cisco 3560E running IOS 12.2(35)SE5, a Cisco CT5508 running 6.0.182.0, and Aironet 1142N LWAPs running IOS 12.4(21a)JA. From here on I will reference the 5508 as the WLC, the server as the DC, the 3560E as the Core, and the 1142s as the APs. Now on to the nitty-gritty…

Server 2003 DC Configuration

In this scenario we are using Microsoft Active Directory to provide AAA information to the requesting entity (the WLC). The specific service which will be handling this is the Internet Authentication Service (IAS). With that said, go to Control Panel -> Add or Remove Programs -> Add/Remove Windows Components. When the Windows Components Wizard window opens, scroll down to Networking Services, select it, and click “Details”. Scroll down to Internet Authentication Service, click the checkbox, and click the “OK” button. Click “Next” then “Finish” to wrap it up.

You must also set up Certificate Services, which means you might as well set up IIS (Internet Information Services), which will allow you to web-enroll certificates. So the first step is to go back into Control Panel -> Add or Remove Programs -> Add/Remove Windows Components. When the Windows Components Wizard window opens, scroll down to Application Server and select it, click the “Details” button, and on the “Application Server” window scroll down to Internet Information Services (IIS) and check the box (it will appear as a light grey, that’s OK). Click the “OK” button, followed by “Next” and “Finish”.

To set up Certificate Services you must be on a DC. Back in Add/Remove Windows Components, check the Certificate Services box. Be aware that you will be unable to rename the machine or change its domain membership without killing all the certs you have issued; you will get a much longer version of that in the popup, at which point you click OK and move on. This will be an Enterprise root CA, so select that radio button and choose “Next”.

For the Common Name, I usually stick with the hostname of the server in question.

Then choose defaults for the rest of the wizard. *POOF* you now have a Self-Signed Certification Authority.

Now that you have all the requisite services in place, we will need to make sure they play fairly together. Go to the IAS console via Administrative Tools -> Internet Authentication Service. In this window we will begin tying our elements together.

First you must integrate IAS with AD. You will do so by right-clicking on “Internet Authentication Service (Local)” and choosing “Register the Server in Active Directory”. Next, right-click on “RADIUS Clients” and choose the “New RADIUS Client” option. In the window that appears, populate your information (a friendly name and the WLC’s management IP, 10.1.10.50 in our case) and click “Next”.

In this window, leave the Client-Vendor drop-down on RADIUS Standard. Choose a shared secret (keep this key in mind; we will enter it on our WLC). I am choosing $WLC$ecret007.

Once you click “Finish” the new client will appear in the IAS console.

Next I like to remove all the default access policies and add my own. The thing to remember about access policies is that they work from the top down (like most ACLs). Each access policy will be unique to the scenario that we are driving, meaning that we will not be doing the access policies just yet.

Next we have the Core; nothing too big there. The hardest part is the DHCP portion that allows the APs to get the proper WLC settings. From Cisco’s site:

The hex string is assembled by concatenating the TLV values shown below:

Type + Length + Value

Type is always f1 (hex). Length is the number of controller management IP addresses times 4, in hex. Value is the IP addresses of the controllers, listed sequentially, in hex.
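To make that concrete, here is a minimal Python sketch (my own illustration, not anything from Cisco) that assembles the option 43 hex string from a list of controller management IPs. With our controller’s management interface at 10.1.10.50 (set in the wizard below), it yields f1040a010a32, which IOS writes in dotted form as f104.0a01.0a32:

import ipaddress

def wlc_option43(controller_ips):
    # Type is always f1; Length is 4 bytes per controller IP;
    # Value is each management IP rendered in hex, concatenated.
    value = b"".join(ipaddress.IPv4Address(ip).packed for ip in controller_ips)
    return bytes([0xF1, len(value)]) + value

print(wlc_option43(["10.1.10.50"]).hex())   # -> f1040a010a32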

The relevant info is as follows:

ip dhcp pool SERVERS
 network 10.1.10.0 255.255.255.0
 default-router 10.1.10.1
 dns-server 10.1.10.10
 option 60 ascii "Cisco AP c1140"
 option 43 hex f104.0a01.0a32

interface Port-channel1
 description LAG Connection to WLC 5508
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan add 10
 switchport mode trunk

interface GigabitEthernet0/10
 description Connection to 5508 WLC
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode on

interface GigabitEthernet0/11
 description Connection to 5508 WLC
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode on

interface Vlan10
 description Servers
 ip address 10.1.10.1 255.255.255.0
 no shutdown

vlan 10
 name Servers
 state active

We will be adding bunches to the core once we get into each scenario. At this point we are only trying to get the bare essentials for communication between the key devices.

Start with the WLC off. The 5508 gives several choices for the initial configuration; we are using the console port (the cables came with the appliance). The settings are standard Cisco VT-100 and are as follows:

  • 9600 baud
  • 8 data bits
  • 1 stop bit
  • No parity
  • No hardware flow control
  1. Power on the WLC.
  2. Enter the controller name.
  3. Enter the administrator username/password (the default is admin/admin).
  4. Set the Service Interface IP configuration to DHCP.
  5. Yes to LAG (plug the port 1 and 2 GBICs into Core ports G0/10 and G0/11).
  6. Enter the Management Interface IP: 10.1.10.50
  7. Enter the Management Interface Mask: 255.255.255.0
  8. Enter the Default Router IP: 10.1.10.1
  9. Enter the VLAN identifier of the management interface: 10
  10. Enter the IP of the default DHCP server that will supply IP addresses to clients: 10.1.10.50
  11. Enter the IP of the virtual interface: 1.1.1.1
  12. Enter the name of the mobility group/RF group: WIFI
  13. Enter the network name: WIFI
  14. To Require clients to request an IP address from a DHCP server: NO
  15. Configure a RADIUS server : YES
  16. Enter RADIUS Server IP: 10.1.10.10
  17. Enter RADIUS Server Port: 1812
  18. Enter Secret Key: $WLC$ecret007
  19. Country Code: US
  20. 802.11b: YES
  21. 802.11a: YES
  22. 802.11g: YES
  23. RRM: YES
  24. NTP: YES, enter time server IP: 10.1.10.10

That sums up the wizard. The WLC should now be able to communicate with the Core, the AAA server, and the APs.

And that sums up Part 1, the Infrastructure. We will get more scenario-specific in the next part.

Terminal Services: It’s not really PFM (Pure F***ing Magic)

November 25, 2009

I have been frustrated in my Terminal Services environment because every time I seem to get my problems put to bed, they wake up again meaner than ever. I have approximately 250 TS users, with 50 users logged on at any given time. We are running Server 2003 R2 Enterprise, and when I initially arrived on the scene we were running two TS machines on a Microsoft Virtual Server platform and a third on a standalone physical machine. They were load-balanced via Microsoft NLB Cluster services and would stop functioning sporadically. The only solution at the time was to tear down the NLB cluster and rebuild it. Soon thereafter we left the Microsoft virtual environment in favor of VMware. We went that route specifically for Site Recovery Manager and the ability to get VMs restored to our DR facility in fairly short order. So with that I had three very beefy servers geared up as ESX 4.0 hosts. I placed them in my Virtual Center and installed two VERY beefy TS machines (first mistake): two 4-core, 8 GB servers with 100 GB of storage each. I set up a default Microsoft NLB cluster (second mistake) to load balance both of my TS servers.

Well, as some of you may have already experienced, it doesn’t quite work that easily. The symptom was that I could not reach the second server; in fact, the second server had issues reaching the network consistently as well. After some research I found out that it was due to the way that NLB handles MAC addresses and the NLB cluster IP, and the way that VMware handles RARP flooding. I am not going to deep-dive right now, but you can find out more about it here. The short of it was that I needed to configure the NLB in multicast mode. So I did, and that too didn’t work. So I took it to the next level and disabled RARP transmission as outlined here, and all seemed good… for a while. VMotion was acting up after that, mainly because with RARP transmission disabled, the physical switches were never notified that a virtual server had moved. This ruined my plans for dynamic VM resource management for the entire vSwitch. There had to be a better way.

I dug down deeper and really began homing in on the ARP/MAC and cluster IP issue. I started looking at my Cisco switch for ways to solve my problem, and I found it: I needed to create static ARP and MAC entries in the switches directly connected to the VM hosts. The following commands worked for me (edited, of course):

conf t
 arp 10.0.100.10 03bf.0a00.640a ARPA
 mac-address-table static 03bf.0a00.640a vlan 1 interface Fa0/1 Fa0/15 Fa0/16
 end
wr mem

  • Where 10.0.100.10 is the NLB cluster IP address
  • 03bf.0a00.640a is the virtual MAC address of the NLB cluster itself
  • vlan 1 is the VLAN that the vSwitch (and therefore the VMs) sits on
  • Fa0/1, Fa0/15, and Fa0/16 are the interfaces connecting to the VM hosts
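In case you are wondering where that funny MAC came from, in multicast mode NLB derives the cluster MAC from the cluster IP itself: 03bf followed by the four octets of the cluster IP in hex. A quick Python sketch (my illustration) that spits it out in Cisco’s dotted format:

import ipaddress

def nlb_multicast_mac(cluster_ip):
    # Multicast-mode NLB MAC = 03bf + the cluster IP's four octets in hex
    raw = "03bf" + ipaddress.IPv4Address(cluster_ip).packed.hex()
    return ".".join(raw[i:i + 4] for i in range(0, 12, 4))

print(nlb_multicast_mac("10.0.100.10"))   # -> 03bf.0a00.640a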

And I had stability at layer 2/3… but of course that was not good enough.

Shortly thereafter I started getting complaints that the performance was just too slow. I looked at the summary of the VM and saw that the consumed host CPU was minimal and that the memory usage was also minimal. It was then that I started thinking virtually, not physically. VMware has an evil habit of waiting until all the assigned cores are available before putting through a process. When I have a 4-core requirement on a relatively busy VM host, it takes a long time to get all 4 cores free to get anything done. So I began to employ the Zerg Rush strategy for TS boxes (hey, it worked in StarCraft). I created a small 2-core, 4 GB RAM TS template and deployed many of them. We have licenses for Server 2003 Enterprise and therefore had a 1-to-3 exchange rate of physical to virtual. I also kept most of them on the same VM host, since similar applications would be competing for 2 cores in a similar way, thus giving preference to none. My performance woes seemed to disappear, but as you can guess… seeming is believing.

There are some shortcomings of Server 2003 NLB that make this tool a bit inadequate. All the servers must be on the same network; the affinity is configurable, but restrictions based on connection count are not. If a member of the NLB cluster goes down, NLB will still attempt to route users to it. There is no reporting to speak of, and finally there is a 32-server limit. It is for these reasons (and because of the goofy way NLB handles ARP) that I decided to go with a 3rd-party NLB solution. I ended up choosing loadbalancer.org’s virtual appliance and couldn’t be happier. It uses a loopback adapter on each server, configured with the cluster IP and a high metric, to overcome the ARP issue. I can choose various weighted approaches to load balancing. I get reporting, health checks, and can use the NLB for a myriad of load-balancing scenarios. It was quick to set up and the servers are good to go; now I can party… I wish.

While I have a solid layer 2/3 foundation with a robust NLB setup bringing redundancy and availability to my environment, I was hamstrung at layer 7 itself. The Terminal Servers themselves were just not performing adequately for more than a couple of days at a time. I was receiving the following errors:

“Windows – Low on Registry Space – The system has reached the maximum size allowed for the system part of the registry. Additional storage requests will be ignored.”

“Windows was unable to load the profile but has logged on with the default profile system. Default – Insufficient system resources exist to complete the requested service.”

I would reset some lingering disconnected sessions and eventually reboot the system. All would be well for a while, until the message came back. Additionally, I noticed that the temporary profiles in C:\Documents and Settings were eating up all my space. So I figured, “Hey! I have a SAN with plenty of space; I’ll just mount an iSCSI drive and put Documents and Settings there.” I know, I’m brilliant, right?

“Documents and Settings is a Windows system folder and is required for Windows to run properly. It cannot be moved or renamed.”

The problem then was that all the articles I read were really focused on an unattended install with an unattend.txt file. I already had machines in production; I didn’t want to have to build a new machine and create a new template just to experiment with this plan. So I took the following article and read down to the registry path edit:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList

When I went to the registry path I found the setting to change. I changed the ProfilesDirectory entry to reflect the new iSCSI drive I had mounted. I then deleted all the non-stock GUIDs (keeping All Users, Administrator, and Default User).

I was not worried, because we have roaming profiles for our users; all the profiles on that machine were temporary. I then navigated to the C:\Documents and Settings folder and deleted all the non-stock profiles. I copied the All Users, Administrator, and Default User folders to the new location, and after a reboot I was done with that. Testing showed new users getting their profiles created on the new drive.
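If you would rather script that registry change than click through regedit, here is a minimal Python sketch (assuming Python is available on the box; E:\Profiles is a hypothetical path standing in for your iSCSI drive):

import winreg

KEY_PATH = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    # REG_EXPAND_SZ matches the stock type of the ProfilesDirectory value
    winreg.SetValueEx(key, "ProfilesDirectory", 0, winreg.REG_EXPAND_SZ, r"E:\Profiles")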

As for the registry issue, I dug up this article, which made sense: the legacy printers were dragging down my user profiles, creating relics and hogging space. I added the PrinterMaskKey to the HKEY_USERS\.default\printers registry subkey and rebooted the server. That made quick work of the registry error. I made sure I was on the latest service pack, then rebooted. The step-by-step is below.

To enable this hotfix, you must create the PrinterMaskKey registry subkey. To do this, follow these steps:

  1. Click Start, click Run, type regedit, and then click OK.
  2. Locate the following registry subkey: HKEY_USERS\.default\printers
  3. Right-click the subkey that you located in step 2, point to New, click Key, type PrinterMaskKey, and then press ENTER.
  4. Exit Registry Editor.
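If you are rolling this fix out to several terminal servers, the same key can be created programmatically; a minimal sketch (again assuming Python on the box):

import winreg

# Creating the empty PrinterMaskKey subkey is all the hotfix requires
with winreg.CreateKey(winreg.HKEY_USERS, r".DEFAULT\Printers\PrinterMaskKey"):
    pass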

When all was said and done, I wanted to reclaim my space. I ran a defrag and emptied the Recycle Bin. I then downloaded sdelete, extracted the sdelete.exe file, and saved it to the root of my C:\. I then ran sdelete -c on the server to zero out all the free space on the virtual disk. Finally I shut down the TS VM and migrated it to another datastore; since the drive is thin-provisioned I was able to get my space back and move on from there. Now I hope I can rest… we will see.

Protection From Worms Isn’t Just for Dogs

November 18, 2009

Protecting Against Email Worms for Real! (For ASA folks)

I received a call today from my local ISP who politely informed me that my organization was spamming others with Viagra and Free Money spam. Many who are reading this will say “yup, worm”. And those who just said that were right.

Now in my case I was notified by my carrier of the issue; however, others may have a different set of symptoms alerting them. They may start out with users complaining that the people they have been emailing are not getting the message. You might even send out a test message from your machine to your side-business hobby site and get the message just fine. You might even go so far as to say, “I just got through fine; it is probably an issue with their mail service.” As the day goes on, more and more users complain that seemingly random email recipients are not getting their messages. Then finally someone with a hosted email filter forwards you an email saying that you have been blacklisted!

WHAT?

Yup. From here you Google “blacklist check” and run through the standard routine, checking your domain against dnsbl, Spamhaus, and the like (or you can skip that and click on the aforementioned links). There you may find some interesting red “X”s. Well, what now?
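If you would rather script the check than click through a web form, DNSBLs work over plain DNS: reverse the octets of your public IP, append the blacklist zone, and resolve it; an answer means you are listed, NXDOMAIN means you are clean. A minimal Python sketch (the IP is from the documentation range and the zones are just common examples):

import socket

def is_listed(ip, zone):
    # Query <reversed-octets>.<zone>; a successful lookup means "listed"
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)
        return True
    except socket.gaierror:
        return False

for zone in ("zen.spamhaus.org", "bl.spamcop.net"):
    print(zone, is_listed("203.0.113.25", zone))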

If you have an ASA, it is as simple as logging into the GUI. Once it is up click the “Monitoring” button at the top of the page.

Once there, choose Logging at the bottom of the navigation panel on the left side of the page.

Then select Real-Time Log Viewer in the navigation pane; in the main window choose “Debugging” as the logging level, with a buffer limit of 1000, and click the “View” button. This will open the Real-Time Log Viewer window, where you will have the opportunity to view all the traffic hitting the firewall.

Watch for traffic with a source IP inside your network and a destination port of 25.

Now that you know who is causing the issue, go ahead and run your normal cleanup on that PC. I personally like Malwarebytes, but you can use whatever you are comfortable with.

Now that you have the issue resolved on the actual culprit, you should protect your network from this happening again. If you have an ASA or a PIX, you can log into the CLI and run the following commands (xxx.xxx.xxx.xxx and xxx.xxx.xxx.yyy represent your actual mail servers):

access-list inside-in extended permit tcp host xxx.xxx.xxx.xxx any eq smtp
access-list inside-in extended permit tcp host xxx.xxx.xxx.yyy any eq smtp
access-list inside-in extended deny tcp any any eq smtp
access-list inside-in extended permit ip any any
access-group inside-in in interface inside

line 1 allows mail server 1 to use SMTP outbound
line 2 allows mail server 2 to use SMTP outbound
line 3 blocks all other outbound mail traffic
line 4 allows all other outbound traffic
line 5 applies these rules to the inside interface, on traffic coming into it

 

Now if you have an SMTP server in your DMZ which handles all your inbound and outbound SMTP connections (like an Exchange front-end or Edge Transport server), and you have devices like printers or network monitors sending mail through it, you will need to add the following (where zzz.zzz.zzz.zzz represents your SMTP server in the DMZ):

access-list inside-in line 3 extended permit tcp any host zzz.zzz.zzz.zzz eq 25

 

(Note: if this is your scenario, you can probably get away with eliminating lines 1 and 2.)

At this point you need to test to make sure it worked. Open a command prompt and telnet to a mail server you know of; here are some of Yahoo’s:

g.mx.mail.yahoo.com   98.137.54.238
a.mx.mail.yahoo.com   67.195.168.31
b.mx.mail.yahoo.com   74.6.136.65 66.196.82.7

 

The command will look like this:

telnet g.mx.mail.yahoo.com 25

If you get a reply back from your workstation, you have made a mistake somewhere; with the ACL in place, the connection attempt should simply fail or time out.

Now, log into your mail server (or SMTP relay server) and run the command again; this time you should get the remote server’s SMTP greeting, a line starting with 220.
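If telnet isn’t handy, or you want to run the same check from several machines, a small Python sketch does the job (the host is one of the Yahoo MX servers listed above):

import socket

def smtp_banner(host, port=25, timeout=10):
    # Open a TCP connection to port 25 and grab the SMTP greeting, if any
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            return s.recv(512).decode(errors="replace").strip()
    except OSError:
        return None

print(smtp_banner("g.mx.mail.yahoo.com"))
# From a blocked workstation this prints None; from your mail server, a 220 banner.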

If you did not get results similar to these, try, try again; fiddle with it and you will get it, as you should be pretty close as it is. The last step is to contact all those places that blacklisted you and follow their instructions to clear your name. You are now (more or less) safe from being harassed by your ISP or being blacklisted; that is, unless you really are a dirty, hardcore spammer.

Sean Fretenborough

Can’t Fix Stupid

November 17, 2009

Ron White made a fortune out of the phrase, but in the Information Technology field it is a mantra that can lead to ultimate disaster. It is easy to feel superior to others when we are dealing in areas as esoteric as networking and computers. How many times have we confronted individuals who couldn’t operate a light switch, much less a PC, and wondered how they ever became a Director or Vice President? This superiority complex is fed most by those individuals who view the world as one-dimensional.

I recall taking classes a long time ago on managing a Windows 2000 Active Directory infrastructure. The class began with a two-week primer on basic PC and network maintenance. I distinctly recall an argument between one of my classmates and the instructor over the number of interrupts available on a given motherboard, the actual clock cycle of a particular Pentium III processor, and whether or not the AGP interface was necessary. He was a PC repair technician, and since he was in his element, he felt the instructor had no credibility. I can’t say I wasn’t a little smug when he failed a couple of Microsoft exams and was put in his place once we began discussing permissions, groups, and OUs. By the end of the class the student was eating out of the teacher’s hand.

If we only look at a single dimension, in our case PC usage, we can begin to foster a disdain for the people we are serving. Because Mr. Q cannot set his out-of-office reply without the helpdesk, we are inclined to think that somehow we are better suited to the role of Director of Sales and Marketing… yeah, right. Isn’t ego funny? All it takes is one dimension of a job in which we excel, and because of that we think we are better in them all.

We can’t fix stupid, but we can redefine it. I will not argue that there are stupid people in this world; moreover, there are a whole lot of smart people who do very stupid things, especially when it comes to technology. But I can guarantee most of us fit into one of the above categories.

A focus on multiple dimensions is nothing new. If we look for the best in others, we will find it easier to work with or for them. Once we see the best in others, we can find ways to improve ourselves and most certainly deliver better service. Ultimately, the better we serve our clients, the better their perception of us will become, and the less likely they are to view the IT department as a necessary evil.

So I guess maybe we can fix stupid, but to do so we must start at its source… ourselves.