MOSS 2007 Search Service Is Currently Offline

Posted by Amber Pham on October 8th, 2009

This error is showing up in quite a few forum posts with no definitive solution. I was able to resolve this for a MOSS 2007 small farm running on Windows Server 2008 64-bit. The main symptom is that when you try to open your search settings in the SSP, you get this message in the browser window:

The search service is currently offline. Visit the Services on Server page in SharePoint Central Administration to verify whether the service is enabled. This might also be because an indexer move is in progress.

In my case, looking in the Windows Application log, I had event ID 10036 messages from Gatherer.  These messages indicated that the search service account did not have access to stored procedures in two of the databases.

The problem resulted from changing the search service account without adding permissions for the new account to the search and SSP databases.  After adding the account permissions in SQL and restarting the osearch service for SharePoint, the search settings in the SSP were available.
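For reference, the fix amounts to granting the new account rights in each database named in the event ID 10036 messages. A hedged T-SQL sketch, where the server, database, and account names are placeholders (your farm's database names will differ):

```sql
-- Run against each database named in the Gatherer event ID 10036 messages.
-- DOMAIN\svc-search is a placeholder for the new search service account.
USE [SharedServices1_Search_DB];
CREATE USER [DOMAIN\svc-search] FOR LOGIN [DOMAIN\svc-search];
EXEC sp_addrolemember 'db_owner', 'DOMAIN\svc-search';
```

Then restart the osearch service as described above so the service picks up the new permissions.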

During troubleshooting, I also found a set of three event IDs (6398, 6482, 7076) with messages indicating that an Administrative job could not run. That problem was resolved with a hotfix from Microsoft.

NAP 802.1X for Windows XP SP3

Posted by Amber Pham on September 21st, 2009

Microsoft has written a step-by-step guide for setting up a proof-of-concept lab to demonstrate NAP with 802.1X on the new Windows 2008 NPS. NPS on Windows 2008 replaces IAS on Windows 2003, and Network Access Protection functionality is now built in. The guide, available for download from Microsoft, is very detailed and easy to follow, but there’s one catch: it’s written for Vista, and 802.1X authentication works differently on XP. I got it working by compiling information from several sources and updating the step-by-step document with the changes. Here are the edits I made, so you can do the same.

On page 19, under Top Level Heading: Install the Group Policy Management feature
The headings below it should read:
“To install the Group Policy Management feature” and
“To install the NPS server role.”

On page 25, under Heading: Verify NAP policies, step 2 of the numbered list under “To verify NAP policies” reads:

“Verify that the NAP connection request policy you created in the previous procedure is first in the processing order, or that other policies that match NAP client authentication attempts are disabled. Also verify that the status of this policy is Enabled. The default name of this policy is NAP 802.1X (Wired).”

Add to that: “Open the policy and navigate to Settings > Authentication Methods.  Make sure Override network policy authentication settings is checked and that under EAP types, Microsoft: Protected EAP (PEAP) is shown.”

In the section starting on page 26, under the Top Level Heading: Configure NAP client settings in Group Policy, under “To configure NAP client settings in Group Policy,”
between steps 12 and 13, insert the following:
13.  In the console tree, navigate to Computer Configuration\Windows Settings\Security Settings\Network Access Protection\NAP Client Configuration\Enforcement Clients.
14.  In the details pane, right-click each enforcement client you want to enable, and then click Enable.
15.  In the console tree, navigate to Computer Configuration\Windows Settings\Security Settings\Wired Network (IEEE 802.3) Policies.
16.  Right-click the Wired Network…and click Create a New Windows Vista Policy.  Name the policy, and make sure Use Wired AutoConfig is checked.
17.  Click the Security tab and check Enable IEEE 802.1X… For Select a network authentication method, select Microsoft: Protected EAP (PEAP).
18.  Click Properties… and make sure Validate server certificate is checked.  Also check Enable Fast Reconnect and Enable Quarantine checks.  Select Authentication Method should show Secured password (EAP-MSCHAP v2).  Click OK.

Side note: As I was troubleshooting, the NPS log in the expanded Windows 2008 Event Viewer was invaluable for tracking down issues.  You no longer have to read IAS-format logs for basic troubleshooting.

Migrating a MOSS 2007 Front End from Windows 2003 to Windows 2008 x64

I needed to move a SharePoint 2007 front end from a Windows 2003 32-bit server to a Windows 2008 64-bit server while leaving the databases on the existing SQL 2005 server. Microsoft’s accepted method for accomplishing the change appears in many different places; if you want to move everything, that’s the way to go. Since I just wanted to move the SharePoint server itself, and I wanted a way to fall back in case there were compatibility issues, I created a staged approach. It consisted of the following major steps:

1. Build up the new Windows 2008 x64 server, install MOSS 2007 SP1 on it, and create a test site.
2. Install all applications and components that are on the original server onto the new server.
3. Plan downtime and migrate one Site Collection.
4. Test the Site Collection, and then record the exact steps that worked best.
5. Migrate the other Site Collections and decommission the 2003 server.

Since there were many steps and tricks, I wanted to share the full process. The directions assume the following SharePoint environment:

  • MOSS 2007
  • a Windows 2003 front end that also hosts the Central Administration site
  • a backend SQL 2005 server
  • a staged migration, to ensure a smooth transition for production sites
  • the same SQL 2005 server maintained on the back end (the process would be the same with a new server)

1. Document your existing installation. Record such items as:

  • third-party web parts
  • specialized DLLs – make sure there is a version compiled for 64-bit OS
  • templates (stsadm -o enumtemplates)
  • packages (stsadm -o enumsolutions)
  • presence of static paths
  • which web applications are linked to which databases

2. Prepare the Windows 2003 server:
Make sure it is at least upgraded to MOSS SP1. If possible, update it to the latest cumulative update.
The SharePoint installer account will need to be a local administrator on the SQL server, and you will need to log into the SharePoint server as that account during the installation process.
3. Prepare the Windows 2008 x64 server:
a. Install MOSS 2007 on the new server, following Microsoft's installation instructions.
b. Add any web parts or other specialized components recorded in Step 1.
c. Configure the permissions.
d. Configure the SSP. It is theoretically possible to migrate an SSP, but I found the procedure to be more trouble than comparing the two side by side and replicating the settings.
4. Perform a test site migration:
a. Make a SQL backup of the content database.
b. Create a blank database with a new database name.
c. Restore the backup into the new database.
d. Create a web application on the new server, and specify the new database name during the creation process.
e. Check the site collection administrators to make sure you are there.
f. If required, do an IIS reset ("iisreset /noforce" at the command line).
g. If using a host header for the site, create a DNS entry pointing to the new server with a test site name.
5. After testing of the migration is complete, perform the production migration:
a. Notify users that there will be some downtime.
b. Check that no timer jobs are running.
c. Quiesce the farm for five minutes.
d. Run the preparetomove command for your content database.
e. Make a SQL backup of the content database.
f. Restore the SQL backup over the top of the test database for the new farm.
g. In Central Administration, remove and re-add the content database to the web application.
h. IIS reset.
i. Test internal and external (if applicable) access to the site. Also do some functionality checks: alerts, search (after a full crawl), navigation (static links). Check the Windows event logs for errors.
6. Cleanup:
a. Remove the web application and IIS site from the original farm.
b. Remove the SharePoint installer account from the local administrators on the SQL server.
c. Remove the DNS entry for the testing site.
7. Back up your new environment as soon as it is in a satisfactory state.
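The stsadm operations behind steps 5b–5d of the production migration can be sketched as follows. This is a hedged outline with placeholder names, not the exact commands from my runbook; verify each operation's syntax with stsadm -help before running it:

```bat
:: Flush any pending administrative timer jobs (step 5b)
stsadm -o execadmsvcjobs

:: Quiesce the farm (step 5c); -maxduration is in minutes
stsadm -o quiescefarm -maxduration 5

:: Mark the content database for the move (step 5d)
:: SQLSERVER:WSS_Content_Sites is a placeholder server:database pair
stsadm -o preparetomove -contentdb SQLSERVER:WSS_Content_Sites
```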

One final note: Since I was using a fully qualified domain name for the site name, and I wanted to check functionality of the site from the local server, I ran into the Loopback Check security feature, in which Windows 2008 blocks requests coming from the local machine to prevent reflection attacks. This resulted in a 401 error. As explained here, do not simply set disableloopbackcheck = 1 to get around this. Instead, browse the site from another machine, or use Method 1 from this Microsoft article, in which you specify the host names that should be allowed locally.
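Method 1 from that Microsoft article comes down to listing the host names the server may answer for locally in a BackConnectionHostNames multi-string value. A sketch of the change, with a placeholder host name (run from an elevated command prompt, then restart IIS):

```bat
:: portal.example.com is a placeholder; list every host header used on
:: this server (REG_MULTI_SZ accepts multiple names, one per line).
reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0 /v BackConnectionHostNames /t REG_MULTI_SZ /d "portal.example.com"
```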

A brief history of WEP cracking

Posted by Irving Popovetsky on June 29th, 2009
Year             802.11 packets required to crack WEP
2001 – 2004      5-10 million (FMS attack)
2004 – 2007      500k (unique IVs) on average for 128-bit WEP (Korek attack)
2007 – 2008      40k (ARP packets) using the PTW attack
2008 – present   25k (replayed packets) using ARP replay and/or chopchop replay, with combined PTW+Korek analysis

Error upgrading MOSS 2007 to Service Pack 1

Posted by Amber Pham on May 12th, 2009

I needed to upgrade a MOSS 2007 farm on a Windows 2003 server from the RTM version to Service Pack 1 as a required step to prepare for a migration to Windows 2008 64-bit.  The process involved installing WSS 3.0 SP1, ignoring the Configuration Wizard, installing MOSS 2007 SP1, then running the Configuration Wizard.  In this case, the last step of the Configuration Wizard failed.  The Wizard screen said to look in the event log, which showed three errors similar to this:

Configuration of SharePoint Products and Technologies failed.  Configuration must be performed in order for this product to operate properly.  To diagnose the problem, review the extended error information located at C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\LOGS\PSCDiagnostics_5_8_2009_19…log, fix the problem, and run this configuration wizard again.

I consulted the PSCDiagnostics log, but all it did was repeat the error message shown above.  Looking in the Upgrade.log file, the last entry looked like this:

[SPManager] [INFO] [5/8/2009 7:45:10 PM]: Inplace Upgrade session finishes. root object = SPFarm Name=SharePoint_Config, recursive = True. 1 errors and 0 warnings encountered.

Searching back up through the file, I found one area where errors were reported:

[SPManager] [ERROR] [5/8/2009 7:45:03 PM]: Pre-Upgrade [SPSite Url=http://sitename/sites/TestSite] failed. Microsoft.SharePoint.Upgrade.SPSiteWssSequence has the ContinueOnFailiure bit set. Moving on to the next object in sequence.
[SPManager] [ERROR] [5/8/2009 7:45:03 PM]: The system cannot find the path specified. (Exception from HRESULT: 0x80070003)

The error was caused by a site that is no longer there.  This happens sometimes when a site is deleted from SharePoint, but it doesn’t get removed all the way from the configuration database.  It will often show up in Central Administration after you have deleted it.

Looking through the Microsoft SharePoint Service Pack 1 installation documentation, there were no fixes in the troubleshooting section that fit this scenario.  I found many articles about removing orphaned sites, but this site was not a true orphan (its parent was not missing).  If you have a true orphan, you can use the stsadm parameter databaserepair.

In this case, the solution was to detach the content database within SharePoint and then reattach it.  Doing so removes the old entry from the configuration database.  This can normally be done within Central Administration, but since the upgrade was not complete and the farm was disabled, I used stsadm.

stsadm -o deletecontentdb -url http://sitename -database contentdatabasename -databaseserver sqlserver\sqlinstance
stsadm -o addcontentdb -url http://sitename -database contentdatabasename -databaseserver sqlserver\sqlinstance

*The deletecontentdb parameter does not delete the database; it only detaches the reference to it from your SharePoint farm.

After this, I ran psconfig at the command line:

psconfig -cmd upgrade -inplace b2b -wait -force

Once that completed successfully, I launched the Configuration Wizard manually from All Programs > Administrative Tools > SharePoint Products and Technologies Configuration Wizard to get visual confirmation that the upgrade was completed.

After upgrading SharePoint, always check the Upgrade.log in %COMMONPROGRAMFILES%\Microsoft Shared\Web server extensions\12\LOGS for “Finished upgrading SPFarm Name=<configuration database>” with “0 errors and 0 warnings” at the end.
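A quick way to check from the command line (a sketch using findstr; the path assumes a default installation):

```bat
:: Show the upgrade summary lines; you want to see "0 errors and 0 warnings"
findstr /c:"errors and" "%COMMONPROGRAMFILES%\Microsoft Shared\Web server extensions\12\LOGS\Upgrade.log"
```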

Breathe new life into a bogged-down CoyotePoint Load Balancer with DSR

Posted by Irving Popovetsky on April 7th, 2009

Let me start by saying this:   I am not a fan of CoyotePoint load balancers.    My support experiences so far have all been atrocious.   The system architecture is a cheap imitation of F5’s BigIP architecture from a decade ago which constantly limits me.    I’m convinced that people only buy these things because they’re cheap.

I’ve been working with a customer who’s exceeded the throughput capabilities of their Equalizer E350si load balancer.   Although the marketing materials will tell you that this unit is capable of throughput up to 700Mbps (hah!),  we were maxing out and dropping packets above 50 Mbps.

Rant:  You see, the problem lies in Coyote’s system architecture.  This E350si is powered by a NetBurst-architecture Pentium 4 2.8GHz with HyperThreading disabled.   Coyote uses a FreeBSD 4-based kernel, which was well known for its slow timers, slow interrupt handling, and immature device polling implementation.  In this classic system architecture, each incoming packet generates an Interrupt Request (IRQ), which must be serviced by the CPU in a time-slotted fashion.   So what we have is a load balancer which reports its CPU as being mostly idle,  but in reality cannot handle packets quickly enough.   THIS IS NOT HOW YOU DESIGN NETWORK EQUIPMENT, PEOPLE.   End rant.

Good news:   In version 8, Coyote introduced a new mode of operation called DSR, or Direct Server Return.  DSR is quite clever, really, because it gets around Coyote’s packet handling limitation (to a large degree) by handling the incoming network packets but allowing the web servers to respond directly to clients.   This cuts the number of TCP packets the Coyote has to process in half,  and cuts the byte count by much more if you consider that the return packets are much larger.

Here’s how it works.   In a traditional setup,  the Coyote receives a packet on its external interface (em1), makes a load balancing decision, and then forwards the packet along to a host behind its internal interface (em0).   Most shops NAT here as well, for security and/or IP address conservation reasons.   So the Coyote must perform Layer 2 – 4 (or 7) processing of the packet as it receives it,   then make a load balancing decision,  then translate the packet (that’s the T in NAT),  then re-process the packet going out the internal interface.   Then rinse, lather, repeat for the return packet.   Such is the life of a typical load balancer.

In DSR mode, you start by chopping off the Internal interface of the load balancer altogether and eliminating NAT.   This requires moving your webservers onto publicly routable IP addresses, so please make sure they are firewalled properly.   Now you have your load balancer and webservers all on the same ethernet segment.   You create a VIP (Virtual IP) on the load balancer, and then add that SAME VIP address as a loopback address to the webservers!

You’re probably scratching your head, wondering how this is going to work.  I know that I was.  Here’s the magical part.   Only the load balancer responds to ARP requests for the VIP.  The webservers have Apache listening on the VIP address,  but don’t respond to ARP requests at all on that address.   Each incoming packet is sent from the router to the MAC address of the load balancer,  which then makes a load balancing decision and then sends an identical copy of that packet to the MAC address of the web server.   Let me say that again.  The load balancer performs no more translation, it literally just copies the packet over to the webserver.   Since the source MAC address is unchanged,  the web server replies directly to the router and skips the load balancer entirely.

Sounds a bit scary, but works well.  Except for one thing.   In their brilliance,  the Coyote folks created a section in the Manual with configuration instructions for “Linux/Unix Systems”, but ACTUALLY put in instructions for BSD-like systems only.  Who runs FreeBSD anymore?   DON’T TRY THESE INSTRUCTIONS ON A LINUX SERVER UNLESS YOU WANT TO LOCK YOURSELF OUT.

On Linux,  the correct way to create the loopback address is by adding a “labelled” loopback interface,  but ALWAYS set the netmask of your new interface to “”.   If you match the netmask of the VIP,  your webserver will stop responding to packets on its external interface.  Very bad.

So,  assume your public VIP address is <VIP> and your webserver’s address is <server-IP> (actual addresses omitted to protect the innocent).   Create a loopback address like so:

/sbin/ifconfig lo:vip inet <VIP> netmask

Then, the output of “ifconfig -a” looks something like this:

eth0      Link encap:Ethernet  HWaddr 00:40:A4:8E:B0:1A
inet addr:<server-IP>  Bcast:<broadcast-addr>  Mask:<netmask>
RX packets:47131905 errors:0 dropped:0 overruns:0 frame:0
TX packets:77804088 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:5111837334 (4875.0 Mb)  TX bytes:104047655003 (99227.5 Mb)

lo        Link encap:Local Loopback
inet addr:  Mask:
inet6 addr: ::1/128 Scope:Host
RX packets:1311164 errors:0 dropped:0 overruns:0 frame:0
TX packets:1311164 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:439975560 (419.5 Mb)  TX bytes:439975560 (419.5 Mb)

lo:vip    Link encap:Local Loopback
inet addr:<VIP>  Mask:
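One Linux-specific caveat drawn from general DSR/LVS practice (an assumption on my part, not from the Coyote manual): some kernels will still answer ARP for the VIP on eth0 even when the address lives on a loopback label. If the webserver is winning ARP races against the load balancer, the usual companion settings are:

```
# /etc/sysctl.conf fragment; apply with "sysctl -p"
# arp_ignore=1: only reply to ARP for addresses on the receiving interface
# arp_announce=2: always use the best local source address in ARP requests
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.eth0.arp_announce = 2
```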

If it all works, you should be able to confirm correct operation by running tcpdump or Wireshark/Ethereal on the webserver and verifying that the SOURCE address of the outgoing responses is your VIP address and that you’re seeing lots of 200 OK messages.

tshark -n -i eth0 -R http.response port 80

Lesson learned: High-traffic WordPress Site Operation

Posted by Irving Popovetsky on April 7th, 2009

I recently had the pleasure of helping out with a WordPress blog which had gone supernova.   Within hours of being linked to from several major news sites,  the server couldn’t stay up for 10 minutes without something terrible happening.

Unfortunately, WordPress isn’t set up for high-performance operation out of the box.  Each page request is very CPU- and database-intensive.

Under benchmarking, we discovered that each of the customer’s HP DL385 servers could serve between 5 and 10 WordPress page views per second, depending on the page.  (For the pedantic,  I’m considering anything that hits PHP as a page view.)  And this was AFTER I had put major effort into MySQL performance tuning.    Something had to be done; 5 page views per second is just not going to cut the mustard.

In comes the WP-Super-Cache plugin to save the day.   WP-Super-Cache is a plugin which seems like it should be installed with every WordPress instance by default.   It writes out entire pages to static .html files,  and then instructs Apache to serve up those static .html files directly (using mod_rewrite),  therefore avoiding any CPU-gobbling calls to PHP or the database.   But WP-Super-Cache is smart:  it automatically expires cached pages when the content is updated  (by the author, or via comments).

As a result,  we went from 5-10 page views per second to between 500 and 2,000 theoretical page views per second.   At this point we were hitting bandwidth bottlenecks,  which is where I like to be.   As long as webservers can serve up enough data to fill their own pipe, you have happy system administrators (and UNhappy network administrators).
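The serve-static-first behavior WP-Super-Cache sets up can be illustrated with a simplified .htaccess fragment. This is a hedged sketch of the idea, not the plugin's exact generated rules (which also check cookies, query strings, and mobile user agents):

```apache
# Simplified illustration of WP-Super-Cache's approach: if a cached
# copy of the requested page exists on disk, serve it and never touch PHP.
RewriteEngine On
RewriteCond %{REQUEST_METHOD} GET
RewriteCond %{DOCUMENT_ROOT}/wp-content/cache/supercache/%{HTTP_HOST}%{REQUEST_URI}index.html -f
RewriteRule ^(.*)$ /wp-content/cache/supercache/%{HTTP_HOST}%{REQUEST_URI}index.html [L]
```

Requests that miss the cache fall through to WordPress as usual, which then regenerates the static file for the next visitor.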

Critical PDF Vulnerabilities in Blackberry Enterprise Server

Posted by Amber Pham on January 13th, 2009

Research in Motion has just released security bulletin KB17118 that announces a new set of vulnerabilities in the Blackberry Attachment Service that runs on Blackberry Enterprise Server (BES). According to Blackberry, “these vulnerabilities could enable a malicious individual to send an email message containing a specially crafted PDF file, which when opened for viewing on a BlackBerry smartphone, could cause memory corruption and possibly lead to arbitrary code execution on the computer that hosts the BlackBerry Attachment Service.”

It is strongly recommended that you read bulletin KB17118, then download and install the patch, called Service Pack 6 Interim Security Software Update 2. The security bulletin also offers a workaround that reduces the functionality of BES but protects the server from exploits of the Attachment Service vulnerabilities.

The affected versions of the server software are BlackBerry Enterprise Server software version 4.1 Service Pack 3 (4.1.3) through 4.1 Service Pack 6 (4.1.6), including the latest maintenance release.

Scripting Unique Local Administrator Passwords

Recently, I got an excellent chance to put my money where my mouth is.

In the past, I’ve warned Windows shops to use unique local Administrator passwords wherever possible.  I’ve even proven the dangers of using the same local Administrator password during a penetration test in 2007.   Combine this with the fact that  I rarely have anything polite to say about VBscript (it’s not a pretty language to work with), and we have the perfect karmic storm.

Yours truly, coding in VBScript, was tasked with setting a unique, strong password on each one of a few hundred machines.   Here’s what I came up with:

' ChangeLocalAdminOnServers.vbs
' Created by Irving Popovetsky (irving@prostructure)
' 12/15/2008, ProStructure Consulting
' Warning:  This script will begin changing passwords as soon
' as it collects a complete list of machine names.
' Read and understand this code carefully before executing,
' and always remember to fill in your own variables where appropriate.
' We assume no liability for damages that may be caused by running
' this code in your production environment!!

On Error Resume Next

Dim fso, MyFile
Set fso = CreateObject("Scripting.FileSystemObject")

' ***CHANGEME*** Change the output file to a location you trust, like an
' Encrypted folder or USB stick that can be stored away
' In the future, this could be improved to output directly to PGP or equivalent.
Set MyFile = fso.CreateTextFile("c:\Temp\Changedservers.txt", True)


Set objConnection = CreateObject("ADODB.Connection")
Set objCommand =   CreateObject("ADODB.Command")
objConnection.Provider = "ADsDSOObject"
objConnection.Open "Active Directory Provider"
Set objCommand.ActiveConnection = objConnection

objCommand.Properties("Page Size") = 1000
Const ADS_SCOPE_SUBTREE = 2   ' 2 = search the whole subtree
objCommand.Properties("Searchscope") = ADS_SCOPE_SUBTREE

' ***CHANGEME*** Fill in your own Domain name here
objCommand.CommandText = _
    "SELECT Name FROM 'LDAP://dc=DOMAIN,dc=INTERNAL' WHERE objectCategory='computer'"
Set objRecordSet = objCommand.Execute


Dim intUpperLimit, intLowerLimit, strPassword, intASCIIValue, i

Do Until objRecordSet.EOF
    strComputer = objRecordSet.Fields("Name").Value

    ' ***CHANGEME*** Skip the Domain Controllers - fill in your own values here
    If InStr(1, strComputer, "DOMAINCONTROLLER1") = 0 And _
       InStr(1, strComputer, "DOMAINCONTROLLER2") = 0 And _
       InStr(1, strComputer, "DOMAINCONTROLLER3") = 0 Then

        ' Irving - Random password (12 printable ASCII characters, 33-126)
        strPassword = ""
        intUpperLimit = 126
        intLowerLimit = 33

        Randomize
        For i = 1 To 12
            intASCIIValue = Int(((intUpperLimit - intLowerLimit + 1) * Rnd) _
                + intLowerLimit)
            strPassword = strPassword & Chr(intASCIIValue)
        Next

        ' Perform the Action.  Write out the computername/password then execute
        MyFile.WriteLine(strComputer & "   " & strPassword)
        Set objUser = GetObject("WinNT://" & strComputer & "/Administrator")
        objUser.SetPassword strPassword
    End If

    objRecordSet.MoveNext
Loop

MyFile.Close

Credit to The Scripting Guy’s article on scripting the change of the local administrator password. Very funny article;  I’m a big fan of the Scripting Guy.

Now, there are certainly some improvements that can be made, and WILL be made if I ever have to use this thing again.   First off, the ability to define the output location and LDAP search path.  Second, automatically determining whether a server is a domain controller and skipping it.  You DEFINITELY DO NOT want this script hitting a Domain Controller, because it will change the Domain’s Administrator account, and that can be a bad thing.  Trust me, I already learned that lesson; at least I had the password in my output file.

Interviewed for Inc. Technology

Posted by Irving Popovetsky on September 19th, 2008

I was recently interviewed by Michelle Rafter for Inc. Technology about best practices for Administrative Passwords.

Article link:  Psst! What’s the Password?