Synthetic Transactions - What we are looking for

I am in the process of evaluating a number of different Synthetic Transaction application vendors and have devised the basic selection criteria below. Each criterion is given a weighted value based upon how important it is for the application to provide that functionality. A few are "nice to have," but the majority are pieces that the application we choose needs to do well.

Cross-Browser -
The Synthetic Transactions need to come from a variety of browsers. We have no control over which browser will be used to access our application, and our application supports all of them (sometimes with some tweaks). The Synthetic Transaction application must be able to perform tests as Chrome, Firefox, and, if possible, multiple flavors of Internet Explorer (11, 10, 9). How these browser tests are performed, whether through emulation software or full browser rendering, will dictate how well a vendor ultimately performs in this category. A vendor that cannot emulate at least the three current browser versions is ineligible.

Worldwide Testing Locations -
The goal of selecting a Synthetic Transaction partner is to fill the need we have for measuring the global availability of our application. We would like the flexibility of running the same tests from any browser, from any location. We can already perform Selenium-based tests in house; the whole reason we are evaluating a vendor is to fulfill this global testing need. Only products that have at least 8 global testing locations will be considered. The sheer number of testing locations does not decide the winner in this category; the ease with which the tests can be created and maintained does. Ideally we would create one standard test, which could then be run in any of the selected browsers and, in turn, from any given test location. Managing this as a single object is ideal. The ability to perform the same tests from any global location is also a factor.

Testing Engine -
We currently use a combination of Selenium and Gherkin scripts to perform availability tests within our own grid. We would like to be able to import these scripts directly into the selected application. This would allow us to use all of the tests we have already created in house, as well as continue to use our internal Testing Tier as the source for additional test needs going forward, and it would save tremendously on the cost to implement and maintain the solution. If a more standard exported version of the script can be used between our Selenium grid and the vendor, that is acceptable as well. It is assumed that all vendors will have some browser-based plug-in that allows us to create tests manually.
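To give a sense of what we already have, here is a minimal sketch (in Python) of the kind of Selenium availability test we run in house today. The grid URL, application URL, element locator, and threshold are illustrative assumptions, not our real script.

    # Minimal sketch of an in-house availability test. The grid URL, application
    # URL, element locator, and threshold are hypothetical.
    import time
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    THRESHOLD_SECONDS = 5  # assumed acceptable load time

    def run_availability_test(grid_url="http://selenium-grid.internal:4444/wd/hub"):
        driver = webdriver.Remote(command_executor=grid_url,
                                  options=webdriver.ChromeOptions())
        try:
            start = time.monotonic()
            driver.get("https://app.example.com/login")   # hypothetical app URL
            driver.find_element(By.ID, "username")        # hypothetical element check
            elapsed = time.monotonic() - start
            return elapsed <= THRESHOLD_SECONDS, elapsed
        finally:
            driver.quit()

Whatever form the vendor's import takes, it needs to be able to consume or approximate a test like this without us rewriting it by hand.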

Import/Export Tests-
The application should have some methodology to import the scripts to be used for testing. Similarly, the application should be able to export any created tests into a format that may be consumed elsewhere. This should ease the migration between our internal testing grid and our chosen vendor as well as give us portability in our testing in case we wish to move to another solution in the future.

Test Failures-
It is important to understand the flexibility provided in determining what constitutes a test failure. The following examples should be the minimum options: threshold time exceeded, server error (a 400- or 500-series response), and page timeout. Other items to consider would be page weight (i.e., is the page the kB size we expect to see?) or broken links.
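As a rough illustration of the minimum checks we mean (not any vendor's implementation), a single page fetch could be evaluated like this; the URL, time threshold, and expected page weight are assumptions.

    # Illustration of the minimum failure conditions: threshold time exceeded,
    # server error (4xx/5xx), page timeout, and an optional page-weight check.
    import requests

    URL = "https://app.example.com/"   # hypothetical page under test
    TIME_THRESHOLD = 5.0               # seconds
    MAX_PAGE_KB = 1500                 # expected page weight ceiling

    def check_page():
        failures = []
        try:
            resp = requests.get(URL, timeout=TIME_THRESHOLD)
        except requests.Timeout:
            return ["page timeout"]
        if resp.elapsed.total_seconds() > TIME_THRESHOLD:
            failures.append("threshold time exceeded")
        if resp.status_code >= 400:
            failures.append("server error (%d)" % resp.status_code)
        if len(resp.content) / 1024 > MAX_PAGE_KB:
            failures.append("unexpected page weight")
        return failures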

Alerting-
Since the goal of this exercise is to create a new key measurable for availability and performance, we need to be notified when a test fails. The application should be able to generate an email or SMS message (or both) when a test fails; this is the minimum requirement. Additional consideration will be given if there is an escalation methodology for alerts, the ability to silence alerts, or the ability to create quiet periods.

Private Testing site -
Our application benefits from a very engaged and active user base, and many customers have requested additional monitoring for their facilities. We see the ability to deploy a node, appliance, or virtual machine at specific sites around the world - most likely at a customer site - as very desirable. Deploying an object at the network perimeter (outside the firewall) would be sufficient for this need.

API-
The Synthetic Transaction application must have API support for all features. We have a wealth of developers and scripters who can leverage this functionality to get the most out of the tool. This functionality should be included within the base offering. We would prefer a REST API.
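For illustration only, this is the shape of scripting we expect the API to support; the endpoints, auth scheme, and field names below are invented and do not describe any particular vendor.

    # Hypothetical REST calls: list the configured tests and pull recent results.
    # Base URL, bearer-token auth, and response fields are assumptions.
    import requests

    BASE = "https://api.vendor.example.com/v1"     # hypothetical API root
    HEADERS = {"Authorization": "Bearer <token>"}

    def list_tests():
        resp = requests.get(BASE + "/tests", headers=HEADERS, timeout=30)
        resp.raise_for_status()
        return resp.json()

    def recent_results(test_id):
        resp = requests.get(BASE + "/tests/%s/results" % test_id,
                            headers=HEADERS, params={"limit": 10}, timeout=30)
        resp.raise_for_status()
        return resp.json()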

Integration points-
We have a number of other software packages that we use to manage our infrastructure. The vendor should have pre-built bridges or integration points with as many of these other vendors as possible.

WebServices-
A number of critical components of our application are presented as WebServices. The Synthetic Transaction application must be able to define header and body information in order to communicate with a WebService; this is the minimum functionality we require. Optimally, the application should be able to evaluate the response returned from the server for more than an HTTP status code, parsing the response for the expected output of the WebService.
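A small sketch of the kind of WebService check we have in mind, with an invented endpoint and payload: set the headers and body explicitly, then validate the response content rather than just the status code.

    # WebService check: explicit headers and body on the request, then parse the
    # response for the expected output instead of trusting the status code alone.
    # Endpoint, API key header, and expected field are assumptions.
    import requests

    def check_webservice():
        resp = requests.post(
            "https://app.example.com/ws/orders",     # hypothetical endpoint
            headers={"Content-Type": "application/json", "X-Api-Key": "<key>"},
            json={"action": "ping"},
            timeout=30,
        )
        if resp.status_code != 200:
            return False
        return resp.json().get("status") == "ok"     # expected output of the service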

Content Delivery Network (CDN)-
The Synthetic Transaction application should identify the source of all information it receives during the transaction. This should highlight what content was received from our private cloud and which content was received from the CDN.  Testing methodologies designed specifically to evaluate CDN performance would be welcome as well.

Data Retention -
In order to create service baselines and track growth trends, we need access to transactional data for at least 6 months. The object-level data - with all the granular transaction details - is not needed for this length of time, but it would be nice to be able to access the granular detail for at least two weeks' worth of history. We assume that some data rollup will occur after this two-week period. The application should support the ability to export complete transactions, or to flag specific transactions for archival so that they are not subject to data rollup.

Flexible Reporting-
The application should support the ability to create custom dashboards and reports. If only one is an option, then dashboards are preferred. The application should be able to schedule the automatic running and delivery of reports. If possible, the reporting feature should be able to reference archived or exported data sets as well.

Quick Test Interval-
As soon as a synthetic transaction completes, we intend to run it again immediately. We want to be sure we can schedule tests to run essentially non-stop, recording the results with each pass. This normally translates to a 1-minute to 5-minute schedule for each test. This same level of testing should be available regardless of location.
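In scheduling terms it is nothing more complicated than this sketch, where run_availability_test is the hypothetical check sketched earlier under Testing Engine and the interval is set to one minute.

    # Run a test back-to-back on a fixed interval, recording each pass.
    import time

    INTERVAL_SECONDS = 60   # 1-minute schedule; anything from 60 to 300 works for us

    def run_forever(test, interval=INTERVAL_SECONDS):
        while True:
            started = time.monotonic()
            print(time.ctime(), test())              # record the result of this pass
            # Sleep only for whatever is left of the interval before the next pass.
            time.sleep(max(0, interval - (time.monotonic() - started)))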

FTP-
FTP is widely used by our application and our customers. We would like to be able to perform upload and download tests against our FTP servers to verify availability.
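A minimal sketch of what such a test could look like, using Python's ftplib with placeholder host and credentials: upload a small file, pull it back, and confirm the round trip.

    # FTP availability check: upload, download, compare, clean up.
    # Host, user, and password are placeholders.
    import io
    from ftplib import FTP

    def check_ftp(host="ftp.example.com", user="monitor", password="<secret>"):
        payload = b"synthetic-transaction-test"
        with FTP(host, user, password, timeout=30) as ftp:
            ftp.storbinary("STOR synthetic_test.txt", io.BytesIO(payload))
            buf = io.BytesIO()
            ftp.retrbinary("RETR synthetic_test.txt", buf.write)
            ftp.delete("synthetic_test.txt")
        return buf.getvalue() == payload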

How well each vendor delivers each of these features will determine which ones we select for a proof of concept. Once I have narrowed down the list, I will provide specifics on how each vendor compares.

Synthetic Transactions and End User Monitoring

Software is alive.

It is created, it grows, and matures and eventually it may even die. It consumes our hours and feeds off of our sweat and our tears. We love our little creation, and need to make sure that it stays healthy. Understanding how to accomplish that can be the hard part.

Much of my IT career has been spent reacting to software woes. Ideally we design checks and monitors so we can predict when something is about to go wrong, but as things evolve and change, these checks and guesses become more elaborate. As an administrator, it is easy to perceive the problem as a resource constraint, but recently this has become more difficult to do. Gone are the days of exhausted RAM or sustained CPU spikes; these convenient bogeymen are banished with each new increment of Moore's law. It is critical to understand the entire process to understand where the problem lies.

The physical health of the system is just as important as the software, but for now we will assume that the physical level is being monitored. I will come back to this idea as we look for a total monitoring solution.

Baseline
Before you know if you have a problem, you need to know what a good day looks like. Gathering performance data while an issue is occurring is great, but without a relevant control or test case the data tells us nothing. We need a complete picture to be able to draw conclusions. When you are working with a cloud application, the need for control data is even greater. Test data from North America is not the same as test data from China. Local counters and metrics will help determine whether the internal resources are operating as they should - but ideally we have been collecting that data all along. Locality is normally shrugged off as the "Last Mile" problem, and maybe it is, but without relevant data how are we to know what normal looks like? And how many hours must you spend with the customer attempting to convince them that the issue lies outside of your offering?

Having the appropriate tests and metrics in place ahead of time makes this conversation much simpler. Presenting the customer with comparisons of what the operations look like today versus yesterday, or last month is invaluable currency in these discussions.

Now we need to decide how to gather this data.

Application Performance Management (APM)
APM is huge right now. There are many big players in this market and they all offer variations on a theme. However, most APM solutions are focused on the developer or the DevOps team. There are most certainly pieces of a complete APM package that system administrators would love to get their hands on, but the pitch isn't really meant for them. The corporate decision on an APM package did not include me; we have already chosen AppDynamics based upon a number of key selection criteria and are in the process of implementing it.

The two gleaming gems of the APM offering for system Administrators are Synthetic Transactions and End User Monitoring.

Synthetic Transactions
Synthetic Transactions allow you to simulate activity through a web browser. Most often, the user installs a browser plug-in, then navigates the web application outlining the test they would like to perform. The plug-in generates a script (phantom.js, selenium, jmeter, etc.) that the Synthetic transaction scheduler can then replay. Tests created this way essentially act like an internet user accessing the application. This type of test can be used for a number of purposes such as application availability, load testing, and automated application testing.

Testing like this can be set up using virtual machines to create a local Selenium test grid. We actually have this set up in house to do automated application testing against our test tier. We are able to make sure that the application framework, as well as what we have defined as the application's "critical path," is working as expected at the end of each day. We have no intention of building a global Selenium test grid to perform the level of testing we would need to check the availability of our application. We need to monitor our application, but we are not in the monitoring business, so I will look to the experts to provide us with what we need.

There are a number of companies competing in this space currently, some that specialize in specific use cases and others that offer Synthetic Transactions as part of a larger Application Performance Monitoring tool. I will revisit this in other posts as I move through my evaluations of these companies.

End User Monitoring (EUM)
If your application already HAS users, then why not let them do the testing for you? At least, that is the idea behind EUM. EUM typically provides statistics on how fast pages load, but in some instances it can keep track of exactly how users browse your site. This information can be incredibly useful, as it shows you how users are ACTUALLY performing tasks versus how you THINK they are. As with other trace operations, there may be concerns about the overall strain this level of tracking puts on your servers; these services can be enabled selectively to accommodate those concerns.
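To make the idea concrete, the numbers EUM products report are essentially the browser's own Navigation Timing figures. This rough sketch pulls the same figures through Selenium against a hypothetical page; a real EUM agent would gather them in the end user's browser and beacon them home.

    # Read the browser's Navigation Timing numbers, the raw material of EUM stats.
    # The page URL is hypothetical.
    from selenium import webdriver

    driver = webdriver.Chrome()
    driver.get("https://app.example.com/")
    timing = driver.execute_script(
        "return JSON.parse(JSON.stringify(window.performance.timing))")
    page_load_ms = timing["loadEventEnd"] - timing["navigationStart"]
    ttfb_ms = timing["responseStart"] - timing["navigationStart"]
    print("page load: %d ms, time to first byte: %d ms" % (page_load_ms, ttfb_ms))
    driver.quit()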

These two tools drastically alter the way a system administrator can think about monitoring. Just as physics is the visible application of math and science, Synthetic Transactions and EUM are visible, repeatable tests that relay an incredible amount of information to an administrator. It now becomes possible to align a complete, known test operation with the back-end performance metrics. These repeatable tests allow us to create a complete picture of what the application (and the underlying infrastructure) looks like when performing normally. This creates a wonderful baseline against which we can measure predictable growth.

These tests also allow us to create a whole new layer of SLA statistics that we can use to gauge the health of the system. Combined with relevant APM data, poor performing pieces of the application can be identified along with the impact to the underlying systems.

The Search for Monitoring

Having worked within an organization with tons of scripting and coding talent, it is easy to take for granted the ability to make a custom widget, or Thing, or whatever it is that you may need to get something done. And that is great... until it isn't.

Eventually you realize that this collection of tools and widgets isn't particularly anything. It most certainly isn't any ONE thing. And now, when it comes to trying to maintain this - Thing - it is impossible. The guy who wrote the Thing may no longer work here; or the language it is written in has no business being hosted on the internet; or, heaven forbid, we just no longer have the time to work on it. Suddenly we realize that this wonderful collection of tools has stopped serving its purpose. That purpose, ultimately, was to make our lives easier, and updating and maintaining this Thing could in no way be defined as easy. Even at the best of times this Thing whirred and clicked and got almost everything just right... until it stopped working. Or stopped working reliably (really the same thing).

So now, after years of having watched this Thing be born, grow, and mature, we see that it is old and cranky. We love it still. It does marvelous things (and just the way we want them), but it no longer does them reliably. It has come time to put our Thing out to pasture. This could be a painful conversation, but I think most of us would gladly give up the 1000 notifications and emails the Thing likes to send when it suffers an infarction and move on to something sleek and new. But how?

How do you move from something that is handcrafted (by you) to something impersonal and universal? Clearly no one asked US how these things should be done. We have done this before; can they not see that our way is the best?

I think we may still have some grieving to do for our Thing.

So officially the Hunt is on for a new platform - a new solution to the need for monitoring.

A new start

I have not blogged in a while. I thought about it for sure, but it never rose to the top of the list of things I needed to be doing. Now that I am in a new spot in my IT career, I realize that there were lots of cool things I was doing - maybe we were the only ones doing them - and it is sad that I do not have a record. What brings me back to the blog, however, is realizing that what I am researching and working on is covered primarily by analysts - and that is great, to a certain degree. There is not much information out there from the people actually doing the work. So I am going to try that.

I dusted off my LiveJournal page, updated the style and content to something akin to Web 2.0, and will try to create content that maybe someone will find value in.

Windows Server 2008 - Memory Leak - SMB2


With a bit of luck during yesterday's memory leak issue, we were able to correctly identify the root cause of the massive memory consumption. It would seem that when you copy files between two Windows 2008 x64 servers, the file is first copied into memory before being sent to its destination. The larger the file you try to copy, the more memory it consumes and the more likely we are to run out of memory. This problem does not manifest when files are copied between different Windows Server versions (2003 to 2008). A technical explanation is included below.

To protect us going forward, we are implementing a moratorium on copying large files through Windows Explorer from any Windows 2008 server when the destination OS is any of the following: Vista (any version), Windows 7 (any version), and Windows 2008.

Technical Explanation

The memory leak issue was first noticed all the way back on Friday, November 13th. It has recurred on November 24th, December 2nd, and December 16th. Below is an image from some perfmon counters running on the primary database server yesterday during the problem period; it shows Cache Bytes increasing (the file copy consuming available memory) and Available Memory Bytes decreasing.

The sharp rise in Available Memory around 10:10 AM on the 16th corresponds to when the file copy of a database backup crashed and SQL Server was forced to yield some memory to the OS.
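For what it's worth, the same watch can be scripted from the OS side. This is only a rough stand-in for the perfmon counters, assuming the Python psutil package is available and using an illustrative watermark:

    # Sample available memory while a large copy runs and flag a sustained drop.
    # psutil is assumed to be installed; the watermark is illustrative.
    import time
    import psutil

    LOW_WATERMARK_MB = 1024   # warn if available memory falls below this

    def watch_memory(interval=5, samples=60):
        for _ in range(samples):
            available_mb = psutil.virtual_memory().available / (1024 * 1024)
            print("%s: available %.0f MB" % (time.ctime(), available_mb))
            if available_mb < LOW_WATERMARK_MB:
                print("WARNING: available memory below watermark")
            time.sleep(interval)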
This morning, to make sure we understood the problem, we executed the following tests while watching Available Memory and Cache Bytes:

Tests that resulted in large memory consumption:

Large file copy within Windows Explorer between two Windows 2008 servers (the OS decides to use SMB2)

Tests that resulted in no memory consumption problems:

Large file copy within Windows Explorer between one Windows 2003 server and one Windows 2008 server

Large file copy within Windows Explorer between two Windows 2008 servers with SMB2 disabled

Large file copy between two Windows 2008 servers using ESEUTIL to copy the file
According to Microsoft, Windows Explorer is performing as designed when copying a large file. When Windows determines that two Windows 2008 servers are communicating, it uses SMB2 to facilitate the transfer, which leverages a buffered file copy. Many other utilities suffer from this problem as well (the OS commands COPY, XCOPY, and ROBOCOPY (2008 only)).
The only safe workaround when moving files between two Windows 2008 servers is to use an Exchange utility called ESEUTIL. One of the modes this utility supports is what's called an unbuffered copy, i.e., zero impact on available memory's bottom line.
This is just craziness. I'm going to follow up with our other available channels to push this complaint. It would mean that going forward, in order to size our memory appropriately, we would need to account for the size of the largest file we're going to be copying as well as the application being hosted on the server.

 These are the things that I’m going to be following up on with Microsoft as soon as possible: 

What happens on the server when it reaches 0 memory

SQL Yielding Memory when it’s configured not to

Large file copy consuming memory being considered as functioning as designed

Verifying that the cluster is configured appropriately


Windows Server 2008 DNS (SP1 and SP2)

Joy of Joys!

I manage multiple internal domains, all for various purposes. One is our standard domain that our users authenticate against; another is an authentication domain for our customers. Both have Test/Dev/Lab versions as well. There are no trusts between any of them, but our developers need to be able to access them, so we create Secondary DNS zones on each domain controller. No big news here really, just talking.

Well, recently I have run into issues where certain developers' local environments would fail, and it always came back to DNS. The secondary zone on one domain controller or another would be missing the domain's DNS information. The zone would be blank except for the IP address of the server that I had specified to transfer the zone from on initial setup. If I re-transferred from the master, it would repopulate, but I had to instigate this manually. Otherwise the server would blithely hum along, claiming to get the new zone updates. If I did a reload on the zone, I would get an error:

Log Name:      DNS Server
Source:        Microsoft-Windows-DNS-Server-Service
Date:          12/9/2009 8:38:20 AM
Event ID:      6527
Task Category: None
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      one domaincontroller.com
Description:
Zone devlopmentdomain.com expired before it could obtain a successful zone transfer or update from a master server acting as its source for the zone.  The zone has been shut down.

But unless I did that, the servers would act like nothing was wrong. I tested this by leaving one of the zones in a failed state for 48 hours. By looking at the logs I was able to narrow it down to a pair of domain controllers in a specific subnet (where all my users are) that were losing the zone information. This sent me into a flurry of activity getting firewall packet captures and the like, but I still did not see anything - though I was able to reproduce the error.

The error would only manifest if I rebooted the master server that these DNS servers held secondary zones for. I thought perhaps it was an expiration interval issue, and that 2008 was having a problem after failing to receive an update. I shut down one of the master DNS servers and waited for the expiration interval; the zone was still fine. I then powered the DNS server back on. After boot, the zone was still OK on the secondary. Then I logged in, and as soon as the DNS file was updated, the secondary zones lost their information. So now I knew the cause - but why?

As it turns out, the two domain controllers that were having their secondary zone data truncated were some of the oldest 2008 servers I had built. They were only on SP1, whereas all of the other domain controllers in the environment were on SP2. Crazy. So which update in SP2 changed DNS functionality so fundamentally? That I will have to dig to find. Just another reason to make sure updates are applied in lock-step, I guess...
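In hindsight, a check along these lines would have caught the stale secondaries long before a developer complained. This is only a sketch using the dnspython library (assumed available); the zone name and server IPs are placeholders.

    # Compare the SOA serial on the master against each secondary and flag any
    # mismatch or failed lookup. Zone and server addresses are placeholders.
    import dns.resolver

    ZONE = "developmentdomain.com"                 # placeholder zone name
    MASTER = "10.0.0.10"                           # placeholder master DNS server
    SECONDARIES = ["10.0.1.10", "10.0.1.11"]       # placeholder secondaries

    def soa_serial(server_ip):
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server_ip]
        return resolver.resolve(ZONE, "SOA")[0].serial

    master_serial = soa_serial(MASTER)
    for secondary in SECONDARIES:
        try:
            serial = soa_serial(secondary)
            status = "OK" if serial == master_serial else "STALE (serial %s)" % serial
        except Exception as exc:
            status = "FAILED (%s)" % exc
        print("%s: master=%s %s" % (secondary, master_serial, status))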

HMC Production Deployment

So - we have everything in place now to begin the HMC deployment. Working with the HMC support guys at Microsoft, we had to make a few changes to our proposed plan.

The biggest gotcha was that the MPS Web Services piece MUST be on its own server. In our lab environment we had it on the same box as the MPS Database and MPS Engine, and we really did have to do some shoe-horning to get it to work. HMC support says this is not supported and will not work with most implementations. Also, all three of the MPS pieces (MPS Engine, MPS Database, MPS Web Services) MUST be on x86 and SHOULD NOT be virtualized.

Once around this wrinkle - which included needing another piece of hardware, since we couldn't use a VM - we started the deployment.

I ran across an issue during the Deployment:

************************************************************************************************************************************************ 

Exception: Microsoft.Provisioning.DeploymentTool.Engine.DeploymentExceptionDeploymentFailed

HResult: -2146233088

Message: Deployment interrupted because of a failure. See inner exception.

 

Stack Trace:

   at Microsoft.Provisioning.DeploymentTool.Engine.Deployment.DoDeploymentWork()

   at Microsoft.Provisioning.DeploymentTool.MainForm.ExecuteDeploymentSlice()

 

--------------------

 

Inner Exception (1): Microsoft.Provisioning.DeploymentTool.Engine.NamedProcedureException

HResult: -2146233088

Message: <errorContext description="/There is no such object on the server. (Exception from HRESULT: 0x80072030)/AddUserToGroup" code="0x80072030" executeSeqNo="22"><errorSource namespace="Active Directory Provider" procedure="Group Add" /><errorSource namespace="Preferred DC Active Directory Provider" procedure="Group Add" /><errorSource namespace="Deployment Automation" procedure="TryGroupAdd_" /><errorSource namespace="Deployment Automation" procedure="ExecuteDeploymentStep_" /><errorSource namespace="Deployment Automation" procedure="ExecuteDeployment_" /><errorSource namespace="Deployment Automation" procedure="GroupAddWindowsBasedHosting_" /><errorSource namespace="Deployment Automation" procedure="ConfigureMPSSQLServiceAccount" /></errorContext>

 

Stack Trace:

   at Microsoft.Provisioning.DeploymentTool.Engine.ExecuteNamedProcDeploymentAction.CheckForFinished()

   at Microsoft.Provisioning.DeploymentTool.Engine.DeploymentAction.Update()

************************************************************************************************************************************************

I saw this during the lab deployment too, but did not think to document it. This error occurs because the Deployment Tool cannot find the MPSSQLService account. This can happen two ways: if you do not name the account EXACTLY (firstname = MPSSQLService, userPrincipalName = MPSSQLService, sAMAccountName = MPSSQLService), or if you do not create the account in the default Users container. I managed to do the former in the lab environment and the latter in the production buildout.
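For next time, a quick sanity check along these lines would have caught both mistakes. This is only a sketch using the ldap3 Python library (assumed available), with placeholder server, credentials, and base DN:

    # Verify the MPSSQLService account exists with the exact name and lives in
    # the default Users container. Server, credentials, and DNs are placeholders.
    from ldap3 import Server, Connection, ALL

    server = Server("dc1.example.com", get_info=ALL)
    conn = Connection(server, "EXAMPLE\\admin", "<password>", auto_bind=True)

    conn.search("DC=example,DC=com",
                "(sAMAccountName=MPSSQLService)",
                attributes=["distinguishedName", "userPrincipalName"])

    if not conn.entries:
        print("MPSSQLService account not found")
    else:
        dn = str(conn.entries[0].distinguishedName)
        print(dn)
        if not dn.startswith("CN=MPSSQLService,CN=Users,"):
            print("WARNING: account is not in the default Users container")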

Hosted Messaging and Collaboration (HMC) for Exchange 2007

Well, the time has come.
Microsoft support has finally stated that our hosting environment is no longer supported. Even though we followed the initial setup instructions provided on TechNet, Microsoft will no longer support companies hosting email using address list segregation. Now we are looking at migrating to a Hosted Messaging and Collaboration (HMC) environment. I can see where this is desirable for Microsoft: it creates a very standardized deployment and management infrastructure for hosting Exchange, and all permission changes and management groups are created programmatically with the tools. What I do not like is the additional infrastructure requirements. I have proofed out an installation environment in our lab like the following:

DC1 - Domain Controller, DNS Host
DC2 - Domain Controller, Certificate Server
ExchFE - 2007 Client Access Server / Hub Transport Server
ExchBE - 2007 Mailbox Server
MPSWS - MPS Web Services
MPS - Microsoft Provisioning Services, Microsoft Provision services Database
SCom - Microsoft System Center, and MPS reporting.

The documentation for HMC calls for about 21 servers, with every aspect being clustered/redundant and every single role segregated. For us, this is unrealistic. I have decided to combine some roles that most people would not, because I have no intention of allowing our "customers" to manage their Exchange environments. This is not as bad as it sounds, as we have management of the accounts built into our ERP package. We also do not use local clusters for any services in our environment; we have a very robust DR architecture in place, with snapshots being taken and shipped to a remote facility with an RPO and RTO of 1 hour. We are not at the point where we want an unsolicited automatic fail-over of any service. Plus, Exchange 2010 supports a distributed database redundancy model that does NOT use Windows Clustering.

Exchange 2007 - Offline Address books do not get created for Public Folder Distribution

I finally decommissioned my last Exchange 2003 server. All was going well until I created a couple more hosted companies and noticed that Offline Address Books were failing to be generated for them. Specifically, since the Exchange 2007 server became the new Public Folder server, any new Offline Address Book I created did not get the version 2, 3, or 4 files created in Public Folders. The root folder is there, but no subfolders. Generation and updates for all other Offline Address Books, and their Public Folder versions, work fine. Only the ones newly created since the 2007 server became the Public Folder server fail to be generated.

If I manually ran the update (Update-OfflineAddressBook OABName), I received the following errors in the event log:

Event Type:      Error
Event Source:      MSExchangeSA
Event Category:      OAL Generator
Event ID:      9344
Date:            2/13/2009
Time:            11:22:44 AM
User:            N/A
Computer:      Exchange Server
Description:
OALGen could not find the address list with the Active Directory object GUID of '/guid=E71B78238CC3BC4C83B90B59271994AD' in the list of available address lists.  Please check the offline address list configuration object.
- OABName
For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

Event Type:      Error
Event Source:      MSExchangeSA
Event Category:      OAL Generator
Event ID:      9334
Date:            2/13/2009
Time:            11:22:44 AM
User:            N/A
Computer:      Exchange Server
Description:
OALGen encountered error 8004010f while initializing the offline address list generation process. No offline address lists have been generated. Please check the event log for more information.
- OABName
For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

And again, this did NOT happen across the board, only for the new Offline Address Books I created after moving the Public Folder database to the 2007 server. I struggled for a while, and then remembered that there is a setting at the Exchange organization level that must be defined: the addressBookRoots attribute on the Microsoft Exchange container in ADSIEdit was the key. Each address list that must be generated needs to be included here. Because these address lists were not added there, the Public Folder versions were not being generated.

Windows XP will Download But fails to install Windows Updates


Follow the steps below to solve this annoying problem when the Windows XP updates will download but won't install.

Make sure Windows installer 3.1 is installed correctly

Click on the following link to visit the Microsoft Download Center and download Windows installer 3.1

Download Windows Installer 3.1

Reboot the computer and try to download Windows Updates, if they still fail to install, continue with the next step.

Stop the Automatic Update Service

  1. Click on Start, Run
  2. Type the following command and press Enter

    services.msc

  3. Right-click on the Automatic Updates option in the Name column and click Stop
  4. Close the Services window


Show Hidden Files and Folders

  1. Open My Computer
  2. Click on Tools, Folder Options
  3. Click on the View tab
  4. Under the Hidden Files and Folders section, select "Show Hidden Files and Folders"
  5. Click Ok

Delete the Previously Downloaded Windows Updates

  1. Open My Computer
  2. Double-click on Drive C (or whatever drive Windows is installed on)
  3. Double-click on the Windows folder
  4. Double-click on the SoftwareDistribution folder
  5. Double-click on the Download folder
  6. Click on Edit on the menu bar
  7. Click on Select All
  8. Click on File on the menu bar
  9. Click on Delete and delete everything in the download folder
  10. Return to the SoftwareDistribution folder by clicking on the green up arrow on the toolbar
  11. Double-click on the EventCache folder
  12. Click on Edit on the menu bar
  13. Click on Select All
  14. Click on File on the menu bar
  15. Click on Delete and delete everything in the EventCache folder


Restart the Automatic Updates Service

  1. Click on Start, Run
  2. Type the following command and press Enter

    services.msc

  3. Double-click on the Automatic Updates option in the Name column
  4. Click on the Start button under Service Status to restart the service
  5. Select Automatic under Startup Type to start the service each time Windows starts
  6. Click Ok
  7. Close the Services Window

Download the Latest Version of Windows Update Agent

Click on the following link to download the latest version of the Windows Update Agent and save it to your desktop.

Download Windows Update Agent

Warning:

If you receive a message stating the Update Agent is already installed follow these extra steps:

  1. Click Start, Run
  2. Click the Browse button
  3. Navigate to where you saved WindowsUpdateAgent30-x86.exe on your desktop and click it one time
  4. Click on the Open button
  5. On the Open line, go to the end of the command. After the last quotation mark type the following 

    /wuforce

  6. The line should look something like the following now:

    "C:\
    Documents and Settings\username\Desktop\WindowsUpdateAgent30-x86.exe" /wuforce
  7. Click Ok and install the Update Agent

Download and Install the KB927891 Update

Click on the following link to download the KB927891 Update for Windows XP, or click here to read more about the update.

Restart your computer and try downloading and installing any Windows XP updates again.

Windows Updates Download but Won't Install after Repair Install of Windows XP

A new problem has appeared recently since Microsoft changed their Windows Update files. The problem appears after a repair install of Windows XP has been performed. The "new" Windows Update features a file called wups2.dll that was not present in the version of Windows XP found on the CD used to perform the repair install. This file remains on the hard drive, but the registry entries for it are missing after a repair install. This causes Windows updates to download but fail to install properly.

If you experience this problem, follow these steps to re-register the file wups2.dll in Windows XP
  1. Stop the Windows Automatic Updates Service via the command prompt

    • Click on Start, Run
    • Type CMD and Click Ok
    • At the command prompt, type the following command, then press Enter

      net stop wuauserv

  2. Re-Register the WUPS2.DLL file

    • Still at the command prompt, type the following command and press Enter

      regsvr32   %windir%\system32\wups2.dll

  3. Restart the Automatic Updates Service

    • At the command prompt, type the following commands, pressing Enter after each.

      net start wuauserv
      exit

  4. Restart your computer and run Windows Update again. This time the updates should download and install properly.
For more information on this problem, you can visit Microsoft Support Article 943144.

Information originally posted at http://www.pchell.com/support/windows_updates_download_but_wont_install.shtml