March 2005 - Posts
This is great: Service Pack 1 for Windows Server 2003 and the x64 versions of Windows Server 2003 and Windows XP were finally released three hours ago (sorry, I was sleeping - it was released at 5:20 PM PST, which is 03:20 AM here in Germany).
Windows Server 2003 Service Pack 1 contains all the security fixes and enhancements we've seen in Windows XP Service Pack 2 (such as the Windows Firewall, the pop-up blocker in Internet Explorer, support for no-execute hardware protection, wireless LAN enhancements, …), but it also contains new features such as:
- Post-Setup Security Updates (PSSU): If you set up a new server with Service Pack 1 integrated, PSSU turns the Windows Firewall on and lets no-one connect remotely until an admin confirms that the server is patched and ready to be opened up.
- Security Configuration Wizard (SCW): Helps you lock down the server - the SCW scans the installed server roles (such as file server or domain controller) and helps lock the machine down by disabling services, configuring the Windows Firewall, securing the registry and so on.
- Many other technology updates, such as Access-based Enumeration for shared folders, and configuration changes like the new default tombstone lifetime for new forests.
Find documents and downloads underneath the
In my Blog about DHCP, DNS and the DNSUpdateProxy-Group
I stated that for security reasons you really don't need the DnsUpdateProxy group in most scenarios. However, Bob has asked a very good question which is worth another entry in this blog. Thanks for that question.
Q: How does this solve the problem that the DNSUPDATEPROXY group was designed to fix, namely the prevention of stale records and the ability of upgrade clients (NT --> 2000) to refresh and update records created for them by the DHCP server?
A: The DnsUpdateProxy-Group was originally designed for two reasons:
- Allow redundancy: With a DHCP cluster, or with DHCP servers configured for the same scope, each node needs to be able to update the records assigned by the other node. I stated in my previous post that there's no need for the DnsUpdateProxy group in that scenario - you can configure a service account to be used by both nodes, which will then perform the updates.
- Allow clients to update records which were previously created by a DHCP server in a migration scenario. That's what you are referring to. Let's discuss this a bit more.
If a DHCP server is a member of the DnsUpdateProxy group, it creates DNS records which are allowed to be updated by any authenticated user, so they are not secure.
By default the DHCP server creates and updates the forward lookup entry (the A record, which resolves the name to the IP address) for downlevel clients which are not able to update those records themselves. It also updates the reverse lookup entry (the PTR record, which resolves the IP address to the fully qualified name) for all clients (the DHCP server is always the owner of the IP, while the client may be the owner of the name).
In a migration scenario, NT4 and older clients won't create their DNS entries, so the DHCP server takes care of it (if it's configured to do so). If the DHCP server is a member of the DnsUpdateProxy group there's a big advantage: as soon as you upgrade the client to Windows 2000 or Windows XP, the client will be able to update its own record, since the DHCP server didn't secure it.
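To make the ownership logic above concrete, here's a minimal sketch (Python, purely illustrative - the simplified one-owner ACL model and the account names are my own assumptions, not a real DNS API):

```python
# Illustrative model of dynamic-update permissions on a DNS record.
# "secure_owner" is the account that registered a secured record; None
# means the record was registered by a DnsUpdateProxy member and is
# therefore writable by any authenticated account.

def can_update(record, account):
    """Return True if 'account' may dynamically update 'record'."""
    if record["secure_owner"] is None:        # unsecured (DnsUpdateProxy)
        return True                           # any authenticated user may write
    return record["secure_owner"] == account  # secured: owner only

# Record registered by a DHCP server that IS in DnsUpdateProxy:
unsecured = {"name": "xpclient", "secure_owner": None}
# After the upgrade, the client can take over its own record:
assert can_update(unsecured, "COMPANY\\xpclient$")

# Record registered by a DHCP server that is NOT in DnsUpdateProxy:
secured = {"name": "xpclient", "secure_owner": "COMPANY\\dhcpsrv$"}
# The upgraded client cannot update it - only the DHCP server can:
assert not can_update(secured, "COMPANY\\xpclient$")
assert can_update(secured, "COMPANY\\dhcpsrv$")
```

That's the whole trade-off in two branches: the unsecured record is convenient for the upgrade, but anyone authenticated can hijack it.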
It is valid to keep the DHCP server in the DnsUpdateProxy group during the client migration period (but please don't run it on a DC in that case - we really don't want DCs performing domain-critical updates unsecured). However, there are other possibilities as well:
- Do you really need the old clients in DNS? Usually you want to be able to resolve them, and you have a running WINS infrastructure anyway. So configure the DHCP server not to perform updates for downlevel clients, and configure the DNS server to ask the WINS server for records it doesn't know. After the migration, the client is able to create its own record, since there's no DNS record for it yet.
- If you perform the migration of the clients in a short period: You could delete the client records before the migration, or even every evening when the users are gone. The clients will recreate their records the next day when booting up anyway (feasible for small to medium businesses/domains).
- If you know when you'll migrate which client, you'll also be able to delete just those records.
- You are able to anonymize the client records.
Gosh - this blog entry is long - sorry to those who pull it via RSS; I haven't found a way to blog offline while providing an excerpt for RSS.
Back to the topic - here are some commands which may help you:
To use WINS-Forward-Lookup:
In the DNS Management Console select the zone, then select the Properties of the Zone, go to the WINS-Tab and select the WINS-Server(s) to ask if the record is not in the zone.
To make sure the DHCP-Server does not create downlevel-entries:
In the DHCP Management Console, check in the properties of the server and in the properties of the specific scopes that the checkbox "Dynamically update DNS A and PTR records for DHCP clients that do not request updates (for example, clients running Windows NT 4.0)" is not checked.
To "anonymize" a DNS-Record (allow authenticated users to update it, and remove the ACL for the client computer account):
dsacls "DC=xpclient,DC=company.com,CN=MicrosoftDNS,DC=DomainDnsZones,DC=company,DC=com" /G "nt authority\authenticated users":CCRCWSWP /R company\xpclient$
You can also use dnscmd /RecordDelete to delete single or more dns-objects, or use a VB-Script which provides more functionality to query clients and delete or modify their DNS-Records.
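As a sketch of such a script (in Python rather than VBScript, and with a made-up server name, zone and client list), this just generates the dnscmd /RecordDelete calls for a list of downlevel clients:

```python
# Generate dnscmd calls that delete the A records of the given
# downlevel clients (you'd run the printed commands on the DNS server).
# dnscmd syntax assumed: dnscmd <server> /RecordDelete <zone> <node> <RRtype> /f
# (/f suppresses the confirmation prompt).

def record_delete_cmds(server, zone, clients, rrtype="A"):
    """Build one dnscmd /RecordDelete command line per client name."""
    return [
        f"dnscmd {server} /RecordDelete {zone} {name} {rrtype} /f"
        for name in clients
    ]

for cmd in record_delete_cmds("dns1", "company.com", ["nt4client1", "nt4client2"]):
    print(cmd)
```

In practice you'd feed the client list from an export (e.g. the computer accounts you're about to migrate) instead of hard-coding it.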
Note: dsacls and dnscmd are part of the Support Tools, which you'll find on the Windows Server 2003 CD in the directory [CD]:\support\tools.
I hope this helps - there are multiple possibilities for that scenario, but I think the most important point is not to run the DHCP server in the DnsUpdateProxy group for a very long time - doing it interim for a migration of the clients is OK if you are aware of the threat.
If you need to restore your domain controller, or you need to make an authoritative restore of Active Directory, you need a backup which is younger than 60 days (by default). The reason for this is that every object in Active Directory which gets deleted remains as a tombstone, to make sure that the information to delete this object is replicated to every DC before it is physically deleted from the store. The tombstone is the object reduced to a limited set of attributes, such as the GUID, name and SID of the object, plus the marker that it's deleted. The garbage collection of Active Directory takes care of finally deleting tombstones which are older than the tombstone lifetime.
So that's the reason why you are not allowed to use a backup which is older than the tombstone lifetime - you would reintroduce objects which were already deleted and may run into unexpected behavior.
So why did I say you'll get more time? In forests which are installed on top of Windows Server 2003 including Service Pack 1, the new default tombstone lifetime is tripled to 180 days. If you don't dcpromo the forest's first DC with Service Pack 1 already installed, you'll still have the default tombstone lifetime of 60 days.
You can check your tombstone-lifetime using the following command which comes with Windows Server 2003:
dsquery * "CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=yourdomain,DC=com" -scope base -attr tombstonelifetime
The Tombstone-lifetime applies to all domains in your forest.
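The rule "never restore a backup that is older than the tombstone lifetime" is easy to encode - a minimal sketch in Python (the dates are made up):

```python
from datetime import date, timedelta

def backup_usable(backup_date, today, tombstone_lifetime_days):
    """A backup is only safe to restore while it is younger than
    the forest's tombstone lifetime."""
    return (today - backup_date) < timedelta(days=tombstone_lifetime_days)

# A 90-day-old backup: fine with the new 180-day default,
# too old for a forest still on the 60-day default.
backup = date(2005, 1, 1)
today = date(2005, 4, 1)   # 90 days later
assert backup_usable(backup, today, 180)
assert not backup_usable(backup, today, 60)
```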
Note: Even if I wrote that you are able to use older backups now, I do recommend running an Active Directory backup at least every day - but you do not need to back up every DC. You should have a backup of at least every domain. One suggestion would be to back up the FSMO role owners (you'll get those with netdom query fsmo) per domain. The older a backup is, the more problems you will get with changed objects; the best-known issue is computer accounts which are not able to connect to the domain because the account's password has been changed after the backup was performed. Computer accounts change their password every 15 to 30 days, so take the number of computers in your domain, divide by 15, and you have a guess how many computer accounts change their password every day. This is also the reason why images of a workstation fall out of the domain after about 15 days (about 7 days with NT4). But that's a different topic.
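The back-of-the-envelope math from above as a quick sketch (using the worst-case 15-day interval mentioned above; the computer count is made up):

```python
def daily_password_changes(num_computers, change_interval_days=15):
    """Rough estimate of how many computer accounts change their
    password on any given day, assuming changes are spread evenly
    over the change interval."""
    return num_computers / change_interval_days

# Example: 3000 workstations -> roughly 200 password changes per day,
# i.e. 200 accounts per day that an old backup knows the wrong password for.
assert daily_password_changes(3000) == 200
```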
Now the word is spread, so I can blog this as well: Windows Server 2003 SP1 (and the x64 version) finally introduces a feature whereby folders underneath a share can be hidden when the user browsing the parent folder or share has no read permissions on that folder. It's been requested for years, and it has finally made its way into Windows Server. This feature is called Access-based Enumeration.
You can switch this on and off for every share with a command-line tool (abetool.exe), and in the final version this is supposed to be supported in the GUI as well.
Joe Richards, MVP for Windows Server - Directory Services and author of the Joeware-Tools
has created the tool ShrFlgs
which also enables the Access-based Enumeration on Shares.
The last weeks have been very busy. First of all I had to train a couple of employees of a German company who had been maintaining their OS/390 host but whose jobs are switching to administering Windows Server 2003 and AD. That was the hardest training I ever gave - I was adjusting content and labs every night in the hotel room while training them during the day, so that they got the most out of their training. But it has also been absolutely the most interesting training I ever gave - they were very interested, and some were very deep in their own technology area and wanted to know how it compares in Windows Server and AD. So I had to teach the basics but also discuss the specifics of the security model in Windows (what tokens contain, what security descriptors contain, how they are matched and so on), the kernel architecture and a lot of other interesting topics. It was great (and thanks for the great feedback as well)!
After the last day of training I drove over 500 kilometers through bad weather and a lot of traffic jams to Hanover, to the CeBIT trade show. On Friday evening (for me, night) we had dinner with some community members and MS folks. It was a nice location and I met a lot of people I hadn't seen for a while. On Saturday there was a Microsoft Community Influencer meeting. I presented a "Cross-Community Success Story" there; I enjoyed the opportunity to spread the word about how great communities are and how you can ease your tasks and work by being able to rely on strong communities. In the evening I drove back to Munich, another 630 kilometers in bad weather. Needless to say, I was totally tired arriving back home after that week.
The next week was cool - I've got a barebone 64-bit system which is configured for maximum performance with Virtual Server. So I was able to create whole testing infrastructures (I've already tested running 10 servers at the same time) for Windows Server 2003 and R2. I've tested the x64 version of SP1, and I've tested Release 2 of Windows Server 2003, which is due late this year. I just love it - I hope to be able to spread the word about the new Directory Services features soon.
OK - I'll stop here - two more things: I've decided to demo a part of the Virtualization Blog
via Blogcast/Webcast or something like that soon, and I've got a great question from Bob about the DNS-Updateproxy Blog
. This one is on my high-priority list to tackle as soon as I have a few minutes, since it deserves some more beef and clarification. Stay tuned…
There's a common question in the Newsgroups, which I'd like to clarify:
Q: Is the Infrastructure Master allowed to run on a Domain Controller which also holds the Global Catalog Server?
One of the common replies - and a misunderstood rumor - is that the Infrastructure Master (IM) is only allowed to run on a Global Catalog server (GC) if every Domain Controller (DC) in the forest is a Global Catalog server. That rumor is just based on misleading wording.
The Infrastructure Master's job is to compare objects of the local domain against objects in other domains of the same forest. If the server holding the Infrastructure Master is also a Global Catalog, it won't ever see any differences, since the Global Catalog itself holds a partial copy of every object in the forest. Therefore the Infrastructure Master won't do anything in its domain. However, if every DC in the domain is also a Global Catalog server, there's no job for the IM anyway, since the GCs already know about the objects of other domains. So if you look at the job the IM has to do, it's pretty clear that it may reside on a GC in a single-domain forest (no need to pull updates from other domains). It's also pretty clear that it may reside on a GC in a multi-domain forest as long as every DC in the domain where the IM runs on a GC is also a GC (no need to pull updates, since the GCs know everything).
So the following infrastructure is a valid configuration:
First domain:
- R-DC1 (GC + IM)
- R-DC3-x (must be GC)
Other domain:
- O-DC1 (IM, not a GC)
- O-DC3-x (might or might not be GC, does not matter)
The first domain does not need to pull updates since the GCs know everything, the other domain has the IM running on a non-GC so it pulls the updates and replicates them to other DCs.
The following KB article states this correctly: http://support.microsoft.com/kb/223346/EN-US/
So to be short:
The Infrastructure Master is not allowed to run on a Global Catalog Server if both of the following conditions apply:
- there are multiple Domains in the Forest
- there are Domain Controllers in the same Domain which are not Global Catalog Servers
The Infrastructure Master is allowed to run on a Global Catalog Server in a Domain if either of the following applies:
- there's only one Domain in the Forest
- every Domain Controller in the Domain where the Infrastructure Master runs on a GC is also a Global Catalog Server (there is no non-GC DC in the domain)
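The two rules boil down to a simple predicate - a sketch in Python (the parameter names are my own):

```python
def im_on_gc_is_ok(domains_in_forest, all_dcs_in_im_domain_are_gc):
    """May the Infrastructure Master run on a Global Catalog server?

    True if the forest has only one domain, or if every DC in the
    IM's own domain is also a GC (then there are no cross-domain
    references left for the IM to update).
    """
    return domains_in_forest == 1 or all_dcs_in_im_domain_are_gc

assert im_on_gc_is_ok(1, False)      # single-domain forest: fine
assert im_on_gc_is_ok(3, True)       # all DCs in the IM's domain are GCs: fine
assert not im_on_gc_is_ok(3, False)  # multi-domain forest, mixed DCs: not allowed
```

Note that the check only cares about the DCs in the IM's own domain, not about every DC in the forest - which is exactly the wording the rumor gets wrong.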
248047 Phantoms, Tombstones and the Infrastructure Master
Details about the Active Directory EventId 1419