Compliance in DevOps and the public cloud

As a DevOps engineer, you’ve achieved greatness. You’ve containerised everything, built your infrastructure and systems in the cloud and you’re deploying every day, with full test coverage and hardly any outages. You’re even starting to think you might really enjoy your job.

Then why are your compliance teams so upset?

Let’s take a step back. You know how to build secure applications, create backups, control access to data and document everything, and in general you’re pretty good at it. You’d do this stuff whether there were rules in place or not, right?

Not always. Back in the late ’90s, a bunch of guys in suits decided they could get rich by making up the numbers in their accounts. Then in 2001 Enron filed for bankruptcy, and the suits went to jail for fraud. The result was the Sarbanes-Oxley Act of 2002, legislation that requires publicly listed firms in the US to implement controls to prevent fraud and enable effective audits.

Sarbanes-Oxley isn’t the only law that makes us techies do things in certain ways, though. Other compliance regimes include HIPAA, which ensures that firms handling clinical data do so properly; GDPR, which ensures adequate protection of EU citizens’ personal data; and PCI DSS, which governs the use of payment card data in order to prevent fraud (and isn’t a law, but a common industry standard). Then there are countless other region- and industry-specific rules, regulations, accreditations and standards, such as ISO 27001 and Cyber Essentials.

Aside from being good practice, the main reason you’d want to abide by these rules is to avoid losing your job and/or going to jail. It’s also worth recognising that demonstrating compliance can provide a competitive advantage over organisations that don’t comply, so it makes business sense too.

The trouble is, compliance is an old idea applied to new technology. HIPAA was enacted in 1996, Sarbanes-Oxley in 2002 and PCI DSS in 2004 (though it is frequently updated). In contrast, the AWS EC2 service only came out of beta in late 2008, and the cloud as we know it has been around for just a few years. Compliance rules are rarely written with cloud technology in mind, and compliance teams sometimes fail to keep up to date with these platforms or with modern DevOps-style practices. This can make complying with those rules tricky, if not downright impossible at times. How do you tell an auditor exactly where your data resides, if the only thing you know is that it’s in Availability Zone A in region eu-west-1? (And don’t even mention to them that one customer’s Zone A isn’t the same as another’s.)

As any tech in a regulated industry will appreciate, compliance with these rules is checked by regular, painful and disruptive audits. Invariably, audits result in compliance levels that look something like a sine wave, peaking around each audit and dipping in between.

This is because when an audit is announced, the pressure is suddenly on to patch the systems, resolve vulnerabilities, update documents and check procedures. Once the audit is passed, everyone relaxes a bit, patching lags behind again, documentation falls out of date and the compliance state drifts away from 100%. This raises the question: if we only become non-compliant between audits, is the answer to have really, really frequent audits?

In a sense, yes. However, we can no longer pretend that tick-box spreadsheet audits and infosec sign-off at the deployment approval stage actually work. Traditional change management and compliance practices deliberately slow us down, with the intention of reducing the risk of mistakes.

This runs counter to modern DevOps approaches. We move fast, making rapid changes and allowing teams to be autonomous in their decision making. Cloud technology confuses matters even further. For example, how can you easily define how many servers you have and what state they’re in, if your autoscaling groups are constantly killing old ones and creating new ones?

From a traditional compliance perspective, this sounds like a recipe for disaster. But we know that making smaller, more frequent changes will result in lower overall risk than large, periodic changes. What’s more, we take humans out of the process wherever possible, implementing continuous integration and using automated tests to ensure quality standards are met.

From a DevOps perspective, let’s consider compliance as three core pillars. The first pillar is achieving compliance: the technical work of ensuring workloads and data are secure, and that everything is up to date, controlled and backed up. This bit’s relatively easy for competent techs like us.

The second pillar is demonstrating compliance. How do you show someone else, without too much effort, that your data is secure and your backups actually work? This is a little more difficult, and far less fun.

The third pillar is maintaining compliance, and this is the real challenge. How do you ensure that, with rapid change, new technology and multiple teams involved, the system you built a year ago is still compliant now? This comes down to process and culture, and it’s the most difficult of the three pillars to achieve.

But it can be done. In DevOps and Agile culture, we shift left. We shorten feedback loops, decrease batch size, and improve quality through automated tests. This approach is now applied to security too, by embedding security tests into the development process and ensuring that it’s automated, codified, frictionless and fast. It’s not a great leap from there towards shifting compliance left too, codifying the compliance rules and embedding them within development and build cycles.

First we had Infrastructure as Code. Now we’re doing Compliance as Code. After all, what is a Standard Operating Procedure, if not a script for humans? If we can “code” humans to carry out a task in exactly the same way every time, we should be able to do it for machines.

Technologies such as AWS Config or InSpec allow us to constantly monitor our environment for divergence from a “compliant” state. For example, if a compliance rule deems that all data at rest must be encrypted, we can define that rule in the system and ensure we don’t diverge from it – if something, human or machine, creates some unencrypted storage, it will either be flagged for immediate attention or automatically encrypted.
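
To make that concrete, here’s a minimal sketch of the idea using the AWS Tools for PowerShell (the module name, region and credential setup here are assumptions, and a managed service like AWS Config or a tool like InSpec would give you a richer version of the same check). It simply hunts for EBS volumes that break an “everything encrypted at rest” rule, and could be run on a schedule or wired into a pipeline:

# Requires the AWS Tools for PowerShell and credentials allowed to call ec2:DescribeVolumes
Import-Module AWS.Tools.EC2

# Find any EBS volumes that break the "all data at rest is encrypted" rule
$unencrypted = Get-EC2Volume -Region eu-west-1 -Filter @{ Name = "encrypted"; Values = "false" }

if ($unencrypted) {
    # Flag for immediate attention - or trigger automated remediation here instead
    $unencrypted | Select-Object VolumeId, State, Size | Export-Csv .\unencrypted-volumes.csv -NoTypeInformation
    Write-Warning "Unencrypted volumes found - see unencrypted-volumes.csv"
}
else {
    Write-Output "All EBS volumes are encrypted at rest."
}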

One of the great benefits of this approach is that the “proof” of compliance is baked into your system itself. When asked by an auditor whether data is encrypted at rest, you can reassure them that it’s so by showing them your rule set. Since the rules are written as code, the documentation (the proof) is the control itself.

If you really want your compliance teams to love you (or at least quit hassling you), this automation approach can be extended to documentation. Since the entire environment can be described in code at any point in time, you can give an auditor point-in-time documentation of what exists in that environment right now – or at any point in the past, provided it’s been recorded.

By involving your compliance teams in developing and designing your compliance tools and systems, asking what’s important to them, and building in features that help them to do their jobs, you can enable your compliance teams to serve themselves. In a well designed system, they will be able to find the answers to their own questions, and have confidence that high-trust control mechanisms are in place.

Compliance as Code means that your environment, code, documentation and processes are continuously audited, and that third, difficult pillar of maintaining compliance becomes far easier.

This is what continuous compliance looks like. Achieve this, and you’ll see what a happy compliance team looks like too.

 

The OSI model for the cloud

While I was putting together a talk introducing AWS, I was considering how to structure it and thinking about the “layers” of cloud technology. I realised that the more time I spend talking about “cloud” technology – how best to exploit it, manage it, develop with it and build business operations on it – the less our traditional terminologies and models apply in the same way.

Take the OSI model, for example, with its seven layers: physical, data link, network, transport, session, presentation and application.

When we’re managing our own datacentres, servers, SANs, switches and firewalls, we need to understand this. We need to know what we’re doing at each layer, who’s responsible for physical connectivity, who manages layer 3 routing and control, and who has access to the upper layers. We use the term “layer 3” to describe IP-based routing and “layer 7” to describe functions interacting at the software level, and crucially, we all know what each other means when we use these terms.

With virtualisation, we began to abstract layers 3 and above (layer 2? Maybe?) into software defined networks, but we were still in control of the underlying layers, just a little less “concerned” about them.

Now, with cloud tech such as AWS and Azure, even this doesn’t apply any longer. We have different layers of control and access, and it’s not always helpful to try to use the OSI model terms.

We pay AWS or Azure, or someone else, to manage the dull stuff – the cables, the internet connections, power, cooling, disks, redundancy, and physical security. Everything we see is abstract, virtual, and exists only as code. However, we still possess layers of control and management. We may create multiple AWS accounts to separate environments from each other, we’ll create different VPCs for different applications, multiple subnets for different functions, and instances, services, storage units and more. Then we might hand off access to these to developers and testers, to deploy and test applications.

The point is that we don’t yet seem to have a common language, similar to the OSI model, for cloud architecture. Below is a first stab at what this might be. It’s almost certainly wrong, and can certainly be improved.

Let’s start with layer 1 – the physical infrastructure. This is entirely in the hands of the cloud provider, such as AWS. Much of the time, we don’t even know where this is, let alone have any visibility of what it looks like or how it works. This is analogous to layer 1 of the OSI model too, but more complex: it’s the physical machines, cabling, cooling, power and utilities present in the various datacentres used by the cloud providers.

Layer 2 is the hypervisor: the software that allows the underlying hardware to be utilised, and the abstraction between the true hardware and the virtualised “hardware” that we see. AWS uses Xen, Azure uses a modified Hyper-V, and others use KVM. Again, we don’t have access to this layer itself, only to the GUIs, CLIs and APIs layered on top of it. Those of us who started our IT careers managing physical machines and then adopted virtualisation will be familiar with how this layer let us create and modify servers far more quickly and easily than ever before.

Layer 3 is where we get our hands dirty: the software-defined data centre (SDDC). From here, we create our cloud accounts and start building stuff. It is accessed via a web GUI, command line tools, APIs or other platforms and integrations. This is essentially a management layer rather than a workload layer, in that it allows us to govern our resources and to control access, costs, security, scale, redundancy and availability. It is here that “infrastructure as code” becomes a reality.

Layer 4 is the native service (such as S3, Lambda or RDS) or machine instance (such as EC2) layer. This is where we create actual workloads, store data, process information and make things happen. At this level, we could create an instance inside a VPC, set up the security groups and NACLs, and provide access to a developer or administrator via RDP, SSH or another protocol. At this layer, humans who require access don’t need layer 3 (SDDC) access in order to do their jobs. In many ways, this is the actual IaaS (Infrastructure as a Service) layer.

Layer 5. I’m not convinced this is all that different to layer 4, but it’s useful to distinguish it for the purpose of defining *who* has access. This layer is analogous to layer 7 of the OSI model; that is, it’s end-user-facing, such as the front end of a web application, the interactions taking place on a mobile app, or the connectivity to IoT devices. Potentially, this is also analogous to SaaS (Software as a Service), if you consider it from the user’s perspective. Layer 5 applications exist as a function of the full stack underneath them – the physical resources in datacentres, the hypervisor, the management layer, the virtual machines and services, and the code that runs on or interacts with those services.

Whether or not something like an OSI model for the cloud is ever widely adopted, we’re beginning to transition into a new realm of terminology, and the old definitions no longer apply in the same way.

I hope you found this useful, and I’d love to hear your feedback and improvements on this model. Take a look at ISO/IEC 17788 if you’d like to read more about cloud computing terms and definitions.

Finally, if you’d like me to speak and present at your event or your business, or provide consultation and advice, please get in touch. 


@tomgeraghty

https://www.linkedin.com/in/geraghtytom/

The Three Ways

The Three Ways are among the underlying principles of what some people call DevOps (and what other people call “doing stuff right”). Read on for a description of each; combined, they will help you drive performance improvements, deliver higher-quality services and reduce operational costs.

1. Systems thinking.

Systems thinking involves taking into account the entire flow of a system. This means that when you’re establishing requirements or designing improvements to a structure, process or function, you don’t focus on a single silo, department or element. This principle is reflected in the “Toyota Way” and in the excellent book “The Goal” by Eliyahu M. Goldratt and Jeff Cox. Applying systems thinking, you should never pass a defect downstream, or increase the speed of a non-bottleneck function. To use this principle properly, you need to seek a profound understanding of the complete system.

It is also necessary to avoid 100% utilisation of any role in a process; in practice, it’s important to bring utilisation below around 80% in order to keep wait times acceptable, because wait times rise sharply as utilisation approaches 100%.
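
As a rough illustration of why (using the common queueing rule of thumb that wait time is proportional to the ratio of busy time to idle time – the exact numbers depend on the workload, so treat them as indicative only):

# Relative wait time grows as busy/idle: roughly 4x at 80% utilisation, 19x at 95%
foreach ($utilisation in 0.50, 0.80, 0.90, 0.95, 0.99) {
    $waitFactor = $utilisation / (1 - $utilisation)
    "{0:P0} utilised -> wait-time factor of roughly {1}" -f $utilisation, [math]::Round($waitFactor, 1)
}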

2. Amplification of feedback loops.

Any (good) process has feedback loops – loops that allow corrections to be made, improvements to be identified and implemented, and those improvements to be measured, checked and iterated upon. For example, in a busy restaurant kitchen serving meatballs and pasta, if the chef making the tomato sauce adds too much salt, it might be picked up by someone tasting the dish before the waiter takes it away – but by then the dish is ruined. Maybe it should be picked up by the chef making the meatballs, before the sauce is added to the pasta? Maybe at the hand-off between the two chefs? How about checking it before it even leaves the sauce chef’s station? By shortening the feedback loop, mistakes are found sooner, rectified more easily, and the impact on the whole system – and the product – is lower.

3. Continuous Improvement.

A culture of continual experimentation, improvement, taking risks and learning from failure will trump a culture of tradition and safety every time. It is only by mastering skills and taking ownership of mistakes that we can take those risks without incurring costly failures.

Repetition and practice are the key to mastery, and by treating every process as an evolutionary stage rather than a fixed method, it is possible to continuously improve and adapt to even very dramatic change.

It is important to allocate time for improvement; this could come out of the 20% of “idle” time you gain by keeping a role’s utilisation below 80%. Without dedicated time to focus on improvement, inefficiencies and flaws will persist and amplify, costing far more than the capacity “lost” by reducing utilisation.

By applying the Three Ways described above, introducing faults into systems to increase resilience, and fostering a culture that rewards risk-taking while owning mistakes, you’ll drive higher-quality outcomes, higher performance, lower costs and lower stress!

For my presentation on the Three Ways, click here. Feel free to use, adapt, and feed back to me 🙂

10 elements of managing a successful IT team

  • Give time to your team
    • 1-1s, development reviews, PDRs, working together on projects, or just time for a coffee and a chat. Whatever you call it, it’s important to regularly spend time with each of your team members. Rarely, if ever, will you find that one of these sessions wasn’t worthwhile. Just don’t rush it.
  • Make sure everyone has a role.
    • Every single member of your team is important, and everyone needs to feel that their efforts are worthwhile, whether they’re setting up new servers, systems and infrastructure, or manning the telephones and taking calls. Nobody likes to feel like a spare wheel, and it’s unproductive, but it can easily happen.
  • Take them with you.
    • Going to a conference, seminar, networking event or similar? Take one of the team with you, and prioritise the junior members. It’s a great learning experience for them, and a good bonding exercise for the both of you. You don’t need to do this every time, but depending on the size of the team, it should at least be possible to do this once a year per team member.
  • Put the team first.
    • Your team get things done. Without them, you’re nothing. Put them first, and make sure they know you’re fighting their corner. Even if it means taking the hit for something yourself, or a knock to your reputation in the business, if your team see you working hard for them, they’ll work hard for you. In the long run, this is what matters most.
  • Be a good role model
    • Demonstrate a good work / life balance. This isn’t easy, particularly in IT, where the servers don’t sleep just because you do, but if you can show that you work when you need to and relax when you can, making the most of your free time, it’ll set an example that helps prevent burn-out and makes for a more productive, enjoyable work environment.
    • Don’t be late. Set standards that the rest of the team can abide by. Get to work on time, be prompt for meetings. Don’t be a “Do as I say, not as I do” boss.
    • Be tidy. If you want your team to keep a tidy workspace, it’s going to be a lot easier if you set a good example.
    • Put in the extra hours when you need to, but make sure you take those holidays that you earn. Don’t make your team feel guilty if they ask for time off.
    • Customer service – put the customer first. In internal IT departments, the customer is the end-user, and the old stereotype of IT helpdesk staff disliking end users still holds true in many cases. Make sure your team know that while half of their job is technical, in some ways the most important half is good old customer service. Set an example by providing excellent service to your customers.
    • Respect your colleagues – set a good example by not complaining about your colleagues in the business. Even if you’ve been terribly disappointed or let down by one of your peers, don’t pass that down to your team. It’s demotivating for them to hear, and can damage relationships between departments and teams. Be open, but not negative.
    • Enjoy your job and be positive! If you don’t enjoy what you do, it’ll be clear to your team, but if you enjoy what you do, that positivity will spread.
  • Ask for feedback
    • Don’t be afraid to ask for feedback from your team. This can be intimidating, especially in person, but it’s absolutely invaluable. Asking “is there anything I could be doing that I’m currently not doing?” or “What could I be doing better?” will provide you with superb information to help you develop and improve as a manager, and help to identify any issues that could be hindering the team’s productivity. If the answer to both of these questions is “nothing”, then well done – but make sure you ask regularly, and phrase the questions differently each time to tease out any issues.
  • Keep up to date.
    • Ask for regular updates on performance, tasks, challenges, difficulties and successes. Whether you do this via email, phone, in person, or some other way will depend on your particular circumstances. Personally, I like the “15/five” style of weekly report via email, meaning it should take them 15 minutes to write, and you 5 minutes to read, but use whatever works for you.
  • Focus on development.
    • IT careers are all about what you know, and what experience you have. If you let your staff development fall behind, not only will they become less productive, but they’ll be thinking about moving on to somewhere else to continue to learn and develop their skills and knowledge.
    • Engender a culture of learning and knowledge sharing. In our team, we share “discoveries” every Friday via group emails, demonstrating what we’ve learned or discovered that week – from how to create a new maintenance task in SQL Server, to what the new features of the iPhone 6 will be, to facts about dinosaurs, particle accelerators, or IT industry figures…
  • Follow through on what you say.
    • This should go without saying, but you see it all the time. If you say you’ll do something, do it. Or, if it turns out that you can’t, don’t have time, or the situation changes, inform your team and explain why.
  • Be the best that you can be.
    • No pressure, right? Always strive to be as good as you can possibly be. Don’t burn yourself out, but be constantly looking for ways to improve yourself, the team, the environment, your business and your role. Be awesome.

 

Have I missed anything? I’m sure I have, so let me know by commenting.

Work in IT? Here’s how to ask for a pay rise.

Either ask for a review or 1-1 with your manager, or wait until the next scheduled one. I’d prefer one of my team to ask me for a chat about salaries rather than ambush me with a request, but whatever works with your company culture.

In terms of negotiating, use the following:

  • What have you achieved in your role in the business, and what benefit has that returned? Ignore your standard duties – that’s what you’re employed for anyway. If you do something that clearly makes/saves the business £100k pa, a few k raise is an easy decision.
  • What’s the pay grade for your job across your industry? If you’re good, I don’t want to lose you just because I didn’t pay you enough. Equally, be careful of earning well over the industry average – you’ll find it hard to move on without taking a pay cut.
  • Be aware of any mistakes or failures you’ve had. It’s no good shouting about the £100k project you managed if you also ran one that lost £150k.
  • Look at the financial status of the business. If the business is doing well and has turned a sizeable profit, highlight it. This not only shows that the business could afford to give you the raise, but that you’re savvy enough to understand the commercial world you operate in. If the business turned a loss, be very wary of asking for a raise.
  • Have a backup plan. Could you ask for an additional training course? A performance-related bonus instead of a flat raise? If times are hard for the business, could you suggest a post-dated raise, or extra holiday in lieu of pay?
  • Be aware that with a raise comes extra responsibility. Don’t make your manager regret their decision to invest extra money in you. If taking that raise means working an extra few hours a week and extra pressure to hit targets, do you still want it?
  • Play the long game. Don’t suddenly start putting in a few extra hours here and there a few days before you ask. Be consistently excellent long-term.
  • Be aware of the rest of your team. It’s potentially worth suggesting not just a raise for yourself, but a blanket raise for the team, or certain members. Do you want to be the one on £10k more than your team-mate?
  • Ultimately, make the decision easy to make for your manager. They’re going to have to justify it in their budget, and potentially go to ask their boss for the money to pay you anyway. They don’t want to regret their decision.
  • Finally. Don’t forget to actually ask for the pay rise.

How to write an SPF record

An SPF record is a DNS TXT record (sitting alongside your A records and MX records) that indicates to receiving mail servers whether an email has come from a server that is “allowed” to send email for that domain – i.e. it’s a check that should prevent spammers impersonating your domain. It does rely on the receiving server actually doing the check, which not all do, so it’s by no means foolproof, but it should help prevent legitimate mass email from your organisation being flagged as potential spam.

 

Below is an example SPF record for capitalfmarena.com:

(this is in the public domain – you can look up an organisation’s SPF record by using online SPF checkers)

 

“v=spf1 ip4:93.174.143.18 mx a:service69.mimecast.com mx a:service70.mimecast.com a:capitalfmarena.com -all”

 

v=spf1 specifies the version of SPF in use.

ip4: pass if the sender’s IP address matches the listed address(es), i.e. the addresses we send mail from.

mx: pass if the sender’s IP matches one of the domain’s MX records.

a: pass if the sender’s IP matches an A record, either for the domain itself or for the host named after the colon.

-all indicates that all other senders fail the SPF test. (+all would mean anyone can send mail on the domain’s behalf.)

(~all is a “soft fail”. It was common while SPF was still being adopted, but shouldn’t really be used any longer, except perhaps while you’re transitioning between mail hosts.)

Mechanisms are tested in order, and any match will pass the email. A non-match results in a neutral state until evaluation reaches the end of the string, where the -all mechanism fails it.
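
Incidentally, you don’t need a web-based checker to look up a domain’s SPF record. On Windows 8 / Server 2012 or later, PowerShell’s Resolve-DnsName will do it (older systems can use nslookup -type=TXT instead):

# Fetch the domain's TXT records and keep only the SPF one
Resolve-DnsName capitalfmarena.com -Type TXT |
    Where-Object { $_.Strings -match "^v=spf1" } |
    Select-Object -ExpandProperty Strings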

 

IT & Web Infrastructure Masterclasses

Through March 2013, I’m running a set of IT and Web infrastructure masterclasses in Nottingham (in conjunction with PCM Projects), for people who don’t necessarily work in IT, but need to know (or would benefit from knowing) some of the basics.

The intended audience is small business owners and managers who may have to deal with IT contractors or staff and decide IT and web strategy, but who aren’t comfortable that they know enough about it to make informed decisions. For example, there are an almost infinite number of ways to keep your business data accessible, secure, backed up and away from prying eyes, but which way is best for you? How should you manage your website – should you pay someone else to design and host it, or bring it in-house? How should you handle email, and on what sort of server? How should you plan for business growth? How do you protect your business from viruses, malware, spam and hacking attempts?

These are the sorts of questions I will help you with – you don’t need any existing knowledge of IT or the web, and because the groups are small – around six people – you’ll be able to ask questions and find out information specific to how your business operates.

You’ll then have enough knowledge to go to your suppliers or contractors, ask the right questions, and purchase the right services at the right price.

There are four sessions, listed below, and you can book yourself onto them via the Eventbrite page for the events. Contact me for any further information.

 

Technically Speaking – 4 March

Topics to include: an overview of web/IT infrastructure and how it all fits together; an update on the current climate; domain names, analytics, and connections to social technology.

 

Email & Communication – 11 March

Topics to include: different service providers and set-ups (e.g., using hosted email, managing it in-house) and getting it all working for PCs and on mobile devices; good email practice, transferring data and keeping it secure.

 

Internet Security – 18 March

Topics to include: how to stay safe and keep trading; what are the threats – viruses, hack attacks, theft, loss of confidential or valuable data; keeping your business (and family) safe on the internet; and keeping your systems up to date and secure.

 

Data storage – 25 March

Topics to include: managing data storage and growth in your business; internal networks and cloud storage; back-ups; access controls, speed vs. reliability vs. cost.

Virtual Domain Controllers and time in a Hyper-V environment

In a “normal” (read: physical) domain environment, all the domain member machines, such as servers and PCs, use the PDC (Primary Domain Controller – strictly, the domain controller holding the PDC emulator role) as the authoritative time source. This keeps all the machines in a domain synchronised to within a few milliseconds and avoids any problems due to time mismatch. (If you’ve ever tried to join a PC to a domain with a significantly different time setting, you’ll have seen how this can affect Active Directory operations.)

However, virtual machines are slightly different. VMs use their virtual host as the authoritative time server – it’s essential that the virtual host and the guests operate on the same time. Run the below command in a command prompt on a VM:

C:\>w32tm /query /source

And it should return:

VM IC Time Synchronization Provider

If you run the same command on the host itself, it’ll just return the name of one of the domain controllers in your network (probably, but not necessarily, the PDC).

Now, what if your domain controllers are virtual? They’ll be using their host machine’s time as the source, but the hosts themselves will be using the PDC as an authoritative time source – the problem is clear: they’re using each other as authoritative time sources and network time will slowly drift away from the correct time.

You may decide to disable integration services for the guest (the PDC), and configure an authoritative external time source, but if the PDC is rebooted or goes offline and comes back online with a different time than the host (such as a restore), you’ll have problems. Granted, this should fix 90% of issues, but I wouldn’t recommend it as a solution.

(Screenshot: disabling time synchronisation under Integration Services in Hyper-V.)

In an ideal world, you’d still have at least one physical PDC, which would use an external time source and serve time to all other machines in the network, but if your infrastructure is such that you only have virtual domain controllers, you’ll need to do something a little different. The best way to do this is to set your virtual hosts to use the same external (reliable) time source. This does of course require that your virtual hosts have access to the internet, but at least you should be able to add firewall rules allowing access to a fixed range of NTP servers, which should pose no security threat.

To do this, log on to your (Windows) virtual host (in this case, I’m using Hyper-V Server 2008 R2).

Run

C:\>w32tm /query /source

And it’ll return one of the domain controllers.

Use the command prompt to open regedit, and navigate to HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters.

By default, the “Type” value will be set to “NT5DS”, meaning the host is taking its time from the domain hierarchy.

Change the “Type” value to “NTP”, and if you wish, change the “NtpServer” value to something other than the default Windows time server, although you can leave it as it is.

(Screenshot: the updated registry time settings.)

Now that you’ve changed the registry entries, run:

net stop w32time & net start w32time

then

w32tm /query /source

And it should return the new internet time servers.

Run:

w32tm /resync /force

to force a resync of the machine’s clock.
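
As an aside, the same configuration can usually be made without editing the registry directly, using w32tm itself from an elevated command prompt (the pool.ntp.org hosts below are just example time sources – substitute whichever NTP servers you’ve allowed through your firewall):

w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /syncfromflags:manual /update

net stop w32time & net start w32time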

Log on to the virtual machine running on this host, and check the time. Force a resync if you want – it won’t do any harm, and at least you’ll know it’s synced.

If you now run:

W32tm /monitor

on any machine, it will display the potential time servers in your network, and the time offset between them. If all is correct in your network, the offset should be pretty small (though it will never be zero).

domaincontroller1.domain.local *** PDC ***[ipaddress:123]:
    ICMP: 0ms delay
    NTP: +0.0000000s offset from domaincontroller2.domain.local
        RefID: 80.84.77.86.rev.sfr.net [86.77.84.80]
        Stratum: 2
domaincontroller2.domain.local[ipaddress:123]:
    ICMP: 0ms delay
    NTP: -0.0827715s offset from domaincontroller1.local
        RefID: 80.84.77.86.rev.sfr.net [86.77.84.80]
        Stratum: 2
Warning:
Reverse name resolution is best effort. It may not be
correct since RefID field in time packets differs across
NTP implementations and may not be using IP addresses.

 

If you find a domain member machine (whether it’s a server or a simple client) that is not set to use the proper domain NTP server, run the command below:

w32tm /config /syncfromflags:DOMHIER /update

This command instructs the machine to search for and use the best time source in a domain hierarchy.

 

Fixing “the trust relationship between this workstation and the primary domain failed” without leaving the domain or restarting.

Sometimes you’ll find that, for any one of a multitude of reasons, a workstation’s computer account becomes locked or otherwise out of sync with the actual workstation (say, if a machine with the same name joins the network, or if it’s been offline for a very long time). When you try to log on to the domain, you’ll get a message that states:

 

“the trust relationship between this workstation and the primary domain failed”

 

Now, what I would normally do in this situation is un-join and re-join the workstation to the domain, which works, but creates a new SID (Security Identifier) and can therefore break existing trusts in the domain with that machine, and of course it requires a reboot. So if you don’t want to reboot, and you don’t want to break existing trusts, do this:

 

Use netdom.exe in a command prompt to reset the password for the machine account, from the machine with the trust problem.

 

netdom.exe resetpwd /s:<server> /ud:<user> /pd:*

<server> = a domain controller in the joined domain

<user> = an account, in DOMAIN\User format, with rights to change the computer account password

* = prompts you to enter that user’s password

 

That should do it, in *most* cases.
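
If you’d rather do the same thing from PowerShell (version 3.0 or later), the built-in cmdlets below are a rough equivalent – run them on the affected machine, supplying a credential with rights to reset the computer account (the dc01.yourdomain.local name is just a placeholder):

# Reset the computer account password against a specific domain controller
Reset-ComputerMachinePassword -Server dc01.yourdomain.local -Credential (Get-Credential)

# Or test the secure channel and repair it in one step
Test-ComputerSecureChannel -Repair -Credential (Get-Credential)

Either way, no reboot and no re-join should be needed.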



Find mailboxes that are set to automatically forward email in Exchange 2010

Every time someone leaves your organisation, you’ll probably need to forward their mail to another mailbox, but over time this can get disorganised and messy. Use the below command to extract a .csv formatted table of mailboxes that have a forwarding address:

Get-Mailbox -ResultSize 6000 | Where {$_.ForwardingAddress -ne $null} | Select Name, ForwardingAddress, OrganizationalUnit, WhenCreated, WhenChanged, DeliverToMailboxAndForward | Export-Csv E:\forwardedusers.csv

I set a limit of 6000 because we have almost that many mailboxes, and the limit in this case is the number of mailboxes this will query, rather than the number of actual results. I’m sure this means that there’s a more efficient way of running this query, but it’s not like you’re doing this every day, so it doesn’t really matter.
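
That said, if you’d prefer the server to do the filtering for you, something like the sketch below should also work on Exchange 2010 (ForwardingAddress is a filterable property, so there’s no need to guess at a -ResultSize):

Get-Mailbox -ResultSize Unlimited -Filter {ForwardingAddress -ne $null} |
    Select Name, ForwardingAddress, OrganizationalUnit, WhenCreated, WhenChanged, DeliverToMailboxAndForward |
    Export-Csv E:\forwardedusers.csv -NoTypeInformation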

Once you’ve got this information, you might want to match this up with further details about the users that own these mailboxes. Use the Active Directory powershell tools with Server 2008 to extract this information.

Fire up PowerShell on a domain controller (or remotely), and run “Import-Module ActiveDirectory”.

Then execute:

Get-ADUser -SearchBase "DC=yourdomain,DC=local" -Properties SamAccountName,Description | Export-Csv C:\allusers.csv

Because the command doesn’t include the mandatory -Filter parameter, PowerShell will prompt you for it. At the “Filter:” prompt, type:

name -like "*"

Then get this data into Excel in two different worksheets.

Use the VLOOKUP tool to compare the two worksheets (in a third one), and collate the fields for the user’s name, forwarding address, and description:

In your “working worksheet”, make the first column pull the display name from the mail worksheet, name the second column “description” (this is what I’m looking for, anyway), and use the third column onwards for any other data you’d like to show, such as OU, modified dates, and suchlike.

In the description column, enter:

=VLOOKUP(mail!A2,allusers!$D:$E,2,FALSE)

“mail” refers to the worksheet containing the data extracted from Exchange, and A2 should be the first user’s Name field (copy the formula downwards so that you’re looking up A3, A4, A5, etc.).

“allusers” refers to the Active Directory information worksheet – so in this case the formula attempts to match the mail A2 field against anything in the D column of allusers (the first column in the $D:$E range), and then returns the corresponding value from the E column (because I’ve specified “2”, which in my case is the description field). The FALSE at the end ensures that you’re searching for an exact match.

Copy this formula down along with the list of users that have email forwarding enabled, and you’ll have a list of forwarded users along with their names, descriptions, modified dates, OUs, and any other data you like.
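
Alternatively, if you’d rather skip the Excel step altogether, a rough PowerShell sketch like the one below joins the two CSV exports on the Name column (the file paths and column names match the examples above – adjust to suit):

# Index the AD export by Name so lookups are quick
$ad = @{}
Import-Csv C:\allusers.csv | ForEach-Object { $ad[$_.Name] = $_ }

# Add the matching AD description to each forwarded mailbox and export the combined list
Import-Csv E:\forwardedusers.csv |
    Select-Object Name, ForwardingAddress,
        @{ Name = "Description"; Expression = { $ad[$_.Name].Description } } |
    Export-Csv C:\forwardedusers-with-descriptions.csv -NoTypeInformation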