The Three Ways

The three ways are among the underlying principles of what some people call DevOps (and what other people call “doing stuff right”). Read on for a description of each; combined, they will help you improve performance, deliver higher-quality services, and reduce operational costs.

1. Systems thinking.

Systems thinking involves taking into account the entire flow of a system. This means that when you’re establishing requirements or designing improvements to a structure, process, or function, you don’t focus on a single silo, department, or element. This principle is reflected in the “Toyota Way” and in the excellent book “The Goal” by Eliyahu M. Goldratt and Jeff Cox. Applying systems thinking, you should never pass a defect downstream, and never bother speeding up a non-bottleneck function – local optimisation there does nothing for overall throughput. To utilise this principle properly, you need to seek a profound understanding of the complete system.

It is also necessary to avoid 100% utilisation of any role in a process; in fact it’s important to bring utilisation below 80% in order to keep wait times acceptable, as the quick sketch below shows.
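As a rough rule of thumb from queueing theory, wait time scales with the ratio of busy time to idle time, so queues explode as utilisation creeps towards 100%. A quick PowerShell sketch makes the point:

# Rule-of-thumb relative wait time: busy time / idle time
foreach ($utilisation in 50, 80, 90, 95, 99) {
    $wait = [math]::Round($utilisation / (100 - $utilisation), 1)
    "$utilisation% utilised -> relative wait time $wait"
}

At 50% utilisation a task waits 1 “unit”; at 80% it waits 4; at 99% it waits 99. That’s why running people and systems flat out is a false economy.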

2. Amplification of feedback loops.

Any good process has feedback loops – loops that allow corrections to be made, improvements to be identified and implemented, and those improvements to be measured, checked, and iterated on. For example, take a busy restaurant kitchen serving meatballs and pasta. If the guy making the tomato sauce has added too much salt, it’ll be picked up by someone tasting the dish before the waiter takes it away, but by then the dish is ruined. Maybe it should be picked up by the chef making the meatballs, before the sauce is added to the pasta? Maybe at the hand-off between the two chefs? How about checking it before it even leaves the tomato-sauce guy’s station? By shortening the feedback loop, mistakes are found faster, rectified more easily, and the impact on the whole system – and the product – is lower.

3. Continuous Improvement.

A culture of continual experimentation, improvement, taking risks and learning from failure will trump a culture of tradition and safety every time. It is only by mastering skills and taking ownership of mistakes that we can take those risks without incurring costly failures.

Repetition and practice are the key to mastery, and by treating every process as an evolutionary stage rather than a fixed method, it is possible to continuously improve and adapt to even very dramatic change.

It is important to allocate time to improvement, which could come out of the 20% “idle” time you’ve created if you’ve properly managed the utilisation of a role. Without time actually set aside to focus on improvement, inefficiencies and flaws will persist and amplify, costing far more than the capacity “lost” by reducing utilisation.

By utilising the three ways as above, by introducing faults into systems to increase resilience, and by fostering a culture that rewards risk taking while owning mistakes, you’ll drive higher quality outcomes, higher performance, lower costs and lower stress!

Streaming music services and the future of consuming music

I’m listening to Spotify while I write this. I’ve been a premium subscriber since early 2010, which means I’ve so far paid Spotify £390, of which around 70% has gone to the artists. It took me a while to get used to the idea that I didn’t “own” the music I was listening to, but the benefits of being able to listen to anything I wanted, whenever I wanted, and the chance to discover new music made up for it, and I now believe that as long as streaming services exist, I’ll never buy a CD again. I won’t bang on about how great it is, because you’re generally either into streaming or not, and that usually depends on how you listen to your music.

There’s a lot of bad press about streaming services and the supposedly bad deal that the content creators (artists) get from them. Atoms for Peace pulled their albums from Spotify and other streaming services, with band members Thom Yorke and Nigel Godrich criticising these companies for business models that they claimed were weighted against emerging artists. I disagree. Anyone who thinks they can create some music and make a living from it using streaming services alone is living in a dream world. The music business has changed, and for the better in my opinion. Gone are the days when a band could release a CD, sell hundreds of thousands or millions of copies and rake in the big bucks (though don’t forget the record labels and other third parties taking the lion’s share). Some people compare streaming to that old business model, and that’s where it looks like artists are getting a worse deal, but it’s not a fair comparison.

Musician Zoë Keating earned $808 from 201,412 Spotify streams of tracks from two of her older releases in the first half of 2013, according to figures published by the cellist as a Google Doc. Spotify apparently pays 0.4 cents (around 0.3p) per stream to the artist. When artists sell music (such as a CD), they get a one-off cut of the selling price. When that music is being streamed, they get a (much smaller) payment for every play. Musician Sam Duckworth recently explained how 4,685 Spotify plays of his last solo album earned him £19.22, but the question is just as much about how much streams of the album might earn him over the next 10, 20, 30 years.

If you created an album yourself, and you had a choice between two customers – one who would buy the CD, giving you a £0.40 cut, and one who would stream it, providing you with £0.004 per stream – which customer would you choose? At those rates the break-even point is 100 plays (£0.40 ÷ £0.004). Part of this actually depends on how good you think your music is, and how enduring its appeal will be. If it’s good enough, and all the songs on that album are good (all killer, no filler!), then it’s going to get played a lot, making streaming more lucrative over time; but if it’s poor, with only a couple of decent tracks, and maybe not as enduring as it could be (think The Beatles vs One Direction), then a CD is going to be more lucrative, because after a year or so that CD will be collecting dust at the bottom of the shelf, never to be played again.

I can’t easily find a way to show the number of plays per track in my Spotify library, apart from my Last.fm scrobble stats, which won’t be entirely accurate as they only record what I listen to in online mode, but I’ve pasted the top plays per artist below:

The Gaslight Anthem (621 plays)

Chuck Ragan (520 plays)

Frank Turner (516 plays)

Silversun Pickups (425 plays)

Biffy Clyro (305 plays)

Ben Howard (302 plays)

Sucioperro (241 plays)

Eddie Vedder (225 plays)

Blind Melon (173 plays)

Foo Fighters (166 plays)

Iron & Wine (141 plays)

Saosin (121 plays)

Benjamin Francis Leftwich (119 plays)

Cory Branan (116 plays)

Twin Atlantic (112 plays)

Kassidy (101 plays)

Funeral for a Friend (94 plays)

Molly Durnin (89 plays)

Crucially, of the 18 artists above, at least four or five are ones I discovered on Spotify. Its radio and “discover” tools are actually really good (90% of the time), and of those discovered artists, I’ve seen two live in the past year or so. If we stop trying to think in pure instant-revenue terms, streaming services provide a great part of a business model that includes long-term small payments to artists and allows consumers to discover new music more easily.

Artists need to build themselves a business that incorporates records, songs, merchandise and/or tickets, and look for simple ways to maximise all those revenues.

Crucially, they also need to start developing premium products and services for their core fanbase – fans who have always been willing to buy more than a gig ticket every year and a record every other, but who were often left under-supplied by the old music business. That’s why, for artists, the real revolution caused by the web isn’t the emerging streaming market, but the boom in direct-to-fan and pre-order sites.

Frank Turner believes we may eventually move towards a model where all music is free, but artists are fairly compensated. Talking about piracy and torrenting, he says: “I can kind of accept that people download music without paying for it, but when the same people complain about, say, merch prices or ticket prices, I get a little frustrated.” He adds: “I make the vast majority of my living from live, and also from merch. Record sales tick over.”

If you look at Frank Turner’s gig archive, you’ll see he performed almost 1,500 live shows between 2004 and 2013. Most of the musicians I know do what they do because they love playing music, particularly in front of an audience. I personally believe that live music should be the core of any musician’s revenue stream, with physical music sales, streaming, merchandise, advertising, sponsorship, and other sales providing longer-term revenue. Frank seems pretty hot on Spotify, and has released a live EP exclusive to the service.

I also believe the format of live shows will change. I love small gigs in dark little venues such as the Rescue Rooms in Nottingham, but as artists become more popular and play larger venues, there is naturally some loss of fan interaction. With mobile technology, social networks, and heavy-duty wifi (802.11ac, for example), large venues can let artists interact with fans and provide a more immersive experience. Before or while the artist is on stage, content can be pushed to the mobile devices of those in the audience: what track is being played, with links to download or stream it later; exclusive content such as video and photos; merchandise; future gig listings; and even the ability to interact with other fans, in the venue or otherwise.

The future is a healthier relationship between services like Spotify and musicians, where both can find more ways to make money by pointing fans towards tickets, merchandise, box-sets, memberships, and crowdfunding campaigns such as Songkick’s Detour, and by turning simple concerts into fuller experiences for fans.

How to write an SPF record

An SPF record is a DNS TXT record (TXT being a record type, alongside the likes of A and MX records) that indicates to receiving mail servers whether an email has come from a server that is “allowed” to send email for that domain – i.e. it’s a check that should prevent spammers impersonating your domain. It does rely on the receiving server actually doing the check, which not all do, so it’s by no means foolproof, but it should help prevent mass email from your organisation to customers being flagged as potential spam.


Below is an example SPF record for capitalfmarena.com:

(This record is public – you can look up any organisation’s SPF record using one of the online SPF checkers.)
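If you’d rather check from your own machine than use a web tool, a plain TXT lookup does the job. A quick sketch using PowerShell’s Resolve-DnsName (Windows 8/Server 2012 or later; on older systems, nslookup -type=TXT capitalfmarena.com gives the same answer):

Resolve-DnsName capitalfmarena.com -Type TXT | Select-Object -ExpandProperty Strings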


“v=spf1 ip4:93.174.143.18 mx a:service69.mimecast.com mx a:service70.mimecast.com a:capitalfmarena.com -all”


v=spf1 specifies that this TXT record is an SPF record, version 1.

ip4: – pass if the sender’s IP address matches the listed address (the address we send mail from).

mx – pass if the sender’s IP matches one of the hosts named in the domain’s MX records.

a:hostname – pass if the sender’s IP matches the A record of the named host.

-all indicates that all other senders fail the SPF test. (+all would mean anyone can send mail for the domain.)

(~all is a soft fail – useful while SPF is still being rolled out, or when you’re transitioning between mail hosts, but it shouldn’t really be left in place beyond that.)

Mechanisms are tested from left to right, and the first match determines the result. A non-match simply moves evaluation on to the next mechanism, until the -all at the end of the string fails anything that hasn’t matched.
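To make the ordering concrete, here’s a simplified, hypothetical record (example.com and the 203.0.113.x address are placeholders, not taken from the record above), annotated mechanism by mechanism:

v=spf1 ip4:203.0.113.10 mx a:mail.example.com ~all

ip4:203.0.113.10 – pass if the sender’s IP is exactly 203.0.113.10; otherwise move on.
mx – pass if the sender’s IP matches one of example.com’s MX hosts; otherwise move on.
a:mail.example.com – pass if the sender’s IP matches mail.example.com’s A record; otherwise move on.
~all – anything left over soft-fails (accepted, but likely marked as suspicious).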


IT & Web Infrastructure Masterclasses

Through March 2013, I’m running a set of IT and Web infrastructure masterclasses in Nottingham (in conjunction with PCM Projects), for people who don’t necessarily work in IT, but need to know (or would benefit from knowing) some of the basics.

The intended audience is small business owners or managers, where you may have to deal with IT contractors or staff and decide IT and web strategy, but you’re not comfortable that you know enough about it to make informed decisions. For example, there are an almost infinite number of ways to keep your business data accessible, secure, backed up, and away from prying eyes, but which way is best for you? How should you manage your website – should you pay someone else to design and host it, or bring it in-house? How should you handle email, on what sort of server? How should you plan for business growth? How do you protect your business from viruses, malware, spam, and hacking attempts?

These are the sort of questions that I will help you with – you don’t need any knowledge of IT or the web already, and because the groups are small – around 6 people – you’ll be able to ask questions and find out information specific to how your business operates.

You’ll then have enough knowledge to go to your suppliers or contractors, ask the right questions, and purchase the right services at the right price.

There are four sessions, as below, and you can book yourself on them by visiting the eventbrite page for the events. Contact me for any further information.


Technically Speaking – 4 March

Topics to include: an overview of web/IT infrastructure and how it all fits together; an update on the current climate; domain names, analytics, and connections to social technology.


Email & Communication – 11 March

Topics to include: different service providers and set-ups (e.g., using hosted email, managing it in-house) and getting it all working for PCs and on mobile devices; good email practice, transferring data and keeping it secure.


Internet Security – 18 March

Topics to include: how to stay safe and keep trading; what are the threats – viruses, hack attacks, theft, loss of confidential or valuable data; keeping your business (and family) safe on the internet; and keeping your systems up to date and secure.


Data storage – 25 March

Topics to include: managing data storage and growth in your business; internal networks and cloud storage; back-ups; access controls, speed vs. reliability vs. cost.

Virtual Domain Controllers and time in a Hyper-V environment

In a “normal” (read: physical) domain environment, all the domain member machines, such as servers and PCs, use the PDC (Primary Domain Controller – strictly, the DC holding the PDC emulator role) as the authoritative time source. This keeps all the machines in a domain synchronised to within a few milliseconds and avoids problems caused by time mismatches. (If you’ve ever tried to join a PC to a domain with a significantly different time setting, you’ll have seen how this can affect Active Directory operations.)

However, virtual machines are slightly different. VMs use their virtual host as the authoritative time server – it’s essential that the virtual host and the guests operate on the same time. Run the below command in a command prompt on a VM:

C:\>w32tm /query /source

And it should return:

VM IC Time Synchronization Provider

If you run the same command on the host itself, it’ll just return the name of one of the domain controllers in your network (probably, but not necessarily, the PDC).

Now, what if your domain controllers are virtual? They’ll be using their host machine’s time as the source, but the hosts themselves will be using the PDC as an authoritative time source – the problem is clear: they’re using each other as authoritative time sources and network time will slowly drift away from the correct time.

You may decide to disable integration services for the guest (the PDC), and configure an authoritative external time source, but if the PDC is rebooted or goes offline and comes back online with a different time than the host (such as a restore), you’ll have problems. Granted, this should fix 90% of issues, but I wouldn’t recommend it as a solution.

[Screenshot: disabling the Time Synchronization integration service for a VM in Hyper-V Manager]
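For what it’s worth, on Server 2012 or later with the Hyper-V PowerShell module, the same setting can be toggled from PowerShell – “PDC01” here is a hypothetical VM name:

Disable-VMIntegrationService -VMName "PDC01" -Name "Time Synchronization"

On Hyper-V Server 2008 R2 (as used later in this post), you’re limited to the GUI checkbox shown above.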

In an ideal world, you’d still have at least one physical PDC, which would use an external time source and serve time to all other machines in the network; but if your infrastructure is such that you only have virtual domain controllers, you’ll need to do something a little different. The best way to do this is to set your virtual hosts to use the same external (reliable) time source. This does of course require that your virtual hosts have access to the internet, but you should at least be able to add firewall rules allowing access to a fixed range of NTP servers, which should pose minimal security risk.

To do this, log on to your (windows) virtual host (in this case, I’m using Hyper-V server 2008 R2).

Run

C:\>w32tm /query /source

And it’ll return one of the domain controllers.

Use the command prompt to open regedit, and navigate to HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters.

It’ll probably look like this:

[Screenshot: the default W32Time Parameters registry values]

Change the “Type” entry to “NTP”, and if you wish, change the NtpServer entry to something other than the default Windows time server (time.windows.com) – you can also just leave it as it is.

[Screenshot: the W32Time Parameters registry values after the change]
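If you’d rather not edit the registry by hand, w32tm can make an equivalent change from the command line – the pool.ntp.org hosts here are just example servers, so substitute whichever reliable source you prefer:

w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /syncfromflags:manual /update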

Now that you’ve changed the registry entries, run:

net stop w32time & net start w32time

then

w32tm /query /source

And it should return the new internet time servers.

Run:

w32tm /resync /rediscover

to force the machine to re-detect its time source and resync its clock. (/rediscover is the documented flag; the often-quoted /force isn’t a valid w32tm parameter.)

Log on to the virtual machine running on this host, and check the time. Force a resync if you want – it won’t do any harm, and at least you’ll know it’s synced.

If you now run:

w32tm /monitor

on any machine, it will display the potential time servers in your network and the time offset between them. If all is correct in your network, the offsets should be pretty small (though they will never be zero):

domaincontroller1.domain.local *** PDC ***[ipaddress:123]:
    ICMP: 0ms delay
    NTP: +0.0000000s offset from domaincontroller2.domain.local
        RefID: 80.84.77.86.rev.sfr.net [86.77.84.80]
        Stratum: 2
domaincontroller2.domain.local[ipaddress:123]:
    ICMP: 0ms delay
    NTP: -0.0827715s offset from domaincontroller1.domain.local
        RefID: 80.84.77.86.rev.sfr.net [86.77.84.80]
        Stratum: 2
Warning:
Reverse name resolution is best effort. It may not be
correct since RefID field in time packets differs across
NTP implementations and may not be using IP addresses.


If you find a domain member machine (whether it’s a server or simple client) which is not set to use the proper domain NTP server, run the below command:

w32tm /config /syncfromflags:DOMHIER /update

This command instructs the machine to search for and use the best time source in a domain hierarchy.


Fixing “the trust relationship between this workstation and the primary domain failed” without leaving the domain or restarting.

Sometimes you’ll find that, for any one of a multitude of reasons, a workstation’s computer account password falls out of sync with the domain – say, if a machine with the same name joins the network, if the machine has been restored from an old image, or if it’s been offline for a very long time. When you try to log on to the domain, you’ll get a message that states:


“the trust relationship between this workstation and the primary domain failed”


Now, what I would normally do in this situation is un-join and re-join the workstation to the domain. That works, but it can create a new computer account (and SID – Security Identifier) in the domain, which can break existing trusts involving that machine, and of course it requires a reboot. So if you don’t want to reboot and you don’t want to break existing trusts, do this:


From the machine with the trust problem, log on with a local administrator account (a local logon, since domain authentication is what’s failing), then use netdom.exe in a command prompt to reset the machine account password:


netdom.exe resetpwd /s:<server> /ud:<user> /pd:*

<server> = a domain controller in the joined domain

<user> = an account, in DOMAIN\User format, with rights to change the computer password

/pd:* = the * makes netdom prompt for the user’s password rather than taking it on the command line


That should do it, in *most* cases.
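For reference, on Windows 8/Server 2012 or later, PowerShell offers a rough equivalent in Reset-ComputerMachinePassword. Run it from the affected machine; DC01 is a hypothetical domain controller name, and you’ll be prompted for credentials with rights to change the computer password:

Reset-ComputerMachinePassword -Server DC01 -Credential (Get-Credential)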



Find mailboxes that are set to automatically forward email in Exchange 2010

Every time someone leaves your organisation, you’ll probably need to forward their mail to another mailbox, but over time this can get disorganised and messy. Use the below command to extract a .csv formatted table of mailboxes that have a forwarding address:

Get-Mailbox -ResultSize 6000 | Where {$_.ForwardingAddress -ne $null} | Select Name, ForwardingAddress, OrganizationalUnit, WhenCreated, WhenChanged, DeliverToMailboxAndForward | Export-Csv E:\forwardedusers.csv

I set a limit of 6000 because we have almost that many mailboxes; the limit is the number of mailboxes the command will consider, rather than the number of actual results. A tidier option is -ResultSize Unlimited, which removes the guesswork – but it’s not like you’re running this every day, so efficiency doesn’t really matter.

Once you’ve got this information, you might want to match it up with further details about the users that own these mailboxes. Use the Active Directory PowerShell module (available from Server 2008 R2, or via RSAT) to extract this information.

Fire up PowerShell on a domain controller (or remotely), and run Import-Module ActiveDirectory.

Then execute:

Get-ADUser -Filter * -SearchBase "DC=yourdomain,DC=local" -Properties SamAccountName,Description | Export-Csv c:\allusers.csv

(If you omit -Filter, you’ll be prompted with “Filter:”, where typing name -like "*" does the same job.)

Then get this data into Excel in two different worksheets.

Use the VLOOKUP tool to compare the two worksheets (in a third one), and collate the fields for the user’s name, forwarding address, and description:

In your “working” worksheet, make the first column pull the display name from the mail worksheet, then name the second column “description” (this is what I’m looking for, anyway); the third column onwards can hold any other data you’d like to show, such as OU, modified dates, or suchlike.

In the description column, enter:

=VLOOKUP(mail!A2,allusers!$D:$E,2,FALSE)

“mail” refers to the worksheet containing the data extracted from Exchange, and A2 should be the first user’s Name field (copy this downwards so that you’re looking up A3, A4, A5, etc.).

“allusers” refers to the Active Directory information worksheet – so in this case the formula attempts to match the mail A2 field against anything in column D of allusers (that being the first column in the $D:$E array), and then returns the corresponding value from column E (because I’ve specified “2”, which in my case is the description field). The FALSE at the end ensures you’re searching for an exact match.

Copy this formula down along with the list of users that have email forwarding enabled, and you’ll have a list of forwarded users along with their names, descriptions, modified dates, OUs, and any other data you like.
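If you’d rather skip Excel entirely, the same match-up can be sketched in PowerShell – this assumes the two CSV paths from the steps above, and sticks to PowerShell 2.0-friendly syntax for Exchange 2010-era boxes:

# Load both exports
$forwarded = Import-Csv E:\forwardedusers.csv
$allusers = Import-Csv C:\allusers.csv

# Index the AD users by name for quick lookups (this is the VLOOKUP equivalent)
$byName = @{}
foreach ($u in $allusers) { $byName[$u.Name] = $u }

# Join the two sets on Name and export the combined list
$forwarded | Select-Object Name, ForwardingAddress, WhenChanged,
    @{Name='Description'; Expression={ $byName[$_.Name].Description }} |
    Export-Csv E:\forwardedusers-combined.csv -NoTypeInformation

As with the VLOOKUP, this only matches where the Exchange Name and the AD Name are exactly the same.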



Today’s illegal downloaders are the entertainment industry execs of the future.

The entertainment industry has been slow to embrace the internet, and it’s fair to say that it still hasn’t got its head around a business model that enables consumers to purchase music and films at a reasonable price, online, and without heavy-handed restrictions on use.

iTunes was (and is) only successful because it’s easy to use. For people with one computer, one iPhone or iPod, and an iTunes account, it’s the easiest way in the world to purchase music and film downloads online. However, if you have more than one computer and/or more than one playback device, the DRM imposed on iTunes downloads restricts your use and enjoyment of your purchase considerably. The alternatives are to use Amazon, play.com, or another DRM-free download outlet, or simply use a P2P “illegal” download service. Frequently, thanks to the technology behind P2P/torrent downloads, that’s also the quickest way to get hold of digital media. This is patently absurd. I can think of no other product where the “free” version is easier to get hold of and easier to use than any paid-for option.

Note: I’m not endorsing illegal downloading, simply stating that if it’s as easy as, or easier than, paid-for downloads, people are going to do it. Make paid-for downloads more attractive by removing DRM, introducing an easy, more universal, very fast method of purchase and download, and looking at other ways of adding value (bonus content, concert tickets, graphics and other media), and people will pay for them.

The music and film industries simply don’t understand the new business models, or are not willing to change their own. Instead, they foist anti-piracy adverts onto rental and purchased DVDs (why?! I’ve just paid for it, after all), they add DRM to their own products, making them more difficult to use (the king of “anti-features”), they hunt down file-sharers and threaten them with court cases, and they insist on sticking to their mantra that one illegal download equals one lost sale (if you were let loose in a sweet shop and told it was all free, you’d grab more than if you were paying for the stuff, right?).

Of course, it’s not all bad. The people that are using P2P and torrent technology to acquire digital media today are the business and entertainment industry executives of the future, and they will understand this technology and these business models better than anyone in the industry at the moment. Maybe we’ll soon see a fresh wave of new businesses, new record labels, new legal download outlets, and an industry that sees its customers as valued clients, rather than a thorn in its side.