MEC 2014 wrap-up by the numbers

The MEC 2014 conference team sent out a statistical summary of the conference to speakers, and it makes for fascinating reading. I wanted to share a few of the highlights of the report because I think it makes some really interesting points about the state of the Exchange market and community.

First: the 101 sessions were attended by a total of 13,079 people. The average attendance across all sessions was 129, which is impressive (though skewed a bit by the size of some of the mega-sessions; Microsoft had to make a bet that lots of people would attend these sessions, which they did!). In terms of attendance, the top 10 sessions were mostly focused on architecture and deployment:

  • Exchange Server 2013 Architecture
  • Ready, set, deploy: Exchange Server 2013
  • Experts Unplugged: Exchange Top Issues – What are they and does anyone care or listen?
  • Exchange Server 2013 Tips & Tricks
  • The latest on High Availability & Site Resilience
  • Exchange hybrid: architecture and deployment
  • Experts Unplugged: Exchange Deployment
  • Exchange Server 2013 Transport Architecture
  • Exchange Server 2013 Virtualization Best Practices
  • Exchange Design Concepts and Best Practices

To put this in perspective, the top session on this list had just over 600 attendees and the bottom had just under 300. Overall attendance at sessions on the architecture track was about double that of the next contender, the deployment and migration track. That tells me that there is still a large audience for discussions of fundamental architecture topics, in addition to the day-in, day-out operational material that we’d normally see emerging as the mainstay of content at this point in the product lifecycle.

RS IV, not life size

Next takeaway: Tim McMichael is a rock star. He captured the #1 and #2 slots in the session ratings, which is no surprise to anyone who’s ever heard him speak. I am very hopeful that I’ll get to hear him speak at Exchange Connections this year. The overall quality of speakers was superb, in my biased opinion. I’d like to see my ratings improve (more demos!) but there’s no shame in being outranked by heavy hitters such as Tony, Michael, Jeff Mealiffe, Ross Smith IV (pictured at left; not actual size), or the ebullient Kamal Janardhan. MEC provides an excellent venue for the speakers to mingle with attendees, too, both at structured events like MAPI Hour and in unstructured post-session or hallway conversations. To me, that direct interaction is one of the most valuable parts of attending a conference, both as a speaker and because I can ask other speakers questions about their particular areas of expertise.

Third, the Unplugged sessions were very popular, as measured both by attendance numbers and session ratings. I loved both the format and content of the ones I attended, but they depend on having a good moderator— someone who is both knowledgeable about the topic at hand and experienced at steering a group of opinionated folks back on topic when needed. While I am naturally bad at that, the moderators overall did an excellent job and I hope to see more Unplugged sessions at future events.

When attendees added sessions to their calendar, the event staff used that as a means of gauging interest and assigning rooms based on the likely number of attendees. However, looking at the data shows that people flocked to sessions based on word-of-mouth and didn’t necessarily update their calendars; I calculated the attendance split by dividing the number of people who attended an actual session by the number who said they would attend. If 100 calendared the session but 50 attended, that would be a 50% split. The average split across all sessions (except one) was 53.8%— not bad considering how dynamic the attendance was. The one session I left out was “Experts Unplugged: Architecture – HA and Storage”, which had a split of 1167%! Of the top 10 splits (i.e. sessions where the largest percentage of people stood by their original plans), 4 were Unplugged sessions.

Of course, MEC was much more than the numbers, but this kind of data helps Microsoft understand what people want from future events, measured not just by asking them but by observing their actual preferences and actions. I can’t wait to see what the next event, whenever it may be, will look like!


Filed under UC&C

“What could I learn from that?”

Yesterday the boys and I were headed to the Huntsville Museum of Art, which from our house requires taking I-565 eastbound. As we approached the onramp, our progress was slowed by a large volume of backed-up traffic, interrupted by a convoy of fire engines and an ambulance. They headed west, and we eventually got on the road headed east, but not before craning our necks trying to see what the fuss was about. This sort of reaction to an accident or unusual event nearby is quite human. We are very much driven by spectacle, and often our reaction is rooted in an unhealthy curiosity.

I say that because one thing I’ve consciously tried to do as a pilot is ask myself “what could I learn from that?” when reviewing aviation accident reports. The aviation world has no shortage of well-documented accidents, ranging from the very large to the very small. Let’s leave out big-iron accidents, which are almost vanishingly rare; in the general aviation corner, we have several sources that analyze accidents or near-misses, including the annual Nall Report, the long-running “I Learned About Flying From That” and “Aftermath” columns in Flying, the NTSB accident database, and plenty more besides. So with that in mind, when I saw the headline “2013 F/A-18 crash: Out of fuel, out of time and one chance to land” in Stars and Stripes, my first thought wasn’t “cool! a jet crash!” but rather “Hmm. I wonder if there’s anything in common between flying an F-18 off a carrier and a Cessna off a 7500’ runway.”

It turns out that the answer is “yes, quite a bit.”

The article covers the chronology of an F-18 crash involving an aircraft from VFA-103 operating off EISENHOWER. During mid-air refueling (which is routine but by no means less complex or dangerous for being frequently practiced), the aerial refueling hose became entangled and broke off, damaging the refueling probe on the Super Hornet. This was serious but not immediately an emergency; the pilot was within easy diversion range of Kandahar, but elected to return to the ship because he thought that’s what the air wing commander wanted him to do. A series of issues then arose— I won’t recount them all here except to say that some of them were due to what appears to this layman to be poor systems knowledge on the part of the pilot, while others involve simple physics and aerodynamics. The article is worth reading for a complete explanation of what happened.

The jet ended up in the water; both pilot and NFO ejected safely.

What did I learn from this? Several things, which I’ll helpfully summarize:

  • The problems all started due to a mechanical failure caused by unexpected turbulence. Takeaway: no matter how good a pilot you are, you aren’t in control of the weather, the air, or the terrain around you.
  • Diverting to Kandahar would have been easy, but the pilot chose not to because he made an assumption about what his CO wanted. Two problems here: what happens when you assume and the pressures we often put on ourselves to get somewhere even when conditions call for a divert or no-go. Could I be subject to the same pressures and make a poor decision because of get-there-itis?
  • “The pilot had been staring at that probe and the attached basket for more than an hour but failed to realize its effect on the fuel pumps.” You can’t ever stop paying attention. The pilot flew for 400 miles without noticing that his fuel state wasn’t what it should have been. Could I be lulled into missing an early indication of a fuel or engine problem during a long, seemingly routine flight?
  • The aircraft was 11 miles from EISENHOWER and was ordered to divert to Masirah, 280NM away, then had to turn back to the ship 24 minutes later. The pilot didn’t decide this; a rear admiral on the ship did. The article didn’t say whether the pilot questioned or argued with that decision. In the civil aviation world, the pilot in command of an aircraft “is directly responsible for, and is the final authority as to, the operation of that aircraft.” I imagine there’s something similar in military aviation; even if not, I’d rather be arguing with the admiral on the deck than having him meet me after the plane guard fishes me out of the water. Would I have the courage to make a similar decision against the advice of ATC or some other authority?
  • In at least two instances the pilot made critical decisions— including the decision to eject the crew— without communicating them to his NFO. NASA and the FAA lean very heavily on the importance of crew resource management, in part because of situations like Asiana 214, United 173, and American 965. (Look ‘em up if you need to.) When I fly, am I seeking appropriate input from other pilots and ATC? Do I give their input proper consideration?
I don’t mean for this post to sound like armchair quarterbacking. I wasn’t there, and if I had been I’d probably be dead because, despite years of fantasizing to the contrary, I’m not a fighter pilot. However, I am a very firm believer in learning from the mistakes of others so I don’t make the same mistakes myself, and I think there’s a lot to learn from this incident.


Filed under Aviation

Microsoft updates Recoverable Items quota for Office 365 users

Remember when I posted about the 100GB limit for Personal Archive mailboxes in Office 365? It turns out that there was another limit that almost no one knew about, primarily because it involves mailbox retention. As of today, when you put an Office 365 mailbox on In-Place Hold, the size of the Recoverable Items folder is capped at 30GB. This is plenty for the vast majority of customers because a) not many customers use In-Place Hold in the first place and b) not many users have mailboxes that are large enough to exceed the 30GB quota. Multiply two small numbers together and you get another small number.
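If you’re wondering whether any of your users are bumping up against that cap, both the quota and the current size of the Recoverable Items folder are visible from remote PowerShell. Here’s a minimal sketch, assuming you’re already connected to Exchange Online and substituting a real mailbox for my placeholder:

```powershell
# Sketch: check hold status, the configured Recoverable Items quota, and how much
# of it a mailbox is actually using. "aphrodite@contoso.com" is a placeholder.
$mbx = Get-Mailbox -Identity "aphrodite@contoso.com"
$mbx | Format-List InPlaceHolds, LitigationHoldEnabled, RecoverableItemsQuota, RecoverableItemsWarningQuota

# Per-folder view of what's actually stored under Recoverable Items
Get-MailboxFolderStatistics -Identity "aphrodite@contoso.com" -FolderScope RecoverableItems |
    Select-Object Name, ItemsInFolder, FolderAndSubfolderSize
```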

However, there are some customers for whom this is a problem. One of the most interesting things about Office 365 to me is the speed at which Microsoft can respond to their requests by changing aspects of the service architecture and provisioning. In this case, the Exchange team is planning to increase the size of the Recoverable Items quota to 100GB. Interestingly, they’re actually starting by increasing the quota for user mailboxes that are now on hold— so from now until July 2014, they’ll be silently increasing the quota for those users. If you put a user on hold today, however, their quota may not be set to 100GB until sometime later.

If you need an immediate quota increase, or if you’re using a dedicated tenant, you’ll still have to use the existing mechanism of filing a support ticket to have the quota increased.

There’s no public post on this yet, but I expect one shortly. In the meantime, bask in the knowledge that with a 50GB mailbox, 100GB Personal Archive, and 100GB Recoverable Items quota, your users probably aren’t going to run out of mailbox space any time soon.


Filed under Office 365, UC&C

Two-factor authentication for Outlook and Office 2013 clients

I don’t usually put on my old man hat, but indulge me for a second. Back in February 2000, in my long-forgotten column for TechNet, here’s what I said about single-factor passwords:

I’m going to let you in on a secret that’s little discussed outside the security world: reusable passwords are evil.

I stand by the second half of that statement: reusable passwords are still evil, 14 years later, but at least the word is getting out, and multi-factor authentication is becoming more and more common in both consumer and business systems. I was wrong when I assumed that smart cards would become ubiquitous as a second authentication factor; instead, the “something you have” role is increasingly often filled by a mobile phone that can receive SMS messages. Microsoft bought into that trend with their 2012 purchase of PhoneFactor, which is now integrated into Azure. Now Microsoft is extending MFA support into Outlook and the rest of the Office 2013 client applications, with a few caveats. I attended a great session at MEC 2014, presented by Microsoft’s Erik Ashby and Franklin Williams, that covered both the current state of Office 365-integrated MFA and Microsoft’s plans to extend MFA to Outlook.

First, keep in mind that Office 365 already offers multi-factor authentication, once you enable it, for your web-based clients. You can use SMS-based authentication, have the service call you via phone, or use a mobile app that generates authentication codes, and you can define “app passwords” that are used instead of your primary credentials for applications— like Outlook, as it happens— that don’t currently understand MFA. You have to enable MFA for your tenant, then enable it for individual users. All of these services are included with Office 365 SKUs, and they rely on the Azure MFA service. You can, if you wish, buy a separate subscription to Azure MFA if you want additional functionality, like the ability to customize the caller ID that appears when the service calls your users.
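For reference, per-user enablement can also be scripted through the MSOnline (Azure AD) PowerShell module. The sketch below is from memory rather than from the session, so treat the cmdlet and type names as assumptions to verify against the documentation:

```powershell
# Hedged sketch: enable per-user MFA via the MSOnline module.
# Cmdlet/type names are as I recall them; verify before using in production.
Import-Module MSOnline
Connect-MsolService

$mfa = New-Object -TypeName Microsoft.Online.Administration.StrongAuthenticationRequirement
$mfa.RelyingParty = "*"
$mfa.State = "Enabled"

# "ckent@contoso.com" is a placeholder identity
Set-MsolUser -UserPrincipalName "ckent@contoso.com" -StrongAuthenticationRequirements @($mfa)
```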

With that said, here’s what Erik and Franklin talked about…

To start with, we have to distinguish between the three types of identities that can be used to authenticate against the service. Without going into every detail, it’s fair to summarize these as follows:

  • Cloud identities are homed in Azure Active Directory (AAD). There’s no synchronization with on-premises AD because there isn’t one.
  • Directory sync (or just “dirsync”) uses Microsoft’s dirsync tool, or an equivalent third-party tool, to sync an on-premises account with AAD. This essentially gives services that consume AAD a mostly-read-only copy of your organization’s AD.
  • Federated identity uses a federation broker or service such as Active Directory Federation Services (AD FS), Okta, Centrify, and Ping to allow your organization’s AD to answer authentication queries from Office 365 services. In January 2014 Microsoft announced a “Works With Office 365 – Identity” logo program, so if you don’t want to use AD FS you can choose another federation toolset that better meets your requirements.

Client updates are coming to the Office 2013 clients: Outlook, Lync, Word, Excel,  PowerPoint, and SkyDrive Pro. With these updates, you’ll see a single unified authentication window for all of the clients, similar (but not necessarily identical) to the existing login window you get on Windows when signing into a SkyDrive or SkyDrive Pro library from within an Office client. From that authentication window, you’ll be able to enter the second authentication factor that you received via phone call, SMS, or authentication app. During the presentation, Franklin (or maybe Erik?) said “if you can authenticate in a web browser, you can authenticate in Office clients”— very cool. (PowerShell will be getting MFA support too, but it wasn’t clear to me exactly when that was happening).

These client updates will also provide support for two specific types of smart cards: the US Department of Defense Common Access Card (CAC) and the similar-but-civilian Personal Identity Verification (PIV) card. Instead of using a separate authentication token provided by the service, you’ll plug in your smart card, authenticate to it with your PIN, and away you go.

All three of these identity types support MFA; federated identity will gain the ability to do true single sign-on (SSO) in Office 2013 clients, which will be a welcome usability improvement. Outlook will get SSO capabilities with the other two identity types, too.

How do the updates work? That’s where the magic part comes in. The Azure Active Directory Authentication Library (ADAL) is being extended to provide support for MFA. When the Office client makes a request to the service, the service returns a header that instructs the client to visit a security token service (STS) using OAuth. At that point, Office uses ADAL to launch the browser control that displays the authentication page; then, as Erik puts it, “MFA and federation magic happens transparent to Office.” If the authentication succeeds, Office gets security tokens that it caches and uses for service authentication. (The flow is described in more detail in the video from the session, which is available now for MEC attendees and will be available in 60 days or so for non-attendees.)
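You can see the first step of that handshake for yourself: an unauthenticated request to an Office 365 endpoint comes back with a 401 and a WWW-Authenticate challenge. This little probe is purely illustrative (the endpoint URL is my guess, not something Erik showed):

```powershell
# Illustrative only: the 401 is expected; the interesting part is the challenge
# header, which tells an OAuth-aware client where to go for tokens.
try {
    Invoke-WebRequest -Uri "https://outlook.office365.com/EWS/Exchange.asmx" -UseBasicParsing | Out-Null
}
catch {
    $_.Exception.Response.Headers["WWW-Authenticate"]
}
```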

There are two important caveats that were a little buried in the presentation. First, MFA in Outlook 2013 will require the use of MAPI/HTTP. More seriously, MFA will not be available to on-premises Exchange 2013 deployments until some time in the future. This aligns with Microsoft’s cloud-first strategy, but it is going to aggravate on-premises customers something fierce. In fairness, because you need the MFA infrastructure hosted in the Microsoft cloud to take advantage of this feature, I’m not sure there’s a feasible way to deliver SMS- or voice-based MFA for purely on-premises environments; if you’re running hybrid, you’re good to go.

Microsoft hasn’t announced a specific timeframe for these updates (other than “second half calendar 2014”), and they didn’t say anything about Mac support, though I would imagine that the rumored v.next of Mac Office would provide this same functionality. The ability to use MFA across all the Office client apps will make it easier for end users, reducing the chance that they’ll depend solely on reusable passwords and thus reducing the net amount of evil in the world— a blessing to us all.


Filed under Office 365, UC&C

Script to download MEC 2014 presentations

Yay for code reuse! Tom Arbuthnot wrote a nifty script to download all the Lync Conference 2014 presentations, and since Microsoft used the same event management system for MEC 2014, I grabbed his script and tweaked it so that it will download the MEC 2014 session decks and videos. It only works if you are able to sign into the MyMEC site, as only attendees can download the presentations and videos at this time. I can’t guarantee that the script will pull all the sessions but it seems to be working so far— give it a try. (And remember, the many “Unplugged” sessions weren’t recorded so you won’t see any recordings or decks for them). If the script works, thank Tom; if it doesn’t, blame me.
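If you’re curious about the general shape of the approach (this is not Tom’s code, just a hypothetical sketch with made-up file and column names), it boils down to a simple download loop:

```powershell
# Hypothetical sketch of the general approach: given a list of session titles and
# URLs (faked here as a CSV with Title and Url columns), download each file locally.
$target = ".\MEC2014"
New-Item -ItemType Directory -Path $target -Force | Out-Null

$sessions = Import-Csv ".\mec-sessions.csv"   # assumed columns: Title, Url
foreach ($session in $sessions) {
    $safeName    = $session.Title -replace '[\\/:*?"<>|]', '_'   # strip illegal filename characters
    $extension   = [System.IO.Path]::GetExtension($session.Url)
    $destination = Join-Path $target ($safeName + $extension)
    Invoke-WebRequest -Uri $session.Url -OutFile $destination
}
```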

Download the script


Filed under UC&C

The value of lagged copies for Exchange 2013

Let’s talk about… lagged copies.

For most Exchange administrators, the subject of lagged database copies falls somewhere between “the Kardashians’ shoe sizes” and “which of the 3 Stooges was the funniest” in terms of interest level. The concept is easy enough to understand: a lagged copy is merely a passive copy of a mailbox database where the log files are not immediately played back, as they are with ordinary passive copies. The period between the arrival of a log file and the time when it’s committed to the database is known as the lag interval. If you have a lag interval of 24 hours set on a database, a new log for that database generated at 3pm on April 4th won’t be played into the lagged copy until at least 3pm on April 5th (I say “at least” because the exact time of playback will depend on the copy queue length). The longer the lag interval, the more “distance” there is between the active copy of the mailbox database and the lagged copy.
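Setting up a lagged copy is just a normal Add-MailboxDatabaseCopy call with a replay lag specified. The database and server names below are placeholders, so adjust for your own DAG; think of this as a sketch rather than a recipe:

```powershell
# Sketch with made-up names: add a copy of DB1 to server MBX4 with a 24-hour replay lag
Add-MailboxDatabaseCopy -Identity "DB1" -MailboxServer "MBX4" -ReplayLagTime 1.00:00:00 -ActivationPreference 4

# Check the configured lag on all copies of the database
Get-MailboxDatabase "DB1" | Format-List ReplayLagTimes, TruncationLagTimes

# Adjusting the lag on an existing copy works the same way
Set-MailboxDatabaseCopy -Identity "DB1\MBX4" -ReplayLagTime 2.00:00:00
```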

Lagged copies are intended as a last-ditch “goalkeeper” safety mechanism in case of logical corruption. Physical corruption caused by a hardware failure happens after Exchange has handed the data off to be written, so it won’t be replicated. Logical corruption introduced by components other than Exchange (say, an improperly configured file-level AV scanner) that write directly to the MDB or transaction log files wouldn’t be replicated in any event, so the real use case for the lagged copy is to give you a window in time during which logical corruption caused by Exchange or its clients hasn’t yet been replicated to the lagged copy. Obviously the size of this window depends on the length of the lag interval, and whether it is sufficient for you to a) notice that the active database has become corrupted, b) play the accumulated logs forward into the lagged copy, and c) activate the lagged copy depends on your environment.

The prevailing sentiment in the Exchange world has largely been “I do backups already so lagged copies don’t give me anything.” When Exchange 2010 first introduced the notion of a lagged copy, Tony Redmond weighed in on it. Here’s what he said back then:

For now, I just can’t see how I could recommend the deployment of lagged database copies.

That seems like a reasonable stance, doesn’t it? At MEC this year, though, Microsoft came out swinging in defense of lagged copies. Why would they do that? Why would you even think of implementing lagged copies? It turns out that there are some excellent reasons that aren’t immediately apparent. (It may help to review some of the resiliency and HA improvements delivered in Exchange 2013; try this excellent omnibus article by Microsoft’s Scott Schnoll if you want a refresher.) Here are some of the reasons why Microsoft has begun recommending the use of lagged copies more broadly.

1. Lagged copies are better in 2013

Exchange 2013 includes a number of improvements to the lagged copy mechanism. In particular, the new loose truncation feature introduced in SP1 means that you can prevent a lagged copy from taking up too much log space by adjusting the amount of log space that the replay mechanism will use; when that limit is reached, the logs are played down to make room. Exchange 2013 (and SP1) also make a number of improvements to the Safety Net mechanism (discussed fully in Chapter 2 of the book), which can be used to play missing messages back into a lagged copy by retrieving them from the transport subsystem.
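One practical corollary: if you want to be able to play messages back from Safety Net into a lagged copy, the transport hold time needs to be at least as long as your longest replay lag. A quick hedged example (the one-day value is just to match the 24-hour lag used earlier):

```powershell
# Safety Net can only re-deliver what it still holds, so the hold time should be at
# least as long as the longest ReplayLagTime in use (one day in this sketch).
Get-TransportConfig | Format-List SafetyNetHoldTime, ShadowRedundancyEnabled
Set-TransportConfig -SafetyNetHoldTime 1.00:00:00
```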

2. Lagged copies are continuously verified

When you back up a database, Exchange verifies every page as it is backed up by computing its checksum and comparing it to the stored checksum; if that check fails, you get the dreaded JET_errReadVerifyFailure (-1018) error. However, just because you can successfully complete the backup doesn’t mean that you’ll be able to restore it when the time comes. By comparison, the log playback mechanism on a lagged copy logs errors immediately when they are encountered. If you’re monitoring event logs on your servers, you’ll be notified as soon as this happens and you’ll know that your lagged copy is unusable now, not when you need to restore it. If you’re not monitoring your event logs, then lagged copies are the least of your problems.

3. Lagged copies give you more flexibility for recovery

When your active and passive copies of a database become unusable and you need to fall back to your lagged copy, you have several choices, as described in TechNet. You can easily play back every log that hasn’t yet been committed to the database, in the correct order, by using Move-ActiveMailboxDatabase. If you’d rather, you can play back the logs up to a certain point in time by removing the log files that you don’t want to play back. You can also play messages back directly from Safety Net into the lagged copy.
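As a rough illustration of the simplest case (full log replay), activation looks something like the sketch below; the database and server names are placeholders, and for a point-in-time recovery you would first suspend the copy and set aside the logs you don’t want replayed:

```powershell
# Sketch: activate the lagged copy of DB4 on server MBX3, letting Exchange replay
# every outstanding log in order. SkipLagChecks permits activation of a copy whose
# replay queue is (by design) well behind the active copy.
Move-ActiveMailboxDatabase -Identity "DB4" -ActivateOnServer "MBX3" -SkipLagChecks -MountDialOverride:BestEffort

# Confirm the copy mounted and the replay queue is draining
Get-MailboxDatabaseCopyStatus -Identity "DB4\MBX3" | Format-List Status, ContentIndexState, ReplayQueueLength
```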

4. There’s no hardware penalty for keeping a lagged copy

Some administrators assume that you have to keep lagged copies of databases on a separate server. While this is certainly supported, you don’t have to have a “lag server” or anything like unto it. The normal practice in most designs has been to store lagged copies on other servers in the same DAG, but you don’t even have to do that. Microsoft recommends that you keep your mailbox databases no bigger than 2TB. Stuff your server with a JBOD array of the new 8TB disks (or, better yet, buy a Dell PowerVault MD1220) and you can easily put four databases on a single disk: the active copy of DB1, the primary passive copy of DB2, the secondary passive copy of DB3, and the lagged copy of DB4. This gives you an easy way to get the benefits of a 4-copy DAG while still using the full capacity of the disks you have: the additional IOPS load of the lagged copy will be low, so hosting it on a volume that already has active and passive copies of other databases is a reasonable approach (one, however, that you’ll want to test with jetstress).

It’s always been the case that the architecture Microsoft recommends when a new version of Windows or Exchange is released evolves over time as they, and we, get more experience with it in the real world. That’s clearly what has happened here; changes in the product, improvements in storage hardware, and a shift in the economic viability of conventional backups mean that lagged copies are now much more appropriate for use as a data protection mechanism than they were in the past. I expect to see them deployed more and more often as Exchange 2013 deployments continue and our collective knowledge of best practices for them improves.


Filed under UC&C

MEC 2014 wrapup

BLUF: it was a fantastic conference, far and away the best MEC I’ve attended. The quality of the speakers and technical presentations was very high, and the degree of community interaction and engagement was too.

I arrived in Austin Sunday afternoon and went immediately to dinner at County Line on the Lake, a justly famous Austin BBQ restaurant, to put on a “thank you” dinner for some of the folks who helped me with my book. Unfortunately, the conference staff had scheduled a speakers’ meeting at the same time, and a number of folks couldn’t attend due to flight delays or other last-minute intrusions. Next time I’ll poll invitees for their preferred time, and perhaps that will help. However, the dinner and company were both excellent, and I now have a copy of the book signed by all in attendance as a keepsake— a nice reversal of my usual pattern of signing books and giving them away.

Monday began with the keynote. If you follow me (or any number of other Exchange MVPs) on Twitter, you already know what I think: neither the content of the keynote nor its delivery was up to snuff when compared either to prior MEC events or to other events such as Lync Conference. At breakfast Monday, Jason Sherry and I were excitedly told by an attendee that his Microsoft account rep insisted that he attend the keynote, and for the life of me I couldn’t figure out why until the tablet giveaway. That raised the energy level quite a bit! I think that for the next MEC, Julia White should be handed the gavel and left to run the keynote as she sees fit; I can guarantee that would result in a more lively and informative event. (For another time: a review of the Venue 8 Pro, which I like a great deal based on my use of it so far.) One area where the keynote excelled, though, was in its use of humor. The video vignette featuring Greg Taylor and David Espinoza was one of the funniest such bits I’ve ever seen, and all of the other segments were good as well— check them out here. The keynote also featured a few good-natured pokes at the community, such as this:

Ripped

For the record, although I’ve been lifting diligently, I am not (yet) built like the guy who’s wearing my face on screen… but there’s hope.

I took detailed notes on each of the sessions I attended, so I’ll be posting about the individual sessions over the next few days. It’s fair to say that I learned several valuable things at each session, which is sort of the point behind MEC. I found that the quality of the “unplugged” sessions I attended varied a bit; the worst was merely OK, while the best (probably the one on Managed Availability) was extremely informative. It’s interesting that Tony and I seemed to choose very few of the same sessions, so his write-ups and mine will largely complement each other.

My Monday schedule started with Kamal Janardhan’s session on compliance and information protection. Let me start by saying that Kamal is one of my favorite Microsoft people ever. She is unfailingly cheerful, and she places a high value on transparency and openness. When she asks for feedback on product features or futures, it’s clear that she is sincerely seeking honest feedback, not just saying it pro forma. Her session was great; from there, I did my two back-to-back sessions, both of which went smoothly. I was a little surprised to see a nearly-full room (I think there were around 150 people) for my UM session, and even more surprised to see that nearly everyone in the room had already deployed UM on either Exchange 2010 or 2013. That’s a significant change from the percentage of attendees deploying UM at MEC 2012. I then went to the excellent “Unplugged” session on “Exchange Top Issues”, presented by the supportability team and moderated by Tony.

After the show closed for the day, I was fortunate to be able to attend the dinner thrown by ENow Software for MVPs/MCMs and some of their key customers. Jay and Jess Gundotra, as always, were exceptional hosts, the meal (at III Forks) was excellent, and the company and conversation were delightful. Sadly I had to go join a work conference call right after dinner, so I missed the attendee party.

Tuesday started with a huge surprise. On my way to the “Exchange Online Migrations Technical Deep Dive” session (which was good but not great; it wasn’t as deep as I expected), I noticed the picture below flashing on the hallway screens. Given that it was April Fool’s Day, I wasn’t surprised to see the event planners playing jokes on attendees; I just wasn’t expecting to be featured as part of their plans. Sadly, although I’m happy to talk to people about migrating to Office 365, the FAA insists that I do it on the ground and not in the air.

For lunch, I had the good fortune to join a big group of other Dell folks (including brand-new MVP Andrew Higginbotham, MCM Todd Hawkins, Michael Przytula, and a number of people from Dell Software I’d not previously met) at Iron Works BBQ. The food and company were both wonderful, and they were followed by a full afternoon of excellent sessions. The highlight of my sessions on Tuesday was probably Charlie Chung’s session on Managed Availability, which was billed as a 300-level session but was more like a 1300-level. I will definitely have to watch the recording a few times to make sure I didn’t miss any of the nuances.

Surprise!

This is why I need my commercial pilot’s license— so I can conduct airborne sessions at the next MEC.

Tony has already written at length about the “Exchange Oscars” dinner we had Tuesday night at Moonshine. I was surprised and humbled to be selected to receive the “Hall of Fame” award for sustained contributions to the Exchange community; I feel like there are many other MVPs, current and past, who deserve the award at least as much, if not more. It was great to be among so many friends spanning my more than 15 years working with Exchange; the product group turned out en masse, and the conversation, fellowship, and celebration were the high point of the entire conference for me. I want to call out Shawn McGrath, who received the “Best Tool” award for the Exchange Remote Connectivity Analyzer, which became TestExchangeConnectivity.com. Shawn took a good idea and relentlessly drove it from conception to implementation, and the whole world of Exchange admins has benefited from his effort.

Wednesday started with the best “Unplugged” session I attended: it covered Managed Availability and, unlike the other sessions I went to, featured a panel made up mostly of engineers from the development team. There were a lot of deep technical questions and a number of pointed roadmap discussions (not all of which were at my instigation). The most surprising session I attended, I think, was the session on updates to Outlook authentication— turns out that true single sign-on (SSO) is coming to all the Office 2013 client applications, and fairly soon, at least for Office 365 customers. More on that in my detailed session write-ups. The MVPs were also invited to a special private session with Perry Clarke. I can’t discuss most of what we talked about, but I can say that I learned about the CAP theorem (which hadn’t even been invented when I got my computer science degree, sigh), and that Perry recognizes the leadership role Exchange engineering has played in bringing Microsoft’s server products to high scale. Fun stuff!

Then I flew home: my original flight was delayed so they put me on one leaving an hour earlier. The best part of the return trip might have been flying on one of American’s new A319s to Huntsville. These planes are a huge improvement over the nasty old MD80s that AA used to fly DFW-HSV, and they’re nicer than DL’s ex-AirTran 717s to boot. So AA is still in contention for my westbound travel business.

A word about the Hilton Austin Downtown, the closest hotel to the conference center: their newly refurbished rooms include a number of extremely practical touches. There’s a built-in nightlight in the bathroom light switch, and each bedside table features its own 3-outlet power strip plus a USB port, and the work desk has its own USB charging ports as well. Charging my phone, Kindle, Venue 8 Pro, and backup battery was much simpler thanks to the plethora of outlets. The staff was unfailingly friendly and helpful too, which is always welcome. However, the surrounding area seemed to have more than its share of sirens and other loud noises; next time I might pick a hotel a little farther away.

I’ll close by saying how much I enjoyed seeing old friends and making new ones at this conference. I don’t have room (or a good enough memory) to make a comprehensive list, but to everyone who took the time to say hello in the hall, ask good questions in a session, wave at me across the expo floor, or pass the rolls at dinner— thank you.

Now to get ready for TechEd and Exchange Connections…


Filed under UC&C