Category Archives: UC&C

MEC 2014 wrap-up by the numbers

The MEC 2014 conference team sent out a statistical summary of the conference to speakers, and it makes for fascinating reading. I wanted to share a few of the highlights of the report because I think it makes some really interesting points about the state of the Exchange market and community.

First: the 101 sessions were attended by a total of 13,079 people. The average attendance across all sessions was 129, which is impressive (though skewed a bit by the size of some of the mega-sessions; Microsoft had to make a bet that lots of people would attend these sessions, which they did!). In terms of attendance, the top 10 sessions were mostly focused on architecture and deployment:

  • Exchange Server 2013 Architecture
  • Ready, set, deploy: Exchange Server 2013
  • Experts Unplugged: Exchange Top Issues – What are they and does anyone care or listen?
  • Exchange Server 2013 Tips & Tricks
  • The latest on High Availability & Site Resilience
  • Exchange hybrid: architecture and deployment
  • Experts Unplugged: Exchange Deployment
  • Exchange Server 2013 Transport Architecture
  • Exchange Server 2013 Virtualization Best Practices
  • Exchange Design Concepts and Best Practices
RS IV, not life size

To put this in perspective, the top session on this list had just over 600 attendees and the bottom had just under 300. Overall attendance at sessions on the architecture track was about double that of the next contender, the deployment and migration track. That tells me that there is still a large audience for discussions of fundamental architecture topics, in addition to the day-in, day-out operational material that we’d normally see emerging as the mainstay of content at this point in the product lifecycle.

Next takeaway: Tim McMichael is a rock star. He captured the #1 and #2 slots in the session ratings, which is no surprise to anyone who’s ever heard him speak. I am very hopeful that I’ll get to hear him speak at Exchange Connections this year. The overall quality of speakers was superb, in my biased opinion. I’d like to see my ratings improve (more demos!) but there’s no shame in being outranked by heavy hitters such as Tony, Michael, Jeff Mealiffe, Ross Smith IV (pictured at left; not actual size), or the ebullient Kamal Janardhan. MEC provides an excellent venue for speakers to mingle with attendees, too, both at structured events like MAPI Hour and in unstructured post-session or hallway conversations. To me, that direct interaction is one of the most valuable parts of attending a conference, both as a speaker and because I can ask other speakers questions about their particular areas of expertise.

Third, the Unplugged sessions were very popular, as measured both by attendance numbers and session ratings. I loved both the format and content of the ones I attended, but they depend on having a good moderator— someone who is both knowledgeable about the topic at hand and experienced at steering a group of opinionated folks back on topic when needed. While I am naturally bad at that, the moderators overall did an excellent job and I hope to see more Unplugged sessions at future events.

When attendees added sessions to their calendar, the event staff used that as a means of gauging interest and assigning rooms based on the likely number of attendees. However, looking at the data shows that people flocked to sessions based on word-of-mouth and didn’t necessarily update their calendars; I calculated the attendance split by dividing the number of people who attended an actual session by the number who said they would attend. If 100 calendared the session but 50 attended, that would be a 50% split. The average split across all sessions (except one) was 53.8%— not bad considering how dynamic the attendance was. The one session I left out was “Experts Unplugged: Architecture – HA and Storage”, which had a split of 1167%! Of the top 10 splits (i.e. sessions where the largest percentage of people stood by their original plans), 4 were Unplugged sessions.

Of course, MEC was much more than the numbers, but this kind of data helps Microsoft understand what people want from future events, measured not just by asking them but by observing their actual preferences and actions. I can’t wait to see what the next event, whenever it may be, will look like!


Microsoft updates Recoverable Items quota for Office 365 users

Remember when I posted about the 100GB limit for Personal Archive mailboxes in Office 365? It turns out that there was another limit that almost no one knew about, primarily because it involves mailbox retention. As of today, when you put an Office 365 mailbox on In-Place Hold, the size of the Recoverable Items folder is capped at 30GB. This is plenty for the vast majority of customers because a) not many customers use In-Place Hold in the first place and b) not many users have mailboxes that are large enough to exceed the 30GB quota. Multiply two small numbers together and you get another small number.

However, there are some customers for whom this is a problem. One of the most interesting things about Office 365 to me is the speed at which Microsoft can respond to their requests by changing aspects of the service architecture and provisioning. In this case, the Exchange team is planning to increase the size of the Recoverable Items quota to 100GB. Interestingly, they’re starting with user mailboxes that are already on hold: from now until July 2014, they’ll be silently increasing the quota for those users. If you put a user on hold today, however, their quota may not be set to 100GB until sometime later.
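
If you want to see where a given mailbox stands today, a quick check from Exchange Online remote PowerShell looks something like this (a minimal sketch; the mailbox name is a placeholder):

    # Show the hold status and the current Recoverable Items quotas for one mailbox
    Get-Mailbox jsmith | Format-List LitigationHoldEnabled, InPlaceHolds, RecoverableItemsQuota, RecoverableItemsWarningQuota

    # Show how much of that quota the Recoverable Items folder is actually using
    Get-MailboxFolderStatistics jsmith -FolderScope RecoverableItems |
        Select-Object Name, FolderAndSubfolderSize, ItemsInFolderAndSubfolders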

If you need an immediate quota increase, or if you’re using a dedicated tenant, you’ll still have to use the existing mechanism of filing a support ticket to have the quota increased.

There’s no public post on this yet, but I expect one shortly. In the meantime, bask in the knowledge that with a 50GB mailbox, 100GB Personal Archive, and 100GB Recoverable Items quota, your users probably aren’t going to run out of mailbox space any time soon.


Two-factor authentication for Outlook and Office 2013 clients

I don’t usually put on my old man hat, but indulge me for a second. Back in February 2000, in my long-forgotten column for TechNet, here’s what I said about single-factor passwords:

I’m going to let you in on a secret that’s little discussed outside the security world: reusable passwords are evil.

I stand by the second half of that statement: reusable passwords are still evil, 14 years later, but at least the word is getting out, and multi-factor authentication is becoming more and more common in both consumer and business systems. I was wrong when I assumed that smart cards would become ubiquitous as a second authentication factor; instead, the “something you have” role is increasingly often filled by a mobile phone that can receive SMS messages. Microsoft bought into that trend with their 2012 purchase of PhoneFactor, which is now integrated into Azure. Now Microsoft is extending MFA support into Outlook and the rest of the Office 2013 client applications, with a few caveats. I attended a great session at MEC 2014, presented by Microsoft’s Erik Ashby and Franklin Williams, that covered both the current state of Office 365-integrated MFA and Microsoft’s plans to extend MFA to Outlook.

First, keep in mind that Office 365 already offers multi-factor authentication, once you enable it, for your web-based clients. You can use SMS-based authentication, have the service call you via phone, or use a mobile app that generates authentication codes, and you can define “app passwords” that are used instead of your primary credentials for applications— like Outlook, as it happens— that don’t currently understand MFA. You have to enable MFA for your tenant, then enable it for individual users. All of these services are included with Office 365 SKUs, and they rely on the Azure MFA service. You can, if you wish, buy a separate subscription to Azure MFA if you want additional functionality, like the ability to customize the caller ID that appears when the service calls your users.
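
If you’d rather script the per-user enablement than click through the portal, the MSOnline module can do it. This is a minimal sketch based on my understanding of the documented approach; the UPN is a placeholder, so verify the object type and property names against the current MSOnline documentation before relying on it:

    # Requires the MSOnline (Azure AD) PowerShell module
    Import-Module MSOnline
    Connect-MsolService

    # Build a requirement object that turns MFA on for all relying parties
    $mfa = New-Object -TypeName Microsoft.Online.Administration.StrongAuthenticationRequirement
    $mfa.RelyingParty = "*"
    $mfa.State = "Enabled"

    # Enable MFA for a single user
    Set-MsolUser -UserPrincipalName jsmith@contoso.com -StrongAuthenticationRequirements @($mfa)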

With that said, here’s what Erik and Franklin talked about…

To start with, we have to distinguish between the three types of identities that can be used to authenticate against the service. Without going into every detail, it’s fair to summarize these as follows:

  • Cloud identities are homed in Azure Active Directory (AAD). There’s no synchronization with on-premises AD because there isn’t one.
  • Directory sync (or just “dirsync”) uses Microsoft’s dirsync tool, or an equivalent third-party tool, to sync an on-premises account with AAD. This essentially gives services that consume AAD a mostly-read-only copy of your organization’s AD.
  • Federated identity uses a federation broker or service such as Active Directory Federation Services (AD FS), Okta, Centrify, and Ping to allow your organization’s AD to answer authentication queries from Office 365 services. In January 2014 Microsoft announced a “Works With Office 365 – Identity” logo program, so if you don’t want to use AD FS you can choose another federation toolset that better meets your requirements.

Client updates are coming to the Office 2013 clients: Outlook, Lync, Word, Excel, PowerPoint, and SkyDrive Pro. With these updates, you’ll see a single unified authentication window for all of the clients, similar (but not necessarily identical) to the existing login window you get on Windows when signing into a SkyDrive or SkyDrive Pro library from within an Office client. From that authentication window, you’ll be able to enter the second authentication factor that you received via phone call, SMS, or authentication app. During the presentation, Franklin (or maybe Erik?) said “if you can authenticate in a web browser, you can authenticate in Office clients”— very cool. (PowerShell will be getting MFA support too, but it wasn’t clear to me exactly when that was happening.)

These client updates will also provide support for two specific types of smart cards: the US Department of Defense Common Access Card (CAC) and the similar-but-civilian Personal Identity Verification (PIV) card. Instead of using a separate authentication token provided by the service, you’ll plug in your smart card, authenticate to it with your PIN, and away you go.

All three of these identity types support MFA; federated identity will gain the ability to do true single sign-on (SSO) in Office 2013 clients, which will be a welcome usability improvement. Outlook will get SSO capabilities with the other two identity types, too.

How do the updates work? That’s where the magic part comes in. The Azure Active Directory Authentication Library (ADAL) is being extended to provide support for MFA. When the Office client makes a request to the service, the service returns a header that instructs the client to visit a security token service (STS) using OAuth. At that point, Office uses ADAL to launch the browser control that displays the authentication page; then, as Erik puts it, “MFA and federation magic happens transparent to Office.” If the authentication succeeds, Office gets security tokens that it caches and uses for service authentication. (The flow is described in more detail in the video from the session, which is available now for MEC attendees and will be available in 60 days or so for non-attendees.)

There are two important caveats that were a little buried in the presentation. First is that MFA in Outlook 2013 will require the use of MAPI/HTTP. More seriously, MFA will not be available to on-premises Exchange 2013 deployments until some time in the future. This aligns with Microsoft’s cloud-first strategy, but it is going to aggravate on-premises customers something fierce. In fairness, because you need the MFA infrastructure hosted in the Microsoft cloud to take advantage of this feature, I’m not sure there’s a feasible way to deliver SMS- or voice-based MFA for purely on-prem environments, and if you’re in a hybrid, then you’re good to go.
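
Since MAPI/HTTP is a prerequisite, admins running Exchange 2013 SP1 on-premises or in hybrid who want to be ready can check and flip the org-wide switch now (a sketch; test first, since clients pick up the change via their next Autodiscover refresh):

    # Is MAPI/HTTP enabled for the org?
    Get-OrganizationConfig | Format-List MapiHttpEnabled

    # Enable it org-wide
    Set-OrganizationConfig -MapiHttpEnabled $true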

Microsoft hasn’t announced a specific timeframe for these updates (other than “second half calendar 2014”), and they didn’t say anything about Mac support, though I would imagine that the rumored v.next of Mac Office would provide this same functionality. The ability to use MFA across all the Office client apps will make it easier for end users, reducing the chance that they’ll depend solely on reusable passwords and thus reducing the net amount of evil in the world— a blessing to us all.


Script to download MEC 2014 presentations

Yay for code reuse! Tom Arbuthnot wrote a nifty script to download all the Lync Conference 2014 presentations, and since Microsoft used the same event management system for MEC 2014, I grabbed his script and tweaked it so that it will download the MEC 2014 session decks and videos. It only works if you are able to sign into the MyMEC site, as only attendees can download the presentations and videos at this time. I can’t guarantee that the script will pull all the sessions but it seems to be working so far— give it a try. (And remember, the many “Unplugged” sessions weren’t recorded so you won’t see any recordings or decks for them). If the script works, thank Tom; if it doesn’t, blame me.

Download the script
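
If you’re curious about the approach, the heart of the script is just a download loop like the sketch below. The session codes and URL here are hypothetical, and I’ve left out the authentication against the MyMEC site that the real script handles, so treat this as an illustration only:

    # Hypothetical session codes and base URL; the real script harvests these
    # after signing in to MyMEC and carries the authenticated session along
    $sessions = @("ARC301", "ARC302")
    $baseUrl = "https://www.iammec.com/Session/Download"
    foreach ($code in $sessions) {
        $target = Join-Path $env:USERPROFILE "Downloads\$code.pptx"
        Invoke-WebRequest -Uri "$baseUrl/$code" -OutFile $target
    }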


The value of lagged copies for Exchange 2013

Let’s talk about… lagged copies.

For most Exchange administrators, the subject of lagged database copies falls somewhere between “the Kardashians’ shoe sizes” and “which of the 3 Stooges was the funniest” in terms of interest level. The concept is easy enough to understand: a lagged copy is merely a passive copy of a mailbox database where the log files are not immediately played back, as they are with ordinary passive copies. The period between the arrival of a log file and the time when it’s committed to the database is known as the lag interval. If you have a lag interval of 24 hours set on a database, a new log for that database generated at 3pm on April 4th won’t be played into the lagged copy until at least 3pm on April 5th (I say “at least” because the exact time of playback will depend on the copy queue length). The longer the lag interval, the more “distance” there is between the active copy of the mailbox database and the lagged copy.
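
Mechanically, setting up a lagged copy is just a matter of specifying a replay lag when you add the copy (names here are examples; the lag format is days.hours:minutes:seconds):

    # Add a copy of DB1 on MBX4 with a 24-hour replay lag
    Add-MailboxDatabaseCopy -Identity DB1 -MailboxServer MBX4 -ReplayLagTime 1.00:00:00

    # Or stretch an existing copy's lag out to 7 days
    Set-MailboxDatabaseCopy -Identity DB1\MBX4 -ReplayLagTime 7.00:00:00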

Lagged copies are intended as a last-ditch “goalkeeper” safety mechanism in case of logical corruption. Physical corruption caused by a hardware failure will happen after Exchange has handed the data off to be written, so it won’t be replicated. Logical corruption introduced by components other than Exchange (say, an improperly configured file-level AV scanner) that directly write to the MDB or transaction log files wouldn’t be replicated in any event, so the real use case for the lagged copy is to give you a window in time during which logical corruption caused by Exchange or its clients hasn’t yet been replicated to the lagged copy. Obviously the size of this window depends on the length of the lag interval; whether it is sufficient for you to a) notice that the active database has become corrupted, b) play the accumulated logs forward into the lagged copy, and c) activate the lagged copy depends on your environment.

The prevailing sentiment in the Exchange world has largely been “I do backups already so lagged copies don’t give me anything.” When Exchange 2010 first introduced the notion of a lagged copy, Tony Redmond weighed in on it. Here’s what he said back then:

For now, I just can’t see how I could recommend the deployment of lagged database copies.

That seems like a reasonable stance, doesn’t it? At MEC this year, though, Microsoft came out swinging in defense of lagged copies. Why would they do that? Why would you even think of implementing lagged copies? It turns out that there are some excellent reasons that aren’t immediately apparent. (It may help to review some of the resiliency and HA improvements delivered in Exchange 2013; try this excellent omnibus article by Microsoft’s Scott Schnoll if you want a refresher.) Here are some of the reasons why Microsoft has begun recommending the use of lagged copies more broadly.

1. Lagged copies are better in 2013

Exchange 2013 includes a number of improvements to the lagged copy mechanism. In particular, the new loose truncation feature introduced in SP1 means that you can prevent a lagged copy from taking up too much log space by adjusting the amount of log space that the replay mechanism will use; when that limit is reached, the logs will be played down to make room. Exchange 2013 (and SP1) also make a number of improvements to the Safety Net mechanism (discussed fully in Chapter 2 of the book), which can be used to play missing messages back into a lagged copy by retrieving them from the transport subsystem.
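
Loose truncation is turned on through registry values on each Mailbox server that holds a lagged copy. This sketch uses the value names as I understand them from the SP1 documentation, with made-up thresholds; double-check TechNet for the authoritative names and recommended values before deploying:

    $key = "HKLM:\SOFTWARE\Microsoft\ExchangeServer\v15\BackupInformation"
    New-Item -Path $key -Force | Out-Null
    # Example thresholds only; tune these to your environment
    Set-ItemProperty -Path $key -Name LooseTruncation_MinCopiesToProtect -Value 2 -Type DWord
    Set-ItemProperty -Path $key -Name LooseTruncation_MinDiskFreeSpaceThresholdInMB -Value 102400 -Type DWord
    Set-ItemProperty -Path $key -Name LooseTruncation_MinLogsToProtect -Value 10000 -Type DWord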

2. Lagged copies are continuously verified

When you back up a database, Exchange verifies each page as it is backed up by computing its checksum and comparing it to the stored value; if that check fails, you get the dreaded JET_errReadVerifyFailure (-1018) error. However, just because you can successfully complete the backup doesn’t mean that you’ll be able to restore it when the time comes. By comparison, the Exchange log replay mechanism logs errors immediately when it encounters them during playback. If you’re monitoring event logs on your servers, you’ll be notified as soon as this happens and you’ll know that your lagged copy is unusable now, not when you need to restore it. If you’re not monitoring your event logs, then lagged copies are the least of your problems.
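
Watching for those replay errors doesn’t require anything fancy; a sweep of the Application log on the server hosting the lagged copy will do. (A sketch; I’m filtering on the ESE provider and error level rather than specific event IDs, since the IDs vary by failure type.)

    # Most recent ESE errors on this Mailbox server
    Get-WinEvent -FilterHashtable @{ LogName = 'Application'; ProviderName = 'ESE'; Level = 2 } -MaxEvents 50 |
        Select-Object TimeCreated, Id, Message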

3. Lagged copies give you more flexibility for recovery

When your active and passive copies of a database become unusable and you need to fall back to your lagged copy, you have several choices, as described in TechNet. You can easily play back every log that hasn’t yet been committed to the database, in the correct order, by using Move-ActiveMailboxDatabase. If you’d rather, you can play back the logs up to a certain point in time by removing the log files that you don’t want to play back. You can also play messages back directly from Safety Net into the lagged copy.
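
For the simplest case, replaying everything and activating, the command looks like this (server and database names are examples; for a point-in-time recovery you’d first suspend the copy and set aside the logs you don’t want replayed, per the TechNet procedure):

    # Activate the lagged copy on MBX4, replaying all outstanding logs
    Move-ActiveMailboxDatabase DB1 -ActivateOnServer MBX4 -SkipLagChecks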

4. There’s no hardware penalty for keeping a lagged copy

Some administrators assume that you have to keep lagged copies of databases on a separate server. While this is certainly supported, you don’t have to have a “lag server” or anything like unto it. The normal practice in most designs has been to store lagged copies on other servers in the same DAG, but you don’t even have to do that. Microsoft recommends that you keep your mailbox databases no bigger than 2TB. Stuff your server with a JBOD array of the new 8TB disks (or, better yet, buy a Dell PowerVault MD1220) and you can easily put four databases on a single disk: the active copy of DB1, the primary passive copy of DB2, the secondary passive copy of DB3, and the lagged copy of DB4. This gives you an easy way to get the benefits of a 4-copy DAG while still using the full capacity of the disks you have: the additional IOPS load of the lagged copy will be low, so hosting it on a volume that already has active and passive copies of other databases is a reasonable approach (one, however, that you’ll want to test with Jetstress).
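
Laid out in PowerShell, that single-disk example might look like the following sketch (all names and preference values are hypothetical; the disk already holds the active copy of DB1):

    # MBX1 hosts a passive copy of DB2, another passive copy of DB3,
    # and a 7-day lagged copy of DB4, all on the same large disk
    Add-MailboxDatabaseCopy DB2 -MailboxServer MBX1 -ActivationPreference 2
    Add-MailboxDatabaseCopy DB3 -MailboxServer MBX1 -ActivationPreference 3
    Add-MailboxDatabaseCopy DB4 -MailboxServer MBX1 -ActivationPreference 4 -ReplayLagTime 7.00:00:00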

It’s always been the case that the architecture Microsoft recommends when a new version of Windows or Exchange is released evolves over time as they, and we, get more experience with it in the real world. That’s clearly what has happened here; changes in the product, improvements in storage hardware, and a shift in the economic viability of conventional backups mean that lagged copies are now much more appropriate for use as a data protection mechanism than they were in the past. I expect to see them deployed more and more often as Exchange 2013 deployments continue and our collective knowledge of best practices for them improves.


MEC 2014 wrapup

BLUF: it was a fantastic conference, far and away the best MEC I’ve attended. The quality of the speakers and technical presentations was very high, and the degree of community interaction and engagement was too.

I arrived in Austin Sunday afternoon and went immediately to dinner at County Line on the Lake, a justly famous Austin BBQ restaurant, to put on a “thank you” dinner for some of the folks who helped me with my book. Unfortunately, the conference staff had scheduled a speakers’ meeting at the same time, and a number of folks couldn’t attend due to flight delays or other last-minute intrusions. Next time I’ll poll invitees for their preferred time, and perhaps that will help. However, the dinner and company were both excellent, and I now have a copy of the book signed by all in attendance as a keepsake— a nice reversal of my usual pattern of signing books and giving them away.

Monday began with the keynote. If you follow me (or any number of other Exchange MVPs) on Twitter, you already know what I think: neither the content of the keynote nor its delivery was up to snuff when compared either to prior MEC events or other events such as Lync Conference. At breakfast Monday, Jason Sherry and I were excitedly told by an attendee that his Microsoft account rep insisted that he attend the keynote, and for the life of me I couldn’t figure out why until the tablet giveaway. That raised the energy level quite a bit! I think that for the next MEC, Julia White should be handed the gavel and left to run the keynote as she sees fit; I can guarantee that would result in a more lively and informative event. (For another time: a review of the Venue 8 Pro, which I like a great deal based on my use of it so far).

One area where the keynote excelled, though, was in its use of humor. The video vignette featuring Greg Taylor and David Espinoza was one of the funniest I’ve ever seen, and all of the other bits were good as well— check them out here. The keynote also featured a few good-natured pokes at the community, such as this:

Ripped

For the record, although I’ve been lifting diligently, I am not (yet) built like the guy who’s wearing my face on screen… but there’s hope.

I took detailed notes on each of the sessions I attended, so I’ll be posting about the individual sessions over the next few days. It’s fair to say that I learned several valuable things at each session, which is sort of the point behind MEC. I found that the quality of the “unplugged” sessions I attended varied a bit; the worst was merely OK, while the best (probably the one on Managed Availability) was extremely informative. It’s interesting that Tony and I seemed to choose very few of the same sessions, so his write-ups and mine will largely complement each other.

My Monday schedule started with Kamal Janardhan’s session on compliance and information protection. Let me start by saying that Kamal is one of my favorite Microsoft people ever. She is unfailingly cheerful, and she places a high value on transparency and openness. When she asks for feedback on product features or futures, it’s clear that she is sincerely seeking honest feedback, not just saying it pro forma. Her session was great; from there, I did my two back-to-back sessions, both of which went smoothly. I was a little surprised to see a nearly-full room (I think there were around 150 people) for my UM session, and even more surprised to see that nearly everyone in the room had already deployed UM on either Exchange 2010 or 2013. That’s a significant change from the percentage of attendees deploying UM at MEC 2012. I then went to the excellent “Unplugged” session on “Exchange Top Issues”, presented by the supportability team and moderated by Tony.

After the show closed for the day, I was fortunate to be able to attend the dinner thrown by ENow Software for MVPs/MCMs and some of their key customers. Jay and Jess Gundotra, as always, were exceptional hosts, the meal (at III Forks) was excellent, and the company and conversation were delightful. Sadly I had to go join a work conference call right after dinner, so I missed the attendee party.

Tuesday started with a huge surprise. On my way to the “Exchange Online Migrations Technical Deep Dive” session (which was good but not great; it wasn’t as deep as I expected), I noticed the picture below flashing on the hallway screens. Given that it was April Fool’s Day, I wasn’t surprised to see the event planners playing jokes on attendees; I just wasn’t expecting to be featured as part of their plans. Sadly, although I’m happy to talk to people about migrating to Office 365, the FAA insists that I do it on the ground and not in the air.

For lunch, I had the good fortune to join a big group of other Dell folks (including brand-new MVP Andrew Higginbotham, MCM Todd Hawkins, Michael Przytula, and a number of people from Dell Software I’d not previously met) at Iron Works BBQ. The food and company were both wonderful, and they were followed by a full afternoon of excellent sessions. The highlight of my sessions on Tuesday was probably Charlie Chung’s session on Managed Availability, which was billed as a 300-level session but was more like a 1300-level. I will definitely have to watch the recording a few times to make sure I didn’t miss any of the nuances.

Surprise!

This is why I need my commercial pilot’s license— so I can conduct airborne sessions at the next MEC.

Tony has already written at length about the “Exchange Oscars” dinner we had Tuesday night at Moonshine. I was surprised and humbled to be selected to receive the “Hall of Fame” award for sustained contributions to the Exchange community; I feel like there are many other MVPs, current and past, who deserve the award at least as much, if not more. It was great to be among so many friends spanning my more than 15 years working with Exchange; the product group turned out en masse, and the conversation, fellowship, and celebration were the high point of the entire conference for me. I want to call out Shawn McGrath, who received the “Best Tool” award for the Exchange Remote Connectivity Analyzer, which became TestExchangeConnectivity.com. Shawn took a good idea and relentlessly drove it from conception to implementation, and the whole world of Exchange admins has benefited from his effort.

Wednesday started with the best “Unplugged” session I attended: it covered Managed Availability and, unlike the other sessions I went to, featured a panel made up mostly of engineers from the development team. There were a lot of deep technical questions and a number of pointed roadmap discussions (not all of which were at my instigation). The most surprising session I attended, I think, was the session on updates to Outlook authentication— turns out that true single sign-on (SSO) is coming to all the Office 2013 client applications, and fairly soon, at least for Office 365 customers. More on that in my detailed session write-ups. The MVPs were also invited to a special private session with Perry Clarke. I can’t discuss most of what we talked about, but I can say that I learned about the CAP theorem (which hadn’t even been invented when I got my computer science degree, sigh), and that Perry recognizes the leadership role Exchange engineering has played in bringing Microsoft’s server products to high scale. Fun stuff!

Then I flew home: my original flight was delayed so they put me on one leaving an hour earlier. The best part of the return trip might have been flying on one of American’s new A319s to Huntsville. These planes are a huge improvement over the nasty old MD80s that AA used to fly DFW-HSV, and they’re nicer than DL’s ex-AirTran 717s to boot. So AA is still in contention for my westbound travel business.

A word about the Hilton Austin Downtown, the closest hotel to the conference center: their newly refurbished rooms include a number of extremely practical touches. There’s a built-in nightlight in the bathroom light switch, and each bedside table features its own 3-outlet power strip plus a USB port, and the work desk has its own USB charging ports as well. Charging my phone, Kindle, Venue 8 Pro, and backup battery was much simpler thanks to the plethora of outlets. The staff was unfailingly friendly and helpful too, which is always welcome. However, the surrounding area seemed to have more than its share of sirens and other loud noises; next time I might pick a hotel a little farther away.

I’ll close by saying how much I enjoyed seeing old friends and making new ones at this conference. I don’t have room (or a good enough memory) to make a comprehensive list, but to everyone who took the time to say hello in the hall, ask good questions in a session, wave at me across the expo floor, or pass the rolls at dinner— thank you.

Now to get ready for TechEd and Exchange Connections…


Getting ready for MEC 2014

Wow, it’s been nearly a month since my last post here. In general I am not a believer in posting stuff on a regular schedule, preferring instead to wait until I have something to say. All of my “saying” lately has been on behalf of my employer though. I have barely even had time to fly. For another time: a detailed discussion of the ins and outs of shopping for an airplane. For now, though, I am making my final preparations to attend this year’s Microsoft Exchange Conference (MEC) in Austin! My suitcase is packed, all my devices are charged, my slides are done, and I am prepared to overindulge in knowledge sharing, BBQ eating, and socializing.

It is interesting to see the difference in flavor between Microsoft’s major enterprise-focused conferences. This year was my first trip to Lync Conference, which I would summarize as being a pretty even split between deeply technical sessions and marketing focused around the business and customer value of “universal communications”. In reviewing the session attendance and rating numbers, it was no surprise that the most-attended sessions and the highest-rated sessions tended to be 400-level technical sessions such as Brian Ricks’ excellent deep-dive on Lync client sign-in behavior. While I’ve never been to a SharePoint Conference, from what my fellow MVPs say about it, there was a great deal of effort expended by Microsoft on highlighting the social features of the SharePoint ecosystem, with a heavy focus on customization and somewhat less attention directed at SharePoint Online and Office 365. (Oh, and YAMMER YAMMER YAMMER YAMMER YAMMER.) Judging from reactions in social media, this focus was well-received but inevitably less technical given the newness of the technology.

That brings us to the 2014 edition of MEC. The event planners have done something unique by loading the schedule with “Unplugged” panel discussions, moderated by MVP and MCM/MCSM experts and consisting of Microsoft and industry experts in particular technologies. These panels provide an unparalleled opportunity to get, and give, very candid feedback around individual parts of Exchange and I plan on attending as many of them as I can. This is in no way meant to slight the many other excellent sessions and speakers that will be there. I’d planned to summarize specific sessions that I thought might be noteworthy, but Tony published an excellent post this morning that far outdoes what I had in mind, breaking down sessions by topic area and projected attendance. Give it a read.

I’m doing two sessions on Monday: Exchange Unified Messaging Deep Dive at 245p and Exchange ActiveSync: Management Challenges and Best Practices at 1145a. The latter is a vendor session with the folks from BoxTone, during which attendees get both lunch (yay) and the opportunity to see BoxTone’s products in action. They’re also doing a really interesting EAS health check, during which you provide CAS logs and they run them through a static analysis tool that, I can almost guarantee, will tell you things you didn’t know about your EAS environment. Drop by and say hello!


“Ceres” Search Foundation install error in Exchange 2013 SP1

When deploying the RTM build of Exchange 2013 SP1, I found that one of my servers was throwing an error I hadn’t seen before during installation. (The error message itself is below for reference.) I found few other reports, although KB article 2889663 reports a similar problem with CU1 and CU2, caused by a trailing space in the PSModulePath environment variable. That wasn’t the problem in my case. Brian Reid mentioned that he’d had the same problem a few times, and that re-running setup until it finished normally was how he fixed it. So I tried that, and sure enough, the install completed normally. In most cases I wouldn’t bother to post a blog article saying “this problem went away on its own,” but the error seemed sufficiently unusual that I thought it might be helpful to document it for future generations.

Warning:
An unexpected error has occurred and a Watson dump is being generated: The following error was generated when "$error.Clear();
            if ($RoleProductPlatform -eq "amd64")
            {
                $fastInstallConfigPath = Join-Path -Path $RoleBinPath -ChildPath "Search\Ceres\Installer";
                $command = Join-Path -Path $fastInstallConfigPath -ChildPath "InstallConfig.ps1";
                $dataFolderPath = Join-Path -Path $RoleBinPath -ChildPath "Search\Ceres\HostController\Data";

                # Remove previous SearchFoundation configuration
                &$command -action u -silent;
                try
                {
                    if ([System.IO.Directory]::Exists($dataFolderPath))
                    {
                        [System.IO.Directory]::Delete($dataFolderPath, $true);
                    }
                }
                catch
                {
                    $deleteErrorMsg = "Failure cleaning up SearchFoundation Data folder. - " + $dataFolderPath + " - " + $_.Exception.Message;
                    Write-ExchangeSetupLog -Error $deleteErrorMsg;
                }

                # Re-add the SearchFoundation configuration
                try
                {
                    # the BasePort value MUST be kept in sync with dev\Search\src\OperatorSchema\SearchConfig.cs
                    &$command -action i -baseport 3800 -dataFolder $dataFolderPath -silent;
                }
                catch
                {
                    $errorMsg = "Failure configuring SearchFoundation through installconfig.ps1 - " + $_.Exception.Message;
                    Write-ExchangeSetupLog -Error $errorMsg;

                    # Clean up the failed configuration attempt.
                    &$command -action u -silent;
                    try
                    {
                        if ([System.IO.Directory]::Exists($dataFolderPath))
                        {
                            [System.IO.Directory]::Delete($dataFolderPath, $true);
                        }
                    }
                    catch
                    {
                        $deleteErrorMsg = "Failure cleaning up SearchFoundation Data folder. - " + $dataFolderPath + " - " + $_.Exception.Message;
                        Write-ExchangeSetupLog -Error $deleteErrorMsg;
                    }
                }
            }
        " was run: "Error occurred while uninstalling Search Foundation for Exchange.System.Exception: Cannot determine the product name registry subkey, neither the 'RegistryProductName' application setting nor the 'CERES_REGISTRY_PRODUCT_NAME' environment variable was set
   at Microsoft.Ceres.Common.Utils.Registry.RegistryUtils.get_ProductKeyName()
   at Microsoft.Ceres.Exchange.PostSetup.DeploymentManager.DeleteDataDirectory()
   at Microsoft.Ceres.Exchange.PostSetup.DeploymentManager.Uninstall(String installDirectory, String logFile)
   at CallSite.Target(Closure , CallSite , Type , Object , Object )".
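
If you hit this error, it’s worth ruling out the KB 2889663 condition before re-running setup; this quick check flags any PSModulePath element that ends in whitespace:

    # Per KB 2889663, a trailing space in PSModulePath can break Search Foundation setup
    [Environment]::GetEnvironmentVariable("PSModulePath", "Machine") -split ';' |
        Where-Object { $_ -match '\s+$' }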


Office 365 Personal Archives limited to 100GB

There’s a bit of misinformation, or lack of information, floating around about the use of Office 365 Personal Archives. This feature, which is included in the higher-end Office 365 service plans (including E3/E4 and the corresponding A3/A4 plans for academic organizations), is often cited as one of the major justifications for moving to Office 365. It’s attractive because of the potential savings from greatly reducing PST file use and eliminating (or at least sharply reducing) the use of on-premises archiving systems such as Enterprise Vault.

Some Microsoft folks have been spreading the good news that archives are unlimited (samples here and here), and so have many consultants, partners, and vendors– including me. In fact, I had a conversation with a large customer last week in which they expressed positive glee about being able to get their data out of on-prem archives and into the cloud.

The only problem? Saying the archives are unlimited isn’t quiiiiite true.

If you read the service description for Exchange Online (which we all should be doing regularly anyway, as it changes from time to time), you’ll see this:

Clip from Nov 2013 O365 service description

See that little “3”? Here’s its text:

Each subscriber receives 50 GB of storage in the primary mailbox, plus unlimited storage in the archive mailbox. A default quota of 100 GB is set on the archive mailbox, which will generally accommodate reasonable use, including the import of one user’s historical email. In the unlikely event that a user reaches this quota, a call to Office 365 support is required. Administrators can’t increase or decrease this quota.

So as an official matter, there is no size limit. As a practical matter, the archive is soft-limited to 100GB, and if you want to store more data than that, you’ll have to call Microsoft support to ask for a quota increase. My current understanding is that 170GB is the real limit, as that is the maximum size to which the quota can currently be increased. I don’t know if Microsoft has stated this publicly anywhere yet but it’s certainly not in the service descriptions. That limit leads me to wonder what the maximum functional size of an Office 365 mailbox is– that is, if Microsoft didn’t have the existing 100GB quota limit in place, how big a mailbox could they comfortably support? (Note that this is not the same as asking what size mailbox Outlook can comfortably support, and I bet those two numbers wouldn’t match anyway.) I suppose that in future service updates we’ll find out, given that Microsoft is continuing to shovel mailbox space at users as part of its efforts to compete with Google.
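
If you want to know how close your users are to the default quota today, a quick check like this works from Exchange Online remote PowerShell (a sketch; the mailbox name is a placeholder):

    # Current archive quota settings for one user
    Get-Mailbox jsmith | Format-List ArchiveStatus, ArchiveQuota, ArchiveWarningQuota

    # How big the archive actually is right now
    Get-MailboxStatistics jsmith -Archive | Select-Object DisplayName, TotalItemSize, ItemCount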

Is this limit a big deal? Not really; the number of Office 365 customers who will need more than 100GB of archive space for individual user mailboxes is likely to be very small. The difference between “unlimited” and “so large that you’ll never encounter the limit” is primarily one of semantics. However, there’s always a danger that customers will react badly to poor semantics, perhaps because they believe that what they get isn’t what they were promised. While I would like to see more precision in the service descriptions, it’s probably more useful to focus on making sure that customers (especially those who are heavy users of on-premises archives or PST files) know that there’s currently a 100GB quota, which is why I wrote this post.

For another time: a discussion of how hard, or easy, it is to get large volumes of archive data into Office 365 in the first place. That’s one of the many topics I expect to see explored in great depth at MEC 2014, where we’ll get the Exchange team’s perspective, and then again at Exchange Connections 2014, where I suspect we’ll get a more nuanced view.


Getting ready for Lync Conference 2014 (bonus Thursday Trivia #106)

So, first: here’s the view from my second-floor home office:


Actually, I had to walk across the street to get this particular shot, but it was worth it. We got about 4” or so of snow in my neighborhood; I got out of Raleigh just in time to miss their snowmageddon, which suits me fine. The boys and I had a good time about 10pm last night throwing snowballs and watching big, fat flakes fall. The roads are passable now and will get better as it warms, but tonight it’ll be cold again and they’ll probably refreeze.

I’m making my final preparations for Lync Conference 2014 next week. I’m presenting a total of four times:

  • VOICE401, “Deep Dive: Exchange 2013 and Lync 2013 Unified Messaging Integration”, is on Wednesday at 1pm in Copperleaf 10. This session will cover some of the internals of Exchange UM; it’s targeted at Lync admins who may not have much knowledge of Exchange but are already familiar with SIP signaling and the like.
  • SERV301, “Exchange 2013 and Lync 2013: ‘Better Together’ Demystified”, is on Tuesday at 2pm in Copperleaf 9, and there is a repeat scheduled for Wednesday at 430p (also in Copperleaf 9). This session covers all the places where Exchange and Lync tie together so that you get a better experience when both are deployed.
  • On Tuesday at 430p, I’m taking part in an informal session on Exchange-y stuff at the Microsoft booth in the exhibit hall. This is super informal, so it’s probably the best place to drop by and say hello if you can.

Dell has a pretty heavy presence at the show; Michael Przytula is presenting a session covering the Lync device ecosystem (Wednesday, 230p, Bluehorn 1-3) that I think will be pretty neat, because who doesn’t love shiny devices? George Cordeiro and Doug Davis are both doing sessions around how to identify the actual ROI of a Lync deployment, which is something customers often ask about before deployment. Even if that doesn’t sound interesting, the Dell booth will be staffed by some of our hotshot Lync guys (including Louis Howard and Scott Moore), and we’re giving away a Venue 11 Pro and a bunch of very nice Jabra and Plantronics headsets.

Now, your trivia for the week:


Office 365 token disclosure flaw: patch your desktops now

Happy New Year! To start the year off right, let’s talk about security. More to the point, let’s talk about Office 365 security.

One of the ways I often talk about Office 365 to customers is this: any time you move to a hosted service, you’re placing a bet that your hosting provider can do something better or cheaper than you do. Maybe they’ll deliver better uptime than you can afford to provide, or they’ll offer global reach, or some feature or function that you don’t currently have. As with any other bet, you have to carefully evaluate the odds and your counterparty (the person offering the bet). One of the big arguments in favor of Office 365 has been its security: Microsoft has invested a huge amount of money in physical and logical security for Office 365. Tie this in with the huge investment (several billion dollars and counting) brought about by Trustworthy Computing and you can see why Microsoft is eager to tout the security of their products: they have made huge strides over the last ten years. (Sadly, many other vendors are still as bad as they were back in 2005… let that thought sink in for a few minutes.)

In December, Microsoft released a patch, MS13-104, which every organization using Office 365 should immediately deploy. Microsoft rated this bulletin as “important” using their severity scale. While I understand that the “critical” severity is usually reserved for flaws that could allow remote code execution, I think this is just as bad because it allows an attacker to silently steal every document you have in a SharePoint Online document library.

Wow.

Keep this tab open, then open a new tab and use it to start figuring out how to patch your clients ASAP if you’re using SharePoint Online. Then you can come back.

I won’t repeat the excellent analysis performed by Adallom Security, the folks who reported the flaw to Microsoft in May 2013. That’s right: they reported in May 2013, and the patch was issued in December 2013. That’s a minimum of 7 months of days-of-risk, which is bad enough without considering how long this flaw was being exploited before Adallom found it. However, I do want to make a couple of additional points.

First, they wrote their post before the recent spate of disclosures surrounding the NSA’s Tailored Access Operations (TAO) team and their catalog of exploits. There is of course no evidence that NSA developed or was using this particular exploit, but this is exactly the kind of silent, virtually undetectable attack that is the specialty of nation-states. The fact that Adallom’s customer is a large, high-profile enterprise is potentially bad news for Office 365 sales efforts, given that those customers are already a little leery of cloud services because of a perceived lack of security controls.

Second, this exploit apparently doesn’t work against Exchange Online or Lync Online, but that hasn’t been proven conclusively. Don’t hold off patching Office 2013 just because you aren’t using SharePoint Online.

Third, it seems to me that this kind of flaw is the natural consequence of breaking new ground. Seamlessly tying together on-premises and cloud services through a complex desktop suite is something that no other software company has even attempted: the major Office 365 competitors, such as Box.net and Google, don’t offer traditional desktop productivity apps, preferring instead to run inside the browser, where the design patterns and potential vulnerabilities of authentication are much better understood. So I don’t think of this as sloppiness necessarily on Microsoft’s part: sometimes in complex systems, people make mistakes. 210+ days-of-risk makes me a little nervous though.

My overall takeaway: if you have truly sensitive data that you want to protect, putting it in the cloud is not necessarily any more risky than keeping it on-premises. That may seem counterintuitive, but an entity that is determined to get your data has many potential avenues of attack, and my experience tells me that the vast majority of sites have a number of local vulnerabilities (such as poor patching practices, poor intrusion detection, or inattention to basic security practices) that put them at higher risk than a relatively esoteric, hard-to-exploit flaw like this one. If you don’t believe me, just look at the number of sites hit by Cryptolocker and various banking-related Trojans. Put another way, you don’t need to worry about defending yourself against NSA if you can’t even manage to defend yourself against script kiddies.

Now go forth and patch!


Office 365 beta exams: a few thoughts

Last week I took the beta versions of the two MCSA exams for Office 365: 71-346 is Managing Office 365 Identities and Requirements and 71-347 is Enabling Office 365 Services. I thought it might be useful to write up a few NDA-safe notes on the exams and the topics they cover. Keep in mind that the questions on the beta exam are there because they’re being tested; the objective domains (ODs), or areas of knowledge being tested, won’t change, but the specific questions probably will as the beta identifies “bad” questions (those that everyone gets right or everyone gets wrong are immediately suspect!). The Microsoft exam development process is really complicated; to summarize, by the time the exams hit beta, the knowledge areas to be tested are set in stone but the questions themselves can be modified, or thrown out, based on beta exam feedback.

First, be forewarned that there are no formal study materials for these exams. I hear that Office 365 Admin Inside Out from MS Press is decent, but haven’t read it yet. Be prepared to do a lot of binging to look up specific things that you want to know how to do.

Second, the absolute best way to prepare for the exam is to sign up for a trial Office 365 E3/E4 tenant and make sure that you know how to do everything mentioned in the exam objectives in both PowerShell and the GUI. That dual requirement is baloney, and it has been a hot topic of debate in the MVP community. IMHO there is little value in asking an examinee to show that they know how to do something in PS which is trivial to do in the GUI, especially if it’s a one-time task like setting up Azure RMS. Nonetheless, that’s the requirement.

For 346, specific things you should probably know include:

  • How to add a new tenant, from scratch. This includes choosing a region (and what effect that has), setting the domain purpose, and confirming domain ownership.
  • How to configure DNS records and firewall settings: SRV, CNAME, and MX records, what they point to, etc.
  • How to design ADFS: how to size it, when to use SQL Server instead of WID, and so on. Note that actually doing HA or DR with ADFS is not one of the topics listed in the OD, but you’ll need to know how to do it anyway. The ADFS 2.0 documentation content map is very helpful here.
  • How to administer (parts of) ADFS, including installing it (prerequisites too) on both Windows 2008 and 2012 (but not R2), controlling filtering, and managing dirsync. I have heard that there are questions in the pool that cover ADFS 3.0 but don’t know if that’s true.
  • How you’d conduct a pilot, including how to use connected accounts and mail forwarding.
  • What the different administrative roles in 365 are for and what they can do, including how to manage delegated admins.
  • How to provision / license users through the 365 Admin Center.
  • Basic account management through PowerShell: creating users, modifying their properties, licensing them, etc. (there’s a quick sketch of this after the list). Nothing too exotic; I expect most Exchange and Lync admins can do these types of things now without difficulty.
  • How to provision, enable, and administer AD RMS, a surprisingly cool technology that Brian Reid has written about at length already.
  • What the mail flow/message hygiene reports are and what you can do with them
  • How to do daily admin tasks: checking service health, using the RSS feeds, opening service tickets, etc.
  • Troubleshooting using the Remote Connectivity Analyzer and MOSDAL
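
As an example of the kind of account-management PowerShell the exam expects, here’s a minimal sketch using the MSOnline module (the tenant name, UPN, and SKU are placeholders; run Get-MsolAccountSku to find your own):

    Connect-MsolService

    # Create a user and assign an E3 license in one shot;
    # UsageLocation must be set before a license can be assigned
    New-MsolUser -UserPrincipalName "jsmith@contoso.onmicrosoft.com" `
        -DisplayName "Jane Smith" -FirstName Jane -LastName Smith `
        -UsageLocation US -LicenseAssignment "contoso:ENTERPRISEPACK"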

347 is a little more of a mixed bag because it contains both admin-level material similar to the ODs in 346 and a smorgasbord of other stuff. The most important thing to know here: you must know how to do stuff with SharePoint Online. Out of the 53 questions on my beta exam, 12 of them (22.6%) were related to SPO. Given that about 0.5% of my actual knowledge relates to SPO, that was a problem. I don’t use it, and I haven’t worked on the SPO-related parts of any deployments for Dell customers, so I was unprepared. Don’t be like me. Be prepared to demonstrate that you know:

  • All about Click-to-Run, including how it differs from MSI installations, how you customize what gets installed, how the installs themselves work, etc.
  • All about Office Telemetry. Never heard of it? Neither had I. Its inclusion in these exams seems a bit odd, since I suspect you’d see people running it before deploying Office 2013 on-prem too. It’s been a while since I was directly involved in the world of desktop deployment, though, so maybe everyone but me knows about it.
  • How to manage SPO site collections, including how to share and unshare them, set quotas, etc. (see the sketch after this list)
  • How to provision (including how to license) Excel and Visio Services
  • How to manage proxy and reply-to/default addresses, resource mailboxes, external contacts, and groups in Exchange— standard stuff for working Exchange admins.
  • How to work with archiving policies on both Exchange and Lync, including integration with Exchange 2013’s in-place hold mechanism
  • How to set up Lync settings for external access, including visibility of presence and per-user access to PIC
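
For the SPO material, at minimum get comfortable with the SharePoint Online Management Shell; here’s a sketch of the site-collection basics (the URLs and quota values are placeholders):

    # Connect to the tenant admin endpoint
    Connect-SPOService -Url https://contoso-admin.sharepoint.com

    # Set storage (in MB) and resource quotas on a site collection
    Set-SPOSite -Identity https://contoso.sharepoint.com/sites/projects -StorageQuota 2048 -ResourceQuota 300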

Again, you need to know how to do these things in both PowerShell and the GUI, despite the fact that many of the tasks in the ODs will be things you do once (or maybe quarterly, at most).

Should you take the beta exams? It depends, I guess. They cost the same as the “real” exam, and they’re subject to the same “Second Shot” MS program that grants you one retake of a failed exam. So you could sign up and take the beta now for $150, then take the real exam for free if you don’t pass. Based on the state of the exam questions I saw, and the lack of structured training materials, I don’t recommend that you rush to take the exam, though; the real version goes live on 17 February. Until then, your time would probably be better spent setting up a scratch tenant that you can play with, then running through the list of ODs to make sure that you know how to do the things on the list.

I’d be interested in hearing from people who took the exam to see how well you think the exam actually matches up with what Office 365 admins and designers need to know in the real world.


MEC and Lync Conference 2014 session list (partly) released

The fine folks in charge of organizing the Microsoft Exchange Conference have released a partial list of the sessions that will be on offer, as well as a list of speakers (oddly enough, the speakers are in alphabetical order by first name… ooops). There are some surprises in the mix, and I expect a few more once the full list of sessions is released in the near future.

First, there’s clearly a heavy emphasis on panel-style discussions: there are no fewer than 8 “Experts Unplugged” sessions featuring product managers from the Exchange team. I’m moderating the UM panel session, which should be a good opportunity for people to have their in-depth UM questions answered by the PMs who own the features in UM. In addition, the support team has a session called “Experts Unplugged: Exchange Top Issues – What are they and does anyone care or listen?” that I can almost guarantee will be worth your time. Amir, Jennifer, Scott, Shawn, Tim, and Nino did a very similar panel at the MVP summit and it was extremely informative— plus they’re a fun bunch to talk to. I expect the other panels to be of equal quality, and the fact that there’s one per track is a good sign that the Exchange team is interested in getting two-way feedback from the community.

Second, there’s a nice mix of topics covered: a number of sessions promise to compare or contrast the on-premises and service environments (I’m particularly looking forward to “Engineers vs Mechanics”), and there seems to be a balance between architectural-focused sessions that explain design principles and sessions focused more narrowly on how to administer, manage, or use features such as RBAC (presented by Bhargav Shukla, who taught RBAC for the late lamented MCM program) and archiving. This balance between explaining why features work a particular way and how to use them was a hallmark of MEC last year, and I’m pleased to see it continuing in the sessions this year.

There are a couple of sessions whose abstracts are missing or incomplete. For example, the “Enterprise Social” session promises to “discuss Social experiences in the MSFT suite beyond e-mail.” I’d bet $5 that this is a code phrase for “talking about Yammer,” but we’ll see. As we get closer to MEC, expect to see more detailed abstracts, as well as additional sessions.

Turning abruptly to Microsoft’s other major unified communications conference: I’m speaking for the first time at Lync Conference (which lacks a catchy acronym so far: I suggest “LyC”, pronounced “like”). The session list is worth a careful review; I don’t know if there are more sessions forthcoming, but the ones that are there focus much more heavily on on-premises topics than the MEC sessions do, and there’s an entire track titled “Business Value” dedicated to helping attendees identify areas where Lync can add value to their environments and then squeeze that value out as rapidly as possible. There is also a “Lync Online” track shown in the track selection pulldown but it shows no sessions right now— I’m sure they’ll appear in the near future. It looks like the content for the developer-focused track will be super technical; it will be interesting to see how the level of detail in those sessions compares to the developer-track session at MEC. I get the sense that there will be more admins-who-are-interested-in-development at MEC and more developers-who-write-code-every-day at LyC, but I could be wrong.

My Lync Conference session is a 300-level look at integration between Exchange 2013 and Lync 2013. It’s nicely complemented by Jens Trier Rasmussen’s 400-level session on the same topic; we’ll be working together to coordinate topics. The Lync Conference also features sessions presented by sponsors; Dell (or, more precisely, Michael Przytula, my boss) will be presenting one. I’ll have more to say about its contents when we get closer to showtime.

I’m looking forward to both shows— meeting with the community is always really energizing, and both shows have a great session lineup. If you haven’t already registered for one or both, you should strongly consider it while early registration is still ongoing. What you learn in a single session can easily save you (or make you) enough money to make the entire trip worthwhile, and the social and community benefits of attending are icing on the cake. See you there!

Leave a comment

Filed under General Stuff, UC&C

Android 4.4/KitKat Exchange ActiveSync problems; fixed in 4.4.1?

Apple’s iOS has gotten a deservedly bad reputation for its Exchange ActiveSync implementation. But, to their credit, things seem to be fairly stable with the latest iOS 7.0.4 update. On the other hand, Google seems to have largely gotten a free pass on the quality of its EAS implementation; in fact, for quite some time Android didn’t include EAS functionality, although some individual vendors did. The latest release, 4.4 (or “KitKat”, a particularly nasty type of candy, at least in the US), includes EAS as part of the core OS, but it appears to have some bugs, including at least one that I am still trying to get a good understanding of.

First, there appears to be a problem with client certificate authentication, i.e. it doesn’t work. To Google’s credit, they maintain a public bug-tracking system where everyone can see the bug report and status, at least of this particular bug. Imagine a world where Microsoft and Apple were similarly transparent about bugs in their major products… OK, back to reality; Google of course doesn’t do the same for their proprietary products, just for open-source efforts such as Android. On the other hand, this kind of public reporting lets people show their ignorance; check out this thread, where a couple of engineers for a competing product show that they haven’t read the protocol specs in detail (hint: see this discussion of WindowSize to spot the flaw in their argument).
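For those who haven’t memorized MS-ASCMD: WindowSize is the element in a Sync request that caps how many changed items the server may return in a single response (the default is 100 and the maximum is 512); when more changes are pending, the server sets MoreAvailable and the client is expected to issue another Sync. Here’s roughly what the relevant part of a request looks like in decoded form (the values are illustrative, and the actual wire format is WBXML, not plain XML):

    <!-- Simplified, decoded EAS Sync request; real traffic is WBXML-encoded -->
    <Sync xmlns="AirSync">
      <Collections>
        <Collection>
          <SyncKey>3</SyncKey>
          <CollectionId>5</CollectionId>
          <WindowSize>100</WindowSize> <!-- server returns at most this many changes -->
        </Collection>
      </Collections>
    </Sync>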

Anyway, Tony pointed out this particular problem to the Exchange community just before Thanksgiving. Recently I was contacted by a customer who was seeing another widespread KitKat issue: devices persistently pounding the server with EAS Sync commands, over and over and over and… well, you get the idea. Although I haven’t seen a clear cause identified, Google claims to have fixed this problem in the 4.4.1 update (see the reply by Ersher on page 24 of this thread), so the question becomes whether all the users claiming to be affected by this bug have upgraded.
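If you suspect you have affected devices, the CAS IIS logs are the quickest place to look, since every EAS request (including its Cmd and DeviceId parameters) is logged there. Here’s a rough PowerShell sketch; the log path and file name below are illustrative, so point it at your own W3C logs:

    # Hypothetical log file; adjust path/name for your server
    $log = 'C:\inetpub\logs\LogFiles\W3SVC1\u_ex131209.log'
    Get-Content $log |
        Where-Object { $_ -match 'Microsoft-Server-ActiveSync' -and $_ -match 'Cmd=Sync' } |
        ForEach-Object { if ($_ -match 'DeviceId=([^&\s]+)') { $Matches[1] } } |
        Group-Object | Sort-Object Count -Descending |
        Select-Object Count, Name -First 10   # devices issuing the most Sync requests

A device that shows up with tens of thousands of Sync hits in a single day’s log is a pretty strong hint that it’s one of the affected units.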

Actually, the question becomes at what point Exchange administrators begin to proactively block new mobile device OS releases! While I’m not quite ready to declare a fatwa on all new device releases, it is beginning to look as though organizations with diverse BYOD populations might be well served to establish some kind of criteria for staging support of new releases. Apple, Microsoft, and Google all offer developer access to new OS releases, often months in advance, so one possibility is to establish a pool of test devices for new OS releases— something which many sites already do with new desktop OS releases. The logistics of working out such a program might be challenging, but I think the effort might be well worth it if it prevents unpleasant surprises caused by device-side EAS misbehavior.
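Exchange’s ActiveSync device access rules already give you the mechanism for this kind of staging. A sketch (the OS string below is a placeholder; check what your devices actually report first, and note that rules match the reported string exactly, so a new point release may need a new rule):

    # See what OS strings your devices report before building a rule
    Get-ActiveSyncDevice | Group-Object DeviceOS | Sort-Object Count -Descending

    # Quarantine, rather than block, devices reporting the new release
    # until it has passed your testing ('Android 4.4.1' is illustrative)
    New-ActiveSyncDeviceAccessRule -Characteristic DeviceOS `
        -QueryString 'Android 4.4.1' -AccessLevel Quarantine

Quarantining rather than flat-out blocking lets you approve individual devices while you evaluate the release.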

There’s another, perhaps less palatable, option on the horizon. Now that we have OWA for Devices (known colloquially as Mobile OWA, or MOWA, within Microsoft), if you were so inclined you could block all iOS device access and require your users to use MOWA. Since there’s no MOWA version for Android yet (and there may never be; Microsoft hasn’t given any hints), this wouldn’t be a comprehensive solution, and it would likely aggravate users to a high degree… but as improvements in MOWA performance and capability roll out, it might become a more viable option.

(side note: speaking of aggravation, it’s amazing how aggravated Google’s customers get when they don’t receive an official answer from Google in the time frame they expect. At least Google gives official answers in their support forums, something you are unlikely to see happen much in the support fora offered for iOS and Windows Phone!)

One thing I’d like to see emerge is something akin to collaborative spam filtering— when I report a message as spam to my filtering service, that message is filtered for other subscribers too. It seems like BoxTone or another company might be able to offer a subscription service to customers that gives them early alerts to wide-scale problems reported by other customers, such as regional outages in a carrier network or a pattern of sync misbehavior for a specific device family. I know I’d be happy to pay money for a service that would give me early warning of apparent problems with new device software releases— what about you?

15 Comments

Filed under UC&C

Microsoft, encryption, and Office 365

So the gloves are starting to come off: Microsoft general counsel Brad Smith wrote a long blog post this morning discussing how Microsoft plans to protect its customers’ data from unlawful interception, or what he calls “unauthorized government access”. He never specifically mentions NSA, GCHQ, et al, but clearly the Five Eyes partners are who he’s talking about. Many other news outlets have dissected Smith’s post in detail, so I wanted to focus on a couple of lesser-known aspects.

First is that Microsoft is promising to use perfect forward secrecy (PFS) when it encrypts communications links. Most link-encryption protocols, including IPsec and SSL, support a key exchange algorithm known as Diffie-Hellman that lets the two endpoints agree on a temporary session key, authenticated by their longer-term private/public key pairs. The session key is usually renegotiated for each conversation. If Eve the eavesdropper or Mallet the man-in-the-middle intercepts the communications, they may be able to decrypt them if they can guess or obtain the session key. Without PFS, an attacker who can intercept and record a communication stream now, and later guess or obtain the private key of either endpoint, can decrypt the stream. Think of this like finding a message in a bottle written in an unknown language, then next year seeing Rosetta Stone begin to offer a course in that language. PFS protects an encrypted communication stream now from future attack by changing the way the session keys are generated and shared: the keys are derived from ephemeral key pairs created for each session and then discarded, so compromising an endpoint’s long-term private key later doesn’t let the attacker recover the session keys for traffic recorded earlier. Twitter, Google, and a number of other cloud companies have already deployed PFS (Google, in fact, started in 2011), so it’s great to see Microsoft joining this trend. (A topic for another day: under what conditions can on-premises Exchange and Lync use PFS? Paging Mark Smith…)
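If you’re curious whether a given service negotiates PFS with your client, you can check for yourself. Here’s a minimal PowerShell sketch using .NET’s SslStream (with the caveat that on some Windows builds an ECDHE negotiation shows up as a raw numeric value rather than a friendly enum name):

    $tcp = New-Object Net.Sockets.TcpClient('outlook.office365.com', 443)
    $ssl = New-Object Net.Security.SslStream($tcp.GetStream())
    $ssl.AuthenticateAsClient('outlook.office365.com')
    # DiffieHellman (DHE/ECDHE) means ephemeral keys, i.e. PFS;
    # RsaKeyX means the non-PFS RSA key exchange
    $ssl.KeyExchangeAlgorithm
    $ssl.Dispose(); $tcp.Close()

If the result is RsaKeyX, the session key traveled encrypted under the server’s long-term RSA key— exactly the non-PFS case described above.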

Second is that Microsoft is acknowledging that they use data-at-rest encryption, and will be using it more often. Probably more than any other vendor, Microsoft is responsible for democratizing disk encryption by including BitLocker in Windows Vista and its successors, then steadily improving it. (Yes, I know that TrueCrypt and PGP predated BitLocker, but their installed bases are tiny by comparison.) Back in 2011 I wrote about some of the tradeoffs in using BitLocker with Exchange, and I suspected that Microsoft was using BitLocker in their Office 365 data centers, a suspicion that was confirmed recently during a presentation by some of the Office 365 engineering team and, now, by Smith’s post. Having said that, data-at-rest encryption isn’t that wonderful in the context of Office 365 because the risk of an attacker (or even an insider) stealing data by stealing/copying physical disks from an Office 365 data center is already low. There are many layers of physical and procedural security that help keep this risk low, so encrypting the stored data on disk is of relatively low value compared to encrypting the links over which that data travels.
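For the on-premises equivalent, checking BitLocker status on your database volumes is straightforward. A quick sketch using the BitLocker module that ships with Windows 8/Server 2012 and later (‘E:’ being a hypothetical database volume):

    # Report data-at-rest encryption status for a mailbox database volume
    Get-BitLockerVolume -MountPoint 'E:' |
        Select-Object MountPoint, VolumeStatus, EncryptionPercentage, ProtectionStatus

(The older manage-bde -status command reports the same information on servers without the PowerShell module.)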

The third aspect is actually something that’s missing from Smith’s post, expressed as one word: Skype. Outlook.com, Office 365, SkyDrive, and Azure are all mentioned specifically as targets for improved encryption, but nothing about Skype? That seems like a telling omission, especially given Microsoft’s lack of prior transparency about interception of Skype communications. Given the PR benefits that the company undoubtedly expects from announcing how they’re going to strengthen security, the fact that Smith was silent on Skype indicates, at least to suspicious folks like me, that for now they aren’t making any changes. Perhaps the newly-announced transparency centers will provide neutral third parties an opportunity to inspect the Skype source code to verify its integrity.

Finally, keep in mind that nothing discussed in Smith’s post addresses targeted operations where the attacker (or government agency, take your pick) mounts man-in-the-middle attacks (QUANTUM/FOXACID) or infiltrates malware onto a specific target’s computer. That’s not necessarily a problem that Microsoft can solve on its own.

Leave a comment

Filed under Office 365, UC&C