Tag Archives: Exchange 2013

Microsoft updates Recoverable Items quota for Office 365 users

Remember when I posted about the 100GB limit for Personal Archive mailboxes in Office 365? It turns out that there was another limit that almost no one knew about, primarily because it involves mailbox retention. As of today, when you put an Office 365 mailbox on In-Place Hold, the size of the Recoverable Items folder is capped at 30GB. This is plenty for the vast majority of customers because a) not many customers use In-Place Hold in the first place and b) not many users have mailboxes that are large enough to exceed the 30GB quota. Multiply two small numbers together and you get another small number.

However, there are some customers for whom this is a problem. One of the most interesting things about Office 365 to me is the speed at which Microsoft can respond to customer requests by changing aspects of the service architecture and provisioning. In this case, the Exchange team is planning to increase the size of the Recoverable Items quota to 100GB. Interestingly, they’re actually starting by increasing the quota for user mailboxes that are now on hold— so from now until July 2014, they’ll be silently increasing the quota for those users. If you put a user on hold today, however, their quota may not be set to 100GB until sometime later.

If you need an immediate quota increase, or if you’re using a dedicated tenant, you’ll still have to use the existing mechanism of filing a support ticket to have the quota increased.
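
If you want to see where a particular mailbox stands right now, here’s a quick check (the mailbox name is made up, and this assumes a remote PowerShell session to Exchange Online):

    # Quotas and hold status on the mailbox
    Get-Mailbox jsmith | Format-List RecoverableItemsQuota,RecoverableItemsWarningQuota,InPlaceHolds,LitigationHoldEnabled

    # How much of the Recoverable Items folder is actually in use
    Get-MailboxFolderStatistics jsmith -FolderScope RecoverableItems |
        Select-Object Name,ItemsInFolder,FolderAndSubfolderSize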

There’s no public post on this yet, but I expect one shortly. In the meantime, bask in the knowledge that with a 50GB mailbox, 100GB Personal Archive, and 100GB Recoverable Items quota, your users probably aren’t going to run out of mailbox space any time soon.

Leave a comment

Filed under Office 365, UC&C

Two-factor authentication for Outlook and Office 2013 clients

I don’t usually put on my old man hat, but indulge me for a second. Back in February 2000, in my long-forgotten column for TechNet, here’s what I said about single-factor passwords:

I’m going to let you in on a secret that’s little discussed outside the security world: reusable passwords are evil.

I stand by the second half of that statement: reusable passwords are still evil, 14 years later, but at least the word is getting out, and multi-factor authentication is becoming more and more common in both consumer and business systems. I was wrong when I assumed that smart cards would become ubiquitous as a second authentication factor; instead, the “something you have” role is increasingly often filled by a mobile phone that can receive SMS messages. Microsoft bought into that trend with their 2012 purchase of PhoneFactor, which is now integrated into Azure. Now Microsoft is extending MFA support into Outlook and the rest of the Office 2013 client applications, with a few caveats. I attended a great session at MEC 2014 presented by Microsoft’s Erik Ashby and Franklin Williams that both outlined the current state of Office 365-integrated MFA and outlined Microsoft’s plans to extend MFA to Outlook.

First, keep in mind that Office 365 already offers multi-factor authentication, once you enable it, for your web-based clients. You can use SMS-based authentication, have the service call you via phone, or use a mobile app that generates authentication codes, and you can define “app passwords” that are used instead of your primary credentials for applications— like Outlook, as it happens— that don’t currently understand MFA. You have to enable MFA for your tenant, then enable it for individual users. All of these services are included with Office 365 SKUs, and they rely on the Azure MFA service. You can, if you wish, buy a separate subscription to Azure MFA if you want additional functionality, like the ability to customize the caller ID that appears when the service calls your users.
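
The per-user step can be scripted, too; here’s a rough sketch using the MSOnline module as it exists today (the UPN is made up, and doing it through the portal works just as well):

    Import-Module MSOnline
    Connect-MsolService

    # Require MFA for a single user. "Enabled" prompts the user to finish setup at next
    # sign-in; "Enforced" also pushes non-browser clients over to app passwords.
    $req = New-Object Microsoft.Online.Administration.StrongAuthenticationRequirement
    $req.RelyingParty = "*"
    $req.State = "Enabled"
    Set-MsolUser -UserPrincipalName jsmith@contoso.com -StrongAuthenticationRequirements @($req)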

With that said, here’s what Erik and Franklin talked about…

To start with, we have to distinguish between the three types of identities that can be used to authenticate against the service. Without going into every detail, it’s fair to summarize these as follows:

  • Cloud identities are homed in Azure Active Directory (AAD). There’s no synchronization with on-premises AD because there isn’t one.
  • Directory sync (or just “dirsync”) uses Microsoft’s dirsync tool, or an equivalent third-party tool, to sync an on-premises account with AAD. This essentially gives services that consume AAD a mostly-read-only copy of your organization’s AD.
  • Federated identity uses a federation broker or service such as Active Directory Federation Services (AD FS), Okta, Centrify, and Ping to allow your organization’s AD to answer authentication queries from Office 365 services. In January 2014 Microsoft announced a “Works With Office 365 – Identity” logo program, so if you don’t want to use AD FS you can choose another federation toolset that better meets your requirements.

Client updates are coming to the Office 2013 clients: Outlook, Lync, Word, Excel, PowerPoint, and SkyDrive Pro. With these updates, you’ll see a single unified authentication window for all of the clients, similar (but not necessarily identical) to the existing login window you get on Windows when signing into a SkyDrive or SkyDrive Pro library from within an Office client. From that authentication window, you’ll be able to enter the second authentication factor that you received via phone call, SMS, or authentication app. During the presentation, Franklin (or maybe Erik?) said “if you can authenticate in a web browser, you can authenticate in Office clients”— very cool. (PowerShell will be getting MFA support too, but it wasn’t clear to me exactly when that was happening).

These client updates will also provide support for two specific types of smart cards: the US Department of Defense Common Access Card (CAC) and the similar-but-civilian Personal Identity Verification (PIV) card. Instead of using a separate authentication token provided by the service, you’ll plug in your smart card, authenticate to it with your PIN, and away you go.

All three of these identity types support MFA; federated identity will gain the ability to do true single sign-on (SSO) in Office 2013 clients, which will be a welcome usability improvement. Outlook will get SSO capabilities with the other two identity types, too.

How do the updates work? That’s where the magic part comes in. The Azure Active Directory Authentication Library (ADAL) is being extended to provide support for MFA. When the Office client makes a request to the service, the service will return a header that instructs the client to visit a security token service (STS) using OAuth. At that point, Office uses ADAL to launch the browser control that displays the authentication page, then, as Erik puts it, “MFA and federation magic happens transparent to Office.” If the authentication succeeds, Office gets security tokens that it caches and uses for service authentication. (The flow is described in more detail in the video from the session, which is available now for MEC attendees and will be available in 60 days or so for non-attendees).

There are two important caveats that were a little buried in the presentation. First is that MFA in Outlook 2013 will require the use of MAPI/HTTP. More seriously, MFA will not be available to on-premises Exchange 2013 deployments until some time in the future. This aligns with Microsoft’s cloud-first strategy, but it is going to aggravate on-premises customers something fierce. In fairness, because you need the MFA infrastructure hosted in the Microsoft cloud to take advantage of this feature, I’m not sure there’s a feasible way to deliver SMS- or voice-based MFA for purely on-prem environments, and if you’re in a hybrid, then you’re good to go.
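
For the on-premises side of a hybrid deployment, it’s easy to check whether MAPI/HTTP is switched on; here’s a sketch against Exchange 2013 SP1, where the setting is organization-wide (in Office 365 the service controls it for you):

    # Is MAPI/HTTP enabled for the organization? (Exchange 2013 SP1 and later)
    Get-OrganizationConfig | Format-List MapiHttpEnabled

    # Turn it on if needed; Outlook picks it up as its Autodiscover settings refresh
    Set-OrganizationConfig -MapiHttpEnabled $true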

Microsoft hasn’t announced a specific timeframe for these updates (other than “second half calendar 2014”), and they didn’t say anything about Mac support, though I would imagine that the rumored v.next of Mac Office would provide this same functionality. The ability to use MFA across all the Office client apps will make it easier for end users, reducing the chance that they’ll depend solely on reusable passwords and thus reducing the net amount of evil in the world— a blessing to us all.

Leave a comment

Filed under Office 365, UC&C

Script to download MEC 2014 presentations

Yay for code reuse! Tom Arbuthnot wrote a nifty script to download all the Lync Conference 2014 presentations, and since Microsoft used the same event management system for MEC 2014, I grabbed his script and tweaked it so that it will download the MEC 2014 session decks and videos. It only works if you are able to sign into the MyMEC site, as only attendees can download the presentations and videos at this time. I can’t guarantee that the script will pull all the sessions but it seems to be working so far— give it a try. (And remember, the many “Unplugged” sessions weren’t recorded so you won’t see any recordings or decks for them). If the script works, thank Tom; if it doesn’t, blame me.

Download the script

2 Comments

Filed under UC&C

The value of lagged copies for Exchange 2013

Let’s talk about… lagged copies.

For most Exchange administrators, the subject of lagged database copies falls somewhere between “the Kardashians’ shoe sizes” and “which of the 3 Stooges was the funniest” in terms of interest level. The concept is easy enough to understand: a lagged copy is merely a passive copy of a mailbox database where the log files are not immediately played back, as they are with ordinary passive copies. The period between the arrival of a log file and the time when it’s committed to the database is known as the lag interval. If you have a lag interval of 24 hours set to a database, a new log for that database generated at 3pm on April 4th won’t be played into the lagged copy until at least 3pm on April 5th (I say “at least” because the exact time of playback will depend on the copy queue length). The longer the lag interval, the more “distance” there is between the active copy of the mailbox database and the lagged copy.
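
Setting one up is just a matter of adding (or reconfiguring) a passive copy with a replay lag; here’s a quick sketch with hypothetical database and server names and a 24-hour lag:

    # Add a new passive copy of DB01 on MBX3 with a 24-hour replay lag
    Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer MBX3 -ReplayLagTime 1.00:00:00 -ActivationPreference 4

    # Or convert an existing copy into a lagged copy
    Set-MailboxDatabaseCopy -Identity "DB01\MBX3" -ReplayLagTime 1.00:00:00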

Lagged copies are intended as a last-ditch “goalkeeper” safety mechanism in case of logical corruption. Physical corruption caused by a hardware failure will happen after Exchange has handed the data off to be written, so it won’t be replicated. Logical corruption introduced by components other than Exchange (say, an improperly configured file-level AV scanner) that directly write to the MDB or transaction log files wouldn’t be replicated in any event, so the real use case for the lagged copy is to give you a window in time during which logical corruption caused by Exchange or its clients hasn’t yet been replicated to the lagged copy. Obviously the size of this window depends on the length of the lag interval; whether it is sufficient for you to a) notice that the active database has become corrupted, b) play the accumulated logs forward into the lagged copy, and c) activate the lagged copy depends on your environment.

The prevailing sentiment in the Exchange world has largely been “I do backups already so lagged copies don’t give me anything.” When Exchange 2010 first introduced the notion of a lagged copy, Tony Redmond weighed in on it. Here’s what he said back then:

For now, I just can’t see how I could recommend the deployment of lagged database copies.

That seems like a reasonable stance, doesn’t it? At MEC this year, though, Microsoft came out swinging in defense of lagged copies. Why would they do that? Why would you even think of implementing lagged copies? It turns out that there are some excellent reasons that aren’t immediately apparent. (It may help to review some of the resiliency and HA improvements delivered in Exchange 2013; try this excellent omnibus article by Microsoft’s Scott Schnoll if you want a refresher.) Here are some of the reasons why Microsoft has begun recommending the use of lagged copies more broadly.

1. Lagged copies are better in 2013

Exchange 2013 includes a number of improvements to the lagged copy mechanism. In particular, the new loose truncation feature introduced in SP1 means that you can prevent a lagged copy from taking up too much log space by adjusting the amount of log space that the replay mechanism will use; when that limit is reached, the logs will be played down to make room. Exchange 2013 and SP1 also make a number of improvements to the Safety Net mechanism (discussed fully in Chapter 2 of the book), which can be used to play missing messages back into a lagged copy by retrieving them from the transport subsystem.

2. Lagged copies are continuously verified

When you back up a database, Exchange checks the page checksum of every page as it is backed up by computing the checksum and comparing it to the stored checksum; if that check fails, you get the dreaded JET_errReadVerifyFailure (-1018) error. However, just because you can successfully complete the backup doesn’t mean that you’ll be able to restore it when the time comes. By comparison, the Exchange log playback mechanism will log errors immediately when they are encountered during log playback. If you’re monitoring event logs on your servers, you’ll be notified as soon as this happens and you’ll know that your lagged copy is unusable now, not when you need to restore it. If you’re not monitoring your event logs, then lagged copies are the least of your problems.

3. Lagged copies give you more flexibility for recovery

When your active and passive copies of a database become unusable and you need to fall back to your lagged copy, you have several choices, as described in TechNet. You can easily play back every log that hasn’t yet been committed to the database, in the correct order, by using Move-ActiveMailboxDatabase. If you’d rather, you can play back the logs up to a certain point in time by removing the log files that you don’t want to play back. You can also play messages back directly from Safety Net into the lagged copy.
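
For that first option, the activation itself is only a couple of commands; here’s a sketch with hypothetical database and server names:

    # Suspend the lagged copy before touching it
    Suspend-MailboxDatabaseCopy "DB04\MBX4" -SuspendComment "Activating lagged copy" -Confirm:$false

    # Activate it and replay every outstanding log into the copy
    Move-ActiveMailboxDatabase DB04 -ActivateOnServer MBX4 -SkipLagChecks

    # Sanity-check the copy afterward
    Get-MailboxDatabaseCopyStatus "DB04\MBX4" | Format-List Status,CopyQueueLength,ReplayQueueLength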

4. There’s no hardware penalty for keeping a lagged copy

Some administrators assume that you have to keep lagged copies of databases on a separate server. While this is certainly supported, you don’t have to have a “lag server” or anything like unto it. The normal practice in most designs has been to store lagged copies on other servers in the same DAG, but you don’t even have to do that. Microsoft recommends that you keep your mailbox databases no bigger than 2TB. Stuff your server with a JBOD array of the new 8TB disks (or, better yet, buy a Dell PowerVault MD1220) and you can easily put four databases on a single disk: the active copy of DB1, the primary passive copy of DB2, the secondary passive copy of DB3, and the lagged copy of DB4. This gives you an easy way to get the benefits of a 4-copy DAG while still using the full capacity of the disks you have: the additional IOPS load of the lagged copy will be low, so hosting it on a volume that already has active and passive copies of other databases is a reasonable approach (one, however, that you’ll want to test with jetstress).

It’s always been the case that the architecture Microsoft recommends when a new version of Windows or Exchange is released evolves over time as they, and we, get more experience with it in the real world. That’s clearly what has happened here; changes in the product, improvements in storage hardware, and a shift in the economic viability of conventional backups mean that lagged copies are now much more appropriate for use as a data protection mechanism than they were in the past. I expect to see them deployed more and more often as Exchange 2013 deployments continue and our collective knowledge of best practices for them improves.

1 Comment

Filed under UC&C

MEC 2014 wrapup

BLUF: it was a fantastic conference, far and away the best MEC I’ve attended. The quality of the speakers and technical presentations was very high, and the degree of community interaction and engagement was too.

I arrived in Austin Sunday afternoon and went immediately to dinner at County Line on the Lake, a justly famous Austin BBQ restaurant, to put on a “thank you” dinner for some of the folks who helped me with my book. Unfortunately, the conference staff had scheduled a speakers’ meeting at the same time, and a number of folks couldn’t attend due to flight delays or other last-minute intrusions. Next time I’ll poll invitees for their preferred time, and perhaps that will help. However, the dinner and company were both excellent, and I now have a copy of the book signed by all in attendance as a keepsake— a nice reversal of my usual pattern of signing books and giving them away.

Monday began with the keynote. If you follow me (or any number of other Exchange MVPs) on Twitter, you already know what I think: neither the content of the keynote nor its delivery was up to snuff when compared either to prior MEC events or other events such as Lync Conference. At breakfast Monday, Jason Sherry and I were excitedly told by an attendee that his Microsoft account rep insisted that he attend the keynote, and for the life of me I couldn’t figure out why until the tablet giveaway. That raised the energy level quite a bit! I think that for the next MEC, Julia White should be handed the gavel and left to run the keynote as she sees fit; I can guarantee that would result in a more lively and informative event. (For another time: a review of the Venue 8 Pro, which I like a great deal based on my use of it so far). One area where the keynote excelled, though, was in its use of humor. The video vignette featuring Greg Taylor and David Espinoza was one of the funniest such videos I’ve ever seen, and all of the other bits were good as well— check them out here. The keynote also featured a few good-natured pokes at the community, such as this:

Ripped

For the record, although I’ve been lifting diligently, I am not (yet) built like the guy who’s wearing my face on screen… but there’s hope.

I took detailed notes on each of the sessions I attended, so I’ll be posting about the individual sessions over the next few days. It’s fair to say that I learned several valuable things at each session, which is sort of the point behind MEC. I found that the quality of the “unplugged” sessions I attended varied a bit between sessions; the worst was merely OK, while the best (probably the one on Managed Availability) was extremely informative. It’s interesting that Tony and I seemed to choose very few of the same sessions, so his write-ups and mine will largely complement each other.

My Monday schedule started with Kamal Janardhan’s session on compliance and information protection. Let me start by saying that Kamal is one of my favorite Microsoft people ever. She is unfailingly cheerful, and she places a high value on transparency and openness. When she asks for feedback on product features or futures, it’s clear that she is sincerely seeking honest feedback, not just saying it pro forma. Her session was great; from there, I did my two back-to-back sessions, both of which went smoothly. I was a little surprised to see a nearly-full room (I think there were around 150 people) for my UM session, and even more surprised to see that nearly everyone in the room had already deployed UM on either Exchange 2010 or 2013. That’s a significant change from the percentage of attendees deploying UM at MEC 2012.

I then went to the excellent “Unplugged” session on “Exchange Top Issues”, presented by the supportability team and moderated by Tony. After the show closed for the day, I was fortunate to be able to attend the dinner thrown by ENow Software for MVPs/MCMs and some of their key customers. Jay and Jess Gundotra, as always, were exceptional hosts, the meal (at III Forks) was excellent, and the company and conversation were delightful. Sadly I had to go join a work conference call right after dinner, so I missed the attendee party.

Tuesday started with a huge surprise. On my way to the “Exchange Online Migrations Technical Deep Dive” session (which was good but not great; it wasn’t as deep as I expected), I noticed the picture below flashing on the hallway screens. Given that it was April Fool’s Day, I wasn’t surprised to see the event planners playing jokes on attendees, I just wasn’t expecting to be featured as part of their plans. Sadly, although I’m happy to talk to people about migrating to Office 365, the FAA insists that I do it on the ground and not in the air. For lunch, I had the good fortune to join a big group of other Dell folks (including brand-new MVP Andrew Higginbotham, MCM Todd Hawkins, Michael Przytula, and a number of people from Dell Software I’d not previously met) at Iron Works BBQ. The food and company were both wonderful, and they were followed by a full afternoon of excellent sessions. The highlight of my sessions on Tuesday was probably Charlie Chung’s session on Managed Availability, which was billed as a 300-level session but was more like a 1300-level. I will definitely have to watch the recording a few times to make sure I didn’t miss any of the nuances.

Surprise!

This is why I need my commercial pilot’s license— so I can conduct airborne sessions at the next MEC.

Tony has already written at length about the “Exchange Oscars” dinner we had Tuesday night at Moonshine. I was surprised and humbled to be selected to receive the “Hall of Fame” award for sustained contributions to the Exchange community; I feel like there are many other MVPs, current and past, who deserve the award at least as much, if not more. It was great to be among so many friends spanning my more than 15 years working with Exchange; the product group turned out en masse and the conversation, fellowship, and celebration was the high point of the entire conference for me. I want to call out Shawn McGrath, who received the “Best Tool” award for the Exchange Remote Connectivity Analyzer, which became TestExchangeConnectivity.com. Shawn took a good idea and relentlessly drove it from conception to implementation, and the whole world of Exchange admins has benefited from his effort.

Wednesday started with the best “Unplugged” session I attended: it covered Managed Availability and, unlike the other sessions I went to, featured a panel made mostly of engineers from the development team. There were a lot of deep technical questions and a number of pointed roadmap discussions (not all of which were at my instigation). The most surprising session I attended, I think, was the session on updates to Outlook authentication— turns out that true single sign-on (SSO) is coming to all the Office 2013 client applications, and fairly soon, at least for Office 365 customers. More on that in my detailed session write-ups. The MVPs were also invited to a special private session with Perry Clarke. I can’t discuss most of what we talked about, but I can say that I learned about the CAP theorem (which hadn’t even been invented when I got my computer science degree, sigh), and that Perry recognizes the leadership role Exchange engineering has played in bringing Microsoft’s server products to high scale. Fun stuff!

Then I flew home: my original flight was delayed so they put me on one leaving an hour earlier. The best part of the return trip might have been flying on one of American’s new A319s to Huntsville. These planes are a huge improvement over the nasty old MD80s that AA used to fly DFW-HSV, and they’re nicer than DL’s ex-AirTran 717s to boot. So AA is still in contention for my westbound travel business.

A word about the Hilton Austin Downtown, the closest hotel to the conference center: their newly refurbished rooms include a number of extremely practical touches. There’s a built-in nightlight in the bathroom light switch, and each bedside table features its own 3-outlet power strip plus a USB port, and the work desk has its own USB charging ports as well. Charging my phone, Kindle, Venue 8 Pro, and backup battery was much simpler thanks to the plethora of outlets. The staff was unfailingly friendly and helpful too, which is always welcome. However, the surrounding area seemed to have more than its share of sirens and other loud noises; next time I might pick a hotel a little farther away.

I’ll close by saying how much I enjoyed seeing old friends and making new ones at this conference. I don’t have room (or a good enough memory) to make a comprehensive list, but to everyone who took the time to say hello in the hall, ask good questions in a session, wave at me across the expo floor, or pass the rolls at dinner— thank you.

Now to get ready for TechEd and Exchange Connections…

Leave a comment

Filed under UC&C

Getting ready for MEC 2014

Wow, it’s been nearly a month since my last post here. In general I am not a believer in posting stuff on a regular schedule, preferring instead to wait until I have something to say. All of my “saying” lately has been on behalf of my employer though. I have barely even had time to fly. For another time: a detailed discussion of the ins and outs of shopping for an airplane. For now, though, I am making my final preparations to attend this year’s Microsoft Exchange Conference (MEC) in Austin! My suitcase is packed, all my devices are charged, my slides are done, and I am prepared to overindulge in knowledge sharing, BBQ eating, and socializing.

It is interesting to see the difference in flavor between Microsoft’s major enterprise-focused conferences. This year was my first trip to Lync Conference, which I would summarize as being a pretty even split between deeply technical sessions and marketing focused on the business and customer value of “universal communications”. When I reviewed the session attendance and rating numbers, it was no surprise that the most-attended and highest-rated sessions tended to be 400-level technical sessions such as Brian Ricks’ excellent deep-dive on Lync client sign-in behavior. While I’ve never been to a SharePoint Conference, from what my fellow MVPs say about it, there was a great deal of effort expended by Microsoft on highlighting the social features of the SharePoint ecosystem, with a heavy focus on customization and somewhat less attention directed at SharePoint Online and Office 365. (Oh, and YAMMER YAMMER YAMMER YAMMER YAMMER.) Judging from reactions in social media, this focus was well-received but inevitably less technical given the newness of the technology.

That brings us to the 2014 edition of MEC. The event planners have done something unique by loading the schedule with “Unplugged” panel discussions, moderated by MVP and MCM/MCSM experts and consisting of Microsoft and industry experts in particular technologies. These panels provide an unparalleled opportunity to get, and give, very candid feedback on individual parts of Exchange, and I plan on attending as many of them as I can. This is in no way meant to slight the many other excellent sessions and speakers that will be there. I’d planned to summarize specific sessions that I thought might be noteworthy, but Tony published an excellent post this morning that far outdoes what I had in mind, breaking down sessions by topic area and projected attendance. Give it a read.

I’m doing two sessions on Monday: Exchange Unified Messaging Deep Dive at 2:45p and Exchange ActiveSync: Management Challenges and Best Practices at 11:45a. The latter is a vendor session with the folks from BoxTone, during which attendees get both lunch (yay) and the opportunity to see BoxTone’s products in action. They’re also doing a really interesting EAS health check, during which you provide CAS logs and they run them through a static analysis tool that, I can almost guarantee, will tell you things you didn’t know about your EAS environment. Drop by and say hello!

Leave a comment

Filed under UC&C

Office 365 Personal Archives limited to 100GB

There’s a bit of misinformation, or lack of information, floating around about the use of Office 365 Personal Archives. This feature, which is included in the higher-end Office 365 service plans (including E3/E4 and the corresponding A3/A4 plans for academic organizations), is often cited as one of the major justifications for moving to Office 365. It’s attractive because of the potential savings from greatly reducing PST file use and eliminating (or at least sharply reducing) the use of on-premises archiving systems such as Enterprise Vault.

Some Microsoft folks have been spreading the good news that archives are unlimited (samples here and here), and so have many consultants, partners, and vendors– including me. In fact, I had a conversation with a large customer last week in which they expressed positive glee about being able to get their data out of on-prem archives and into the cloud.

The only problem? Saying the archives are unlimited isn’t quiiiiite true.

If you read the service description for Exchange Online (which we all should be doing regularly anyway, as it changes from time to time), you’ll see this:

Clip from Nov 2013 O365 service description


See that little “3”? Here’s its text:

Each subscriber receives 50 GB of storage in the primary mailbox, plus unlimited storage in the archive mailbox. A default quota of 100 GB is set on the archive mailbox, which will generally accommodate reasonable use, including the import of one user’s historical email. In the unlikely event that a user reaches this quota, a call to Office 365 support is required. Administrators can’t increase or decrease this quota.

So as an official matter, there is no size limit. As a practical matter, the archive is soft-limited to 100GB, and if you want to store more data than that, you’ll have to call Microsoft support to ask for a quota increase. My current understanding is that 170GB is the real limit, as that is the maximum size to which the quota can currently be increased. I don’t know if Microsoft has stated this publicly anywhere yet but it’s certainly not in the service descriptions. That limit leads me to wonder what the maximum functional size of an Office 365 mailbox is– that is, if Microsoft didn’t have the existing 100GB quota limit in place, how big a mailbox could they comfortably support? (Note that this is not the same as asking what size mailbox Outlook can comfortably support, and I bet those two numbers wouldn’t match anyway.) I suppose that in future service updates we’ll find out, given that Microsoft is continuing to shovel mailbox space at users as part of its efforts to compete with Google.
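
If you’re wondering how close a given archive is to that default quota, a check along these lines will tell you (the mailbox name is hypothetical):

    # Archive quota and warning quota as provisioned on the mailbox
    Get-Mailbox jsmith | Format-List ArchiveStatus,ArchiveQuota,ArchiveWarningQuota

    # Current size of the archive mailbox itself
    Get-MailboxStatistics jsmith -Archive | Select-Object DisplayName,ItemCount,TotalItemSize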

Is this limit a big deal? Not really; the number of Office 365 customers who will need more than 100GB of archive space for individual user mailboxes is likely to be very small. The difference between “unlimited” and “so large that you’ll never encounter the limit” is primarily one of semantics. However, there’s always a danger that customers will react badly to poor semantics, perhaps because they believe that what they get isn’t what they were promised. While I would like to see more precision in the service descriptions, it’s probably more useful to focus on making sure that customers (especially those who are heavy users of on-premises archives or PST files) know that there’s currently a 100GB quota, which is why I wrote this post.

For another time: a discussion of how hard, or easy, it is to get large volumes of archive data into Office 365 in the first place. That’s one of the many topics I expect to see explored in great depth at MEC 2014, where we’ll get the Exchange team’s perspective, and then again at Exchange Connections 2014, where I suspect we’ll get a more nuanced view.

3 Comments

Filed under Office 365, UC&C

Microsoft, encryption, and Office 365

So the gloves are starting to come off: Microsoft general counsel Brad Smith wrote a long blog post this morning discussing how Microsoft plans to protect its customers’ data from unlawful interception by “unauthorized government access”. He never specifically mentions NSA, GCHQ, et al, but clearly the Five Eyes partners are who he’s talking about. Many other news outlets have dissected Smith’s post in detail, so I wanted to focus on a couple of lesser-known aspects.

First is that Microsoft is promising to use perfect forward secrecy (PFS) when it encrypts communications links. Most link-encryption protocols, including IPsec and SSL, use a key exchange algorithm known as Diffie-Hellman to allow the two endpoints to agree on a temporary session key by using their longer-term private/public key pairs. The session key is usually renegotiated for each conversation. If Eve the eavesdropper or Mallet the man-in-the-middle intercepts the communications, they may be able to decrypt them if they can guess or obtain the session key. Without PFS, an attacker who can intercept and record a communication stream now and can guess or obtain the private key of either endpoint can decrypt the stream. Think of this like finding a message in a bottle written in an unknown language, then next year seeing Rosetta Stone begin to offer a course in the language. PFS protects an encrypted communication stream now from future attack by changing the way the session keys are generated and shared. Twitter, Google, and a number of other cloud companies have already deployed PFS (Google, in fact, started in 2011), so it is great to see Microsoft joining in this trend. (A topic for another day: under what conditions can on-premises Exchange and Lync use PFS? Paging Mark Smith…)
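
If you’re curious what a given endpoint actually negotiates today, .NET’s SslStream will happily tell you; here’s a quick sketch (the hostname is just an example, and on older frameworks an ECDHE exchange may show up as a raw enum number rather than a friendly name):

    $hostName = "outlook.office365.com"   # example endpoint; pick your own
    $tcp = New-Object System.Net.Sockets.TcpClient($hostName, 443)
    $ssl = New-Object System.Net.Security.SslStream($tcp.GetStream())
    $ssl.AuthenticateAsClient($hostName)

    # Key exchange, cipher, and protocol version the two ends agreed on
    $ssl | Format-List KeyExchangeAlgorithm,CipherAlgorithm,CipherStrength,SslProtocol

    $ssl.Dispose(); $tcp.Close()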

Second is that Microsoft is acknowledging that they use data-at-rest encryption, and will be using it more often. Probably more than any other vendor, Microsoft is responsible for democratizing disk encryption by including BitLocker in Windows Vista and its successors, then steadily improving it. (Yes, I know that TrueCrypt and PGP predated BitLocker, but their installed bases are tiny by comparison.) Back in 2011 I wrote about some of the tradeoffs in using BitLocker with Exchange, and I suspected that Microsoft was using BitLocker in their Office 365 data centers, a suspicion that was confirmed recently during a presentation by some of the Office 365 engineering team and, now, by Smith’s post. Having said that, data-at-rest encryption isn’t that wonderful in the context of Office 365 because the risk of an attacker (or even an insider) stealing data by stealing/copying physical disks from an Office 365 data center is already low. There are many layers of physical and procedural security that help keep this risk low, so encrypting the stored data on disk is of relatively low value compared to encrypting the links over which that data travels.

The third aspect is actually something that’s missing from Smith’s post, expressed as one word: Skype. Outlook.com, Office 365, SkyDrive, and Azure are all mentioned specifically as targets for improved encryption, but nothing about Skype? That seems like a telling omission, especially given Microsoft’s lack of prior transparency about interception of Skype communications. Given the PR benefits that the company undoubtedly expects from announcing how they’re going to strengthen security, the fact that Smith was silent on Skype indicates, at least to suspicious folks like me, that for now they aren’t making any changes. Perhaps the newly-announced transparency centers will provide neutral third parties an opportunity to inspect the Skype source code to verify its integrity.

Finally, keep in mind that nothing discussed in Smith’s post addresses targeted operations where the attacker (or government agency, take your pick) mounts man-in-the-middle attacks (QUANTUM/FOXACID)  or infiltrates malware onto a specific target’s computer. That’s not necessarily a problem that Microsoft can solve on its own.

Leave a comment

Filed under Office 365, UC&C

Exchange 2013 Cumulative Update 3 released

I thought it might be fun to write an annotated version of the Exchange team blog post announcing the availability of CU3 for Exchange Server 2013. So here goes…

The Exchange team is announcing today the availability of our most recent quarterly servicing update to Exchange Server 2013.  Cumulative Update 3  for Exchange Server 2013 and updated UM Language Packs are now available on the Microsoft Download Center.  Cumulative Update 3 includes fixes for customer reported issues, minor product enhancements and previously released security bulletins.   A complete list of customer reported issues resolved in Exchange Server 2013 Cumulative Update 3 can be found in Knowledge Base Article KB2892464.

Translation: “We’re getting the hang of this cumulative update model. Notice that we gave you a list of bug fixes in this release, just like y’all asked for last time, although we’re not saying that this is a comprehensive list of every bug fixed in the CU.”

We would like to call attention to an important fix in Exchange Server 2013 Cumulative Update 3 which impacts customers who rely upon Backup and Recovery mechanisms to protect Exchange data.  Cumulative Update 3 includes a fix for an issue which may randomly prevent a backup dataset taken from Exchange Server 2013 from restoring correctly.  Customers who rely on Backup and Recovery in their day-to-day operations are encouraged to deploy Cumulative Update 3 and initiate backups of their data to ensure that data contained in backups may be restored correctly.  More information on this fix is available in KB2888315.

Translation: “Backups are sooooo 2005. Why are you even doing them instead of using Exchange native data protection? DAGs and JBOD, baby. Just make sure you have at least 3 database copies. But if you are, well, take another backup right quick to make sure you can restore later.” [ Note that I am manfully resisting the urge to ask how this issue slipped through testing. --PR]

In addition to the customer reported fixes in Cumulative Update 3, the following new enhancements and improvements to existing functionality have also been added for Exchange Server 2013 customers:

  • Usability improvements when adding members to new and existing groups in the Exchange Administration Console
  • Online RMS available for use by non-cloud based Exchange deployments
  • Improved admin audit log experience
  • Windows 8.1/IE11 no longer require the use of OWA Light

Translation: “Who doesn’t like new features? We promised to deliver new features on-premises, and we did, so yay us! However, notice how we avoided saying ‘on-premises’, instead using the clumsy ‘non-cloud based’ term.”

More information on these topics can be found in our What’s New in Exchange Server 2013, Release Notes and product documentation available on TechNet. Cumulative Update 3 includes Exchange related updates to Active Directory schema and configuration.  For information on extending schema and configuring the active directory please review the appropriate TechNet documentation.   Also, to prevent installation issues you should ensure that the Windows PowerShell Script Execution Policy is set to “Unrestricted” on the server being upgraded or installed.  To verify the policy settings, run the Get-ExecutionPolicy cmdlet from PowerShell on the machine being upgraded.  If the policies are NOT set to Unrestricted you should use the resolution steps in KB981474 to adjust the settings.

Translation: “Because we love you and want you to be happy, we’ve included a schema update to keep your Active Directory looking shiny and fresh. Remember, we can push schema updates in CUs now. Sorry if your organizational change control process means you have to delay installing the CU for months while you wait for the change to be assessed and approved.”
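
If your change control board wants proof of what a given CU does to the schema, the version marker is easy to read straight out of Active Directory; here’s a sketch using the ActiveDirectory module (rangeUpper on ms-Exch-Schema-Version-Pt is the value that moves when Exchange extends the schema):

    Import-Module ActiveDirectory

    # The Exchange schema version lives in rangeUpper on this object
    $schemaNC = (Get-ADRootDSE).schemaNamingContext
    Get-ADObject "CN=ms-Exch-Schema-Version-Pt,$schemaNC" -Properties rangeUpper | Select-Object rangeUpper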

Our next update for Exchange Server 2013, Cumulative Update 4, will be released as Exchange Server 2013 Service Pack 1.  Customers who are accustomed to deploying Cumulative Updates should consider Service Pack 1 to be equivalent to Cumulative Update 4 and deploy as normal.

Translation: “CU4 will be so awesome that it’s really a service pack, if you like service packs, but if you don’t, then it’s not. Because every CU can include both features and fixes now, we have lots of flexibility to choose when to deploy features. Part of the reason we changed the servicing model was to get people away from the ‘wait for SP1’ attitude, so if SP1 is really just CU4, that helps show there’s no reason to wait.”

Reminder:  Customers in hybrid deployments where Exchange is deployed in-house and in the cloud, or who are using Exchange Online Archiving with their in-house Exchange deployment are required to maintain currency on Cumulative Update releases.

Translation: “Surprise! Since you can’t control what release your Office 365 tenant is running, if you’re in hybrid mode (or want to be), you now must commit to remaining on the current CU. If that’s a problem because of schema changes, well, good luck with that. I suppose if enough people complain we might start pre-announcing which CUs will contain schema changes so you can plan ahead.”

Overall, I’m looking forward to seeing CU3 be widely deployed. It seems to be a stable and solid release based on my experience with it. The new features will be welcome, and I am heartened to see the team continuing to hit their release cadence.

Leave a comment

Filed under UC&C

Exchange 2013 SP1 coming in early 2014

Microsoft today announced that Service Pack 1 for Exchange 2013 is coming in “early 2014”. The announcement has a few interesting nuances:

  • The Edge Server role is coming back. Not by popular demand, as far as I can tell; I presume this is being introduced to pacify a few large, noisy customers who are using Edge, because I haven’t seen any signs that customers are demanding it. I would not expect to see significant feature improvements or investments in this role, either in SP1 or going forward.
  • S/MIME for OWA support is coming. This has been known for some time; as yet we don’t know the specific details of which browsers will be supported.
  • SP1 will require a schema update. I will have more to say about this in the very near future.

Interestingly, SP1 is essentially CU4: it is applied in the same way as other CUs, and if you skip SP1 and install CU5 later on, you’ll get all the fixes and features included in SP1. The Lync team is doing the same thing with their CUs; the old rule that only service packs could include new features is dead and buried.

1 Comment

Filed under UC&C

iOS 7 Exchange ActiveSync problems revisited

Back in September I posted an article about a problem that occurred when synchronizing iOS 7 devices against Exchange 2010 SP2. The wheels of justice grind slowly, but Microsoft has released a KB article and accompanying hotfix that describe the symptoms precisely.

I also got an odd report from a large enterprise customer; they had several hundred iOS 7.0.2 devices, all on Verizon in one specific region, that were having synchronization problems. That one turned out to be a network configuration issue on Verizon’s side, one that required action from Verizon to fix.

Now you’re probably starting to see the value in solutions like those from BoxTone…


3 Comments

Filed under UC&C

Keeping up: Office 365 OnRamp changes

Microsoft Exchange Server 2013 Inside Out: Clients, Connectivity, and UM (colloquially known as “the book”) is now in production! I’ve reviewed all the page proofs, corrected the few composition and layout mistakes I found, and returned the proofs to the editorial staff so they can turn PDFs into paper. It’s pretty exciting, although thanks to my tardiness the book won’t be ready in time to be sold at Exchange Connections (about which, more tomorrow.) However, I’ve been assured that Tony’s book on Mailbox and HA will be available there.

About a month ago, I wrote this in the Office 365 chapter:

One of the difficulties inherent in writing about cloud services is that they can change rapidly and often. The screen shots of Office 365 in this chapter reflect its appearance and function as of late 2013, but it’s likely that some of the underlying Office 365 code will change, so don’t be surprised if what you see on screen doesn’t exactly match what you read here.

As if to reinforce that point, today Microsoft has changed the OnRamp tool that you use to assess your organizational readiness for Office 365. The readiness review portion of the tool seems to have disappeared, leaving the checklist portion (which is similar in intent to the Exchange Deployment Assistant, another topic covered in the chapter). I haven’t found where the readiness review went, but I’m fairly sure it still exists somewhere in the maze of Office 365 tools.

The moral of this story? Although Microsoft likes to mock Google’s habit of suddenly introducing changes to end users without warning, they are starting to develop the same habit, except it mostly affects administrators. I hope this particular change was just a slip and not a harbinger of the way toolset changes will be handled in the future. (The secondary moral: man, it’s going to be a challenge to keep up with Office 365 updates in anything I write in the future!)

Leave a comment

Filed under UC&C

Do mailbox quotas matter to Outlook and OWA?

Great question from my main homie Brian Hill:

Is there a backend DB reason for setting quotas at a certain size? I have found several links (like this one) discussing the need to set quotas due to the way the Outlook client handles large numbers of messages or OST files, but for someone who uses OWA, does any of this apply?

Short answer: no.

Somewhat longer answer: no.

The quota mechanism in Exchange is an outgrowth of those dark times when a large Exchange server might host a couple hundred users on an 8GB disk drive. Because storage was so expensive, Microsoft’s customers demanded a way to clamp down on mailbox size, so we got the trinity of quota limits: prohibit send, prohibit send and receive, and warn. These have been with us for a while and persist, essentially unchanged, in Exchange 2013, although it is now common to see quotas of 5GB or more on a single mailbox.
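
For reference, those three limits are still set the same way they always have been; here’s a quick sketch with made-up values:

    # Warn at 4.5 GB, block sending at 4.8 GB, block send and receive at 5 GB
    Set-Mailbox jsmith -IssueWarningQuota 4.5GB -ProhibitSendQuota 4.8GB -ProhibitSendReceiveQuota 5GB -UseDatabaseQuotaDefaults $false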

Outlook has never had a formal quota mechanism of its own, apart from the former limit of 2GB on PST files imposed by the 32-bit offsets used as pointers in the original PST file format. This limit was enforced in part by a dialog that would tell you that your PST file was full and in part by bugs in various versions of Outlook that would occasionally corrupt your PST file as it approached the 2GB size limit. Outlook 2007 and later pretty much extinguished those bugs, and the Unicode PST file format doesn’t have the 2GB limit any longer. Outlook 2010 and 2013 set a soft limit on Unicode PSTs of 50GB, but you can increase the limit if you need to.

Outlook’s performance is driven not by the size of the PST file itself (thought experiment: imagine a PST with a single 10GB item in it as opposed to one with 1 million 100KB messages) but by the number of items in any given folder. Microsoft has long recommended that you keep Outlook item counts to a maximum of around 5,000 items per folder (see KB 905803 for one example of this guidance). However, Outlook 2010 and 2013, when used with Exchange 2010 or 2013, can handle substantially more items without performance degradation: the Exchange 2010 documentation says 100,000 items per folder is acceptable, though there’s no published guidance for Exchange 2013. There’s still no hard limit, though. The reasons why the number of items (and the number of associated stored views) matters are well enumerated in this 2009 article covering Exchange 2007. Some of the mechanics described in that article have changed in later versions of Exchange, but the basic truth remains: the more views you have, and/or the more items that are found or selected by those views, the longer it will take Exchange to process them.

If you’re wondering whether your users’ complaints of poor Outlook performance are related to high item counts, one way to find out is to use a script like this to look for folders with high item counts.
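
If you’d rather not grab a whole script, even a one-liner along these lines (mailbox name hypothetical) will surface the worst offenders:

    # Folders in the mailbox with more than 5,000 items, biggest first
    Get-MailboxFolderStatistics jsmith |
        Where-Object { $_.ItemsInFolder -gt 5000 } |
        Sort-Object ItemsInFolder -Descending |
        Select-Object FolderPath,ItemsInFolder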

Circling back to the original question: there is a performance impact with high item count folders in OWA, but there’s no quota mechanism for dealing with it. If you have a user who reports persistently poor OWA performance on particular folders, high item counts are one possible culprit worth investigating. Of course, if OWA performance is poor across multiple folders that don’t have lots of items, or across multiple users, you might want to seek other causes.

Leave a comment

Filed under UC&C

Microsoft Certified Systems Master certification now dead

I received a very unwelcome e-mail late last night:

Microsoft will no longer offer Masters and Architect level training rotations and will be retiring the Masters level certification exams as of October 1, 2013. The IT industry is changing rapidly and we will continue to evaluate the certification and training needs of the industry to determine if there’s a different certification needed for the pinnacle of our program.

This is terrible news, not only for the community of existing MCM/MCSM holders but also for the broader Exchange community. It is a clear sign of how Microsoft values the skills of on-premises administrators of all its products (because all the MCSM certifications are going away, not just the one for Exchange). If all your messaging, directory, communications, and database services come from the cloud (or so I imagine the thinking goes), you don’t need to spend money on advanced certifications for your administrators who work on those technologies.

This is also an unfair punishment for candidates who attended the training rotation but have yet to take the exam, for those who were signed up for the already-scheduled upgrade rotations, and for those who were signed up for future rotations. Now they’re stuck unless they can take, and pass, the certification exams before October 1… which is pretty much impossible. It greatly devalues the certification, of course, for those who already have it. Employers and potential clients can look at “MCM” on a resume and form their own value judgement about its worth given that Microsoft has dropped it. I’m not quite ready to consign MCM status to the same pile as CNE, but it’s pretty close.

The manner of the announcement was exceptionally poor in my opinion, too: a mass e-mail sent out just after midnight Central time last night. Who announces news late on Friday nights? People who are trying to minimize it, that’s who. Predictably, and with justification, the MCM community lists are blowing up with angry reaction, but, completely unsurprisingly, no one from Microsoft is taking part, or defending their position, in these discussions.

As a longtime MCM/MCSM instructor, I have seen firsthand the incredible growth and learning that takes place during the MCM rotations. Perhaps more importantly, the community of architects, support experts, and engineers who earned the MCM has been a terrific resource for learning and sharing throughout their respective product spaces; MCMs have been an extremely valuable connection between the real world of large-scale enterprise deployments and the product group.

In my opinion, this move is a poorly-advised and ill-timed slap in the face from Microsoft, and I believe it will work to their detriment.

18 Comments

Filed under FAIL, UC&C

Microsoft releases new OWA apps for iPhone, iPad

Well, this is gonna be fun: Microsoft just released a new native mail/calendar/contacts app (which they’re calling “OWA”) for the iPhone and iPad. A few quick notes:

  • It is only supported with Office 365 wave 15 mailboxes. It may, or may not, work against on-premises Exchange 2013 mailboxes. (Update 130716 1509: Microsoft has in fact committed to on-prem support, but hasn’t said when.)
  • It is a native app, with separate versions for iPhone (iPhone 4 and later) and iPad (iPad 2 and later). Both versions require iOS 6. Making a native app rather than just a bound web control means that the app can include some other cool features, including gesture controls and voice control.
  • It supports Information Rights Management (and, yay, reading signed S/MIME messages). Oh, and it supports delegate access too. Oh, and online Personal Archives… and shared calendars, too!
  • No support for public folders, I’m afraid.
  • It uses Exchange Web Services, not EAS; to the Exchange CAS and mailbox roles, OWA on a device looks almost exactly like OWA in a browser.
  • VOTING BUTTON SUPPORT YES REALLY WOO HOO.
  • It has full offline functionality, powered by a local SQLite database.
  • When you request a remote wipe, the wipe request removes the app and all of its data from the device but leaves the rest of the device untouched. This is a huge feature.

Of course, I’ll have full coverage of the app (and how to administer and manage it) in the clients chapter of Exchange 2013 Inside Out: Clients, Connectivity, and Unified Messaging. Until then, grab the client and play with it! I was able to download, install, and use it on my iPad3 without any trouble, but the App Store refused to allow me to download it to an iPhone 4. Stay tuned…


1 Comment

Filed under UC&C