Category Archives: UC&C

Getting ready for TechEd 2014

Wow, this snuck up on me! TechEd 2014 starts in 10 days, and I am nowhere near ready.

A few years ago, I started a new policy: I only attend TechEd to speak, not as a general attendee or press member, because the level of technical content for the products I work with has declined steadily over the years. This is to be expected; in a four-day event, there’s a finite number of sessions that Microsoft can present, and as they add new products, every fiefdom must have its due. There are typically around 30 sessions that involve unified communications in some way; that number has remained fairly constant since 2005 or so. Over the last several years, the mix of sessions has changed to accommodate new versions of Exchange, Lync, and Office 365, but the limited number of sessions means that TechEd can’t offer the depth of MEC, Exchange Connections, or Lync Conference. This year there are 28 Exchange-related sessions, including several that are really about Office 365, so about 25% of the content of MEC.

I can’t keep track of how many previous TechEd events I’ve been to; if you look at the list, you’ll see that they tend to be concentrated in a small number of cities and so they all kind of blend together. (Interestingly, this 2007 list of the types of attendees you see at TechEd is still current.) The most memorable events for me have been the ones in Europe (especially last year’s event in Madrid, where I’d never been before).

This year I was asked to pinch-hit and present OFC-B318, “What’s New in Lync Mobile.” That’s right— so far this year, I have presented on Lync at Lync Conference and MEC, plus this session, plus another Lync session at Exchange Connections! If I am not careful I’ll get a reputation. Anyway, I am about ready to dive into shining up my demos, which will feature Lync Mobile on a variety of devices— plus some special guests will be joining me on stage, including my favorite Canadian, an accomplished motorcycle rider, and a CrossFitter. You’ll have to attend the session to find out who these people are though: 3pm, Monday the 12th— see you there! I’ll also be working in the Microsoft booth area at some point, but I don’t know when yet; stay tuned for updates.

Leave a comment

Filed under UC&C

Speaking at Exchange Connections 2014

I’m excited to say that I’ll be presenting at Exchange Connections 2014, coming up this fall at the Aria in Las Vegas.

Tony posted the complete list of speakers and session titles a couple of days ago. I’m doing three sessions:

  • “Who Wears the Pants In Your Datacenter: Taming Managed Availability”: an all-new session in which the phrase “you’re not the boss of me” will feature prominently. You might want to prepare by reading my Windows IT Pro article on MA; think of it as setting the table.
  • “Just Like Lemmings: Mass Migration to Office 365”: an all-new session that discusses the hows and whys of moving large volumes of mailbox and PST data into the service, using both Microsoft and third-party tools. (On the sometimes-contentious topic of public folder migration, I plead ignorance; see Sigi Jagott’s session if you want to know more.) There is a big gap between theory and practice here, and I plan to shine some light into it.
  • “Deep Dive: Exchange 2013 and Lync 2013 Integration” covers the nuts and bolts of how to tie Lync and Exchange 2013 together. Frankly, if you saw me present on this topic at DellWorld, MEC, or Lync Conference, you don’t need to attend this iteration. However, every time I’ve presented it, the room has been packed to capacity, so there’s clearly still demand for the material!

Exchange Connections always has a more relaxed, intimate feeling about it than the bigger Microsoft-themed conferences. This is in part because it’s not a Microsoft event and in part because it is considerably smaller. As a speaker, I really enjoy the chance to engage more deeply with the attendees than is possible at mega-events. If you’re planning to be there, great— and, if not, you should change your plans!

1 Comment

Filed under Office 365, UC&C

MEC 2014 wrap-up by the numbers

The MEC 2014 conference team sent out a statistical summary of the conference to speakers, and it makes for fascinating reading. I wanted to share a few of the highlights of the report because I think it makes some really interesting points about the state of the Exchange market and community.

First: the 101 sessions were attended by a total of 13,079 people. The average attendance across all sessions was 129, which is impressive (though skewed a bit by the size of some of the mega-sessions; Microsoft had to make a bet that lots of people would attend these sessions, which they did!). In terms of attendance, the top 10 sessions were mostly focused on architecture and deployment:

  • Exchange Server 2013 Architecture
  • Ready, set, deploy: Exchange Server 2013
  • Experts Unplugged: Exchange Top Issues – What are they and does anyone care or listen?
  • Exchange Server 2013 Tips & Tricks
  • The latest on High Availability & Site Resilience
  • Exchange hybrid: architecture and deployment
  • Experts Unplugged: Exchange Deployment
  • Exchange Server 2013 Transport Architecture
  • Exchange Server 2013 Virtualization Best Practices
  • Exchange Design Concepts and Best Practices
To put this in perspective, the top session on this list had just over 600 attendees and the bottom had just under 300. Overall attendance at sessions on the architecture track was about double that of the next contender, the deployment and migration track. That tells me that there is still a large audience for discussions of fundamental architecture topics, in addition to the day-in, day-out operational material that we’d normally see emerging as the mainstay of content at this point in the product lifecycle.

Next takeaway: Tim McMichael is a rock star. He captured the #1 and #2 slots in the session ratings, which is no surprise to anyone who’s ever heard him speak. I am very hopeful that I’ll get to hear him speak at Exchange Connections this year. The overall quality of speakers was superb, in my biased opinion. I’d like to see my ratings improve (more demos!) but there’s no shame in being outranked by heavy hitters such as Tony, Michael, Jeff Mealiffe, Ross Smith IV (pictured at left; not actual size), or the ebullient Kamal Janardhan. MEC provides an excellent venue for the speakers to mingle with attendees, too, both at structured events like MAPI Hour and in unstructured post-session or hallway conversations. To me, that direct interaction is one of the most valuable parts of attending a conference, both as a speaker and because I can ask other speakers questions about their particular areas of expertise.

Third, the Unplugged sessions were very popular, as measured both by attendance numbers and session ratings. I loved both the format and content of the ones I attended, but they depend on having a good moderator: someone who is both knowledgeable about the topic at hand and experienced at steering a group of opinionated folks back on topic when needed. While I am naturally bad at that, the moderators overall did an excellent job, and I hope to see more Unplugged sessions at future events.

When attendees added sessions to their calendar, the event staff used that as a means of gauging interest and assigning rooms based on the likely number of attendees. However, the data shows that people flocked to sessions based on word of mouth and didn’t necessarily update their calendars. I calculated the attendance split by dividing the number of people who attended an actual session by the number who said they would attend; if 100 people calendared a session but 50 attended, that would be a 50% split. The average split across all sessions (except one) was 53.8%, which is not bad considering how dynamic the attendance was. The one session I left out was “Experts Unplugged: Architecture – HA and Storage”, which had a split of 1167%! Of the top 10 splits (i.e. sessions where the largest percentage of people stood by their original plans), 4 were Unplugged sessions.
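
If you want to reproduce the split arithmetic yourself, it’s a one-liner against any session export you happen to have. Here’s a minimal PowerShell sketch, assuming a hypothetical CSV with Session, Calendared, and Attended columns (this is my own illustration, not anything the MEC team published):

    # Sketch: compute the attendance "split" for each session from a
    # hypothetical CSV export (Session, Calendared, Attended are assumed names).
    Import-Csv .\mec-sessions.csv |
        Select-Object Session, @{ n = 'Split'; e = { [double]$_.Attended / [double]$_.Calendared } } |
        Sort-Object Split -Descending |
        Format-Table Session, @{ n = 'Split'; e = { '{0:P1}' -f $_.Split } } -AutoSize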

Of course, MEC was much more than the numbers, but this kind of data helps Microsoft understand what people want from future events, measured not just by asking them but by observing their actual preferences and actions. I can’t wait to see what the next event, whenever it may be, will look like!

2 Comments

Filed under UC&C

Microsoft updates Recoverable Items quota for Office 365 users

Remember when I posted about the 100GB limit for Personal Archive mailboxes in Office 365? It turns out that there was another limit that almost no one knew about, primarily because it involves mailbox retention. As of today, when you put an Office 365 mailbox on In-Place Hold, the size of the Recoverable Items folder is capped at 30GB. This is plenty for the vast majority of customers because a) not many customers use In-Place Hold in the first place and b) not many users have mailboxes that are large enough to exceed the 30GB quota. Multiply two small numbers together and you get another small number.

However, there are some customers for whom this is a problem. One of the most interesting things about Office 365 to me is the speed at which Microsoft can respond to their requests by changing aspects of the service architecture and provisioning. In this case, the Exchange team is planning to increase the Recoverable Items quota to 100GB. Interestingly, they’re starting with user mailboxes that are already on hold: from now until July 2014, they’ll be silently increasing the quota for those users. If you put a user on hold today, however, their quota may not be set to 100GB until sometime later.
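
In the meantime, it’s easy to see where a given mailbox stands. Here’s a quick sketch using standard Exchange Online remote PowerShell cmdlets (the alias is hypothetical):

    # Show the current Recoverable Items quotas for a mailbox ("aprilc" is a
    # hypothetical alias) and how much space the folder is actually using.
    Get-Mailbox aprilc | Format-List RecoverableItemsQuota, RecoverableItemsWarningQuota
    Get-MailboxFolderStatistics aprilc -FolderScope RecoverableItems |
        Format-Table Name, FolderAndSubfolderSize -AutoSize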

If you need an immediate quota increase, or if you’re using a dedicated tenant, you’ll still have to use the existing mechanism of filing a support ticket to have the quota increased.

There’s no public post on this yet, but I expect one shortly. In the meantime, bask in the knowledge that with a 50GB mailbox, 100GB Personal Archive, and 100GB Recoverable Items quota, your users probably aren’t going to run out of mailbox space any time soon.

2 Comments

Filed under Office 365, UC&C

Two-factor authentication for Outlook and Office 2013 clients

I don’t usually put on my old man hat, but indulge me for a second. Back in February 2000, in my long-forgotten column for TechNet, here’s what I said about single-factor passwords:

I’m going to let you in on a secret that’s little discussed outside the security world: reusable passwords are evil.

I stand by the second half of that statement: reusable passwords are still evil, 14 years later, but at least the word is getting out, and multi-factor authentication is becoming more and more common in both consumer and business systems. I was wrong when I assumed that smart cards would become ubiquitous as a second authentication factor; instead, the “something you have” role is increasingly often filled by a mobile phone that can receive SMS messages. Microsoft bought into that trend with their 2012 purchase of PhoneFactor, which is now integrated into Azure. Now Microsoft is extending MFA support into Outlook and the rest of the Office 2013 client applications, with a few caveats. I attended a great session at MEC 2014, presented by Microsoft’s Erik Ashby and Franklin Williams, that covered both the current state of Office 365-integrated MFA and Microsoft’s plans to extend MFA to Outlook.

First, keep in mind that Office 365 already offers multi-factor authentication, once you enable it, for your web-based clients. You can use SMS-based authentication, have the service call you via phone, or use a mobile app that generates authentication codes, and you can define “app passwords” that are used instead of your primary credentials for applications— like Outlook, as it happens— that don’t currently understand MFA. You have to enable MFA for your tenant, then enable it for individual users. All of these services are included with Office 365 SKUs, and they rely on the Azure MFA service. You can, if you wish, buy a separate subscription to Azure MFA if you want additional functionality, like the ability to customize the caller ID that appears when the service calls your users.
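
If you’d rather script the per-user enablement than click through the portal, the commonly documented MSOnline pattern looks roughly like this; treat it as a sketch (the UPN is hypothetical, and you should verify the module’s current syntax):

    # Sketch: require MFA for one user via the MSOnline (Azure AD) module.
    Import-Module MSOnline
    Connect-MsolService
    $mfa = New-Object Microsoft.Online.Administration.StrongAuthenticationRequirement
    $mfa.RelyingParty = "*"
    $mfa.State = "Enabled"
    Set-MsolUser -UserPrincipalName "aprilc@contoso.com" -StrongAuthenticationRequirements @($mfa)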

With that said, here’s what Erik and Franklin talked about…

To start with, we have to distinguish between the three types of identities that can be used to authenticate against the service. Without going into every detail, it’s fair to summarize these as follows (there’s a quick way to check which model your domains use in the sketch after the list):

  • Cloud identities are homed in Azure Active Directory (AAD). There’s no synchronization with on-premises AD because there isn’t one.
  • Directory sync (or just “dirsync”) uses Microsoft’s dirsync tool, or an equivalent third-party tool, to sync an on-premises account with AAD. This essentially gives services that consume AAD a mostly-read-only copy of your organization’s AD.
  • Federated identity uses a federation broker or service such as Active Directory Federation Services (AD FS), Okta, Centrify, or Ping to allow your organization’s AD to answer authentication queries from Office 365 services. In January 2014 Microsoft announced a “Works With Office 365 – Identity” logo program, so if you don’t want to use AD FS you can choose another federation toolset that better meets your requirements.
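
If you’re not sure which model a given tenant is using, the MSOnline module will tell you. A quick sketch (assumes the module is installed and you have tenant admin credentials):

    # List each domain in the tenant and whether it's Managed (cloud-only or
    # dirsynced) or Federated.
    Connect-MsolService
    Get-MsolDomain | Format-Table Name, Authentication, Status -AutoSize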

Client updates are coming to the Office 2013 clients: Outlook, Lync, Word, Excel, PowerPoint, and SkyDrive Pro. With these updates, you’ll see a single unified authentication window for all of the clients, similar (but not necessarily identical) to the existing login window you get on Windows when signing into a SkyDrive or SkyDrive Pro library from within an Office client. From that authentication window, you’ll be able to enter the second authentication factor that you received via phone call, SMS, or authentication app. During the presentation, Franklin (or maybe Erik?) said “if you can authenticate in a web browser, you can authenticate in Office clients,” which is very cool. (PowerShell will be getting MFA support too, but it wasn’t clear to me exactly when that was happening.)

These client updates will also provide support for two specific types of smart cards: the US Department of Defense Common Access Card (CAC) and the similar-but-civilian Personal Identity Verification (PIV) card. Instead of using a separate authentication token provided by the service, you’ll plug in your smart card, authenticate to it with your PIN, and away you go.

All three of these identity types support MFA; federated identity will gain the ability to do true single sign-on (SSO) in Office 2013 clients, which will be a welcome usability improvement. Outlook will get SSO capabilities with the other two identity types, too.

How do the updates work? That’s where the magic part comes in. The Azure Active Directory Authentication Library (ADAL) is being extended to provide support for MFA. When the Office client makes a request to the service, the service returns a header that instructs the client to visit a security token service (STS) using OAuth. At that point, Office uses ADAL to launch the browser control that displays the authentication page; then, as Erik puts it, “MFA and federation magic happens transparent to Office.” If the authentication succeeds, Office gets security tokens that it caches and uses for service authentication. (The flow is described in more detail in the video from the session, which is available now for MEC attendees and will be available in 60 days or so for non-attendees.)
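
You can get a feel for the first step of that dance from PowerShell. This probe is purely my own illustration (the endpoint choice is an assumption, and the header contents will vary by tenant and service configuration):

    # Rough probe: make an unauthenticated request and dump the
    # WWW-Authenticate header, which is where a service advertises the
    # authentication schemes (including OAuth bearer) it will accept.
    try {
        Invoke-WebRequest -Uri "https://outlook.office365.com/ews/exchange.asmx" -UseBasicParsing | Out-Null
    } catch {
        $_.Exception.Response.Headers["WWW-Authenticate"]
    }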

There are two important caveats that were a little buried in the presentation. The first is that MFA in Outlook 2013 will require the use of MAPI/HTTP. More seriously, MFA will not be available to on-premises Exchange 2013 deployments until some time in the future. This aligns with Microsoft’s cloud-first strategy, but it is going to aggravate on-premises customers something fierce. In fairness, because you need the MFA infrastructure hosted in the Microsoft cloud to take advantage of this feature, I’m not sure there’s a feasible way to deliver SMS- or voice-based MFA for purely on-premises environments; if you’re running hybrid, you’re good to go.
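
If you’re on Exchange 2013 SP1 and want to get ahead of the MAPI/HTTP requirement, enabling it is a single organization-level switch. A sketch (test client behavior before flipping it in production):

    # Sketch: enable MAPI/HTTP org-wide on Exchange 2013 SP1 or later,
    # then confirm the setting took effect.
    Set-OrganizationConfig -MapiHttpEnabled $true
    Get-OrganizationConfig | Format-List MapiHttpEnabled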

Microsoft hasn’t announced a specific timeframe for these updates (other than “second half calendar 2014”), and they didn’t say anything about Mac support, though I would imagine that the rumored v.next of Mac Office would provide this same functionality. The ability to use MFA across all the Office client apps will make it easier for end users, reducing the chance that they’ll depend solely on reusable passwords and thus reducing the net amount of evil in the world— a blessing to us all.

Leave a comment

Filed under Office 365, UC&C

Script to download MEC 2014 presentations

Yay for code reuse! Tom Arbuthnot wrote a nifty script to download all the Lync Conference 2014 presentations, and since Microsoft used the same event management system for MEC 2014, I grabbed his script and tweaked it so that it will download the MEC 2014 session decks and videos. It only works if you are able to sign into the MyMEC site, as only attendees can download the presentations and videos at this time. I can’t guarantee that the script will pull all the sessions, but it seems to be working so far; give it a try. (And remember, the “Unplugged” sessions weren’t recorded, so you won’t see any recordings or decks for them.) If the script works, thank Tom; if it doesn’t, blame me.

Download the script

3 Comments

Filed under UC&C

The value of lagged copies for Exchange 2013

Let’s talk about… lagged copies.

For most Exchange administrators, the subject of lagged database copies falls somewhere between “the Kardashians’ shoe sizes” and “which of the Three Stooges was the funniest” in terms of interest level. The concept is easy enough to understand: a lagged copy is merely a passive copy of a mailbox database where the log files are not immediately played back, as they are with ordinary passive copies. The period between the arrival of a log file and the time when it’s committed to the database is known as the lag interval. If you have a lag interval of 24 hours set on a database, a new log for that database generated at 3pm on April 4th won’t be played into the lagged copy until at least 3pm on April 5th (I say “at least” because the exact time of playback will depend on the copy queue length). The longer the lag interval, the more “distance” there is between the active copy of the mailbox database and the lagged copy.
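
To make that concrete, here’s how you’d create a copy with a 24-hour lag; a minimal sketch with hypothetical database and server names:

    # Sketch: add a database copy on MBX4 with a 24-hour replay lag and make
    # it the least-preferred copy for activation (DB1/MBX4 are hypothetical).
    Add-MailboxDatabaseCopy -Identity DB1 -MailboxServer MBX4 `
        -ReplayLagTime 1.00:00:00 -ActivationPreference 4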

Lagged copies are intended as a last-ditch “goalkeeper” safety mechanism in case of logical corruption. Physical corruption caused by a hardware failure happens after Exchange has handed the data off to be written, so it won’t be replicated. Logical corruption introduced by components other than Exchange (say, an improperly configured file-level AV scanner) that write directly to the database or transaction log files wouldn’t be replicated in any event, so the real use case for the lagged copy is to give you a window in time during which logical corruption caused by Exchange or its clients hasn’t yet been replicated to the lagged copy. Obviously the size of this window depends on the length of the lag interval; whether it’s sufficient for you to a) notice that the active database has become corrupted, b) play the accumulated logs forward into the lagged copy, and c) activate the lagged copy depends on your environment.

The prevailing sentiment in the Exchange world has largely been “I do backups already, so lagged copies don’t give me anything.” When Exchange 2010 first introduced the notion of a lagged copy, Tony Redmond weighed in on it. Here’s what he said back then:

For now, I just can’t see how I could recommend the deployment of lagged database copies.

That seems like a reasonable stance, doesn’t it? At MEC this year, though, Microsoft came out swinging in defense of lagged copies. Why would they do that? Why would you even think of implementing lagged copies? It turns out that there are some excellent reasons that aren’t immediately apparent. (It may help to review some of the resiliency and HA improvements delivered in Exchange 2013; try this excellent omnibus article by Microsoft’s Scott Schnoll if you want a refresher.) Here are some of the reasons why Microsoft has begun recommending the use of lagged copies more broadly.

1. Lagged copies are better in 2013

Exchange 2013 includes a number of improvements to the lagged copy mechanism. In particular, the new loose truncation feature introduced in SP1 means that you can prevent a lagged copy from taking up too much log space by adjusting the amount of log space that the replay mechanism will use; when that limit is reached, the logs will be played down to make room. Exchange 2013 (and SP1) also make a number of improvements to the Safety Net mechanism (discussed fully in Chapter 2 of the book), which can be used to play missing messages back into a lagged copy by retrieving them from the transport subsystem.
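
Note that loose truncation isn’t exposed through a cmdlet; it’s enabled per server through registry values, so a sketch like the following is the usual approach (check TechNet for the authoritative value names and semantics before relying on this):

    # Sketch: enable loose truncation on a mailbox server by creating one of
    # the documented registry values (Exchange 2013 SP1).
    $path = 'HKLM:\SOFTWARE\Microsoft\ExchangeServer\v15\BackupInformation'
    if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
    New-ItemProperty -Path $path -Name 'LooseTruncation_MinCopiesToProtect' `
        -PropertyType DWord -Value 2 -Force | Out-Null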

2. Lagged copies are continuously verified

When you back up a database, Exchange verifies each page as it is backed up by computing its checksum and comparing it to the stored checksum; if that check fails, you get the dreaded JET_errReadVerifyFailure (-1018) error. However, just because you can successfully complete the backup doesn’t mean that you’ll be able to restore it when the time comes. By comparison, the log replay mechanism logs errors immediately when it encounters them during playback into the lagged copy. If you’re monitoring event logs on your servers, you’ll be notified as soon as this happens, and you’ll know that your lagged copy is unusable now, not when you need to restore it. If you’re not monitoring your event logs, then lagged copies are the least of your problems.
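
“Monitoring event logs” can be as simple as a scheduled check. A trivial sketch (the provider filter is my assumption; adapt it to whatever your monitoring system expects):

    # Sketch: surface recent replication/ESE errors from the Application log.
    Get-WinEvent -FilterHashtable @{ LogName = 'Application'; Level = 2 } -MaxEvents 200 |
        Where-Object { $_.ProviderName -match 'MSExchangeRepl|ESE' } |
        Format-Table TimeCreated, ProviderName, Id, Message -AutoSize -Wrap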

3. Lagged copies give you more flexibility for recovery

When your active and passive copies of a database become unusable and you need to fall back to your lagged copy, you have several choices, as described in TechNet. You can easily play back every log that hasn’t yet been committed to the database, in the correct order, by using Move-ActiveMailboxDatabase. If you’d rather, you can play back the logs up to a certain point in time by removing the log files that you don’t want to play back. You can also play messages back directly from Safety Net into the lagged copy.
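
For the simplest case, replaying everything and activating, the sequence is short. A sketch with hypothetical names (see the TechNet procedure for the point-in-time variants):

    # Sketch: activate the lagged copy of DB4 on MBX4, replaying all
    # outstanding logs; -SkipLagChecks allows activation despite the lag.
    Move-ActiveMailboxDatabase DB4 -ActivateOnServer MBX4 `
        -SkipLagChecks -MountDialOverride BestEffort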

4. There’s no hardware penalty for keeping a lagged copy

Some administrators assume that you have to keep lagged copies of databases on a separate server. While this is certainly supported, you don’t have to have a “lag server” or anything like unto it. The normal practice in most designs has been to store lagged copies on other servers in the same DAG, but you don’t even have to do that. Microsoft recommends that you keep your mailbox databases no bigger than 2TB. Stuff your server with a JBOD array of the new 8TB disks (or, better yet, buy a Dell PowerVault MD1220) and you can easily put four databases on a single disk: the active copy of DB1, the primary passive copy of DB2, the secondary passive copy of DB3, and the lagged copy of DB4. This gives you an easy way to get the benefits of a four-copy DAG while still using the full capacity of the disks you have: the additional IOPS load of the lagged copy will be low, so hosting it on a volume that already has active and passive copies of other databases is a reasonable approach (one, however, that you’ll want to test with Jetstress).

It’s always been the case that the architecture Microsoft recommends when a new version of Windows or Exchange is released evolves over time as they, and we, get more experience with it in the real world. That’s clearly what has happened here; changes in the product, improvements in storage hardware, and a shift in the economic viability of conventional backups mean that lagged copies are now much more appropriate for use as a data protection mechanism than they were in the past. I expect to see them deployed more and more often as Exchange 2013 deployments continue and our collective knowledge of best practices for them improves.

1 Comment

Filed under UC&C