Category Archives: UC&C

Microsoft replaces MEC, LyncConf, SPC with new “unified technology event”

So the news is out: Microsoft is rolling MEC, Lync Conference, and SharePoint Conference into a single “unified commercial technology conference” in Chicago next year. MVPs were notified that this change was in the works, and there was a lot of vigorous discussion. Now that the cat has been debagged, I wanted to share a few thoughts about this new conference. For perspective, I should say that I attended almost all of the original MEC conferences back in the day and hit both “next-gen” MECs and this year’s Lync Conference. I have also spoken at TechEd around a dozen times all told; I co-chaired Exchange Connections for a number of years and am a repeat speaker there as well, so I am thoroughly familiar with the landscape of Exchange and Lync-oriented conferences. (Since I haven’t been to SPC, any time I talk about MEC or LyC you can just mentally search-and-replace “SPC” in there if you like.)

Is this just TechEd 2.0?

The announcement, bylined with Julia White’s name, says that Microsoft is combining MEC, LyC, and SPC to provide a unified event that will give attendees “clearer visibility into Microsoft’s future technology vision and roadmap” and “unparalleled access to Microsoft senior leaders and the developers who write the code.” One of the most valuable aspects of the current set of product-specific conferences, of course, is the deep engagement with people from each specific product group. The enthusiasm and passion of the developers, testers, support engineers, PMs, and leaders of the Exchange and Lync product groups shine through: they are just as happy and excited to be there as the attendees are, and this creates a unique energy and sense of community that are consistently absent from TechEd.

Microsoft has been very successful at positioning TechEd as the generalists’ conference, with coverage of every part of their stack. Developers, architects, security engineers, and business decision makers all have content targeted at them, but that content is often driven by Microsoft’s marketing agenda rather than by customer demand. As the number of products in Microsoft’s portfolio has grown, TechEd hasn’t lengthened to accommodate more sessions; instead, the number of Exchange/Lync/Office 365 sessions has remained roughly constant even as those products have expanded. I think it’s fair to say that as a vehicle for deep technical information, TechEd’s glory days are far behind it. On the other hand, as a vehicle to showcase the Microsoft party line, TechEd has thrived. It became clear several years ago that individual product communities would greatly benefit from having their own conferences to focus on their unique needs. Exchange Connections did a good job of filling this niche, of course, but first SPC, then LyC, then MEC proved that these product-specific conferences engender a very high degree of attendee (and exhibitor) satisfaction and engagement, and they demonstrated the high value of having a Microsoft-led and -organized conference with enthusiastic participation from the big wheels in each product group.

The announcement goes on to cite “feedback from attendees across the past conferences asking for more content and product team engagement across Microsoft versus just within one product area.” In complete sincerity, I can say that none of the hundreds of MEC or LyC attendees, MVPs, or Microsoft product group folks I have spoken to has said “gee, what we really need is a big conference that covers all of Microsoft’s UC&C products.” I do know that the product groups have aggressively sought and carefully considered feedback from attendees at these conferences, so it’s certainly possible that they’ve been hearing something very different than I have. It is true that people whose duties or interests span multiple products have to go to multiple conferences, and this is a valid complaint. Many consultants can’t spare multiple weeks of bench time to attend all of the relevant conferences, and many smaller companies that use multiple products aren’t able to budget for multiple conferences either. So from their standpoint, perhaps this unification is a win.

Tony points out that there are great logistical and cost-savings benefits to Microsoft in consolidating the conference, and that exhibitors may prefer to have a larger, more diverse audience. I agree with the former; on the latter, I’m not sure. Companies whose product lines span multiple parts of the UC&C ecosystem may benefit; for example, ENow makes both Exchange and Lync monitoring solutions, so having both Lync and Exchange admins in the crowd is great for them. I’m not sure the same is true for exhibitors such as Polycom, AvePoint, or Sherpa Software, whose products focus on one Microsoft server.

Julia goes on to promise that “this unified conference will be every bit as awesome, every bit as valuable and in fact, it will exceed on both these measures. That is our maniacal focus and commitment to you, so hold us to it!” While I am naturally skeptical of broad and unsupported promises such as this, the many, many people involved in the existing round of conferences— from Julia and her staff to the individual product group folks like Jamie Stark and Brian Shiers to the MVP and MCM speakers— all have a huge interest in making sure that the new event meets the high bar set by the existing conferences. That helps temper my skepticism with a high degree of optimism. The announcement promises more details on the conference (perhaps including a name?) in September, and I’d expect to see more details at TechEd EMEA in October.

One last note for speculation: if you were Julia, and you were planning on introducing new versions of your flagship products, wouldn’t it be logical to do it with a big splash at a new event? May 2015 is, conveniently, in the first half of calendar year 2015, and at MEC 2014 Microsoft told us to expect a new on-prem version of Exchange in the second half of 2015.

Filed under Office 365, UC&C

Does Azure Machine Learning open the door for on-premises Office Graph?

Microsoft continues to expand the reach of its Azure services by introducing new capabilities, seemingly on a daily basis. Today I was surprised to see an announcement for the new Azure Machine Learning service (more background in this NY Times article). The link for the service apparently isn’t live yet, though.

The availability of this service raises some interesting questions around Office Graph, the set of nifty social-ish features that Microsoft introduced at SPC and reiterated at MEC and TechEd. We recently learned that, at least for now, there are no plans to offer Office Graph, and its associated features, to on-premises customers in the next release of Exchange Server. Carefully parse that statement; it could mean anything from “there will never be Office Graph features in on-prem Exchange” to “we can change our plans and include them at any time.”

It’s fair to say that Office Graph is designed to leverage the high scale of Office 365, and that because it is a resource-intensive group of processes and services, there’s likely to be a lot of infrastructure for management, monitoring, and tuning of its components— not necessarily something that could trivially be unleashed on the existing base of on-premises customers. I’d bet that these services have a lot of interconnections, too. However, if Microsoft is adopting the Amazon approach of “everything is a service”, as they seem to be, you’d think that having some parts of Office Graph running on Azure ML is not only possible but probable. And the Azure folks are clearly comfortable with hybrid environments, as witness the fact that the Forza 5 and Titanfall video games on Xbox One both make extensive use of Azure-based resources.

So, if Office Graph is (or could be) consuming Azure ML as a service, it would seem to lower the barrier for getting Office Graph-related services into on-prem Exchange. I’ll be watching closely to see what Microsoft announces, and even more closely to see what they do, around this issue— it seems like the best possible world would be one where on-prem customers can harness the scale of Azure to get access to Office Graph features and where Microsoft doesn’t have to engineer a complete support system around on-prem variants of the Office Graph components. Stay tuned…

Filed under Office 365, UC&C

Creating an Office 365 demo tenant

One of the big advantages of software as a service (SaaS) is supposed to be reduced overhead: there are no servers to install or configure, so provisioning services should be much easier. That might be true for customers, but it isn’t necessarily true for us as administrators and consultants. Learning about Office 365 really requires hands-on experience. You can only get so far by reading the (voluminous) documentation and watching the (many and excellent) training videos that Microsoft has produced. However, there’s a problem: Office 365 costs money.

There are a few routes to get free access to Office 365. If you’re an MVP, you can get a free subscription, limited (I think) to 25 users. If you’re an MSDN subscriber, you can get a tenant with a single user license, which is fine for playtime but not terribly useful if you need a bigger lab. Microsoft also has a 30-day trial program (for some plans: Small Business Premium, Midsize Business, and Enterprise) that allows you to set up a tenant and use it, but at the end of that 30-day period the tenant goes away if you don’t pay for it. That means you can potentially waste a lot of effort customizing a tenant, creating users, and so on only to have it vanish unless you whip out the credit card.

I was a little surprised to find out recently that there’s another alternative: Microsoft has a tool that will create a new demo tenant on demand for you. You can customize many aspects of the tenant behavior, and you can use the provided user accounts (which include contact photos and real-looking sample emails and documents) or create your own. There are even vertical-specific packs that customize the environment for particular customer types. And it’s all free; no payment information is required. However, you do have to have a Windows Live ID that is associated with a Microsoft Partner Network (MPN) account. If you don’t have one, you can join MPN fairly easily.
All this goodness is available from www.microsoftofficedemos.com. Here’s what you need to do to use it.
  1. Go to http://www.microsoftofficedemos.com/ and log in.
  2. Click the “Get Demo” link in the top nav bar, or the “Create Demo” link on the page, or just go to https://www.microsoftofficedemos.com/Provision_step1.aspx. That will display the page below. Note that you can download VHDs that provide an on-prem version of the demo environment if you want those instead.
    [Screenshot: Tenant01]
  3. Make sure you’ve selected “Office 365 tenant” from the pulldown, then click “Next”. That will display a new page with four choices, all of which are pretty much self-explanatory. If you want an empty tenant to play around with, choose the “Create an empty Office 365 tenant” option. If you want one that has users, email, documents, and so on, choose “Create new demo environment” instead.
    [Screenshot: tenant02]
  4. On the next page, you can choose whether you want the standard demo content or a vertical-specific demo pack. This will be a really useful option once Microsoft adds more vertical packs, but for now the only semi-interesting one is retail, and the provided demo guides (IMHO) are more useful for the standard set, so that’s what I’d pick. After you choose a data set, click “Create Your Demo”.
  5. The next page is where you name the tenant, and where Microsoft asks you to prove you’re not a bot by entering a code that they send to your mobile phone. (Bonus points if you know why I picked this particular tenant name!) The optional “Personalize Your Environment” button lets you change the user names (both aliases and full names) and contact pictures, so if you’re doing a demo for a particular customer you can put in the names of the people who will attend the demo to add a little spice. The simple option is to customize a single user; there’s one main user for each of the demos (which I’ll get to in a minute), but you can customize any or all of the 25 default users.
    [Screenshot: Tenant04]
  6. Once you click “Create My Account”, the demo engine will start creating your tenant and provisioning it. This takes a while; for example, yesterday it took about 12 hours from start to finish. Provisioning demos is just about last on Microsoft’s priority list, so if you need a tenant in a hurry, use the “Create an empty Office 365 tenant” option I mentioned earlier. You’ll see a progress page like the one below, but you’ll also get a notification email at the address you provided in step 5 when everything’s finished, so there’s no need to sit and watch it.
    [Screenshot: Tenant06]
Once the tenant is provisioned, you can log into it using any of the test users, or the default “admin” user. How do you know which users are configured (presuming you didn’t customize them, that is)? Excellent question. The demo guides provide a complete step-by-step script both for setting up the demo environment and executing the demo itself. For example, the Office 365 Enterprise “hero demo” is an exhaustive set of steps that covers all the setup you need to do on the tenant and whatever client machines you’re planning on using.
Once the tenant is provisioned, it’s good for 90 days. You can’t renew it, but at any time during the 90 days you can refresh the demo content so that emails, document modification times, and so on are fresh. And on the 91st day, you can just recreate the tenant; there doesn’t seem to be any explicit limit to the number of tenants you can create or the number of times you can create a tenant with a given name.
While the demo data set is quite rich, and the provided demo scripts give you a great walkthrough to show off Office 365, you don’t have to use them. If you just want a play area that you can test with, this environment is pretty much ideal. It has full SMTP connectivity, although I haven’t tested to verify that every federation and sharing feature works properly (so, for example, you might not be able to set up free/busy sharing with your on-prem accounts). I also don’t know whether there are any admin functions that have been RBAC’d to be off limits. (If you see anything like that, please post a comment here.)
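
Since the demo tenant is a regular Office 365 tenant under the hood, the standard Exchange Online remote PowerShell connection should work against it just like any other tenant. Here’s a minimal sketch; the tenant name and admin UPN are hypothetical, so substitute your own:

```powershell
# Connect to Exchange Online remote PowerShell (tenant and admin names are made up)
$cred = Get-Credential "admin@yourdemotenant.onmicrosoft.com"
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri "https://ps.outlook.com/powershell/" `
    -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $session

# List the provisioned demo users to see what you got
Get-Mailbox | Select-Object DisplayName, PrimarySmtpAddress

Remove-PSSession $session
```
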
Enjoy!

Filed under Office 365, UC&C

Mailbox-level backups in Office 365

Executive summary: there aren’t any, so plan accordingly.

Recently I was working with a customer (let’s call him Joe, as in “Joe Customer”) who was considering moving to Office 365. They went to our executive briefing center in Austin, where some Dell sales hotshots met and briefed them, then I joined in via Lync (with video!) for a demo. The demo went really well, and I was feeling good about our odds of winning the deal… until the Q&A period.

“How does Office 365 provide mailbox-level backups?” Joe asked.

“Well, it doesn’t,” I said. “Microsoft doesn’t give you direct access to the mailbox databases. Instead, they give you deleted item retention, plus you can use single-item retention and various types of holds.” Then I sent him this link.

“Let me tell you why I’m asking,” Joe retorted after skimming the link. “A couple of times we’ve lost our CIO’s calendar. He uses an Outlook add-in that prints out his calendar every day, and sometimes it corrupts calendar items. We need to be able to do mailbox-level backups so that we can restore any damaged items.”

At that point I had to admit to being stumped. Sure enough, there is no Office 365 feature or capability that protects against this kind of logical corruption. You can’t use New-MailboxExportRequest or the EAC to export the contents of Office 365 mailboxes to PST files. You obviously can’t run backup tools that run on the Exchange server against your Office 365 mailbox databases; there may exist tools that use EWS to directly access a mailbox and make a backup copy, but I don’t know of any that are built for that purpose.
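
For contrast, here’s the sort of on-premises export Joe was asking for, which is exactly what you can’t do against the service. This is a sketch, assuming an on-prem Exchange 2010 SP1 or later server, a hypothetical “cio” mailbox, and a share the Exchange Trusted Subsystem can write to:

```powershell
# On-premises only: the Mailbox Import Export role must be assigned first
New-ManagementRoleAssignment -Role "Mailbox Import Export" -User "adminuser"

# Export just the calendar folder to a PST (mailbox name and path are hypothetical)
New-MailboxExportRequest -Mailbox "cio" -IncludeFolders "#Calendar#" `
    -FilePath "\\fileserver\pst\cio-calendar.pst"

# Check progress
Get-MailboxExportRequest | Get-MailboxExportRequestStatistics
```
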

I ran Joe’s query past a few folks I know on the 365 team. Apart from the (partially helpful) suggestion not to run Outlook add-ins that are known to corrupt data, none of them had good answers either.

While it’s tempting to view the inability to do mailbox-level backups as a limitation, it’s perfectly understandable. Microsoft spent years trying to get people not to run brick-level backups using MAPI. The number of use cases for this feature gets smaller each year as both the data-integrity and retention features of Exchange get better. In fact, one of the major reasons that we now have single-item recovery in its current form is that customers kept asking for expanded tools to recover deleted items, either after an accidental deletion or a purge. Exchange also incorporates all sorts of infrastructure to protect against data loss, both for stored data and data in transit, but nothing really helps in this case: the corrupt data comes from the client, and Exchange is faithfully storing and replicating what it gets from the client. In fairness, we have seen business logic added to Exchange in the past to protect against problems caused by malformed calendar entries created by old versions of Outlook, but clearly Microsoft can’t do that for every random add-in that might stomp on a user’s calendar.
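
If single-item recovery is going to be your backstop, remember that it isn’t necessarily enabled by default. A quick sketch, with a hypothetical mailbox name:

```powershell
# Enable single-item recovery and keep deleted items around for 30 days
Set-Mailbox -Identity "cio" -SingleItemRecoveryEnabled $true -RetainDeletedItemsFor 30

# Verify the settings
Get-Mailbox "cio" | Format-List SingleItemRecoveryEnabled, RetainDeletedItemsFor
```
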

A few days after the original presentation, I sent Joe an email summarizing what I’d found out and telling him that, if mailbox-level backup was an absolute requirement, he probably shouldn’t move those mailboxes to Office 365.

The moral of this story, to the extent that there is one, is that Microsoft is engineering Office 365 for the majority of their users and their needs. Just as Word (for instance) is supplemented by specialized plugins for reference and footnote tracking, mathematical typesetting, and chemistry diagrams, Exchange has a whole ecosystem of products that connect to it in various ways, and Office 365 doesn’t support every single one of them. The breadth and diversity of the Exchange ecosystem is one of the major reasons that I expect on-premises Exchange to be with us for years to come. Until it finally disappears, don’t forget to do backups of some kind.

Filed under Office 365, UC&C

US lawyers and Office 365

Every field has its own unique constraints; the things the owner of a small manufacturing business worries about will have some overlap, but many differences, compared to what the CEO of a multi-billion-dollar energy company is concerned with. The legal industry is no exception; one major area of concern for lawyers is ethics. No, I don’t mean that they’re concerned about not having any. (I will try to refrain from adding any further lawyer jokes in this post unless, you know, they’re funny).

Disclaimer: I am not a lawyer. This is not legal advice. Seriously.

The entire US legal system is based on a number of core principles, including that of precedent, or what laymen might call “tradition”. For that reason, as well as the stiff professional penalties that may result from a finding of malpractice or incompetence, many in the legal profession have been slower to embrace technology than their peers in other industries. When there is no settled precedent to answer a question, someone has to generate precedent, often by taking a case to court. Various professional standards bodies can generate opinions that are considered to be more or less binding on their members, too. To cite one example of what I mean, here’s what the Lawyers’ Professional Responsibility Board of the state of Minnesota has to say about one small aspect of legal ethics, the safeguarding and use of metadata:

…a lawyer is ethically required to act competently to avoid improper disclosure of confidential and privileged information in metadata in electronic documents.

That seems pretty straightforward; the body responsible for “the operation of the professional responsibility system in Minnesota” issued an opinion calling for attorneys in that state to safeguard metadata and refrain from using it in ways that conflict with their other ethical obligations. With that opinion now extant, lawyers in Minnesota can, presumably, be disciplined for failing to meet that standard.

With that as background, let me share this fascinating link: a list of ethics opinions related to the use of cloud services by lawyers and law firms. (I found the list at Sharon Nelson’s excellent “Ride the Lightning” blog, which I commend to your attention.)

Let that sink in for a minute: some of the organizations responsible for setting ethical standards for lawyers in various states are weighing in on the ethics of legal use of cloud services.

This strikes me as remarkable for several reasons. Consider, for example, that there don’t seem to be similar guidelines for e-mail admins, or professional engineers, or cosmetologists, or any other profession that I can think of. In pretty much every other market, if you want to use cloud services, feel free! Oh, sure, you may want to consider the ramifications of putting sensitive or protected data into the cloud, especially if you have specific requirements around compliance or governance. By and large, though, no one is going to punish you for using cloud services in your business if that choice turns out to be inappropriate. On the other hand, if you’re a lawyer, you can be professionally liable for failing to protect your clients’ confidentiality, as might happen in case of a data breach at your cloud provider.

The existence of these opinions, then, means that in at least 14 states, there are now defined standards that practitioners are expected to follow when choosing and using cloud services. For example, the Alabama standard (which I picked because it is simple, because I live in Alabama, and because it was first in the alphabetical list) says:

…a lawyer may use “cloud computing” or third-party providers to store client data provided that the attorney exercises reasonable care in doing so… The duty of reasonable care requires the lawyer to become knowledgeable about how the provider will handle the storage and security of the data being stored and to reasonably ensure that the provider will abide by a confidentiality agreement in handling the data. Additionally, because technology is constantly evolving, the lawyer will have a continuing duty to stay abreast of appropriate security safeguards that should be employed by the lawyer and the third-party provider. If there is a breach of confidentiality, the focus of any inquiry will be whether the lawyer acted reasonably in selecting the method of storage and/or the third party provider.

The other state opinions are generally similar in that they require an attorney to act with “reasonable care” in choosing a cloud service provider. That makes Microsoft’s recent relaunch of the expanded Office 365 Trust Center a great move: it succinctly addresses “appropriate security safeguards” that are applied throughout the Office 365 stack. Reading it will give you a solid grounding in the physical, technical, and operational safeguards that Microsoft has in place.

Compared to its major SaaS competitors, Microsoft’s site has more breadth and depth about security in Office 365, and it’s written in an approachable style that is appropriate for non-technical people... including attorneys. In particular, the top-10 lists provide easily digestible bites that help to reassure customers that their data, and metadata, are safe within Microsoft’s cloud. By comparison, the Google Apps security page is limited in both breadth and depth; the Dropbox page is laughable, and the Box.net page is basically a quick list of bullets without much depth to back them up.

The Office 365 Trust Center certainly provides the information necessary for an attorney to “become knowledgeable about how the provider will handle the storage and security of the data being stored”, and it is equally useful for the rest of us because we can do the same thing. If you haven’t already done so, it’s worth a few minutes of your time to go check it out; you’ll probably come away with a better idea of the number and type of security measures that Microsoft applies to Office 365 operations, which will help you if a) you go to law school and/or b) you are considering moving to Office 365.

Filed under Office 365, UC&C

Exchange Server and Azure: “not now” vs “never”

Wow, look what I found in my drafts folder: an old post.

Lots of Exchange admins have been wondering whether Windows Azure can be used to host Exchange. This is to be expected, for two reasons. First, Microsoft has been steadily raising the volume of Azure-related announcements, demos, and other collateral material. TechEd 2014 was a great example: there were several Azure-related announcements, including the availability of ExpressRoute for private connections to the Azure cloud and several major new storage improvements. These changes build on Microsoft’s aggressive evangelism, which has succeeded in convincing iOS and Android developers to use Azure as the back-end service for their apps. The other reason, sadly, is why I’m writing: there’s a lot of misinformation about Exchange on Azure (e.g. this article from SearchExchange titled “Points to consider before running Exchange on Azure”, which is wrong, wrong, and wrong), and you need to be prepared to defuse its wrongness with customers who may misunderstand what they’re potentially getting into.

On its face, Azure’s infrastructure-as-a-service (IaaS) offering seems pretty compelling: you can build Windows Server VMs and host them in the Azure cloud. That seems like it would be a natural fit for Exchange, which is increasingly viewed as an infrastructure service by customers who depend on it. However, there are at least three serious problems with this approach.

First: it’s not supported by Microsoft, something that the “points to consider” article doesn’t even mention. The Exchange team doesn’t support Exchange 2010 or Exchange 2013 on Azure or Amazon EC2 or anyone else’s cloud service at present. It is possible that this will change in the future, but for now any customer who runs Exchange on Azure will be in an unsupported state. It’s fun to imagine scenarios where the Azure team takes over first-line support responsibility for customers running Exchange and other Microsoft server applications; this sounds a little crazy but the precedent exists, as EMC and other storage companies did exactly this for users of their replication solutions back in Exchange 5.5/2000 times. Having said that, don’t hold your breath. The Azure team has plenty of other more pressing work to do first, so I think that any change in this support model will require the Exchange team to buy in to it. The Azure team has been able to get that buy-in from SharePoint, Dynamics, and other major product groups within Microsoft, so this is by no means impossible.

Second: it’s more work. In some ways Azure gives you the worst of the hosted Exchange model: you have to do just as much work as you would if Exchange were hosted on-premises, but you’re also subject to service outages, inconsistent network latency, and all the other transient or chronic irritations that come, at no extra cost, with cloud services. Part of the reason that the Exchange team doesn’t support Azure is that there’s no way to guarantee that any IaaS provider is offering enough IOPS, low-enough latency, and so on, so troubleshooting performance or behavior problems with a service such as Azure can quickly turn into a nightmare. If Azure were able to provide guaranteed service levels for disk I/O throughput and latency, that would help quite a bit, but it would probably require significant engineering effort. Although I don’t recommend that you do it at the moment, you might be interested in this writeup on how to deploy Exchange on Azure; it gives a good look at some of the operational challenges you might face in setting up Exchange+Azure for test or demo use.
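
For what it’s worth, the raw provisioning side of such an (unsupported!) test lab is simple enough with the 2014-era Azure PowerShell module (the service management cmdlets, not the newer Resource Manager ones). Everything here, from the subscription name to the VM size, is an assumption you’d adjust:

```powershell
# Azure Service Management cmdlets (the "Azure" module circa 2014)
Import-Module Azure
Select-AzureSubscription -SubscriptionName "Lab"

# Pick a Windows Server 2012 R2 gallery image
$img = (Get-AzureVMImage |
        Where-Object { $_.Label -like "*Windows Server 2012 R2*" } |
        Select-Object -First 1).ImageName

# Create a VM that could host a lab Exchange server (names and sizes are hypothetical)
$adminPwd = "ChooseAStrongPassword!"
New-AzureVMConfig -Name "EX1" -InstanceSize Large -ImageName $img |
    Add-AzureProvisioningConfig -Windows -AdminUsername "labadmin" -Password $adminPwd |
    New-AzureVM -ServiceName "contoso-exlab" -Location "East US"
```
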

Third: it’s going to cost more. Remember that IaaS providers typically charge for resource consumption. Exchange 2013 (and Exchange 2010, too) is designed to be “always on”. The workload management features in Exchange 2013 provide throttling, sure, but they don’t eliminate all of the background maintenance that Exchange is more-or-less continuously performing. These tasks, including GAL grammar generation for Exchange UM, the managed folder assistant, calendar repair, and various database-related tasks, have to run, and so IaaS-based Exchange servers are continually going to rack up storage, CPU, and network charges. In fairness, I haven’t estimated what these charges might be for a typical test-lab environment; it’s possible that they’d be cheap enough to be tolerable, but I’m not betting on it, and no doubt a real deployment would be significantly more expensive.

Of course, all three of these problems are soluble: the Exchange team could at any time change their support policy for Exchange on Azure, and/or the Azure team could adjust the cost model to make the cost for doing so competitive with Office 365 or other hosted solutions. Interestingly, though, two different groups would have to make those decisions, and their interests don’t necessarily align, so it’s not clear to me if or when we might see this happen. Remember, the Office 365 team at Microsoft uses physical hardware exclusively for their operations.

Does that mean that Azure has no value for Exchange? On the contrary. At TechEd New Orleans in June 2013, Microsoft’s Scott Schnoll said they were studying the possibility of using an Azure VM as the witness server for DAGs in Exchange 2013 CU2 and later. This would be a super feature because it would allow customers with two or more physically separate data centers to build large DAGs that weren’t dependent on site interconnects (at the risk, of course, of requiring always-on connectivity to Azure). The cost and workload penalty for running a file share witness (FSW) on Azure would be low, too. In August 2013, the word came down: Azure in its present implementation isn’t suitable for use as an FSW. However, the Exchange team has requested some Azure functionality changes that would make it possible to run this configuration in the future, so we have that to look forward to.
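
For reference, repointing a DAG’s witness is a one-line change, which is part of what would make an Azure-hosted FSW so attractive. A sketch with hypothetical names (and again, this isn’t supported on Azure today):

```powershell
# Repoint the DAG witness at a (hypothetical) Azure-hosted file server
Set-DatabaseAvailabilityGroup -Identity "DAG1" `
    -WitnessServer "azurefsw.contoso.com" -WitnessDirectory "C:\DAGFSW"

# Confirm the change
Get-DatabaseAvailabilityGroup "DAG1" | Format-List Name, WitnessServer, WitnessDirectory
```
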

Then we have the wide world of IaaS capabilities opened up by Windows Azure Active Directory (WAAD), Azure Rights Management Services, Azure Multi-Factor Authentication, and the large-volume disk ingestion program (now known as the Azure Import/Export Service). As time passes, Microsoft keeps delivering more, and better, Azure services that complement on-premises Exchange, which has been really interesting to watch. I expect that trend to continue, and there are other, less expensive ways to use IaaS for Exchange if you only want it for test labs and the like. More on that in a future post….

Filed under General Tech Stuff, UC&C

Getting ready for TechEd 2014

Wow, this snuck up on me! TechEd 2014 starts in 10 days, and I am nowhere near ready.

A few years ago, I started a new policy: I only attend TechEd to speak, not as a general attendee or press member, because the level of technical content for the products I work with has declined steadily over the years. This is to be expected; in a four-day event, there’s a finite number of sessions that Microsoft can present, and as they add new products, every fiefdom must have its due. There are typically around 30 sessions that involve unified communications in some way; that number has remained fairly constant since 2005 or so. Over the last several years, the mix of sessions has changed to accommodate new versions of Exchange, Lync, and Office 365, but the limited number of sessions means that TechEd can’t offer the depth of MEC, Exchange Connections, or Lync Conference. This year there are 28 Exchange-related sessions, including several that are really about Office 365— so about 25% of the content of MEC.

I can’t keep track of how many previous TechEd events I’ve been to; if you look at the list, you’ll see that they tend to be concentrated in a small number of cities and so they all kind of blend together. (Interestingly, this 2007 list of the types of attendees you see at TechEd is still current.) The most memorable events for me have been the ones in Europe (especially last year’s event in Madrid, where I’d never been before).

This year I was asked to pinch-hit and present OFC-B318, “What’s New in Lync Mobile.” That’s right— so far this year, I have presented on Lync at Lync Conference and MEC, plus this session, plus another Lync session at Exchange Connections! If I am not careful I’ll get a reputation. Anyway, I am about ready to dive into shining up my demos, which will feature Lync Mobile on a variety of devices— plus some special guests will be joining me on stage, including my favorite Canadian, an accomplished motorcycle rider, and a CrossFitter. You’ll have to attend the session to find out who these people are though: 3pm, Monday the 12th— see you there! I’ll also be working in the Microsoft booth area at some point, but I don’t know when yet; stay tuned for updates.

Filed under UC&C

Speaking at Exchange Connections 2014

I’m excited to say that I’ll be presenting at Exchange Connections 2014, coming up this fall at the Aria in Las Vegas.

Tony posted the complete list of speakers and session titles a couple of days ago. I’m doing three sessions:

  • “Who Wears the Pants In Your Datacenter: Taming Managed Availability”: an all-new session in which the phrase “you’re not the boss of me” will feature prominently. You might want to prepare by reading my Windows IT Pro article on MA, sort of to set the table.
  • “Just Like Lemmings: Mass Migration to Office 365”: an all-new session that discusses the hows and whys of moving large volumes of mailbox and PST data into the service, using both Microsoft and third-party tools. (On the sometimes-contentious topic of public folder migration, I plead ignorance; see Sigi Jagott’s session if you want to know more). There is a big gap between theory and practice here and I plan to shine some light into it.
  • “Deep Dive: Exchange 2013 and Lync 2013 Integration” covers the nuts and bolts of how to tie Lync and Exchange 2013 together. Frankly, if you saw me present on this topic at DellWorld, MEC, or Lync Conference, you don’t need to attend this iteration. However, every time I’ve presented it, the room has been packed to capacity, so there’s clearly still demand for the material!

Exchange Connections always has a more relaxed, intimate feeling about it than the bigger Microsoft-themed conferences. This is in part because it’s not a Microsoft event and in part because it is considerably smaller. As a speaker, I really enjoy the chance to engage more deeply with the attendees than is possible at mega-events. If you’re planning to be there, great— and, if not, you should change your plans!

Filed under Office 365, UC&C

MEC 2014 wrap-up by the numbers

The MEC 2014 conference team sent out a statistical summary of the conference to speakers, and it makes for fascinating reading. I wanted to share a few of the highlights of the report because I think it makes some really interesting points about the state of the Exchange market and community.

First: the 101 sessions were attended by a total of 13,079 people. The average attendance across all sessions was 129, which is impressive (though skewed a bit by the size of some of the mega-sessions; Microsoft had to make a bet that lots of people would attend these sessions, which they did!). In terms of attendance, the top 10 sessions were mostly focused on architecture and deployment:

  • Exchange Server 2013 Architecture
  • Ready, set, deploy: Exchange Server 2013
  • Experts Unplugged: Exchange Top Issues – What are they and does anyone care or listen?
  • Exchange Server 2013 Tips & Tricks
  • The latest on High Availability & Site Resilience
  • Exchange hybrid: architecture and deployment
  • Experts Unplugged: Exchange Deployment
  • Exchange Server 2013 Transport Architecture
  • Exchange Server 2013 Virtualization Best Practices
  • Exchange Design Concepts and Best Practices
[Image: Ross Smith IV, not life size]

To put this in perspective, the top session on this list had just over 600 attendees and the bottom had just under 300. Overall attendance for sessions on the architecture track was about double that of the next contender, the deployment and migration track. That tells me that there is still a large audience for discussions of fundamental architecture topics, in addition to the day-in, day-out operational material that we’d normally expect to emerge as the mainstay of content at this point in the product lifecycle.

Next takeaway: Tim McMichael is a rock star. He captured the #1 and #2 slots in the session ratings, which is no surprise to anyone who’s ever heard him speak. I am very hopeful that I’ll get to hear him speak at Exchange Connections this year. The overall quality of speakers was superb, in my biased opinion. I’d like to see my ratings improve (more demos!) but there’s no shame in being outranked by heavy hitters such as Tony, Michael, Jeff Mealiffe, Ross Smith IV (pictured at left; not actual size), or the ebullient Kamal Janardhan. MEC provides an excellent venue for the speakers to mingle with attendees, too, both at structured events like MAPI Hour and in unstructured post-session or hallway conversations. To me, that direct interaction is one of the most valuable parts of attending a conference, both as a speaker and because I can ask other speakers questions about their particular areas of expertise.

Third, the Unplugged sessions were very popular, as measured both by attendance numbers and session ratings. I loved both the format and content of the ones I attended, but they depend on having a good moderator— someone who is both knowledgeable about the topic at hand and experienced at steering a group of opinionated folks back on topic when needed. While I am naturally bad at that, the moderators overall did an excellent job, and I hope to see more Unplugged sessions at future events.

When attendees added sessions to their calendar, the event staff used that as a means of gauging interest and assigning rooms based on the likely number of attendees. However, the data shows that people flocked to sessions based on word-of-mouth and didn’t necessarily update their calendars. I calculated the attendance split by dividing the number of people who attended an actual session by the number who said they would attend; if 100 people calendared the session but 50 attended, that would be a 50% split. The average split across all sessions (except one) was 53.8%— not bad considering how dynamic the attendance was. The one session I left out was “Experts Unplugged: Architecture – HA and Storage”, which had a split of 1167%! Of the top 10 splits (i.e. sessions where the largest percentage of people stood by their original plans), 4 were Unplugged sessions.

Of course, MEC was much more than the numbers, but this kind of data helps Microsoft understand what people want from future events, measured not just by asking them but by observing their actual preferences and actions. I can’t wait to see what the next event, whenever it may be, will look like!

Filed under UC&C

Microsoft updates Recoverable Items quota for Office 365 users

Remember when I posted about the 100GB limit for Personal Archive mailboxes in Office 365? It turns out that there was another limit that almost no one knew about, primarily because it involves mailbox retention. As of today, when you put an Office 365 mailbox on In-Place Hold, the size of the Recoverable Items folder is capped at 30GB. This is plenty for the vast majority of customers because a) not many customers use In-Place Hold in the first place and b) not many users have mailboxes that are large enough to exceed the 30GB quota. Multiply two small numbers together and you get another small number.

However, there are some customers for whom this is a problem. One of the most interesting things about Office 365 to me is the speed at which Microsoft can respond to their requests by changing aspects of the service architecture and provisioning. In this case, the Exchange team is planning to increase the size of the Recoverable Items quota to 100GB. Interestingly, they’re actually starting by increasing the quota for user mailboxes that are now on hold— so from now until July 2014, they’ll be silently increasing the quota for those users. If you put a user on hold today, however, their quota may not be set to 100GB until sometime later.

If you need an immediate quota increase, or if you’re using a dedicated tenant, you’ll still have to use the existing mechanism of filing a support ticket to have the quota increased.
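
If you want to see where a given mailbox stands, both the quotas and the current size of the Recoverable Items folder are easy to inspect from remote PowerShell (the mailbox name here is hypothetical):

```powershell
# Show the Recoverable Items quotas stamped on the mailbox
Get-Mailbox "jsmith" | Format-List RecoverableItemsQuota, RecoverableItemsWarningQuota

# Show how much space Recoverable Items is actually using
Get-MailboxFolderStatistics "jsmith" -FolderScope RecoverableItems |
    Select-Object Name, FolderAndSubfolderSize
```
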

There’s no public post on this yet, but I expect one shortly. In the meantime, bask in the knowledge that with a 50GB mailbox, 100GB Personal Archive, and 100GB Recoverable Items quota, your users probably aren’t going to run out of mailbox space any time soon.

Filed under Office 365, UC&C

Two-factor authentication for Outlook and Office 2013 clients

I don’t usually put on my old man hat, but indulge me for a second. Back in February 2000, in my long-forgotten column for TechNet, here’s what I said about single-factor passwords:

I’m going to let you in on a secret that’s little discussed outside the security world: reusable passwords are evil.

I stand by the second half of that statement: reusable passwords are still evil, 14 years later, but at least the word is getting out, and multi-factor authentication is becoming more and more common in both consumer and business systems. I was wrong when I assumed that smart cards would become ubiquitous as a second authentication factor; instead, the “something you have” role is increasingly often filled by a mobile phone that can receive SMS messages. Microsoft bought into that trend with their 2012 purchase of PhoneFactor, which is now integrated into Azure. Now Microsoft is extending MFA support into Outlook and the rest of the Office 2013 client applications, with a few caveats. I attended a great session at MEC 2014, presented by Microsoft’s Erik Ashby and Franklin Williams, that outlined both the current state of Office 365-integrated MFA and Microsoft’s plans to extend MFA to Outlook.

First, keep in mind that Office 365 already offers multi-factor authentication, once you enable it, for your web-based clients. You can use SMS-based authentication, have the service call you via phone, or use a mobile app that generates authentication codes, and you can define “app passwords” that are used instead of your primary credentials for applications— like Outlook, as it happens— that don’t currently understand MFA. You have to enable MFA for your tenant, then enable it for individual users. All of these services are included with Office 365 SKUs, and they rely on the Azure MFA service. You can, if you wish, buy a separate subscription to Azure MFA if you want additional functionality, like the ability to customize the caller ID that appears when the service calls your users.
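
Enabling MFA for a user can be done in the admin portal or from the MSOnline PowerShell module. This sketch uses the documented StrongAuthenticationRequirement object; the UPN is hypothetical:

```powershell
Import-Module MSOnline
Connect-MsolService

# Build an MFA requirement and stamp it on the user
$req = New-Object -TypeName Microsoft.Online.Administration.StrongAuthenticationRequirement
$req.RelyingParty = "*"
$req.State = "Enabled"
Set-MsolUser -UserPrincipalName "kim@contoso.onmicrosoft.com" `
    -StrongAuthenticationRequirements @($req)
```
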

With that said, here’s what Erik and Franklin talked about…

To start with, we have to distinguish between the three types of identities that can be used to authenticate against the service. Without going into every detail, it’s fair to summarize these as follows:

  • Cloud identities are homed in Azure Active Directory (AAD). There’s no synchronization with on-premises AD because there isn’t one.
  • Directory sync (or just “dirsync”) uses Microsoft’s dirsync tool, or an equivalent third-party tool, to sync an on-premises account with AAD. This essentially gives services that consume AAD a mostly-read-only copy of your organization’s AD.
  • Federated identity uses a federation broker or service such as Active Directory Federation Services (AD FS), Okta, Centrify, and Ping to allow your organization’s AD to answer authentication queries from Office 365 services. In January 2014 Microsoft announced a “Works With Office 365 – Identity” logo program, so if you don’t want to use AD FS you can choose another federation toolset that better meets your requirements.

Client updates are coming to the Office 2013 clients: Outlook, Lync, Word, Excel, PowerPoint, and SkyDrive Pro. With these updates, you’ll see a single unified authentication window for all of the clients, similar (but not necessarily identical) to the existing login window you get on Windows when signing into a SkyDrive or SkyDrive Pro library from within an Office client. From that authentication window, you’ll be able to enter the second authentication factor that you received via phone call, SMS, or authentication app. During the presentation, Franklin (or maybe Erik?) said “if you can authenticate in a web browser, you can authenticate in Office clients”— very cool. (PowerShell will be getting MFA support too, but it wasn’t clear to me exactly when that is happening.)

These client updates will also provide support for two specific types of smart cards: the US Department of Defense Common Access Card (CAC) and the similar-but-civilian Personal Identity Verification (PIV) card. Instead of using a separate authentication token provided by the service, you’ll plug in your smart card, authenticate to it with your PIN, and away you go.

All three of these identity types support MFA; federated identity will gain the ability to do true single sign-on (SSO) in Office 2013 clients, which will be a welcome usability improvement. Outlook will get SSO capabilities with the other two identity types, too.

How do the updates work? That’s where the magic comes in. The Azure Active Directory Authentication Library (ADAL) is being extended to provide support for MFA. When the Office client makes a request to the service, the service returns a header that instructs the client to visit a security token service (STS) using OAuth. At that point, Office uses ADAL to launch the browser control that displays the authentication page; then, as Erik puts it, “MFA and federation magic happens transparent to Office.” If the authentication succeeds, Office gets security tokens that it caches and uses for service authentication. (The flow is described in more detail in the video from the session, which is available now for MEC attendees and will be available in 60 days or so for non-attendees.)

There are two important caveats that were a little buried in the presentation. The first is that MFA in Outlook 2013 will require the use of MAPI/HTTP. More seriously, MFA will not be available to on-premises Exchange 2013 deployments until some time in the future. This aligns with Microsoft’s cloud-first strategy, but it is going to aggravate on-premises customers something fierce. In fairness, because you need the MFA infrastructure hosted in the Microsoft cloud to take advantage of this feature, I’m not sure there’s a feasible way to deliver SMS- or voice-based MFA for purely on-prem environments; if you’re in a hybrid deployment, you’re good to go.
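
On the MAPI/HTTP point: if you’re on Exchange 2013 SP1 or later, the protocol is a single organization-level switch, so it’s not a heavy prerequisite. For example:

```powershell
# Enable MAPI/HTTP org-wide (Exchange 2013 SP1+); Outlook picks it up
# after its next Autodiscover refresh and restart
Set-OrganizationConfig -MapiHttpEnabled $true

# Verify
Get-OrganizationConfig | Format-List MapiHttpEnabled
```
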

Microsoft hasn’t announced a specific timeframe for these updates (other than “second half calendar 2014”), and they didn’t say anything about Mac support, though I would imagine that the rumored v.next of Mac Office would provide this same functionality. The ability to use MFA across all the Office client apps will make it easier for end users, reducing the chance that they’ll depend solely on reusable passwords and thus reducing the net amount of evil in the world— a blessing to us all.

Filed under Office 365, UC&C

Script to download MEC 2014 presentations

Yay for code reuse! Tom Arbuthnot wrote a nifty script to download all the Lync Conference 2014 presentations, and since Microsoft used the same event management system for MEC 2014, I grabbed his script and tweaked it so that it will download the MEC 2014 session decks and videos. It only works if you are able to sign into the MyMEC site, as only attendees can download the presentations and videos at this time. I can’t guarantee that the script will pull all the sessions but it seems to be working so far— give it a try. (And remember, the many “Unplugged” sessions weren’t recorded so you won’t see any recordings or decks for them). If the script works, thank Tom; if it doesn’t, blame me.
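
I won’t reproduce Tom’s script here, but the core of the approach is just a download loop over the session URLs once you’re signed in. A stripped-down, hypothetical sketch of that pattern (the real script handles the sign-in cookies and builds the URL list for you):

```powershell
# Assumes you've already built a list of deck/video URLs in mec-sessions.txt
$urls = Get-Content ".\mec-sessions.txt"
New-Item -ItemType Directory -Path ".\MEC2014" -Force | Out-Null

foreach ($url in $urls) {
    # Save each file under its original name
    $file = Join-Path ".\MEC2014" (Split-Path $url -Leaf)
    Invoke-WebRequest -Uri $url -OutFile $file
}
```
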

Download the script

Filed under UC&C

The value of lagged copies for Exchange 2013

Let’s talk about… lagged copies.

For most Exchange administrators, the subject of lagged database copies falls somewhere between “the Kardashians’ shoe sizes” and “which of the 3 Stooges was the funniest” in terms of interest level. The concept is easy enough to understand: a lagged copy is merely a passive copy of a mailbox database where the log files are not immediately played back, as they are with ordinary passive copies. The period between the arrival of a log file and the time when it’s committed to the database is known as the lag interval. If you have a lag interval of 24 hours set to a database, a new log for that database generated at 3pm on April 4th won’t be played into the lagged copy until at least 3pm on April 5th (I say “at least” because the exact time of playback will depend on the copy queue length). The longer the lag interval, the more “distance” there is between the active copy of the mailbox database and the lagged copy.
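
Mechanically, the lag interval is just a parameter on the database copy. A sketch with hypothetical server and database names, setting the 24-hour lag from the example above:

```powershell
# Add a passive copy of DB1 on EX3 with a 24-hour replay lag (format is dd.hh:mm:ss)
Add-MailboxDatabaseCopy -Identity "DB1" -MailboxServer "EX3" `
    -ReplayLagTime 1.00:00:00 -ActivationPreference 4

# Check the copy and replay queue lengths to see the lag in action
Get-MailboxDatabaseCopyStatus "DB1\EX3"
```
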

Lagged copies are intended as a last-ditch “goalkeeper” safety mechanism in case of logical corruption. Physical corruption caused by a hardware failure will happen after Exchange has handed the data off to be written, so it won’t be replicated. Logical corruption introduced by components other than Exchange (say, an improperly configured file-level AV scanner) that directly write to the MDB or transaction log files wouldn’t be replicated in any event, so the real use case for the lagged copy is to give you a window in time during which logical corruption caused by Exchange or its clients hasn’t yet been replicated to the lagged copy. Obviously the size of this window depends on the length of the lag interval; whether it is sufficient for you to a) notice that the active database has become corrupted, b) play the accumulated logs forward into the lagged copy, and c) activate the lagged copy depends on your environment.

The prevailing sentiment in the Exchange world has largely been “I do backups already, so lagged copies don’t give me anything.” When Exchange 2010 first introduced the notion of a lagged copy, Tony Redmond weighed in on it. Here’s what he said back then:

For now, I just can’t see how I could recommend the deployment of lagged database copies.

That seems like a reasonable stance, doesn’t it? At MEC this year, though, Microsoft came out swinging in defense of lagged copies. Why would they do that? Why would you even think of implementing lagged copies? It turns out that there are some excellent reasons that aren’t immediately apparent. (It may help to review some of the resiliency and HA improvements delivered in Exchange 2013; try this excellent omnibus article by Microsoft’s Scott Schnoll if you want a refresher.) Here are some of the reasons why Microsoft has begun recommending the use of lagged copies more broadly.

1. Lagged copies are better in 2013

Exchange 2013 includes a number of improvements to the lagged copy mechanism. In particular, the new loose truncation feature introduced in SP1 means that you can prevent a lagged copy from taking up too much log space by adjusting the amount of log space that the replay mechanism will use; when that limit is reached, the logs will be played down to make room. Exchange 2013 (and SP1) also make a number of improvements to the Safety Net mechanism (discussed fully in Chapter 2 of the book), which can be used to play missing messages back into a lagged copy by retrieving them from the transport subsystem.

2. Lagged copies are continuously verified

When you back up a database, Exchange verifies every page as it is backed up by computing its checksum and comparing it to the stored checksum; if that check fails, you get the dreaded JET_errReadVerifyFailure (-1018) error. However, just because you can successfully complete the backup doesn’t mean that you’ll be able to restore it when the time comes. By comparison, the Exchange log playback mechanism logs errors immediately when they are encountered during log playback. If you’re monitoring event logs on your servers, you’ll be notified as soon as this happens, and you’ll know that your lagged copy is unusable now, not when you need to restore it. If you’re not monitoring your event logs, then lagged copies are the least of your problems.
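
If you want to watch for exactly that kind of failure, the ESE events are easy to pull from the Application log. A minimal sketch; I’m assuming event ID 474 (the page checksum mismatch that corresponds to -1018) and the default log location:

```powershell
# Look for recent ESE page checksum failures on this mailbox server
Get-WinEvent -FilterHashtable @{
    LogName      = 'Application'
    ProviderName = 'ESE'
    Id           = 474
} -MaxEvents 10 -ErrorAction SilentlyContinue
```
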

3. Lagged copies give you more flexibility for recovery

When your active and passive copies of a database become unusable and you need to fall back to your lagged copy, you have several choices, as described in TechNet. You can easily play back every log that hasn’t yet been committed to the database, in the correct order, by using Move-ActiveMailboxDatabase. If you’d rather, you can play back the logs up to a certain point in time by removing the log files that you don’t want to play back. You can also play messages back directly from Safety Net into the lagged copy.
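
The full-playback path is the simplest of those options. Here’s a hedged sketch of what activation might look like, with hypothetical names (the TechNet procedure has more steps, including verifying the copy first):

```powershell
# Suspend the lagged copy before doing anything else
Suspend-MailboxDatabaseCopy "DB4\EX3" `
    -SuspendComment "Activating lagged copy" -Confirm:$false

# (To recover to a point in time, you'd move aside the logs you don't want replayed here)

# Activate the copy, letting replay catch up all outstanding logs
Move-ActiveMailboxDatabase "DB4" -ActivateOnServer "EX3" `
    -SkipLagChecks -MountDialOverride:BestEffort
```
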

4. There’s no hardware penalty for keeping a lagged copy

Some administrators assume that you have to keep lagged copies of databases on a separate server. While this is certainly supported, you don’t have to have a “lag server” or anything like unto it. The normal practice in most designs has been to store lagged copies on other servers in the same DAG, but you don’t even have to do that. Microsoft recommends that you keep your mailbox databases no bigger than 2TB. Stuff your server with a JBOD array of the new 8TB disks (or, better yet, buy a Dell PowerVault MD1220) and you can easily put four databases on a single disk: the active copy of DB1, the primary passive copy of DB2, the secondary passive copy of DB3, and the lagged copy of DB4. This gives you an easy way to get the benefits of a 4-copy DAG while still using the full capacity of the disks you have: the additional IOPS load of the lagged copy will be low, so hosting it on a volume that already has active and passive copies of other databases is a reasonable approach (one, however, that you’ll want to test with Jetstress).

It’s always been the case that the architecture Microsoft recommends when a new version of Windows or Exchange is released evolves over time as they, and we, get more experience with it in the real world. That’s clearly what has happened here; changes in the product, improvements in storage hardware, and a shift in the economic viability of conventional backups mean that lagged copies are now much more appropriate for use as a data protection mechanism than they were in the past. I expect to see them deployed more and more often as Exchange 2013 deployments continue and our collective knowledge of best practices for them improves.

Filed under UC&C

MEC 2014 wrapup

BLUF: it was a fantastic conference, far and away the best MEC I’ve attended. The quality of the speakers and technical presentations was very high, and the degree of community interaction and engagement was too.

I arrived in Austin Sunday afternoon and went immediately to dinner at County Line on the Lake, a justly famous Austin BBQ restaurant, to put on a “thank you” dinner for some of the folks who helped me with my book. Unfortunately, the conference staff had scheduled a speakers’ meeting at the same time, and a number of folks couldn’t attend due to flight delays or other last-minute intrusions. Next time I’ll poll invitees for their preferred time, and perhaps that will help. However, the dinner and company were both excellent, and I now have a copy of the book signed by all in attendance as a keepsake— a nice reversal of my usual pattern of signing books and giving them away.

Monday began with the keynote. If you follow me (or any number of other Exchange MVPs) on Twitter, you already know what I think: neither the content of the keynote nor its delivery was up to snuff when compared either to prior MEC events or to other events such as Lync Conference. At breakfast Monday, Jason Sherry and I were excitedly told by an attendee that his Microsoft account rep had insisted that he attend the keynote, and for the life of me I couldn’t figure out why until the tablet giveaway. That raised the energy level quite a bit! I think that for the next MEC, Julia White should be handed the gavel and left to run the keynote as she sees fit; I can guarantee that would result in a more lively and informative event. (For another time: a review of the Venue 8 Pro, which I like a great deal based on my use of it so far.) One area where the keynote excelled, though, was in its use of humor. The video vignette featuring Greg Taylor and David Espinoza was one of the funniest such bits I’ve ever seen, and all of the other segments were good as well— check them out here. The keynote also featured a few good-natured pokes at the community, such as this:

Ripped

For the record, although I’ve been lifting diligently, I am not (yet) built like the guy who’s wearing my face on screen… but there’s hope.

I took detailed notes on each of the sessions I attended, so I’ll be posting about the individual sessions over the next few days. It’s fair to say that I learned several valuable things at each session, which is sort of the point behind MEC. The quality of the “unplugged” sessions I attended varied a bit; the worst was merely OK, while the best (probably the one on Managed Availability) was extremely informative. Interestingly, Tony and I chose very few of the same sessions, so his write-ups and mine will largely complement each other.

My Monday schedule started with Kamal Janardhan’s session on compliance and information protection. Let me start by saying that Kamal is one of my favorite Microsoft people ever. She is unfailingly cheerful, and she places a high value on transparency and openness. When she asks for feedback on product features or futures, it’s clear that she is sincerely seeking honest input, not just asking pro forma. Her session was great. From there, I did my two back-to-back sessions, both of which went smoothly. I was a little surprised to see a nearly full room (I think there were around 150 people) for my UM session, and even more surprised to see that nearly everyone in the room had already deployed UM on either Exchange 2010 or 2013. That’s a significant change from the share of attendees who had deployed UM at MEC 2012. I then went to the excellent “Unplugged” session on “Exchange Top Issues”, presented by the supportability team and moderated by Tony.

After the show closed for the day, I was fortunate to attend the dinner thrown by ENow Software for MVPs/MCMs and some of their key customers. Jay and Jess Gundotra, as always, were exceptional hosts; the meal (at III Forks) was excellent, and the company and conversation were delightful. Sadly, I had to join a work conference call right after dinner, so I missed the attendee party.

Tuesday started with a huge surprise. On my way to the “Exchange Online Migrations Technical Deep Dive” session (which was good but not great; it wasn’t as deep as I expected), I noticed the picture below flashing on the hallway screens. Given that it was April Fool’s Day, I wasn’t surprised to see the event planners playing jokes on attendees; I just wasn’t expecting to be featured as part of their plans. Sadly, although I’m happy to talk to people about migrating to Office 365, the FAA insists that I do it on the ground and not in the air. For lunch, I had the good fortune to join a big group of other Dell folks (including brand-new MVP Andrew Higginbotham, MCM Todd Hawkins, Michael Przytula, and a number of people from Dell Software I’d not previously met) at Iron Works BBQ. The food and company were both wonderful, and they were followed by a full afternoon of excellent sessions. The highlight of my Tuesday sessions was probably Charlie Chung’s session on Managed Availability, which was billed as a 300-level session but was more like a 1300-level one. I will definitely have to watch the recording a few times to make sure I didn’t miss any of the nuances.

Surprise!

This is why I need my commercial pilot’s license— so I can conduct airborne sessions at the next MEC.

Tony has already written at length about the “Exchange Oscars” dinner we had Tuesday night at Moonshine. I was surprised and humbled to be selected to receive the “Hall of Fame” award for sustained contributions to the Exchange community; I feel like there are many other MVPs, current and past, who deserve the award at least as much, if not more. It was great to be among so many friends spanning my more than 15 years working with Exchange; the product group turned out en masse, and the conversation, fellowship, and celebration were the high point of the entire conference for me. I want to call out Shawn McGrath, who received the “Best Tool” award for the Exchange Remote Connectivity Analyzer, which became TestExchangeConnectivity.com. Shawn took a good idea and relentlessly drove it from conception to implementation, and the whole world of Exchange admins has benefited from his effort.

Wednesday started with the best “Unplugged” session I attended: it covered Managed Availability and, unlike the other sessions I went to, featured a panel made up mostly of engineers from the development team. There were a lot of deep technical questions and a number of pointed roadmap discussions (not all of which were at my instigation). The most surprising session I attended was probably the one on updates to Outlook authentication— it turns out that true single sign-on (SSO) is coming to all the Office 2013 client applications, and fairly soon, at least for Office 365 customers. More on that in my detailed session write-ups. The MVPs were also invited to a special private session with Perry Clarke. I can’t discuss most of what we talked about, but I can say that I learned about the CAP theorem (which hadn’t even been invented when I got my computer science degree, sigh), and that Perry recognizes the leadership role Exchange engineering has played in bringing Microsoft’s server products to high scale. Fun stuff!

Then I flew home: my original flight was delayed, so they put me on one leaving an hour earlier. The best part of the return trip might have been flying on one of American’s new A319s to Huntsville. These planes are a huge improvement over the nasty old MD80s that AA used to fly DFW-HSV, and they’re nicer than DL’s ex-AirTran 717s to boot. So AA is still in contention for my westbound travel business.

A word about the Hilton Austin Downtown, the closest hotel to the conference center: the newly refurbished rooms include a number of extremely practical touches. There’s a built-in nightlight in the bathroom light switch, each bedside table has its own 3-outlet power strip plus a USB port, and the work desk has USB charging ports as well. Charging my phone, Kindle, Venue 8 Pro, and backup battery was much simpler thanks to the plethora of outlets. The staff was unfailingly friendly and helpful too, which is always welcome. However, the surrounding area seemed to have more than its share of sirens and other loud noises; next time I might pick a hotel a little farther away.

I’ll close by saying how much I enjoyed seeing old friends and making new ones at this conference. I don’t have room (or a good enough memory) to make a comprehensive list, but to everyone who took the time to say hello in the hall, ask good questions in a session, wave at me across the expo floor, or pass the rolls at dinner— thank you.

Now to get ready for TechEd and Exchange Connections…

Leave a comment

Filed under UC&C

Getting ready for MEC 2014

Wow, it’s been nearly a month since my last post here. In general I am not a believer in posting on a regular schedule, preferring instead to wait until I have something to say, and all of my “saying” lately has been on behalf of my employer. I have barely even had time to fly. For another time: a detailed discussion of the ins and outs of shopping for an airplane. For now, though, I am making my final preparations to attend this year’s Microsoft Exchange Conference (MEC) in Austin! My suitcase is packed, all my devices are charged, my slides are done, and I am prepared to overindulge in knowledge sharing, BBQ eating, and socializing.

It is interesting to see the difference in flavor between Microsoft’s major enterprise-focused conferences. This year was my first trip to Lync Conference, which I would summarize as a pretty even split between deeply technical sessions and marketing focused on the business and customer value of “universal communications”. When I reviewed the session attendance and rating numbers, it was no surprise that the most-attended and highest-rated sessions tended to be 400-level technical sessions such as Brian Ricks’ excellent deep dive on Lync client sign-in behavior. While I’ve never been to a SharePoint Conference, from what my fellow MVPs say about it, Microsoft expended a great deal of effort highlighting the social features of the SharePoint ecosystem, with a heavy focus on customization and somewhat less attention directed at SharePoint Online and Office 365. (Oh, and YAMMER YAMMER YAMMER YAMMER YAMMER.) Judging from reactions in social media, this focus was well received, even if the content was inevitably less technical given the newness of the technology.

That brings us to the 2014 edition of MEC. The event planners have done something unique by loading the schedule with “Unplugged” panel discussions, moderated by MVPs and MCMs/MCSMs and made up of Microsoft and industry experts in particular technologies. These panels provide an unparalleled opportunity to get, and give, very candid feedback on individual parts of Exchange, and I plan on attending as many of them as I can. This is in no way meant to slight the many other excellent sessions and speakers that will be there. I’d planned to summarize specific sessions that I thought might be noteworthy, but Tony published an excellent post this morning that far outdoes what I had in mind, breaking down sessions by topic area and projected attendance. Give it a read.

I’m doing two sessions on Monday: Exchange Unified Messaging Deep Dive at 2:45p and Exchange ActiveSync: Management Challenges and Best Practices at 11:45a. The latter is a vendor session with the folks from BoxTone, during which attendees get both lunch (yay) and the opportunity to see BoxTone’s products in action. They’re also doing a really interesting EAS health check: you provide CAS logs, and they run them through a static analysis tool that, I can almost guarantee, will tell you things you didn’t know about your EAS environment. Drop by and say hello!

Leave a comment

Filed under UC&C