Category Archives: General Tech Stuff

The difference between Suunto cadence and bike pods

I spent way too much time trying to figure this out today, so I’m blogging it in hopes that the intertubez will make it easy for future generations to find the answer to this question: what’s the difference between a cadence pod and a bike pod according to Suunto?

See, the Suunto Ambit series of watches can pair with a wide range of sensors that use the ANT+ standard. You can mix and match ANT+ devices from different manufacturers, so a Garmin sensor will work with a Suunto watch, or a Wahoo heart-rate belt will work with a Specialized bike computer. I wanted to get a speed and cadence sensor for my bike. These sensors measure two parameters: how fast you’re going and how rapidly you’re pedaling. (This is a great explanation of what these sensors really measure and how they work.) Ideally you want a nice, steady cadence of 75-90 rpm. I knew I had a variable cadence, and I wanted to measure it to get a sense for where I was at.

I ordered a Wahoo combined cadence/speed sensor from Amazon and installed it on the bike, which was pretty straightforward. Then I paired it with the watch using the “bike POD” option. (Suunto, for some reason, calls sensors “PODs”). That seemed to work fine, except that I wasn’t getting any cadence or speed data. But I knew the sensor was working because the watch paired with it. I tried changing the sensor battery, moving the sensor and its magnets around, and creating a new tracking activity that didn’t use GPS to see if I got speed data from the sensor. Then I thought “maybe it’s because I didn’t pair a cadence pod”, so I tried that, but no matter what I did, the watch refused to see the Wahoo sensor as a cadence sensor.

Here’s why: to Suunto, a “bike POD” is a combined speed/cadence sensor. A “cadence pod” is for cadence only. Like Bluetooth devices, each ANT+ device emits a profile that tells the host device what it is. That’s why the watch wouldn’t see the sensor, which reported itself as a combined cadence/speed unit, when I tried to pair a cadence pod. After I figured that out, I quit trying to pair the cadence pod… but I still didn’t get speed or cadence data.
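
To make the profile idea concrete, here’s a rough sketch of how a host can tell these sensors apart. The device type numbers come from the published ANT+ device profiles; the mapping to Suunto’s POD names is my own reading of their terminology, not anything official:

# ANT+ device type numbers, per the published device profiles
$antDeviceTypes = @{
    120 = 'heart rate belt'
    121 = 'combined bike speed/cadence sensor (what Suunto calls a "bike POD")'
    122 = 'cadence-only sensor (a "cadence POD")'
    123 = 'speed-only sensor'
}

# A combined sensor like the Wahoo advertises device type 121, so a search
# for a cadence-only device (type 122) will never find it.
$antDeviceTypes[121]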

The solution turned out to be simple. For some reason, in the cycling sport activity, the “Bike POD” sensor was unchecked, so the watch wasn’t reading its data stream during the activity. I don’t remember unchecking the box, but maybe I did. In any event, once I checked the “Bike POD” box and updated the watch, I immediately started getting cadence and speed data, so I set out for a ride.

[Image: the cycling activity’s sensor checkboxes. Hint: if you uncheck any of these boxes the watch will never, ever pay attention to that sensor]

I thought it was a pretty good ride from a speed perspective, even though I took a new route that had a number of hills– I had some trouble with that. But look at my cadence… you can see that it definitely needs work. Sigh. One of the nifty things about Suunto’s web site is that it shows vertical speed when you point at cadence data, so I could see where I was struggling to get up hills (meaning I needed to change gears) or loafing when going downhill. Just one more thing to put on my to-fix list…

[Image: speed and cadence data from the ride]

Filed under Fitness, General Tech Stuff, HOWTO

Exchange Server and Azure: “not now” vs “never”

Wow, look what I found in my drafts folder: an old post.

Lots of Exchange admins have been wondering whether Windows Azure can be used to host Exchange. This is to be expected for two reasons. First, Microsoft has been steadily raising the volume of Azure-related announcements, demos, and other collateral material. TechEd 2014 was a great example: there were several Azure-related announcements, including the availability of ExpressRoute for private connections to the Azure cloud and several major new storage improvements. These changes build on Microsoft’s aggressive evangelism, which has been working, with considerable success, to convince iOS and Android developers to use Azure as the back-end service for their apps. The other reason, sadly, is why I’m writing: there’s a lot of misinformation about Exchange on Azure (e.g. this article from SearchExchange titled “Points to consider before running Exchange on Azure”, which is wrong, wrong, and wrong), and you need to be prepared to defuse its wrongness with customers who may misunderstand what they’re potentially getting into.

On its face, Azure’s infrastructure-as-a-service (IaaS) offering seems pretty compelling: you can build Windows Server VMs and host them in the Azure cloud. That seems like it would be a natural fit for Exchange, which is increasingly viewed as an infrastructure service by customers who depend on it. However, there are at least three serious problems with this approach.

First: it’s not supported by Microsoft, something that the “points to consider” article doesn’t even mention. The Exchange team doesn’t support Exchange 2010 or Exchange 2013 on Azure or Amazon EC2 or anyone else’s cloud service at present. It is possible that this will change in the future, but for now any customer who runs Exchange on Azure will be in an unsupported state. It’s fun to imagine scenarios where the Azure team takes over first-line support responsibility for customers running Exchange and other Microsoft server applications; this sounds a little crazy but the precedent exists, as EMC and other storage companies did exactly this for users of their replication solutions back in Exchange 5.5/2000 times. Having said that, don’t hold your breath. The Azure team has plenty of other more pressing work to do first, so I think that any change in this support model will require the Exchange team to buy in to it. The Azure team has been able to get that buy-in from SharePoint, Dynamics, and other major product groups within Microsoft, so this is by no means impossible.

Second: it’s more work. In some ways Azure gives you the worst of the hosted Exchange model: you have to do just as much work as you would if Exchange were hosted on-premises, but you’re also subject to service outages, inconsistent network latency, and all the other transient or chronic irritations that come, at no extra cost, with cloud services. Part of the reason the Exchange team doesn’t support Azure is that there’s no way to guarantee that any IaaS provider is offering enough IOPS, low-enough latency, and so on, so troubleshooting performance or behavior problems with a service such as Azure can quickly turn into a nightmare. If Azure could provide guaranteed service levels for disk I/O throughput and latency, that would help quite a bit, but it would probably require significant engineering effort. Although I don’t recommend that you do it at the moment, you might be interested in this writeup on how to deploy Exchange on Azure; it gives a good look at some of the operational challenges you might face in setting up Exchange+Azure for test or demo use.

Third: it’s going to cost more. Remember that IaaS providers typically charge for resource consumption. Exchange 2013 (and Exchange 2010, too) is designed to be “always on”. The workload management features in Exchange 2013 provide throttling, sure, but they don’t eliminate all of the background maintenance that Exchange is more-or-less continuously performing. These tasks, including GAL grammar generation for Exchange UM, the managed folder assistant, calendar repair, and various database-related tasks, have to run, and so IaaS-based Exchange servers are continually going to be racking up storage, CPU, and network charges. In fairness, I haven’t estimated what these charges might be for a typical test-lab environment; it’s possible that they’d be cheap enough to be tolerable, but I’m not betting on it, and no doubt a real deployment would be significantly more expensive.

Of course, all three of these problems are soluble: the Exchange team could at any time change their support policy for Exchange on Azure, and/or the Azure team could adjust the cost model to make the cost for doing so competitive with Office 365 or other hosted solutions. Interestingly, though, two different groups would have to make those decisions, and their interests don’t necessarily align, so it’s not clear to me if or when we might see this happen. Remember, the Office 365 team at Microsoft uses physical hardware exclusively for their operations.

Does that mean that Azure has no value for Exchange? On the contrary. At TechEd New Orleans in June 2013, Microsoft’s Scott Schnoll said they were studying the possibility of using an Azure VM as the witness server for DAGs in Exchange 2013 CU2 and later. This would be a super feature because it would allow customers with two or more physically separate data centers to build large DAGs that weren’t dependent on site interconnects (at the risk, of course, of requiring always-on connectivity to Azure). The cost and workload penalty for running an FSW on Azure would be low, too. In August 2013, the word came down: Azure in its present implementation isn’t suitable for use as an FSW. However, the Exchange team has requested some Azure functionality changes that would make it possible to run this configuration in the future, so we have that to look forward to.

Then we have the wider world of Azure services beyond IaaS: Windows Azure Active Directory (WAAD), Azure Rights Management Services, Azure Multi-Factor Authentication, and the large-volume disk ingestion program (now known as the Azure Import/Export Service). As time passes, Microsoft keeps delivering more, and better, Azure services that complement on-premises Exchange, which has been really interesting to watch. I expect that trend to continue, and there are other, less expensive ways to use IaaS for Exchange if you only want it for test labs and the like. More on that in a future post….

Filed under General Tech Stuff, UC&C

2-factor Lync authentication and missing Exchange features

Two-factor authentication (or just 2FA) is increasingly important as a means of controlling access to a variety of systems. I’m delighted that SMS-based authentication (which I wrote about in 2008) has become a de facto standard for many banks and online services. Microsoft bought PhoneFactor and offers its SMS-based system as part of multi-factor authentication for Azure, which makes it even easier to deploy 2FA in your own applications.

Customers have been demanding 2FA for Lync, Exchange, and other on-premises applications for a while now. Exchange supports the use of smart cards for authentication with Outlook Anywhere and OWA, and various third parties such as RSA have shipped authentication solutions that support other authentication factors, such as one-time codes or tokens. Lync, however, has been a little late to the party. With the July 2013 release of Lync Server 2013 CU2, Lync supports the use of smart cards (whether physical or virtual) as an authentication mechanism. Recently I became aware that there are some Lync features that aren’t available when the client authenticates with a smart card– that’s because the client talks to two different endpoints. It authenticates to Lync using two-factor authentication, but the Lync client can’t currently authenticate to Exchange using the same smart card, so services based on access through Exchange Web Services (EWS) won’t work. The docs say that this is “by design,” which I hope means “we didn’t have time to get to it yet.”

This limitation means that Lync 2013 clients using 2FA cannot use several features, including:

  • the Unified Contact Store. You’ll need to use Invoke-CsUcsRollback to disable Lync 2FA users’ UCS access if you’ve enabled it (there’s a quick sketch of this after the list).
  • the ability to automatically set presence based on the user’s calendar state, i.e. the Lync client will no longer set your presence to “out of office”, “in a meeting,” etc. based on what’s on your calendar. Presence that indicates call states such as “in a conference call” still works.
  • integration with the Exchange-based Conversation History folder. If you’ve configured the use of Exchange 2013 as an archive for Lync on the server side, that still works.
  • access to high-definition user photos
  • the ability to see and access Exchange UM voicemail messages from the Lync client
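
For that first item, here’s a minimal sketch of the rollback. The cmdlets are real Lync Server 2013 cmdlets, but the SIP address and AD group name are made up for illustration; substitute however you actually identify your 2FA users:

# Roll a single user back from the Unified Contact Store to server-side Lync contacts
Invoke-CsUcsRollback -Identity "sip:kmyer@contoso.com"

# Or roll back everyone in a (hypothetical) "Lync 2FA Users" AD group
Get-CsUser -LdapFilter "memberOf=CN=Lync 2FA Users,OU=Groups,DC=contoso,DC=com" |
    ForEach-Object { Invoke-CsUcsRollback -Identity $_.SipAddress }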

These limitations weren’t fixed in CU3, but I am hopeful that a not-too-distant future version of the client will enable full 2FA use. In the meantime, if you’re planning on using 2FA, keep these limitations in mind.

Filed under General Tech Stuff, UC&C

Need Windows licensing help? Better call Paul

No, I’m not giving it. That would be like me giving advice on how to do a pencil drawing, or what wine goes with In-N-Out Burger.

A year or so ago, I had a very complex Windows licensing question that Microsoft was unable to answer. More to the point, no two Microsoft people were able to give me the same answer. I did a little digging and found Paul DeGroot of Pica Communications, author of the only book on Microsoft licensing that I know of. Paul quickly and clearly answered my question, and a couple of rounds of follow-up questions after that. Armed with his information, I was able to solve the particular problem I was having in a less expensive, less painful way than just buying all the licenses. As I was cleaning out my inbox, I found our discussion and remembered, guiltily, that I meant to mention Paul’s services earlier. Under the banner of “better late than never,” consider this a belated, and strong, recommendation.

Filed under General Tech Stuff, UC&C

PC reliability: Apple, Dell, and lessons for servers?

Via Ed Bott, a fascinating article on the real-world reliability of Windows 7 and Windows 8 PCs: Want the most reliable Windows PC? Buy a Mac (or maybe a Dell). You should read the article, which outlines a report issued by Soluto, a cloud-based PC health and service monitoring company. Their report analyzes data reported to their service by customers to attempt to answer the question of which manufacturer’s PCs are the most reliable. Apple’s 13″ MacBook Pro comes out on top, with Acer’s Aspire E1-571 coming in second and Dell’s XPS 13 in third. In fact, out of the top 10, Apple has two spots, Acer has two spots, and Dell has five. Ed points out that it’s odd that Hewlett-Packard doesn’t have any entries in the list, and that Lenovo (which I have long considered the gold standard for laptops not made by Apple) has only one.

The report, and Ed’s column, speculate on why the results came out this way. I don’t know enough about the PC laptop world to have a good feel for how many of the models on their list are consumer-targeted versus business-targeted, although they do include cost figures that help provide some clues. There’s no doubt that the amount of random crap that PC vendors shovel onto their machines makes a big difference in the results, although I have to suspect that the quality of vendor-provided drivers makes a bigger difference. Graphics drivers are especially critical, since they run in kernel mode and can easily crash the entire machine; the bundled crapware included by many vendors strikes me as more of an annoyance than a reliability hazard (at least in terms of unwanted reboots or crashes).

The results raise the interesting question of whether there are similar results for servers. Given that servers from major vendors such as Dell and H-P come with very clean Windows installs, I wouldn’t expect to see driver issues play a major part in server reliability. My intuition is that the basic hardware designs from tier 1 vendors are all roughly equal in reliability, and that components such as SAN HBAs or RAID controllers probably have a bigger negative impact on overall reliability than the servers themselves– but I don’t have data to back that up. I’m sure that server vendors do, and equally sure that they guard it jealously.

More broadly, it’s fascinating that we can even have this discussion.

First of all, the rise of cloud-based services like Soluto (and Microsoft’s own Windows Intune) means that now we have data that can tell us fascinating things. I remember that during the development period of Windows 2003, Microsoft spent a great deal of effort persuading customers to send them crash dumps for analysis. The analysis revealed that the top two causes of server failures were badly behaving drivers and administrator errors. There’s not much we can do about problem #2, but Microsoft attacked the first problem in a number of ways, including restructuring how drivers are loaded and introducing driver signing as a means of weeding out unstable or buggy drivers. But that was a huge engineering effort led by a single vendor, using data that only they had– and Microsoft certainly didn’t embarrass or praise any particular OEM based on the number of crashes their hardware and drivers had.

Second, Microsoft’s ongoing effort to turn itself into a software + services + devices company (or whatever they’re calling it this week) means that they are able to gather a huge wealth of data about usage and behavior. We’ve seen them use that data to design the Office fluent interface, redesign the Xbox 360 dashboard multiple times, and push a consistent visual design language across Windows 8, Windows Phone 8, Xbox 360, and apps for other platforms such as Xbox SmartGlass. It’s interesting to think about the kind of data they are gathering from operating Office 365, and what kind of patterns that might reveal. I can imagine that Microsoft would like to encourage Exchange 2013 customers to share data gathered by Managed Availability, but there are challenges in persuading customers to allow that data collection, so we’ll have to see what happens.

To the cloud…

Filed under General Tech Stuff, UC&C

Loading PowerShell snap-ins from a script

So I wanted to launch an Exchange Management Shell (EMS) script to do some stuff for a project at work. Normally this would be straightforward, but because of the way our virtualized lab environment works, it took me some fiddling to get it working.

What I needed to do was something like this:

c:\windows\system32\WindowsPowerShell\v1.0\powershell.exe -command "someStuff"

That worked fine as long as all I wanted to do was run basic PowerShell cmdlets. Once I started trying to run EMS cmdlets, things got considerably more complex because I needed a full EMS environment. First I had to deal with the fact that EMS, when it starts, tries to perform a CRL check. On a non-Internet-connected system, it will take 5 minutes or so to time out. I had completely forgotten this, so I spent some time fooling around with various combinations of RAM and virtual CPUs trying to figure out what the holdup was. Luckily Jeff Guillet set me straight when he pointed me to this article, helpfully titled “Configuring Exchange Servers Without Internet Access.” That cut the startup time waaaaay down.
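
As I understand it, the workaround boils down to telling the .NET runtime not to generate publisher evidence (that’s what triggers the CRL lookup) when powershell.exe starts. Here’s a minimal sketch, assuming the default install path and that powershell.exe.config doesn’t already exist; if it does, merge the <runtime> element into it instead:

$configPath = "$env:windir\System32\WindowsPowerShell\v1.0\powershell.exe.config"
$crlFix = @"
<configuration>
  <runtime>
    <generatePublisherEvidence enabled="false"/>
  </runtime>
</configuration>
"@
Set-Content -Path $configPath -Value $crlFix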

However, I was still having a problem: my scripts wouldn’t run. They were complaining that “No snap-ins have been registered for Windows PowerShell version 2”. What the heck? Off to Bing I went, whereupon I found that most of the people reporting similar problems were trying to launch PowerShell.exe and load snap-ins from web-based applications. That puzzled me, so I did some more digging. Running my script from the PowerShell session that appears when you click the icon in the quick launch bar seemed to work OK. Directly running the executable by its path (i.e. %windir%\system32\WindowsPowerShell\v1.0\powershell.exe) worked OK too… but it didn’t work when I did the same thing from my script launcher.

Back to Bing I went. On about the fifth page of results, I found this gem at StackExchange. The first answer got me pointed in the right direction. I had completely forgotten about WOW64 file system redirection, the Windows compatibility feature that transparently redirects a 32-bit process’s references to %windir%\system32 over to the 32-bit binaries in %windir%\SysWOW64, even when you think you’ve supplied the “right” path. In my case, I wanted the x64 version of PowerShell, but that’s not always what I was getting because my script launcher is a 32-bit x86 process. When it launched PowerShell.exe from a System32 path, redirection handed it the x86 version, which can’t load x64 snap-ins and thus couldn’t run EMS.

The solution? All I had to do was read a bit further down in the StackExchange article to see this MSDN article on developing applications for SharePoint Foundation, which points out that you must use %windir%\sysnative as the path when running PowerShell scripts after a Visual Studio build. Why? Because Visual Studio is a 32-bit application, but the SharePoint snap-in is x64 and must be run from an x64 PowerShell session… just like Exchange.

Armed with that knowledge, I modified my scripts to run PowerShell using sysnative vice the “real” path and poof! Problem solved. (Thanks also to Michael B. Smith for some bonus assistance.)
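
Concretely, the launch command ended up looking something like this. It’s a sketch rather than my exact script: the snap-in name shown is Exchange 2010’s (Exchange 2013 uses Microsoft.Exchange.Management.PowerShell.SnapIn), and someStuff stands in for the real commands:

%windir%\sysnative\WindowsPowerShell\v1.0\powershell.exe -command "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.E2010; someStuff"

A quick sanity check from inside the launched session: [IntPtr]::Size returns 8 in a 64-bit PowerShell and 4 in an x86 one.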

Filed under General Tech Stuff, UC&C

Why you should keep multiple backups

I spent the weekend a) in Huntsville with the boys and b) in a fog of cold medication. In fact, I called in sick to work today, which is really unusual for me. A coworker e-mailed me to ask for a couple of documents I’d written, and when I saw her e-mail (I called in sick, not dead, so I was still checking e-mail), I couldn’t find the files in SkyDrive on my Surface Pro. “Oh,” I thought. “I must have checked them in to our SharePoint site.”

Nope.

“Maybe they’re on my work desktop.” A quick RDP connection and… nope.

Now I was beginning to freak out a bit. I knew I had written these documents. I knew right where I’d left them. But they were nowhere to be found.

I went back to my MacBook Pro, which is sort-of my desktop now… nope.

Then the fog lifted, oh so briefly, and I figured out what had happened.

For some reason, about a month ago, the SkyDrive client for OS X started pegging the CPU at random intervals. It was still syncing, most of the time, but when it started burning the CPU it would kick the MacBook Pro’s fans into turbo mode, so I started shutting the app off until I explicitly wanted it to sync. (This reminded me of the ancient technology known as Groove, but let us never speak of it again.) Eventually I got tired of this and started troubleshooting the problem. The easiest solution was to remove and reinstall the app, so I did. Before doing so, I made a backup copy of the entire SkyDrive folder, renamed it to “Old SkyDrive,” and let the newly installed app resync from the cloud. Then I deleted the old copy.

Fast-forward to today. I realized what had happened: the documents had been in the old SkyDrive folder, they never got synced, and now they were gone.

But wait! I do regular backups to Time Machine when I’m in Mountain View. I looked in Time Machine… nope.

“Oh, that’s right,” I muttered. “I created those files and ‘fixed’ SkyDrive last time I was in Huntsville.”

But wait! I also use CrashPlan! I fired up the app… nope.

Then I noticed the little “Show deleted files” checkbox. I checked it, typed in the name of the files I wanted, and in 90 seconds had all of them restored to my local disk.

So, the moral of the story is: a) make backups and then b) make backups of your backups. Oh, and go easy on the Benadryl.

Filed under General Tech Stuff