Misguided analyst editorial – update: Called it

Wow.   Rob Enderle has a lot of readers on Computerworld, I expect.   I read the odd article from him, and I had thought he would be offering solid business advice in light of the viral “Comcastic” support call from hell [ http://www.huffingtonpost.com/2014/07/14/the-comcast-call-from-hell_n_5586476.html ].   I stand corrected.

Basically, Enderle shows himself to be a run-of-the-mill, CYA sycophant extraordinaire, advocating using analytics to essentially create two classes of customer rather than to help monitor and improve your customer relationships as a whole.    His original article I’m ranting about is here: Don’t Be Comcast: Use Analytics, Monitoring to Prevent a Viral Disaster – Computerworld

He starts somewhat sanely, suggesting managers keep a list of the biggest customers handy so that you don’t, as he once did, cancel a supply contract with someone who happens to be your largest customer.  I get the impression that it wasn’t a healthy business relationship, and may have been grounded more in back-room “you scratch my back, I’ll scratch yours” deals than good business, if a cancellation resulted in that sort of fallout.   Either that or the company Enderle was working for ALSO wasn’t competitive, and let’s just say what goes around comes around in that case.

When he then turns to social media, you’d hope for the obvious: use it to monitor when you might have a PR issue on your hands, or to track negative and positive PR.   That would just be good sense.   But that’s not what he proposes.   He takes the idea of “influencers”, people who have larger pull in social media and PR, often celebrities or journalists, and suggests having your real-time analytics alert you when they contact the company in support so you can give them extra-special treatment.   Basically, make them an “elite” customer, and screw the rest of us.

You know what social media does then?   Check the hashtag count on Twitter.   You’ll still get #comcastic from all the rest of your customers relating serious issues and problems, while a few media celebrities post positive PR.  And it will catch up with you: if your customer service sucks, I’ll trust my second-cousin’s-friend’s opinion of your shop far more than some privileged A-list celeb’s when deciding where I take my business.

Enderle is a “mover-shaker-fool” who is looking for quick results and his own rep in a corp rather than actually making your business the leader in the category.   Fix the problem.  Treat your customers correctly and instill that in your employees.  And don’t incentivize them disproportionately against that.   I’ll bet good money that the call rep at Comcast is paid good coin on a “save” of a leaving customer, so he will work his rear end off, to the point shown in that recording, to make that save.   It’s worth it to him, as I expect his performance metrics and his compensation are so skewed toward making the save that it’s not worth his time to be courteous and walk people through the process professionally.   Enderle says the rep should be fired.   If I were the CEO I would start by looking at how the incentive programs for the call centre are set up, especially in the “customer retention” area.   And adjust the attitudes of the people setting that up.

And you know the irony?   I bet the whole behaviour of grinding so hard to keep a customer (and up in Canada, Shaw and Telus do it just as much, though I didn’t run into quite the level of zealousness that was in the viral posting) is based on analytics.  Enderle hasn’t yet learned the lesson that people who actually THINK about analytics and their application have, which is that you still need a goal in mind when you apply them.   Comcast has a goal when a call hits customer retention, as do all these telecom/internet providers: keep the customer at all costs, because customer acquisition is very expensive.  The numbers say it’s a lost cause, so go all out; any win is a great bonus.   Social media is re-empowering the consumer and making businesses play honest with everyone.   Enderle doesn’t get it.   Make sure you make a better decision than he advocates.

 

UPDATE:  Looks like I called that better than the paid analyst did.   http://venturebeat.com/2014/07/22/comcasts-retention-policies-take-the-blame-for-that-customer-service-call-from-hell/ pretty much outlines what I figured was the core of the issue.   Perhaps new metrics NOT from the accounting department need to be added in?

Snowden we know about but…. who else?

I’ve been following the Snowden revelations, and commentary by people like Bruce Schneier, as well as responses from the NSA and others on TED Talks.  It’s a complex issue.   If a US agency has ever before succeeded in undermining the entire US economy, I can’t think of it.   But I think this time it will reach that far.

Because of that, Snowden is being vilified.   He should never have spoken up, many say, and the damage he has done to the US and its reputation, and through that to its economy, is treason.   Criminal treason.

He broke the law.  I have no argument there.   Anyone engaging in civil disobedience is acting against the law as a matter of conscience.   Ed Snowden obviously cares a very great deal about what he did, and he is paying a very high price for those actions, but he feels they were worthwhile.   They were not for personal gain.   It was for a principle.  A principle that said the US was honourable, at least to its own citizens. 

I’ve not felt the US is all that honourable to *anyone* since the DMCA and the Patriot Act came in.   The government acts almost entirely to facilitate special interests and powerful elite individuals and organizations, not to maintain the land of unlimited opportunity it was founded to be.   The people who seized those opportunities don’t wish anyone else to threaten their successes.   They don’t want the next generation succeeding them.   Or beating them.   But in all this turmoil, all these differing viewpoints, motivations, and possibilities, there is one question that I haven’t heard yet.

Who else besides Snowden?  Snowden was a rare character with a social conscience who could see that this was morally wrong, and against the law of the land he served and cherished.   More importantly, it was against the spirit his entire nation was founded on: that the government serves the people.

So these programs have existed for a long time.   They have known vectors, and methods.   They have catalogued vulnerabilities.   These things are extremely valuable, especially to foreign powers, and criminal powers.   This is the backbone of information for major organizations.   These vulnerabilities are the keys to the kingdom.   The knowledge was a top secret weapon.   But the holes in the infrastructure are in all the infrastructure.   Including the US, Canada and all of the other allies of the US.   That’s a really, really valuable thing, as long as the US believes that these vulnerabilities are still their own little secret, and nobody else’s.

Snowden was one relatively minor actor in these organizations.   He had a conscience.   He served his conscience as he saw fit.   But suppose any one of the thousands of others with knowledge of these programs was wronged, or felt they deserved a better result from their own labours?   Suppose they felt they could keep the US operating as they desired and, in good conscience, not leak the information, but instead sell it.  Secretly.   To a foreign power or criminal organizations.   There are so many pieces that they would only need to sell a handful, and would probably profit immensely from each one.  Who’s it gonna hurt?   The NSA knows about the vulnerabilities, so it shouldn’t hurt them.   And so what if a few US companies get caught in the crossfire.   The NSA thinks that’s ok.   The US Government thinks it’s ok if they do it.  So spread the wealth a little.

How many of the foreign attacks we have seen have been through these intentionally introduced vulnerabilities?   How many times have advantages been given to hostile powers, to those who would do harm to others with this power?

Snowden did us a favour.   He gave us a shot at stopping all of it.   And hopefully being wary of it happening again.

Who else gave away copies of the keys to the kingdom?

Currently playing in iTunes: Christmas Song by Dave Matthews Band

Avahi… linker flags to compile the examples – Original post June 1, 2011

So, let’s assume you want to tackle using mDNS, also known as Zeroconf, on Linux to advertise your new service in a modern, portable, discoverable way with no pain on the user’s part.   Simple: just pop over to http://avahi.org and look at the examples.   Compile them, try them out.   If you’re on a Mac, avail yourself of Bonjour Browser to have a look at the services popping in and out of existence as you test it.   There may be a Linux zeroconf service browser, but I didn’t find one ready to go.   If you know of one, please add it into the comments!

Wait, you’re saying you can’t get it to compile and link?   Ah.   Yes, there are two normal landmines people step on in Linux development.   The first one can be solved by a general rule of thumb: you need the avahi-devel packages installed so that the examples, and your own software, have the headers they need to compile and link against the avahi libraries.  On an RPM-based system, yum install avahi-devel will get you there.   Normally, packagename-devel is going to get you these developer libraries.   Avahi is already on most RPM distros, so your software can run out of the box without this install on the target machine.   You just need it for development and compilation of the binary.

Wait, still not working, you say?   Ah.  You’re getting a few pages of “undefined reference” errors, you say?   All to avahi functions?   So you try the link flag -lavahi, as that usually gets it right?   No go.   libavahi doesn’t exist.  So you go googling.   I did all this.   It’s a tedious, iterative pain.  Here’s the magic incantation to build the service publishing example:

gcc avahitest.c -lavahi-glib -lavahi-core -lavahi-common -lavahi-client

That links in all the avahi libraries I could find (you don’t really need them all, but they are listed here for completeness).   Then it runs and works brilliantly.  If you’re wondering where all these are located, they’re in /usr/lib64, at least on x86_64 Fedora.
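If your distro also ships the avahi pkg-config files (an assumption on my part; avahi-devel on Fedora appears to include avahi-client.pc and avahi-glib.pc), you can let pkg-config work out the flags instead of hard-coding library names:

gcc avahitest.c $(pkg-config --cflags --libs avahi-client avahi-glib)

That should pull in the right include paths and libraries for whichever avahi modules you name, and it keeps working if the library locations ever move.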

I wandered through the avahi wiki at some speed and couldn’t find anything simply listing this time-consuming necessity.  So I’m posting it here in the hope that a future frustrated, searching developer might find just a bit of relief and save themselves a bunch of blind stumbling to little effect.


Currently playing in iTunes: Know Us by Jillian Ann

Static IP on Fedora Core 10 – original post July 27, 2009

I’d like to know when in my Linux server hiatus somebody decided to make the Fedora system so “end-user-friendly” that it became a serious pain to configure a server.

I won’t repeat the Network Manager rant here, but if you want to set up a server with a static IP address, start by incanting it out of existence:

chkconfig NetworkManager off

which will at least get you one metric tonne less pain in fighting it.

 

Then, set yourself up manually with the static IP address. The config files still seem to get written correctly with:

system-config-network

and go through that, setting your interfaces for the static IP, gateway, and netmask as you require. Getting the DNS sorted out while you’re there is also a good idea. 😉 Save and exit.
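For reference, the file that tool writes for the first interface is /etc/sysconfig/network-scripts/ifcfg-eth0. A hand-written version looks roughly like the following; the device name and addresses are placeholders, so adjust them for your own network:

DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1

Name servers for a static setup typically go into /etc/resolv.conf as nameserver lines.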

 

Now, from what I’ve seen, that doesn’t do too much on its own. You then need to add a link to the network daemon. If, like any good sysadmin, you’re running without a GUI, you add it to the rc3.d directory. If you run a GUI on the server, it goes in rc5.d. Heck, add it in both. For runlevel 3, the symbolic link to create is:

ln -s /etc/rc.d/init.d/network /etc/rc.d/rc3.d/S07network

and that will get it to start up and pull in the network configuration you set up. That should get you up and running with a nice static IP on Fedora Core 10, and give you more time to curse the myopia that screwed the system up so much in making it “friendly”. If you’re going to add automation, you should still allow the manual config and build the automation on top of it. It seems like the network configuration and NetworkManager had a serious case of either Not Invented Here or I Don’t Need That So Nobody Else Does Either going on. Extremely aggravating. Even with DHCP, the interfaces wouldn’t come up automatically from a stock install on Core 9. By the time NetworkManager gets fixed, people will be so used to turning it off that it will never get the respect or use it may deserve at that point. Very unfortunate.
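As an aside, chkconfig should be able to manage those symlinks for you rather than you creating them by hand, assuming the network init script is registered (it is on a stock Fedora install, as far as I know):

chkconfig network on

That turns the network service on in the standard multi-user runlevels, which creates equivalent start links to the one above.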

 

Eating of the Dogfood – original post 2006.09.19

There is much that has been said about “eating your own dog food“. Indeed, there is no better way to ensure that what you are building is of real use. It’s not always possible, mind you; in our company’s case, we provide data and analysis software for the energy and engineering segments, and we aren’t in either of those businesses. As a result, we keep our customers very close during design and implementation to give our products the best chance of success.

My head did a bit of a turn sideways to contemplate a very odd thing the other day. I’ve been working in C# on .NET 2.0 creating a relational database walker to do a transform/load into a custom schema and management system we need to work with. As I’m creating this system, I need to discover the primary keys in the Microsoft SQL Server 2005 tables I’m crawling. Oddly, the metadata doesn’t seem to contain any such information. There are some constraints, procedures, and the table and columns of course, but the primary key is absent. As the SQL Server Management Studio shows the primary key, I’m pretty sure I’ve done something wrong. So into Google and MSDN we go.

The part that made my brain do a serious double-take, and say “I *must* have read that wrong”, was looking at the ADO.NET 2 documentation, specifically this page on MSDN. It states quite clearly that the SQL Server provider doesn’t do primary keys. But *Oracle’s* provider does. Apparently SQL Management Studio has a connection with the greater cosmos that allows it to magically divine primary keys from the fabric of space-time. After some more searching, I’ve found this gem from the Program Manager of SQL 2005/Whidbey. I appreciate it “bothering him”, but the management studio obviously wasn’t eating the dog food of the Whidbey release on the schema collections, and is instead using some alternate mysticism to achieve the desired results.

By the looks of the XML file Carl provides, it appears the magic is within the mystical (and decidedly specific) system tables. This is obviously up there with a hack, as it’s a graft into the .NET config files of the workstation you install and run on, but given the relative simplicity of the configuration change, I’m rather baffled as to how this was missed in the product release. Again, from a dog food perspective, this couldn’t have been missed if SQL Server Management Studio had been forced to use the public ADO.NET interface as its only way in; the whole studio would have stopped being useful at all. It would have improved the .NET system and the SQL Server client metadata at a minimum.
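For anyone hitting the same wall, one fallback that avoids the config graft entirely is to skip the GetSchema() collections and ask the INFORMATION_SCHEMA views for the primary key columns directly. A minimal sketch, assuming SQL Server 2005 and .NET 2.0; the connection string and table name below are placeholders:

using System;
using System.Data.SqlClient;

class PrimaryKeyLookup
{
    static void Main()
    {
        // Placeholders: point these at your own server and table.
        string connStr = "Server=localhost;Database=MyDb;Integrated Security=SSPI;";
        string table = "Customers";

        // The SqlClient GetSchema() collections don't expose primary keys,
        // so query the standard INFORMATION_SCHEMA views instead.
        string sql =
            "SELECT kcu.COLUMN_NAME " +
            "FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc " +
            "JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE kcu " +
            "  ON kcu.CONSTRAINT_NAME = tc.CONSTRAINT_NAME " +
            "WHERE tc.CONSTRAINT_TYPE = 'PRIMARY KEY' AND tc.TABLE_NAME = @table " +
            "ORDER BY kcu.ORDINAL_POSITION";

        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@table", table);
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0)); // primary key column name
            }
        }
    }
}

It’s not as tidy as having the provider hand back the metadata, but at least it goes through the public, documented views rather than grafting system-table queries into workstation config files.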

So the short of it is, thanks to blogs and other “out-of-channel” communication, there is a very awkward workaround to what seems to be a very fundamental oversight. But again, if you want a better product, dig in and use what you build as much as possible. Adhere to your own published interfaces, and if you have an API, make sure your other products are using it, and not some other obscure method for integration.

Convention vs. Configuration – original post 2006.09.05

Unless you’ve stayed removed from the open Internet and general development news the past year, you’ve been inundated with articles about Ruby on Rails. There has been an immense level of hype, and more recently, established systems and frameworks have been, for lack of a better word, counterattacking against Rails.

Much like the polarized debates I attended at the DDJ Architecture and Design World conference in Chicago a few months ago, the screaming is always about volume, not content. At the conference, those who believe Model Driven Architectures and such are dead-end academic nonsense shout louder each year with more invective, never engaging in meaningful exploration. It’s not that they don’t believe any of the ideas; in fact, some on both sides of the debate are in quite deep, but the public discussions too often degenerate into a shouting match of x vs. y.

Rails gives another view of how to increase the capability of a developer, much like MDA allows in some scenarios. It’s no panacea, and like MDA, Rails has its solid successes, its marginal areas, and areas where it just isn’t up to the task compared to other frameworks today.

The point is the core idea that you will come across in article after article and book after book on Rails: Convention over Configuration. In ASP.NET and Java web applications, you have quite a serious amount of configuration in the development of the application. Some of the IDEs automate portions of this, but in every case, the power of the environment hits you square in the head when you’re using it, even when you don’t need the power. That’s where Rails changes the model, and finally, some of the Java frameworks (and possibly others) are taking notice. They give intelligent, most-often-used cases as defaults. Conventions of capability and use rather than an open box of parts. You can still, sometimes with a great deal of effort mind you, change that aspect and configure things specifically to your needs. That’s the right thing. But for a simple CRUD web app over a database, you should not need to configure three tiers, distributed transactions, and processing systems for a 3 day project; the sketch below shows how little Rails asks of you for that case.
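As an illustration only, and using the current Rails command line rather than the 2006-era one (back then the equivalent was ruby script/generate, so treat the exact commands as an assumption), a working CRUD app over a single table is essentially:

rails new store
cd store
rails generate scaffold Product name:string price:decimal
rails db:migrate
rails server

Everything else, the table name, the routes, the controller and view wiring, comes from convention rather than from configuration files.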

I made the case that it is precisely because we are still writing in C# and Java and these decidedly C-like languages that so much work IS getting outsourced, and why we as IT professionals continue to fail to satisfy our users. We are using tools and languages that represent the same incremental abstraction from the processor that Fortran and C did in the 60’s and 70’s. Our representations of components, modules, concepts and constructs are still linked directly to the idea of index-variable loops and branching constructs built on comparisons. It’s not that we need to do away with those; some of the decision and looping logic is fundamental to software development. The point is that we are continuously trying to achieve more sophisticated systems, richer interaction models and more scalable and distributed systems with the same tools we were writing command line utilities and our first compilers with. Seems rather inefficient. I regard it as a unique failing of our profession to date, and it’s why I will continue to explore technologies like Rails, Ruby and MDA, and even go back to examine the more powerful ideas of LISP and other conceptual languages to find a way out of the rut we are trapped in.

It’s not about which framework is “best”. It’s about taking ideas to enable us to develop more sophisticated systems more easily. Components, modules, services, these are all some level of wrapping and abstraction, but even those are all built with these crude tools. In 30 years, we’ve basically achieved P-code and automated memory management. Not good enough.

Currently playing in iTunes: Pacific Coast Party by Smash Mouth

Be the Ronin with your Languages – Original post 2006.08.21

For anyone who has been in the industry more than a few years, and for everyone that really enjoys software and computing systems, odds are you’ve learned more than one language in your travels and studies. If not, I suggest you’ve done yourself a disservice.

The lesscode blog isn’t too active any more, but it has much sage advice, including a good post from last year that’s been referred to many times. “The Philosopher’s Song” is a wide-ranging musing on the efficacy of languages, the rage of Ruby vs. Python, and the sameness of now to history.

At any rate, a Ronin is a samurai without a master, a warrior seeking a purpose in some ways. When it comes to developing software, do not forge yourself into a single tool. Further, do not sit within the pure OO or pure procedural languages, and don’t stay within a single family or style of languages. Try LISP, try Smalltalk, try Python, try Ruby, try C#, try Java. Extend into JavaScript, and understand assembler on at least one architecture if you can. Touch Perl and a shell script, look into languages fringe and mainstream. But most of all, understand how different families and types of languages approach programming differently. LISP handles certain problems with such brevity and simplicity that you will wonder why they are ever solved any other way. Ruby offers higher-level constructs that make C# seem downright tedious for many tasks. All languages have a purpose, and a strength, and a weakness.

Branch, enrich, learn, but do not stay beholden to one language in your skills. When all you have is a hammer….

Currently playing in iTunes: The Ride of the Rohirrim by Howard Shore

Joel Says it more Clearly Than I – original post 2006.08.03

I’m a Mac user. I work on Windows. I have a Linux server. I’ve administered every one of those platforms, plus Solaris, and programmed decent sized apps on all of them as well. I get asked a great deal “Why did you switch to Macintosh?”. Because I can concentrate on using my computer rather than fixing, updating or configuring it. Windows just doesn’t have access to the open source software in the way I like to get at it. Mac does (Fink anyone?). Linux was actually really cool and has a pile of configurable pieces, amazing arrays of doing anything you want, and the ability to swap just about anything out or tweak it just so. Problem is, I spent a lot of time making it “just so” and there wasn’t that much benefit over adapting to a mainstream OS.

The interesting thing is that there is a perception that the Mac has a better UI and is easier to use. It is if you have never used a computer before. If you have, you *will* have some unlearning to do. Once you get used to it, there are many things I do find more intuitive on the Mac, and in general I believe the quality of the software on the platform is higher. (I cite Omni on that count primarily, who make some absolutely top-notch software.) Drag and drop is really pervasive in so many ways. Windows is coming along for sure, but I still find the Mac a much more creative place to reside in my personal time, and even for some aspects of my job. I’m comfortable in Windows, and there are a lot of unique strengths and advantages there as well, different from the other two platforms, but personal taste rules personal dollars.

The real point, though, is made by Joel Spolsky: usability is largely about things working the way you expect. So a long-time Windows user is going to be put off and find the Mac awkward initially, as I did. And anyone who isn’t a Linux buff is going to be downright boggled by the do-it-yourself nature of the OS until they adapt. I’ve got a friend who has ranted eloquently on that a few times, but the refs are down on his blog at present, so no link. Sorry Norm! At any rate, check out Joel’s shot across the bow on this one: “Usability in One Easy Step (First Draft)” – Joel on Software

Currently playing in iTunes: Code Monkey by Jonathan Coulton