WordPress hole – number of big blogs hit – original post September 6, 2009

Well, it looks like anything running a version of WordPress older than 2.8.4 got itself busted up if it was a searched-out target. That’s not overly huge news. The latest version has been out for a while (since August 12, 2009, according to the WP blog).

Not news until one of the “big names” on the blogger lists decides to blame WordPress for his own carelessness with his blog. Honestly, I think at this point you’ve got to take Robert Scoble’s musings with a larger grain of salt. Perhaps it’s “Do as I say, not as I do.” I don’t read the guy regularly; he gained notoriety as a blogger from within Microsoft, and since he left, he seems more like a CNet blowhard, more inflammatory adjectives than substance. Probably why his post is getting so much attention. Yes, even from me. But I’m making a bigger point than his need to blame software for his poor system administration practices.

Scoble is a writer. First and foremost. The guy’s knowledge and experience probably land him a lot of advisory and strategic roles at Rackspace, but basically, his core value since his days as an evangelist in the Redmond juggernaut is writing. Blogging. It’s his bread and butter.

Not keeping his software even remotely up to date (2.7.x) and having no backup says he really doesn’t value his work or his paycheque. That’s up there with a software developer not backing up his code and having no version control, nor any other historical copy or archive of many thousands of hours of work.

Robert Scoble is building an online social community or some such thing at Rackspace. He is now an evangelist for the cloud, and for online web properties and public-participation (Web 2.0, if you still tolerate that moniker) systems. The example he sets is that it’s not important to protect the data. It’s all good. Software is perfect. Happy days and butterflies flitting through the pastures. And now he’s probably done a nice bit of damage to his own properties and to online computing. At least he’s serving as an example.

There’s no shortage of WP advice on securing, backing up and protecting your blog. JCS Hosting (yes, I’m less masochistic than Scoble on this; I’ve done sysadmin many times before, and I know that to do it right, you’d better be adding value. Managing one or a few WordPress blogs is *not* adding value. Leave it to the experts) has it all set up nicely, and you can add WordPress plugins very easily. The system at JCS notifies me via email as soon as any of the software is out of date, letting me know I should update it ASAP.

I was getting the emails on 2.8.4 for a few weeks before I bothered clicking a single link (after backing the content up) to update the software. It’s hard to get easier than that unless you have a sysadmin doing it for you. Seems Robert Scoble had neither, as he’s his own sysadmin, and he wasn’t doing his job.

Software has bugs. Even with the excellent internal practices and courses Microsoft runs in their campus classrooms, and even with all the research and people digging in looking for flaws, there are always a few more, it seems. Thinking WordPress was secure was pure naivety on Scoble’s part, and he most definitely should know better.

These blogs of mine are not core properties. But I do care about my time, and if I ever get enough readers to start leaving comments, I will value those all the more (those that Akismet doesn’t kill off first, of course 😉 ). Each post, mine or anyone else’s, represents time, one of our most precious resources. They are worthy of protection, and of respect. The more people involved in your blog, the more value is being contributed and accrued, to say nothing of the content’s value to others.

There’s a plugin for WordPress that will regularly email you a backup of either the content (posts and comments) or the whole database straight from the site. Automatic staged backups. Add in a personal backup for your laptop or desktop (and with all the automated solutions out there, there’s no excuse not to have one of those either, at least onsite, if not offsite and automated) and you have solidly protected your property and the value it represents. You are showing you care about the time readers and posters have committed to your site.
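
If you’d rather not depend on a plugin, a plain cron job on the host does the same staged-backup duty. A minimal sketch for a crontab entry (crontab -e), assuming mysqldump is available on the box; the database name, credentials and paths here are placeholders for your own:

0 3 * * * mysqldump --user=wpuser --password=secret wordpress | gzip > /home/me/backups/wordpress-$(date +\%F).sql.gz

That drops a dated, compressed copy of the whole database into a backup directory every night; pair it with something that copies that directory off the machine and you’ve covered both the “oops” and the “server is gone” cases.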

Robert Scoble, and many, many others, are probably doing a much better job now (RS notes he’s doing backups and has locked it down), but the point is you need to consider the value of what you create as you’re creating it, not after somebody comes along and sprays graffiti all over it. Balance the risk. More value should mean more protection. Try to draft the post you would write explaining how you lost every post your users contributed to your site through negligence on your part, and you should have a very clear idea of just how valuable it is and how much protection it requires.

Windows 7 upgrade path – Are you kidding me? Really? – original post August 8, 2009

‘Dumb’ Windows 7 upgrade chart sparks spat

This looks like it will push a few more into Apple’s arms. I mean, who came up with that? Rather than prettying up the desktop and making cooler startup sounds and graphic effects, maybe some care should have been taken around the upgrade and migration process?

One item of note, the “new PC” argument. There’s a big difference there too. Apple has a migration assistant. You put your old machine on the network, plug in the new Mac, and tell it during your setup to migrate over your apps, data, users, etc. Then you head out to a movie for a few hours and let it work.

Last time I did a Windows upgrade (which was thankfully a LONG while ago) there was no such feature. There are a few enterprise tools in the Microsoft regime, but those are for the wacky system administrators to deal with. There’s even (as usual) a third-party tool or two to help you, for an added fee.

I just don’t get why the team at Redmond continues to treat their users with utter contempt, thinking that a computer user in the 21st century is going to putter around and fiddle with things like they did in the 80s and 90s. People who use computers now are not technology enthusiasts. They are not tinkerers from the dawn of personal computing. They are not developers or computer professionals. It’s our job as developers to help them out, not create a “clean install” full-day attended experience followed by reinstalling all the software onto the computer from the original installation files, rather than copying it across and (maybe) re-entering the license keys.

I’ll chalk it up to (another) part of the Windows Experience I don’t miss in the least. I’ve got a lot of other things to do with a day of time. Fortunately, when Snow Leopard ships this fall, the computer will do the bulk of the work and let me go do those things with my kids.

Sorry, what was that 2B for again???? – original post August 2, 2009

So, in the realm of value and buyer beware, it seems that eBay might have had some lawyers who didn’t remember to put their reading glasses on. This story over at tweaktown.com implies that $2B was the value (well, $1.7B now, after a write-down) of a network minus a central piece of its technology.

Sorry, but if you’re paying that much, you get a perpetual license to it at a reasonable rate adjusted for inflation or valuation or the like. You don’t put that much money on the table and have that sort of escape clause.

Now, regardless of what the gents at Joltid may or may not have the rights to do, swinging that sort of deal on that much cash, however legally legit it may be, is a very crooked, fly-by-night way of doing business. Pulling such a license from eBay after taking the money on the deal would put these guys in one of the least trustworthy tiers of businessmen on the planet in my books. That is, essentially, ripping their customer (eBay in this case) off.

I really hope there’s a misunderstanding in here, and that this is a rumour or a hiccup in the business relationship that got overheard and blown out of proportion. I hope that both sides are being honourable, honest and fair in their dealings.

But that lawyer still needs their prescription checked.

Gruber: Microsoft’s Long, Slow Decline — and more – original post July 31, 2009

John Gruber has an interesting commentary on the recent Microsoft results and on a few comments floating around their execs these days. Have a read at Microsoft’s Long, Slow Decline.

In my books, there’s another aspect to this. Microsoft is about cost and profitability. Every company is, but Microsoft is making that part of their brand. Cost. A race to the bottom. Not usually a game for the faint of heart, and never for an innovator. What happened? I’ve never been a Microsoft fan, but I have had a deep, grudging respect for the engineers at Redmond. They have a number of talented, driven, capable developers down there. They do put together good systems, and in the rare cases when the product design is great, you get a great product. It has been happening less often, but that’s just the sliding maturity of the Redmond Juggernaut.

Now you’re getting a rebranding to “cheaper”. Not a great connotation. “Cheaper” is not the answer to their somewhat vague “Where do you want to go today?”, and it definitely doesn’t sit well beside the “People Powered” enterprise computing campaign. Cheaper, as Gruber notes and as most North American consumers already have it filed in their consciousness, is Wal*Mart.

Apple has gone for “better”. More capable, secure, easier, and other adjectives, but the brand has been associated with “better” in their strategy. It’s had “more expensive” bolted to it too, but they have quieted that enough to be given a thought, and with the iPhone and iPod being copied left, right and centre for features and ideas, consumers easily reach the conclusion that “better” sits beside Apple’s name. Everybody is copying them and talking about them.

Microsoft used to be “full featured” or “powerful” or “fully integrated”. Piles of things that let you know this was serious stuff that did anything you needed. Sure, a bit of complexity, but really, you needed that complexity to do the jobs you needed done. And generally, they were right. It fit. Now, competitors, and not just Apple, have chewed into that with simpler, easier solutions that solved most of the things a lot of people needed, and did some parts better or more elegantly. But Microsoft has always been the juggernaut. It WILL be able to do it.

Now it’s just bending to “cheaper”. I’m not a fan of them, but I expect better from them. I expect some vision. I expect capability, prowess, some arrogance. John Gruber is right. They lost the geeks. They lost the consumers in a lot of cases, at least the ones that care, as he points out. They aren’t messaging the users anymore. They’re messaging to CIOs, and to people that don’t do that much with the computer. It’s like an extra TV in the spare room now. Oh yeah, the computer for email and Facebook. Yeah, I think it runs Firefox. Windows? Oh, I guess so. It must be Windows. Is that Firefox? I use Internet Explorer. Is that Windows?

That’s the customer they are targeting. Ouch. I’ll pay the money for the quality and keep the Mac, thanks. Maybe they are missing Bill Gates a lot more than Steve Ballmer would like to admit…

(Original thread from Daring Fireball.)

Static IP on Fedora Core 10 – original post July 27, 2009

I’d like to know when in my Linux server hiatus somebody decided to make the Fedora system so “end-user-friendly” that it became a serious pain to configure a server.

I won’t repeat the Network Manager rant here, but if you want to set up a server with a static IP address, start by incanting it out of existence:

chkconfig NetworkManager off

which will at least get you one metric tonne less pain in fighting it.


Then, set yourself up manually with the static IP address. The config files still seem to get written correctly with:

system-config-network

and go through it, setting your interface to the static IP, gateway and netmask you require. Getting the DNS sorted out while you’re in there is also a good idea. 😉 Save and exit.
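
For reference, what that tool writes ends up in /etc/sysconfig/network-scripts/ifcfg-eth0 (or whatever your interface is called), and you can edit it directly if you prefer. Something along these lines is typical; the addresses here are examples for illustration only:

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1

(The GATEWAY line sometimes lands in /etc/sysconfig/network instead, depending on how you enter it, and nameservers live in /etc/resolv.conf as the usual nameserver lines.)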


Now, from what I’ve seen, that alone doesn’t do too much. You then need to add a link to the network daemon. If, like any good sysadmin, you’re running without a GUI, you add it to the rc3.d directory. If you run a GUI on the server, it goes in rc5.d. Heck, add it in both. For runlevel 3, the symbolic link to create is:

ln -s /etc/rc.d/init.d/network /etc/rc.d/rc3.d/S07network
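
Alternatively, chkconfig should do that wiring for you, if you’d rather not hand-roll symlinks (a sketch; on a stock install the network script may already be registered, in which case the first line is a no-op):

chkconfig --add network
chkconfig network on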

Either way, that will get the network service to start on boot and pull in the configuration you set up. That should leave you up and running with a nice static IP on Fedora Core 10, and give you more time to curse the myopia that screwed the system up so badly in the name of making it friendly.

If you’re going to add automation, you still allow the manual configuration and automate on top of the manual config. The network configuration and NetworkManager seem to have had a serious case of either Not Invented Here or I Don’t Need That So Nobody Else Does Either going on. Extremely aggravating. Even with DHCP, the interfaces wouldn’t come up automatically from a stock install on Core 9. By the time NetworkManager gets fixed, people will be so used to turning it off that it will never get the respect or use it may deserve by that point. Very unfortunate.


Bottom-up Outsourcing – original post July 27, 2009

I happened upon this little tidbit on my blog backlog. The unconventional James Governor taking a whack at outsourcing as done from the trenches: James Governor’s Monkchips’ Give Every Developer a $5k Outsourcing Budget

Some of my past compatriots may recall the idea I had worked up that didn’t go quite this grass roots, but was a variation on a theme that might appeal to the slightly more conservative innovators.

The suggestion was that projects be done to understand and explore outsourcing for the company, and to build the ability to manage and use it in the people who would become the internal leads. The developers would look at the stream of tasks coming through, and at the projects assigned to them or of value to them (which may be their own projects). They would then propose how to outsource a project or component under their own direction and management. These proposals would be gathered periodically (or continuously) and assessed for value, risk, and whatever other criteria the company and team saw fit. At that point, the selection would be whittled down to a few and the budget distributed accordingly among the proposals (with possibly a round of refinement if the numbers didn’t all add up). The developers would have skin in the game, would stand to gain valuable management and communication skills and experience, and would also demonstrate some of their soft-skill capabilities to the company.

Unfortunately, the corporate and technical leadership at the time figured that big projects and minimal oversight were the way to go, so not only did the staff gain near-zero experience with this global toolset, but the outsourcing also produced a number of large failures, little learning, and no value to the company in any reasonable time frame. Lessons have been learned since, but I still stand by this approach: if you want your team to think of outsourcing as a partnering and supplier-style tool, they need to be involved and committed.

If you’re faced with the opportunity or the need, consider a variety of approaches, and also consider strategically how you expect outsourcing to work in your company on a continuing basis, what you need in terms of staff skills to make that happen, and finally what it will take to start that transition. Delegating the responsibility and control into the team serves a number of needs and strategic goals if you’re serious about adding outsourcing to your tool arsenal.

Currently playing in iTunes: Comfort by Jillian Ann

SOA and COTS Orthogonality – a follow up and clarification – Original post 2007-07-08

Thanks to some very creative and insightful feedback on the original post from a few colleagues of mine, a bit of clarity and expansion on the dimensions is warranted.

For the sake of illustration in the article, I polarized the camps in the post as “SOA” and “COTS”. The intent of the SOA label was a best-of-breed, service-oriented connectivity model in an organization that is as likely as not to use a different vendor for every core business application. I believe that aspect was reasonably clear, but I blew it and mislabelled the “COTS” approach. Obviously, the label is a nod to IBM, Oracle and Microsoft, in that they clearly offer integrated, single-vendor stacks of software that work quite well *without* needing much integration effort. The list goes on with SAP, Sun and many more. The bias isn’t clear at this point, as an SOA organization can use these stacks quite well, and those stacks can be adopted completely and still integrate with yet other applications via SOA.

The labels really weren’t the point. They were intended to illustrate the bias in an organization towards either a number of product specialists, developers who will take a stack and be very capable at extending it and pushing it in all the required dimensions, or generalists, who generally don’t know the products in great depth but are very capable at putting very different and foreign systems together, making integration the strength of their skill set and of the enterprise, and cobbling custom code and applications together reasonably well.

Now, with a big nod to my colleagues: obviously, the vast majority of organizations have both types of developers (including ours), and most enterprises, especially those of middle to large size, will have experts of both types within their boundaries. It’s quite healthy indeed to have such a mix, and it enables the best features of both approaches.

Flexibility and such multi-faceted capability matrices don’t come across very well at the higher levels of executive logic. Thus you wind up with a bit of a polarization: either the executive has buzzworded themselves and you’re chasing SOA (likely with some preferred vendor), or you need to reduce and consolidate onto a single vendor stack for efficiency and a unified set of developers and resources. (Work disclaimer: our executives are *not* as strongly afflicted by this condition as many, fortunately!)

That brings up the core discussion point. Is “best of breed” an approach to wander off and try to get the perfect vendor and thrash with continuous re-implementation of core business systems? Is “single stack” a simple procurement optimization that lets you have one company to pay the bills to and yell at when things go off the rails? Why does this either-or conceptualization exist?

Most often, when communicating the strategy of an IT organization, a “mission statement” of sorts is what is needed. That often grinds down into specific aspects of the implementation, and into what shows up in accounts payable. That simplification is problematic at best, for, as noted earlier, the vast majority of teams comprise both aspects and both types of developers. To the executive level, then, the teams look disorganized and inefficient, which is patently incorrect.

Really, good development teams need to get a job done, and they will use the tools they know, followed by the tools they are interested in, followed by the tools that people they trust talk about, and finally the tools they are told to use. Dire employment threats may elevate the mandated tools higher in that stack, but that introduces a legislative inefficiency in the development process, much as economists complain that legislative interference in a market creates inefficiencies. So the cases all boil back down to one question. What is the goal?

Getting the job done in the quickest way possible creates an inefficiency: as changes are required, they usually take more effort than if some flexibility around that need had been built into the solution in the first place. The trade was speed for flexibility. That’s most often the COTS approach. You get a “good enough” fit from pulling a pile of ready-to-rock code off the coat rack, adding a nice shirt, tie, belt and decent shoes from your closet to go with it, and you’ll survive the presentation to the board just fine; it didn’t cost you a pile, and it can be replaced by next year’s style without too much pain. Unless the shirt and shoes start looking aged…

If you need the job done and each piece near the top of its game, say for competitive advantage, then you want a bit more than you get out of the box. You want a top-of-the-line suit, tailored to your specific needs, not something cut for just anybody in a 42 Tall; you want it to fit your business down to the inseam. The tie is selected to go with the suit and complement it precisely. The shirt and shoes and belt likewise. The goal is the ultimate ensemble to present to the world and take the headlines at E3. I’m stretching the metaphors again, but this gets to the central piece of the COTS vs. SOA, or specifically, COTS vs. custom integration debate.

What’s the business advantage? If you aren’t making money off of your accounting department (and if you are, the SEC may wish a word), then your accounting system is a COTS piece with minimal integration into your other enterprise systems. If your accounting is part of an integrated ERP approach because your business is driven by operational efficiencies, and that defines at least a reasonable amount of your competitive advantage, then you want that system to fit precisely, and drive more benefits to the corporation directly. That’s the payback and that’s the decision fulcrum.

So I’m going to backtrack from my original assertion of bias on a team. It’s bias on a component, especially in the SOA world of today and tomorrow. In some cases you want your COTS/stack integration team to ripple through and keep the overhead low on necessary but non-advantageous systems; in others, your custom integration and customization teams will take a product and adjust it to fit the company exactly, bringing competitive advantage and leverage to the table in a system that is critical to the nature and core business of the organization.

Just don’t do a custom job when the simple stack will suffice, and don’t settle for off-the-shelf when your core business depends on it.

SOA and COTS Orthogonality – original posting 2007-03-30

I’ve been diving much more deeply into SOA and into moving an enterprise towards SOA. In that process I’ve been mulling over an idea that seems to have a certain truth to it.

An Enterprise strategy that relies on moving heavily to using COTS (Commercial, off-the-shelf) products is not the same enterprise strategy that would fully embrace an SOA (Service Oriented Architecture) strategy. As a great number of companies claim to embrace both in their IT organizations, let me more fully explain what I mean.

The essence is in which ability you seek to enable to the greater degree in your organization. SOA and COTS have biases, and while perhaps not completely orthogonal, you certainly can’t take full advantage of both simultaneously without a rather large conceptual dividing line in the team doing the work.

A COTS strategy embraces a stack in most cases, or out-of-the-box instant integration. Systems are configured by default to work with each other in certain dimensions, and those dimensions are most often walled-garden integration paths, either not open to outside systems or not surfaced in a way that effectively enables interoperation with other stacks. The benefit and bias is one of getting up to a certain level in a very short period of time. Most people would think Microsoft with this strategy, and they are perhaps the best at enabling the integration in the stack, but Oracle, BEA, IBM, Sun and many, many other vendors do this to varying degrees, in varying markets and technical layers.

The price most often is in the ability to swap out a piece of the stack if it doesn’t suit your business needs or if it needs to be customized beyond the ability of that integrated stack. At that point the benefit most often has to be unwound to a large degree and portions reconstructed, to gain back the original integrated functionality as well as the desired extension. Tightly integrated systems are by their very nature less flexible outside of their designed purposes and capabilities. That is the trade-off in all software: tight integration, with speed and efficiency, in exchange for a loss of flexibility and openness. If somebody says you can have both, check your wallet. They should be telling everyone how to change the art of the design trade-off in software engineering worldwide rather than selling you a product stack.

The other side, or axis if you stick with my observation, is SOA. SOA is designed around a loosely coupled, highly flexible approach. Swapping a component out is much easier in this case, as the integration is abstracted and highly flexible in nature. The system can be tailored to bring many technology pieces together and to arrange and integrate them in many ways. The approach also seeks for the services to be somewhat granular: discrete, and at any rate smaller than a COTS stack or a large and highly capable product set. Vendors sell the enablers of this, and components in those COTS stacks also expose themselves and their pieces as services.

The trade-off with SOA is that you don’t get a whizzy integrated system out of the box, and it’s going to take some work and discipline to get the systems working decently well. The advantage you are taking the hit for is flexibility and agility: the ability to bring entirely new components, services and capabilities into the mix with comparatively little pain and rework. You also, as a side effect, are not nearly as beholden to a single vendor as you are if you wire everything into a single COTS stack, as the COTS integration points are not, as a rule, generic.

That doesn’t say why the two are orthogonal though. As I note, the COTS products can expose capabilities and features as services to be used in the stack. But by orthogonal, I was talking organizational strategy. One of the two needs to take precedence to really be able to get the proper leverage out of either solution.

The COTS solution requires the team to become expert at building on and customizing that vendor’s products and stack. This isn’t a C# vs. Java argument; this is getting a team of developers expert with Exchange Server, SharePoint, Commerce Server, BizTalk and other vendor systems. Expert in building solutions on top of those systems, and in taking advantage of all the capabilities they create in each other. Each of these systems has a set of capabilities, and you learn to use and extend them and the stack.

The other side is a team that is expert at learning the boundaries of systems, often but not always smaller systems or sets of services, and integrating, sequencing and orchestrating them together. If the invoice system isn’t doing what the business needs, the approach looks at how to augment it with another system or replace it entirely without deconstructing the enterprise.

Where COTS would extend and customize the existing system to fix the issue, and thereby become expert in extending and partially writing/customizing the invoicing system and generation process (something, I would note, you expressly bought the COTS piece so you would not have to do), the SOA approach looks at getting a tool that does solve the invoicing needs working within the enterprise. That process is about getting information in and out and controlling the system; it does not get into the system itself.
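
To make that concrete with a toy sketch (the names here are invented for illustration, not drawn from any particular product): the SOA-biased shop owns a small service contract and keeps each vendor on the far side of it, so replacing the vendor is a wiring change rather than a rewrite.

// The contract the enterprise owns; callers only ever see this.
public interface IInvoiceService
{
    Invoice GetInvoice(string invoiceId);
}

public class Invoice
{
    public string Id;
    public decimal Amount;
}

// One implementation wraps the incumbent COTS product behind the contract.
public class CotsInvoiceService : IInvoiceService
{
    public Invoice GetInvoice(string invoiceId)
    {
        // In reality this would call into the vendor stack's API.
        Invoice invoice = new Invoice();
        invoice.Id = invoiceId;
        return invoice;
    }
}

// A replacement wraps the new SaaS provider instead. Swapping vendors is a
// change to which class gets wired in, not a rewrite of everything downstream.
public class SaasInvoiceService : IInvoiceService
{
    public Invoice GetInvoice(string invoiceId)
    {
        // In reality this would call the hosted provider over the wire.
        Invoice invoice = new Invoice();
        invoice.Id = invoiceId;
        return invoice;
    }
}

The COTS-biased shop, by contrast, reaches inside the vendor’s own extension points and becomes expert there, which is faster up front but welds the enterprise to that stack.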

The experienced among you will know full well the environment is not this cut and dried. I am not claiming that it is; I’m polarizing the backdrop to better see the trade-off. The trade-off is what kind of team and enterprise you want as the main driver and focus. Most decent-sized enterprises will have both sets of people, as there is always a bit of COTS customization to do (the gap is rarely big enough to justify throwing the product out) and there’s always a bit of SOA or application integration (the SaaS provider for CRM doesn’t talk to our COTS stack), but the bias is a conscious decision that affects the team. Do you want the IT group to bring in solutions fast and cheap (COTS) by default and specialize in a specific set of vendor tools as their primary skill set, or do you want the IT group to put new and legacy systems together so they fit your business, spending a bit more time and money to connect those pieces in flexible ways? There is a long list of advantages and disadvantages to both. I contend that you need to pick one of the two as your primary thrust (or neither if that’s your preference), but you can’t honestly pick both.

Currently playing in iTunes: On My Way by El Chupacabra

Eating of the Dogfood – original post 2006.09.19

There is much that has been said about “eating your own dog food”. Indeed, there is no better way to ensure that what you are building is of real use. It’s not always possible, mind you, as is the case at our company: we provide data and analysis software for the energy and engineering segments, and we aren’t in either of those businesses. As a result, we keep our customers very close to our products’ design and implementation, for our best success.

My head did a bit of a turn sideways to contemplate a very odd thing the other day. I’ve been working in C# on .NET 2.0 creating a relational database walker to do a transform/load into a custom schema and management system we need to work with. As I’m creating this system, I need to discover the primary keys in the Microsoft SQL Server 2005 tables I’m crawling. Oddly, the metadata doesn’t seem to contain any such information. There are some constraints, procedures, and the table and columns of course, but the primary key is absent. As the SQL Server Management Studio shows the primary key, I’m pretty sure I’ve done something wrong. So into Google and MSDN we go.

The part that made my brain do a serious double-take, and say “I *must* have read that wrong”, was looking at the ADO.NET 2 documentation, specifically this page on MSDN. It states quite clearly that the SQL Server provider doesn’t do primary keys. But *Oracle’s* provider does. Apparently SQL Server Management Studio has a connection with the greater cosmos that allows it to magically divine primary keys from the fabric of space-time. After some more searching, I found this gem from the Program Manager of SQL 2005/Whidbey. I appreciate it “bothering him”, but the Management Studio obviously wasn’t eating the dog food of the Whidbey release on the schema collections, and is instead using some alternate mysticism to achieve the desired results.

By the looks of the XML file Carl provides, it appears the magic is within the mystical (and decidedly specific) system tables. This is obviously up there with a hack, as it’s a graft into the .NET config files of the workstation you install and run on, but given the relative simplicity of the configuration change, I’m rather baffled as to how this was missed in the product release. Again, from a dog food perspective, this couldn’t have been missed if the only public remote interface available to Management Studio were the one the ADO.NET system provides; missing it would have left the entire SQL Server Management Studio useless. Eating the dog food would have improved the .NET system and the SQL Server client metadata at a minimum.
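
For what it’s worth, if grafting XML into the workstation’s .NET config feels too fragile, the standard INFORMATION_SCHEMA views will also give up the primary keys with a plain query through the same SqlClient provider. A rough sketch of the sort of thing I mean; the class and method names are just for illustration, and error handling is omitted:

using System;
using System.Data.SqlClient;

public class PrimaryKeyWalker
{
    // Prints the primary key column names for a table by asking the
    // INFORMATION_SCHEMA views directly rather than relying on GetSchema().
    public static void PrintPrimaryKeys(string connectionString, string tableName)
    {
        const string sql =
            "SELECT kcu.COLUMN_NAME " +
            "FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc " +
            "JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE kcu " +
            "  ON tc.CONSTRAINT_NAME = kcu.CONSTRAINT_NAME " +
            " AND tc.TABLE_NAME = kcu.TABLE_NAME " +
            "WHERE tc.CONSTRAINT_TYPE = 'PRIMARY KEY' " +
            "  AND tc.TABLE_NAME = @table " +
            "ORDER BY kcu.ORDINAL_POSITION";

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@table", tableName);
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader.GetString(0));
                }
            }
        }
    }
}

It’s not the metadata API doing the work, but it keeps the discovery in plain SQL against documented views instead of machine-level config surgery.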

So the short of it is, thanks to blogs and other “out-of-channel” communication, there is a very awkward work-around to what seems to be a very fundamental oversight. But again, if you want a better product, dig in and use what you build as much as possible. Adhere to your own published interfaces, and if you have an API, make sure your other products are using it, and not some other obscure method of integration.