TrueNAS – I hoped, I tried, it’s looking like UnRAID

Well it’s been a VERY long time.  So much has changed.  But enough about me and my gap in posting.

Given the discontinuation of Apple’s Time Capsules a while back, I knew that ultimately I would have to put some other sort of NAS storage system on the network to continue to enjoy the seamless backups.  That time came this summer when the Time Capsule was starting to make some unpleasant sounds which often precede a disk failure. It was time. 

Cue a fair bit of research.  Looking at Synology, Western Digital, Buffalo, Asustor, QNAP, Terramaster.  Many great products. All commercial, simple, decently supported.  But pricey for what you get.  I had just been burned by a bad Kickstarter that was promising a good home NAS and server system, and I still wanted all that storage and flexibility at something closer to that price.  So I looked at the state of the DIY sphere. 

Ultimately it came down to TrueNAS and Unraid for being able to have solid disk arrays, flexibility and resilience.  TrueNAS is more of an enterprise offering and also had what seemed to be an amazing ecosystem with TrueCharts.  Unraid was also a great alternative, simpler, perhaps a bit less powerful, but also booted off of a flash drive, which just didn’t quite sit right with me.  So ultimately, I went TrueNAS. 

The config: an AMD Ryzen 7 8700G with 64 GB of DDR5 RAM (not ECC; if you’re into the TrueNAS scene, that’s a debate), a 1 TB NVMe boot drive, an Asus Prime B650M-A AX motherboard, a 750 W Seasonic Prime 80+ Gold power supply, a Fractal Design Node 804 case (a GREAT case for a server, with space for eight 3.5” drives and really easy access), five 8 TB IronWolf NAS drives and three 4 TB IronWolf NAS drives (two pools and a somewhat constrained budget).  All that was going to get me a lot more storage than the commercial offerings, a lot of performance for running containers/servers at home, transcoding capability if I did put a Plex server together, and 2.5 Gbit wired Ethernet connectivity.  Start the adventure. 

I put it all together, installed TrueNAS and got things up and running.  Just started with a pure Time Machine backup on an SMB share configured for multi-user Time Machine.  MUCH faster, and worked great.  I made some mistakes though, and the array started getting fatally corrupted.  Cue a fair bit of troubleshooting looking for drive failures, long drive testing, etc.  It seems the motherboard SATA controllers might have been getting overloaded, and ZFS is a bit temperamental about hardware (again, the ECC debate comes up, but that wasn’t where this was happening).  So, per TrueNAS forum recommendations, add in a PCIe card with 8 SATA connectors: the LSI 9211, JBOD-configured.  Rebuild the pools.  All is well.  No more disk corruption.  So.  Pretty sure that problem was diagnosed and solved correctly.  
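For anyone retracing those steps, this is roughly the kind of checking I was doing from the shell while chasing it down; a minimal sketch, with the pool name and device paths as placeholders rather than my actual layout:

# Overall pool health and any read/write/checksum errors ("tank" is an example pool name)
zpool status -v tank

# Kick off a scrub so ZFS re-verifies every block against its checksums
zpool scrub tank

# Long SMART self-test on a member disk, then pull the full report afterwards
smartctl -t long /dev/sda
smartctl -a /dev/sda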

After more successful backups and stability, it’s time to add in a few servers.  Simple.  Gitea (a source code control server based on Git) and the Postgres database server to support it.  Easy install, easy startup, all good.  TrueCharts was as promised.  

Then I started to get random reboots.  Zero logs; the computer would just randomly reboot itself.  No warning.  Cue another investigation.  It seemed to correlate with the start of Time Machine backups and disk activity spikes.  Hmmmm.  Again testing disks, nothing.  Power?  750 W was WELL in excess of what all the disks spinning up at once would need.  Not sure.  One of my sons had a use for a power supply on a gaming build, so I grabbed a 1000 W Seasonic 80+ Gold.  But before swapping it in, I grabbed a live Ubuntu image on USB and booted the system clean on that.  Mounted all the drives and did stress tests on the CPU, RAM, and every disk including the NVMe.  For a week.  Perfectly stable.  Tuned the stress tests to maximize the load and minimize wait times on the arrays, and also to burst it all.  Rock solid.  The hardware recommendations from the TrueNAS forums were as promised.  
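The stress testing itself was nothing exotic; something along these lines (sketched from memory, with placeholder sizes and mount points) will soak the CPU, RAM and disks from a live Ubuntu session:

# CPU and memory pressure for a long soak (stress-ng is in Ubuntu's repos)
stress-ng --cpu 8 --vm 4 --vm-bytes 8G --timeout 24h --metrics-brief

# Mixed random I/O against a mounted array to hammer the disks and the SATA controller
fio --name=soak --directory=/mnt/pool1 --rw=randrw --bs=4k --size=4G \
    --numjobs=4 --time_based --runtime=3600 --ioengine=libaio --direct=1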

So.  Full clean install then.  Wiped everything.  EVERYTHING.  Latest 24.04.2.3 version of TrueNAS, clean install, new arrays, and off we go again.  No load.  How long would it run by itself just doing snapshots?  (DO THESE WITH ZFS.  They’re like Time Machine for your arrays without piles of backup storage, but you DO still need full backups outside the system; that’s a different post.)  Flawless.  No issues, smooth as silk for a week again.  Start the Time Machine backups.  Also all good.  Ran for over a week.  Zero issues.  Then the random reboots started up again.  The reboots stopped when I reinstalled the entire system, and now they’re starting again?  This smells of a bug now.  Zero errors of any kind reported by ZFS or S.M.A.R.T. on anything, NVMe or HDD.  But no disk corruption.  Before, I had to revert to a snapshot to get Time Machine working again.  Not this time.  Curiouser and curiouser.  
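As an aside on those snapshots: if you haven’t used ZFS snapshots before, the manual version looks roughly like this (TrueNAS will schedule them for you in the UI; the dataset and snapshot names here are just examples):

# Take a point-in-time snapshot of the backup dataset
zfs snapshot tank/timemachine@known-good

# See what snapshots exist and how much space they hold
zfs list -t snapshot

# Roll the dataset back to that snapshot (add -r if newer snapshots exist in between)
zfs rollback -r tank/timemachine@known-good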

Seemed semi-stable now though.  I started the servers again.  Same behaviour for another week.  Still generally stable but random reboots.  Keep looking, keep checking firmware and any other clue or possibility I could find in the forums.  Everything SHOULD be solid. 

And now, corruption of the Time Machine backup again.  The array itself is fine, as it has been ever since I put the PCIe SATA card in the system.  There’s a bug somewhere.  And all through this, the TrueNAS system will randomly change the IP address on the interfaces despite it being set as static (also tried with a permanently reserved DHCP address; same behaviour).  

Well, I KNOW TrueNAS is amazing and solid and brilliant for literally thousands of people, and that iXsystems has a great offering for thousands of companies.  But I think I’m throwing in the towel on this.  TrueCharts had a major blow-up with the community and abruptly just UNPUBLISHED the whole repo.  So now there’s an open-source war over the ecosystem that was a big part of the value.  It was all built on Helm charts on k3s (a mini Kubernetes, or k8s if you’re into that), and it was another lift just to take a simple Docker container and get it going, but I was less worried about that before.  Now it was a pain. 

Add it all up, and now Unraid is supporting ZFS in its 7.0.0-beta-x stream.  Well, then I looked into backup options and people’s experience with that flash boot drive.  Nothing alarming, and generally really great reviews of responsiveness and support.  It is a paid license.  But a very reasonable amount, and it is a lifetime license.  None of the call-home stuff of some of the mainstream commercial products; all very open, open standards, and generally a much more “grass roots” company, but a very successful one.  Standard Docker support, and again a great and flourishing ecosystem.  Noted as much more user friendly.  

To be clear, I’ve done a fair bit of sysadmin work over my career, and nothing TrueNAS is doing is leaving me in the dust, but at some point, I want the systems to work.  I’m not hacking on these things, they are supporting my hacking efforts.  I’m more software than hardware.  

So I’m going to look into pulling it all down and trying out Unraid now.  I still back up everything to detached SSD drives and the Git and DB stuff is all also locally mirrored so nothing is at risk.  And I have offsite backups on separate HDDs.  This is an experiment to do more and enable more and also move past the Time Capsules without having drives dangling everywhere.  

So I still have a pile of respect for the TrueNAS system and community, and they are moving with the upcoming 24.10 Electric Eel release (in release candidate status as of this posting) to support pure Docker and put the TrueCharts fiasco behind them.  All great things.  It’s just that the foundational reason for all of this, the backups, is not reliable FOR ME.  It’s obviously reliable for other people, but I’ve invested a lot of time to make it work and just haven’t had success, and I can’t swap out every piece of hardware to chase some edge-case issue in the software after all the mainstream recommended stress tests showed the hardware to be rock solid.  

So hopefully I will actually do some follow up posts for anyone reading this with what I discover.  🙂 

Misguided analyst editorial – update: Called it

Wow.   Rob Enderle has a lot of readers on Computerworld, I expect.   I read the odd article from him.   I had thought he would be offering solid business advice in light of the viral “Comcastic” support call from hell [ http://www.huffingtonpost.com/2014/07/14/the-comcast-call-from-hell_n_5586476.html ].   I stand corrected.

Basically, Enderle shows himself to be a run-of-the-mill CYA sycophant extraordinaire, advocating analytics to essentially create two classes of customer, rather than using them to help monitor and improve your customer relationships as a whole.    His original article, the one I’m ranting about, is here: Don’t Be Comcast: Use Analytics, Monitoring to Prevent a Viral Disaster – Computerworld

He starts somewhat sanely, with having a list of your biggest customers available to managers, so that, as he once did, you don’t cancel a supply contract with someone who happens to be your largest customer.  I get the impression that it wasn’t a healthy business relationship, and may have been grounded more in back-room “you scratch my back, I’ll scratch yours” deals than good business, if a cancellation resulted in that sort of fallout.   Either that, or the company Enderle was working for ALSO wasn’t competitive, and let’s just say what goes around comes around in that case.

If he then took social media and proposed using it to monitor when you might have a PR issue on your hands, or to track negative and positive PR, that would just be good sense.   But that’s not what he proposes.   He takes the idea of “influencers”, people with larger pull in social media and PR, often celebrities or journalists, and proposes having your real-time analytics alert you when one of them contacts support so you can give them extra-special treatment.   Basically, make them an “elite” customer, and screw the rest of us.  

You know what social media does then?   Check the hashtag count on Twitter.   You’ll still get #comcastic from all the rest of your customers relating serious issues and problems, and you’ll have a few media celebrities generating positive PR.  It will catch up with you, and if your customer service sucks, I’ll trust my second-cousin’s-friend’s opinion of your shop far more than some privileged A-list celeb’s when deciding where I take my business. 

Enderle is a “mover-shaker-fool” who is looking for quick results and his own rep in a corporation rather than actually making your business the leader in its category.   Fix the problem.  Treat your customers correctly and instill that in your employees.  And don’t incentivize them disproportionately against that.   I’ll bet good money that the call rep at Comcast is paid good coin on a “save” of a leaving customer, so he will work his rear end off, to the point shown in that recording, to make that save.   It’s worth it, as I expect his performance metrics and his compensation are so skewed toward making the save that it’s not worth his time to be courteous and walk people through it professionally.   Enderle says the rep should be fired.   If I were the CEO I would start by looking at how the incentive programs for the call centre are set up, especially in the “customer retention” area.   And adjust the attitudes of the people setting that up.   

And you know the irony?   I bet the whole behaviour of grinding so hard to keep a customer (and up in Canada, Shaw and Telus do it just as much, though I didn’t run into quite the level of zealousness that was in the viral posting) is based on analytics.  Enderle hasn’t yet learned the lesson that people who actually THINK about analytics and their application have: you still need a goal in mind when you apply them.   Comcast has a goal when it hits customer retention, as do all these telecom/internet providers.  Keep the customer at all costs, because customer acquisition is very expensive.  The numbers say it’s a lost cause, so go all out.   Any win is a great bonus.   Social media is re-empowering the consumer and making businesses play honest with everyone.   Enderle doesn’t get it.   Make sure you make a better decision than he advocates.

 

UPDATE:  Looks like I called that better than the paid analyst did.   http://venturebeat.com/2014/07/22/comcasts-retention-policies-take-the-blame-for-that-customer-service-call-from-hell/ pretty much outlines what I figured was the core of the issue.   Perhaps new metrics NOT from the accounting department need to be added in?

Snowden we know about but…. who else?

I’ve been following the Snowden revelations, and commentary by people like Bruce Schneier, as well as responses from the NSA and others on TED Talks.  It’s a complex issue.   If a US agency has ever succeeded in undermining the entire US economy before, I can’t think of it.   But I think this time it will reach that far.   

Because of that, Snowden is being vilified.   He should never have spoken up, many say, and the damage he has done to the US and its reputation, and via that to its economy, is treason.   Criminal treason.  

He broke the law.  I have no argument there.   Anyone engaging in civil disobedience is acting against the law as a matter of conscience.   Ed Snowden obviously cares a very great deal about what he did, and he is paying a very high price for those actions, but he feels they were worthwhile.   They were not for personal gain.   It was for a principle.  A principle that said the US was honourable, at least to its own citizens. 

I’ve not felt the US is all that honourable to *anyone* since the DMCA and the Patriot Act came in.   The government acts almost entirely to facilitate special interests and powerful elite individuals and organizations.   Not to maintain the land of unlimited opportunity it was founded to be.   The people who seized those opportunities don’t wish anyone else to threaten their successes.   They don’t want the next generation succeeding them.   Or beating them.   But in all this turmoil, all these differing viewpoints, motivations, and possibilities, there is one question that I haven’t heard yet.

Who else besides Snowden?  Snowden was a rare character with a social conscience, who could see that this was morally wrong and against the law of the land he served and cherished.   More importantly, it was against the spirit his entire nation was founded on: that the government served the people.   

So these programs have existed for a long time.   They have known vectors and methods.   They have catalogued vulnerabilities.   These things are extremely valuable, especially to foreign powers and criminal powers.   This is the backbone of information for major organizations.   These vulnerabilities are the keys to the kingdom.   The knowledge was a top secret weapon.   But the holes in the infrastructure are in all the infrastructure.   Including that of the US, Canada and all of the other allies of the US.   That’s a really, really valuable thing, as long as the US believes that these vulnerabilities are still its own little secret, and nobody else’s.  

Snowden was one relatively minor actor in these organizations.   He had a conscience.   He served his conscience as he saw fit.   But suppose any one of the thousands of others with knowledge of these programs was wronged or felt they deserved a better result from their own labours?   Suppose they felt they could keep the US operating as they desired and not leak the information in good conscience, but instead sell it.  Secretly.   To a foreign power or criminal organizations.   There are so many pieces they would only need a handful, and probably would profit immensely from each one.  Who’s it gonna hurt?   The NSA knows about the vulnerabilities, so it shouldn’t hurt them.   And so what if a few US companies get caught in the crossfire.   The NSA thinks that’s ok.   The US Government thinks it’s ok if they do it.  So spread the wealth a little.   

How many of the foreign attacks we have seen have been through these intentionally introduced vulnerabilities?   How many times have advantages been given to hostile powers, to those who would do harm to others with this power?

Snowden did us a favour.   He gave us a shot at stopping all of it.   And hopefully being wary of it happening again.

Who else gave away copies of the keys to the kingdom?

Currently playing in iTunes: Christmas Song by Dave Matthews Band

HTTPS links in Redmine email

We’ve been shifting a few things around in our infrastructure, and one of them was the Redmine server.   Getting it up and running on a custom port using https is all pretty straightforward, but when the email notifications on issues started going out to watchers, they all had http://hostname:port/… in them.   The problem was, it was an https server.

Pulling out the default swiss-army problem-solver of searching the web, it’s in the Redmine FAQ as well as a number of posted solutions: Apache isn’t passing through the protocol.   So you add in a magic little Apache RequestHeader line to forward the protocol as https and…
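(For reference, that commonly suggested incantation looks something like this; the config path is an example for a Debian-style layout, adjust to taste, and mod_headers has to be enabled.)

# Enable mod_headers, then tell Apache to pass the original protocol through to Redmine
sudo a2enmod headers
echo 'RequestHeader set X-Forwarded-Proto "https"' | sudo tee -a /etc/apache2/sites-available/redmine.conf
sudo systemctl restart apache2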

It doesn’t work.   

Then into some hairy mod_rewrite.   Firewall configs cause that to turn into a serious conflagration.   What to do, what to do.

Oh look, there’s a setting IN REDMINE on the General settings page, just under your host name (with custom port if you have one), with a choice of “http” or “https”.

Set that to https, problem solved.

Now, trying to get that into the Redmine documentation as an outsider is a whole other adventure, so in the meantime at least it will be here in the swiss-army problem-solver of the internet, and if you’re specific enough, you got here with a good ol’ search through DuckDuckGo.com.   Or if you got here through a filter bubble of a more popular search engine, well heck, I guess my page rank went way up now, didn’t it?   🙂

Happy configuring.

Paid Upgrades and the Mac App Store

There’s been some solid debate recently around Mac App Store pricing, and the idea of paid upgrades, continual ownership and updates.  It’s relevant, and it is a change in revenue model for software companies.  Will Shipley, always a thoughtful and thought-provoking individual, weighed in with this opinion, largely proposing a way for the Mac App Store (and by inference, I would say, the iTunes iOS App Store) to provide for paid upgrades on major revisions of the software.   It’s valid from a more traditional point of view on revenue and pricing of software.

A very solid counterpoint was provided by The Mental Faculty in the blog post Paid Upgrades and the Mac App Store.

I believe there is a missing piece there: major upgrades can be treated differently, by Apple and others.

My expectation is as follows.   Initial prices for software will drop.   Not to $0.99 or any of that.   Major software is going to cost money.   $10-$30 for a lot of normal apps that might now go for $50-$70.  Those apps will have free updates for their lifecycle.   Then when a major upgrade is released, it’s a *new* application.   Migrating data will be a bit of a pain with sandboxing and application security, but a bit of Dropbox or other transfer ingenuity will alleviate that.

My rationale?    It comes down to essentially a “license” that you sell for the duration of a version, rather than the duration of a year or the like.   The great part for the customer is the license never expires (unless the software no longer works after an OS or device upgrade).   The upside for the developer is that you get a lower cost of adoption, and you do have a recurring revenue stream for solid bits of new work.   It is a change in the upgrade and sales model.   The other option is to introduce major uplifts in features as paid in-app purchases, which is already possible with what exists today.

I don’t think it’s either full price new versions or free upgrades for life.   There’s a lot of capability in the revenue models Apple has in the app stores they run, and while those models are nowhere near exhaustive, they are, I believe, sufficient to support a wide range of development models and companies.

I expect that the iLife and iWork applications will go this way as well, and as that happens, iCloud is how those apps will move across data and settings between versions.   Mountain Lion will be a new app on the app store, even though it’s an “upgrade”.   The supporting evidence is that Apple showed us long ago the OS was a lower cost upgrade than say Microsoft provides.   This gives incentive for people to upgrade much more readily than a higher price point would, yet everyone pays the same price.

Every sale is an upgrade.   From the previous major version to the current version, or the first version to the latest version.  One price upgrades.  Even if it’s upgrading from nothing to a new customer.

Ads in Apps and Resource Usage

John Gruber linked to a study from Purdue about the majority of battery life going into supporting and displaying ads.   Rather surprising really, but largely because I hadn’t considered it much on mobile devices.   I usually get paid versions of apps as I’m just not a fan of ads eating screen real estate.   That said, I use ClickToFlash to limit ads on web pages and further cull their activity with Little Snitch, blocking a lot of tracking sites and Flash-served ad sets.   I’d rather have a way to subscribe, or read inline advertising as Gruber already does with Daring Fireball.   It’s also much more effective than the pop-over ads and interstitials I go looking for the close button on before the thing even gets rendered.

I wonder if this sort of study is going to affect people’s purchasing decisions if it becomes more widely known.   If that happens, it will affect developer decisions and will actually wind up changing some of the ad industry’s approach, in that they too will have to consider efficiency and battery life.   It won’t just be the application developer dealing with the restriction.   The ad is a supporting mechanism, so really it should use no more than, say, 10% of the resources the application uses itself.   Definitely not 3x the resources at any rate.

In-App Ads Consume Mucho Battery Life:

Jacob Aron, NewScientist:

Up to 75 per cent of the energy used by free versions of Android apps is spent serving up ads or tracking and uploading user data: running just one app could drain your battery in around 90 minutes.

Abhinav Pathak, a computer scientist at Purdue University, Indiana, and colleagues made the discovery after developing software to analyse apps’ energy usage. When they looked at popular apps such as Angry Birds, Free Chess and NYTimes they found that only 10 to 30 per cent of the energy was spent powering the app’s core function.

[…]

(Via Daring Fireball.)

Things are moved, and we are underway!

Apologies for the deluge of reposts as I moved everything over.   The original post dates are in the titles of each should you be interested, and things are backed up and configured as needed at last.

So after a few years of being largely idle, Digital Katana Technologies Ltd. is underway and operating.   Currently we are doing work in software development and coding as well as architecture and technology consulting.   In the midst of that, some time is finally being allocated to work more consistently on the iOS apps Digital Katana has been designing and exploring over the past year.

The blog posts on technology will start to flow and anytime a software release or other company news comes about, the news will be published here first.

 

Avahi… linker flags to compile the examples – Original post June 1, 2011

So, let’s assume you want to tackle using mDNS, also known as Zeroconf, on Linux to advertise your new service in a modern, portable, discoverable way with no pain on the user’s part.   Simple: just pop over to http://avahi.org and look at the examples.   Compile them, try them out.   If you’re on a Mac, avail yourself of Bonjour Browser to have a look at the services popping in and out of existence as you test.   There may be a Linux Zeroconf service browser, but I didn’t find one ready to go.   If you know of one, please add it in the comments!

Wait, you’re saying you can’t get it to compile and link?   Ah.   Yes, so there are two normal landmines people step on in Linux development.   The first one can be solved by a general rule of thumb.   You need the avahi-devel packages installed so you can use the headers, in the examples and in your own software, that allow you to link against the Avahi API.  On an RPM-based system, yum install avahi-devel will get you there.   Normally, packagename-devel is going to get you these developer libraries.   Avahi is already on most RPM distros, so your software can run out of the box without this install on the target machine.   You just need it for development and compilation of the binary.

Wait, still not working you say?   Ah.  You’re getting a few pages of “undefined reference” you say?   All to avahi functions?   So you try the link flag -lavahi as that usually gets it right?   No go.   libavahi doesn’t exist.  So you go googling.   I did all this.   It’s an interpretive pain.  Here’s the magic incantation to build the service publishing example:

gcc avahitest.c -lavahi-glib -lavahi-core -lavahi-common -lavahi-client

That links in all the avahi libraries I could find (and you don’t really need them all, but they are listed here for completeness).   Then it runs and works brilliantly.  If you’re wondering where all these are located, it’s in /usr/lib64 at least on x86_64 Fedora.
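If you’d rather not hard-code the library list, the pkg-config route should also do the job, assuming your distro ships the .pc files along with avahi-devel:

# Let pkg-config supply the include paths and linker flags for the client libraries
gcc avahitest.c $(pkg-config --cflags --libs avahi-client avahi-glib) -o avahitest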

I wandered through the avahi wiki at some speed, and couldn’t find anything simply listing this time-consuming necessity.  So I’m posting it here in the hopes that a future frustrated searching developer might find just a bit of relief and save themselves a bunch of blind stumbling to little effect.


Currently playing in iTunes: Know Us by Jillian Ann

One platform, two, three, more? – Original Post November 24, 2010

James Governor over at RedMonk (a great bunch of analysts I alternately seem to agree with and take issue with) posted a take on the mobile platform development continuum and HTML5 as an alternative. Take a read, and the disagreement I voice below will have better context.   I’m reposting my reply here because it is a relevant discussion, and I leave him the right to do with his site as he will, and that includes having it go away.   So I’ll keep my work where I can get at it.   On my laptop in MarsEdit and on the web on my own blog as well.   🙂

[I feel] James, [is] getting a bit absolutist for an analyst that usually sees how things fit together more completely.   It’s not a winner-take-all end game.   Some apps work better on a local platform, native toolchain, optimized into the hardware, fixed system, no browser barrier.   Usually have a higher development cost than the average web app.   These are, of course, generalizations.

Just look at the Samsung Galaxy response.   Some people like it.   Different form factor.   Different purpose.   iPad app onto a Galaxy?   Most wouldn’t feel right or be a great experience.   The web gives you a “common denominator” approach, and that usually gets offset by a pile of javascript conditionals adjusting for the nuances of browsers.

I tell you, trying to use a mutual fund screening system on the web today was horribly painful.   The state, and the ability to pull things in and out of a spreadsheet app, are just not happening.   The app is older and could be improved with some newer approaches and technology, but really, I want my *data* to be able to flow back and forth securely between devices, but when I’m working, I want it *here*.   That’s why the browsers keep adding in desktop features.   Because the experience in the browser is and has been inferior.

WebGL?  Offline Storage?  Hardware graphics acceleration?   Codec hardware acceleration?   These are all desktop features that get pulled into the browser over time as it trails behind.   It’s not surprising, and it’s not going to change.  Standards move slower than most proprietary innovation.  Web apps are usually for a cheaper and/or broader approach, with a radically different revenue model.

It just baffles me that development gets lumped into a bucket of “proprietary platform” or “web standards”.  They are two ends of a continuum.   OpenGL is pretty cross-platform in the bulk of the API.   And it’s well defined on experience.   There are shades of gray all over.   Are you telling me the same technology and approach for Farmville is appropriate for Doom 3 or Call of Duty: Black Ops?   I don’t hear the game companies complaining about supporting 5 or 6 platforms, let alone 2 or 3.

Of course, the desktop pulled in automatic updates and notifications and sync from the web in many ways to bring those strengths across too, so it’s not totally a one-way street.

But it’s not a one-lane street either.

 

The future of personal communication? – Original post August 16, 2010

Apple may be laying out a foundation of pretty serious evolution in personal communications right under our noses with Facetime.

Facetime is an open spec.  Always good for adoption.  It’s supported by one of the hottest consumer cell phones out there.   It’s data-based, but also rooted in SIP.  Now, if the rumour sites are right, the latest incarnations won’t only go to phone numbers, but to email addresses associated with devices.

Big deal, you say?  Think it through for a moment. If you have multiple devices, they all register to a single email address.  You have a universal number now.  Any connection to that email address via Facetime (voice, video, whatever) alerts every device it’s associated with (or the most recently active one by default, or by user preference), regardless of location.

Ok, so that’s Instant Messenger on many devices.  Again, big deal you say.

SIP can associate a phone number in there as well of course, and the phone has a phone number in there, but let’s theorize something.

Let’s say, for theoretical argument’s sake, that Apple replaces AIM as the MobileMe transport with this “universal iContact system”.  Anyone on it with an iChat client or Facetime device (iPod, iPhone, iPad, Mac (again, theoretically, we’re forecasting)) can connect to anyone on any of their devices with a voice, video or even plain text or data session at any time.  SMS goes bye-bye as it’s just a typed text chat line to the Facetime address.  You have a new unlocked iPhone 4, you pick up a local data plan wherever you travel to, and all your “calls” are forwarded automatically at local rates for data.   Roaming charges go bye-bye.

Now let’s say Apple creates a SIP gateway that can associate the iContact ID with a phone number they provide, sort of like a SkypeIn number or other “real” phone number in a voice system.   When a call goes to that number, it gets forwarded as a translated Facetime voice chat to any and all of your associated Facetime iContact devices, and you pick it up just like a Grand Central number (Google Voice now).

Apple has replaced present-day mobile phone numbers with a universal number, added in the video calling, text chatting and voice calling, and crunched it all down to data streams on a standardized open protocol.  The cell companies are finally reduced to data plan sellers (which is all they *should* be charging us for today anyway; the rest is restricted, repackaged data at ridiculous markups (read: SMS)), and the roaming charges that get travellers so irate are gone, without losing contact, because local SIM cards and phone numbers work everywhere.  Now it’s starting to look like a big deal.

Deploy VoIP and messaging as a primary avenue, with a bunch of features everyone thought were in the future 20 years ago (video calling), and you already have a device that’s creating a critical mass.   Lump all that on top of the ability for other device manufacturers to jump on the bandwagon with compatible services and offerings, and we finally get connected in a quality way, a universal way, and a fair way, and it happens soon.

It may even be happening now.